
Friday Mail Sack: Carl Sandburg Edition


Hi folks, Jonathan again. Ned is taking some time off visiting his old stomping grounds – the land of Mothers-in-Law and heart-breaking baseball. Or, as Sandburg put it:

Hog Butcher for the World,
Tool Maker, Stacker of Wheat,
Player with Railroads and the Nation's Freight Handler;
Stormy, husky, brawling,
City of the Big Shoulders

Cool, huh?

Anyway, today we talk about:

And awayyy we go!

Question

When thousands of clients are rebooted for Windows Update or other scheduled tasks, my domain controllers log many KDC 7 System event errors:

Log Name: System
Source: Microsoft-Windows-Kerberos-Key-Distribution-Center
Event ID: 7
Level: Error
Description:

The Security Account Manager failed a KDC request in an unexpected way. The error is in the data field.

Error 170000C0

I’m trying to figure out if this is a performance issue, if the mass reboots are related, if my DCs are over-utilized, or something else.

Answer

That extended error is:

C0000017 = STATUS_NO_MEMORY - {Not Enough Quota} - Not enough virtual memory or paging file quota is available to complete the specified operation.
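(The event's data field stores the NTSTATUS in little-endian byte order, which is why it displays 170000C0. A quick PowerShell sketch to flip it back around:)

$pairs = '170000C0' -split '(..)' -ne ''   # '17','00','00','C0'
-join $pairs[3..0]                         # 'C0000017' = STATUS_NO_MEMORY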

The DCs are being pressured with so many requests that they are running out of Kernel memory. We see this very occasionally with applications that make heavy use of the older SAMR protocol for lookups (instead of say, LDAP). In some cases we could change the client application's behavior. In others, the customer just had to add more capacity. The mass reboots alone are not the problem here - it's the software that runs at boot up on each client that is then creating what amounts to a denial of service attack against the domain controllers.

Examine one of the client computers mentioned in the event for all non-Windows-provided services, scheduled tasks that run at startup, SCCM/SMS at-boot jobs, computer startup scripts, or anything else that runs when the computer is restarted. Then get promiscuous network captures of that computer starting (any time, not en masse) while also running Process Monitor in boot mode, and you'll probably see some very likely candidates. You can also use SPA or AD Data Collector sets (http://blogs.technet.com/b/askds/archive/2010/06/08/son-of-spa-ad-data-collector-sets-in-win2008-and-beyond.aspx) in combination with network captures to see exactly what protocol is being used to overwhelm the DC, if you want to troubleshoot the issue as it happens - probably at 3 AM, which sounds sucky.

Ultimately, the application causing the issue must be stopped, reconfigured, or removed - the only alternative is to add more DCs as a capacity Band-Aid or stagger your mass reboots.

Question

Is it possible to have 2003 and 2008 servers co-exist in the same DFS namespace? I don’t see it documented either “for” or “against” on the blog anywhere.

Answer

It's totally OK to mix OSes in the DFSN namespace, as long as you don't use Windows Server 2008 ("V2 mode") namespaces, which won't allow any Win2003 servers. If you are using DFSR to replicate the data, make sure all servers have the latest DFSR hotfixes (here and here), as there are incompatibilities in DFSR that these hotfixes resolve.

Question

Should I create DFS namespace folders (used by the DFS service itself) under NTFS mount points? Is there any advantage to this?

Answer

DFSN management tools do not allow you to create DFSN roots and links under mount points ordinarily, and once you do through alternate hax0r means, they are hard to remove (you have to use FSUTIL). Ergo, do not do it – the management tools blocking you means that it is not supported.

There is no real value in placing the DFSN special folders under mount points - the DFSN special folders consume no space, do not contain files, and exist only to provide reparse point tags to the DFSN service and its file IO driver goo. By default, they are configured on the root of the C: drive in a folder called c:\dfsroots, which ensures that they are available when the OS boots. If clustering, you'd create them on one of your drive-lettered shared disks.

Question

How do you back up the Themes folder using USMT4 in Windows 7?

Answer

The built-in USMT migration code copies the settings but not the files, as it knows the files will exist somewhere on the user’s source profile and that those are being copied by the migdocs.xml/miguser.xml. It also knows that the Themes system will take care of the rest after migration; the Themes system creates the transcoded image files using the theme settings and copies the image files itself.

Note here how after scanstate, my USMT store’s Themes folder is empty:

[Screenshot: the USMT store's Themes folder, empty after scanstate]

After I loadstate that user, the Themes system fixed it all up in that user’s real profile when the user logged on:

[Screenshot: the user's Themes folder, repopulated after loadstate and logon]

However, if you still specifically need to copy the Themes folder intact for some reason, here’s a sample custom XML file:

<?xml version="1.0" encoding="UTF-8"?>
<migration urlid="http://www.microsoft.com/migration/1.0/migxmlext/migratethemefolder">
 <component type="Documents" context="User">
  <!-- sample theme folder migrator -->
  <displayName>ThemeFolderMigSample</displayName>
  <role role="Data">
   <rules>
    <include filter='MigXmlHelper.IgnoreIrrelevantLinks()'>
     <objectSet>
      <pattern type="File">%CSIDL_APPDATA%\Microsoft\Windows\Themes\* [*]</pattern>
     </objectSet>
    </include>
   </rules>
  </role>
 </component>
</migration>
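You would pass the custom file alongside the standard migration XMLs on both ends; the store path and file name below are just illustrative:

scanstate.exe c:\store /i:migdocs.xml /i:migapp.xml /i:themefolder.xml /v:5
loadstate.exe c:\store /i:migdocs.xml /i:migapp.xml /i:themefolder.xml /v:5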

And here it is in action:

[Screenshot: the custom Themes migration XML in action]

Question

I've recently been working on extending my AD schema with a new back-linked attribute pair, and I used the instructions on this blog to auto-generate the linkIDs for my new attributes. Confusingly, the resulting linkIDs are negative values (-912314983 and -912314984). The attributes and backlinks seem to work as expected, but when looking at the MSDN definition of the linkID attribute, it specifically states that the linkID should be a positive value. Do you know why I'm getting a negative value, and if I should be concerned?

Answer

The only hard and fast rule is that the forward link (flink) be an even number and the backward link (blink) be the flink's ID plus one. In your case, the flink is -912314984 then the blink had better be -912314983, which I assume is the case since things are working. But, we were curious when you posted the linkID documentation from MSDN so we dug a little deeper.

The fact that your linkIDs are negative numbers is correct and expected, and is the result of a feature called AutoLinkID. Automatically generated linkIDs are in the range of 0xC0000000-0xFFFFFFFC (-1,073,741,824 to -4). This means that it is a good idea to use positive numbers if you are going to set the linkID manually. That way you are guaranteed not to conflict with automatically generated linkIDs.

The bottom line is, you're all good.

Question

I am trying to delegate permissions to the DBA team to create, modify, and delete SPNs since they're the team that swaps out the local accounts SQL is installed under to the domain service accounts we create to run SQL.

Documentation on the Internet has led me down the rabbit hole to no end.  Can you tell me how this is done in a W2K8 R2 domain and a W2K3 domain?

Answer

So you will want to delegate a specific group of users -- your DBA team -- permissions to modify the SPN attribute of a specific set of objects -- computer accounts for servers running SQL server and user accounts used as service accounts under which SQL Server can run.

The easiest way to accomplish this is to put all such accounts in one OU, e.g., OU=SQL Server Accounts, and run the following commands:

Dsacls "OU=SQL Server Accounts,DC=corp,DC=contoso,DC=com" /I:S /G "CORP\DBA Team":WPRP;servicePrincipalName;user
Dsacls "OU=SQL Server Accounts,DC=corp,DC=contoso,DC=com" /I:S /G "CORP\DBA Team":WPRP;servicePrincipalName;computer

These two commands will grant the DBA Team group permission to read and write the servicePrincipalName attribute on user and computer objects in the SQL Server Accounts OU.

Your admins should then be able to use setspn.exe to modify that property on the designated accounts.

But…what if you have a large number of accounts spread across multiple OUs? The above solution only works well if all of your accounts are concentrated in a few (preferably one) OUs. In this case, you basically have two options:

  1. You can run the two commands specifying the root of the domain as the object, but you would be delegating permissions for EVERY user and computer in the domain. Do you want your DBA team to be able to modify accounts for which they have no legitimate purpose?
  2. Compile a list of specific accounts the DBA team can manage and modify each of them individually. That can be done with a single command line. Create a text file that contains the DNs of each account for which you want to delegate permissions and then use the following command:

    for /f "tokens=*" %i in (object-list.txt) do dsacls "%i" /G "CORP\DBA Team":WPRP;servicePrincipalName

None of these are really great options, however, because you’re essentially giving a group of non-AD Administrators the ability to screw up authentication to what are perhaps critical business resources. You might actually be better off creating an expedited process whereby these DBAs can submit a request to a real Administrator who already has permissions to make the required changes, as well as the experience to verify such a change won’t cause any problems.

Author’s Note: This gentleman pointed out in a reply that these DBAs wouldn’t want him messing with tables, rows and the SA account, so he doesn’t want them touching AD. I thought that was sort of amusing.

Question

What is PowerShell checking when you run get-adcomputer -properties * -filter * | format-table Name,Enabled? Is Enabled an attribute, a flag, a bit, a setting? What, if anything, would that setting show up as in something like ADSIEdit.msc?

I get that stuff like samAccountName, sn, telephonenumber, etc.  are attributes but what the heck is enabled?

Answer

All objects in PowerShell are PSObjects, which essentially wrap the underlying .NET or COM objects and expose some or all of the methods and properties of the wrapped object. In this case, Enabled is a property ultimately inherited from the System.DirectoryServices.AccountManagement.AuthenticablePrincipal .NET class. This answer isn't very helpful, however, as it just moves your search for answers from PowerShell to the .NET Framework, right? Ultimately, you want to know how a computer's or user's account state (enabled or disabled) is stored in Active Directory.

Whether or not an account is disabled is reflected in the appropriate bit being set on the object's userAccountControl attribute. Check out the following KB: How to use the UserAccountControl flags to manipulate user account properties. You'll find that the second least significant bit (0x2) of the userAccountControl bitmask is called ACCOUNTDISABLE, and it reflects the appropriate state; 1 is disabled and 0 is enabled.

If you find that you need to use an actual LDAP query to search for disabled accounts, then you can use a bitwise filter. The appropriate LDAP filter would be:

(UserAccountControl:1.2.840.113556.1.4.803:=2)
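With the Active Directory module, the same bitwise filter works directly; a minimal sketch:

# Return all disabled user accounts by testing the ACCOUNTDISABLE bit
Get-ADUser -LDAPFilter '(userAccountControl:1.2.840.113556.1.4.803:=2)' |
    Format-Table Name, DistinguishedName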

Other stuff

I watched this and, despite the lack of lots of moving arms and tools, had sort of a Count Zero moment:

And just for Ned (because he REALLY loves this stuff!): Kittens!

No need to rush back, dude.

Jonathan “Payback is a %#*@&!” Stephens


Purging Old NT Security Protocols


Hi folks, Ned here again (with some friends). Everyone knows that Kerberos is Microsoft's preeminent security protocol and that NTLM is both inefficient and, in some iterations, not strong enough to avoid concerted attack. NTLM V2 using complex passwords stands up well to common hash cracking tools like Cain and Abel, Ophcrack, or John the Ripper. On the other hand, NTLM V1 is defeated far faster and LM is effectively no protection at all.

I discussed NTLM auditing years ago, when Windows 7 and Windows Server 2008 R2 introduced the concept of NTLM blocking. That article was for well-controlled environments where you thought that there was some chance of disabling NTLM – only modern clients and servers, the latest applications, and Active Directory. In a few other articles, I gave some further details on the limitations of the Windows auditing system logging. It turns out that while we're OK at telling when NTLM was used, we're not great at describing which flavor. For instance, Windows Server 2008+ security auditing can tell you about the NTLM version through the 4624 event that states a Package Name (NTLM only): NTLM V1 or Package Name (NTLM only): NTLM V2, but all prior operating systems cannot. None of the older auditing can tell you if LM is used either. Windows Server 2008 R2 NTLM auditing only shows you NTLM usage in general.

Today the troika of Dave, Jonathan, and Ned are here to help you discover which computers and applications are using NTLM V1 and LM security, regardless of your operating system. It’s safe to say that some people aren’t going to like our answers or how much work this entails, but that’s life; when LM security was created as part of LAN Manager and OS/2 by Microsoft and IBM, Dave and I were in grade school and Jonathan was only 48.

If you need to keep using NTLM V2 and simply want to hunt down the less secure precursors, this should help.

Finding NTLM V1 and LM Usage via network captures

The only universal, OS-agnostic way you can tell which clients are sending NTLMv1 and LM challenge responses is by examining a network trace taken on the destination computers. Using Netmon 3.4 or another network capture tool, look for packets with a negotiated NTLM security mechanism.

This first example is with LMCompatibilityLevel set to 0 on the client; it shows an SMB session request packet specifying NTLM authentication.

Here is the SMB SESSION SETUP request, which specifies the security token mechanism:

  Frame: Number = 15, Captured Frame Length = 220, MediaType = ETHERNET

+ Ethernet: Etype = Internet IP (IPv4),DestinationAddress:[00-15-5D-05-B4-44],SourceAddress:[00-15-5D-05-B4-49]

+ Ipv4: Src = 10.10.10.20, Dest = 10.10.10.27, Next Protocol = TCP, Packet ID = 747, Total IP Length = 206

+ Tcp: Flags=...AP..., SrcPort=49235, DstPort=Microsoft-DS(445), PayloadLen=166, Seq=2204022974 - 2204023140, Ack=820542383, Win=32724 (scale factor 0x2) = 130896

+ SMBOverTCP: Length = 162

- SMB2: C   SESSION SETUP (0x1)

    SMBIdentifier: SMB

  + SMB2Header: C SESSION SETUP (0x1),TID=0x0000, MID=0x0002, PID=0xFEFF, SID=0x0000

  - CSessionSetup:

     StructureSize: 25 (0x19)

     VcNumber: 0 (0x0)

   + SecurityMode: 1 (0x1)

   + Capabilities: 0x1

     Channel: 0 (0x0)

     SecurityBufferOffset: 88 (0x58)

     SecurityBufferLength: 74 (0x4A)

     PreviousSessionId: 0 (0x0)

   - securityBlob:

    - GSSAPI:

     - InitialContextToken:

      + ApplicationHeader:

      + ThisMech: SpnegoToken (1.3.6.1.5.5.2)

      - InnerContextToken: 0x1

       - SpnegoToken: 0x1

        + ChoiceTag:

        - NegTokenInit:

         + SequenceHeader:

         + Tag0:

         + MechTypes: Prefer NLMP (1.3.6.1.4.1.311.2.2.10)

         + Tag2:

         + OctetStringHeader:

         - MechToken: NTLM NEGOTIATE MESSAGE

          - NLMP: NTLM NEGOTIATE MESSAGE

             Signature: NTLMSSP

             MessageType: Negotiate Message (0x00000001)

           + NegotiateFlags: 0xE2088297 (NTLM v2, 128-bit encryption, Always Sign)

           + DomainNameFields: Length: 0, Offset: 0

           + WorkstationFields: Length: 0, Offset: 0

           + Version: Windows 6.1 Build 7601 NLMPv15

Next, the server sends its NTLM challenge back to the client:

  Frame: Number = 16, Captured Frame Length = 447, MediaType = ETHERNET

+ Ethernet: Etype = Internet IP (IPv4),DestinationAddress:[00-15-5D-05-B4-49],SourceAddress:[00-15-5D-05-B4-44]

+ Ipv4: Src = 10.10.10.27, Dest = 10.10.10.20, Next Protocol = TCP, Packet ID = 24310, Total IP Length = 433

+ Tcp: Flags=...AP..., SrcPort=Microsoft-DS(445), DstPort=49235, PayloadLen=393, Seq=820542383 - 820542776, Ack=2204023140, Win=512 (scale factor 0x8) = 131072

+ SMBOverTCP: Length = 389

- SMB2: R  - NT Status: System - Error, Code = (22) STATUS_MORE_PROCESSING_REQUIRED  SESSION SETUP (0x1), SessionFlags=0x0

    SMBIdentifier: SMB

  + SMB2Header: R SESSION SETUP (0x1),TID=0x0000, MID=0x0002, PID=0xFEFF, SID=0x0019

  - RSessionSetup:

     StructureSize: 9 (0x9)

   + SessionFlags: 0x0

     SecurityBufferOffset: 72 (0x48)

     SecurityBufferLength: 317 (0x13D)

   - securityBlob:

    - GSSAPI:

     - NegotiationToken:

      + ChoiceTag:

      - NegTokenResp:

       + SequenceHeader:

       + Tag0:

       + NegState: accept-incomplete (1)

       + Tag1:

       + SupportedMech: NLMP (1.3.6.1.4.1.311.2.2.10)

       + Tag2:

       + OctetStringHeader:

       - ResponseToken: NTLM CHALLENGE MESSAGE

        - NLMP: NTLM CHALLENGE MESSAGE

          Signature: NTLMSSP

           MessageType: Challenge Message (0x00000002)

        + TargetNameFields: Length: 12, Offset: 56

         + NegotiateFlags: 0xE2898215 (NTLM v2, 128-bit encryption, Always Sign)

         + ServerChallenge: 67F9C5F851F2CD73

           Reserved: Binary Large Object (8 Bytes)

         + TargetInfoFields: Length: 214, Offset: 68

         + Version: Windows 6.1 Build 7601 NLMPv15

           TargetNameString: CORP01

         + AvPairs: 7 pairs

The client calculates the response to the challenge, using the various available hashes of the password. Note how this response includes both LM and NTLMv1 challenge responses.

  Frame: Number = 17, Captured Frame Length = 401, MediaType = ETHERNET

+ Ethernet: Etype = Internet IP (IPv4),DestinationAddress:[00-15-5D-05-B4-44],SourceAddress:[00-15-5D-05-B4-49]

+ Ipv4: Src = 10.10.10.20, Dest = 10.10.10.27, Next Protocol = TCP, Packet ID = 748, Total IP Length = 387

+ Tcp: Flags=...AP..., SrcPort=49235, DstPort=Microsoft-DS(445), PayloadLen=347, Seq=2204023140 - 2204023487, Ack=820542776, Win=32625 (scale factor 0x2) = 130500

+ SMBOverTCP: Length = 343

- SMB2: C   SESSION SETUP (0x1)

    SMBIdentifier: SMB

  + SMB2Header: C SESSION SETUP (0x1),TID=0x0000, MID=0x0003, PID=0xFEFF, SID=0x0019

  - CSessionSetup:

     StructureSize: 25 (0x19)

     VcNumber: 0 (0x0)

   + SecurityMode: 1 (0x1)

   + Capabilities: 0x1

     Channel: 0 (0x0)

   SecurityBufferOffset: 88 (0x58)

     SecurityBufferLength: 255 (0xFF)

     PreviousSessionId: 0 (0x0)

   - securityBlob:

    - GSSAPI:

     - NegotiationToken:

      + ChoiceTag:

      - NegTokenResp:

       + SequenceHeader:

       + Tag0:

       + NegState: accept-incomplete (1)

       + Tag2:

       + OctetStringHeader:

       - ResponseToken: NTLM AUTHENTICATE MESSAGE, Version: v1, Domain: CORP01, User: Administrator, Workstation: CONTOSO-CLI-01

        - NLMP: NTLM AUTHENTICATE MESSAGE, Version: v1, Domain: CORP01, User: Administrator, Workstation: CONTOSO-CLI-01

           Signature: NTLMSSP

           MessageType: Authenticate Message (0x00000003)

         + LmChallengeResponseFields: Length: 24, Offset: 154

         + NtChallengeResponseFields: Length: 24, Offset: 178

         + DomainNameFields: Length: 12, Offset: 88

         + UserNameFields: Length: 26, Offset: 100

         + WorkstationFields: Length: 28, Offset: 126

         + EncryptedRandomSessionKeyFields: Length: 16, Offset: 202

         + NegotiateFlags: 0xE2888215 (NTLM v2, 128-bit encryption, Always Sign)

         + Version: Windows 6.1 Build 7601 NLMPv15

         + MessageIntegrityCheckNotPresent: 6243C42AF68F9DFE30BD31BFC722B4C0

           DomainNameString: CORP01

           UserNameString: Administrator

           WorkstationString: CONTOSO-CLI-01

         + LmChallengeResponseStruct: 3995E087245B6F7100000000000000000000000000000000

         + NTLMV1ChallengeResponse: B0751BDCB116BA5737A51962328D5CCD19EEBEBB15A69B1E

         + SessionKeyString: 397DACB158C9F10EF4903F10D4CBE032

       + Tag3:

       + OctetStringHeader:

       + MechListMic: Version: 1

The server then responds with successful negotiation state:

  Frame: Number = 18, Captured Frame Length = 159, MediaType = ETHERNET

+ Ethernet: Etype = Internet IP (IPv4),DestinationAddress:[00-15-5D-05-B4-49],SourceAddress:[00-15-5D-05-B4-44]

+ Ipv4: Src = 10.10.10.27, Dest = 10.10.10.20, Next Protocol = TCP, Packet ID = 24312, Total IP Length = 145

+ Tcp: Flags=...AP..., SrcPort=Microsoft-DS(445), DstPort=49235, PayloadLen=105, Seq=820542776 - 820542881, Ack=2204023487, Win=510 (scale factor 0x8) = 130560

+ SMBOverTCP: Length = 101

- SMB2: R   SESSION SETUP (0x1), SessionFlags=0x0

    SMBIdentifier: SMB

  + SMB2Header: R SESSION SETUP (0x1),TID=0x0000, MID=0x0003, PID=0xFEFF, SID=0x0019

  - RSessionSetup:

     StructureSize: 9 (0x9)

   + SessionFlags: 0x0

     SecurityBufferOffset: 72 (0x48)

     SecurityBufferLength: 29 (0x1D)

   - securityBlob:

    - GSSAPI:

     - NegotiationToken:

      + ChoiceTag:

      - NegTokenResp:

       + SequenceHeader:

       + Tag0:

       + NegState: accept-completed (0)

       + Tag3:

       + OctetStringHeader:

       + MechListMic: Version: 1

To contrast this, consider the challenge response packet when LMCompatibility is set to 4 or 5 on the client (meaning it is not allowed to send anything but NTLM V2). The LM response is null, while the NTLMv1 response isn't included at all.

  Frame: Number = 17, Captured Frame Length = 763, MediaType = ETHERNET

+ Ethernet: Etype = Internet IP (IPv4),DestinationAddress:[00-15-5D-05-B4-44],SourceAddress:[00-15-5D-05-B4-49]

+ Ipv4: Src = 10.10.10.20, Dest = 10.10.10.27, Next Protocol = TCP, Packet ID = 844, Total IP Length = 749

+ Tcp: Flags=...AP..., SrcPort=49231, DstPort=Microsoft-DS(445), PayloadLen=709, Seq=4045369997 - 4045370706, Ack=881301203, Win=32625 (scale factor 0x2) = 130500

+ SMBOverTCP: Length = 705

- SMB2: C   SESSION SETUP (0x1)

    SMBIdentifier: SMB

  + SMB2Header: C SESSION SETUP (0x1),TID=0x0000, MID=0x0003, PID=0xFEFF, SID=0x0021

  - CSessionSetup:

     StructureSize: 25 (0x19)

     VcNumber: 0 (0x0)

  + SecurityMode: 1 (0x1)

   + Capabilities: 0x1

     Channel: 0 (0x0)

     SecurityBufferOffset: 88 (0x58)

     SecurityBufferLength: 617 (0x269)

     PreviousSessionId: 0 (0x0)

   - securityBlob:

    - GSSAPI:

     - NegotiationToken:

      + ChoiceTag:

      - NegTokenResp:

       + SequenceHeader:

       + Tag0:

       + NegState: accept-incomplete (1)

       + Tag2:

       + OctetStringHeader:

       - ResponseToken: NTLM AUTHENTICATE MESSAGE, Version: v2, Domain: CORP01, User: Administrator, Workstation: CONTOSO-CLI-01

        - NLMP: NTLM AUTHENTICATE MESSAGE, Version: v2, Domain: CORP01, User: Administrator, Workstation: CONTOSO-CLI-01

           Signature: NTLMSSP

           MessageType: Authenticate Message (0x00000003)

         + LmChallengeResponseFields: Length: 24, Offset: 154

         + NtChallengeResponseFields: Length: 382, Offset: 178

         + DomainNameFields: Length: 12, Offset: 88

         + UserNameFields: Length: 26, Offset: 100

         + WorkstationFields: Length: 28, Offset: 126

         + EncryptedRandomSessionKeyFields: Length: 16, Offset: 560

         + NegotiateFlags: 0xE2888215 (NTLM v2, 128-bit encryption, Always Sign)

         + Version: Windows 6.1 Build 7601 NLMPv15

         + MessageIntegrityCheck: 2B69C069DD922D4A841D0EC43939DF0F

           DomainNameString: CORP01

           UserNameString: Administrator

           WorkstationString: CONTOSO-CLI-01

         + LmChallengeResponseStruct: 000000000000000000000000000000000000000000000000

         + NTLMV2ChallengeResponse: CD22D7CC09140E02C3D8A5AB623899A8

         + SessionKeyString: AF31EDFAAF8F38D1900D7FBBDCB43760

       + Tag3:

       + OctetStringHeader:

       + MechListMic: Version: 1

By taking traces and filtering on the NTLMV1ChallengeResponse field, you can find the hosts that are sending NTLMv1 responses and determine whether you need to upgrade them or whether they simply have the wrong LMCompatibilityLevel values set through security policy.

Finding LM usage via Netlogon debug logs

If you just want to detect LM authentication and are not looking to spend time in network captures, you can instead enable Netlogon logging on all DCs and servers in the environment.

Nltest /dbflag:2080ffff
net stop NetLogon
net start NetLogon

This creates netlogon.log in the C:\Windows\Debug folder, and it can grow to a maximum of 20 MB by default. At that point, the server renames the file to netlogon.bak and starts a new netlogon.log. When that new log reaches 20 MB, the server deletes netlogon.bak, renames netlogon.log to netlogon.bak, and starts yet another netlogon.log. To make these log files larger, you can use a registry entry or group policy:

Registry

Path: HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Services\Netlogon\Parameters
Value Name: MaximumLogFileSize
Value Type: REG_DWORD
Value Data: <maximum log file size in bytes>
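For example, a sketch that raises the cap to 100 MB with PowerShell (the value is in bytes; 100MB expands to 104857600):

Set-ItemProperty -Path 'HKLM:\SYSTEM\CurrentControlSet\Services\Netlogon\Parameters' `
    -Name MaximumLogFileSize -Type DWord -Value 100MB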

Group Policy

\Computer Configuration\Administrative Templates\System\Net Logon\Maximum Log File Size

You aren't trying to capture all data here - just useful samples - but if they wrap so much that you're unsure if they are accurate at all, increasing size is a good idea. As an alternative, you can create a scheduled task that runs ONSTART or a computer startup script. Either of them can use this batch file to make backups of the netlogon log by date/time and the computer name:

REM Sample script to copy the netlogon.bak to a netlogon_DATETIME_COMPUTERNAME.log backup form every 5 minutes

:start
if exist %windir%\debug\netlogon.bak goto copylog

:copylog_return
sleep 300
goto start

:copylog
for /f "tokens=1-7 delims=/:., " %%a in ("%DATE% %TIME%") do (set DATETIME=%%a-%%b-%%c_%%d-%%e-%%f)
copy /v %windir%\debug\netlogon.bak %windir%\debug\netlogon_%DATETIME%_%COMPUTERNAME%.log
if %ERRORLEVEL% EQU 0 del %windir%\debug\netlogon.bak
goto copylog_return

Periodically, gather all of the NetLogon logs from the DCs and servers into a single folder. Once assembled, run the following LogParser command from that folder to parse them all for a count of unique UAS logons to the domain controller by workstation:

Logparser.exe "SELECT TO_UPPERCASE(EXTRACT_SUFFIX(TEXT,0,'returns ')) AS ERR, TO_UPPERCASE (extract_prefix(extract_suffix(TEXT, 0, 'NetrLogonUasLogon of '), 0, 'from ')) as USER, TO_UPPERCASE (extract_prefix(extract_suffix(TEXT, 0, 'from '), 0, 'returns ')) as WORKSTATION, COUNT(*) FROM '*netlogon.*' WHERE INDEX_OF(TO_UPPERCASE (TEXT),'LOGON') >0 AND INDEX_OF(TO_UPPERCASE(TEXT),'RETURNS') >0 AND INDEX_OF(TO_UPPERCASE(TEXT),'NETRLOGONUASLOGON') >0 GROUP BY ERR, USER, WORKSTATION ORDER BY COUNT(*) DESC" -i:TEXTLINE -rtp:-1 >UASLOGON_USER_BY_WORKSTATION.txt

UASLOGON_USER_BY_WORKSTATION.txt contains the unique computers and counts. LogParser is available for download from here.

FIND and PowerShell are options here as well. The simplest approach is just to return the lines, perhaps into a text file for later sorting in, say, Excel (which is very fast at sorting and allows you to organize your data).

[Screenshots: FIND and PowerShell examples returning the relevant netlogon.log lines]
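For instance, here is a rough PowerShell approximation of the LogParser query above, assuming the usual "NetrLogonUasLogon of USER from WORKSTATION returns CODE" line format:

# Count NetrLogonUasLogon lines per workstation across the collected logs
Get-ChildItem *netlogon*.* |
    Select-String -SimpleMatch 'NetrLogonUasLogon' |
    ForEach-Object { (($_.Line -split ' from ')[-1] -split ' returns ')[0] } |
    Group-Object | Sort-Object Count -Descending |
    Format-Table Count, Name -AutoSize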

I'll wager someone in the comments will take on the rather boring challenge of exactly duplicating what LogParser does. I didn't have the energy this time around. :)

Final thoughts

Microsoft stopped using LM after Windows 95/98/ME. If you do find specific LM-only usage and you don't have any (unsupported) Win9X computers, this is a third party application. A really heinous one.

All supported versions of Windows obey the LMCompatibility registry setting, and can use NTLMv2 just as easily as NTLMv1. At that point, analyzing network traces just becomes useful for tracking down those hosts that have applied the policy but have not yet been rebooted. Considering how unsafe LM and NTLMv1 are, enabling NoLMHash and LMCompatibility 4 or 5 on all computers may be a faster alternative to auditing. It could cause some temporary outages, but it would definitely catch anyone requiring unsafe protocols. There's no better auditing than a complaining application administrator.

Finally, do not limit your NTLM inventory to domain controllers and file or application servers. A comprehensive project requires you examine all computers in the environment, as even a Windows XP workstation can be a "server" for some application. Use a multi-pronged approach, where you also inventory operating systems through network probing - if you have Windows 95 or old SAMBA lying around somewhere on a shop floor, they are almost guaranteed to use insecure protocols.

Until next time,

- Ned “and Dave and Jonathan and Jonathan's in-home elderly care nurse” Pyle

Friday Mail Sack: Get Off My Lawn Edition


Hi folks, Ned here again. I know this is supposed to be the Friday Mail Sack but things got a little hectic and... ah heck, it doesn't need explaining, you're in IT. This week - with help from the ever-crotchety Jonathan Stephens - we talk about:

Now that Jonathan's Rascal Scooter has finished charging, on to the Q & A.

Question

We want a group policy linked to an OU that contains various computers, but it needs to apply only to our Windows 7 notebooks. All of our notebooks are named starting with an "N". Does group policy WMI filtering allow stacking conditions in the same group policy?

Answer

Yes, you can chain together multiple query criteria, and they can even be from different classes or namespaces. For example, here I use both the Win32_OperatingSystem and Win32_ComputerSystem classes:

[Screenshot: WMI filter using both the Win32_OperatingSystem and Win32_ComputerSystem classes]

And here I use only the Win32_OperatingSystem class, with multiple filter criteria:

[Screenshot: WMI filter with multiple Win32_OperatingSystem criteria]

As long as they all evaluate TRUE, you get the policy. If you had a hundred of these criteria (please don’t) and 99 evaluate true but just one is false, the policy is skipped.

Note that my examples above would catch Win2008 R2 servers also; if you've read my previous posts, you know that you can also limit queries to client operating systems using the Win32_OperatingSystem property OperatingSystemSKU. Moreover, if you hadn't used a predictable naming convention, you could also filter with Win32_SystemEnclosure and query the ChassisTypes property for 8, 9, or 10 (respectively: "Portable", "Laptop", and "Notebook"). And no, I do not know the difference between these; it is OEM-specific. Just like "pizza box" is for servers. You stay classy, WMI.
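For example, a Windows 7 notebook filter might combine queries like these - a sketch that uses ProductType = "1" (client OS) for brevity instead of the SKU list; verify the ChassisTypes values your vendors actually report before relying on them:

SELECT * FROM Win32_OperatingSystem WHERE Version LIKE "6.1%" AND ProductType = "1"
SELECT * FROM Win32_SystemEnclosure WHERE ChassisTypes = 8 OR ChassisTypes = 9 OR ChassisTypes = 10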

Question

Is changing LDAP MaxPoolThreads a good or bad idea?

Answer

MaxPoolThreads controls the maximum number of simultaneous threads per processor that a DC uses to work on LDAP requests. By default, it's four per processor core. Increasing this value would allow a DC/GC to handle more LDAP requests. So if you have too many LDAP clients talking to too few DCs at once, raising this can reduce LDAP application timeouts and periodic "hangs". As you might have guessed, the biggest complainer here is often MS Exchange and Outlook. If the performance counters "ATQ Threads LDAP" and "ATQ Threads Total" are constantly at the maximum number based on the number of processors and the MaxPoolThreads value, then you are bottlenecking LDAP.

However!

DCs are already optimized to quickly return data from LDAP requests. If your hardware is even vaguely new and you are not seeing actual issues, you should not increase this default value. MaxPoolThreads depends on non-paged pool memory, which on a Win2003 32-bit Windows OS is limited to 256MB (more on Win2008 32-bit). That means if you still have not moved to at least x64 Windows Server 2003, don't touch this value at all – you can easily hang your DCs. It also means you need to get with the times; we stopped making a 32-bit server OS nearly three years ago and OEMs stopped selling the hardware even before that. A 64-bit system's non-paged pool limit is 128GB.

In addition, changing the LDAP settings is often a Band-Aid that doesn't address the real issue of DC capacity for your client/server base. Use SPA or AD Data Collector sets to determine "Clients with the Most CPU Usage" under the "Ldap Requests" section, especially if the LDAP queries are not just frequent but also gross. There are also built-in diagnostics logs to find poorly written requests:

HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Services\NTDS\Diagnostics\
15 Field Engineering

To categorize search operations as expensive or inefficient, two DWORD registry values are used:

HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Services\NTDS\Parameters\
Expensive Search Results Threshold

HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Services\NTDS\Parameters\
Inefficient Search Results Threshold

These DWORD registry values have the following defaults:

  • Expensive Search Results Threshold: 10000
  • Inefficient Search Results Threshold: 1000
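To actually see the resulting 1644 events, the 15 Field Engineering diagnostics level shown above has to be raised to 5; a quick sketch:

Set-ItemProperty -Path 'HKLM:\SYSTEM\CurrentControlSet\Services\NTDS\Diagnostics' `
    -Name '15 Field Engineering' -Type DWord -Value 5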

For example, here’s an inefficient result written in the DS event log; yuck, ick, argh!:

Event Type: Information
Event Source: NTDS General
Event Category: Field Engineering
Event ID: 1644
Description:
The Search operation based at RootDSE
using the filter:
& ( | ( & ( (objectCategory = <val>) (objectSid = *) ! ( (sAMAccountType | <bit_val>) ) ) & ( (objectCategory = <val>) ! ( (objectSid = *) ) ) & ( (objectCategory = <val>) (groupType | <bit_val>) ) ) (aNR = <substr>) <startSubstr>*) )

visited 40 entries and returned 0 entries.

Finally, this article should be required reading to any application developers in your company:

Creating More Efficient Microsoft Active Directory-Enabled Applications -
http://msdn.microsoft.com/en-us/library/windows/desktop/ms808539.aspx#efficientadapps_topic04

(The title should be altered to “Creating even slightly efficient…” in my experience).

Question

I want to implement many-to-one certificate mappings by using Issuer and Subject DN match. In altSecurityIdentities I put the following string:

X509:<I>DC=com,DC=contoso,CN=Contoso CA<S>DC=com,DC=contoso,CN=users,CN=user name

In a given example, a certificate with “cn=user name, cn=users, dc=contoso, dc=com” in the Subject field will be mapped to a user account, where I define the mappings. But in that example I get one-to-one mapping. Can I use wildcards here, say:

X509:<I>DC=com,DC=contoso,CN=Contoso CA<S>DC=com,DC=contoso,CN=users,CN=*

So that any certificate that contains “cn=<any value>, cn=users, dc=contoso, dc=com” will be mapped to the same user account?

Answer

[Sent from Jonathan while standing in the 4PM dinner line at Bob Evans]

Unfortunately, no. All that would do is map a certificate with a wildcard subject to that account. The only type of one-to-many mapping supported by the Active Directory mapper is configuring it to ignore the subject completely. Using this method, you can configure the AD mappings so that any certificate issued by a particular CA can be mapped to a single user account. See the following: http://technet.microsoft.com/en-us/library/bb742438.aspx#ECAA
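For reference, an issuer-only (one-to-many) mapping simply drops the <S> component from altSecurityIdentities; setting it might look like this sketch, with a hypothetical account name:

Set-ADUser -Identity svc-mapped -Add @{
    altSecurityIdentities = 'X509:<I>DC=com,DC=contoso,CN=Contoso CA'
}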

Question

I've recently been working on extending my AD schema with a new back-linked attribute pair, and I used the instructions on this blog and MSDN to auto-generate the linkIDs for my new attributes. Confusingly, the resulting linkIDs are negative values (-912314983 and -912314984). The attributes and backlinks seem to work as expected, but when looking at the MSDN definition of the linkID attribute, it specifically states that the linkID should be a positive value. Do you know why I'm getting a negative value, and if I should be concerned?

Answer

[Sent from Jonathan’s favorite park bench where he feeds the pigeons]

The negative numbers are correct and expected, and are the result of a feature called AutoLinkID. Automatically generated linkIDs are in the range of 0xC0000000-0xFFFFFFFC (-1,073,741,824 to -4). This means that it is a good idea to use positive numbers if you are going to set the linkID manually. That way you are guaranteed not to conflict with automatically generated linkIDs.

The bottom line is, this is expected under the circumstances and you're all good.

Question

Is there any performance advantage to turning off the DFSR debug logging, lowering the number of logs, or moving the logs to another drive? You explained how to do this here in the DFSR debug series, but never mentioned it in your DFSR performance tuning article.

Answer

Yes, you will see some performance improvements turning off the logging or lowering the log count; naturally, all this logging isn’t free, it takes CPU and disk time. But before you run off to make changes, remember that if there are any problems, these logs are the only thing standing between you and the unemployment line. Your server will be much faster without any anti-virus software too, and your company’s profits higher without fire insurance; there are trade-offs in life. That’s why – after some brief agonizing, followed by heavy drinking – I decided not to include it in the performance article.

Moving the logs to a different physical disk than Windows is safe and may take some pressure off the OS drive.
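If you do decide to tune the logging, the knobs live on the DfsrMachineConfig WMI class; a sketch (property names per the DFSR debug series, values here just examples):

$cfg = Get-WmiObject -Namespace 'root\MicrosoftDFS' -Class DfsrMachineConfig
$cfg.MaxDebugLogFiles = 10             # lower the log count
$cfg.DebugLogFilePath = 'D:\DfsrLogs'  # or move the logs off the OS disk
$cfg.Put()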

Question

When I try to join this Win2008 R2 computer to the domain, it gives an error I’ve never seen before:

"The following error occurred attempting to join the domain "contoso.com":
The request is not supported."

Answer

This server was once a domain controller. During demotion, something prevented the removal of the following registry value name:

HKEY_LOCAL_MACHINE\System\CurrentControlSet\Services\NTDS\Parameters\
DSA Database file

Delete that "Dsa Database File" value name and attempt to join the domain again. It should work this time. If you take a gander at the %systemroot%\debug\netsetup.log, you’ll see another clue that this is your issue:

NetpIsTargetImageADC: Determined this is a DC image as RegQueryValueExW loaded Services\NTDS\Parameters\DSA Database file: 0x0
NetpInitiateOfflineJoin: The image at C:\Windows\system32\config\SYSTEM is a DC: 0x32

We started performing this check in Windows Server 2008 R2, as part of the offline domain join code changes. Hurray for unintended consequences!
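If you'd rather script the cleanup of that value, a one-line sketch:

Remove-ItemProperty -Path 'HKLM:\SYSTEM\CurrentControlSet\Services\NTDS\Parameters' -Name 'DSA Database file'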

Question

We have a largish AD LDS (ADAM) instance that we update daily by importing CSV files that delete all of yesterday's user objects and import today's. Since we don't care about deleted objects, we reduced the tombstoneLifetime to 3 days. The NTDS.DIT usage, as shown by the 1646 Garbage Collection event, shows 1336 MB free with a total allocation of 1550 MB – this would suggest that there is a total of 214 MB of data in the database.

The problem is that Task Manager shows a total of 1,341,208 K of Memory (Private Working Set) in use. The memory usage drops back to around the 214 MB size when AD LDS is restarted; however, when garbage collection runs, the memory usage starts to climb again. I have read many KB articles regarding GC but nothing explains what I am seeing here.

Answer

Generally speaking, LSASS (and DSAMAIN, its red-headed AD LDS cousin) is designed to allocate and retain more memory – especially ESE (aka "Jet") cache memory – than ordinary processes, because LSASS/DSAMAIN are the core processes of a DC or AD LDS server. I would expect memory usage to grow heavily during the import, the deletions, and then garbage collection; unless something else put pressure on the machine for memory, I'd expect the memory usage to remain. That's how well-written Jet database applications work – they don't give back the memory unless someone asks, because LSASS and Jet can reuse it much faster when needed if it's already loaded; why return memory if no one wants it? That would be a performance bug unto itself.

The way to show this in practical terms is to start some other high-memory process and validate that DSAMAIN starts to return the demanded memory. There are test applications like this on the internet, or you can install some app that likes to gobble a lot of RAM. Sometimes I’ll just install Wireshark and load a really big saved network capture – that will do it in a pinch. :-D You can also use the ESE performance counters under the “Database” and “Database ==> Instances” to see more about how much of the memory usage is Jet database cache size.

Regular DCs have this behavior too, as do DFSR and other applications. You paid for all that memory; you might as well use it.

(Follow up from the customer where he provided a useful PowerShell “memory gobbler” example)

I ran the following Windows PowerShell script a few times to consume all available memory and the DSAMAIN process started releasing memory immediately as expected:

$chunk = "XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX"
for ($i = 0; $i -lt 5000; $i++)
{
       $chunk += $chunk   # double the string each pass until memory runs out
}

Question

When I migrate users from Windows 7 to Windows 7 using USMT 4.0, their pinned and automatic taskbar jump lists are lost. Is this expected?

Answer

Yes. For those poor $#%^&#s readers still using XP, Windows 7 introduced application taskbar pinning and a special menu called a jump list:

[Screenshot: a Windows 7 taskbar jump list]

Pinned and Recent jump lists are not migrated by USMT, because the built-in OS Shell32 manifest called by USMT (c:\windows\winsxs\manifests\*_microsoft-windows-shell32_31bf3856ad364e35_6.1.7601.17514_non_ca4f304d289b7800.manifest) contains this specific criterion:

<pattern type="File">%CSIDL_APPDATA%\Microsoft\Windows\Recent [*]</pattern>

Note how it is not Recent\* [*], which would grab the subfolder contents of Recent. It only copies the direct file contents of Recent. The pinned/automatic jump lists are stored in special files under the CustomDestinations and AutomaticDestinations folders inside the Recent folder. All the other contents of Recent are shortcut files to recently opened documents anywhere on the system:

[Screenshot: the Recent folder, showing the CustomDestinations and AutomaticDestinations subfolders and shortcut files]

If you examine these special files, you'll see that they are binary, unreadable, and totally proprietary:

[Screenshot: the binary contents of a jump list file]

Since these files are binary and embed all their data in a big blob of goo, they cannot simply be copied safely between operating systems using USMT. The paths they reference could easily change in the meantime, or the data they reference could have been intentionally skipped. The only way this would work is if the Shell team extended their shell migration plugin code to handle it. Which would be a fair amount of work, and at the time these manifests were being written, customers were not going to be migrating from Win7 to Win7. So no joy. You could always try copying them with custom XML, but I have no idea if it would work at all and you’re on your own anyway – it’s not supported.

Question

We have a third party application that requires DES encryption for Kerberos. It wasn’t working from our Windows 7 clients though, so we enabled the security group policy “Network security: Configure encryption types allowed for Kerberos” to allow DES. After that though, these Windows 7 clients stopped working in many other operations, with event log errors like:

Event ID: 4
Source: Kerberos
Type: Error
"The kerberos client received a KRB_AP_ERR_MODIFIED error from the server host/myserver.contoso.com. This indicates that the password used to encrypt the kerberos service ticket is different than that on the target server. Commonly, this is due to identically named machine accounts in the target realm (domain.com), and the client realm. Please contact your system administrator."

And “The target principal name is incorrect” or “The target account name is incorrect” errors connecting to network resources.

Answer

When you enable DES on Windows 7, you need to ensure you are not accidentally disabling the other cipher suites. So don’t do this:

[Screenshot: the policy with only the DES types selected - wrong]

That means only DES is supported and you just disabled RC4, AES, etc.

Instead, do this:

[Screenshot: the policy with DES and all the other encryption types selected - right]

If it exists at all and you want DES, set this registry DWORD value to 0x7fffffff on Windows 7 or Win2008 R2:

MACHINE\Software\Microsoft\Windows\CurrentVersion\Policies\System\Kerberos\Parameters\
SupportedEncryptionTypes

If it’s set to 0x3, all heck will break loose. This security policy interface is admittedly tiresome in that it has no “enabled/disabled” toggle. Use GPRESULT /H or /Z to see how it’s applying if you’re not sure about the actual settings.
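Scripted, the safe setting looks like this sketch (it assumes the policy key already exists on the machine):

Set-ItemProperty -Path 'HKLM:\Software\Microsoft\Windows\CurrentVersion\Policies\System\Kerberos\Parameters' `
    -Name SupportedEncryptionTypes -Type DWord -Value 0x7FFFFFFF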

Other Stuff

Windows 8 Consumer Preview releases February 29th, as if you didn’t already know it. Don’t ask me if this also means Windows Server 8 Beta the same exact day, I can’t say. But it definitely means the last 16 months of my life finally start showing some results. As will this blog…

Apparently we’ve been wrong about Han and Greedo since day one. I want to be wrong though. Thanks for passing this along Tony. And speaking of which, thanks to Ted O and the rest of the gang at LucasArts for the awesome tee!

This is a … creepily good music video? Definitely a nice find, Mark!


This is basically my home video collection

My new favorite site of the week? The Awesomer. Do not visit if you have to be somewhere in an hour.

Wait, no… my new favorite site is That’s Nerdaliscious. Do not read if hungry or dorky. 

Sick of everyone going on about Angry Birds? Love Chuck Norris? Go here now. There are a lot of these; don't miss Mortal Kombat versus Donkey Kong.

Ah, there’s Waldo.

Likely the coolest advertisement for something that doesn’t yet exist that you will see this year.


I need to buy stock in SC Johnson. Can you imagine the Windex sales?!

Until next time.

- Ned “Generation X” Pyle with Jonathan “The Greatest Generation” Stephens

Saturday Mail Sack: Because it turns out, Friday night was alright for fighting edition


Hello all, Ned here again with our first mail sack in a couple months. I have enough content built up here that I actually created multiple posts, which means I can personally guarantee there will be another one next week. Unless there isn't!

Today we answer your questions around:

One side note: as I was groveling old responses, I came across a handful of emails I'd overlooked and never responded to; <insert various excuses here>. People who know me know that I don’t ignore email lightly. Even if I hadn't the foggiest idea how to help, I'd have at least responded with a "Duuuuuuuuuuurrrrrrrr, no clue, sorry".

Therefore, I'll make you a deal: if you sent us an email in the past few months and never heard back, please resend your question and I'll answer it as best I can. That way I don't spend cycles answering something you already figured out later, but if you're still stuck, you have another chance. Sorry about all that - what with Windows 8 work, writing our internal support engineer training, writing public content, Jonathan having some kind of south pacific death flu, and presenting at internal conferences… well, only the usual insane Microsoft Office clipart can sum up why we missed some of your questions:

[The usual insane Microsoft Office clipart]

On to the goods!

Question

Is it possible to create a WMI Filter that detects only virtual machines? We want a group policy that will apply specifically to our virtualized guests.

Answer

Totally possible for Hyper-V virtual machines: You can use the WMI class Win32_ComputerSystem with a property of Model like “Virtual Machine” and property Manufacturer of “Microsoft Corporation”. You can also use class Win32_BaseBoard for the Product property, which will be “Virtual Machine” and property Manufacturer that will be “Microsoft Corporation”.

[Screenshot: WMI filter detecting a Hyper-V virtual machine]
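In query form, the Hyper-V checks look like this (either class works; a sketch):

SELECT * FROM Win32_ComputerSystem WHERE Manufacturer = "Microsoft Corporation" AND Model = "Virtual Machine"
SELECT * FROM Win32_BaseBoard WHERE Manufacturer = "Microsoft Corporation" AND Product = "Virtual Machine"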

Technically speaking, this might also capture Virtual PC machines, but I don’t have one handy to see, and I doubt you are allowing those to handle production workloads anyway. As for EMC VMWare, Citrix Xen, KVM, Oracle Virtual Box, etc. you’ll have to see what shows for Win32_BaseBoard/Win32_ComputerSystem in those cases and make sure your WMI filter looks for that too. I don’t have any way to test them, and even if I did, I'd still make you do it out of spite. Gimme money!

Which reminds me - Tad is back:

[Image: Tad]

Question

The Understand and Troubleshoot AD DS Simplified Administration in Windows Server "8" Beta guide states:

Microsoft recommends that all domain controllers provide DNS and GC services for high availability in distributed environments; these options default to on when installing a domain controller in any mode or domain.

But when I run Install-ADDSDomainController -DomainName corp.contoso.com -whatif it returns that the cmdlet will not install the DNS Server (DNS Server: No).

If Microsoft recommends that all domain controllers provide DNS, why do I need to specify -InstallDNS argument?

Answer

The output of DNS Server: No is a cosmetic issue with the output of -whatif. It should say Yes, but doesn't unless you specifically pass -installdns:$true. You don't have to specify -installdns; the cmdlet will automatically* install the DNS server unless you specify -installdns:$false.

* If you are using Windows DNS on domain controllers, that is. The UTG isn't totally accurate in this version (but will be in the next). The logic is that if that domain already hosts the DNS, all subsequent DCs will also host the DNS by default. So to be very specific:

1. New forest: always install DNS
2. New child or new tree domain: if the parent/tree domain hosts DNS, install DNS
3. Replica: if the current domain hosts DNS, install DNS
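So for a replica DC in a domain that already hosts DNS, both of these behave the same, and you only need the switch to opt out (a sketch):

Install-ADDSDomainController -DomainName corp.contoso.com                     # installs DNS by default
Install-ADDSDomainController -DomainName corp.contoso.com -InstallDns:$false  # explicit opt-out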

Question

How can I disable a user on all domain controllers, without waiting for (or forcing) AD replication?

Answer

The universal in-box way that works in all operating systems would be to use DSMOD.EXE USER and feed it the DC names in a list. For example:

1. Create a text file that contains all the DCs in your forest, in a line-separated list:

2008r2-01
2008r2-02

2. Run a FOR loop command to read that list and disable the specified user against each domain controller.

FOR /f %i IN (some-text-file.txt) DO dsmod user "some DN" -disabled yes -s %i

For instance:

[Screenshot: the dsmod loop disabling the user against each DC]

You also have the AD PowerShell option in your Win2008 R2 DC environment, and it’s much easier to automate and maintain. You just tell it the domain controllers' OU and the user and let it rip:

get-adcomputer -searchbase "your DC OU" -filter * | foreach {disable-adaccount "user logon ID" -server $_.dnshostname}

For instance:

[Screenshot: the disable-adaccount one-liner in action]

If you weren't strictly opposed to AD replication (short-circuiting it like this isn't going to stop eventual replication traffic), you can always disable the user on one DC and then force just that single object to replicate to all the other DCs. Check out repadmin /replsingleobj or the new Windows Server "8" Beta sync-adobject cmdlet.

[Screenshot: repadmin /replsingleobj replicating the single object]
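The syntax for both, with hypothetical DC names and user DN:

repadmin /replsingleobj DC02 DC01 "CN=Some User,OU=Staff,DC=corp,DC=contoso,DC=com"
Sync-ADObject -Object "CN=Some User,OU=Staff,DC=corp,DC=contoso,DC=com" -Source DC01 -Destination DC02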

 The Internet also has many further thoughts on this. It's a very opinionated place.

Question

We have found that modifying the security on a DFSR replicated folder and its contents causes a big DFSR replication backlog. We need to make these permissions changes though; is there any way to avoid that backlog?

Answer

Not the way you are doing it. DFSR has to replicate changes and you are changing every single file; after all, how can you trust a replication system that does not replicate? You could consider changing permissions "from the bottom up" - where you modify perms on lower level folders first - in some sort of staged fashion to minimize the amount of replication that has to occur, but it just sounds like a recipe to get things wrong or end up replicating things twice, making it worse. You will just have to bite the bullet in Windows Server 2008 R2 and older DFSR. Do it on a weekend and next time, treat this as a lesson learned and plan your security design better so that all of your user base fits into the model using groups.

However…

It is a completely different story if you switch to Windows Server "8" Beta - well really, the RTM version when it ships. There you can use Central Access Policies (similar to Windows Server 2008 R2's global object access auditing). This new kind of security system is part of the Dynamic Access Control feature and abstracts the user access from NTFS, meaning you can change security using claims policy and not actually change the files on the disk (under some but not all circumstances - more on this when I write a proper post after RTM). It's amazing stuff; in my opinion, DAC is the first truly huge change in Windows file access control since Windows NT gave us NTFS.

[Screenshot: Central Access Policy in Windows Server "8" Beta]

Central Access Policy is not a trivial thing to implement, but this is the future of file servers. Admins should seriously evaluate this feature when testing Windows Server "8" Beta in their lab environments and thinking about future designs. Our very own Mike Stephens has written at length about this in the Understand and Troubleshoot Dynamic Access Control in Windows Server "8" Beta guide as well.

Question

[Perhaps interestingly to you the reader, this was my question to the developers of AD PowerShell. I don’t know everything after all… - Ned]

I am periodically seeing error "invalid enumeration context" when querying the Redmond domain using get-adcomputer. It’s a simple query to return all the active Windows 8 and Windows Server "8" computers that were logged into since February 15th and write them to a CSV file:

[Screenshot: the get-adcomputer query returning Windows 8 family machines active since February 15th, exported to CSV]

It runs for quite a while and sometimes works, sometimes fails. I don’t find any well-explained reference to what this error means or how to avoid it, but it smells like a “too much data asked for over too long a period of time” kind of issue.

Answer

The enumeration contexts do have a finite, hardcoded lifetime, and you will get an error if they expire. You might see this error when executing searches that trawl a huge quantity of data using limited indexed attributes and return a small data set. If you hit a DC that is not very busy, the query runs faster and may have enough time to complete even for a big dataset like this one. Server hardware is also a factor here. You can also try starting the search at a deeper level, or tweaking the indexes - although obviously not in this case.

[For those interested, when the query worked, it returned roughly 75,000 active Windows 8 family machines from that domain alone. Microsoft dogfoods in production like nobody else, baby - Ned]

Question

Is there any chance that DFSR could lock a file while it is replicating outbound and prevent user access to their data?

Answer

DFSR uses the BackupRead() function when copying a file into the staging folder (i.e. any file over 64KB, by default), so that should prevent any “file in use” issues with applications or users; the file "copying" to the staging folder is effectively instantaneous and non-exclusive. Once staged and marshaled, the copy of the file is replicated and no user has any access to that version of the file.

For a file under 64KB, it is simply replicated without staging, and that operation of making a copy and sending it into RPC is so fast that there's no reasonable way for anyone to ever see any issues there. I certainly have never seen it, and I should have by now after six years.

Question

Why does TechNet state that USMT 4.0 offline migrations don’t work for certain OS settings? How do I figure out the complete list?

Answer

Manifests that use migration plugin DLLs aren’t processed when running offline migrations. It's just a by design limitation of USMT and not a bug or anything. To see which manifests you need to examine and consider creating custom XML to handle, review the complete list at Understanding what the USMT 4.0 CONFIG manifests migrate (Part 1: Introduction).

Question

One of my customers has found that the "Everyone" group is added to the below folders in Windows 2003 and Windows 2008:

Windows Server 2008

C:\ProgramData\Microsoft\Crypto\DSS\MachineKeys

C:\ProgramData\Microsoft\Crypto\RSA\MachineKeys

Windows Server 2003

C:\Documents and Settings\All Users\Application Data\Microsoft\Crypto\DSS\MachineKeys

C:\Documents and Settings\All Users\Application Data\Microsoft\Crypto\RSA\MachineKeys

1. Can we remove the "Everyone" group and give permissions to another group instead - Authenticated Users, for example?

2. Will replacing that default cause issues?

3. Why is this set like this by default?

Answer

[Courtesy of: image]

These permissions are intentional. They are intended to allow any process to generate a new private key, even an Anonymous one. You'll note that the permissions on the MachineKeys folder are limited to the folder only. Also, you should note that inheritance has been disabled, so the permissions on the MachineKeys folder will not propagate to new files created therein. Finally, the key generation code itself modifies the permissions on new key container files before the private key is actually written to the container file.

In short, messing with these permissions will probably lead to failures in creating or accessing keys belonging to the computer. So please don't touch them.

1. Replacing Everyone with Authenticated Users probably won't cause any problems. Microsoft, however, doesn't test cryptographic operations after such a permission change; therefore, we cannot predict what will happen in all cases.

2. See my answer above. We haven't tested it. We have, however, been performing periodic security reviews of the default Windows system permissions, tightening them where possible, for the last decade. The default Everyone permissions on the MachineKeys folder have cleared several of these reviews.

3. In local operations, Everyone includes unidentified or anonymous users. The theory is that we always want to allow a process to generate a private key. When the key container is actually created and the key written to it, the permissions on the key container file are updated with a completely different set of default permissions. All the default permissions allow are the ability to create a file, read and write data. The permissions do not allow any process except System to launch any executable code.
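If you want to see what's on that folder today without changing anything, a quick read-only check works fine (this example uses the Windows Server 2008-style path from above; running icacls with no switches only displays the ACL):

icacls "%ProgramData%\Microsoft\Crypto\RSA\MachineKeys"

Compare the output against a known-good machine if you suspect someone has already "hardened" the folder.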

Question

If I specify a USMT 4.0 config.xml child node to prevent migration, I still see the settings migrate. But if I set the parent node, those settings do not migrate; the consequence being that no child nodes migrate, which I do not want.

For example, on XP the Dot3Svc service is set to Manual startup.  On Win7, I want the Dot3Svc service set to Automatic startup.  If I use this config.xml on the loadstate, the service is set to manual like the XP machine and my "no" setting is ignored:

<component displayname="Networking Connections" migrate="yes" ID="network_and_internet\networking_connections">

  <component displayname="Microsoft-Windows-Wlansvc" migrate="yes" ID="<snip>"/>

  <component displayname="Microsoft-Windows-VWiFi" migrate="yes" ID="<snip>"/>

  <component displayname="Microsoft-Windows-RasConnectionManager" migrate="yes" ID="<snip>"/>

  <component displayname="Microsoft-Windows-RasApi" migrate="yes" ID="<snip>"/>

  <component displayname="Microsoft-Windows-PeerToPeerCollab" migrate="yes" ID="<snip>"/>

  <component displayname="Microsoft-Windows-Native-80211" migrate="yes" ID="<snip>"/>

  <component displayname="Microsoft-Windows-MPR" migrate="yes" ID="<snip>"/>

  <component displayname="Microsoft-Windows-Dot3svc" migrate="no" ID="<snip>"/>

</component>

Answer

Two different configurations can cause this symptom:

1. You are using a config.xml file created on Windows 7, then running it on a Windows XP computer with scanstate /config

2. The source computer was Windows XP and it did not have a config.xml file set to block migration.

When coming from XP, where downlevel manifests were used, loadstate does not process those differently-named child nodes on the destination Win7 computer. So while the parent node set to NO would work, the child nodes would not, as they have different displayname and ID.

It’s a best practice to use a config.xml on the scanstate, as described in http://support.microsoft.com/kb/2481190, when going from x86 to x64; otherwise, you end up with damaged COM settings. Beyond that, you only need to generate per-OS config.xml files if you plan to change default behavior. All the manifests run by default if there is a config.xml with no modifications, or if there is no config.xml at all.

Besides being required on XP to block settings, you should also definitely lean towards using config.xml on the scanstate rather than the loadstate. If going Vista to Vista, Vista to 7, or 7 to 7, you could use the config.xml on either side, but I’d still recommend sticking with the scanstate; it’s typically better to block settings from entering the store in the first place, as the migration will be faster and leaner.

Other Stuff

[Many courtesy of our pal Mark Morowczynski -Ned]

Happy belated 175th birthday Chicago. Here's a list of things you can thank us for, planet Earth; where would you be without your precious Twinkies!?

Speaking of Chicago…

All the new MCSE and certification news reminded me of the other side to that coin.

Do you know where your nearest gun store is located? Map of the Dead does. Review now; it will be too late when the zombies rise from their graves, and I don't plan to share my bunker, Jim.

image

If you call yourself an IT Pro, you owe it to yourself to visit moviecarposters.com right now and buy… everything. They make great alpha geek conversation pieces. To get things started, I recommend these:

image
Sigh - there is never going to be another Firefly

And finally…

I started re-reading Terry Pratchett, picking up from where I left off as a kid. Hooked again. Damn you English writers, with your understated awesomeness!

Ok, maybe not all English Writers…

image

Until next time,

- Ned "Jonathan is seriously going to kill me" Pyle

Friday Mail Sack: Mothers day pfffft… when is son’s day?


Hi folks, Ned here again. It’s been a little while since the last sack, but I have a good excuse: I just finished writing a poop ton of Windows Server 2012 depth training that our support folks around the world will use to make your lives easier (someday). If I ever open MS Word again it will be too soon, and I’ll probably say the same thing about PowerPoint by June.

Anyhoo, let’s get to it. This week we talk about:

Question

Is it possible to use any ActiveDirectory module cmdlets through invoke-command against a remote non-Windows Server 2012 DC where the module is installed? It always blows up for me as it tries to “locally” (remotely) use the non-existent ADWS with error “Unable to contact the server. This may be because the server does not exist, it is currently down, or it does not have the active directory web services running”

image

Answer

Yes, but you have to ignore that terribly misleading error and put your thinking cap on: the problem is your credentials. When you invoke-command, you make the remote server run the local PowerShell on your behalf. In this case that remote command has to go off-box to yet another remote server – a DC running ADWS. This means a multi-hop credential scenario. Provide –credential (get-credential) to your called cmdlets inside the curly braces and it’ll work fine.
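For example, here's a minimal sketch (server and user names are made up) that captures the credential once and passes it into the remote script block:

$cred = Get-Credential
Invoke-Command -ComputerName 2008r2-dc.contoso.com -ScriptBlock {
    param($c)
    Import-Module ActiveDirectory          # needed on down-level PowerShell 2.0 hosts
    Get-ADUser -Identity ned -Credential $c   # explicit creds avoid the second-hop failure
} -ArgumentList $cred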

Question

We are using a USMT /hardlink migration to preserve disk space and increase performance. However, performance is crazy slow and we’re actually running out of disk space on some machines that have very large files like PSTs. My scanstate log shows:

Error [0x000000] Write error 112 for C:\users\ned\Desktop [somebig.pst]. Windows error 112 description: There is not enough space on the disk.[gle=0x00000070]

Error [0x080000] Error 2147942512 while gathering object C:\users\ned\Desktop\somebig.pst. Shell application requested abort![gle=0x00000070]

Answer

These files are encrypted and you are using /efs:copyraw instead of /efs:hardlink. Encrypted files are copied into the store whole instead of hardlink'ing, unless you specify /efs:hardlink. If you had not included /efs, this file would have failed with, "File X is encrypted. Use the /efs option to specify a different way to handle this file".

Yes, I realize that we should probably just require that option. But think of all the billable hours we just gave you!
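As a sketch, a hardlink command line that also keeps EFS files as hardlinks would look something like this (the store path and XML list are just examples):

scanstate.exe c:\store /hardlink /nocompress /efs:hardlink /i:migdocs.xml /i:migapp.xml /v:5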

Question

I was using your DFSR pre-seeding post and am finding that robocopy /B slows down my migration compared to not using it. Is that flag required for pre-seeding?

Answer

The /B mode, while inherently slower, ensures that files are copied using a backup API regardless of permissions. It is the safest way, so I took the prudent route when I wrote the sample command. It’s definitely expected to be slower – in my semi-scientific repros the difference was ~1.75 times slower on average.

However, /B is not required if you are 100% sure you have at least READ permissions to all files. The downside is that a lot of permission failures might end up making things even slower than just using /B; you will have to test it.

If you are using Windows Server 2012 and have plenty of hardware to back it up, you can use the following options that really make the robocopy fly, at the cost of memory, CPU, and network utilization (and possibly, some files not copying at all):

Robocopy <foo> <bar> /e /j /copyall /xd DfsrPrivate /log:<sna.foo> /tee /mt:128 /r:1

For those that have used this before, it will look pretty similar – but note:

  • Adds the /J option (first introduced in the Windows 8 robocopy) – it now performs unbuffered IO, which means gigantic files like ISO and VHD really fly and a 1Gbps network is finally heavily utilized. Adds significant memory overhead, naturally.
  • Adds /MT:128 to use 128 simultaneous file copy threads. Adds CPU overhead, naturally.
  • Removes /B and drops /R:6 to /R:1 in order to guarantee the fastest copy method. Make sure you review the log and recopy any failures individually, as you are now skipping any files that failed to copy on the first try.

 

Question

Recently I came across a user account that keeps locking out (yes, I've read several of your blogs where you say account lockout policies are bad: "Turning on account lockouts is a way to guarantee someone with no credentials can deny service to your entire domain"). We get the Event ID 4740 saying the account has been locked out, but the calling computer name is blank:

 

Log Name:     Security
Event ID:     4740
Level:        Information
Description:
A user account was locked out.

Subject:
   Security ID:     SYSTEM
   Account Name:    someaccount
   Account Domain:  somedomain
   Logon ID:        0x3e7

Account That Was Locked Out:
   Security ID:     somesid
   Account Name:    someguy

Additional Information:
   Caller Computer Name:

 

The 0xC000006A indicates a bad password attempt. This happens every 5 minutes and eventually results in the account being locked out. We can see that the bad password attempts are coming via COMP1 (which is a proxy server); however, we can't work out what is sending the requests to COMP1, as the computer name is blank again (there should be a computer name).

Are we missing something here? Is there something else we could be doing to track this down? Is the calling computer name being blank indicative of some other problem or just perhaps means the calling device is a non-Microsoft device?

Answer

(I am going to channel my inner Eric here):

A blank computer name is not unexpected, unfortunately. The audit system relies on the sending computers to provide that information as part of the actual authentication attempt. Kerberos does not have a reliable way to provide the remote computer info in many cases. Name resolution info about a sending computer is also easily spoofed. This is especially true with transitive NTLM logons, where we are relying on one computer to provide info for another computer. NTLM provides names but they are also easily spoofed so even when you see a computer name in auditing, you are mainly asking an honest person to tell you the truth.

Since it happens very frequently and predictably, I’d configure a network capture on the sending server (the proxy) to run in a circular fashion, then stop the trace when the lockout occurs – you’d see all of the traffic and know exactly who sent it. If the lockout were longer running and less predictable, I’d still use a circular capture, but leave it running until that 4740 event writes. Then you can see what the sending IP address is and hunt down that machine.
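On Windows 7 / Windows Server 2008 R2 and later, one way to get that circular capture without installing anything is netsh trace (the file path and size here are arbitrary examples):

netsh trace start capture=yes filemode=circular maxsize=512 tracefile=c:\temp\lockout.etl
rem ...wait for the next 4740 event, then:
netsh trace stop

The resulting ETL file opens in Network Monitor, where you can filter on the locked-out account name.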

[And the customer later noted that since it’s a proxy server, it has lots of logs – and they told him the offender]

Question

I am testing USMT 5.0 and finding that if I migrate certain Windows 7 computers to Windows 8 Consumer Preview, Modern Apps won’t start. Some have errors, some just start then go away.

Answer

Argh. The problem here is Windows 7’s built-in manifest that implements microsoft-windows-com-base, which then copies this registry key:

HKEY_LOCAL_MACHINE\Software\Microsoft\OLE

If the DCOM permissions are modified in that key, they migrate over and interfere with the ones needed by Modern Apps to run. This is a known issue, already fixed so that we don’t copy those values onto Windows 8 anymore. It was never a good idea in the first place, as any application needing special permissions will just set its own anyway when installed.

And it’s burned us in the past too…

Question

Are there any available PowerShell, WMI, or command-line options for configuring an OCSP responder? I know that I can install the feature with the Add-WindowsFeature, but I'd like to script configuring the responder and creating the array.

Answer

[Courtesy of the Jonathan “oh no, feet!” Stephens– Ned]

There are currently no command line tools or dedicated PowerShell cmdlets available to perform management tasks on the Online Responder. You can, however, use the COM interfaces IOCSPAdmin and IOCSPCAConfiguration to manage the revocation providers on the Online Responder.

  1. Create an IOCSPAdmin object.
  2. The IOCSPAdmin::OCSPCAConfigurationCollection property will return an IOCSPCAConfigurationCollection object.
  3. Use IOCSPCAConfigurationCollection::CreateCAConfiguration to create a new revocation provider.
  4. Make sure you call IOCSPAdmin::SetConfiguration when finished so the Online Responder gets updated with the new revocation configuration.

Because these are COM interfaces, you can call them from VBScript or PowerShell, so you have great flexibility in how you write your script.
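As a rough PowerShell sketch of those four steps – treat the ProgID, method signatures, and server name below as assumptions to verify against the MSDN documentation before relying on them:

$ocsp = New-Object -ComObject "CertAdm.OCSPAdmin"    # ProgID is an assumption - verify in your environment
$ocsp.GetConfiguration("ocsp01.contoso.com", $true)  # hypothetical responder name; loads the current config
$newCfg = $ocsp.OCSPCAConfigurationCollection.CreateCAConfiguration("Contoso Issuing CA", $caCertBytes)  # $caCertBytes: the CA cert as a byte array (hypothetical variable)
# ...set the signing certificate, CRL URLs, etc. on $newCfg here...
$ocsp.SetConfiguration("ocsp01.contoso.com", $true)  # commit so the responder picks up the new revocation configuration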

Question

I want to use Windows Desktop Search with DFS Namespaces but according to this TechNet Forum thread it’s not possible to add remote indexes on namespaces. What say you?

Answer

There is no DFSN+WDS remote index integration in any OS, including Windows 8 Consumer Preview. At its heart, this comes down to being a massive architectural change in WDS that just hasn’t gotten traction. You can still point to the targets as remote indexes, naturally.

Question

Certain files – as pointed out here by AlexSemi – that end with invalid characters like a dot or a space break USMT migration. One way to create these files is to echo into a device path, like so:

image
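If you want to reproduce this in a lab, a minimal sketch of the trick looks like this (hypothetical path; the \\?\ prefix bypasses Win32 path normalization, which is what lets the trailing dot survive):

echo junk > \\?\C:\temp\badfile.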

These files can’t be opened by anything in Windows, it seems.

image

When you try to migrate, you end up with a fatal “windows error 2” “the system cannot find the file specified” error unless you skip the files using /C:

image

What gives?

Answer

Quit making invalid files! :-)

USMT didn’t invent CreateFile() so its options here are rather limited… USMT 5.0 handles this case correctly through error control - it skips these files when hardlink’ing because Windows returns that they “don’t exist”. Here is my scanstate log using USMT 5.0 beta, where I used /hardlink and did NOT provide /C:

image

In the case of non-hardlink, scanstate copies them without their invalid names and they become non-dotted/non-spaced valid files (even in USMT 4.0). To make it copy these invalid files with the actual invalid name would require a complete re-architecting of USMT or the Win32 file APIs. And why – so that everyone could continue to not open them?

Other Stuff

In case you missed it, Windows 8 Enterprise Edition details. With all the new licensing and activation goodness, Enterprise versions are finally within reach of any size customer. Yes, that means you!

Very solid Mother’s Day TV mash up (a little sweary, but you can’t fight something that combines The Wire, 30 Rock, and The Cosbys)

Zombie mall experience. I have to fly to Reading in June to teach… this might be on the agenda

Well, it’s about time - Congress doesn't "like" it when employers ask for Facebook login details

Your mother is not this awesome:

image
That, my friend, is a Skyrim birthday cake

SportsCenter wins again (thanks Mark!)

Don’t miss the latest Between Two Ferns (veeerrrry sweary, but Zach Galifianakis at his best; I just wish they’d add the Tina Fey episode)

But what happens if you eat it before you read the survival tips, Land Rover?!

 

Until next time,

- Ned “demon spawn” Pyle

Friday Mail Sack: LeBron is not Jordan Edition


Hi folks, Ned here again. Today we discuss trusts rules around domain names, attribute uniqueness, the fattest domains we’ve ever seen, USMT data-only migrations, kicking FRS while it’s down, and a few amusing side topics.

Scottie, don’t be that way. Go Mavs.

Question

I have two forests with different DNS names, but with duplicate NetBIOS names on the root domain. Can I create a forest (Kerberos) trust between them? What about NTLM trusts between their child domains?

Answer

You cannot create external trusts between domains with the same name or SID, nor can you create Kerberos trusts between two forests with the same name or SID. This includes both the NetBIOS and FQDN version of the name – even if using a forest trust where you might think that the NB name wouldn’t matter – it does. Here I am trying to create a trust between fabrikam.com and fabrikam.net forests – I get the super useful error:

“This operation cannot be performed on the current domain”

image

But if you are creating external (NTLM, legacy) trusts between two non-root domains in two forests, as long as the FQDN and NB name of those two non-root domains are unique, it will work fine. They have no transitive relationship.

So in this example:

  • You cannot create a domain trust nor a forest trust between fabrikam.com and fabrikam.net
  • You can create a domain (only) trust between left.fabrikam.com and right.fabrikam.net
  • You cannot create a domain trust between fabrikam.com and right.fabrikam.net
  • You cannot create a domain trust between fabrikam.net and left.fabrikam.com

fabrikam1

Why don’t the last two work? Because the trust process thinks that the trust already exists due to the NetBIOS name match with the child’s parent. Arrrgh!

image

BUT!

You could still have serious networking problems in this scenario regardless of the trust. If there are two same-named domains physically accessible through the network from the same computer, there may be a lot of misrouted communication when people just use NetBIOS domain names. They need to make sure that no one ever has to broadcast NetBIOS to find anything – their WINS environments must be perfect in both forests and they should convert all their DFS to using DfsDnsConfig. Alternatively they could block all communication between the two root domains’ DCs, perhaps at a firewall level.

Note: I am presuming your NetBIOS domain name matches the left-most part of the FQDN name. Usually it does, but that’s not a requirement (and not possible if you are using more than 15 characters in that name).

Question

Is it possible to enforce uniqueness for the sAMAccountName attribute within the forest?

Answer

[Courtesy of Jonathan Stephens]

The Active Directory schema does not have a mechanism for enforcing uniqueness of an attribute. Those cases where Active Directory does require an attribute to be unique in either the domain (sAMAccountName) or forest (objectGUID) are enforced by other code – for example, AD Users and Computers won’t let you do it:

image

The only way you could actually achieve this is to have a custom user provisioning application that would perform a GC lookup for an account with a particular sAMAccountName, and would only permit creation of the new object should no existing object be found.
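For illustration, such a provisioning script could do the uniqueness check with the AD module against a GC port before creating the user. A minimal sketch, with the sAMAccountName and logic purely as examples (the empty SearchBase makes the GC query forest-wide):

Import-Module ActiveDirectory
$gc = (Get-ADDomainController -Discover -Service GlobalCatalog).HostName[0]
if (-not (Get-ADObject -Server "$($gc):3268" -SearchBase "" -LDAPFilter '(sAMAccountName=saradavis)')) {
    # no match anywhere in the forest, so it is safe to create the new account
}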

[Editor’s note: If you want to see what happens when duplicate user samaccountname entries are created, try this on for size in your test lab:

1. Enable AD Recycle Bin.
2. Create an OU called Sales.
3. Create a user called 'Sara Davis' with a logon name and pre-windows 2000 logon name of 'saradavis'.
4. Delete the user.
5. In the Users container, create a user called 'Sara Davis' with a logon name and pre-windows 2000 logon name of 'saradavis' (simulating someone trying to get that user back up and running by creating it new, like a help desk would do for a VIP in a hurry).
6. Restore the deleted 'Sara Davis' user back to her previous OU (this will work because the DN's do not match and the recreated user is not really the restored one), using:

get-adobject -filter 'samaccountname -eq "saradavis"' -includedeletedobjects | restore-adobject -targetpath "ou=sales,dc=consolidatedmessenger,dc=com"

(note the 'illegal modify operation' error).

7. Despite the above error, the user account will in fact be restored successfully and will now exist in both the Sales OU and the Users container, with the same sAMAccountName and userPrincipalName.
8. Logon as SaraDavis using the NetBIOS-style name.
9. Logoff.
10. Note in DSA.MSC how 'Sara Davis' in the Sales OU now has a 'pre-windows 2000' logon name of $DUPLICATE-<something>.
11. Note how both copies of the user have the same UPN.
12. Logon with the UPN name of saradavis@consolidatedmessenger.com and note that this attribute does not get mangled.

Fun, eh? – Ned]

Question

Which customer in the world has the most number of objects in a production AD domain?

Answer

Without naming specific companies - I have to protect their privacy - the single largest “real” domain I have ever heard of had ~8 million user objects and nearly nothing else. It was used as auth for a web system. That was back in Windows 2000 so I imagine it’s gotten much bigger since then.

I have seen two other customers (inappropriately) use AD as a quasi-SQL database, storing several hundred million objects in it as ‘transactions’ or ‘records’ of non-identity data, while using a custom schema. This scaled fine for size but not for performance, as they were constantly writing to the database (sometimes at a rate of hundreds of thousands of new objects a day) and the NTDS.DIT is - naturally - optimized for reading, not writing.  The performance overall was generally terrible as you might expect. You can also imagine that promoting a new DC took some time (one of them called about how initial replication of a GC had been running for 3 weeks; we recommended IFM, a better WAN link, and to stop doing that $%^%^&@).

For details on both recommended and finite limits, see:

Active Directory Maximum Limits - Scalability

http://technet.microsoft.com/en-us/library/active-directory-maximum-limits-scalability(WS.10).aspx

The real limit on objects created per DC is 2,147,483,393 (or 2^31 minus 255). The real limit on users/groups/computers (security principals) in a domain is 1,073,741,823 (or 2^30). If you find yourself getting close on the latter you need to open a support case immediately!

Question

Is a “Data Only” migration possible with USMT? I.e. no application settings or configuration is migrated, only files and folders.

Answer

Sure thing.

1. Generate a config file with:

scanstate.exe /genconfig:config.xml

2. Open that config.xml in Notepad, then search and replace “yes” with “no” (including the quotation marks) for all entries, and save the file (a one-liner to automate this appears after these steps). Do not delete the lines, and do not assume that leaving out the config.xml has the same effect – either way, those rules will process normally.

3. Run your scanstate, including config.xml and NOT including migapp.xml. For example:

scanstate.exe c:\store /config:config.xml /i:migdocs.xml /v:5
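If you'd rather not do step 2 by hand in Notepad, this PowerShell one-liner performs the same search and replace:

(Get-Content config.xml) -replace 'migrate="yes"', 'migrate="no"' | Set-Content config.xml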

Normally, your scanstate log should be jammed with entries around the registry data migration:

Processing Registry HKLM\SOFTWARE\Microsoft\Windows\CurrentVersion\Internet Settings\AllowedDragImageExts [.ico]
Processing Registry HKLM\SOFTWARE\Microsoft\Windows\CurrentVersion\Internet Settings\AllowedDragImageExts [.jfif]
Processing Registry HKLM\SOFTWARE\Microsoft\Windows\CurrentVersion\Internet Settings\AllowedDragImageExts [.jpe]
Processing Registry HKCU\Control Panel\Accessibility\HighContrast
Processing Registry HKCU\Control Panel\Accessibility\HighContrast [Flags]
Processing Registry HKCU\Control Panel\Accessibility\HighContrast [High Contrast Scheme]

If you look through your log after using the steps above, none of those will appear.

You might also think that you could just rename the DLManifests and ReplacementManifests folders to get the same effect, and you’d almost be right. The problem is that Vista and Windows 7 also use the built-in %systemroot%\winsxs\manifests folders, and you certainly cannot remove those. Just go with the config.xml technique.

Question

After we migrate SYSVOL from FRS to DFSR on Windows Server 2008 R2, we still see that the FRS service is set to automatic. Is it ok to disable?

Answer

Absolutely. Once an R2 server stops replicating SYSVOL with FRS, it cannot use that service for any other data. If you try to start the FRS service or replicate with it, it will log events like these:

Log Name: File Replication Service
Source: NtFrs
Date: 1/6/2009 11:12:45 AM
Event ID: 13574
Task Category: None
Level: Error
Keywords: Classic
User: N/A
Computer: 7015-SRV-03.treyresearch.net
Description:
The File Replication Service has detected that this server is not a domain controller. Use of the File Replication Service for replication of non-SYSVOL content sets has been deprecated and therefore, the service has been stopped. The DFS Replication service is recommended for replication of folders, the SYSVOL share on domain controllers and DFS link targets.

Log Name: File Replication Service
Source: NtFrs
Date: 1/6/2009 2:16:14 PM
Event ID: 13576
Task Category: None
Level: Error
Keywords: Classic
User: N/A
Computer: 7015-srv-01.treyresearch.net
Description:
Replication of the content set "PUBLIC|FRS-REPLICATED-1" has been blocked because use of the File Replication Service for replication of non-SYSVOL content sets has been deprecated. The DFS Replication service is recommended for replication of folders, the SYSVOL share on domain controllers and DFS link targets.

We document this in the SYSVOL Replication Migration Guide but it’s easy to miss and a little confusing – this article applies to both R2 and Win2008, and Win2008 can still use FRS:

7. Stop and disable the FRS service on each domain controller in the domain unless you were using FRS for purposes other than SYSVOL replication. To do so, open a command prompt window and type the following commands, where <servername> is the Universal Naming Convention (UNC) path to the remote server:

Sc <servername> stop ntfrs

Sc <servername> config ntfrs start= disabled

Other stuff

Another week, another barrage of cloud.


(courtesy of the http://xkcd.com/ blog)

Finally… friggin’ Word. Play this at 720p, full screen.

 

----

Have a great weekend folks.

- Ned “waiting on yet more email from humor-impaired MS colleagues about his lack of professionalism” Pyle

Target Group Policy Preferences by Container, not by Group


Hello again AskDS readers, Mike here again. This post reflects on Group Policy Preference targeting items, specifically targeting by security groups. Targeting preference items by security groups is a bad idea. There is a better way by which most environments can accomplish the same result, at a fraction of the cost.

Group Membership dependent

The world of Windows has been dependent on group membership for a long time. This dependency is driven by the way Windows authorizes access to resources. The computer or user must be a member of the group in order to access the printer or file server. Groups are and have been the bane of our existence. Nevertheless, we should not let group membership dominate all aspects of our design. One example where we can move away from using security groups is with Group Policy Preference (GPP) targeting.

Targeting by Security Group

GPP Targeting items control the scope of application for GPP items. Think of targeting items as Group Policy filtering on steroids, but they only apply to GPP items included in a Group Policy object. They introduce an additional layer of administration that provides more control over "how" GPP items apply to a specific user or computer.

image
Figure 1 - List of Group Policy Preference Targeting items

The most common scenario we see using the Security Group targeting item is with the Drive Map preference item. IT Professionals have been creating network drive mappings based on security groups since Moby Dick was a sardine-- it's what we do. The act is intuitive because we typically apply permissions to the group and add users to the group.

The problem with this is that not all applications determine group membership the same way. Also, the addition of Universal Groups and the numerous permutations of group nesting make this a complicated task. And let's not forget that some groups are implicitly added when you log on, like Domain Users, because it’s the designated primary group. Programmatically determining group membership is simple -- until it's implemented, and its implementation's performance is typically inversely proportional to its accuracy. It either takes a long time to get an accurate list, or a short time to get a somewhat accurate list.

Security Group Computer Targeting

Using GPP Security Group targeting for computers is a really bad idea. Here's why: in most circumstances, the client retrieves group memberships from a domain controller. This means network traffic from the client to the domain controller and back again. Using the network introduces latency, latency introduces slow processing, and slow processing is the last thing you want when the computer is applying Group Policy. Also, Preference Targeting allows you to create complex targeting scenarios using Boolean operators such as AND, OR, and NOT. This is powerful stuff and lets you combine multiple conditions into a single targeting expression. However, the power comes at a cost. Remember that network traffic we created by making queries to the domain controller for group memberships? Well, that information is not cached; each Security Group targeting item in the GPO must perform that query again - yes, the same one it just did. Don't hate, that's just the way it works. And this behavior does not take nested groups into account: you need to increase the number of round trips to the domain controller if you want to include groups of groups of groups etcetera ad nauseam (trying to make my Latin word quota).

Security Group User Targeting

User Security Group targeting is not as bad as computer Security Group targeting. During user Security Group targeting, the Group Policy Preferences extension determines group membership from the user's authentication token. This process is more efficient and does not require round trips to the domain controller. One caveat with depending on group membership is the risk of the computer or user's group membership containing too many groups. Huh - too many groups? Yes, this happens more often than many realize. Windows creates an authentication token from information in the Kerberos TGT. The Kerberos TGT has a finite amount of storage for this information. Users and computers with large group memberships (groups nested with groups…) can exhaust the finite storage available in the TGT. When this happens, the remaining group memberships are truncated, which creates the effect that the user is not a member of those groups. Group memberships truncated from the authentication token result in the computer or user not receiving a particular Group Policy Preference item.

You got any better ideas?

A better choice of targeting Group Policy Preference items is to use Organization Unit targeting items. It's da bomb!!! Let's look at how Organizational Unit targeting items work.

image
Figure 2 Organizational Unit Targeting item

The benefits Organizational Unit Targeting Items

Organizational Unit targeting items determine OU container membership by parsing the distinguished name of the computer or user. So, it simply uses string manipulation to determine which OUs are in scope for the user or computer. Furthermore, it can determine if the computer or user is a direct member of an OU by simply looking for the first occurrence of OU immediately following the principal name in the distinguished name.
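To illustrate the string manipulation, here's a sketch of the idea in PowerShell - this is not the CSE's actual code, and it ignores complications like escaped commas in CNs:

$dn = 'CN=WKS01,OU=Workstations,OU=HQ,DC=contoso,DC=com'    # hypothetical DN
$ous = ($dn -split ',') | Where-Object { $_ -like 'OU=*' }  # every OU in scope for targeting
$isDirectChild = ($dn -split ',')[1] -like 'OU=*'           # an OU right after the principal name = direct container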

Computer Targeting using OUs

Computer Preference targeting with OUs still has to contact a domain controller. However, it’s an LDAP call and, because we are not chasing nested groups, it's quick and efficient. First, the preference client-side extension gets the name of the computer. The CSE gets the name from the local computer, either from the environment variable or from the registry, in that order. The CSE then uses the name to look up the security identifier (SID) for the computer. Windows performs an LDAP bind to the computer object in Active Directory using the SID. The bind completes and retrieves the computer object's distinguished name. The CSE then parses the distinguished name as needed to satisfy the Organizational Unit targeting item.

User Targeting using OUs

User Preference targeting requires fewer steps because the client-side extension already knows the user's SID. The remaining work performed by the CSE is to LDAP bind to the user object using the user's SID and retrieve the distinguished name from the user object. Then, business as usual, the CSE parses the distinguished name to satisfy the Organizational Unit targeting item.

Wrap Up

So there you have it. The solution is clean and it takes full advantage of your existing Active Directory hierarchy. Alternatively, it could be the catalyst needed to start a redesign project. Understandably, this only works for Group Policy Preferences items; however -- every little bit helps when consolidating the number of groups to which computers and users belong -- and it makes us a little less dependent on groups. Also, it's a better, faster, and more efficient alternative to Security Group targeting. So try it.

Update

We recently published a new article around behavior changes with Group Policy Preferences Computer Security Group Targeting.  Read more here.

- Mike "This is U.S. History; I see the globe right there" Stephens

 

 

What is the Impact of Upgrading the Domain or Forest Functional Level?


Hello all, Jonathan here again. Today, I want to address a question that we see regularly. As customers upgrade Active Directory, they inevitably reach the point where they are ready to change the Domain or Forest Functional Level, and they sometimes become fraught. Why is this necessary? What does this mean? What’s going to happen? How can this change be undone?

What Does That Button Do?

Before these questions can be properly addressed, it must first be understood exactly what purposes the Domain and Forest Functional Levels serve. Each new version of Active Directory on Windows Server incorporates new features that can only be taken advantage of when all domain controllers (DCs) in either the domain or forest have been upgraded to the same version. For example, Windows Server 2008 R2 introduces the AD Recycle Bin, a feature that allows the Administrator to restore deleted objects from Active Directory. In order to support this new feature, changes were made in the way that delete operations are performed in Active Directory, changes that are only understood and adhered to by DCs running on Windows Server 2008 R2. In mixed domains, containing both Windows Server 2008 R2 DCs as well as DCs on earlier versions of Windows, the AD Recycle Bin experience would be inconsistent, as deleted objects may or may not be recoverable depending on the DC on which the delete operation occurred. To prevent this, a mechanism is needed by which certain new features remain disabled until all DCs in the domain, or forest, have been upgraded to the minimum OS level needed to support them.

After upgrading all DCs in the domain, or forest, the Administrator is able to raise the Functional Level, and this Level acts as a flag informing the DCs, and other components as well, that certain features can now be enabled. You'll find a complete list of Active Directory features that have a dependency on the Domain or Forest Functional Level here:

Appendix of Functional Level Features
http://technet.microsoft.com/en-us/library/understanding-active-directory-functional-levels(WS.10).aspx

There are two important restrictions of the Domain or Forest Functional Level to understand, and once they are understood, these restrictions are obvious. First, once the Functional Level has been upgraded, new DCs running on downlevel versions of Windows Server cannot be added to the domain or forest. The problems that might arise when installing downlevel DCs become pronounced with new features that change the way objects are replicated (e.g., Linked Value Replication). To prevent these issues from arising, a new DC must be at the same level as, or greater than, the Functional Level of the domain or forest.

The second restriction, for which there is a limited exception on Windows Server 2008 R2, is that once upgraded, the Domain or Forest Functional Level cannot later be downgraded. The only purpose that having such ability would serve would be so that downlevel DCs could be added to the domain. As has already been shown, this is generally a bad idea.

Starting in Windows Server 2008 R2, however, you do have a limited ability to lower the Domain or Forest Functional Levels. The Windows Server 2008 R2 Domain or Forest Functional level can be lowered to Windows Server 2008, and no lower, if and only if none of the Active Directory features that require a Windows Server 2008 R2 Functional Level has been activated. You can find details on this behavior - and how to revert the Domain or Forest Functional Level - here.
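For reference, on Windows Server 2008 R2 the Active Directory module exposes this through two cmdlets. A sketch with a made-up domain name (the lowering call only succeeds under the conditions just described):

Import-Module ActiveDirectory
Set-ADDomainMode -Identity contoso.com -DomainMode Windows2008R2Domain   # raise the domain level
Set-ADForestMode -Identity contoso.com -ForestMode Windows2008Forest     # lower, 2008 R2 -> 2008 only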

What Happens Next?

Another common question: what impact does changing the Domain or Forest Functional Level have on enterprise applications like Exchange or Lync, or on third party applications? First, new features that rely on the Functional Level are generally limited to Active Directory itself. For example, objects may replicate in a new and different way, aiding in the efficiency of replication or increasing the capabilities of the DCs. There are exceptions that reach beyond Active Directory itself, such as allowing DFSR to replace NTFRS for SYSVOL replication, but even then the dependency is on the operating system version of the DCs. Regardless, changing the Domain or Forest Functional Level should have no impact on an application that depends on Active Directory.

Let's fall back on a metaphor. Imagine that Active Directory is just a big room. You don't actually know what is in the room, but you do know that if you pass something into the room through a slot in the locked door you will get something returned to you that you could use. When you change the Domain or Forest Functional Level, what you can pass in through that slot does not change, and what is returned to you will continue to be what you expect to see. Perhaps some new slots are added to the door through which you can pass in different things, and get back different things, but that is the extent of any change. How Active Directory actually processes the stuff you pass in to produce the stuff you get back, what happens behind that locked door, really isn't relevant to you.

Carry this metaphor forward into the real world: if an application like Exchange uses Active Directory to store its objects, or to perform various operations, none of that functionality should be affected when the Domain or Forest Functional Level changes. In fact, if your applications are also written to take advantage of new features introduced in Active Directory, you may find that the capabilities of your applications increase when the Level changes.

The answer to the question about the impact of changing the Domain or Forest Functional Level is there should be no impact. If you still have concerns about any third party applications, then you should contact the vendor to find out if they tested the product at the proposed Level, and if so, with what result. The general expectation, however, should be that nothing will change. Besides, you do test your applications against proposed changes to your production AD, do you not? Discuss any issues with the vendor before engaging Microsoft Support.

Where’s the Undo Button?

Even after all this, however, there is a great concern about the change being irreversible, so that you must have a rollback plan just in case something unforeseen and catastrophic occurs to Active Directory. This is another common question, and there is a supported mechanism to restore the Domain or Forest Functional Level. You take a System State backup of one DC in each domain in the forest. To recover, flatten all the DCs in the forest, restore one for each domain from the backup, and then DCPROMO the rest back into their respective domains. This is a Forest Restore, and the steps are outlined in detail in the following guide:

Planning for Active Directory Forest Recovery
http://technet.microsoft.com/en-us/library/planning-active-directory-forest-recovery(WS.10).aspx

By the way, do you know how often we’ve had to help a customer perform a complete forest restore because something catastrophic happened when they raised the Domain or Forest Functional Level? Never.

Best Practices

What can be done prior to making this change to ensure that you have as few issues as possible? Actually, there are some best practices here that you can follow:

1. Verify that all DCs in the domain are, at a minimum, at the OS version to which you will raise the functional level. Yes… I know this sounds obvious, but you’d be surprised. What about that DC that you decommissioned but for which you failed to perform metadata cleanup? Yes, this does happen.
Another good one that is not so obvious is the Lost and Found container in the Configuration container. Is there an NTDS Settings object in there for some downlevel DC? If so, that will block raising the Domain Functional Level, so you’d better clean that up.

2. Verify that Active Directory is replicating properly to all DCs. The Domain and Forest Functional Levels are essentially just attributes in Active Directory. The Domain Functional Level for all domains must be properly replicated before you’ll be able to raise the Forest Functional level. This practice also addresses the question of how long one should wait to raise the Forest Functional Level after you’ve raised the Domain Functional Level for all the domains in the forest. Well…what is your end-to-end replication latency? How long does it take a change to replicate to all the DCs in the forest? Well, there’s your answer.

Best practices are covered in the following article:

322692 How to raise Active Directory domain and forest functional levels
http://support.microsoft.com/default.aspx?scid=kb;EN-US;322692

There, you’ll find some tools you can use to properly inventory your DCs, and validate your end-to-end replication.

Update: Woo, we found an app that breaks! It has a hotfix though (thanks Paolo!). Make sure you install this everywhere if you are using .NET 3.5 applications that implement the DomainMode enumeration function.

FIX: "The requested mode is invalid" error message when you run a managed application that uses the .NET Framework 3.5 SP1 or an earlier version to access a Windows Server 2008 R2 domain or forest  
http://support.microsoft.com/kb/2260240

Conclusion

To summarize, the Domain or Forest Functional Levels are flags that tell Active Directory and other Windows components that all DCs in the domain or forest are at a certain minimal level. When that occurs, new features that require a minimum OS on all DCs are enabled and can be leveraged by the Administrator. Older functionality is still supported so any applications or services that used those functions will continue to work as before -- queries will be answered, domain or forest trusts will still be valid, and all should remain right with the world. This projection is supported by over eleven years of customer issues, not one of which involves a case where changing the Domain or Forest Functional Level was directly responsible as the root cause of any issue. In fact, there are only cases of a Domain or Forest Functional Level increase failing because the prerequisites had not been met; overwhelmingly, these cases end with the customer's Active Directory being successfully upgraded.

If you want to read more about Domain or Forest Functional Levels, review the following documentation:

What Are Active Directory Functional Levels?
http://technet.microsoft.com/en-us/library/cc787290(WS.10).aspx

Functional Levels Background Information
http://technet.microsoft.com/en-us/library/cc738038(WS.10).aspx

Jonathan “Con-Function Junction” Stephens


AskDS is 12,614,400,000,000,000 shakes old


It’s been four years and 591 posts since AskDS reached critical mass. You’d hope our party would look like this: 

image

But it’s more likely to be:

image

Without you, we’d be another of those sites that glow red hot, go supernova, then collapse into a white dwarf. We really appreciate your comments, questions, and occasional attaboys. Hopefully we’re good for another year of insightful commentary.

Thanks readers.

The AskDS Contributors

Improved Group Policy Preference Targeting by Computer Group Membership


Hello AskDS readers, it's Mike again talking about Group Policy Preference targeting items. I posted an article in June entitled Targeting Group Policy Preferences by Container, not by Group. This post highlighted the common problems many people encounter when targeting preferences items based on a computer's group membership, why the problem occurs, and some workarounds.

Today, I'd like to introduce a hotfix released by Microsoft that improves targeting preference items by computer group membership. The behavior before the hotfix potentially resulted in slow computer group policy application. The slowness was caused by the way Security Group targeting applies against a computer account. The targeting item makes multiple round trips to a domain controller to determine group memberships (including nested groups). The slowness is more significant when the computer applying the targeting item does not have a local domain controller and must use a domain controller across a WAN link.

You can download the hotfix for Windows 7 and Windows Server 2008 R2 through Microsoft Knowledge Base article 2561285. This hotfix changes how the Security Group targeting item calculates computer group membership. During policy application, the targeting item requests a copy of the computer's authentication token. This token is mostly identical to the token created during logon, which means it contains a list of security identifiers (SIDs) for every group of which the computer is a member, including nested groups. The targeting item performs the configured comparison against this list of SIDs in the token, rather than making multiple LDAP calls to a domain controller. This aligns the behavior of computer security group targeting with that of user security group targeting, and should improve the performance of security group targeting.

Mike "Try, Try, Try Again" Stephens

Cluster and Stale Computer Accounts


Hi, Mike here again. Today, I want to write about a common administrative task that can lead to disaster: removing stale computer accounts from Active Directory.

Removing stale computer accounts is simply good hygiene-- it’s the brushing and flossing of Active Directory. Like tartar, computer accounts have the tendency to build up until they become a problem (difficult to identify and remove, and can lead to lengthy backup times).

Oops… my bad

Many environments separate administrative roles. The Active Directory administrator is not the Cluster Administrator. Each role holder performs their duties in a somewhat isolated manner-- the Cluster admins do their thing and the AD admins do theirs. The AD admin cares about removing stale computer accounts. The cluster admin does not… until the AD admin accidentally deletes a computer account associated with a functioning Failover Cluster because it looks like a stale account.

Unexpected deletion of Cluster Name Object (CNO) or Virtual computer Object (VCO) is one of the top issues worked by our engineers that support Clustering and High-Availability. Everyone does their job and boom-- Clustered Servers stop working because CNOs or the VCOs are missing. What to do?

What's wrong here

I'll paraphrase an article posted on the Clustering and High-Availability TechNet blog that solves this scenario. Typically, domain admins key on two different attributes to determine if a computer account is stale: pwdLastSet and lastLogonTimeStamp. Domains that are not configured to a Windows Server 2003 Domain Functional Level use the pwdLastSet attribute. However, domains configured to a Windows Server 2003 Domain Functional Level or later should use the lastLogonTimeStamp attribute. What you may not know is that a Failover Cluster (CNO and VCO) does not update the lastLogonTimeStamp the same way as a real computer.

Cluster updates the lastLogonTimeStamp when it brings a clustered network name resource online. Once online, it caches the authentication token. Therefore, a clustered network name resource that has been working in production for months will never update the lastLogonTimeStamp. This appears as a stale computer account to the AD administrator. Being a good citizen, the AD administrator deletes the stale computer account that has not logged on in months. Oops.

The Solution

There are few things that you can do to avoid this situation.

  • Use the servicePrincipalName attribute in addition to the lastLogonTimeStamp attribute when determining stale computer accounts (see the sketch after this list). If any variation of MSClusterVirtualServer appears in this attribute, then leave the computer account alone and consult with the cluster administrator.
  • Encourage the Cluster administrator to use -CleanupAD to delete the computer accounts they are not using after they destroy a cluster.
  • If you are using Windows Server 2008 R2, then consider implementing the Active Directory Recycle Bin. The concept is identical to the recycle bin for the file system, but for AD objects. The following ASKDS blogs can help you evaluate if AD Recycle Bin is a good option for your environment.
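Here's a sketch of what that combined check might look like with the AD module; the 90-day threshold is an arbitrary example, not a recommendation:

Import-Module ActiveDirectory
$cutoff = (Get-Date).AddDays(-90).ToFileTime()   # lastLogonTimestamp is stored as a FILETIME integer
Get-ADComputer -Filter { lastLogonTimestamp -lt $cutoff } -Properties lastLogonTimestamp, servicePrincipalName |
    Where-Object { -not ($_.servicePrincipalName -match 'MSClusterVirtualServer') }   # leave cluster CNOs/VCOs alone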

Mike "Four out of Five AD admins recommend ASKDS" Stephens

Friday Mail Sack: Super Slo-Mo Edition


Hello folks, Ned here again with another Mail Sack. Before I get rolling though, a quick public service announcement:

Plenty of you have downloaded the Windows 8 Developer Preview and are knee-deep in the new goo. We really want your feedback, so if you have comments, please use one of the following avenues:

I recommend sticking to IT Pro features; the consumer side’s covered and the biggest value is your Administrator experience. The NDA is not off - I still cannot comment on the future of Windows 8 or tell you if we already have plans to do X with Y. This is a one-way channel from you to us (to the developers).

Cool? On to the sack. This week we discuss:

Shake it.

Question

We were chatting here about password synchronization tools that capture password changes on a DC and send the clear text password to some third party app. I consider that a security risk...but then someone asked me how the password is transmitted between a domain member workstation and a domain controller when the user performs a normal password change operation (CTRL+ALT+DEL and Change Password). I suppose the client uses some RPC connection, but it would be great if you could point me to a reference.

Answer

Windows can change passwords many ways - it depends on the OS and the component in question.

1. For the specific case of using CTRL+ALT+DEL because your password has expired or you just felt like changing your password:

If you are using a modern OS like Windows 7 with AD, the computer uses the Kerberos protocol end to end. This starts with a normal AS_REQ logon, but to a special service principal name of kadmin/changepw, as described in http://www.ietf.org/rfc/rfc3244.txt.

The computer first contacts a KDC over port 88, then communicates over port 464 to send along the special AP_REQ and AP_REP. You are still using Kerberos cryptography and sending an encrypted payload containing a KRB_PRIV message with the password. Therefore, to get to the password, you have to defeat Kerberos cryptography itself, which means defeating the crypto and defeating the key derived from the cryptographic hash of the user's original password. Which has never happened in the history of Kerberos.

image

The parsing of this kpasswd traffic is currently broken in NetMon's latest public parsers, but even when you parse it in WireShark, all you can see is the encryption type and a payload of encrypted goo. For example, here is that Windows 7 client talking to a Windows Server 2008 R2 DC, which means AES-256:

image
Aka: Insane-O-Cryption ™

On the other hand, if using a crusty OS like Windows XP, you end up using a legacy password mechanism that worked with NT 4.0 – in this case SamrUnicodeChangePasswordUser2 (http://msdn.microsoft.com/en-us/library/cc245708(v=PROT.10).aspx).

XP also supports the Kerberos change mechanism, but by default uses NTLM with CTRL+ALT+DEL password changes. Witness:

image

This uses “RPC over SMB with Named Pipes” with RPC packet privacy. You are using NTLM v2 by default (unless you set LMCompatibility unwisely) and you are still double-protected (the payload and packets), which makes it relatively safe. Definitely not as safe as Win7 though – just another reason to move forward.

image

You can disable NTLM in the domain if you have Win2008 R2 DCs and XP is smart enough to switch to using Kerberos here:

image

... but you are likely to break many other apps. Better to get rid of Windows XP.

2. A lot of administrative code uses SamrSetInformationUser2, which does not require knowing the user’s current password (http://msdn.microsoft.com/en-us/library/cc245793(v=PROT.10).aspx). For example, when you use NET USER to change a domain user’s password:

image

This invokes SamrSetInformationUser2 to set Internal4InformationNew data:

image

So, doubly-protected (a cryptographically generated, key signed hash covered by an encrypted payload). This is also “RPC over SMB using Named Pipes”

image

The crypto for the encrypted payload is derived from a key signed using the underlying authentication protocol, seen from a previous session setup frame (negotiated as Kerberos in this case):

image

3. The legacy mechanisms to change a user password are NetUserChangePassword (http://msdn.microsoft.com/en-us/library/windows/desktop/aa370650(v=vs.85).aspx) and IADsUser::ChangePassword (http://msdn.microsoft.com/en-us/library/windows/desktop/aa746341(v=vs.85).aspx)

4. A local user password change usually involves SamrUnicodeChangePasswordUser2, SamrChangePasswordUser, or SamrOemChangePasswordUser2 (http://msdn.microsoft.com/en-us/library/cc245705(v=PROT.10).aspx).

There are other ways but those are mostly corner-case.
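As an aside, the IADsUser::ChangePassword path from item 3 is easy to see from PowerShell via ADSI; a sketch with made-up domain, user, and passwords:

$user = [ADSI]"WinNT://CONTOSO/someguy,user"          # WinNT provider works for local or domain accounts
$user.ChangePassword("OldP@ssw0rd", "NewP@ssw0rd1")   # requires knowing the current password, unlike SamrSetInformationUser2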

Note: In my examples, I am using the most up to date Netmon 3.4 parsers from http://nmparsers.codeplex.com/.

Question

If I try to remove the AD Domain Services role using ServerManager.msc, it blocks me with this message:

image

But if I remove the role using Dism.exe, it lets me continue:

image

This completely hoses the DC and it no longer boots normally. Is this a bug?

And - hypothetically speaking, of course - how would I fix this DC?

Answer

Don’t do that. :)

Not a bug, this is expected behavior. Dism.exe is a pure servicing tool; it knows nothing more of DCs than the Format command does. ServerManager and servermanagercmd.exe are the tools that know what they are doing.
Update: Although as Artem points out in the comments, we want you to use the Server Manager PowerShell and not servermanagercmd, which is on its way out.

To fix your server, pick one:

  • Boot it into DS Repair Mode with F8 and restore your system state non-authoritatively from backup (you can also perform a bare metal restore if you have that capability - no functional difference in this case). If you do not have a backup and this is your only DC, update your résumé.
  • Boot it into DS Repair Mode with F8 and use dcpromo /forceremoval to finish what you started. Then perform metadata cleanup. Then go stand in the corner and think about what you did, young man!

Question

We are getting Event ID 4740s (account lockout) for the AD Guest account throughout the day, which is raising alerts in our audit system. The Guest account is disabled, expired, and even renamed. Yet various clients keep locking out the account and creating the 4740 event. I believe I've traced it back to occasional attempts by a local account to authenticate to the domain. Any thoughts?

Answer

You'll see that when someone has set a complex password on the Guest account, using NET USER for example, rather than leaving it null as it is by default. The clients never know what the Guest password is; they always assume it's null like the default, so if you set a password on it, they will fail. Fail enough and you lock the account out (unless you turn that policy off and replace it with intrusion detection and two-factor auth). Set it back to null and you should be ok. As you suspected, there are a number of times when Guest is used as part of a "well, let's try that" algorithm:

Network access validation algorithms and examples for Windows Server 2003, Windows XP, and Windows 2000

To set it back, just use the Reset Password menu in Dsa.msc on the Guest account, making sure not to enter a password, and click OK. You may have to adjust your domain password policy temporarily to allow this.

As for why it's "locking out" even though it's disabled and renamed:

  • It has a well-known SID (S-1-5-21-domain-501) so renaming doesn’t really do anything except tick a checkbox on some auditor's clipboard
  • Disabled accounts can still lock out if you keep sending bad passwords to them. Usually no one notices though, and most people are more concerned about the "account is disabled" message they see first.
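If you want to track down the offending clients, the 4740 events name the calling computer. A quick sketch to pull them (run it where the lockouts are logged - typically the PDC emulator - and adjust the match since you renamed the account):

Get-WinEvent -FilterHashtable @{LogName='Security'; Id=4740} -MaxEvents 100 |
    Where-Object { $_.Message -match 'Guest' } |
    Select-Object TimeCreated, Message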

Question

What are the steps to change the "User Account" password set when the Network Device Enrollment Service (NDES) is installed?

Answer

When you first install the Network Device Enrollment Service (NDES), you have the option of setting the identity under which the application pool runs to the default application pool identity or to a specific user account. I assume that you selected the latter. The process to change the password for this user account requires two steps -- with 27 parts (not really…).

  1. First, you must reset the user account's password in Active Directory Users and Computers.

  2. Next, you must change the password configured in the application pool Advanced Settings on the NDES server.

a. In IIS manager, expand the server name node.

b. Click on Application Pools.

c. On the right, locate and highlight the SCEP application pool.

image

d. In the Action pane on the right, click on Advanced Settings....

e. Under Process Model click on Identity, then click on the … button.

image

f. In the Application Pool Identity dialog box, select Custom account and then click on Set….

g. Enter the custom application pool account name, and then set and confirm the password. Click Ok, when finished.

image

h. Click Ok, and then click Ok again.

i. Back on the Application Pools page, verify that SCEP is still highlighted. In the Action pane on the right, click on Recycle….

j. You are done.
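If you prefer to script step 2, appcmd can set the same application pool properties - a sketch, assuming the pool is named SCEP as above and using a hypothetical account name:

%windir%\system32\inetsrv\appcmd.exe set apppool "SCEP" /processModel.identityType:SpecificUser /processModel.userName:"CONTOSO\svc-ndes" /processModel.password:"NewP@ssw0rd"
%windir%\system32\inetsrv\appcmd.exe recycle apppool /apppool.name:"SCEP"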

Normally, you would have to be concerned about simply resetting the password for any service account to which digital certificates have been assigned, because resetting the password can result in the account losing access to the private keys associated with those certificates. In the case of NDES, however, the certificates used by the NDES service are actually stored in the local computer's Personal store, and the custom application pool identity only has read access to those keys. Resetting the password of the custom application pool account will have no impact on the master key used to protect the NDES private keys.

[Courtesy of Jonathan, naturally - Neditor]

Question

If I have only one domain in my forest, do I need a Global Catalog? Plenty of documents imply this is the case.

Answer

All those documents saying "multi-domain only" are mistaken. You need GCs - even in a single-domain forest - for the following:

(Update: Correction on single-domain forest logon made, thanks for catching that Yusuf! I also added a few more breakage scenarios)

  • Perversely, if you have enabled IgnoreGCFailures (http://support.microsoft.com/kb/241789): turning it on removes universal groups from the user's security token if there is no GC, meaning users will log on but not be able to access resources they previously accessed fine.
  • If your users logon with UPNs and try to change their password (they can still logon in a single domain forest with UPN or NetBiosDomain\SamAccountName style logons).
  • Even if you use Universal Group Membership Caching to avoid the need for a GC in a site, that DC needs a GC to update the cache.
  • MS Exchange is deployed (no version of Exchange will even start its services without a GC).
  • Using the built-in Find in the shell to search AD for published shares, published DFS links, or published printers, or using any object picker dialog that offers the "entire directory" option, will fail.
  • DPM agent installation will fail.
  • AD Web Services (aka AD Management Gateway) will fail.
  • CRM searches will fail.
  • Probably other third parties of which I'm not aware.

We stopped recommending that customers use only a handful of GCs years ago - if you get an ADRAP or call MS support, we will recommend you make all DCs GCs unless you have an excellent reason not to. Our BPA tool states that you should have at least one GC per AD site: http://technet.microsoft.com/en-us/library/dd723676(WS.10).aspx.
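A quick way to inventory which of your DCs are already GCs, if you have the AD PowerShell module handy:

Get-ADDomainController -Filter { IsGlobalCatalog -eq $true } | Select-Object Name, Site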

Question

If I use DFSR to replicate a folder containing symbolic links, will this replicate the source files or the actual symlinks? The DFSR FAQ says symlink replication is supported under certain circumstances.

Answer

The symlink replicates; however, the underlying data does not replicate just because there is a symlink. If the data is not stored within the RF, you end up with a replicated symlink to nowhere:

Server 1, replicating a folder called c:\unfiltersub. Note how the symlink points to a file that is not in the scope of replication:

image

Server 2, the symlink has replicated - but naturally, it points to an un-replicated file. Boom:

image

If the source data is itself replicated, you’re fine. There’s no real way to guarantee that though, except preventing users from creating files outside the RF by using permissions and FSRM screens. If your end users can only access the data through a share, they are in good shape. I'd imagine they are not the ones creating symlinks though. ;-)
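If you want to reproduce the broken state above in a lab, it only takes a file symlink whose target lives outside the replicated folder (paths are illustrative):

mklink c:\unfiltersub\budget-link.xlsx c:\users\ned\documents\budget.xlsx

The symlink lands on the other member; the spreadsheet does not.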

Question

I read your post on career development. There are many memory techniques and I know everyone is different, but what do you use?

[A number of folks asked this question - Neditor]

Answer

When I was younger, it just worked - if I was interested in it, I remembered it. As I get older and burn more brain cells though, I find that my best memory techniques are:

  • Periodic skim and refresh. When I have learned something through deep reading and hands-on work, I try to skim through the core topics at least once a year. For example, I force myself to scan the diagrams in all the Win2003 Technical Reference A-Z sections, and if I can’t remember what a diagram is saying, I make myself read that section in detail. I don’t let myself get too stale on anything and try to jog it often.
  • Mix up the media. When learning a topic, I read, find illustrations, and watch movies and demos. When there are no illustrations, I use Visio to make them for myself based on reading. When there are no movies, I make myself demo the topics. My brain seems to retain more info when I hit it with different styles on the same subject.
  • I teach and publicly write about things a lot. Nothing hones your memory like trying to share info with strangers, as the last thing I want is to look like a dope. It makes me prepare and check my work carefully, and that natural repetition – rather than forced “read flash cards”-style repetition – really works for me. My brain runs best under pressure.
  • Your body is not a temple (of Gozer worshipers). Something of a cliché, but I gobble vitamins, eat plenty of brain foods, and work out at least 30 minutes every morning.

I hope this helps and isn’t too general. It’s just what works for me.

Other Stuff

Have $150,000 to spend on a camera, a clever director who likes FPS gaming, and some very fit paintballers? Go make a movie better than this. Watch it multiple times.

image
Once for the chat log alone

Best all-around coverage of the Frankfurt Auto Show here, thanks to Jalopnik.

image
Want!

The supposed 10 Coolest Death Scenes in Science Fiction History. But any list not including Hudson’s last moments in Aliens is fail.

If it’s true… holy crap! Ok, maybe it wasn’t true. Wait, HOLY CRAP!

So many awesome things combined.

Finally, my new favorite time waster is Retronaut. How can you not like a website with things like “Celebrities as Russian Generals”.

image
No, really.

Have a nice weekend folks,

- Ned “Oh you want some of this?!?!” Pyle

Friday Mail Sack: Guest Reply Edition


Hi folks, Ned here again. This week we talk:

Let's gang up.

Question

We plan to migrate our Certificate Authority from single-tier online Enterprise Root to two-tier PKI. We have an existing smart card infrastructure. TechNet docs don’t really speak to this scenario in much detail.

1. Does migration to a 2-tier CA structure require any customization?

2. Can I keep the old CA?

3. Can I create a new subordinate CA under the existing CA and take the existing CA offline?

Answer

[Provided by Jonathan Stephens, the Public Keymaster- Editor]

We covered this topic in a blog post, and it should cover many of your questions: http://blogs.technet.com/b/askds/archive/2010/08/23/moving-your-organization-from-a-single-microsoft-ca-to-a-microsoft-recommended-pki.aspx.

Aside from that post, you will also find the following information helpful: http://blogs.technet.com/b/pki/archive/2010/06/19/design-considerations-before-building-a-two-tier-pki-infrastructure.aspx.

To your questions:

  1. While you can migrate an online Enterprise Root CA to an offline Standalone Root CA, that probably isn't the best decision in this case with regard to security. Your current CA has issued all of your smart card logon certificates, which may have been fine when that was all you needed, but it certainly doesn't comply with best practices for a secure PKI. The root CA of any PKI should be long-lived (20 years, for example) and should only issue certificates to subordinate CAs. In a 2-tier hierarchy, the second tier of CAs should have much shorter validity periods (5 years) and is responsible for issuing certificates to end entities. In your case, I'd strongly consider setting up a new PKI and migrating your organization over to it. It is more work at the outset, but it is a better decision long term.
  2. You can keep the currently issued certificates working by publishing a final, long-lived CRL from the old CA. This is covered in the first blog post above. This would allow you to slowly migrate your users to smart card logon certificates issued by the new PKI as the old certificates expired. You would also need to continue to publish the old root CA certificate in the AD and in the Enterprise NTAuth store. You can see these stores using the Enterprise PKI snap-in: right-click on Enterprise PKI and select Manage AD Containers. The old root CA certificate should be listed in the NTAuthCertificates tab, and in the Certificate Authorities Container tab. Uninstalling the old CA will remove these certificates; you'll need to add them back.
  3. You can't take an Enterprise CA offline. An Enterprise CA requires access to Active Directory in order to function. You can migrate an Enterprise CA to a Standalone CA and take that offline, but, as I've said before, that really isn't the best option in this case.
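On that last point: if uninstalling the old CA strips its certificate from those AD stores, certutil can put it back - a sketch with a hypothetical export file name:

certutil -dspublish -f OldRootCA.cer RootCA
certutil -dspublish -f OldRootCA.cer NTAuthCA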

Question

Are there any know issues with P2Ving ADAM/AD LDS servers?

Answer

[Provided by Kim Nichols, our resident ADLDS guru'ette - Editor]

No problems as far as we know. The same rules apply as P2V’ing DCs or other roles; make sure you clean up old drivers and decommission the physicals as soon as you are reasonably confident the virtual is working. Never let them run simultaneously. All the “I should have had a V-8” stuff.

Considering how simple it is to create an ADLDS replica, it might be faster and "cleaner" to create a new virtual machine, install and replicate ADLDS to it, then rename the guest and throw away the old physical; if ADLDS was its only role, naturally.

Question

[Provided by Fabian Müller, clever German PFE - Editor]

When using production delegation in AGPM, we can grant permissions for editing Group Policy Objects in the production environment. But these permissions are written to all deployed GPOs, not to specific ones. GPMC makes it easy to set “READ” and “APPLY” permissions on a GPO, but I cannot find a security filtering switch in AGPM. So how can we manage security filtering on group policies without setting the same ACL on all deployed policies?

Answer

OK, granting “READ” and “APPLY” permissions - that is, managing security filtering - is not that obvious to find in AGPM. Do it like this in the AGPM Change Control panel:

  • Check out the relevant Group Policy Object, providing a brief overview of the planned changes in the “comments” window, e.g. “Add important security filtering ACLs for group XYZ, dude!”
  • Edit the checked-out GPO

At the top of the Group Policy Management Editor, click “Action” –> “Properties”:

image

  • Switch to the “Security” tab and configure your security filtering settings:

image

  • Close the Group Policy Management Editor and check the policy back in (again with a good comment)
  • Once everything is done, you can safely “Deploy” the just-edited GPO – the security filter is now in place in production:

image

Note 1: Be aware that you won’t find any information regarding the security filtering change in the AGPM history of the edited Group Policy Object. There is nothing in the HTML reports that refers to security filtering changes. That’s why you should provide a good explanation of your changes during the “check-in” and “check-out” phases:

image

image

Note 2: Be careful with “DENY” ACEs using AGPM – they might get removed. See the following blog for more information on that topic: http://blogs.technet.com/b/grouppolicy/archive/2008/10/27/agpm-3-0-doesnt-preserve-all-the-deny-aces.aspx

Question

I have one Windows Server 2003 IIS machine with two web applications, each in its own application pool. How can I register SPNs for each application?

Answer

[This one courtesy of Rob Greene, the Abominable Authman - Editor]

There are a couple of options for you here.

  1. You could address each web site on the same server with different host names.  Then you can add the specific HTTP SPN to each application pool account as needed.
  2. You could address each web site with a unique port assignment on the web server.  Then you can add the specific HTTP SPN with the port attached like http/myweb.contoso.com:88
  3. You could use the same account to run all the application pool accounts on the same web server.

NOTE: If you choose option 1 or 2, you have to be careful about Internet Explorer behaviors. If you choose a unique host name per web site, you will need to make sure to use HOST (A) records in DNS, or to put a registry key in place on all workstations if you choose CNAME records. If you choose a unique port for each web site, you will need to put a registry key in place on all workstations so that they send the port number in the TGS SPN request.

http://blogs.technet.com/b/askds/archive/2009/06/22/internet-explorer-behaviors-with-kerberos-authentication.aspx
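For options 1 and 2, the actual registration is just setspn against whichever account runs each pool - a sketch with hypothetical names (use setspn -A with the Windows Server 2003 support tools; -S, which checks for duplicates first, came later):

setspn -S http/app1.contoso.com CORP\AppPool1Acct
setspn -S http/app2.contoso.com:8080 CORP\AppPool2Acct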

Question

Comparing AGPM controlled GPOs within the same domain is no problem at all – but if the AGPM server serves more than one domain, how can I compare GPOs that are hosted in different domains using AGPM difference report?

Answer

[Again from Fabian, who was really on a roll last week - Editor]

Since AGPM 4.0 we provide the ability to export and import Group Policy Objects using AGPM. What you have to do is:

  • To export one of the GPOs from domain 1…:

image

  • … and import the *.cab into domain 2 using the AGPM GPO import wizard (right-click on an empty area on the AGPM Contents –> Controlled tab and select “New Controlled GPO…”):

image

image

  • Now you can simply compare those objects using difference report:

image

[Woo, finally some from Ned - Editor]

Question

When I use the Windows 7 (RSAT) version of AD Users and Computers to connect to certain domains, I get the error "unknown user name or bad password". However, when I use the XP/2003 adminpak version, there are no errors for the same domain. There's no way to enter a domain or password.

Answer

ADUC in Vista/2008/7/R2 does some group membership and privilege checking when it starts that the older ADUC never did. You’ll get the logon failure message for any domain you are not a domain admin in, for example. The legacy ADUC is probably broken for that account as well – it’s just not telling you.

image

Question

I have 2 servers replicating with DFSR, and the network cable between them is disconnected. I delete a file on Server1, while the equivalent file on Server2 is modified. When the cable is re-connected, what is the expected behavior?

Answer

Last updater wins, even when the update is a modification of an ostensibly deleted file. If the file was deleted first on server 1 and modified later on server 2, it replicates back to server 1 with the modifications once the network reconnects. If the deletion had happened later than the modification, that “last write” would win, and the file would be deleted from the other server once the network resumed.

More info on DFSR conflict handling here: http://blogs.technet.com/b/askds/archive/2010/01/05/understanding-dfsr-conflict-algorithms-and-doing-something-about-conflicts.aspx

Question

Is there any automatic way to delete stale user or computer accounts? Something you turn on in AD?

Answer

Nope, not automatically; you have to create a solution that detects account age and then disables or deletes the stale accounts. This is a very dangerous operation - make sure you understand what you are getting yourself into.
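If you do decide to build one, start with reporting only - a sketch using the AD module (the 90-day threshold is illustrative):

Search-ADAccount -AccountInactive -TimeSpan 90.00:00:00 -UsersOnly |
    Select-Object Name, LastLogonDate

# Only after careful review of that output would you pipe to Disable-ADAccount,
# and even then, try it with -WhatIf first.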

Question

Whenever I try to use the PowerShell cmdlet Get-Acl against an object in AD, I always get an error like "Cannot find path ou=xxx,dc=xxx,dc=xxx because it does not exist". But it does!

Answer

After you import the ActiveDirectory module, but before you run your commands, run:

CD AD:

Get-Acl won’t work until you change to the magical “active directory drive”.
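In other words, a working sequence looks like this (the OU is hypothetical):

Import-Module ActiveDirectory
cd AD:
Get-Acl "OU=Sales,DC=contoso,DC=com" | Format-List Owner, Access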

Question

I've read the Performance Tuning Guidelines for Windows Server, and I wonder if all the SMB server tuning parameters (AsyncCredits, MinCredits, MaxCredits, etc.) also work (or help) for DFSR. Also, do you know what the limit is for SMB Asynchronous Credits? The document doesn’t say.

Answer

Nope, they won’t have any effect on DFSR – it does not use SMB to replicate files. SMB is only used by DFSMGMT.MSC if you ask it to create a replicated folder on another server during RF setup. More info here:

Configuring DFSR to a Static Port - The rest of the story - http://blogs.technet.com/b/askds/archive/2009/07/16/configuring-dfsr-to-a-static-port-the-rest-of-the-story.aspx

That AsynchronousCredits SMB value does not have a true maximum, other than the fact that it is a DWORD and cannot exceed 4,294,967,295 (i.e. 0xffffffff). Its default value on Windows Server 2008 and 2008 R2 is 512; on Vista/7, it's 64.

HOWEVER!

As KB938475 (http://support.microsoft.com/kb/938475) points out, adjusting these defaults comes at the cost of paged pool (Kernel) memory. If you were to increase these values too high, you would eventually run out of paged pool and then perhaps hang or crash your file servers. So don't go crazy here.

There is no "right" value to set - it depends on your installed memory, if you are using 32-bit versus 64-bit (if 32-bit, I would not touch this value at all), the number of clients you have connecting, their usage patterns, etc. I recommend increasing this in small doses and testing the performance - for example, doubling it to 1024 would be a fairly prudent test to start.
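For reference, a sketch of such a test change - this assumes the LanmanServer\Parameters location documented in the tuning guide and KB938475, and that the new value is picked up when the Server service restarts:

Set-ItemProperty -Path 'HKLM:\SYSTEM\CurrentControlSet\Services\LanmanServer\Parameters' -Name AsynchronousCredits -Value 1024 -Type DWord

Measure before and after; if you see no improvement, put the default back.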

Other Stuff

Happy Birthday to all US Marines out there, past and present. I hope you're using Veterans Day to sleep off the hangover. I always assumed that's why they made it November 11th, not that whole WW1 thing.

Also, happy anniversary to Jonathan, who has been a Microsoft employee for 15 years. In keeping with tradition, he brought in 15 pounds of M&Ms for the floor, which, in case you’re wondering, fills a salad bowl. Which, around here, means:

image

Two of the most awesome things ever – combined:

A great baseball story about Lou Gehrig, Kurt Russell, and a historic bat.

Off to play some Battlefield 3. No wait, Skyrim. Ah crap, I mean Call of Duty MW3. And I need to hurry up as Arkham City is coming. It's a good time to be a PC gamer. Or Xbox, if you're into that sorta thing.

 

Have a nice weekend folks,

 - Ned "and Jonathan and Kim and Fabian and Rob" Pyle

Friday Mail Sack: Best Post This Year Edition


Hi folks, Ned here and welcoming you to 2012 with a new Friday Mail Sack. Catching up from our holiday hiatus, today we talk about:

So put down that nicotine gum and get to reading!

Question

Is there an "official" stance on removing built-in admin shares (C$, ADMIN$, etc.) in Windows? I’m not sure whether this would make things more secure or not. Larry Osterman wrote a nice article on their origins but doesn’t give any advice.

Answer

The official stance is from the KB that states how to do it:

Generally, Microsoft recommends that you do not modify these special shared resources.

Even better, here are many things that will break if you do this:

Overview of problems that may occur when administrative shares are missing
http://support.microsoft.com/default.aspx?scid=kb;EN-US;842715

That’s not a complete list; it wasn’t updated for Vista/2008 and later. The breakage is so extensive, though, that there’s frankly no point in updating it. Removing these shares does not increase security, as only administrators can use those shares, and you cannot prevent administrators from putting them back or creating equivalent custom shares.

This is one of those “don’t do it just because you can” customizations.

Question

The Windows PowerShell Get-ADDomainController cmdlet finds DCs, but not much actual attribute data from them. The examples on TechNet are not great. How do I get it to return useful info?

Answer

You have to use another cmdlet in tandem, without pipelining: Get-ADComputer. The Get-ADDomainController cmdlet is good mainly for searching, and Get-ADComputer does not accept pipeline input from it. Instead, you use a pseudo “nested function” to first find the PDC, then get data about that DC. For example (this is all one command, wrapped):

get-adcomputer (get-addomaincontroller -Discover -Service "PrimaryDC").name -property * | format-list operatingsystem,operatingsystemservicepack

When you run this, PowerShell first processes the command within the parentheses, which finds the PDC. Then it runs get-adcomputer, using the “Name” property returned by get-addomaincontroller. Then it passes the results through the pipeline to be formatted. So it’s 1-2-3.


Voila. Here I return the OS of the PDC, all without having any idea which server actually holds that role:

clip_image002[6]

Moreover, before the Internet clubs me like a baby seal: yes, a more efficient way to return data is to ensure that the –property list contains only those attributes desired:

image

Get-ADDomainController can find all sorts of interesting things via its –service argument:

PrimaryDC
GlobalCatalog
KDC
TimeService
ReliableTimeService
ADWS

The Get-ADDomain cmdlet can also find FSMO role holders and other big-picture domain stuff - for example, the RID Master you need to monitor.
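For example:

Get-ADDomain | Format-List PDCEmulator, RIDMaster, InfrastructureMaster
Get-ADForest | Format-List SchemaMaster, DomainNamingMaster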

Question

I know about Kerberos “token bloat” with user accounts that are members of too many groups. Does this also affect computers added to too many groups? What would be some practical effects of that? We want to use a lot of them in the near future for some application … stuff.

Answer

Yes, things will break. To demonstrate, I used PowerShell to create 2,000 groups in my domain and added a computer named “7-01” to them:

image
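If you want to reproduce the test, the bulk setup is a few lines of AD PowerShell - a sketch with illustrative group names and OU:

1..2000 | ForEach-Object {
    New-ADGroup -Name "TokenBloat$_" -GroupScope Global -Path "OU=Test,DC=contoso,DC=com"
    Add-ADGroupMember -Identity "TokenBloat$_" -Members (Get-ADComputer "7-01")
}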

I then restarted the 7-01 computer. Uh oh, the System event log is un-pleased. At this point, 7-01 is no longer applying computer group policy, getting startup scripts, or allowing any of its services to log on remotely to DCs:

image 

Oh, and check out this gem:

image

I’m sure no one will go on a wild goose chase after seeing that message. Applications will be freaking out even more, likely with the oh-so-helpful error 0x80090350:

“The system detected a possible attempt to compromise security. Please ensure that you can contact the server that authenticated you.”

Don’t do it. MaxTokenSize is probably in your future if you do, and it has limits that you cannot design your way out of. IT uniqueness is bad.

Question

We have XP systems using two partitions (C: and D:) migrating to Windows 7 with USMT. The OS is on C: and the user profiles are on D:. We’ll use that D: partition to hold the USMT store. After migration, we’ll remove the second partition and expand the first partition to use the space freed up by the second partition.

When restoring via loadstate, will the user profiles end up on C: or on D:? If the profiles end up on D:, we will not be able to delete the second partition, obviously, and we want to stop doing that regardless.

Answer

You don’t have to do anything; it just works. Because the new profile destination is on C, USMT just slots everything in there automagically :). The profiles will be on C and nothing will be on D except the store itself and any non-profile folders*:

clip_image001
XP, before migrating

clip_image001[5]
Win7, after migrating

If users have any non-profile folders on D, that will require a custom rerouting xml to ensure they are moved to C during loadstate and not obliterated when D is deleted later. Or just add a MOVE line to whatever DISKPART script you are using to expand the partition.

Question

Should we stop the DFSR service before performing a backup or restore?

Answer

Manually stopping the DFSR service is not recommended. When backing up using the DFSR VSS Writer – which is the only supported way – replication is stopped automatically, so there’s no reason to stop the service or manually change replication:

Event ID=1102
Severity=Informational
The DFS Replication service has temporarily stopped replication because another
application is performing a backup or restore operation. Replication will resume
after the backup or restore operation has finished.

Event ID=1104
Severity=Informational
The DFS Replication service successfully restarted replication after a backup
or restore operation.

Another bit of implied evidence – Windows Server Backup does not stop the service.

Stopping the DFSR service for extended periods leaves you open to the risk of a USN journal wrap. And what if someone or something decides that the stopped service is “bad” and starts it up in the middle of the backup? Probably nothing bad happens, but certainly nothing good. Why risk it?

Question

In an environment where AGPM controls all GPOs, what is the best practice when application setup routines make edits "under the hood" to GPOs, such as the Default Domain Controllers GPO? For example, Exchange setup makes changes to User Rights Assignment (SeSecurityPrivilege). Obviously, if the setup process makes such edits on the live GPO in SYSVOL, the changes will happen - only to have those critical edits lost and overwritten the next time an admin re-deploys with AGPM.

Answer

[via Fabian “Wunderbar” Müller  – Ned]

From my point of view:

1. The Default Domain and Default Domain Controllers Policies should be edited very rarely. Manual changes, as well as automated changes (e.g. by the mentioned Exchange setup), should be well known, and therefore the workaround in 2) should be feasible.

2. After those planned changes are performed, use “import from production” to bring the production GPO into the AGPM archive, so the production change is reflected in AGPM. Another way could be to periodically “import from production” the default policies, or to implement a manual / human process that requires the “import from production” procedure before a change to these policies is made using AGPM.

Not a perfect answer, but manageable.

Question

In testing the rerouting of folders, I took the example from TechNet and placed it in a separate custom.xml. When using this custom.xml along with the other defaults (migdocs.xml and migapp.xml unchanged), the EngineeringDrafts folder is copied to %CSIDL_DESKTOP%\EngineeringDrafts, but there’s also a copy at C:\EngineeringDrafts on the destination computer.

I assume this is not expected behavior.  Is there something I’m missing?

Answer

Expected behavior, pretty well hidden though:

http://technet.microsoft.com/en-us/library/dd560751(v=WS.10).aspx

If you have an <include> rule in one component and a <locationModify> rule in another component for the same file, the file will be migrated in both places. That is, it will be included based on the <include> rule and it will be migrated based on the <locationModify> rule

That original rerouting article could state this more plainly, I think. Hardly anyone uses this RelativeMove operation; it’s very expensive in disk space – one of those “you can, but you shouldn’t” capabilities of USMT. The first example also has an invalid character in it (the apostrophe in “user’s” on line 12, position 91 – argh!).

Don’t just comment out those areas in migdocs.xml though; you’d be turning off most of the data migration. Instead, create a copy of migdocs.xml and modify it to include your rerouting exceptions, then use that copy as your custom XML and stop including the factory migdocs.xml.

There’s an example attached to this blog post down at the bottom. Note the exclude in the System context and the include/modify in the user context:

image

image

Don’t just modify the existing migdocs.xml and keep using it un-renamed either; that becomes a versioning nightmare down the road.

Question

I'm reading up on CAPolicy.inf files, and it looks like there is an error in the documentation that keeps being copied around. TechNet lists RenewalValidityPeriod=Years and RenewalValidityPeriodUnits=20 under the "Windows Server 2003" sample. This is the opposite of the Windows 2000 sample, and intuitively the "PeriodUnits" should be something like "Years" or "Weeks", while the "Period" would be an integer value. I see this on AskDS here and here also.

Answer

[via Jonathan “scissor fingers” Stephens  – Ned]

You're right that the two settings seem like they should be reversed, but unfortunately this is not correct. All of the *Period values can be set to Minutes, Hours, Days, Weeks, Months or Years, while all of the *PeriodUnits values should be set to some integer.

Originally, the two types of values were intended to be exactly what one intuitively believes they should be -- *PeriodUnits was to be Day, Weeks, Months, etc. while *Period was to be the integer value. Unfortunately, the two were mixed up early in the development cycle for Windows 2000 and, once the error was discovered, it was really too late to fix what is ultimately a cosmetic problem. We just decided to document the correct values for each setting. So in actuality, it is the Windows 2000 documentation that is incorrect as it was written using the original specs and did not take the switch into account. I’ll get that fixed.
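So a correct Windows Server 2003-or-later CAPolicy.inf that gives a renewed root CA certificate a 20-year lifetime looks like this (the key length line is just illustrative):

[Version]
Signature="$Windows NT$"

[certsrv_server]
RenewalKeyLength=2048
RenewalValidityPeriod=Years
RenewalValidityPeriodUnits=20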

Question

Is there a way to control the number, verbosity, or contents of the DFSR cluster debug logs (DfsrClus_nnnnn.log and DfsrClus_nnnnn.log.gz in %windir%\debug)?

Answer

Nope, sorry. It’s all statically defined:

  • Severity = 5
  • Max log messages per log = 10000
  • Max number of log files = 999

Question

In your previous article, you say that any registry modifications should be completed with a resource restart (take the resource offline and bring it back online) instead of a direct service restart. However, the official whitepaper (on page 16) says that the CA service should be restarted by using "net stop certsvc && net start certsvc".

Also, I want to clarify clustered CA database backup and restore. Say a DB was damaged or destroyed, and I have a full backup of the CA DB. Before restoring, do I stop only the AD CS service resource (cluadmin.msc), or do I stop the CA service directly (net stop certsvc)?

Answer

[via Rob “there's a Squatch in These Woods” Greene  – Ned]

The CertSvc service has no idea that it belongs to a cluster.  That’s why you set up the CA as a generic service within Cluster Administrator and configure the CA registry hive within Cluster Administrator.

When you update the registry keys on the active CA cluster node, the Cluster service monitors the registry key changes.  When the resource is taken offline, the Cluster service makes a new copy of the registry keys so that the other node gets the update.  When you stop and start the CA service directly, the Cluster service has no idea why the service stopped and started, and those registry key settings are never updated on the stand-by node. General guidance for clusters is to manage resource state (stop/start) within Cluster Administrator, and not through Services.msc, NET STOP, SC, etc.

As far as the CA Database restore: just logon to the Active CA node and run the certutil or CA MMC to perform the operation. There’s no need to touch the service manually.
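A sketch of that sequence from the command line, with a hypothetical resource name:

cluster res "Certification Authority" /offline
REM ...make your registry changes on the active node...
cluster res "Certification Authority" /online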

Other stuff

The Microsoft Premier Field Organization has started a new blog that you should definitely be reading.

Welcome to your nightmare (Thanks Mark!)

Totally immature and therefore funny. Doubles as a gender test.

Speaking of George Lucas re-imaginings, check out this awesome shot-by-shot comparison of Raiders and 30 other previous adventure films:


Indy whipped first!

I am completely addicted to Panzer Corps; if you ever played Panzer General in the 90’s, you will be too.

Apropos throwback video gaming and even more re-imagining, here is Battlestar Galactica as a 1990’s RPG:

   
The mail sack becomes meta of meta of meta

Like Legos? Love Simon Pegg? This is for you.

Best sci-fi books of 2011, according to IO9.

What’s your New Year’s resolution? Mine is to stop swearing so much.

 

Until next time,

- Ned “$#%^&@!%^#$%^” Pyle

Friday Mail Sack: It’s a Dog’s Life Edition


Hi folks, Ned here again with some possibly interesting, occasionally entertaining, and always unsolicited Friday mail sack. This week we talk some:

Fetch!

Question

We use third-party DNS but used to have Windows DNS on domain controllers; that service has been uninstalled, and all that remains are the partitions. According to KB835397, deleting the ForestDNSZones and DomainDNSZones partitions is not supported. Soon we will have removed the last few old domain controllers hosting some of those partitions and replaced them with Windows Server 2008 R2 DCs that never had Windows DNS. Are we getting ourselves into trouble or making this environment unsupported?

Answer

You are supported. Don’t interpret the KB too narrowly; there’s a difference between deleting partitions used by DNS and never creating them in the first place. If you are not using MS DNS and the zones don’t exist, nothing in Windows should care about them, and we are not aware of any problems.

This is more of a “cover our butts” article… we just don’t want you deleting partitions that you are actually using and naturally, we don’t rigorously test with non-MS DNS. That’s your job. ;-)

Question

When I run DCDIAG it returns all warning events for the system event log. I have a bunch of “expected” warnings, so this just clogs up my results. Can I change this behavior?

Answer

DCDIAG has no idea what the messages mean and has no way to control the output. You will need to suppress the events themselves in their own native fashion, if their application supports it. For example, if it’s a chatty combination domain controller/print server in a branch office that shows endless expected printer Warning messages, you’d use the steps here.

If your application cannot be controlled, there’s one (rather gross) alternative to make things cleaner though, and that’s to use the FIND command in a few pipelines to remove expected events. For example, here I always see this write cache warning when I boot this DC, and I don’t really care about it:

image

Since I don’t care about these entries, I can use pipelined FIND (with /v to drop those lines) and narrow down the returned data. I probably don’t care about the time generated since DCDIAG only shows the last 60 minutes, nor the event string lines either. So with that, I can use this single wrapped line in a batch file:

dcdiag /test:systemlog | find /I /v "eventid: 0x80040022" | find /I /v "the driver disabled the write cache on device" | find /i /v "event string:" | find /i /v "time generated:"

clip_image002
Whoops, I need to fix that user’s group memberships!

Voila. I still get most of the useful data and nothing about that write cache issue. Just substitute your own stuff.

See, I don’t always make you use Windows PowerShell for your pipelines. ツ

Question

If I walk into a new Windows Server 2008 AD environment cold and need to know if they are using DFSR or FRS for SYSVOL replication, what is the quickest way to tell?

Answer

Just run this DFSRMIG command:

dfsrmig.exe /getglobalstate

That tells you the current state of the SYSVOL DFSR topology and migration.

If it says:

  • “Eliminated”

… they are using DFSR for SYSVOL. It will show this message even if the domain was built from scratch with a Windows Server 2008 domain functional level or higher and never performed a migration; the tool doesn’t know how to say “they always used DFSR from day one”.

If it says:

  • “Prepared”
  • “Redirected”

… they are mid-migration and using both FRS and DFSR, favoring one or the other for SYSVOL.

If it says:

  • “Start”
  • “DFSR migration has not yet initialized”
  • “Current domain functional level is not Windows Server 2008 or above”

… they are using FRS for SYSVOL.

Question

When using the DFSR WMI namespace “root\microsoftdfs” and class “dfsrvolumeconfig”, I am seeing weird results for the volume path. On one server it’s the C: drive, but on another it just shows a wacky volume GUID. Why?

Answer

DFSR is replicating data under a mount point. You can see this with any WMI tool (surprise! here’s PowerShell) and then use mountvol.exe to confirm your theory. To wit:

image

image
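For example, a sketch that pairs the two (assuming the DfsrVolumeConfig property names shown in the screenshot above):

Get-WmiObject -Namespace root\microsoftdfs -Class DfsrVolumeConfig | Select-Object VolumeGuid, VolumePath

# then compare the GUIDs against the mount point list:
mountvol.exe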

Question

I notice that the "dsquery user -inactive x" command returns a list of user accounts that have been inactive for x number of weeks, but not days.  I suspect that this lack of precision is related to this older AskDS post where it is mentioned that the LastLogonTimeStamp attribute is not terribly accurate. I was wondering what your thoughts on this were, and if my only real recourse for precise auditing of inactive user accounts was by parsing the Security logs of my DCs for user logon events.

Answer

Your supposition about DSQUERY is right. What's worse, that tool's inactive search does not even include users that have never logged on. So it's totally misleading. If you use the AD Administrative Center query for inactive accounts, it uses this LDAP syntax, so it at least catches everyone (note that your lastLogonTimestamp UTC value would be different):

(&(objectCategory=person)(objectClass=user)(!userAccountControl:1.2.840.113556.1.4.803:=2)(|(lastLogonTimestamp<=129528216000000000)(!lastLogonTimestamp=*)))

You can lower msDS-LogonTimeSyncInterval down to 1 day, which removes the randomization and gets you very close to that magic "exactness" (within 24 hours). But this will increase your replication load, perhaps significantly if this is a large environment with a lot of logon activity. Warren's blog post that you mentioned describes how to do this. I’ve seen some pretty clever PowerShell techniques for this: here's one (untested, non-MS) example that could easily be adapted to native Windows AD PowerShell or just used as-is. Dmitry is a smart fella. If you find scripts, make sure the author clearly understood Warren’s rules.

There is also the option - if you just care about users' interactive or runas logons and you have all Windows Vista or Windows 7 clients - to implement msDS-LastSuccessfulInteractiveLogonTime. The ups and downs of this are discussed here. That is replicated normally and could be used as an LDAP query option.

Windows AD PowerShell has a nice built-in constructed property called “LastLogonDate” that is the friendly date time info, converted from the gnarly UTC. That might help you in your scripting efforts.
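For example, a simple report sketch using that property (the 90-day threshold is illustrative, and the same accuracy caveats apply):

Get-ADUser -Filter { Enabled -eq $true } -Properties LastLogonDate |
    Where-Object { -not $_.LastLogonDate -or $_.LastLogonDate -lt (Get-Date).AddDays(-90) } |
    Select-Object Name, LastLogonDate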

After all that, you are back to Warren's recommended use of security logs and audit collection services. Which is a good idea anyway. You don't get to be meticulous about just one aspect of security!

Question

I was reading your older blog post about setting legal notice text and had a few questions:

  1. Has Windows 7 changed to make this any easier or better?
  2. Any way to change the font or its size?
  3. Any way to embed URLs in the text so the user can see what they are agreeing to in more detail?

Answer

[Courtesy of that post’s author, Mike “DiNozzo” Stephens]

  1. No
  2. No
  3. No

:)

#3 is especially impossible. Just imagine what people would do to us if we allowed you to run Internet Explorer before you logged on!

image

 [The next few answers courtesy of Jonathan “Davros” Stephens. Note how he only ever replies with bad news… – Neditor]

Question

I have encountered the following issue with some of my users performing smart card logon from Windows XP SP3.

It seems that my users are able to log on with their smart cards even if the certificate on the smart card has been revoked.
Here are the tests we've performed:

  1. Verified that the CRL is accessible
  2. Smartcard logon with the working certificate
  3. Revoked the certificate + waited for the next CRL publish
  4. Verified that the new CRL is accessible and that the revoked certificate was present in the list
  5. Tested smartcard logon with the revoked certificate

We verified the presence of the following registry keys both on the client machine and on the authenticating DC:

HKEY_Local_Machine\System\CurrentControlSet\Services\KDC\CRLValidityExtensionPeriod
HKEY_Local_Machine\System\CurrentControlSet\Services\KDC\CRLTimeoutPeriod
HKEY_Local_Machine\System\CurrentControlSet\Control\LSA\Kerberos\Parameters\CRLTimeoutPeriod
HKEY_Local_Machine\System\CurrentControlSet\Control\LSA\Kerberos\Parameters\UseCachedCRLOnlyAndIgnoreRevocationUnknownErrors

None of them were found.

Answer

First, there is an overlap built into CRL publishing: the old CRL remains valid for a time after the new CRL is published, to allow clients and servers a window in which to download the new CRL before the old one becomes invalid. If the old CRL is still valid, then it is probably being used by the DC to verify the smart card certificate.

Second, revocation of a smart card certificate is not intended to be usable as real-time access control -- not even with OCSP involved. If you want to prevent the user from logging on with the smart card, then the account should be disabled. That said, one possible hacky alternative that would take immediate effect would be to change the UPN of the user so it does not match the UPN on the smart card. With mismatched UPNs, implicit mapping of the smart card certificate to the user account would fail; the DC would have no way to determine which account it should authenticate, even assuming the smart card certificate verified successfully.

If you have Windows Server 2008 R2 DCs, you can disable the implicit mapping of smart card logon certificates to user accounts via the UPN in favor of explicit certificate mapping. That way, if a user loses his smart card and you want to make sure that that certificate cannot be used for authentication as soon as possible, remove it from the altSecurityIdentities attribute on the user object in AD. Of course, the tradeoff here is the additional management of updating user accounts before their smart cards can be used for logon.
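When testing scenarios like this, it also helps to check a certificate the same way Windows does - for example (file name hypothetical):

certutil -verify -urlfetch revokedcard.cer

That fetches the CDP/AIA URLs and reports revocation status, so you can tell whether you are still inside the old CRL's validity overlap.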

Question

When using SID cloning tools like sidhist.vbs in a Windows Server 2008 R2 domain, they always fail with the error “Destination auditing must be enabled”. I verified that Account Management auditing is on as required, but then I also found that the newer Advanced Audit Policy version of that setting is also on. It seems like the DsAddSidHistory() API does not consider this new auditing sufficient? In my test environment everything works fine, but it does not use Advanced Auditing. I also found that if I set all Account Management advanced audit subcategories to enabled, it works.

Answer

It turns out that this is a known issue (it affects ADMT too). At this time, DsAddSidHistory() only works if it thinks legacy Account Management auditing is enabled. You will either need to:

  • Remove the Advanced Auditing policy and force the destination computers to use legacy auditing by setting Audit: Force audit policy subcategory settings (Windows Vista or later) to override audit policy category settings to disabled.
  • Set all Account Management advanced audit subcategories to enabled, as you found, which satisfies the SID cloning function.

We are making sure TechNet is updated to reflect this as well.  It’s not like Advanced Auditing is going to get less popular over time.
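The second option scripts easily on the destination DCs - for example:

auditpol /set /category:"Account Management" /success:enable /failure:enable
auditpol /get /category:"Account Management"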

Question

Enterprise and Datacenter editions of Windows Server support enforcing Role Separation based on the Common Criteria (CC) definitions.  But there doesn't seem to be any way to define the roles that you want to enforce.

CC Security Levels 1 and 2 only define two roles that need to be restricted (CA Administrator and Certificate Manager).  Auditing and Backup functions are handled by the CA administrator instead of dedicated roles.

Is there a way to enforce separation of these two roles without including the Auditor and Backup Operator roles defined in the higher CC Security Levels?

Answer

Unfortunately, there is no way to make exceptions to role separation. Basically, you have two options:

  1. Enable Role Separation and use different user accounts for each role.
  2. Do not enable Role Separation, turn on CA Auditing to monitor actions taken on the CA.
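For reference, option 1 is toggled with certutil plus a service restart - a sketch:

certutil -setreg CA\RoleSeparationEnabled 1
net stop certsvc && net start certsvc

To turn it back off, run certutil -delreg CA\RoleSeparationEnabled and restart the service again.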

[Now back to Ned for the idiotic finish!]

Other Stuff

My latest favorite site is cubiclebot.com. Mainly because they lead me to things like this:


Boing boing boing

And this:


Wait for the pit!

Speaking of cool dogs and songs: Bark bark bark bark, bark bark bark-bark.

Game of Thrones season 2 is April 1st. Expect everyone to die, no matter how important or likeable their character. Thanks George!

At last, Ninja-related sticky notes.

For all the geek parents out there. My favorite is:

adorbz-ewok
For once, an Ewok does not enrage me

It was inevitable.

 

Finally: I am headed back to Chicagoland next weekend to see my family. If you are in northern Illinois and planning on eating at Slott’s Hots in Libertyville, Louie’s in Waukegan, or Leona’s in Chicago, gimme a wave. Yes, all I care about is the food. My wife only cares about the shopping; that’s why we’re on Michigan Avenue and why she cannot complain. You don’t know what it’s like living in Charlotte!! D-:

Have a nice weekend folks,

Ned “my dogs are not quite as athletic” Pyle


Friday Mail Sack: Carl Sandburg Edition

Question

Should I create DFS namespace folders (used by the DFS service itself) under NTFS mount points? Is there any advantage to this?

Answer

DFSN management tools do not allow you to create DFSN roots and links under mount points ordinarily, and once you do through alternate hax0r means, they are hard to remove (you have to use FSUTIL). Ergo, do not do it – the management tools blocking you means that it is not supported.

There is no real value in placing the DFSN special folders under mount points - the DFSN special folders consume no space, do not contain files, and exist only to provide reparse point tags to the DFSN service and its file IO driver goo. By default, they are configured on the root of the C: drive in a folder called c:\dfsroots. That ensures that they are available when the OS boots. If clustering, you'd create them on one of your drive-lettered shared disks.

Question

How do you back up the Themes folder using USMT4 in Windows 7?

Answer

The built-in USMT migration code copies the settings but not the files, as it knows the files will exist somewhere in the user’s source profile and that they are being copied by migdocs.xml/miguser.xml. It also knows that the Themes system will take care of the rest after migration: the Themes system creates the transcoded image files using the theme settings and copies the image files itself.

Note here how after scanstate, my USMT store’s Themes folder is empty:

clip_image001

After I loadstate that user, the Themes system fixed it all up in that user’s real profile when the user logged on:

clip_image002

However, if you still specifically need to copy the Themes folder intact for some reason, here’s a sample custom XML file:

<?xml version="1.0" encoding="UTF-8"?>
<migration urlid="http://www.microsoft.com/migration/1.0/migxmlext/migratethemefolder">
  <component type="Documents" context="User">
    <!-- sample theme folder migrator -->
    <displayName>ThemeFolderMigSample</displayName>
    <role role="Data">
      <rules>
        <include filter='MigXmlHelper.IgnoreIrrelevantLinks()'>
          <objectSet>
            <pattern type="File">%CSIDL_APPDATA%\Microsoft\Windows\Themes\* [*]</pattern>
          </objectSet>
        </include>
      </rules>
    </role>
  </component>
</migration>

And here it is in action:

clip_image004

Question

I've recently been working on extending my AD schema with a new back-linked attribute pair, and I used the instructions on this blog to auto-generate the linkIDs for my new attributes. Confusingly, the resulting linkIDs are negative values (-912314983 and -912314984). The attributes and backlinks seem to work as expected, but when looking at the MSDN definition of the linkID attribute, it specifically states that the linkID should be a positive value. Do you know why I'm getting a negative value, and if I should be concerned?

Answer

The only hard and fast rule is that the forward link (flink) must be an even number and the backward link (blink) must be the flink's ID plus one. In your case, if the flink is -912314984 then the blink had better be -912314983, which I assume is the case since things are working. But we were curious when you posted the linkID documentation from MSDN, so we dug a little deeper.

The fact that your linkIDs are negative numbers is correct and expected, and is the result of a feature called AutoLinkID. Automatically generated linkIDs are in the range of 0xC0000000-0xFFFFFFFC (-1,073,741,824 to -4). This means that it is a good idea to use positive numbers if you are going to set the linkID manually. That way you are guaranteed not to conflict with automatically generated linkIDs.

The bottom line is, you're all good.

Question

I am trying to delegate permissions to the DBA team to create, modify, and delete SPNs since they're the team that swaps out the local accounts SQL is installed under to the domain service accounts we create to run SQL.

Documentation on the Internet has led me down the rabbit hole to no end.  Can you tell me how this is done in a W2K8 R2 domain and a W2K3 domain?

Answer

So you want to delegate to a specific group of users -- your DBA team -- permission to modify the SPN attribute on a specific set of objects -- computer accounts for servers running SQL Server, and user accounts used as service accounts under which SQL Server can run.

The easiest way to accomplish this is to put all such accounts in one OU, e.g. OU=SQL Server Accounts, and run the following commands:

Dsacls "OU=SQL Server Accounts,DC=corp,DC=contoso,DC=com" /I:S /G "CORP\DBA Team":WPRP;servicePrincipalName;user
Dsacls "OU=SQL Server Accounts,DC=corp,DC=contoso,DC=com" /I:S /G "CORP\DBA Team":WPRP;servicePrincipalName;computer

These two commands will grant the DBA Team group permission to read and write the servicePrincipalName attribute on user and computer objects in the SQL Server Accounts OU.

Your admins should then be able to use setspn.exe to modify that property on the designated accounts.

But…what if you have a large number of accounts spread across multiple OUs? The above solution only works well if all of your accounts are concentrated in a few (preferably one) OUs. In this case, you basically have two options:

  1. You can run the two commands specifying the root of the domain as the object, but you would be delegating permissions for EVERY user and computer in the domain. Do you want your DBA team to be able to modify accounts for which they have no legitimate purpose?
  2. Compile a list of specific accounts the DBA team can manage and modify each of them individually. That can be done with a single command line. Create a text file that contains the DNs of each account for which you want to delegate permissions and then use the following command:

    for /f "tokens=*" %i in (object-list.txt) do dsacls "%i" /G "CORP\DBA Team":WPRP;servicePrincipalName

None of these are really great options, however, because you’re essentially giving a group of non-AD Administrators the ability to screw up authentication to what are perhaps critical business resources. You might actually be better off creating an expedited process whereby these DBAs can submit a request to a real Administrator who already has permissions to make the required changes, as well as the experience to verify such a change won’t cause any problems.

Author’s Note: This gentleman pointed out in a reply that these DBAs wouldn’t want him messing with tables, rows and the SA account, so he doesn’t want them touching AD. I thought that was sort of amusing.

Question

What is PowerShell checking when you run get-adcomputer -properties * -filter * | format-table Name,Enabled? Is Enabled an attribute, a flag, a bit, a setting? What, if anything, would that setting show up as in something like ADSIEdit.msc?

I get that stuff like samAccountName, sn, telephonenumber, etc.  are attributes but what the heck is enabled?

Answer

All objects in PowerShell are PSObjects, which essentially wrap the underlying .NET or COM objects and expose some or all of the methods and properties of the wrapped object. In this case, Enabled is an attribute ultimately inherited from the System.DirectoryServices.AccountManagement.AuthenticablePrincipal .NET class. This answer isn’t very helpful, however, as it just moves your search for answers from PowerShell to the .NET Framework, right? Ultimately, you want to know how a computer’s or user’s account state (enabled or disabled) is stored in Active Directory.

Whether or not an account is disabled is reflected in the appropriate bit being set on the object's userAccountControl attribute. Check out the following KB: How to use the UserAccountControl flags to manipulate user account properties. You'll find that the second least significant bit (0x2) of the userAccountControl bitmask is called ACCOUNTDISABLE, and reflects the appropriate state; 1 is disabled and 0 is enabled.

If you find that you need to use an actual LDAP query to search for disabled accounts, then you can use a bitwise filter. The appropriate LDAP filter would be:

(UserAccountControl:1.2.840.113556.1.4.803:=2)
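If you'd rather stay in PowerShell than raw LDAP, here is a minimal sketch using the same bitwise filter (it assumes the ActiveDirectory module is loaded):

Get-ADUser -LDAPFilter "(userAccountControl:1.2.840.113556.1.4.803:=2)" | Format-Table Name,Enabled

Search-ADAccount -AccountDisabled wraps the same check up for you without any filter gymnastics.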

Other stuff

I watched this and, despite the lack of lots of moving arms and tools, had sort of a Count Zero moment:

And just for Ned (because he REALLY loves this stuff!): Kittens!

No need to rush back, dude.

Jonathan “Payback is a %#*@&!” Stephens

Purging Old NT Security Protocols


Hi folks, Ned here again (with some friends). Everyone knows that Kerberos is Microsoft’s preeminent security protocol and that NTLM is both inefficient and, in some iterations, not strong enough to avoid concerted attack. NTLM V2 using complex passwords stands up well to common hash cracking tools like Cain and Abel, Ophcrack, or John the Ripper. On the other hand, NTLM V1 is defeated far faster and LM is effectively no protection at all.

I discussed NTLM auditing years ago, when Windows 7 and Windows Server 2008 R2 introduced the concept of NTLM blocking. That article was for well-controlled environments where you thought that there was some chance of disabling NTLM – only modern clients and servers, the latest applications, and Active Directory. In a few other articles, I gave some further details on the limitations of the Windows auditing system logging. It turns out that while we’re ok at telling when NTLM was used, we’re not great at describing which flavor. For instance, Windows Server 2008+ security auditing can tell you about the NTLM version through the 4624 event that states a Package Name (NTLM only): NTLM V1 or Package Name (NTLM only): NTLM V2, but all prior operating systems cannot. None of the older auditing can tell you if LM is used either. Windows Server 2008 R2 NTLM auditing only shows you NTLM usage in general.

Today the troika of Dave, Jonathan, and Ned are here to help you discover which computers and applications are using NTLM V1 and LM security, regardless of your operating system. It’s safe to say that some people aren’t going to like our answers or how much work this entails, but that’s life; when LM security was created as part of LAN Manager and OS/2 by Microsoft and IBM, Dave and I were in grade school and Jonathan was only 48.

If you need to keep using NTLM V2 and simply want to hunt down the less secure precursors, this should help.

Finding NTLM V1 and LM Usage via network captures

The only universal, OS-agnostic way you can tell which clients are sending NTLMv1 and LM challenges is by examining a network trace taken from destination computers. Using Netmon 3.4 or another network capture tool, look for packets with a negotiated NTLM security mechanism.

This first example, taken with LMCompatibilityLevel set to 0 on the client, is an SMB session setup request packet specifying NTLM authentication.

Here is the SMB SESSION SETUP request, which specifies the security token mechanism:

  Frame: Number = 15, Captured Frame Length = 220, MediaType = ETHERNET

+ Ethernet: Etype = Internet IP (IPv4),DestinationAddress:[00-15-5D-05-B4-44],SourceAddress:[00-15-5D-05-B4-49]

+ Ipv4: Src = 10.10.10.20, Dest = 10.10.10.27, Next Protocol = TCP, Packet ID = 747, Total IP Length = 206

+ Tcp: Flags=...AP..., SrcPort=49235, DstPort=Microsoft-DS(445), PayloadLen=166, Seq=2204022974 - 2204023140, Ack=820542383, Win=32724 (scale factor 0x2) = 130896

+ SMBOverTCP: Length = 162

- SMB2: C   SESSION SETUP (0x1)

    SMBIdentifier: SMB

  + SMB2Header: C SESSION SETUP (0x1),TID=0x0000, MID=0x0002, PID=0xFEFF, SID=0x0000

  - CSessionSetup:

     StructureSize: 25 (0x19)

     VcNumber: 0 (0x0)

   + SecurityMode: 1 (0x1)

   + Capabilities: 0x1

     Channel: 0 (0x0)

     SecurityBufferOffset: 88 (0x58)

     SecurityBufferLength: 74 (0x4A)

     PreviousSessionId: 0 (0x0)

   - securityBlob:

    - GSSAPI:

     - InitialContextToken:

      + ApplicationHeader:

      + ThisMech: SpnegoToken (1.3.6.1.5.5.2)

      - InnerContextToken: 0x1

       - SpnegoToken: 0x1

        + ChoiceTag:

        - NegTokenInit:

         + SequenceHeader:

         + Tag0:

         + MechTypes: Prefer NLMP (1.3.6.1.4.1.311.2.2.10)

         + Tag2:

         + OctetStringHeader:

         - MechToken: NTLM NEGOTIATE MESSAGE

          - NLMP: NTLM NEGOTIATE MESSAGE

             Signature: NTLMSSP

             MessageType: Negotiate Message (0x00000001)

           + NegotiateFlags: 0xE2088297 (NTLM v2, 128-bit encryption, Always Sign)

           + DomainNameFields: Length: 0, Offset: 0

           + WorkstationFields: Length: 0, Offset: 0

           + Version: Windows 6.1 Build 7601 NLMPv15

Next, the server sends its NTLM challenge back to the client:

  Frame: Number = 16, Captured Frame Length = 447, MediaType = ETHERNET

+ Ethernet: Etype = Internet IP (IPv4),DestinationAddress:[00-15-5D-05-B4-49],SourceAddress:[00-15-5D-05-B4-44]

+ Ipv4: Src = 10.10.10.27, Dest = 10.10.10.20, Next Protocol = TCP, Packet ID = 24310, Total IP Length = 433

+ Tcp: Flags=...AP..., SrcPort=Microsoft-DS(445), DstPort=49235, PayloadLen=393, Seq=820542383 - 820542776, Ack=2204023140, Win=512 (scale factor 0x8) = 131072

+ SMBOverTCP: Length = 389

- SMB2: R  - NT Status: System - Error, Code = (22) STATUS_MORE_PROCESSING_REQUIRED  SESSION SETUP (0x1), SessionFlags=0x0

    SMBIdentifier: SMB

  + SMB2Header: R SESSION SETUP (0x1),TID=0x0000, MID=0x0002, PID=0xFEFF, SID=0x0019

  - RSessionSetup:

     StructureSize: 9 (0x9)

   + SessionFlags: 0x0

     SecurityBufferOffset: 72 (0x48)

     SecurityBufferLength: 317 (0x13D)

   - securityBlob:

    - GSSAPI:

     - NegotiationToken:

      + ChoiceTag:

      - NegTokenResp:

       + SequenceHeader:

       + Tag0:

       + NegState: accept-incomplete (1)

       + Tag1:

       + SupportedMech: NLMP (1.3.6.1.4.1.311.2.2.10)

       + Tag2:

       + OctetStringHeader:

       - ResponseToken: NTLM CHALLENGE MESSAGE

        - NLMP: NTLM CHALLENGE MESSAGE

          Signature: NTLMSSP

           MessageType: Challenge Message (0x00000002)

        + TargetNameFields: Length: 12, Offset: 56

         + NegotiateFlags: 0xE2898215 (NTLM v2, 128-bit encryption, Always Sign)

         + ServerChallenge: 67F9C5F851F2CD73

           Reserved: Binary Large Object (8 Bytes)

         + TargetInfoFields: Length: 214, Offset: 68

         + Version: Windows 6.1 Build 7601 NLMPv15

           TargetNameString: CORP01

         + AvPairs: 7 pairs

The client calculates the response to the challenge, using the various available hashes of the password. Note how this response includes both LM and NTLMv1 challenge responses.

  Frame: Number = 17, Captured Frame Length = 401, MediaType = ETHERNET

+ Ethernet: Etype = Internet IP (IPv4),DestinationAddress:[00-15-5D-05-B4-44],SourceAddress:[00-15-5D-05-B4-49]

+ Ipv4: Src = 10.10.10.20, Dest = 10.10.10.27, Next Protocol = TCP, Packet ID = 748, Total IP Length = 387

+ Tcp: Flags=...AP..., SrcPort=49235, DstPort=Microsoft-DS(445), PayloadLen=347, Seq=2204023140 - 2204023487, Ack=820542776, Win=32625 (scale factor 0x2) = 130500

+ SMBOverTCP: Length = 343

- SMB2: C   SESSION SETUP (0x1)

    SMBIdentifier: SMB

  + SMB2Header: C SESSION SETUP (0x1),TID=0x0000, MID=0x0003, PID=0xFEFF, SID=0x0019

  - CSessionSetup:

     StructureSize: 25 (0x19)

     VcNumber: 0 (0x0)

   + SecurityMode: 1 (0x1)

   + Capabilities: 0x1

     Channel: 0 (0x0)

   SecurityBufferOffset: 88 (0x58)

     SecurityBufferLength: 255 (0xFF)

     PreviousSessionId: 0 (0x0)

   - securityBlob:

    - GSSAPI:

     - NegotiationToken:

      + ChoiceTag:

      - NegTokenResp:

       + SequenceHeader:

       + Tag0:

       + NegState: accept-incomplete (1)

       + Tag2:

       + OctetStringHeader:

       - ResponseToken: NTLM AUTHENTICATE MESSAGE, Version: v1, Domain: CORP01, User: Administrator, Workstation: CONTOSO-CLI-01

        - NLMP: NTLM AUTHENTICATE MESSAGE, Version: v1, Domain: CORP01, User: Administrator, Workstation: CONTOSO-CLI-01

           Signature: NTLMSSP

           MessageType: Authenticate Message (0x00000003)

         + LmChallengeResponseFields: Length: 24, Offset: 154

         + NtChallengeResponseFields: Length: 24, Offset: 178

         + DomainNameFields: Length: 12, Offset: 88

         + UserNameFields: Length: 26, Offset: 100

         + WorkstationFields: Length: 28, Offset: 126

         + EncryptedRandomSessionKeyFields: Length: 16, Offset: 202

         + NegotiateFlags: 0xE2888215 (NTLM v2, 128-bit encryption, Always Sign)

         + Version: Windows 6.1 Build 7601 NLMPv15

         + MessageIntegrityCheckNotPresent: 6243C42AF68F9DFE30BD31BFC722B4C0

           DomainNameString: CORP01

           UserNameString: Administrator

           WorkstationString: CONTOSO-CLI-01

         + LmChallengeResponseStruct: 3995E087245B6F7100000000000000000000000000000000

         + NTLMV1ChallengeResponse: B0751BDCB116BA5737A51962328D5CCD19EEBEBB15A69B1E

         + SessionKeyString: 397DACB158C9F10EF4903F10D4CBE032

       + Tag3:

       + OctetStringHeader:

       + MechListMic: Version: 1

The server then responds with successful negotiation state:

  Frame: Number = 18, Captured Frame Length = 159, MediaType = ETHERNET

+ Ethernet: Etype = Internet IP (IPv4),DestinationAddress:[00-15-5D-05-B4-49],SourceAddress:[00-15-5D-05-B4-44]

+ Ipv4: Src = 10.10.10.27, Dest = 10.10.10.20, Next Protocol = TCP, Packet ID = 24312, Total IP Length = 145

+ Tcp: Flags=...AP..., SrcPort=Microsoft-DS(445), DstPort=49235, PayloadLen=105, Seq=820542776 - 820542881, Ack=2204023487, Win=510 (scale factor 0x8) = 130560

+ SMBOverTCP: Length = 101

- SMB2: R   SESSION SETUP (0x1), SessionFlags=0x0

    SMBIdentifier: SMB

  + SMB2Header: R SESSION SETUP (0x1),TID=0x0000, MID=0x0003, PID=0xFEFF, SID=0x0019

  - RSessionSetup:

     StructureSize: 9 (0x9)

   + SessionFlags: 0x0

     SecurityBufferOffset: 72 (0x48)

     SecurityBufferLength: 29 (0x1D)

   - securityBlob:

    - GSSAPI:

     - NegotiationToken:

      + ChoiceTag:

      - NegTokenResp:

       + SequenceHeader:

       + Tag0:

       + NegState: accept-completed (0)

       + Tag3:

       + OctetStringHeader:

       + MechListMic: Version: 1

To contrast this, consider the challenge response packet when LMCompatibility is set to 4 or 5 on the client (meaning it is not allowed to send anything but NTLM V2). The LM response is null, while the NTLMv1 response isn't included at all.

  Frame: Number = 17, Captured Frame Length = 763, MediaType = ETHERNET

+ Ethernet: Etype = Internet IP (IPv4),DestinationAddress:[00-15-5D-05-B4-44],SourceAddress:[00-15-5D-05-B4-49]

+ Ipv4: Src = 10.10.10.20, Dest = 10.10.10.27, Next Protocol = TCP, Packet ID = 844, Total IP Length = 749

+ Tcp: Flags=...AP..., SrcPort=49231, DstPort=Microsoft-DS(445), PayloadLen=709, Seq=4045369997 - 4045370706, Ack=881301203, Win=32625 (scale factor 0x2) = 130500

+ SMBOverTCP: Length = 705

- SMB2: C   SESSION SETUP (0x1)

    SMBIdentifier: SMB

  + SMB2Header: C SESSION SETUP (0x1),TID=0x0000, MID=0x0003, PID=0xFEFF, SID=0x0021

  - CSessionSetup:

     StructureSize: 25 (0x19)

     VcNumber: 0 (0x0)

  + SecurityMode: 1 (0x1)

   + Capabilities: 0x1

     Channel: 0 (0x0)

     SecurityBufferOffset: 88 (0x58)

     SecurityBufferLength: 617 (0x269)

     PreviousSessionId: 0 (0x0)

   - securityBlob:

    - GSSAPI:

     - NegotiationToken:

      + ChoiceTag:

      - NegTokenResp:

       + SequenceHeader:

       + Tag0:

       + NegState: accept-incomplete (1)

       + Tag2:

       + OctetStringHeader:

       - ResponseToken: NTLM AUTHENTICATE MESSAGE, Version: v2, Domain: CORP01, User: Administrator, Workstation: CONTOSO-CLI-01

        - NLMP: NTLM AUTHENTICATE MESSAGE, Version: v2, Domain: CORP01, User: Administrator, Workstation: CONTOSO-CLI-01

           Signature: NTLMSSP

           MessageType: Authenticate Message (0x00000003)

         + LmChallengeResponseFields: Length: 24, Offset: 154

         + NtChallengeResponseFields: Length: 382, Offset: 178

         + DomainNameFields: Length: 12, Offset: 88

         + UserNameFields: Length: 26, Offset: 100

         + WorkstationFields: Length: 28, Offset: 126

         + EncryptedRandomSessionKeyFields: Length: 16, Offset: 560

         + NegotiateFlags: 0xE2888215 (NTLM v2, 128-bit encryption, Always Sign)

         + Version: Windows 6.1 Build 7601 NLMPv15

         + MessageIntegrityCheck: 2B69C069DD922D4A841D0EC43939DF0F

           DomainNameString: CORP01

           UserNameString: Administrator

           WorkstationString: CONTOSO-CLI-01

         + LmChallengeResponseStruct: 000000000000000000000000000000000000000000000000

         + NTLMV2ChallengeResponse: CD22D7CC09140E02C3D8A5AB623899A8

         + SessionKeyString: AF31EDFAAF8F38D1900D7FBBDCB43760

       + Tag3:

       + OctetStringHeader:

       + MechListMic: Version: 1

By taking traces and filtering on the NTLMV1ChallengeResponse field, you find those hosts that are sending NTLMv1 responses, and can determine if you need to upgrade them or if they simply have the wrong LMCompatibility values set through security policy.

Finding LM usage via Netlogon debug logs

If you just want to detect LM authentication and are not looking to spend time in network captures, you can instead enable Netlogon logging on all DCs and servers in the environment.

Nltest /dbflag:2080ffff
net stop NetLogon
net start NetLogon

This creates netlogon.log in the C:\Windows\Debug folder; it can grow to a maximum of 20 MB by default. At that point, the server renames the file to netlogon.bak and starts a new netlogon.log. When that new file reaches 20 MB, the server deletes netlogon.bak, renames netlogon.log to netlogon.bak, and starts yet another netlogon.log. To make these log files larger, you can use a registry entry or group policy:

Registry

Path: HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Services\Netlogon\Parameters
Value Name: MaximumLogFileSize
Value Type: REG_DWORD
Value Data: <maximum log file size in bytes>

Group Policy

\Computer Configuration\Administrative Templates\System\Net Logon\Maximum Log File Size
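For example, here is a sketch that raises the cap to 100 MB on a single server (the value is in bytes; -Force overwrites any existing value):

New-ItemProperty -Path 'HKLM:\SYSTEM\CurrentControlSet\Services\Netlogon\Parameters' -Name MaximumLogFileSize -PropertyType DWORD -Value 104857600 -Force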

You aren't trying to capture all data here - just useful samples - but if they wrap so much that you're unsure if they are accurate at all, increasing size is a good idea. As an alternative, you can create a scheduled task that runs ONSTART or a computer startup script. Either of them can use this batch file to make backups of the netlogon log by date/time and the computer name:

REM Sample script to copy netlogon.bak to a netlogon_DATETIME_COMPUTERNAME.log backup file every 5 minutes

:start
if exist %windir%\debug\netlogon.bak goto copylog

:copylog_return
REM sleep.exe ships with the Windows Resource Kit; on newer systems, timeout /t 300 does the same job
sleep 300
goto start

:copylog
for /f "tokens=1-7 delims=/:., " %%a in ("%DATE% %TIME%") do (set DATETIME=%%a-%%b-%%c_%%d-%%e-%%f)
copy /v %windir%\debug\netlogon.bak %windir%\debug\netlogon_%DATETIME%_%COMPUTERNAME%.log
if %ERRORLEVEL% EQU 0 del %windir%\debug\netlogon.bak
goto copylog_return

Periodically, gather all of the Netlogon logs from the DCs and servers and place them in a single folder. Then run the following LogParser command from that folder to parse them all for a count of unique UAS logons to the domain controller by workstation:

Logparser.exe "SELECT TO_UPPERCASE(EXTRACT_SUFFIX(TEXT,0,'returns ')) AS ERR, TO_UPPERCASE (extract_prefix(extract_suffix(TEXT, 0, 'NetrLogonUasLogon of '), 0, 'from ')) as USER, TO_UPPERCASE (extract_prefix(extract_suffix(TEXT, 0, 'from '), 0, 'returns ')) as WORKSTATION, COUNT(*) FROM '*netlogon.*' WHERE INDEX_OF(TO_UPPERCASE (TEXT),'LOGON') >0 AND INDEX_OF(TO_UPPERCASE(TEXT),'RETURNS') >0 AND INDEX_OF(TO_UPPERCASE(TEXT),'NETRLOGONUASLOGON') >0 GROUP BY ERR, USER, WORKSTATION ORDER BY COUNT(*) DESC" -i:TEXTLINE -rtp:-1 >UASLOGON_USER_BY_WORKSTATION.txt

UASLOGON_USER_BY_WORKSTATION.txt contains the unique computers and counts. LogParser is available for download from here.

FIND and PowerShell are options here as well. The simplest approach is just to return the lines, perhaps into a text file for later sorting in say, Excel (which is very fast at sorting and allows you to organize your data).
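As a minimal sketch of that simpler approach - assuming the collected logs sit in the current folder - this just dumps every NetrLogonUasLogon line to a text file for sorting elsewhere:

Get-ChildItem *netlogon*.log, *netlogon*.bak -ErrorAction SilentlyContinue | Select-String 'NetrLogonUasLogon' | ForEach-Object { $_.Line } | Out-File UASLOGON_RAW.txt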

image

image

image

I'll wager someone in the comments will take on the rather boring challenge of exactly duplicating what LogParser does. I didn't have the energy this time around. :)

Final thoughts

Microsoft stopped using LM after Windows 95/98/ME. If you do find specific LM-only usage and you don't have any (unsupported) Win9X computers, this is a third party application. A really heinous one.

All supported versions of Windows obey the LMCompatibility registry setting, and can use NTLMv2 just as easily as NTLMv1. At that point, analyzing network traces just becomes useful for tracking down those hosts that have applied the policy, but have not yet been rebooted. Considering how unsafe LM and NTLMv1 are, enabling NoLMHash and LMCompatibility 4 or 5 on all computers may be a faster alternative to auditing. It could cause some temporary outages, but would definitely catch anyone requiring unsafe protocols. There's no better auditing than a complaining application administrator.

Finally, do not limit your NTLM inventory to domain controllers and file or application servers. A comprehensive project requires that you examine all computers in the environment, as even a Windows XP workstation can be a "server" for some application. Use a multi-pronged approach, where you also inventory operating systems through network probing - if you have Windows 95 or old SAMBA lying around somewhere on a shop floor, they are almost guaranteed to use insecure protocols.

Until next time,

- Ned “and Dave and Jonathan and Jonathan's in-home elderly care nurse” Pyle

Friday Mail Sack: Get Off My Lawn Edition


Hi folks, Ned here again. I know this is supposed to be the Friday Mail Sack but things got a little hectic and... ah heck, it doesn't need explaining, you're in IT. This week - with help from the ever-crotchety Jonathan Stephens - we talk about:

Now that Jonathan's Rascal Scooter has finished charging, on to the Q & A.

Question

We want a group policy linked to an OU that contains various computers to apply only to our Windows 7 notebooks. All of our notebooks are named starting with an "N". Does group policy WMI filtering allow stacking conditions on the same group policy?

Answer

Yes, you can chain together multiple query criteria, and they can even be from different classes or namespaces. For example, here I use both the Win32_OperatingSystem and Win32_ComputerSystem classes:

image

And here I use only the Win32_OperatingSystem class, with multiple filter criteria:

image

As long as they all evaluate TRUE, you get the policy. If you had a hundred of these criteria (please don’t) and 99 evaluate true but just one is false, the policy is skipped.

Note that my examples above would catch Win2008 R2 servers also; if you’ve read my previous posts, you know that you can also limit queries to client operating systems using the Win32_OperatingSystem property OperatingSystemSKU. Moreover, if you hadn’t used a predictable naming convention, you can also filter with Win32_SystemEnclosure and query the ChassisTypes property for 8, 9, or 10 (respectively: “Portable”, “Laptop”, and “Notebook”). And no, I do not know the difference between these, it is OEM-specific. Just like “pizza box” is for servers. You stay classy, WMI.
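If you want to test candidate criteria before committing them to a WMI filter, here is a sketch you can run locally (the checks are assumptions based on the question: Windows 7 is version 6.1, ProductType 1 means a client OS, and the notebooks are named starting with "N"):

Get-WmiObject -Query 'SELECT * FROM Win32_OperatingSystem WHERE Version LIKE "6.1%" AND ProductType = 1'
Get-WmiObject -Query 'SELECT * FROM Win32_ComputerSystem WHERE Name LIKE "N%"'

If both queries return an object on a given machine, that machine would receive the policy.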

Question

Is changing LDAP MaxPoolThreads a good or bad idea?

Answer

MaxPoolThreads controls the maximum number of simultaneous threads per-processor that a DC uses to work on LDAP requests. By default, it’s four per processor core. Increasing this value would allow a DC/GC to handle more LDAP requests. So if you have too many LDAP clients talking to too few DCs at once, raising this can reduce LDAP application timeouts and periodic “hangs”. As you might have guessed, the biggest complainer here is often MS Exchange and Outlook. If the performance counters “ATQ Threads LDAP" & "ATQ Threads Total" are constantly at the maximum number based on the number of processor and MaxPoolThreads value, then you are bottlenecking LDAP.

However!

DCs are already optimized to quickly return data from LDAP requests. If your hardware is even vaguely new and you are not seeing actual issues, you should not increase this default value. MaxPoolThreads depends on non-paged pool memory, which on a Win2003 32-bit Windows OS is limited to 256MB (more on Win2008 32-bit). Meaning that if you still have not moved to at least x64 Windows Server 2003, don’t touch this value at all – you can easily hang your DCs. It also means you need to get with the times; we stopped making a 32-bit server OS nearly three years ago and OEMs stopped selling the hardware even before that. A 64-bit system's non-paged pool limit is 128GB.

In addition, changing the LDAP settings is often a Band-Aid that doesn’t address the real issue of DC capacity for your client/server base. Use SPA or AD Data Collector sets to determine "Clients with the Most CPU Usage" under the "Ldap Requests" section, especially if the LDAP queries are not just frequent but also gross. There are also built-in diagnostics logs to find poorly-written requests:

HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Services\NTDS\Diagnostics\
15 Field Engineering

To categorize search operations as expensive or inefficient, two DWORD registry keys are used:

HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Services\NTDS\Parameters\
Expensive Search Results Threshold

HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Services\NTDS\Parameters\
Inefficient Search Results Threshold

These DWORD registry keys have the following default values:

  • Expensive Search Results Threshold: 10000
  • Inefficient Search Results Threshold: 1000

For example, here’s an inefficient result written in the DS event log; yuck, ick, argh!:

Event Type: Information
Event Source: NTDS General
Event Category: Field Engineering
Event ID: 1644
Description:
The Search operation based at RootDSE
using the filter:
& ( | ( & ( (objectCategory = <val>) (objectSid = *) ! ( (sAMAccountType | <bit_val>) ) ) & ( (objectCategory = <val>) ! ( (objectSid = *) ) ) & ( (objectCategory = <val>) (groupType | <bit_val>) ) ) (aNR = <substr>) <startSubstr>*) )

visited 40 entries and returned 0 entries.

Finally, this article should be required reading to any application developers in your company:

Creating More Efficient Microsoft Active Directory-Enabled Applications -
http://msdn.microsoft.com/en-us/library/windows/desktop/ms808539.aspx#efficientadapps_topic04

(The title should be altered to “Creating even slightly efficient…” in my experience).

Question

I want to implement many-to-one certificate mappings by using Issuer and Subject DN match. In altSecurityIdentities I put the following string:

X509:<I>DC=com,DC=contoso,CN=Contoso CA<S>DC=com,DC=contoso,CN=users,CN=user name

In a given example, a certificate with “cn=user name, cn=users, dc=contoso, dc=com” in the Subject field will be mapped to a user account, where I define the mappings. But in that example I get one-to-one mapping. Can I use wildcards here, say:

X509:<I>DC=com,DC=contoso,CN=Contoso CA<S>DC=com,DC=contoso,CN=users,CN=*

So that any certificate that contains “cn=<any value>, cn=users, dc=contoso, dc=com” will be mapped to the same user account?

Answer

[Sent from Jonathan while standing in the 4PM dinner line at Bob Evans]

Unfortunately, no. All that would do is map a certificate with a wildcard subject to that account. The only type of one-to-many mapping supported by the Active Directory mapper is configuring it to ignore the subject completely. Using this method, you can configure the AD mappings so that any certificate issued by a particular CA can be mapped to a single user account. See the following: http://technet.microsoft.com/en-us/library/bb742438.aspx#ECAA

Question

I've recently been working on extending my AD schema with a new back-linked attribute pair, and I used the instructions on this blog and MSDN to auto-generate the linkIDs for my new attributes. Confusingly, the resulting linkIDs are negative values (-912314983 and -912314984). The attributes and backlinks seem to work as expected, but when looking at the MSDN definition of the linkID attribute, it specifically states that the linkID should be a positive value. Do you know why I'm getting a negative value, and if I should be concerned?

Answer

[Sent from Jonathan’s favorite park bench where he feeds the pigeons]

The negative numbers are correct and expected, and are the result of a feature called AutoLinkID. Automatically generated linkIDs are in the range of 0xC0000000-0xFFFFFFFC (-1,073,741,824 to -4). This means that it is a good idea to use positive numbers if you are going to set the linkID manually. That way you are guaranteed not to conflict with automatically generated linkIDs.

The bottom line is, this is expected under the circumstances and you're all good.

Question

Is there any performance advantage to turning off the DFSR debug logging, lowering the number of logs, or moving the logs to another drive? You explained how to do this here in the DFSR debug series, but never mentioned it in your DFSR performance tuning article.

Answer

Yes, you will see some performance improvements turning off the logging or lowering the log count; naturally, all this logging isn’t free, it takes CPU and disk time. But before you run off to make changes, remember that if there are any problems, these logs are the only thing standing between you and the unemployment line. Your server will be much faster without any anti-virus software too, and your company’s profits higher without fire insurance; there are trade-offs in life. That’s why – after some brief agonizing, followed by heavy drinking – I decided not to include it in the performance article.

Moving the logs to a different physical disk than the one hosting Windows is safe and may take some pressure off the OS drive.

Question

When I try to join this Win2008 R2 computer to the domain, it gives an error I’ve never seen before:

"The following error occurred attempting to join the domain "contoso.com":
The request is not supported."

Answer

This server was once a domain controller. During demotion, something prevented the removal of the following registry value name:

HKEY_LOCAL_MACHINE\System\CurrentControlSet\Services\NTDS\Parameters\
DSA Database file

Delete that "Dsa Database File" value name and attempt to join the domain again. It should work this time. If you take a gander at the %systemroot%\debug\netsetup.log, you’ll see another clue that this is your issue:

NetpIsTargetImageADC: Determined this is a DC image as RegQueryValueExW loaded Services\NTDS\Parameters\DSA Database file: 0x0
NetpInitiateOfflineJoin: The image at C:\Windows\system32\config\SYSTEM is a DC: 0x32

We started performing this check in Windows Server 2008 R2, as part of the offline domain join code changes. Hurray for unintended consequences!
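If you prefer the command line, a one-liner cleans up the leftover value (run it elevated; reg.exe prompts before deleting, and /f skips the prompt):

reg delete "HKLM\SYSTEM\CurrentControlSet\Services\NTDS\Parameters" /v "DSA Database file"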

Question

We have a largish AD LDS (ADAM) instance that we update daily by importing CSV files that delete all of yesterday’s user objects and import today’s. Since we don’t care about deleted objects, we reduced the tombstoneLifetime to 3 days. The NTDS.DIT usage, as shown by the 1646 Garbage Collection Event ID, shows 1336MB free with a total allocation of 1550MB – this would suggest that there is a total of 214MB of data in the database.

The problem is that Task Manager shows a total of 1,341,208K of Memory (Private Working Set) in use. The memory usage is reduced to around the 214MB size when LDS is restarted; however, when Garbage Collection runs the memory usage starts to climb. I have read many KB articles regarding GC but nothing explains what I am seeing here.

Answer

Generally speaking, LSASS (and DSAMAIN, its red-headed AD LDS cousin) is designed to allocate and retain more memory – especially ESE (aka “Jet”) cache memory – than ordinary processes, because LSASS/DSAMAIN are the core processes of a DC or AD LDS server. I would expect memory usage to grow heavily during the import, the deletions, and then garbage collection; unless something else puts pressure on the machine for memory, I’d expect the memory usage to remain. That’s how well-written Jet database applications work – they don’t give back the memory unless someone asks, because LSASS and Jet can reuse it much faster when needed if it’s already loaded; why return memory if no one wants it? That would be a performance bug unto itself.

The way to show this in practical terms is to start some other high-memory process and validate that DSAMAIN starts to return the demanded memory. There are test applications like this on the internet, or you can install some app that likes to gobble a lot of RAM. Sometimes I’ll just install Wireshark and load a really big saved network capture – that will do it in a pinch. :-D You can also use the ESE performance counters under the “Database” and “Database ==> Instances” to see more about how much of the memory usage is Jet database cache size.

Regular DCs have this behavior too, as do DFSR and other applications. You paid for all that memory; you might as well use it.

(Follow up from the customer where he provided a useful PowerShell “memory gobbler” example)

I ran the following Windows PowerShell script a few times to consume all available memory and the DSAMAIN process started releasing memory immediately as expected:

$chunk = "XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX"
# Keep doubling the string until the process exhausts available memory
for ($i = 0; $i -lt 5000; $i++) {
       $chunk += $chunk
}

Question

When I migrate users from Windows 7 to Windows 7 using USMT 4.0, their pinned and automatic taskbar jump lists are lost. Is this expected?

Answer

Yes. For those poor $#%^&#s readers still using XP, Windows 7 introduced application taskbar pinning and a special menu called a jump list:

image

Pinned and Recent jump lists are not migrated by USMT, because the built-in OS Shell32 manifest called by USMT (c:\windows\winsxs\manifests\*_microsoft-windows-shell32_31bf3856ad364e35_6.1.7601.17514_non_ca4f304d289b7800.manifest) contains this specific criterion:

<pattern type="File">%CSIDL_APPDATA%\Microsoft\Windows\Recent [*]</pattern>

Note how it is not Recent\* [*], which would grab the subfolder contents of Recent. It only copies the direct file contents of Recent. The pinned/automatic jump lists are stored in special files under the CustomDestinations and AutomaticDestinations folders inside the Recent folder. All the other contents of Recent are shortcut files to recently opened documents anywhere on the system:

image

If you examine these special files, you'll see that they are binary, unreadable, and totally proprietary:

image

Since these files are binary and embed all their data in a big blob of goo, they cannot simply be copied safely between operating systems using USMT. The paths they reference could easily change in the meantime, or the data they reference could have been intentionally skipped. The only way this would work is if the Shell team extended their shell migration plugin code to handle it. Which would be a fair amount of work, and at the time these manifests were being written, customers were not going to be migrating from Win7 to Win7. So no joy. You could always try copying them with custom XML, but I have no idea if it would work at all and you’re on your own anyway – it’s not supported.

Question

We have a third party application that requires DES encryption for Kerberos. It wasn’t working from our Windows 7 clients, so we enabled the security group policy “Network security: Configure encryption types allowed for Kerberos” to allow DES. After that, though, these Windows 7 clients stopped working in many other operations, with event log errors like:

Event ID: 4
Source: Kerberos
Type: Error
"The kerberos client received a KRB_AP_ERR_MODIFIED error from the server host/myserver.contoso.com. This indicates that the password used to encrypt the kerberos service ticket is different than that on the target server. Commonly, this is due to identically named machine accounts in the target realm (domain.com), and the client realm. Please contact your system administrator."

And “The target principal name is incorrect” or “The target account name is incorrect” errors connecting to network resources.

Answer

When you enable DES on Windows 7, you need to ensure you are not accidentally disabling the other cipher suites. So don’t do this:

image

That means only DES is supported and you just disabled RC4, AES, etc.

Instead, do this:

image

If it exists at all and you want DES, you want this registry DWORD value to be 0x7fffffff on Windows 7 or Win2008 R2:

MACHINE\Software\Microsoft\Windows\CurrentVersion\Policies\System\Kerberos\Parameters\
SupportedEncryptionTypes

If it’s set to 0x3, all heck will break loose. This security policy interface is admittedly tiresome in that it has no “enabled/disabled” toggle. Use GPRESULT /H or /Z to see how it’s applying if you’re not sure about the actual settings.
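A quick way to sanity-check the resulting value on a client (an error here just means the value isn't set, so the OS defaults apply):

Get-ItemProperty -Path 'HKLM:\SOFTWARE\Microsoft\Windows\CurrentVersion\Policies\System\Kerberos\Parameters' -Name SupportedEncryptionTypes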

Other Stuff

Windows 8 Consumer Preview releases February 29th, as if you didn’t already know it. Don’t ask me if this also means Windows Server 8 Beta the same exact day, I can’t say. But it definitely means the last 16 months of my life finally start showing some results. As will this blog…

Apparently we’ve been wrong about Han and Greedo since day one. I want to be wrong though. Thanks for passing this along Tony. And speaking of which, thanks to Ted O and the rest of the gang at LucasArts for the awesome tee!

This is a … creepily good music video? Definitely a nice find, Mark!


This is basically my home video collection

My new favorite site of the week? The Awesomer. Do not visit if you have to be somewhere in an hour.

Wait, no… my new favorite site is That’s Nerdaliscious. Do not read if hungry or dorky. 

Sick of everyone going on about Angry Birds? Love Chuck Norris? Go here now. There are a lot of these; don't miss Mortal Combat versus Donkey Kong.

Ah, there’s Waldo.

Likely the coolest advertisement for something that doesn’t yet exist that you will see this year.


I need to buy stock in SC Johnson. Can you imagine the Windex sales?!

Until next time.

- Ned “Generation X” Pyle with Jonathan “The Greatest Generation” Stephens

Saturday Mail Sack: Because it turns out, Friday night was alright for fighting edition


Hello all, Ned here again with our first mail sack in a couple months. I have enough content built up here that I actually created multiple posts, which means I can personally guarantee there will be another one next week. Unless there isn't!

Today we answer your questions around:

One side note: as I was groveling old responses, I came across a handful of emails I'd overlooked and never responded to; <insert various excuses here>. People who know me know that I don’t ignore email lightly. Even if I hadn't the foggiest idea how to help, I'd have at least responded with a "Duuuuuuuuuuurrrrrrrr, no clue, sorry".

Therefore, I'll make you a deal: if you sent us an email in the past few months and never heard back, please resend your question and I'll answer it as best I can. That way I don’t spend cycles answering something you already figured out later, but if you’re still stuck, you have another chance. Sorry about all that - what with Windows 8 work, writing our internal support engineer training, writing public content, Jonathan having some kind of south pacific death flu, and presenting at internal conferences… well, only the usual insane Microsoft Office clipart can sum up why we missed some of your questions:

clip_image002

On to the goods!

Question

Is it possible to create a WMI Filter that detects only virtual machines? We want a group policy that will apply specifically to our virtualized guests.

Answer

Totally possible for Hyper-V virtual machines: You can use the WMI class Win32_ComputerSystem with a property of Model like “Virtual Machine” and property Manufacturer of “Microsoft Corporation”. You can also use class Win32_BaseBoard for the Product property, which will be “Virtual Machine” and property Manufacturer that will be “Microsoft Corporation”.

image

Technically speaking, this might also capture Virtual PC machines, but I don’t have one handy to see, and I doubt you are allowing those to handle production workloads anyway. As for EMC VMWare, Citrix Xen, KVM, Oracle Virtual Box, etc. you’ll have to see what shows for Win32_BaseBoard/Win32_ComputerSystem in those cases and make sure your WMI filter looks for that too. I don’t have any way to test them, and even if I did, I'd still make you do it out of spite. Gimme money!
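A quick, hedged way to see what any given guest reports - run this on a guest of each virtualization product you own and plug the results into your filter:

Get-WmiObject Win32_ComputerSystem | Select-Object Manufacturer, Model
Get-WmiObject Win32_BaseBoard | Select-Object Manufacturer, Product

For Hyper-V guests, the resulting WMI filter query would then be SELECT * FROM Win32_ComputerSystem WHERE Manufacturer = "Microsoft Corporation" AND Model = "Virtual Machine".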

Which reminds me - Tad is back:

image

Question

The Understand and Troubleshoot AD DS Simplified Administration in Windows Server "8" Beta guide states:

Microsoft recommends that all domain controllers provide DNS and GC services for high availability in distributed environments; these options default to on when installing a domain controller in any mode or domain.

But when I run Install-ADDSDomainController -DomainName corp.contoso.com -whatif it returns that the cmdlet will not install the DNS Server (DNS Server: No).

If Microsoft recommends that all domain controllers provide DNS, why do I need to specify -InstallDNS argument?

Answer

The DNS Server: No line is a cosmetic issue with the -whatif output. It should say Yes, but doesn't unless you specifically pass -installdns:$true. You don't have to specify -installdns; the cmdlet will automatically* install the DNS server unless you specify -installdns:$false.

* If you are using Windows DNS on domain controllers, that is. The UTG isn't totally accurate in this version (but will be in the next). The logic is that if that domain already hosts the DNS, all subsequent DCs will also host the DNS by default. So to be very specific:

1. New forest: always install DNS
2. New child or new tree domain: if the parent/tree domain hosts DNS, install DNS
3. Replica: if the current domain hosts DNS, install DNS

Question

How can I disable a user on all domain controllers, without waiting for (or forcing) AD replication?

Answer

The universal in-box way that works in all operating systems would be to use DSMOD.EXE USER and feed it the DC names in a list. For example:

1. Create a text file that contains all of the DCs in your forest, in a line-separated list:

2008r2-01
2008r2-02

2. Run a FOR loop command to read that list and disable the specified user against each domain controller.

FOR /f %i IN (some text file) DO dsmod user "some DN" -disabled yes -s %i

For instance:

image

You also have the AD PowerShell option in your Win2008 R2 DC environment, and it’s much easier to automate and maintain. You just tell it the domain controllers' OU and the user and let it rip:

get-adcomputer -searchbase "your DC OU" -filter * | foreach {disable-adaccount "user logon ID" -server $_.dnshostname}

For instance:

image

If you weren't strictly opposed to AD replication (short circuiting it like this isn't going to stop eventual replication traffic), you could always disable the user on one DC and then force just that single object to replicate to all the other DCs. Check out repadmin /replsingleobj or the new Windows Server "8" Beta Sync-ADObject cmdlet.

image
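The repadmin flavor looks roughly like this - destination DC, then source DC, then the DN of the object you just disabled (all names here are placeholders):

repadmin /replsingleobj 2008r2-02 2008r2-01 "CN=someguy,CN=Users,DC=corp,DC=contoso,DC=com"

That pushes the one object immediately instead of waiting for the normal replication schedule.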

 The Internet also has many further thoughts on this. It's a very opinionated place.

Question

We have found that modifying the security on a DFSR replicated folder and its contents causes a big DFSR replication backlog. We need to make these permissions changes though; is there any way to avoid that backlog?

Answer

Not the way you are doing it. DFSR has to replicate changes and you are changing every single file; after all, how can you trust a replication system that does not replicate? You could consider changing permissions "from the bottom up" - where you modify perms on lower level folders first - in some sort of staged fashion to minimize the amount of replication that has to occur, but it just sounds like a recipe to get things wrong or end up replicating things twice, making it worse. You will just have to bite the bullet in Windows Server 2008 R2 and older DFSR. Do it on a weekend and next time, treat this as a lesson learned and plan your security design better so that all of your user base fits into the model using groups.

However…

It is a completely different story if you switch to Windows Server "8" Beta - well really, the RTM version when it ships. There you can use Central Access Policies (similar to Windows Server 2008 R2's global object access auditing). This new kind of security system is part of the Dynamic Access Control feature and abstracts the user access from NTFS, meaning you can change security using claims policy and not actually change the files on the disk (under some but not all circumstances - more on this when I write a proper post after RTM). It's amazing stuff; in my opinion, DAC is the first truly huge change in Windows file access control since Windows NT gave us NTFS.

image

Central Access Policy is not a trivial thing to implement, but this is the future of file servers. Admins should seriously evaluate this feature when testing Windows Server "8" Beta in their lab environments and thinking about future designs. Our very own Mike Stephens has written at length about this in the Understand and Troubleshoot Dynamic Access Control in Windows Server "8" Beta guide as well.

Question

[Perhaps interestingly to you the reader, this was my question to the developers of AD PowerShell. I don’t know everything after all… - Ned]

I am periodically seeing error "invalid enumeration context" when querying the Redmond domain using get-adcomputer. It’s a simple query to return all the active Windows 8 and Windows Server "8" computers that were logged into since February 15th and write them to a CSV file:

image

It runs for quite a while and sometimes works, sometimes fails. I don’t find any well-explained reference to what this error means or how to avoid it, but it smells like a “too much data asked for over too long a period of time” kind of issue.

Answer

The enumeration contexts do have a finite hardcoded lifetime and you will get an error if they expire. You might see this error when executing searches that search a huge quantity of data using limited indexed attributes and return a small data set. If we hit a DC that is not very busy then the query will run faster and could have enough time to complete for a big dataset like this query. Server hardware would also be a factor here. You can also try searching starting at a deeper level. You could also tweak the indexes, although obviously not in this case.
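For reference, a rough sketch of a query of that shape - the original was only shown in a screenshot, so the date, OS name, and output file below are illustrative assumptions:

$cutoff = (Get-Date '2012-02-15').ToFileTime()
Get-ADComputer -Filter "OperatingSystem -like 'Windows 8*' -and lastLogonTimestamp -ge $cutoff" -Properties OperatingSystem,lastLogonTimestamp | Export-Csv Win8Machines.csv -NoTypeInformation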

[For those interested, when the query worked, it returned roughly 75,000 active Windows 8 family machines from that domain alone. Microsoft dogfoods in production like nobody else, baby - Ned]

Question

Is there any chance that DFSR could lock a file while it is replicating outbound and prevent user access to their data?

Answer

DFSR uses the BackupRead() function when copying a file into the staging folder (i.e. any file over 64KB, by default), so that should prevent any “file in use” issues with applications or users; the file "copying" to the staging folder is effectively instantaneous and non-exclusive. Once staged and marshaled, the copy of the file is replicated and no user has any access to that version of the file.

For a file under 64KB, it is simply replicated without staging and that operation of making a copy and sending it into RPC is so fast there’s no reasonable way for anyone to ever see any issues there. I have certainly never seen it, for sure, and I should have by now after six years.

Question

Why does TechNet state that USMT 4.0 offline migrations don’t work for certain OS settings? How do I figure out the complete list?

Answer

Manifests that use migration plugin DLLs aren’t processed when running offline migrations. It's just a by-design limitation of USMT, not a bug or anything. To see which manifests you need to examine and consider creating custom XML to handle, review the complete list at Understanding what the USMT 4.0 CONFIG manifests migrate (Part 1: Introduction).

Question

One of my customers has found that the "Everyone" group is added to the below folders in Windows 2003 and Windows 2008:

Windows Server 2008

C:\ProgramData\Microsoft\Crypto\DSS\MachineKeys

C:\ProgramData\Microsoft\Crypto\RSA\MachineKeys

Windows Server 2003

C:\Documents and Settings\All Users\Application Data\Microsoft\Crypto\DSS\MachineKeys

C:\Documents and Settings\All Users\Application Data\Microsoft\Crypto\RSA\MachineKeys

1. Can we remove the "Everyone" group and give permissions to another group like - Authenticated users for example?

2. Will replacing that default cause issues?

3. Why is this set like this by default?

Answer

[Courtesy of:

image

]

These permissions are intentional. They are intended to allow any process to generate a new private key, even an Anonymous one. You'll note that the permissions on the MachineKeys folder are limited to the folder only. Also, you should note that inheritance has been disabled, so the permissions on the MachineKeys folder will not propagate to new files created therein. Finally, the key generation code itself modifies the permissions on new key container files before the private key is actually written to the container file.

In short, messing with these permissions will probably lead to failures in creating or accessing keys belonging to the computer. So please don't touch them.

1. Exchanging Authenticated Users with Everyone probably won't cause any problems. Microsoft, however, doesn't test cryptographic operations after such a permission change; therefore, we cannot predict what will happen in all cases.

2. See my answer above. We haven't tested it. We have, however, been performing periodic security reviews of the default Windows system permissions, tightening them where possible, for the last decade. The default Everyone permissions on the MachineKeys folder have cleared several of these reviews.

3. In local operations, Everyone includes unidentified or anonymous users. The theory is that we always want to allow a process to generate a private key. When the key container is actually created and the key written to it, the permissions on the key container file are updated with a completely different set of default permissions. All the default permissions allow are the ability to create a file, read and write data. The permissions do not allow any process except System to launch any executable code.

Question

If I specify a USMT 4.0 config.xml child node to prevent migration, I am still seeing the settings migrate. But if I set the parent node, those settings do not migrate. The consequence being that no child nodes migrate, which I do not want.

For example, on XP the Dot3Svc service is set to Manual startup.  On Win7, I want the Dot3Svc service set to Automatic startup.  If I use this config.xml on the loadstate, the service is set to manual like the XP machine and my "no" setting is ignored:

<component displayname="Networking Connections" migrate="yes" ID="network_and_internet\networking_connections">
  <component displayname="Microsoft-Windows-Wlansvc" migrate="yes" ID="<snip>"/>
  <component displayname="Microsoft-Windows-VWiFi" migrate="yes" ID="<snip>"/>
  <component displayname="Microsoft-Windows-RasConnectionManager" migrate="yes" ID="<snip>"/>
  <component displayname="Microsoft-Windows-RasApi" migrate="yes" ID="<snip>"/>
  <component displayname="Microsoft-Windows-PeerToPeerCollab" migrate="yes" ID="<snip>"/>
  <component displayname="Microsoft-Windows-Native-80211" migrate="yes" ID="<snip>"/>
  <component displayname="Microsoft-Windows-MPR" migrate="yes" ID="<snip>"/>
  <component displayname="Microsoft-Windows-Dot3svc" migrate="no" ID="<snip>"/>
</component>

Answer

Two different configurations can cause this symptom:

1. You are using a config.xml file created on Windows 7, then running it on a Windows XP computer with scanstate /config

2. The source computer was Windows XP and it did not have a config.xml file set to block migration.

When coming from XP, where downlevel manifests were used, loadstate does not process those differently-named child nodes on the destination Win7 computer. So while the parent node set to NO would work, the child nodes would not, as they have different displayname and ID.

It’s a best practice to use a config.xml in scanstate as described in http://support.microsoft.com/kb/2481190, if going from x86 to x64; otherwise, you end up with damaged COM settings. Otherwise, you only need to generate per-OS config.xml files if you plan to change default behavior. All the manifests run by default if there is a config.xml with no modifications or if there is no config.xml at all.

Besides being required for XP to block settings, you should also definitely lean towards using config.xml on the scanstate rather than the loadstate. If using Vista to Vista, Vista to 7, or 7 to 7, you could use the config.xml on either side, but I’d still recommend sticking with the scanstate; it’s typically better to block migration from adding things to the store, as it will be faster and leaner.

Other Stuff

[Many courtesy of our pal Mark Morowczynski -Ned]

Happy belated 175th birthday Chicago. Here's a list of things you can thank us for, planet Earth; where would you be without your precious Twinkies!?

Speaking of Chicago…

All the new MCSE and certification news reminded me of the other side to that coin.

Do you know where your nearest gun store is located? Map of the Dead does. Review now; it will be too late when the zombies rise from their graves, and I don't plan to share my bunker, Jim.

image

If you call yourself an IT Pro, you owe it to yourself to visit moviecarposters.com right now and buy… everything. They make great alpha geek conversation pieces. To get things started, I recommend these:

clip_image002[6]clip_image004clip_image006
Sigh - there is never going to be another Firefly

And finally…

I started re-reading Terry Pratchett, picking up from where I left off as a kid. Hooked again. Damn you English writers, with your understated awesomeness!

Ok, maybe not all English Writers…

image

Until next time,

- Ned "Jonathan is seriously going to kill me" Pyle

Friday Mail Sack: Mothers day pfffft… when is son’s day?


Hi folks, Ned here again. It’s been a little while since the last sack, but I have a good excuse: I just finished writing a poop ton of Windows Server 2012 depth training that our support folks around the world will use to make your lives easier (someday). If I ever open MS Word again it will be too soon, and I’ll probably say the same thing about PowerPoint by June.

Anyhoo, let’s get to it. This week we talk about:

Question

Is it possible to use any ActiveDirectory module cmdlets through invoke-command against a remote non-Windows Server 2012 DC where the module is installed? It always blows up for me as it tries to “locally” (remotely) use the non-existent ADWS with error “Unable to contact the server. This may be because the server does not exist, it is currently down, or it does not have the active directory web services running”

image

Answer

Yes, but you have to ignore that terribly misleading error and put your thinking cap on: the problem is your credentials. When you invoke-command, you make the remote server run the local PowerShell on your behalf. In this case that remote command has to go off-box to yet another remote server – a DC running ADWS. This means a multi-hop credential scenario. Provide –credential (get-credential) to your called cmdlets inside the curly braces and it’ll work fine.
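A minimal sketch of that pattern (the computer and user names are examples; passing the credential via -ArgumentList keeps it working on PowerShell 2.0, which lacks $using:):

$c = Get-Credential CORP\someadmin
Invoke-Command -ComputerName member01 -ScriptBlock {
    param($cred)
    Import-Module ActiveDirectory
    Get-ADUser -Identity someguy -Credential $cred
} -ArgumentList $c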

Question

We are using a USMT /hardlink migration to preserve disk space and increase performance. However, performance is crazy slow and we’re actually running out of disk space on some machines that have very large files like PSTs. My scanstate log shows:

Error [0x000000] Write error 112 for C:\users\ned\Desktop [somebig.pst]. Windows error 112 description: There is not enough space on the disk.[gle=0x00000070]

Error [0x080000] Error 2147942512 while gathering object C:\users\ned\Desktop\somebig.pst. Shell application requested abort![gle=0x00000070]

Answer

These files are encrypted and you are using /efs:copyraw instead of /efs:hardlink. Encrypted files are copied into the store whole instead of hardlink'ing, unless you specify /efs:hardlink. If you had not included /efs, this file would have failed with, "File X is encrypted. Use the /efs option to specify a different way to handle this file".

Yes, I realize that we should probably just require that option. But think of all the billable hours we just gave you!

Question

I was using your DFSR pre-seeding post and am finding that robocopy /B slows down my migration compared to not using it. Is that switch required for preseeding?

Answer

The /B mode, while inherently slower, ensures that files are copied using a backup API regardless of permissions. It is the safest way, so I took the prudent route when I wrote the sample command. It’s definitely expected to be slower – in my semi-scientific repros, the difference was ~1.75 times slower on average.

However, /B is not required if you are 100% sure you have at least READ permissions to all files. The downside here is that a lot of failures due to permissions might end up making things even slower than just going with /B; you will have to test it.

If you are using Windows Server 2012 and have plenty of hardware to back it up, you can use the following options that really make the robocopy fly, at the cost of memory, CPU, and network utilization (and possibly, some files not copying at all):

Robocopy <foo> <bar> /e /j /copyall /xd dfsrprivate /log:<sna.foo> /tee /mt:128 /r:1

For those that have used this before, it will look pretty similar – but note:

  • Adds /J option (first introduced in Win8 robocopy) - now performs unbuffered IO, which means gigantic files like ISO and VHD really fly and a 1Gbps network is finally heavily utilized. Adds significant memory overhead, naturally.
  • Adds /MT:128 to use 128 simultaneous file copy threads. Adds CPU overhead, naturally.
  • Removes /B and drops /R:6 to /R:1 in order to guarantee the fastest copy method. Make sure you review the log and recopy any failures individually, as you are now skipping any files that failed to copy on the first try.

 

Question

Recently I came across a user account that keeps locking out (yes, I've read several of your blogs where you say account lockout policies are bad: "Turning on account lockouts is a way to guarantee someone with no credentials can deny service to your entire domain"). We get the Event ID 4740 saying the account has been locked out, but the calling computer name is blank:

 

Log Name:     Security
Event ID:     4740
Level:        Information
Description:

A user account was locked out.

Subject:
   Security ID:     SYSTEM
   Account Name:    someaccount
   Account Domain:  somedomain
   Logon ID:        0x3e7

Account That Was Locked Out:
   Security ID:     somesid
   Account Name:    someguy

Additional Information:
   Caller Computer Name:

 

The 0xC000006A status indicates a bad password attempt. This happens every 5 minutes and eventually results in the account being locked out. We can see that the bad password attempts are coming via COMP1 (which is a proxy server), but we can't work out what is sending the requests to COMP1, as the calling computer name there is blank too (there should be a computer name).

Are we missing something here? Is there something else we could be doing to track this down? Is the blank calling computer name indicative of some other problem, or does it perhaps just mean the calling device is a non-Microsoft device?

Answer

(I am going to channel my inner Eric here):

A blank computer name is not unexpected, unfortunately. The audit system relies on the sending computer to provide that information as part of the actual authentication attempt, and Kerberos does not have a reliable way to provide the remote computer info in many cases. Name resolution info about a sending computer is also easily spoofed. This is especially true with transitive NTLM logons, where we rely on one computer to provide info for another computer. NTLM provides names, but they too are easily spoofed, so even when you see a computer name in auditing, you are mainly asking an honest person to tell you the truth.

Since it happens very frequently and predictably, I’d configure a network capture on the sending server to run in a circular fashion, then stop the capture when the lockout occurs. You’d see all of the traffic and know exactly who sent it. If the lockout were longer running and less predictable, I’d still use a circular capture and let it run until that 4740 event writes; then you can see the sending IP address and hunt down that machine. One way to set that up is shown below.
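For example, the built-in netsh trace (available in Windows 7/2008 R2 and later) can run a circular capture until you stop it – a minimal sketch, with an arbitrary file path and size:

netsh trace start capture=yes filemode=circular maxsize=512 tracefile=C:\temp\lockout.etl
rem ...wait for the next 4740 event, then:
netsh trace stop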

[And the customer later noted that since it’s a proxy server, it has lots of logs – and they told him the offender]

Question

I am testing USMT 5.0 and finding that if I migrate certain Windows 7 computers to Windows 8 Consumer Preview, Modern Apps won’t start. Some show errors; some just start and then go away.

Answer

Argh. The problem here is Windows 7’s built-in manifest that implements microsoft-windows-com-base, which copies this registry key:

HKEY_LOCAL_MACHINE\Software\Microsoft\OLE

If the DCOM permissions have been modified in that key, they migrate over and interfere with the permissions Modern Apps need to run. This is a known issue, already fixed so that we no longer copy those values onto Windows 8. Copying them was never a good idea in the first place, as any application needing special permissions will just set its own when installed anyway.
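
If you hit this with a USMT build that predates the fix, one possible workaround – a sketch only; the urlid and display name here are made up, and you should verify it actually suppresses the manifest-gathered values in your build – is a custom XML that unconditionally excludes the key, passed to scanstate with /i:

<migration urlid="http://www.microsoft.com/migration/1.0/migxmlext/excludeole">
  <component type="Documents" context="System">
    <displayName>Exclude OLE DCOM permission values</displayName>
    <role role="Data">
      <rules>
        <unconditionalExclude>
          <objectSet>
            <pattern type="Registry">HKLM\Software\Microsoft\OLE\* [*]</pattern>
          </objectSet>
        </unconditionalExclude>
      </rules>
    </role>
  </component>
</migration>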

And it’s burned us in the past too…

Question

Are there any available PowerShell, WMI, or command-line options for configuring an OCSP responder? I know that I can install the feature with Add-WindowsFeature, but I'd like to script configuring the responder and creating the array.

Answer

[Courtesy of Jonathan “oh no, feet!” Stephens – Ned]

There are currently no command-line tools or dedicated PowerShell cmdlets available to perform management tasks on the Online Responder. You can, however, use the COM interfaces IOCSPAdmin and IOCSPCAConfiguration to manage the revocation providers on the Online Responder.

  1. Create an IOCSPAdmin object.
  2. The IOCSPAdmin::OCSPCAConfigurationCollection property will return an IOCSPCAConfigurationCollection object.
  3. Use IOCSPCAConfigurationCollection::CreateCAConfiguration to create a new revocation provider.
  4. Make sure you call IOCSPAdmin::SetConfiguration when finished so the Online Responder gets updated with the new revocation configuration.

Because these are COM interfaces, you can call them from VBScript or PowerShell, so you have great flexibility in how you write your script.
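
A rough PowerShell sketch of those steps – the ProgID and member names below follow the IOCSPAdmin documentation, but treat them as assumptions and test against your own responder:

# Hypothetical responder name; $true forces a refresh from the server.
$responder = "ocsp01.contoso.com"
$admin = New-Object -ComObject "CertAdm.OCSPAdmin"      # IOCSPAdmin
$admin.GetConfiguration($responder, $true)              # load current config

# Enumerate the existing revocation configurations.
$configs = $admin.OCSPCAConfigurationCollection
$configs | ForEach-Object { $_.Identifier }

# Create a new revocation provider from a CA certificate's raw bytes.
$caCert = Get-ChildItem Cert:\LocalMachine\CA | Select-Object -First 1
$new = $configs.CreateCAConfiguration("Contoso Issuing CA", $caCert.RawData)

# Commit the new revocation configuration back to the responder.
$admin.SetConfiguration($responder, $true)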

Question

I want to use Windows Desktop Search with DFS Namespaces, but according to this TechNet Forum thread it’s not possible to add remote indexes on namespaces. What say you?

Answer

There is no DFSN+WDS remote index integration in any OS, including Windows 8 Consumer Preview. At its heart, this comes down to a massive architectural change in WDS that just hasn’t gotten traction. You can still point to the individual folder targets as remote indexes, naturally.

Question

Certain files that end with invalid characters like a dot or a space – as pointed out here by AlexSemi – break USMT migration. One way to create these files is to echo into a device path, like so:

[screenshot: echo command redirecting output through a device path]
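
(The exact command in that screenshot isn’t preserved; as a reconstruction with a hypothetical path, redirecting through the Win32 device namespace from cmd.exe bypasses the normalization that would otherwise strip the trailing dot:)

echo test > \\?\C:\temp\badfile.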

These files can’t be opened by anything in Windows, it seems.

[screenshot: Windows refusing to open the file]

When you try to migrate, you end up with a fatal “Windows error 2: the system cannot find the file specified” error unless you skip the files using /c:

[screenshot: scanstate failing with Windows error 2]

What gives?

Answer

Quit making invalid files! :-)

USMT didn’t invent CreateFile(), so its options here are rather limited… USMT 5.0 handles this case correctly through error control: it skips these files when hardlinking, because Windows reports that they “don’t exist”. Here is my scanstate log using the USMT 5.0 beta, where I used /hardlink and did NOT provide /c:

[screenshot: scanstate log showing the invalid files being skipped]

In the non-hardlink case, scanstate copies them without their invalid names, and they become valid files with no trailing dots or spaces (even in USMT 4.0). Making it copy these files with their actual invalid names would require a complete re-architecting of USMT or the Win32 file APIs. And why – so that everyone could continue to not open them?

Other Stuff

In case you missed it, Windows 8 Enterprise Edition details. With all the new licensing and activation goodness, Enterprise versions are finally within reach of any size customer. Yes, that means you!

Very solid Mother’s Day TV mash up (a little sweary, but you can’t fight something that combines The Wire, 30 Rock, and The Cosbys)

Zombie mall experience. I have to fly to Reading in June to teach… this might be on the agenda

Well, it’s about time - Congress doesn't "like" it when employers ask for Facebook login details

Your mother is not this awesome:

[photo: a Skyrim birthday cake]
That, my friend, is a Skyrim birthday cake

SportsCenter wins again (thanks Mark!)

Don’t miss the latest Between Two Ferns (veeerrrry sweary, but Zach Galifianakis at his best; I just wish they’d add the Tina Fey episode)

But what happens if you eat it before you read the survival tips, Land Rover?!

 

Until next time,

- Ned “demon spawn” Pyle
