ShellShock/BashBug – Bash Vulnerability Affects Linux, Unix, & Mac OSX

10/01/2014 Updates:

Updated (9/29/2014):

Overview
This is a “game-over” type of vulnerability due to the ubiquity of bash on *nix systems.

Fixing it involves recompiling bash or downloading a patched version from the OS vendor/provider.

It’s easy to attack and there is no authentication required when exploiting Bash via CGI scripts.
http://www.troyhunt.com/2014/09/everything-you-need-to-know-about.html

“GNU Bash through 4.3 processes trailing strings after function definitions in the values of environment variables, which allows remote attackers to execute arbitrary code via a crafted environment, as demonstrated by vectors involving the ForceCommand feature in OpenSSH sshd, the mod_cgi and mod_cgid modules in the Apache HTTP Server, scripts executed by unspecified DHCP clients, and other situations in which setting the environment occurs across a privilege boundary from Bash execution.”
http://web.nvd.nist.gov/view/vuln/detail?vulnId=CVE-2014-6271

“Trend Micro describes this vulnerability as “plague-like,” dwarfing Heartbleed, and hitting “approximately a half-billion Web servers and other Internet-connected devices.” Shellshock gives attackers command access to Linux- and UNIX-based systems that use Bash. Therefore, industry experts say, there are a huge number of potential attack vectors — Mac OSX devices, Android devices, OpenBSD, DHCP clients, SSH servers, web servers using CGI or Apache (including hosting servers), home routers, Bitcoin Core, and embedded systems in other Internet of Things objects like medical devices, digital cameras, and televisions.”
http://www.darkreading.com/shellshock-bash-bug-impacts-basically-everything-exploits-appear-in-wild/d/d-id/1316064?_mc=sm_dr

Vulnerable Systems & Patches:
“The vulnerability affects versions 1.14 through 4.3 of GNU Bash. Patches have been issued by many of the major Linux distribution vendors for affected versions, including:

  • Red Hat Enterprise Linux (versions 4 through 7) and the Fedora distribution
  • CentOS (versions 5 through 7)
  • Ubuntu 10.04 LTS, 12.04 LTS, and 14.04 LTS
  • Debian
  • Mac OS X

Akamai is patched: https://blogs.akamai.com/2014/09/environment-bashing.html

VMWare is looking into it: http://blogs.vmware.com/security/2014/09/vmware-investigating-bash-command-injection-vulnerability-aka-shell-shock-cve-2014-6271-cve-2014-7169.html

F5 BIG-IP is not known to be vulnerable, but is patching anyway:  https://devcentral.f5.com/articles/cve-2014-6271-shellshocked

How to manually patch Apple Bash

http://apple.stackexchange.com/questions/146849/how-do-i-recompile-bash-to-avoid-the-remote-exploit-cve-2014-6271-and-cve-2014-7/146851#146851

Lots and lots of *nix systems are vulnerable: http://blog.erratasec.com/2014/09/bash-shellshock-scan-of-internet.html

Remediation:

Remediation is obviously going to be most successful by applying patches to affected systems. Check with relevant vendors for updated information. This is also an opportunity to review systems for unused services, like FTP, Telnet, and DHCPd, and disable them when they are not required. As of writing, there is no word on a patch for OSX, but a workaround exists if one is willing to recompile Bash on vulnerable OSX systems.

Prevention capabilities will evolve as various exploits are made public or discovered in the wild. As of writing, the most critical attack vector is via Apache mod_cgi scripts, as several working exploits can be found on the web and a Metasploit module has already been developed to exploit it. This attack vector can be mitigated with mod_security rules published by Red Hat, with F5 LineRate scripts (and of course there's an F5 BIG-IP iRule for that), and Cisco has updated signatures to detect and block attacks.

Finally, be advised that many embedded systems like SOHO routers and consumer electronics like Network Attached Storage (NAS) devices are likely vulnerable. Further, these devices could be difficult or even impossible to patch. Consumers should be on the lookout for firmware updates from manufacturers of these devices, and if NAS or router administration interfaces are connected to the Internet, they should be disconnected to avoid exploitation. These connected devices likely pose the biggest threat, as research is already underway to convert exploits into self-propagating worms.
http://www.guidepointsecurity.com/940/vulnerability-management/how-shocking-is-shellshock/

 

Checking Your Systems:

Scanning for this bug works by sending an HTTP request with a crafted header; on a vulnerable system running a web server, the code embedded in the header executes. This is bad! Errata Security used the following masscan configuration to scan the Internet for vulnerable hosts:

target = 0.0.0.0/0
port = 80
banners = true
http-user-agent = shellshock-scan (http://blog.erratasec.com/2014/09/bash-shellshock-scan-of-internet.html)
http-header[Cookie] = () { :; }; ping -c 3 209.126.230.74
http-header[Host] = () { :; }; ping -c 3 209.126.230.74
http-header[Referer] = () { :; }; ping -c 3 209.126.230.74
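A quick way to spot-check a single web server you are authorized to test is to send the same kind of crafted header and look for the injected output in the response. The sketch below uses PowerShell's Invoke-WebRequest; the URL and CGI path are placeholders, and a clean result does not prove the host is safe (other headers and other services can still be vectors).

# Hypothetical CGI endpoint - replace with a URL you are authorized to test.
$Url = "http://www.example.com/cgi-bin/status.sh"

# Shellshock-style probe: if Bash is vulnerable, the injected echo output comes back in the response.
$Payload = '() { :;}; echo Content-Type: text/plain; echo; echo SHELLSHOCK-TEST'

try
{
    # The User-Agent header is exposed to CGI scripts as the HTTP_USER_AGENT environment variable.
    $Response = Invoke-WebRequest -Uri $Url -UserAgent $Payload -UseBasicParsing -TimeoutSec 10
    if ($Response.Content -match 'SHELLSHOCK-TEST')
    { Write-Output "$Url appears vulnerable (probe output came back in the response)." }
    else
    { Write-Output "$Url did not echo the probe output (not proof that it is safe)." }
}
catch
{
    Write-Output "Request to $Url failed: $($_.Exception.Message)"
}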

A shellshock worm is just a matter of time…
http://blog.erratasec.com/2014/09/bash-shellshock-bug-is-wormable.html

Metasploit has a module for finding & ultimately exploiting this bug: https://community.rapid7.com/community/infosec/blog/2014/09/25/bash-ing-into-your-network-investigating-cve-2014-6271

Run this command to test Bash:
$ env x='() { :;}; echo vulnerable' bash -c "echo this is a test"
If you see a warning or error (and no "vulnerable"), you are probably safe. If the output shows "vulnerable" followed by "this is a test", you are vulnerable.


Technical Detail:

Rapid 7 posted a video describing “How Does Bashbug (AKA shellshock) Work?  How Do I Remediate?”

“What is vulnerable?
This attack revolves around Bash itself, and not a particular application, so the paths to exploitation are complex and varied. So far, the Metasploit team has been focusing on the web-based vectors since those seem to be the most likely avenues of attack. Standard CGI applications accept a number of parameters from the user, including the browser’s user agent string, and store these in the process environment before executing the application. A CGI application that is written in Bash or calls system() or popen() is likely to be vulnerable, assuming that the default shell is Bash.

Secure Shell (SSH) will also happily pass arbitrary environment variables to Bash, but this vector is only relevant when the attacker has valid SSH credentials, but is restricted to a limited environment or a specific command. The SSH vector is likely to affect source code management systems and the administrative command-line consoles of various network appliances (virtual or otherwise).

There are likely many other vectors (DHCP client scripts, etc), but they will depend on whether the default shell is Bash or an alternative such as Dash, Zsh, Ash, or Busybox, which are not affected by this issue.

Modern web frameworks are generally not going to be affected. Simpler web interfaces, like those you find on routers, switches, industrial control systems, and other network devices are unlikely to be affected either, as they either run proprietary operating systems, or they use Busybox or Ash as their default shell in order to conserve memory. A quick review of approximately 50 firmware images from a variety of enterprise, industrial, and consumer devices turned up no instances where Bash was included in the filesystem. By contrast, a cursory review of a handful of virtual appliances had a 100% hit rate, but the web applications were not vulnerable due to how the web server was configured. As a counterpoint, Digital Bond believes that quite a few ICS and SCADA systems include the vulnerable version of Bash, as outlined in their blog post. Robert Graham of Errata Security believes there is potential for a worm after he identified a few thousand vulnerable systems using Masscan. The esteemed Michal Zalewski also weighed in on the potential impact of this issue.
In summary, there just isn’t enough information available to predict how many systems are potentially exploitable today.

The two most likely situations where this vulnerability will be exploited in the wild:

  • Diagnostic CGI scripts that are written in Bash or call out to system() where Bash is the default shell
  • PHP applications running in CGI mode that call out to system() and where Bash is the default shell

Bottom line: This bug is going to affect an unknowable number of products and systems, but the conditions to exploit it are fairly uncommon for remote exploitation.
Update: A DDoS bot that exploits this issue has already been found in the wild by @yinettesys

https://community.rapid7.com/community/infosec/blog/2014/09/25/bash-ing-into-your-network-investigating-cve-2014-6271

Bash supports exporting not just shell variables, but also shell functions to other bash instances, via the process environment to (indirect) child processes.  Current bash versions use an environment variable named by the function name, and a function definition starting with “() {” in the variable value to propagate function definitions through the environment.  The vulnerability occurs because bash does not stop after processing the function definition; it continues to parse and execute shell commands following the function definition. 
For example, an environment variable setting of

  VAR=() { ignored; }; /bin/id
will execute /bin/id when the environment is imported into the bash process.  (The process is in a slightly undefined state at this point. The PATH variable may not have been set up yet, and bash could crash after executing /bin/id, but the damage has already happened at this point.)

The fact that an environment variable with an arbitrary name can be used as a carrier for a malicious function definition containing trailing commands makes this vulnerability particularly severe; it enables network-based exploitation.

So far, HTTP requests to CGI scripts have been identified as the major attack vector.

A typical HTTP request looks like this:

GET /path?query-param-name=query-param-value HTTP/1.1
Host: www.example.com
Custom: custom-header-value

The CGI specification maps all parts to environment variables.  With Apache httpd, the magic string “() {” can appear in these places:

* Host (“www.example.com”, as REMOTE_HOST)
* Header value (“custom-header-value”, as HTTP_CUSTOM in this example)
* Server protocol (“HTTP/1.1”, as SERVER_PROTOCOL)

The user name embedded in an Authorization header could be a vector as well, but the corresponding REMOTE_USER variable is only set if the user name corresponds to a known account according to the
authentication configuration, and a configuration which accepts the magic string appears somewhat unlikely.

In addition, with other CGI implementations, the request method (“GET”), path (“/path”) and query string
(“query-param-name=query-param-value”) may be vectors, and it is conceivable for “query-param-value” as well, and perhaps even “query-param-name”.

The other vector is OpenSSH, either through AcceptEnv variables, TERM or SSH_ORIGINAL_COMMAND.

Other vectors involving different environment variables set by additional programs are expected.”
http://seclists.org/oss-sec/2014/q3/650

In Linux, environment variables provide a way to influence the behavior of software on the system. They typically consist of a name which has a value assigned to it. The same is true of the Bash shell. It is common for many programs to run the Bash shell in the background. It is often used to provide a shell to a remote user (via ssh, telnet, for example), provide a parser for CGI scripts (Apache, etc.) or even provide limited command execution support (git, etc.).

Coming back to the topic, the vulnerability arises from the fact that you can create environment variables with specially-crafted values before calling the bash shell. These variables can contain code, which gets executed as soon as the shell is invoked. The name of these crafted variables does not matter, only their contents. As a result, this vulnerability is exposed in many contexts, for example:

  • ForceCommand is used in sshd configs to provide limited command execution capabilities for remote users. This flaw can be used to bypass that and provide arbitrary command execution. Some Git and Subversion deployments use such restricted shells. Regular use of OpenSSH is not affected because users already have shell access.
  • Apache server using mod_cgi or mod_cgid are affected if CGI scripts are either written in bash, or spawn subshells. Such subshells are implicitly used by system/popen in C, by os.system/os.popen in Python, system/exec in PHP (when run in CGI mode), and open/system in Perl if a shell is used (which depends on the command string).
  • PHP scripts executed with mod_php are not affected even if they spawn subshells.
  • DHCP clients invoke shell scripts to configure the system, with values taken from a potentially malicious server. This would allow arbitrary commands to be run, typically as root, on the DHCP client machine.
  • Various daemons and SUID/privileged programs may execute shell scripts with environment variable values set / influenced by the user, which would allow for arbitrary commands to be run.
  • Any other application which is hooked onto a shell or runs a shell script using Bash as the interpreter. Shell scripts which do not export variables are not vulnerable to this issue, even if they process untrusted content and store it in (unexported) shell variables and open subshells.

Like “real” programming languages, Bash has functions, though in a somewhat limited implementation, and it is possible to put these bash functions into environment variables. This flaw is triggered when extra code is added to the end of these function definitions (inside the environment variable). Something like:

$ env x='() { :;}; echo vulnerable' bash -c "echo this is a test"
 vulnerable
 this is a test

The patch used to fix this flaw ensures that no code is allowed after the end of a Bash function. So if you run the above example with the patched version of bash, you should get output similar to:

 $ env x='() { :;}; echo vulnerable' bash -c "echo this is a test"
 bash: warning: x: ignoring function definition attempt
 bash: error importing function definition for `x'
 this is a test

https://securityblog.redhat.com/2014/09/24/bash-specially-crafted-environment-variables-code-injection-attack/

Looks like the original patch is incomplete:
“Red Hat has become aware that the patch for CVE-2014-6271 is incomplete. An attacker can provide specially-crafted environment variables containing arbitrary commands that will be executed on vulnerable systems under certain conditions. The new issue has been assigned CVE-2014-7169. Red Hat is working on patches in conjunction with the upstream developers as a critical priority.”
https://access.redhat.com/articles/1200223

Red Hat’s review of how this affects common configurations:

  • httpd: CGI scripts are likely affected by this issue: when a CGI script is run by the web server, it uses environment variables to pass data to the script. These environment variables can be controlled by the attacker. If the CGI script calls Bash, the script could execute arbitrary code as the httpd user. mod_php, mod_perl, and mod_python do not use environment variables and we believe they are not affected.
  • Secure Shell (SSH): It is not uncommon to restrict remote commands that a user can run via SSH, such as rsync or git. In these instances, this issue can be used to execute any command, not just the restricted command.
  • dhclient: The Dynamic Host Configuration Protocol Client (dhclient) is used to automatically obtain network configuration information via DHCP. This client uses various environment variables and runs Bash to configure the network interface. Connecting to a malicious DHCP server could allow an attacker to run arbitrary code on the client machine.
  • CUPS: It is believed that CUPS is affected by this issue. Various user supplied values are stored in environment variables when cups filters are executed.
  • sudo: Commands run via sudo are not affected by this issue. Sudo specifically looks for environment variables that are also functions. It could still be possible for the running command to set an environment variable that could cause a Bash child process to execute arbitrary code.
  • Firefox: We do not believe Firefox can be forced to set an environment variable in a manner that would allow Bash to run arbitrary commands. It is still advisable to upgrade Bash as it is common to install various plug-ins and extensions that could allow this behavior.
  • Postfix: The Postfix server will replace various characters with a ?. While the Postfix server does call Bash in a variety of ways, we do not believe an arbitrary environment variable can be set by the server. It is however possible that a filter could set environment variables.

https://access.redhat.com/articles/1200223

 


Azure & Active Directory

 

  • Azure is big. It’s really big. Seriously, it’s hard to comprehend just how big it really is. (Apologies to Douglas Adams.) In July of last year, then-CEO Steve Ballmer stated that Azure data centers held “comfortably over a million physical servers.” Last year, Azure server purchases accounted for 17% of all server purchases worldwide. And Azure is only getting bigger. In May of 2013, Global Foundation Services general manager Christian Belady stated that his division was performing data center build-outs “at a scale no one has ever seen before“. At Tech Ed North America in June, Technical Fellow (and now Azure CTO) Mark Russinovich stated that Microsoft’s plan was to double Azure’s capacity in 2015…and double it again in 2016. Can you even wrap your head around how big that is?
  • Azure AD is at the center of Azure. As Active Directory director of program management Alex Simons puts it, “identity is the control plane” upon which cloud services depend. And for Azure, this control plane is Azure Active Directory.
  • Microsoft is not content to let Azure AD be just a “lowest common denominator” solution. A long-recognized Microsoft product pattern is to provide basic capabilities, and allow a rich independent software vendor ecosystem to enhance these capabilities with their own products. In contrast to this strategy, Simons has a team of 500 working on building out Azure AD with a competitive set of features to compete in the IDaaS (identity management as a service) market. 30 developers are working on machine learning-based reporting alone.

Read the Article at Windows IT Pro

Disarming EMET 5

EMET version 5 has been out for only a few months and Offensive Security has identified bypass methods:

INTRODUCTION

In our previous Disarming Emet 4.x blog post, we demonstrated how to disarm the ROP mitigations introduced in EMET 4.x by abusing a global variable in the .data section located at a static offset. A general overview of the EMET 5 technical preview has been recently published here. However, the release of the final version introduced several changes that mitigated our attack and we were curious to see how difficult it would be to adapt our previous disarming technique to this new version of EMET. In our research we targeted 32-bit systems and compared the results across different operating systems (Windows 7 SP1, Windows 2008 SP1, Windows 8, Windows 8.1, Windows XP SP3 and Windows 2003 SP2). We chose to use the IE8 ColspanID vulnerability once again in order to maintain consistency through our research.

ROP PROTECTIONS CONFIGURATION HARDENING

The very first thing that we noticed is that the global variable we exploited to disarm the ROP Protections (ROP-P) routine is not pointing directly to the ROP-P general switch anymore. This variable, which is now at offset 0x000aa84c from the EMET.dll base address, holds an encoded pointer to a structure of 0x560 bytes (See CONFIG_STRUCT in Fig. 1). The ROP-P general switch is now located at CONFIG_STRUCT+0x558 (Fig. 1, Fig. 2)

Read the rest of the article at Offensive Security.

PowerShell: ADSI and Case Sensitivity

In developing a custom PowerShell script which leveraged ADSI, I noticed that the script wasn’t working properly.

Here’s a sample block of the script which uses ADSI to get changes made to ExtensionAttribute11 as part of an Active Directory Convergence test script:

$ADSITarget = [ADSI]"LDAP://$DC"
$Searcher = New-Object DirectoryServices.DirectorySearcher($ADSITarget,"(sAMAccountName=$ConvergenceObject)")
$ConvergenceObjectData = ($Searcher.FindOne()).properties
$ConvergenceObjectDataValue = (($Searcher.FindOne()).properties).ExtensionAttribute11

I usually use Title Case when typing attributes and the script block above was not populating the variable “$ConvergenceObjectDataValue” with any data even though the attribute had data. I realized after enumerating the variable $ConvergenceObjectData that the attribute name was displayed as extensionattribute11 not ExtensionAttribute11.  After changing line #1 to line #2, it worked:

Line #1:
$ConvergenceObjectDataValue = (($Searcher.FindOne()).properties).ExtensionAttribute11

Line #2:
$ConvergenceObjectDataValue = (($Searcher.FindOne()).properties).extensionattribute11

So, be careful when using ADSI (or any other API) since it may be case sensitive.
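One way to sidestep the case issue (a sketch, reusing the same $DC and $ConvergenceObject variables from the sample above): the property names that DirectorySearcher returns are stored in lower case, so lower-case whatever attribute name you are handed before indexing into the results collection. The $AttributeName variable below is just for illustration.

# Attribute name as it might be typed elsewhere in a script (Title Case).
$AttributeName = "ExtensionAttribute11"

$ADSITarget = [ADSI]"LDAP://$DC"
$Searcher = New-Object DirectoryServices.DirectorySearcher($ADSITarget,"(sAMAccountName=$ConvergenceObject)")
$ConvergenceObjectData = ($Searcher.FindOne()).properties

# The returned property names are lower case, so lower-case the lookup key as well.
$ConvergenceObjectDataValue = $ConvergenceObjectData[$AttributeName.ToLower()]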

Powershell Remote Use of Module Commandlets (Remoting Import-Module)

Practically all of my Powershell scripts use an Active Directory commandlet. Ok, so they use several.  I like to query AD to get environmental information so when I run the script, I know what I am working with from an AD perspective. I can’t help it, I’m an AD Guy.

In order to run the AD commandlets, a Domain Controller in the domain has to be running ADWS (Active Directory Web Services). Windows 2008 R2 Domain Controllers run this by default, and it is available for Windows 2003 and Windows 2008 DCs as the Active Directory Management Gateway Service.

Sometimes, the server I need to run a script on doesn’t have the AD commandlets installed.
The quick solution to this is to run the following:

powershell set-executionpolicy unrestricted
powershell import-module servermanager  ;  add-windowsfeature rsat-ad-powershell

If installing the tools isn't possible, there is a way to import modules that are available on another server. I often use this with Exchange commandlets since I rarely have them installed on the servers I use for running scripts.

Here’s how this works…

# Create a PowerShell remote session to a server with the commandlets installed.
$Session = New-PSSession -ComputerName Server1
# Use the newly created remote PowerShell session to load the module in that session.
Invoke-Command -Command {Import-Module ActiveDirectory} -Session $Session
# Use that session to add the available commandlets to your local PowerShell shell with a new command name prefix.
Import-PSSession -Session $Session -Module ActiveDirectory -Prefix RM

The code above enables the use of Active Directory commandlets on a server that doesn’t have them installed.

Use AD commandlets in the Powershell command shell with modified names based on the -Prefix set above:

Get-RMAdUser  instead of the standard Get-ADUser
Get-RMAdComputer instead of the standard Get-ADComputer

You can also drop the "-Prefix RM" and the native commandlets will be imported using their standard names.

Don Jones calls this Powershell implicit remoting and it is very cool.
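The same pattern works for Exchange, which is where I use it most. A rough sketch (the server name is a placeholder and the ConnectionUri depends on how your Exchange environment is published):

# Connect to the Exchange remoting endpoint on a hypothetical server.
$ExchSession = New-PSSession -ConfigurationName Microsoft.Exchange -ConnectionUri http://exchange01.domain.com/PowerShell/ -Authentication Kerberos
# Import the Exchange commandlets into the local shell with an EX prefix.
Import-PSSession -Session $ExchSession -Prefix EX
# Example: Get-EXMailbox instead of Get-Mailbox.
Get-EXMailbox -ResultSize 25
# Remove the remote session when finished.
Remove-PSSession -Session $ExchSession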

 

PowerShell Code: Find Active Directory Site Containing AD Subnet

Here’s a quick script that returns the site in the Active Directory forest given a subnet (ex. 10.20.30.0).

Match-Subnet2Site.ps1

 


[CmdletBinding()]
Param
(
    [string]$Subnet
)

# Subnet must end in .0 (ex. 10.22.33.0); the commented-out regex matches any IP address.
$IPSubnetRegEx = '\b((25[0-5]|2[0-4][0-9]|1[0-9][0-9]|[1-9]?[0-9])\.){3}(25[0-5]|2[0-4][0-9]|1[0-9][0-9]|0)\b'
# $IPRegEx = '\b((25[0-5]|2[0-4][0-9]|1[0-9][0-9]|[1-9]?[0-9])\.){3}(25[0-5]|2[0-4][0-9]|1[0-9][0-9]|[1-9]?[0-9])\b'

IF ($Subnet -match $IPSubnetRegEx)
{ Write-Output "Searching the AD forest for subnet: $Subnet" }
ELSE
{
    Write-Error "The provided subnet ($Subnet) is not valid. Please enter as follows #.#.#.0 (ex. 10.22.33.0)"
    return
}

$ADForestName = [System.DirectoryServices.ActiveDirectory.Forest]::GetCurrentForest().Name
$DomainDNS = [System.DirectoryServices.ActiveDirectory.Domain]::GetCurrentDomain().Name

$ADSites = [System.DirectoryServices.ActiveDirectory.Forest]::GetCurrentForest().Sites
$ADSites = $ADSites | Sort-Object Name
[int]$ADSitesCount = $ADSites.Count
Write-Output "Searching $ADSitesCount AD Sites in $ADForestName `r"

[string]$SearchResults = "Subnet $Subnet could not be found in the current Active Directory forest ($ADForestName)"
ForEach ($ADSitesItem in $ADSites)
{ ## OPEN ForEach ($ADSitesItem in $ADSites)
    $ADSitesItemName = $ADSitesItem.Name
    $ADSitesItemSubnetsCount = $ADSitesItem.Subnets.Count
    Write-Verbose "The site $ADSitesItemName has $ADSitesItemSubnetsCount subnets"

    # A single inner loop handles sites with zero, one, or many subnets.
    ForEach ($ADSitesItemSubnetsItem in $ADSitesItem.Subnets)
    { ## OPEN ForEach ($ADSitesItemSubnetsItem in $ADSitesItem.Subnets)
        $ADSitesItemSubnetName = $ADSitesItemSubnetsItem.Name
        $ADSitesItemSubnetLocation = $ADSitesItemSubnetsItem.Location
        Write-Verbose "Checking Site $ADSitesItemName subnet $ADSitesItemSubnetName"
        IF ($ADSitesItemSubnetName -like "*$Subnet*")
        { [string]$SearchResults = "The subnet $Subnet is configured as part of the AD site $ADSitesItemName ($ADSitesItemSubnetLocation)" }
    } ## CLOSE ForEach ($ADSitesItemSubnetsItem in $ADSitesItem.Subnets)
} ## CLOSE ForEach ($ADSitesItem in $ADSites)

return $SearchResults
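Usage looks like this (the subnet value is just an example):

# Look up which AD site contains the 10.20.30.0 subnet.
.\Match-Subnet2Site.ps1 -Subnet 10.20.30.0

# Add -Verbose to watch each site and subnet being checked.
.\Match-Subnet2Site.ps1 -Subnet 10.20.30.0 -Verbose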

Azure Active Directory Stats

 

  • Over 2.9 Million Organizations are using Azure Active Directory
  • More than 10 Billion Authentications per week
  • Azure Active Directory is spread out across 14 data centers
  • Contains more than 240 million user accounts
  • Organizations using Azure Active Directory across 127 countries
  • Supports over 1400 integrated third-party apps

Azure AD Statistics

LOL! Lingering Object Liquidator for Active Directory

Microsoft released the LOL GUI tool for removing Active Directory lingering objects. Historically, removing lingering objects from AD had been a painful process.

Note that LOL is not a straightforward download. Follow these steps to download it:

  1. Log on to the Microsoft Connect site (using the Sign in link) with a Microsoft account: http://connect.microsoft.com
    Note: You may have to create a profile on the site if you have never participated in Connect.
  2. Open the Non-feedback Product Directory:
    https://connect.microsoft.com/directory/non-feedback
  3. Join the following program: AD Health (Product: Azure Active Directory Connection) by clicking its Join link.
  4. Click the Downloads link to see a list of downloads or this link to go directly to the Lingering Objects Liquidator download. (Note: the direct link may become invalid as the tool gets updated.)
  5. Download all associated files
  6. Double click on the downloaded executable to open the tool.

 

Why you should care about lingering object removal

Lingering objects are widely known as the gift that keeps on giving, and it is important to remove them for the following reasons:

  • Lingering objects can result in long-term divergence for objects and attributes residing on different DCs in your Active Directory forest.
  • The presence of lingering objects prevents the replication of newer creates, deletes, and modifications to destination DCs configured to use strict replication consistency. These un-replicated changes may apply to objects or attributes on users, computers, groups, group membership, or ACLs.
  • Objects intentionally deleted by admins or applications continue to exist as live objects on DCs that have yet to inbound-replicate knowledge of the deletes.

Lingering Object Liquidator automates the discovery and removal of lingering objects by using the DRSReplicaVerifyObjects method used by repadmin /removelingeringobjects and repldiag, combined with the removeLingeringObject rootDSE primitive used by LDP.EXE (a manual repadmin example follows the feature list below). Tool features include:

  • Combines both discovery and removal of lingering objects in one interface
  • Is available via the Microsoft Connect site
  • The version of the tool at the Microsoft Connect site is an early beta build and does not have the fit and finish of a finished product
  • Feature improvements beyond what you see in this version are under consideration
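For reference, the manual operation the tool wraps looks roughly like this. This is only a sketch: the destination DC, the source DC's DSA object GUID, and the naming context below are placeholders to substitute for your own environment.

# Placeholders - substitute values from your own environment.
$DestinationDC = "DC01.domain.com"                        # DC to check for lingering objects
$SourceDCGuid  = "11111111-2222-3333-4444-555555555555"   # DSA object GUID of a known-good DC
$NamingContext = "DC=domain,DC=com"                       # partition to examine

# Advisory mode only reports lingering objects; drop /advisory_mode to actually remove them.
repadmin /removelingeringobjects $DestinationDC $SourceDCGuid $NamingContext /advisory_mode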

 

 

From Microsoft KB 910205:

Lingering objects can occur if a domain controller does not replicate for an interval of time that is longer than the tombstone lifetime (TSL). The domain controller then reconnects to the replication topology. Objects that are deleted from the Active Directory directory service when the domain controller is offline can remain on the domain controller as lingering objects. This article contains detailed information about the events that indicate the presence of lingering objects, the causes of lingering objects, and the methods that you can use to remove lingering objects.

 

Tombstone lifetime and replication of deletions

When an object is deleted, Active Directory replicates the deletion as a tombstone object. A tombstone object consists of a small subset of the attributes of the deleted object. By inbound-replicating this object, other domain controllers in the domain and in the forest receive information about the deletion. The tombstone is retained in Active Directory for a specified period. This specified period is called the TSL. At the end of the TSL, the tombstone object is permanently deleted.

The default value of the TSL depends on the version of the operating system that is running on the first domain controller that is installed in a forest. The following table indicates the default TSL values for different Windows operating systems.

First domain controller in the forest root and its default tombstone lifetime:

  • Windows 2000: 60 days
  • Windows Server 2003: 60 days
  • Windows Server 2003 with Service Pack 1: 180 days

Note The existing TSL value does not change when a domain controller is upgraded to Windows Server 2003 with Service Pack 1 (SP1). The existing TSL value is maintained until you manually change it.

After the tombstone is permanently deleted, the object deletion can no longer be replicated. The TSL defines how long domain controllers in the forest retain information about a deleted object. The TSL also defines the time during which all direct and transitive replication partners of the originating domain controller must receive a unique deletion.

How lingering objects occur

When a domain controller is disconnected for a period that is longer than the TSL, one or more objects that are deleted from Active Directory on all other domain controllers may remain on the disconnected domain controller. Such objects are called lingering objects. Because the domain controller is offline during the time that the tombstone is alive, the domain controller never receives replication of the tombstone.

When this domain controller is reconnected to the replication topology, it acts as a source replication partner that has an object that its destination partner does not have.

Replication problems occur when the object on the source domain controller is updated. In this case, when the destination partner tries to inbound-replicate the update, the destination domain controller responds in one of two ways:

  • If the destination domain controller has Strict Replication Consistency enabled, the controller recognizes that it cannot update the object. The controller locally stops inbound replication of the directory partition from the source domain controller.
  • If the destination domain controller has Strict Replication Consistency disabled, the controller requests the full replica of the updated object. In this case, the object is reintroduced into the directory.

Causes of long disconnections

The following conditions can cause long disconnections:

  • A domain controller is disconnected from the network and is put in storage.
  • The shipment of a pre-staged domain controller to its remote location takes longer than a TSL.
  • Wide area network (WAN) connections are unavailable for long periods. For example, a domain controller onboard a cruise ship may be unable to replicate because the ship is at sea for longer than the TSL.
  • The reported event is a false positive because an administrator shortened the TSL to force the garbage collection of deleted objects.
  • The reported event is a false positive because the system clock on the source or on the destination domain controller is incorrectly advanced or rolled back. Clock skews are most common following a system restart. Clock skews may occur for the following reasons:
    • There is a problem with the system clock battery or with the motherboard.
    • The time source for a computer is configured incorrectly. This includes a time source server that is configured by using Windows Time service (W32Time), by using a third-party time server, or by using network routers.
    • An administrator advances or rolls back the system clock to extend the useful life of a system state backup or to accelerate the garbage collection of deleted objects. Make sure that the system clock reflects the actual time. Also, make sure that event logs do not contain invalid events from the future or from the past.

Removing lingering objects from the forest

Windows 2000-based forests

For more information about how to remove lingering objects in a Windows 2000-based domain, click the following article number to view the article in the Microsoft Knowledge Base:

314282 Lingering objects may remain after you bring an out-of-date global catalog server back online

Windows Server 2003-based forests

For information about how to remove lingering objects from Windows Server 2003-based forests, visit the following Microsoft Web site:

For more information, click the following article number to view the article in the Microsoft Knowledge Base:

892777 Windows Server 2003 Service Pack 1 Support Tools

Preventing lingering objects

The following are methods that you can use to prevent lingering objects.

Method 1: Enable the Strict Replication Consistency registry entry

You can enable the Strict Replication Consistency registry entry so that suspect objects are quarantined. Then, administrators can remove these objects before they spread throughout the forest.

If a writable lingering object is located in your environment, and an attempt is made to update the object, the value in the Strict Replication Consistency registry entry determines whether replication proceeds or is stopped. The Strict Replication Consistency registry entry is located in the following registry subkey:

HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Services\NTDS\Parameters

The data type for this entry is REG_DWORD. If you set the value to 1, the entry is enabled. Inbound replication of the specified directory partition from the source is stopped on the destination. If you set the value to 0, the entry is disabled. The destination requests the full object from the source domain controller. The lingering object is revived in the directory as a new object.
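As a sketch of the registry edit described above (run locally on a domain controller; the value name and path are as given in the KB text):

$RegPath = "HKLM:\SYSTEM\CurrentControlSet\Services\NTDS\Parameters"

# Check the current setting (the value may not exist on some DCs).
Get-ItemProperty -Path $RegPath -Name "Strict Replication Consistency" -ErrorAction SilentlyContinue

# Enable strict replication consistency (REG_DWORD = 1); -Force overwrites an existing value.
New-ItemProperty -Path $RegPath -Name "Strict Replication Consistency" -Value 1 -PropertyType DWord -Force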

The default value for the Strict Replication Consistency registry entry is determined by the conditions under which the domain controller was installed in the forest.

Note Raising the functional level of the domain or the forest does not change the replication consistency setting on any domain controller.

By default, the value of the Strict Replication Consistency registry entry on domain controllers that are installed in a forest is 1 (enabled) if the following conditions are true:

  • The Windows Server 2003 version of Winnt32.exe is used to upgrade a Windows NT 4.0 primary domain controller (PDC) to Windows Server 2003. This computer creates the forest root domain of a new forest.
  • Active Directory is installed on a server that is running Windows Server 2003. This computer creates the forest root domain of a new forest.

By default, the value of the Strict Replication Consistency registry entry on domain controllers is 0 (disabled) if the following conditions are true:

  • A Windows 2000-based domain controller is upgraded to Windows Server 2003.
  • Active Directory is installed on a Windows Server 2003-based member server in a Windows 2000-based forest.

If you have a domain controller that is running Windows Server 2003 with SP1, you do not have to modify the registry to set the value of the Strict Replication Consistency registry entry. Instead, you can use the Repadmin.exe tool to set this value for one domain controller in the forest or for all the domain controllers in the forest.

For more information about how to use Repadmin.exe to set Strict Replication Consistency, visit the following Microsoft Web site:

Method 2: Monitor replication by using a command-line command

To monitor replication by using the repadmin /showrepl command, follow these steps:

  1. Click Start, click Run, type cmd, and then click OK.
  2. Type repadmin /showrepl * /csv >showrepl.csv, and then press ENTER.
  3. In Microsoft Excel, open the Showrepl.csv file.
  4. Select the A + RPC column and the SMTP column.
  5. On the Edit menu, click Delete.
  6. Select the row that is immediately under the column headers.
  7. On the Windows menu, click Freeze Pane.
  8. Select the complete spreadsheet.
  9. On the Data menu, point to Filter, and then click Auto-Filter.
  10. On the heading of the Last Success column, click the down arrow, and then click Sort Ascending.
  11. On the heading of the src DC column, click the down arrow, and then click Custom.
  12. In the Custom AutoFilter dialog box, click does not contain.
  13. In the box to the right of does not contain, type del.

    Note This step prevents deleted domain controllers from appearing in the results.

  14. On the heading of the Last Failure column, click the down arrow, and then click Custom.
  15. In the Custom AutoFilter dialog box, click does not equal.
  16. In the box to the right of does not equal, type 0.
  17. Resolve the replication failures that are displayed.
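If you would rather skip Excel, roughly the same filtering can be done in PowerShell. The column names below are assumptions based on recent repadmin versions; check the header row of your own /csv output and adjust accordingly.

# Pull replication status for every DC as CSV and convert it to objects.
$ReplStatus = repadmin /showrepl * /csv | ConvertFrom-Csv

# Keep rows with failures, ignore deleted DCs, and sort by last success.
# NOTE: column names ('Number of Failures', 'Source DSA', 'Last Success Time') may differ by repadmin version.
$ReplStatus |
    Where-Object { $_.'Number of Failures' -ne '0' -and $_.'Source DSA' -notlike '*del*' } |
    Sort-Object 'Last Success Time' |
    Format-Table 'Source DSA','Naming Context','Number of Failures','Last Failure Time','Last Success Time' -AutoSize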

 

PowerShell Code: Active Directory Domain Controller Discovery

There are several different ways to find AD Domain Controllers (DCs).

Here are a few:

AD PowerShell Module: Discover the closest Domain Controller running the AD web services (which support the PowerShell AD cmdlets):

import-module activedirectory
Get-ADDomainController -discover -forcediscover -nextclosestsite -service ADWS

  • discover – find a DC
  • forcediscover – re-discover a DC and not use a cached DC
  • nextclosestsite – if there is no DC discovered in the local site, use the AD topology to find the closest DC in another site.
  • service – the DC must support these services.


AD PowerShell Module: Discover all Domain Controllers in the domain:

import-module activedirectory
Get-ADDomainController -filter *

  • filter * – find all Domain Controllers


Discover all Domain Controllers in the domain using ADSI:

[System.DirectoryServices.ActiveDirectory.Domain]::GetCurrentDomain().DomainControllers

 

Discover all Global Catalogs in the forest using ADSI:

[System.DirectoryServices.ActiveDirectory.Forest]::GetCurrentForest().GlobalCatalogs
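Along the same lines, .NET can also report which AD site the local computer is in and which Domain Controllers serve that site (a quick sketch using the same namespace):

# Determine the AD site of the computer running the command.
$Site = [System.DirectoryServices.ActiveDirectory.ActiveDirectorySite]::GetComputerSite()
$Site.Name

# List the Domain Controllers that service that site.
$Site.Servers | Select-Object Name,Domain,IPAddress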

 

You can also use the Active Directory cmdlets to get computer information about Domain Controllers:

import-module activedirectory
get-ADComputer -filter { PrimaryGroupID -eq "516" } -properties PrimaryGroupID

 

 

 

Powershell Filter Operators

Once you get used to Powershell, you will want to do more and more with it.  One of the keys to leveraging the power of PowerShell is filters.
PowerShell commandlets all support filters (well, most of them anyway).  This means you can drill down to resulting data subsets.
If you run into commandlets that don’t support the native -filter you can always pipe to where-object (aka “where”).

In other words you can do this: get-service | Where {$_.Status -eq "Running"}
This takes the results of a generic get-service request, which returns a full list of system services, and pares it down to only the running services.
Change "Running" to "Stopped" and you get, obviously, a list of services that are stopped.

You can also pipe the service name into the get-service commandlet: "W32Time" | get-service

Here’s a great chart I found on the MSDN Blogs that describes what each filter operator does:

Logical operator : Description (Equivalent LDAP operator/expression)
-eq : Equal to. Does not support wildcard search. (LDAP: =)
-ne : Not equal to. Does not support wildcard search. (LDAP: !x=y)
-like : Similar to -eq but supports wildcard comparison; the only wildcard character supported is *. (LDAP: =)
-notlike : Not like; supports wildcard comparison. (LDAP: !x=y)
-approx : Approximately equal to. (LDAP: ~=)
-le : Lexicographically less than or equal to. (LDAP: <=)
-lt : Lexicographically less than. (LDAP: !x>=y)
-ge : Lexicographically greater than or equal to. (LDAP: >=)
-gt : Lexicographically greater than. (LDAP: !x<=y)
-and : AND. (LDAP: &)
-or : OR. (LDAP: |)
-not : NOT. (LDAP: !)
-bor : Bitwise OR. (LDAP: :1.2.840.113556.1.4.804:=)
-band : Bitwise AND. (LDAP: :1.2.840.113556.1.4.803:=)
-recursivematch : Uses LDAP_MATCHING_RULE_IN_CHAIN (Win2k3 SP2 and above). (LDAP: :1.2.840.113556.1.4.1941:=)
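As a quick example of putting the chart to work, here's a hypothetical query that combines a wildcard -like with -and (the name pattern is just a placeholder):

import-module activedirectory

# All enabled user accounts whose name starts with "svc" (service accounts, for example).
Get-ADUser -Filter { (Name -like "svc*") -and (Enabled -eq $true) } | Select-Object Name,SamAccountName,Enabled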

Using filters is extremely helpful in narrowing down the scope to fine-tune the data you need to work with, and this chart is one I frequently reference.
