PowerShell: Identifying Cloned Computers by CMID or SID

Here’s a PowerShell command for identifying the computer SID: list the local accounts, since each local account’s SID is the computer SID with a relative identifier (RID) appended:
Get-WmiObject -Class Win32_UserAccount -Filter "LocalAccount=True"

This command shows the information for the first local account in the list:
(Get-WmiObject -Class Win32_UserAccount -Filter "LocalAccount=True")[0]
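
To go one step further, the computer SID itself can be derived by trimming the final RID component from any local account SID. A minimal sketch (the variable names are illustrative):

$account = (Get-WmiObject -Class Win32_UserAccount -Filter "LocalAccount=True")[0]
# The machine SID is the account SID with the trailing "-RID" component removed
$machineSID = $account.SID -replace '-\d+$'
Write-Output "Computer SID: $machineSID"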

Here’s a PowerShell command to run on each of the servers. If the result is the same, they have the same Client Machine ID (CMID):
Get-WmiObject -class SoftwareLicensingService | Select-object ClientMachineID

Here’s a PowerShell script that queries the CMID of every server in Active Directory so the values can be compared in one pass:
Import-Module ActiveDirectory
$AllServers = Get-ADComputer -Filter { OperatingSystem -like "*Server*" } -Property OperatingSystem
ForEach ($AllServersItem in $AllServers)
{
    $ServerName = $AllServersItem.DNSHostName
    $ServerCMID = (Get-WmiObject -ComputerName $ServerName -Class SoftwareLicensingService).ClientMachineID
    Write-Output "$ServerName has the CMID: $ServerCMID"
}

57 Tips Every Admin Should Know

GFI’s 57 Tips Every Admin Should Know:

The longer a person serves as a network admin, the more tips and tricks they are likely to pick up along the way. Some could be shortcuts, others might seem like magic, but all are intended to save you time and help you solve problems. Assume that all of these Windows commands should be run from an administrative command prompt if you are using Vista, Windows 7, or Windows 2008.

This list covers the following topics:

  • Active Directory
  • Windows Networking
  • Windows 7
  • Windows 2008
  • Linux

New APT Approaches

The Trend Micro Security Intelligence Blog has an interesting article on how hackers are using legitimate tools as part of APT attacks.

In our 2013 predictions, we noted how malware would only gradually evolve without much in the way of significant change. This can be seen in the use of some (otherwise legitimate) hacking tools in APT attacks.

How is this a problem? Hacking tools are grayware that anti-malware products do not always detect, whether for technical reasons or because ethical and legal concerns keep vendors from flagging them. Unfortunately, this means less visibility in APT forensic investigations. It also saves attackers the trouble of writing their own tools. Some of the common hacking tools we see are:

  • Password recovery tools – tools for extracting passwords or password hashes stored by applications or the operating system on the local drive or in registry entries. These are typically used to clone or impersonate user accounts in order to obtain administrator rights. The pass-the-hash technique is one common method attackers use to gain administrator rights via stolen password hashes.
  • User account clone tools – used to clone a user account once a password has been obtained by the attacker. Upon acquiring enough privileges, the attacker can then carry out malicious actions while bypassing the system’s security measures.
  • File manipulation tools – tools for manipulating files: copying, deleting, modifying timestamps, and searching for specific files. They are used to adjust the timestamps of accessed files or to delete components to cover the tracks of a compromise. They can also be used to search for key documents to extract, for example by looking for files with specific extensions.
  • Scheduled job tools – software for disabling or creating scheduled tasks. This can help the attacker lower the security of the infected system, for example by disabling scheduled tasks for software updates. Scheduled tasks can also be created for malicious ends; for instance, attackers can create a scheduled task that automatically steals files within a certain timeframe.
  • FTP tools – tools that aid in FTP transactions like uploading files to a specific FTP site. Since FTP transactions would look less suspicious in the network, some APT threat actors prefer to upload stolen data to a remote FTP site instead of uploading them to the actual C&C server. It should be noted that there are several legitimate FTP applications, which may also be utilized by cybercriminals.
  • Data compression tools – these tools are neither malicious nor considered as hacking tools. In most cases, these are legitimate file compression tools, such as WinRAR, being utilized by attackers to compress and archive multiple stolen files. This aids the attacker in the data exfiltration phase where they can upload stolen documents as a single archive. In a few cases, however, we have seen these applications being packaged and configured to compress a predefined set of files.

How can we identify an APT using these tools?

We have seen how these tools are used in APTs to gain administrator rights and collect key documents. So how can IT administrators and power users then use this information to identify an APT that uses these tools?

  1. Suspicious instances of command shell processes may indicate possible compromise. The tools listed above are either command-line tools or run both from the command line and via a GUI. Attackers run these tools through a hidden command prompt instance, so regularly checking your environment for unknown command shell processes can help you identify a possible infection. Additionally, process utilities such as Process Explorer will let you see the parameters of a command process, which may help you correlate possible components of an APT. (See the sketch after this list.)
  2. The presence of tools, whether legitimate or not, can be a sign of compromise. Attackers have long been leveraging legitimate software for malicious purposes. As such, users should be wary of the software present on their systems and should be able to identify what they install. It may be tedious, yes, but being vigilant about the files present on your system could spell the difference between mitigating an APT compromise and mass pilfering of your organization’s classified documents.
  3. In addition, we have observed that these tools are sometimes saved by the attackers using odd file names or with fake file extensions. Being able to identify files added to your system is again key to identifying a possible compromise.
  4. Paying attention to FTP connections in the network logs is a good idea. While it is more common to check for malicious C&C connections, checking for FTP connections gives another opportunity to identify a breach in your network. In a corporate setting, FTP sites are usually intranet sites, so it is easier to sort out legitimate FTP traffic from malicious traffic. FTP transactions are also significantly smaller than other types of communication on the network, which may allow you to identify a breach faster. Furthermore, archive files or files with odd file names being uploaded to a remote site may also suggest compromise.
  5. Review scheduled jobs. Scheduled jobs are a common auto-start method not only for APTs but for malware in general. Scrutinizing the properties of scheduled jobs will not only allow you to identify an infection, but will also most likely help you identify components of the attack through the files they execute. Considering the growing number of APT campaigns today, identifying an existing APT compromise within an organization’s network is as important as preventing initial infection.
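
As a rough illustration of checks 1 and 5, here is a minimal PowerShell sketch (mine, not from the Trend Micro article) that lists command shell processes with their command lines and parent processes, then dumps scheduled tasks for review:

# List cmd.exe instances with their command lines and parent processes
Get-WmiObject -Class Win32_Process -Filter "Name='cmd.exe'" | ForEach-Object {
    $parent = Get-WmiObject -Class Win32_Process -Filter "ProcessId=$($_.ParentProcessId)"
    New-Object PSObject -Property @{
        ProcessId   = $_.ProcessId
        ParentName  = $parent.Name
        CommandLine = $_.CommandLine
    }
} | Format-Table ProcessId, ParentName, CommandLine -AutoSize

# Dump scheduled task names and the commands they run (schtasks ships with Windows)
schtasks /query /fo LIST /v | Select-String -Pattern 'TaskName|Task To Run'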

By understanding targeted attacks from different perspectives, users, security administrators, and security researchers are empowered to better combat these threats. Highlighting APT components, in this case, extends our visibility in identifying existing compromise by knowing what to look for and where.

We previously noted that we will see an increase in attacks that are destructive in nature rather than motivated by espionage. Furthermore, localized attacks with certain defined conditions (like specific language settings or geographic locations) will increase.

PowerShell 101: PowerShell Guide/CheatSheet

Michael Sorens has put together a comprehensive guide to using PowerShell:

This series of articles evolved out of my own notes on PowerShell as I poked and prodded it to show me more. As my collection  burgeoned, I began to organize them until I had one-line recipes for most any simple PowerShell task. Simple, though, does not mean trivial. You can do quite a lot in one line of PowerShell, such as grabbing the contents of specific elements of a web page or converting a CSV file into a collection of PowerShell objects.
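
For example, the CSV conversion he mentions really is a one-liner; a quick sketch (the file name is hypothetical):

# Each row of data.csv becomes a PowerShell object with one property per column
$records = Import-Csv -Path .\data.csv
$records | Get-Member -MemberType NoteProperty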

  • PowerShell One-Liners: Help, Syntax, Display and Files
    begins by showing you how to have PowerShell itself help you figure out what you need to do to accomplish a task, covering the help system as well as its handy command-line intellisense. The next sections deal with locations, files, and paths: the basic currency of any shell. You are introduced to some basic but key syntactic constructs and then ways to cast your output in list, table, grid, or chart form.
  • PowerShell One-Liners: Variables, Parameters, Properties, and Objects
    covers variables, parameters, properties, and objects, rounded out with a few other vital bits on leveraging the PowerShell environment.
  • PowerShell One-Liners: Collections, Hashtables, Arrays and Strings
    covers the two fundamental data structures of PowerShell: the collection (array) and the hash table (dictionary), examining everything from creating, accessing, iterating, ordering, and selecting. Part 3 also covers converting between strings and arrays, and rounds out with techniques for searching, most commonly applicable to files (searching both directory structures as well as file contents).
  • Part 4 – pending at the time of writing.
    It is your information source for a variety of input and output techniques: reading and writing files; writing the various output streams; file housekeeping operations; and various techniques related to CSV, JSON, database, network, and XML.

Each part of this series is available as an online reference here at Simple-Talk.com (in a wide form as well) and as a downloadable wallchart in PDF format (from the link at the head of the article) for those who prefer a printed copy near at hand. Please keep in mind, though, that this is a quick reference, not a tutorial. So while there are a few brief introductory remarks for each section, there is very little explanation for any given incantation. But do not let that scare you off—jump in and try things! You should find more than a few “aha!” moments ahead of you!

Great InfoWorld Interview with Mark Russinovich on Azure and Cloud Computing

InfoWorld has a great Interview with Mark Russinovich, Microsoft Technical Fellow, on Azure and Cloud Computing.
I included my favorite quotes below:

Intro:

Mark Russinovich is a legendary figure in the computer industry. A former teenage hacker who went on to earn a PhD in computer engineering from Carnegie Mellon, Russinovich cofounded Winternals Software — a Windows utilities vendor renowned for understanding the guts of Windows as well as Microsoft itself.

After a stint at IBM’s Thomas J. Watson Research Center and after discovering a number of high-profile Windows security vulnerabilities, not to mention the infamous Sony rootkit[1], Russinovich joined Microsoft when Winternals was acquired in 2006. Russinovich is also an accomplished novelist, whose cyberthrillers Zero Day and Trojan Horse have been well received (the third novel in the series, Rogue Code, comes out this May).

Today, Russinovich is a Technical Fellow, the highest technical position at Microsoft. He’s the sole Technical Fellow in the Windows Azure Group, acting as lead architect for Microsoft’s bet-the-company cloud initiative — $15 billion has been invested in cloud infrastructure to date. Much of what Russinovich has been working on pertains to the complex automation necessary to manage that cloud infrastructure at scale. The interview began with an examination of Azure technology and moved to broader concerns about IT’s march to the public cloud. The following is an edited version.


On Microsoft Cloud Transparency:

I think that’s one place where we’ve been way more transparent than anybody else. I’ve given talks for three years since I joined Azure at TechEd and Build on Windows Azure Internals about how our virtual machine technology is implemented and how we implement that multitenancy. You don’t see Amazon or Google talking about that.


Azure Overview:

Sure. When it comes to virtual machines, which are really the building blocks of the cloud, we’ve got pools of servers, we’ve got something called a fabric controller, which is like the brain.

The Azure fabric. And that manages a pool of machines. And then there’s an application front-end, a virtual machine deployment front-end we call RDFE — Red Dog Front End. Red Dog is a carryover from Azure’s original code name at Microsoft.

Here’s what happens when a customer deploys a PaaS application (what we call a Cloud Service, a collection of virtual machines) or when they deploy IaaS as virtual machines: It goes to RDFE, then RDFE finds a fabric controller that has, based on heuristics, the best utilization and capacity available for the deployment and gives the deployment to the fabric controller, which then goes and finds servers to deploy the virtual machines onto.

It uses a bunch of heuristics as well as constraint satisfaction to figure out which servers are the ones that the virtual machines should land on. We’ve got the concept of update domains and fault domains [5], so that when the infrastructure is being updated we don’t take down the whole application. We split the application across different servers so that when we’re servicing the infrastructure of the servers, it’s only taking down a slice of the application.


Regarding the future of computing:

When I joined Microsoft, I’d done a lot of Windows stuff before, but operating systems had already pretty much matured. I mean, Windows today in the internals isn’t very different than 20 years ago, and Linux is the same way — just like UNIX back in the ’70s.

This cloud operating system, data center operating system, is brand new. So the problems are new, the algorithms are new, the computer science is new. How do you detect failures quickly? How do you respond to them? How do you best do resource allocation?


Active Directory as the central piece of Azure:

One of the most valuable assets that we recognize within Microsoft when it comes to cloud and getting that integration is Windows Azure Active Directory.

The name is not a mistake. It’s completely deliberate because Active Directory became the center of on-premises network architecture. And we see Windows Azure Active Directory becoming that for the cloud.


Cloud technology is in its infancy:

We’re constantly adding new functionality and features. Like I said, the cloud is new. If you look at the mature environment of the on-premises IT world, there’s not just one thing that does whatever you want it to, but probably 20 or 30 different vendors that offer products that do what you’re talking about. The cloud is not there yet. There are a lot of holes in the basic functionality, in the layered functionality of the services that would be added on top of that. This is why it’s going to be just a great economic opportunity for lots of people.


IT Career Advice:

If you look at the evolution of IT, people aren’t doing today what they were doing ten years ago. Change has just been a fact of life all along.

Now, of course, some changes are bigger than others. But change has been there all along. And if you’re not adapting, you shouldn’t be in this business. IT professionals, I think, have to step up and play a key role in this migration for their companies. Because if they don’t, shadow IT is just going to go around them.

Read the rest of the interview at InfoWorld.

PowerShell is Central to Everything Microsoft

So how important is Windows PowerShell? Well for starters, Windows PowerShell grabbed three of the top ten TechEd 2014 talks in Houston this year. PowerShell.Org printed out 3,000 DSC Resource guide books to hand out at the Scripting Guys booth, and to give out in presentations – they were gone in two days. In addition, there have been more than 10,000 downloads of the electronic version from the web site. At the Scripting Guys booth this year, we talked to more than 5,000 people during the week. This equates to like ½ of all attendees at TechEd – and after the first two days, we had nothing to give away – but people came to talk to Windows PowerShell people. This is incredible.

http://blogs.msdn.com/b/powershell/archive/2014/06/02/powershell-predict-the-future-via-teched.aspx

Active Directory 2012 DCPromo

Starting with Windows Server 2012, DCPromo is no longer used to promote a member server to a Domain Controller. Since DCPromo no longer works (Microsoft calls this feature deprecated), there is a new GUI option and associated PowerShell cmdlets.

There are major changes that streamline the promotion process. The simplified process includes:

  • AD DS role deployment is now part of the new Server Manager architecture and allows remote installation.
  • The AD DS deployment and configuration engine is now Windows PowerShell, even when using a graphical setup.
  • Promotion now includes prerequisite checking that validates forest and domain readiness for the new domain controller, lowering the chance of failed promotions.
  • The Windows Server 2012 forest functional level does not implement new features and domain functional level is required only for a subset of new Kerberos features, relieving administrators of the frequent need for a homogenous domain controller environment.

NOTE: The new “DCPromo” GUI takes longer than before since it performs many more checks than in the past. Since the GUI provides the PowerShell script code, it’s a great idea to script the promotion of all new 2012 DCs.

Install the Active Directory Domain Services (ADDS) role:

  1. Install the role “Active Directory Domain Services (ADDS)” on the target server (local or remote).
  2. Check the Restart checkbox.
  3. Click on Export Configuration Settings to get the PowerShell command-line equivalent.

PowerShell command:

Add-WindowsFeature AD-Domain-Services
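
Since the new Server Manager architecture allows remote installation (as noted above), the same role can also be pushed to a remote server. A quick sketch, where Server01 is a hypothetical target:

# Install the AD DS role plus its management tools on a remote server
Install-WindowsFeature -Name AD-Domain-Services -ComputerName Server01 -IncludeManagementTools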

Promote the server to DC:

  1. Run the Active Directory Domain Services Configuration Wizard.
  2. Select Add a Domain Controller to an Existing Domain.
  3. Select the appropriate DC options and enter the DSRM password.
  4. Change any options on the following pages as appropriate.
  5. Click on View Script to view the equivalent PowerShell script.
  6. Click Install.

Here’s the PowerShell script the GUI creates when creating a new forest, accepting all defaults:

Import-Module ADDSDeployment
Install-ADDSForest `
-CreateDNSDelegation:$false `
-DatabasePath "C:\Windows\NTDS" `
-DomainMode "Win2012" `
-DomainName "MCLab.net" `
-DomainNetbiosName "MCLAB" `
-ForestMode "Win2012" `
-InstallDNS:$true `
-LogPath "C:\Windows\NTDS" `
-NoRebootOnCompletion:$false `
-SysvolPath "C:\Windows\SYSVOL" `
-Force:$true

Note that the generated script omits -SafeModeAdministratorPassword, so Install-ADDSForest prompts for the DSRM password when run.

Here’s the PowerShell script the GUI creates when adding a new Domain Controller to an existing domain, accepting all defaults:

Import-Module ADDSDeployment
# Storing a plaintext password in a script is for illustration only
$SafeModeAdministratorPasswordText = '&P@ssw0rd2013&'
$SafeModeAdministratorPassword = ConvertTo-SecureString -AsPlainText $SafeModeAdministratorPasswordText -Force

Install-ADDSDomainController `
-NoGlobalCatalog:$false `
-CreateDNSDelegation:$false `
-Credential (Get-Credential) `
-CriticalReplicationOnly:$false `
-DatabasePath "C:\Windows\NTDS" `
-DomainName "mcdevlab.net" `
-InstallDNS:$true `
-LogPath "C:\Windows\NTDS\Logs" `
-SiteName "Default-First-Site-Name" `
-SYSVOLPath "C:\Windows\SYSVOL" `
-Force:$true `
-SafeModeAdministratorPassword $SafeModeAdministratorPassword

PowerShell AD cmdlets (with parameters):

Install-ADDSDomainController

-ADPrepCredential
-AllowDomainControllerReinstall
-AllowPasswordReplicationAccountName
-ApplicationPartitionsToReplicate
-CreateDnsDelegation
-Credential
-CriticalReplicationOnly
-DatabasePath
-DelegatedAdministratorAccountName
-DenyPasswordReplicationAccountName
-DnsDelegationCredential
-DomainName **
-Force
-InstallationMediaPath
-InstallDns
-LogPath
-MoveInfrastructureOperationMasterRoleIfNecessary
-NoDnsOnNetwork
-NoGlobalCatalog
-NoRebootOnCompletion
-ReadOnlyReplica
-ReplicationSourceDC
-SafeModeAdministratorPassword
-SiteName
-SkipAutoConfigureDns
-SkipPreChecks
-SystemKey
-SysvolPath
-UseExistingAccount
-Confirm
-WhatIf

Install-ADDSForest

-Confirm
-CreateDNSDelegation
-DatabasePath
-DomainMode
-DomainName **
-DomainNetBIOSName **
-DNSDelegationCredential
-ForestMode
-Force
-InstallDNS
-LogPath
-NoDnsOnNetwork
-NoRebootOnCompletion
-SafeModeAdministratorPassword
-SkipAutoConfigureDNS
-SkipPreChecks
-SYSVOLPath
-Whatif

Install-ADDSDomain

-ADPrepCredential
-AllowDomainReinstall
-CreateDnsDelegation
-Credential
-DatabasePath
-DnsDelegationCredential
-DomainMode
-DomainType
-Force
-InstallDns
-LogPath
-NewDomainName **
-NewDomainNetbiosName
-NoDnsOnNetwork
-NoGlobalCatalog
-NoRebootOnCompletion
-ParentDomainName **
-ReplicationSourceDC
-SafeModeAdministratorPassword
-SiteName
-SkipAutoConfigureDns
-SkipPreChecks
-SysvolPath
-Confirm
-WhatIf

** Required PowerShell parameters

DC Prerequisite Checking:
Domain controller configuration also implements a prerequisite checking phase that evaluates the forest and domain prior to continuing with domain controller promotion. This includes FSMO role availability, user privileges, extended schema compatibility and other requirements. This new design alleviates issues where domain controller promotion starts and then halts midway with a fatal configuration error. This lessens the chance of orphaned domain controller metadata in the forest or a server that incorrectly believes it is a domain controller.
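
These prerequisite checks can also be run on their own before committing to a promotion, using the Test- cmdlets in the ADDSDeployment module. A sketch (the domain name is hypothetical):

Import-Module ADDSDeployment
# Runs only the prerequisite checks for adding a DC to an existing domain; makes no changes
Test-ADDSDomainControllerInstallation -DomainName "mcdevlab.net" -Credential (Get-Credential) -SafeModeAdministratorPassword (Read-Host -AsSecureString "DSRM password")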

The following tools are installed as part of the DC promotion:

  • Active Directory Administrative Center
  • Active Directory Domains and Trusts
  • Active Directory Module for Windows PowerShell
  • Active Directory Sites and Services
  • Active Directory Users and Computers
  • ADSI Edit
  • DNS
  • Group Policy Management

NOTE: Running dcpromo /unattend still installs the binaries as before, but produces a warning.

PowerShell: Useful WMI Classes

Here are some WMI Classes I have found useful:
  • Get-WmiObject -Class Win32_BIOS
  • Get-WmiObject -Class Win32_ComputerSystem
  • Get-WmiObject -Class Win32_OperatingSystem
  • Get-WmiObject -Class Win32_NetworkAdapter
  • Get-WmiObject -Class Win32_NetworkAdapterConfiguration
  • Get-WmiObject -Class Win32_Product
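
Any of these classes can be piped to Get-Member to see what they expose before selecting properties. For example:

# Discover the properties of Win32_OperatingSystem, then select a few useful ones
Get-WmiObject -Class Win32_OperatingSystem | Get-Member -MemberType Property
Get-WmiObject -Class Win32_OperatingSystem | Select-Object Caption, Version, LastBootUpTime
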
Enumerating Win32 WMI classes that expose methods:
[array]$WMINames = Get-WmiObject -Query 'Select * From Meta_Class WHERE __Class LIKE "win32%"' |
Where-Object { $_.PSBase.Methods } |
Select-Object Name, Methods
$WMINames = $WMINames | Sort-Object -Property Name
$WMINames

PowerShell Code: Get & Set Active Directory Tombstone Lifetime and Active Directory Delete & Recycle Operations

Active Directory is a multi-master database replicated among multiple Domain Controllers. In order to ensure that deletions are replicated to every Domain Controller before the objects are actually purged, deleted objects are first marked for deletion and retained for a period of time before they are completely purged from Active Directory. Active Directory marks the object as deleted by performing the following actions on the object:

  • The isDeleted attribute of the deleted object is set to TRUE (objects with an isDeleted attribute value of TRUE are called tombstones).
  • The deleted object is moved to the Deleted Objects container for its naming context. If the object’s systemFlags property contains the 0x02000000 flag, the object is not moved to the Deleted Objects container. The Deleted Objects container is flat, so all objects reside at the same level within it.
  • The relative distinguished name of the deleted object is changed to ensure that the name is unique within the Deleted Objects container. If the original name is longer than 75 characters, it is truncated to 75 characters.
  • The following are then appended to the new name:
    A 0x0A character
    The string "DEL:"
    The string form of a unique GUID, such as "947e3228-70c9-4311-8b7a-e5c9b5bd4432"
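
Tombstoned objects (with their mangled DEL: names) can be inspected directly. A minimal sketch using the Active Directory module:

Import-Module ActiveDirectory
# List deleted objects, including the renamed DEL: entries in the Deleted Objects container
Get-ADObject -Filter { isDeleted -eq $true } -IncludeDeletedObjects -Properties whenChanged | Select-Object Name, whenChanged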


The AD tombstone lifetime determines how long deleted items exist in AD before they are purged. The default value was originally 60 days, but this was increased to 180 days starting with new AD forests created with Windows 2003 SP1. While the tombstone lifetime directly affects deleted items, it also has an impact on Domain Controllers. If a DC hasn’t replicated within the tombstone lifetime with another DC, it is effectively orphaned from the domain. Additionally, DC backups are only useful for restoring AD data within this tombstone lifetime – a backup that is 181 days old is no longer useful when the tombstone lifetime is 180 days.
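
One practical corollary: regularly checking replication health helps ensure no DC silently drifts past the tombstone lifetime. The repadmin tool (installed with the AD DS tools) makes this a one-liner:

repadmin /replsummary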


First Domain Controller Operating System     Default Tombstone Lifetime (days)
Windows 2000 Server                          60
Windows Server 2003 RTM                      60
Windows Server 2003 R2 (SP1)                 60
Windows Server 2003 SP1 and SP2              180
Windows Server 2003 R2 SP2                   180
Windows Server 2008 and higher               180

Since this value is stored as an attribute (tombstoneLifetime) on the AD object "cn=Directory Service,cn=Windows NT,cn=Services,cn=Configuration,dc=<forestDN>", it can be queried and modified.

There are some changes to how this process works once the AD forest functional level is Windows Server 2008 R2 and the AD Recycle Bin is enabled. Once enabled, there is a 180-day window (governed by msDS-deletedObjectLifetime) from when an object is deleted within which it may be restored. At day 181, the object is effectively tombstoned and may no longer be restored using the Recycle Bin undelete method. At day 360, the object is removed from the directory (purged). In other words, enabling the Recycle Bin keeps the object in the directory for 360 days after it is deleted. Microsoft states that this increases the size of AD by about 10 to 15%.
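
For reference, enabling the Recycle Bin is itself a one-liner; a sketch (the forest name is hypothetical, and note that enabling it is irreversible):

Import-Module ActiveDirectory
# Requires the Windows Server 2008 R2 forest functional level; cannot be disabled afterward
Enable-ADOptionalFeature -Identity 'Recycle Bin Feature' -Scope ForestOrConfigurationSet -Target 'mcdevlab.net'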

The AD Recycle Bin enables rapid restoration of deleted objects without a restore operation by implementing two new attributes, and using two existing attributes:

  • isDeleted

    • Has existed since Windows 2000
    • Exists on every object
    • Describes if an object is deleted but restorable
  • isRecycled

    • New to Windows Server 2008 R2
    • Exists on every object once it is recycled
    • Describes if an object is deleted but not restorable
  • msDS-deletedObjectLifetime

    • New to Windows Server 2008 R2
    • Is set on the "CN=Directory Service,CN=Windows NT,CN=Services,CN=Configuration,DC=COMPANY,DC=COM" container
    • Describes how long a deleted object will be restorable
  • tombstoneLifetime

    • Has existed since Windows 2000
    • Is set on the "CN=Directory Service,CN=Windows NT,CN=Services,CN=Configuration,DC=COMPANY,DC=COM" container
    • Describes how long a deleted object will not be restorable
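
With the Recycle Bin enabled, a deleted object can be restored without a directory restore. A sketch (the user name is hypothetical):

Import-Module ActiveDirectory
# Find the deleted user by its (mangled) name and restore it in place
Get-ADObject -Filter { Name -like "JSmith*" } -IncludeDeletedObjects | Restore-ADObject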


Get Tombstone Lifetime:

Write-Output "Get Tombstone Setting `r"
Import-Module ActiveDirectory

$ADForestconfigurationNamingContext = (Get-ADRootDSE).configurationNamingContext

$DirectoryServicesConfigPartition = Get-ADObject -Identity "CN=Directory Service,CN=Windows NT,CN=Services,$ADForestconfigurationNamingContext" -Partition $ADForestconfigurationNamingContext -Properties tombstoneLifetime

$TombstoneLifetime = $DirectoryServicesConfigPartition.tombstoneLifetime

Write-Output "Active Directory's Tombstone Lifetime is set to $TombstoneLifetime days `r"

Note that no value returned means the tombstone lifetime setting is set to 60 days (default for AD forests installed with Windows 2003 or older).

Set Tombstone Lifetime to 365 days (for example):
Import-Module ActiveDirectory
$ADForestconfigurationNamingContext = (Get-ADRootDSE).configurationNamingContext

Set-ADObject -Identity "CN=Directory Service,CN=Windows NT,CN=Services,$ADForestconfigurationNamingContext" -Partition $ADForestconfigurationNamingContext -Replace @{tombstonelifetime=365}

This same process can be leveraged to identify the msDS-deletedObjectLifetime value (180 days by default).
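
The same pattern reads msDS-deletedObjectLifetime; a sketch (an empty result means the default applies):

Import-Module ActiveDirectory
$ConfigNC = (Get-ADRootDSE).configurationNamingContext
(Get-ADObject -Identity "CN=Directory Service,CN=Windows NT,CN=Services,$ConfigNC" -Partition $ConfigNC -Properties 'msDS-deletedObjectLifetime').'msDS-deletedObjectLifetime'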



Microsoft TechEd 2014 Sessions Posted

One of the toughest parts of being in the IT field is staying up to date with technology trends, directions, and products. I have found that free-to-view online content is a great way to do this.
Microsoft has TechEd sessions posted going back to 2008:
Here are some sessions from Microsoft TechEd 2014 I find interesting:
