PowerShell: Using Active Directory .Net methods in PowerShell Part 1

There are times when you don’t have access to the Active Directory PowerShell cmdlets. One of the great things about PowerShell is the ability to call .Net methods directly from PowerShell scripts. For more, check out Part 2.

Here are some alternatives to using Get-ADForest and Get-ADDomain:

 

# Get Active Directory Forest Information
$ADForestInfo = [System.DirectoryServices.ActiveDirectory.Forest]::GetCurrentForest()
$ADForestInfo.Name
$ADForestInfo.Sites
$ADForestInfo.Domains
$ADForestInfo.GlobalCatalogs
$ADForestInfo.ApplicationPartitions
$ADForestInfo.ForestMode
$ADForestInfo.RootDomain
$ADForestInfo.Schema
$ADForestInfo.SchemaRoleOwner
$ADForestInfo.NamingRoleOwner
# OR
[System.DirectoryServices.ActiveDirectory.Forest]::GetCurrentForest().Name
[System.DirectoryServices.ActiveDirectory.Forest]::GetCurrentForest().Sites
[System.DirectoryServices.ActiveDirectory.Forest]::GetCurrentForest().Domains
[System.DirectoryServices.ActiveDirectory.Forest]::GetCurrentForest().GlobalCatalogs
[System.DirectoryServices.ActiveDirectory.Forest]::GetCurrentForest().ApplicationPartitions
[System.DirectoryServices.ActiveDirectory.Forest]::GetCurrentForest().ForestMode
[System.DirectoryServices.ActiveDirectory.Forest]::GetCurrentForest().RootDomain
[System.DirectoryServices.ActiveDirectory.Forest]::GetCurrentForest().Schema
[System.DirectoryServices.ActiveDirectory.Forest]::GetCurrentForest().SchemaRoleOwner
[System.DirectoryServices.ActiveDirectory.Forest]::GetCurrentForest().NamingRoleOwner
###
# Get Active Directory Domain Information
  # Target the current (local) computer’s domain:
  $ADDomainInfo = [System.DirectoryServices.ActiveDirectory.Domain]::GetComputerDomain()
  # Or target the current user’s domain instead:
  # $ADDomainInfo = [System.DirectoryServices.ActiveDirectory.Domain]::GetCurrentDomain()
$ADDomainInfo.Forest
$ADDomainInfo.DomainControllers
$ADDomainInfo.Children
$ADDomainInfo.DomainMode
$ADDomainInfo.Parent
$ADDomainInfo.PdcRoleOwner
$ADDomainInfo.RidRoleOwner
# OR
[System.DirectoryServices.ActiveDirectory.Domain]::GetCurrentDomain().Forest
[System.DirectoryServices.ActiveDirectory.Domain]::GetCurrentDomain().DomainControllers
[System.DirectoryServices.ActiveDirectory.Domain]::GetCurrentDomain().Children
[System.DirectoryServices.ActiveDirectory.Domain]::GetCurrentDomain().DomainMode
[System.DirectoryServices.ActiveDirectory.Domain]::GetCurrentDomain().Parent
[System.DirectoryServices.ActiveDirectory.Domain]::GetCurrentDomain().PdcRoleOwner
[System.DirectoryServices.ActiveDirectory.Domain]::GetCurrentDomain().RidRoleOwner
# Note: Use [System.DirectoryServices.ActiveDirectory.Domain]::GetComputerDomain().<Property> for the local computer’s domain instead of the current user’s domain.
# Example: [System.DirectoryServices.ActiveDirectory.Domain]::GetComputerDomain().Forest
###
# Get the local computer’s site information:
$LocalSiteInfo = [System.DirectoryServices.ActiveDirectory.ActiveDirectorySite]::GetComputerSite()
$LocalSiteInfo.Name
$LocalSiteInfo.Domains
$LocalSiteInfo.Subnets
$LocalSiteInfo.Servers
$LocalSiteInfo.AdjacentSites
$LocalSiteInfo.SiteLinks
$LocalSiteInfo.InterSiteTopologyGenerator
$LocalSiteInfo.Options
$LocalSiteInfo.Location
$LocalSiteInfo.BridgeheadServers
$LocalSiteInfo.PreferredSmtpBridgeheadServers
$LocalSiteInfo.PreferredRpcBridgeheadServers
$LocalSiteInfo.IntraSiteReplicationSchedule
# OR
[System.DirectoryServices.ActiveDirectory.ActiveDirectorySite]::GetComputerSite().Name
[System.DirectoryServices.ActiveDirectory.ActiveDirectorySite]::GetComputerSite().Domains
[System.DirectoryServices.ActiveDirectory.ActiveDirectorySite]::GetComputerSite().Subnets
[System.DirectoryServices.ActiveDirectory.ActiveDirectorySite]::GetComputerSite().Servers
[System.DirectoryServices.ActiveDirectory.ActiveDirectorySite]::GetComputerSite().AdjacentSites
[System.DirectoryServices.ActiveDirectory.ActiveDirectorySite]::GetComputerSite().SiteLinks
[System.DirectoryServices.ActiveDirectory.ActiveDirectorySite]::GetComputerSite().InterSiteTopologyGenerator
[System.DirectoryServices.ActiveDirectory.ActiveDirectorySite]::GetComputerSite().Options
[System.DirectoryServices.ActiveDirectory.ActiveDirectorySite]::GetComputerSite().Location
[System.DirectoryServices.ActiveDirectory.ActiveDirectorySite]::GetComputerSite().BridgeheadServers
[System.DirectoryServices.ActiveDirectory.ActiveDirectorySite]::GetComputerSite().PreferredSmtpBridgeheadServers
[System.DirectoryServices.ActiveDirectory.ActiveDirectorySite]::GetComputerSite().PreferredRpcBridgeheadServers
[System.DirectoryServices.ActiveDirectory.ActiveDirectorySite]::GetComputerSite().IntraSiteReplicationSchedule
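Along the same lines, here is a minimal sketch (assuming the machine is domain-joined) for locating a domain controller without the ActiveDirectory module, using the same .Net classes:

# Find a domain controller for the current domain via .Net (hedged sketch, domain-joined machine assumed)
$DC = [System.DirectoryServices.ActiveDirectory.Domain]::GetCurrentDomain().FindDomainController()
$DC.Name
$DC.SiteName
$DC.IPAddress
$DC.Roles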

PowerShell: Get the Dates When the Active Directory Schema Was Updated

 

The Microsoft Scripting Guys blog has a great article on determining when schema updates were performed along with some information about the schema changes – at least enough to see if it was an Exchange update.

 


###########################
# Get Schema Update Dates #
###########################
# Code from: http://blogs.technet.com/b/heyscriptingguy/archive/2012/01/05/how-to-find-active-directory-schema-update-history-by-using-powershell.aspx
write-output "Reading all schema data... " `r
import-module activedirectory
$schema = Get-ADObject -SearchBase ((Get-ADRootDSE).schemaNamingContext) `
-SearchScope OneLevel -Filter * -Property objectClass, name, whenChanged,`
whenCreated | Select-Object objectClass, name, whenCreated, whenChanged, `
@{name="event";expression={($_.whenCreated).Date.ToShortDateString()}} | `
Sort-Object whenCreated

#"`nDetails of schema objects changed by date:"
#$schema | Format-Table objectClass, name, whenCreated, whenChanged `
#-GroupBy event -AutoSize

write-output "`nCount of schema objects changed by date:" `r
Write-output "This displays the approximate date each each schema update was performed." `r
$schema | Group-Object event | Format-Table Count,Name,Group –AutoSize
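If the ActiveDirectory module is not available, a rough equivalent can be built with System.DirectoryServices directly. This is a hedged sketch of the same idea, not the Scripting Guys’ code:

# Query the schema partition without the ActiveDirectory module and count schema objects created per date
$RootDSE  = [ADSI]"LDAP://RootDSE"
$SchemaNC = $RootDSE.schemaNamingContext.Value
$Searcher = New-Object System.DirectoryServices.DirectorySearcher
$Searcher.SearchRoot  = [ADSI]"LDAP://$SchemaNC"
$Searcher.SearchScope = 'OneLevel'
$Searcher.Filter      = '(objectClass=*)'
$Searcher.PageSize    = 1000
$Searcher.PropertiesToLoad.AddRange(@('name','whenCreated'))
$Searcher.FindAll() |
    ForEach-Object { [datetime]$_.Properties['whencreated'][0] } |
    Group-Object { $_.ToShortDateString() } |
    Sort-Object { [datetime]$_.Name } |
    Format-Table Count, Name -AutoSize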

LSASS Crashing, CNF Objects May Be the Cause

What Happens and How Do I Know if I’m Affected?

When CNF (conflict) mangled NTDS Settings objects are created, the Lsass.exe process may crash and unexpectedly reboot one or more domain controllers, so there is a pretty good chance you’ll know about it, even if you don’t immediately know the root cause. More specifically, you’ll see the following events in the Application log, which you can look for.

Log Name: Application
Source: Application Error
Date: DateTime
Event ID: 1000
Task Category: Application Crashing Events
Level: Error
Keywords: Classic
User: N/A
Computer: ComputerName
Description:
Faulting application name: lsass.exe, version: 6.1.7601.17725, time stamp: 0x4ec483fc
Faulting module name: ntdll.dll, version: 6.1.7601.18229, time stamp: 0x51fb164a
Exception code: 0xc0000374
Fault offset: 0x00000000000c4102
Faulting process id: 0x1f4
Faulting application start time: 0x01ceb94c671de3dd
Faulting application path: C:\Windows\system32\lsass.exe
Faulting module path: C:\Windows\SYSTEM32\ntdll.dll
Report Id: 80a2cd04-2540-11e3-99e2-441ea1d316a4
Faulting package full name: %14
Faulting package-relative application ID: %15

And

Log Name: Application
Source: Microsoft-Windows-Wininit
Date: DateTime
Event ID: 1015
Task Category: None
Level: Error
Keywords: Classic
User: N/A
Computer: ComputerName
Description:
A critical system process, C:\Windows\system32\lsass.exe, failed with status code 255. The machine must now be restarted.
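If you want to hunt for these events on your domain controllers, here is a hedged PowerShell sketch (run it on each DC, or add -ComputerName for remote queries; adjust to your environment):

# Look for the lsass.exe crash events described above (Event ID 1000 from "Application Error" and 1015 from Wininit)
Get-WinEvent -FilterHashtable @{LogName='Application'; ProviderName='Application Error'; Id=1000} -ErrorAction SilentlyContinue |
    Where-Object { $_.Message -like '*lsass.exe*' } |
    Select-Object TimeCreated, MachineName, Message
Get-WinEvent -FilterHashtable @{LogName='Application'; ProviderName='Microsoft-Windows-Wininit'; Id=1015} -ErrorAction SilentlyContinue |
    Select-Object TimeCreated, MachineName, Message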

Read more of the blog post:

http://blogs.technet.com/b/askpfeplat/archive/2014/06/23/lsass-crashing-cnf-objects-may-be-the-cause.aspx

 

PowerShell: Get Active Directory Instantiation Date

 

The Scripting Guys blog posted a very useful script on how to determine when the Active Directory Forest was stood up.

 


#############################
# Get AD Instantiation Date #
#############################
# Code from: http://blogs.technet.com/b/heyscriptingguy/archive/2012/01/05/how-to-find-active-directory-schema-update-history-by-using-powershell.aspx
write-output "Checking Active Directory Creation Date... " `r
write-output "Displaying AD partition creation information " `r

Import-Module ActiveDirectory
Get-ADObject -SearchBase (Get-ADForest).PartitionsContainer `
-LDAPFilter "(&(objectClass=crossRef)(systemFlags=3))" `
-Property dnsRoot,nETBIOSName,whenCreated | Sort-Object whenCreated | Format-Table dnsRoot,nETBIOSName,whenCreated -AutoSize
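If the ActiveDirectory module is not available, a hedged sketch of the same query against the Partitions container using System.DirectoryServices:

# List the domain partition crossRef objects and their creation dates without the ActiveDirectory module
$RootDSE  = [ADSI]"LDAP://RootDSE"
$Searcher = New-Object System.DirectoryServices.DirectorySearcher
$Searcher.SearchRoot = [ADSI]("LDAP://CN=Partitions," + $RootDSE.configurationNamingContext.Value)
$Searcher.Filter     = '(&(objectClass=crossRef)(systemFlags=3))'
$Searcher.PropertiesToLoad.AddRange(@('dnsRoot','nETBIOSName','whenCreated'))
$Searcher.FindAll() | ForEach-Object {
    [pscustomobject]@{
        dnsRoot     = $_.Properties['dnsroot'][0]
        nETBIOSName = $_.Properties['netbiosname'][0]
        whenCreated = $_.Properties['whencreated'][0]
    }
} | Sort-Object whenCreated | Format-Table -AutoSize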

PowerShell: Using a HashTable to Identify Active Directory Schema & Exchange Version

It’s easy to get the Active Directory schema version as well as the installed Exchange (schema) version by using the Active Directory PowerShell cmdlet Get-ADObject. This script uses a pre-built hash table to look up the friendly name for each version number.


###################################
# Create Schema Version Hashtable # 20140606-14
###################################
Write-Verbose "Create Schema Version HashTable `r "
$SchemaVersionTable =
@{
"13" = "Windows 2000 Active Directory Schema" ;
"30" = "Windows 2003 Active Directory Schema";
"31" = "Windows 2003 R2 Active Directory Schema" ;
"44" = "Windows 2008 Active Directory Schema" ;
"47" = "Windows 2008 R2 Active Directory Schema" ;
"51" = "Windows Server 8 BETA Active Directory Schema" ;
"56" = "Windows 2012 Active Directory Schema" ;
"69" = "Windows 2012 R2 Active Directory Schema " ;
"4397" = "Exchange 2000 RTM Schema" ;
"4406" = "Exchange 2000 SP3 Schema" ;
"6870" = "Exchange 2003 RTM Schema" ;
"6936" = "Exchange 2003 SP3 Schema" ;
"10637" = "Exchange 2007 RTM Schema" ;
"11116" = "Exchange 2007 RTM Schema" ;
"14622" = "Exchange 2007 SP2 & Exchange 2010 RTM Schema" ;
"14625" = "Exchange 2007 SP3" ;
"14726" = "Exchange 2010 SP1 Schema" ;
"14732" = "Exchange 2010 SP2 Schema" ;
"14734" = "Exchange 2010 SP3 Schema" ;
"15137" = "Exchange 2013 RTM Schema" ;
"15254" = "Exchange 2013 CU1 Schema" ;
"15281" = "Exchange 2013 CU2 Schema" ;
"15283" = "Exchange 2013 CU3 Schema" ;
"15292" = "Exchange 2013 SP1 Schema" ;
"15300" = "Exchange 2013 CU5 Schema"

}
################################
# Get AD Schema Version Number # 20111029-14
################################
Import-Module ActiveDirectory
$ADDomainPDCEmulator = (Get-ADDomain).PDCEmulator
Write-Output "Checking Schema version on the PDC Emulator ($ADDomainPDCEmulator) `r "
$ADSchemaConfigurationDistinguishedName = (Get-ADRootDSE -Server $ADDomainPDCEmulator).schemaNamingContext
$ADSchemaVersion = (Get-ADObject $ADSchemaConfigurationDistinguishedName -Property objectVersion -Server $ADDomainPDCEmulator).objectVersion
$ADSchemaVersionName = $SchemaVersionTable.Get_Item("$ADSchemaVersion")
Write-Output "The current AD Schema Version is $ADSchemaVersion which is $ADSchemaVersionName `r "
######################################
# Get Exchange Schema Version Number #
######################################
Write-Output "Checking Exchange Schema version `r "
$ExchangeSchemaConfigurationDistinguishedName = 'cn=ms-exch-schema-version-pt,' + $ADSchemaConfigurationDistinguishedName
$ExchangeSchemaVersion = (Get-ADObject $ExchangeSchemaConfigurationDistinguishedName -Property rangeUpper -Server $ADDomainPDCEmulator).rangeUpper
$ExchangeSchemaVersionName = $SchemaVersionTable.Get_Item("$ExchangeSchemaVersion")
Write-Output "The current Exchange Schema Version is $ExchangeSchemaVersion which is $ExchangeSchemaVersionName `r "
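For completeness, the AD schema version can also be read without the ActiveDirectory module. A hedged sketch using ADSI (it reuses the $SchemaVersionTable defined above):

# Read the schema objectVersion directly via ADSI (no ActiveDirectory module required)
$RootDSE    = [ADSI]"LDAP://RootDSE"
$SchemaHead = [ADSI]("LDAP://" + $RootDSE.schemaNamingContext.Value)
$ADSchemaVersionADSI     = $SchemaHead.objectVersion.Value
$ADSchemaVersionNameADSI = $SchemaVersionTable["$ADSchemaVersionADSI"]
Write-Output "AD Schema objectVersion: $ADSchemaVersionADSI ($ADSchemaVersionNameADSI)"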

PowerShell: Identifying Cloned Computers by CMID or SID

Here’s the PowerShell command for identifying the computer SID by listing local accounts (a local account’s SID is the machine SID plus a trailing RID):
Get-WmiObject -Class Win32_UserAccount -Filter "LocalAccount=True"

This command shows the information for the first local account in the list:
(Get-WmiObject -Class Win32_UserAccount -Filter "LocalAccount=True")[0]
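Building on the above, here is a hedged sketch that derives the machine SID itself by stripping the final RID block from a local account SID:

# The machine SID is a local account SID with the trailing RID removed (assumes at least one local account exists)
$LocalAccountSID = (Get-WmiObject -Class Win32_UserAccount -Filter "LocalAccount=True")[0].SID
$MachineSID = $LocalAccountSID -replace '-\d+$'
Write-Output "Machine SID: $MachineSID"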

Here’s a PowerShell command to run on each of the servers. If the result is the same, they have the same Client Machine ID (CMID):
Get-WmiObject -class SoftwareLicensingService | Select-object ClientMachineID

Alternatively, this script queries every server in the domain and reports each one’s CMID, so duplicates stand out:
Import-Module ActiveDirectory
$AllServers = Get-ADComputer -Filter { OperatingSystem -like "*Server*" } -Property OperatingSystem
ForEach ($AllServersItem in $AllServers)
{
$ServerName = $AllServersItem.DNSHostName
$ServerCMID = (Get-WmiObject -ComputerName $ServerName -Class SoftwareLicensingService).ClientMachineID
Write-Output "$ServerName has the CMID: $ServerCMID"
}

57 Tips Every Admin Should Know

GFI’s 57 Tips Every Admin Should Know:

The longer a person serves as a network admin, the more tips and tricks they are likely to pick up along the way. Some could be shortcuts, others might seem like magic, but all are intended to save you time and help you solve problems. Assume that all of these Windows commands should be run from an administrative command prompt if you are using Vista, Windows 7, or Windows 2008.

This list covers the following topics:

  • Active Directory
  • Windows Networking
  • Windows 7
  • Windows 2008
  • Linux


New APT Approaches

The Trend Micro Security Intelligence Blog has an interesting article on how hackers are using legitimate tools as part of APT attacks.

 

In our 2013 predictions, we noted how malware would only gradually evolve without much in the way of significant change. This can be seen in the use of some (otherwise legitimate) hacking tools in APT attacks.

How is this a problem? Hacking tools are grayware that anti-malware products do not always detect, or at least ethico-legal issues keep them from doing so. Unfortunately, this means less visibility in APT forensic investigations. It also saves attackers the trouble of writing their own tools. Some of the common hacking tools we see are:

  • Password recovery tools – tools for extracting passwords or password hashes stored by applications or the operating system in the local drive or in registry entries. These are typically used to clone or impersonate user accounts for obtaining administrator rights. Pass the hash technique is one common method for attackers to gain administrator rights via stolen password hashes.
  • User account clone tools – used to clone a user account once password has been obtained by the attacker. Upon acquiring enough privileges, the attacker can then execute malicious intent while bypassing the system’s security measures.
  • File manipulation tools – tools for manipulating files such as copying, deleting, modifying timestamps, and searching for specific files. It is used for adjusting timestamps of accessed files or for deleting components to cover tracks of compromise. It can also be used for searching key documents for extraction where the attacker can search for files with specific file extensions.
  • Scheduled job tools – software for disabling or creating scheduled tasks. This can help the attacker to lower the security of the infected system by disabling scheduled tasks for software updates. Likewise, it can also be used maliciously. For instance, the attackers can create a scheduled task that will allow them to automatically steal files within a certain timeframe.
  • FTP tools – tools that aid in FTP transactions like uploading files to a specific FTP site. Since FTP transactions would look less suspicious in the network, some APT threat actors prefer to upload stolen data to a remote FTP site instead of uploading them to the actual C&C server. It should be noted that there are several legitimate FTP applications, which may also be utilized by cybercriminals.
  • Data compression tools – these tools are neither malicious nor considered as hacking tools. In most cases, these are legitimate file compression tools, such as WinRAR, being utilized by attackers to compress and archive multiple stolen files. This aids the attacker in the data exfiltration phase where they can upload stolen documents as a single archive. In a few cases, however, we have seen these applications being packaged and configured to compress a predefined set of files.

How can we identify an APT using these tools?

We have seen how these tools are used in APTs to gain administrator rights and collect key documents. So how can IT administrators and power users then use this information to identify an APT that uses these tools?

  1. Suspicious instances of command shell processes may indicate possible compromise. The tools listed above are either command-line tools or run both from the command line and via GUI. Attackers use these tools through a hidden command prompt instance, so regularly checking your environment for unknown command shell processes can help you identify possible infection. Additionally, using process utilities such as Process Explorer will allow you to see the parameters in a command process. This may help you correlate possible components of an APT.
  2. The presence of tool(s), whether legitimate or not, can be a sign of compromise. Attackers have long been leveraging legitimate software for malicious purposes. As such, users should be wary of the software present on their systems and should be able to identify what they install. It may be tedious, yes, but being vigilant about files present in your system could spell the difference between mitigating an APT compromise and mass pilfering of your organization’s classified documents.
  3. In addition, we have observed that these tools are sometimes saved by the attackers using odd file names or with fake file extensions. Being able to identify added files in your system is again key in identifying possible compromise.
  4. Paying attention to FTP connections in the network logs is a good idea. While it is more common to check for malicious C&C connections, checking for FTP connections gives another opportunity to identify a breach in your network. In a corporate setting, FTP sites are usually Intranet sites, so it is easier to sort legitimate FTP traffic from malicious traffic. FTP transactions are significantly smaller than other types of communication on the network, which may allow you to identify a breach faster. Furthermore, checking for archive files or files with odd file names being uploaded to a remote site may also suggest compromise.
  5. Review scheduled jobs. Scheduled jobs are a common auto-start method not only for APTs, but for malware in general. Scrutinizing the properties of scheduled jobs will not only allow you to identify infection, but will also most likely help you identify components of the attack through the files they execute. Considering the growing number of APT campaigns today, identifying an existing APT compromise on an organization’s network is as important as preventing initial APT infection.

By understanding targeted attacks from different perspectives, users, security administrators, and security researchers are empowered to better combat these threats. Highlighting APT components, in this case, extends our visibility in identifying existing compromise by knowing what and where to look.

We previously noted that we will see an increase in attacks that have destructive capacity rather than motivated by espionage. Furthermore, localized attacks with certain defined conditions (like specific language settings, or geographic locations) will increase.
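As a practical illustration of item 1 above, here is a hedged PowerShell sketch (not from the Trend Micro post) that lists running command-shell processes with their command lines and parent process IDs for review:

# Enumerate cmd.exe instances so unexpected, hidden command shells (and the parameters passed to them) can be reviewed
Get-WmiObject -Class Win32_Process -Filter "Name='cmd.exe'" |
    Select-Object ProcessId, ParentProcessId, CommandLine, CreationDate |
    Format-Table -AutoSize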

PowerShell 101: PowerShell Guide/CheatSheet

Michael Sorens has put together a comprehensive guide to using PowerShell:

This series of articles evolved out of my own notes on PowerShell as I poked and prodded it to show me more. As my collection burgeoned, I began to organize them until I had one-line recipes for most any simple PowerShell task. Simple, though, does not mean trivial. You can do quite a lot in one line of PowerShell, such as grabbing the contents of specific elements of a web page or converting a CSV file into a collection of PowerShell objects.

  • PowerShell One-Liners: Help, Syntax, Display and Files
    begins by showing you how to have PowerShell itself help you figure out what you need to do to accomplish a task, covering the help system as well as its handy command-line intellisense. The next sections deal with locations, files, and paths: the basic currency of any shell. You are introduced to some basic but key syntactic constructs and then ways to cast your output in list, table, grid, or chart form.
  • PowerShell One-Liners: Variables, Parameters, Properties, and Objects
    covers variables, parameters, properties, and objects, rounded out with a few other vital bits on leveraging the PowerShell environment.
  • PowerShell One-Liners: Collections, Hashtables, Arrays and Strings
    covers the two fundamental data structures of PowerShell: the collection (array) and the hash table (dictionary), examining everything from creating, accessing, iterating, ordering, and selecting. Part 3 also covers converting between strings and arrays, and rounds out with techniques for searching, most commonly applicable to files (searching both directory structures as well as file contents).
  • Part 4 – pending.
    will be your information source for a variety of input and output techniques: reading and writing files; writing the various output streams; file housekeeping operations; and various techniques related to CSV, JSON, database, network, and XML.

Each part of this series is available as both an online reference here at Simple-Talk.com, in a wide-form as well, and as a downloadable wallchart (from the link at the head of the article) in PDF format for those who prefer a printed copy near at hand. Please keep in mind though that this is a quick reference, not a tutorial. So while there are a few brief introductory remarks for each section, there is very little explanation for any given incantation. But do not let that scare you off—jump in and try things! You should find more than a few “aha!” moments ahead of you!
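To give a flavor of the kinds of one-liners the guide catalogs, here are two hedged sketches of the tasks mentioned in the intro (the CSV file name and the URL are placeholders, not from the guide):

# Convert a CSV file into a collection of PowerShell objects; each row becomes an object with properties named after the header columns
$Rows = Import-Csv -Path .\servers.csv
# Grab specific elements of a web page (PowerShell 3.0+): list all link targets
(Invoke-WebRequest -Uri 'https://example.com').Links | Select-Object -ExpandProperty href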

Great InfoWorld Interview with Mark Russinovich on Azure and Cloud Computing

InfoWorld has a great Interview with Mark Russinovich, Microsoft Technical Fellow, on Azure and Cloud Computing.
I included my favorite quotes below:

Intro:

Mark Russinovich is a legendary figure in the computer industry. A former teenage hacker who went on to earn a PhD in computer engineering from Carnegie Mellon, Russinovich cofounded Winternals Software — a Windows utilities vendor renowned for understanding the guts of Windows as well as Microsoft itself.

After a stint at IBM’s Thomas J. Watson Research Center and after discovering a number of high-profile Windows security vulnerabilities, not to mention the infamous Sony rootkit, Russinovich joined Microsoft when Winternals was acquired in 2006. Russinovich is also an accomplished novelist, whose cyberthrillers Zero Day and Trojan Horse have been well received (the third novel in the series, Rogue Code, comes out this May).

Today, Russinovich is a Technical Fellow, the highest technical position at Microsoft. He’s the sole Technical Fellow in the Windows Azure Group, acting as lead architect for Microsoft’s bet-the-company cloud initiative — $15 billion has been invested in cloud infrastructure to date. Much of what Russinovich has been working on pertains to the complex automation necessary to manage that cloud infrastructure at scale. The interview began with an examination of Azure technology and moved to broader concerns about IT’s march to the public cloud. The following is an edited version.


On Microsoft Cloud Transparency:

I think that’s one place where we’ve been way more transparent than anybody else. I’ve given talks for three years since I joined Azure at TechEd and Build on Windows Azure Internals about how our virtual machine technology is implemented and how we implement that multitenancy. You don’t see Amazon or Google talking about that.


Azure Overview:

Sure. When it comes to virtual machines, which are really the building blocks of the cloud, we’ve got pools of servers, we’ve got something called a fabric controller, which is like the brain.

The Azure fabric. And that manages a pool of machines. And then there’s an application front-end, a virtual machine deployment front-end we call RDFE — Red Dog Front End. Red Dog is a carryover from Azure’s original code name at Microsoft.

Here’s what happens when a customer deploys a PaaS application (what we call a Cloud Service, a collection of virtual machines) or when they deploy IaaS as virtual machines: It goes to RDFE, then RDFE finds a fabric controller that has, based on heuristics, the best utilization and capacity available for the deployment and gives the deployment to the fabric controller, which then goes and finds servers to deploy the virtual machines onto.

It uses a bunch of heuristics as well as constraint satisfaction to figure out which servers are the ones that the virtual machines should land on. We’ve got the concept of update domains and fault domains, so that when the infrastructure is being updated we don’t take down the whole application. We split the application across different servers so that when we’re servicing the infrastructure of the servers, it’s only taking down a slice of the application.


Regarding the future of computing:

When I joined Microsoft, I’d done a lot of Windows stuff before, but operating systems had already pretty much matured. I mean, Windows today in the internals isn’t very different than 20 years ago, and Linux is the same way — just like UNIX back in the ’70s.

This cloud operating system, data center operating system, is brand new. So the problems are new, the algorithms are new, the computer science is new. How do you detect failures quickly? How do you respond to them? How do you best do resource allocation?


Active Directory as the central piece of Azure:

One of the most valuable assets that we recognize within Microsoft when it comes to cloud and getting that integration is Windows Azure Active Directory.

The name is not a mistake. It’s completely deliberate because Active Directory became the center of on-premises network architecture. And we see Windows Azure Active Directory becoming that for the cloud.


Cloud technology is in its infancy:

We’re constantly adding new functionality and features. Like I said, the cloud is new. If you look at the mature environment of the on-premises IT world, there’s not just one thing that does whatever you want it to, but probably 20 or 30 different vendors that offer products that do what you’re talking about. The cloud is not there yet. There are a lot of holes in the basic functionality, in the layered functionality of the services that would be added on top of that. This is why it’s going to be just a great economic opportunity for lots of people.


IT Career Advice:

If you look at the evolution of IT, people aren’t doing today what they were doing ten years ago. Change has just been a fact of life all along.

Now, of course, some changes are bigger than others. But change has been there all along. And if you’re not adapting, you shouldn’t be in this business. IT professionals, I think, have to step up and play a key role in this migration for their companies. Because if they don’t, shadow IT is just going to go around them.

 

Read the rest of the interview at InfoWorld.

 
