Thursday, July 28, 2011

Using PowerShell to write and execute CMD and BAT

Here is the important part, right up front:  remember your encoding when using Out-File.

There, done.

No, seriously – this is one of those silly things that I just spent a while figuring out.

I have a PowerShell script and it is querying for variables for me.  I have an executable that I need to run, and it will not call properly from PowerShell.  The answer: write your commands to a CMD file and then simply call it.

"@echo on" | out-File -filepath "$exPath\Thing.cmd"
        '"' + $ThingExe + '"' + ' config /user:"' + $AccountName + '" /pwd:"' + $Password + '" /dsn:"' + $exPath + '\sql.dsn"' | out-File -filepath "$exPath\Thing.cmd" -append -noclobber
        "net stop ThingService" | out-File -filepath "$exPath\Thing.cmd" -append -noclobber
        "net start ThingService" | out-File -filepath "$exPath\Thing.cmd" -append –noclobber

That is the PoSh to write the CMD file.  Great.  Now, execute it.

Immediately, an error:  ‘<box>@’ is not the name of a command

What?!?  Where is this <box> character coming from?  It comes from Unicode – Out-File writes Unicode (UTF-16) by default, and cmd.exe chokes on it.

Simply add -encoding ASCII to each and every Out-File call.  The first line will look like this:

"@echo on" | out-File -encoding ASCII -filepath "$exPath\Thing.cmd"

Remember, you need to add the encoding to all of the Out-File commands that write to the file.  If you don’t, the lines that are missing it will end up looking like “@  e c h o  o n” while the lines with the encoding set to ASCII will look like “@echo on”.

And then to execute the CMD just add the line to your script: 

& $exPath\Thing.cmd

PS – for me the $exPath is the current path where the script is executing.  You can get this with:

$exPath = Split-Path -parent $MyInvocation.MyCommand.Definition
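As an aside – since every line must carry the same encoding, building the whole file in one write is less error-prone.  Here is a minimal sketch of the same CMD file built with a here-string (assuming the same variables as above); one Out-File call means one place to remember the encoding:

$cmdLines = @"
@echo on
"$ThingExe" config /user:"$AccountName" /pwd:"$Password" /dsn:"$exPath\sql.dsn"
net stop ThingService
net start ThingService
"@
$cmdLines | out-File -encoding ASCII -filepath "$exPath\Thing.cmd"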

Wednesday, July 20, 2011

Hyper-V appears to run out of RAM when there is plenty

Here is one issue that I have been tipping folks off to in the forum for some time.

The scenario is:  I have an environment, it is running great, totally stable.  At some point I try to do something and I am told there are not enough resources.

If I start to look at memory counters it appears that the management OS of the Hyper-V Server is running out of RAM.

The other thing – if you do all the math, there is adequate RAM in the system for the management OS and all of the VMs.

What I have described is the behavior that is seen.  And the messages that folks see on the screen make them believe that the server does not have enough RAM to do what it needs to do, such as starting a VM.  But it really does.

The other symptom that might be seen is that the system appears to be sluggish.

The resolution that I tell folks is to be sure to logout of the Hyper-V Server when they are done administering.

This is where an unmentioned common thread appears:  in almost all of these cases the symptom is on a full installation of Windows Server 2008 / 2008 R2 with Hyper-V.

And the most important part is that the server is administered using remote sessions, and the administrators do not log out of their sessions – they simply disconnect.

What is happening is that the user shell is slowly consuming more and more resources.  This gets especially high if you open the VM consoles using the console application, or if you leave the consoles open and disconnect.

What I have noticed is that the simple practice of logging out of your remote session cleans up all of this extra used RAM – which simply reinforces that this is user-level behavior.

My recommendation to you – always log out.  If you have administrators that don’t comply, use an old-fashioned Group Policy to automatically log out disconnected sessions (or sweep them up yourself, as in the sketch below).  Since these are VMs and VM console sessions, nothing will be lost.
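If you do want to sweep up stragglers with a script, here is a rough sketch that logs off every disconnected session using the built-in quser and logoff tools (the column parsing assumes the default quser output format):

# Log off every disconnected session (run elevated).
# quser shows disconnected sessions with a state of "Disc"; for those rows
# the session-name column is blank, so the session ID is the second field.
quser 2>$null | Select-String "Disc" | ForEach-Object {
    $id = ($_.ToString().Trim() -split '\s+')[1]
    logoff $id
}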

The other little trick – if this is a situation where the system won’t let you power on a VM – simply try to power it on three times.  That magical third attempt forces RAM recovery and the system will reclaim resources.

Also, this ties into the fact that the RAM of the management OS is dynamic (it always has been) and that it is also limited – it cannot consume all of the RAM of the hardware.

Monday, July 18, 2011

Granting Network Service permissions to a Certificate’s Private key

There are many, many reasons why you want your applications to run under the more restricted Network Service instead of the more privileged Local System.

The problem that you run into is when certificates are involved.

A Local Machine Certificate is generally available to processes running as Local System by default.

In my case I have Azure injecting the certificate into the role for me and I have a legacy application component that needs to be able to use that certificate.  For security reasons this service runs as the pretty restricted Network Service. 

If I simply add the application and point to the certificate, it cannot use the certificate to perform encryption because the application does not have access to the private key.  Once again, script it!

My script assumes one thing: that you have gotten the actual SSL certificate that you want to use.  There are lots of ways to get the certificate; here is what I used:

$sslCert = Get-ChildItem Cert:\LocalMachine\My | where {$_.Subject -match "cloudapp.net"}

Here is the script that does the rest:

# Find the file under MachineKeys that backs the certificate's private key
$sslCertPrivKey = $sslCert.PrivateKey
$privKeyCertFile = Get-Item -path "$ENV:ProgramData\Microsoft\Crypto\RSA\MachineKeys\*" | where {$_.Name -eq $sslCertPrivKey.CspKeyContainerInfo.UniqueKeyContainerName}
# Fetch only the Access portion of the ACL (see the gotcha below)
$privKeyAcl = (Get-Item -Path $privKeyCertFile.FullName).GetAccessControl("Access")
# Grant Network Service read access to the key file
$permission = "NT AUTHORITY\NETWORK SERVICE","Read","Allow"
$accessRule = new-object System.Security.AccessControl.FileSystemAccessRule $permission
$privKeyAcl.AddAccessRule($accessRule)
Set-Acl $privKeyCertFile.FullName $privKeyAcl

From the certificate we can discover its private key.  Using that I can then turn to the file system and find the physical file that backs the private key.

The gotcha resides in the line that sets $privKeyAcl.  Notice the GetAccessControl(“Access”) – that only fetches the Access properties; if you don’t use that, you get all properties and will end up with an error when you try to Set or Add the new permissions.  (Thank you to Bilal Aslam for posting the workaround here.)

The rest simply covers modifying the ACL of the file system object.
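To sanity-check the result, you can read the ACL back; a quick sketch:

# Confirm that Network Service now has read access on the key file
(Get-Acl -Path $privKeyCertFile.FullName).Access | where {$_.IdentityReference -like "*NETWORK SERVICE*"}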

I hope you found this one useful.

Thursday, July 14, 2011

FirstLogonCommands to configure your images on deployment when user context is required

This is an old trick and I am sure that there are more elegant ways to handle this.  However, I thought I would share how I am using AutoAdminLogon and FirstLogonCommands in my sysprep unattend answer file to do some heavy lifting for me: modifying server settings and driving application install and configuration.

This came about through using an Azure VM role and being forced to complete as much setup as I can without having to get to the console of the VM.  It is amazing how much you can automate when you get creative.

Let me get one thing out of the way: can this be used securely?  Sure, why not, with some proper precautions.  First, encrypt your admin user password; second, don’t use the built-in administrator; third, know that no one can get to the console of the machine (totally headless); fourth, disable the admin account and log out as your last step.

All of the settings that I am referring to are in the “Microsoft-Windows-Shell-Setup” section of your unattend.xml answer file.

First – you have to create and provide a password for the local administrator account.  Note, I am naughty by using the built-in local administrator.

<UserAccounts>
  <AdministratorPassword>
    <Value>IdidntEncryptMineButYouShould</Value>
    <PlainText>true</PlainText>
  </AdministratorPassword>
</UserAccounts>

Then I need to enable AutoAdminLogon

<AutoLogon>
  <Password>
    <Value>IdidntEncryptMineButYouShould</Value>
    <PlainText>true</PlainText>
  </Password>
  <Username>Administrator</Username>
  <LogonCount>1</LogonCount>
  <Enabled>true</Enabled>
</AutoLogon>

Then I define all of my tasks that run when the first user with administrator credentials logs on to the machine:

<FirstLogonCommands>
  <SynchronousCommand wcm:action="add">
    <Order>1</Order>
    <CommandLine>C:\MyFolder\vjRedist64\install.exe /q</CommandLine>
    <Description>Install Visual J# Redistribution</Description>
  </SynchronousCommand>
  <SynchronousCommand wcm:action="add">
    <Order>2</Order>
    <CommandLine>%WINDIR%\System32\WindowsPowerShell\v1.0\PowerShell.exe -command start-sleep 120</CommandLine>
    <Description>Wait for j# install</Description>
  </SynchronousCommand>
  <SynchronousCommand wcm:action="add">
    <Order>3</Order>
    <CommandLine>%WINDIR%\System32\WindowsPowerShell\v1.0\PowerShell.exe -command Import-Module ServerManager; Add-WindowsFeature Web-Server; Add-WindowsFeature Web-Asp-Net; Add-WindowsFeature Web-Windows-Auth; Add-WindowsFeature Web-Metabase</CommandLine>
    <Description>Add ASP.Net and IIS6 Metabase compatibility</Description>
  </SynchronousCommand>
  <SynchronousCommand wcm:action="add">
    <Order>4</Order>
    <CommandLine>%WINDIR%\System32\WindowsPowerShell\v1.0\PowerShell.exe -command set-executionpolicy remotesigned -force >> C:\Users\Public\Documents\setExecution.log</CommandLine>
    <Description>Set the ExecutionPolicy to RemoteSigned for the setup script to run</Description>
  </SynchronousCommand>

</FirstLogonCommands>

Note that there is a sequence number; this way you can simulate a workflow by having your tasks or scripts execute in a synchronous order.  You just have to watch out for those tasks that run off on their own threads and don’t execute within the context of the command window where the script executes (that Visual J# installer is a perfect example).  You can manage these spawned processes with PowerShell, but not with a batch command.
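For example, rather than the fixed start-sleep 120 above, the FirstLogonCommand could run PowerShell and block on the installer directly.  A minimal sketch (same installer path as in the example):

# Launch the quiet installer and wait for it to exit before the
# next synchronous command runs.
Start-Process -FilePath "C:\MyFolder\vjRedist64\install.exe" -ArgumentList "/q" -Wait

If the setup hands off to a second process, you may still need a Wait-Process on that process name once you discover what it is.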

Wednesday, July 13, 2011

Using the Azure Fabric to add certificates to your VM Role

A really useful feature of Azure is that it can inject elements into the Role instances as it applies the configuration.

This is super, extra useful because all roles are sysprep’d images.  This includes your VM Roles. 

If you follow the Azure rules for creating your VM Roles you must prepare the VHD image with sysprep.

I don’t think this is very important if you only have one instance – but the Azure assumption is multiple instances of any role.  With that assumption the use of sysprep applies.

The problem is certificates.  If I sysprep my VHD I break the private key of my certificate as a new private key is generated.

The Visual Studio interface does not have a Certificates tab for the VM Role.  However, don’t let this stop you.  It is a simple edit of the Service Definition and the Service Configuration.

In the ServiceDefinition.csdef add a Certificate entry that names the certificate and the certificate store in which to place it.

<Certificates>
  <Certificate name="MyCertificate" storeLocation="LocalMachine" storeName="My" />
</Certificates>

This example places the certificate “MyCertificate” in the Local Machine Personal store.

In the ServiceConfiguration.cscfg add a mapping entry for the certificate you added to your Azure Service and this Role.

<Certificates>
  <Certificate name="MyCertificate" thumbprint="8F4A08C8A0**************A**E482****CF4AB" thumbprintAlgorithm="sha1" />
</Certificates>

This maps the thumbprint that Azure knows to the name you assigned the certificate in the ServiceDefinition – and thus to the store in which to place it.  And since the certificate you load into Azure includes both the public and private keys, the certificate is fully functional once the Role instance is provisioned.
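Once the instance is provisioned you can verify the injection from a PowerShell prompt on the instance; a quick sketch (paste in your real thumbprint):

# Look the certificate up by thumbprint in the store named in the csdef
$thumb = "PASTE-YOUR-THUMBPRINT-HERE"
Get-ChildItem Cert:\LocalMachine\My | where {$_.Thumbprint -eq $thumb} | Format-List Subject, HasPrivateKey, NotAfter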

Now, all this being described…  If you use a Web or Worker Role – just use the Certificates tab in the GUI.  Hopefully, as the VM Role evolves, it will become as easy – for all the same reasons.

Wednesday, July 6, 2011

Adding a Certificate to the Trusted Root CA Store using PowerShell

Here is a little reminder for myself.

My scenario is that I am adding a simple public certificate to a Local Computer certificate store.  And I need to script it with PowerShell.

I have actually been searching around for this one for a bit, and all the results I find make it seem really, really complex – and it isn’t.  But there are some gotchas that need to be dealt with.

Here is the script:

# Look for a .cer file sitting in the same folder as this script
$certFile = get-childitem $exPath | where {$_.Extension -match "cer"}
if ($certFile -ne $NULL) {
    "Discovered a .cer in the same folder as this script, installing it in the LocalMachine\Root certificate store.."
    # Re-open the file as a certificate object, not a file object
    $cert = New-Object System.Security.Cryptography.X509Certificates.X509Certificate2($certFile.FullName)
    # The store must be opened ReadWrite before Add will succeed
    $store = get-item Cert:\LocalMachine\Root
    $store.Open("ReadWrite")
    $store.Add($cert)
    $store.Close()
}

$exPath is the path where my script is executing.  I get that with: $exPath = Split-Path -parent $MyInvocation.MyCommand.Definition

The gotchas are: 

  • Getting the certificate as a certificate object – notice that when I get $cert I am actually getting the $certFile object as a new object that is a certificate, not a file.
  • Opening the store – if you try $store.Add without opening it read/write you actually get a really strange .ctor (a constructor) error.
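After the script runs you can confirm the add using the same certificate provider; a quick sketch:

# $cert is the X509Certificate2 object created in the script above
Get-ChildItem Cert:\LocalMachine\Root | where {$_.Thumbprint -eq $cert.Thumbprint}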

I use this to include a private Root Certificate Authority with my Azure Service.  I simply add the .cer to the same folder in the Role project as my PowerShell script and publish.

I have my Azure Service certificate and private key being injected by the Azure Fabric and I use this little loop to add my Private Certificate Authority Certificate to the Local Machine Trusted Root Certificate Authorities store.  Thus completing my certificate chain and making my certificate useful – without buying a public certificate or messing with a wildcard public certificate.

Tuesday, July 5, 2011

role discovery data is unavailable with Worker Role front end

When working with the current state of the Azure platform the only thing that I can say is that creativity is king and assumptions are many.

I have spent more time working around default behavior than I like to mention.  And, mind you, I am working with a lot of ‘beta’ features.

I recently ran into a problem with a script that I found simply baffling.  It turns out it is just a blocked / hidden setting.

For months now I have been tuning PowerShell scripts that interact with the Azure Service Runtime to enumerate internal endpoints and settings of other roles in the Azure environment.  The cmdlet Get-RoleInstance has been highly useful to me.
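For flavor, the same enumeration can also be done against the .NET Service Runtime API directly.  This is a rough sketch, not my production script, and it assumes the Microsoft.WindowsAzure.ServiceRuntime assembly is already loaded in the session:

# Walk every role, instance, and endpoint the fabric knows about
$roles = [Microsoft.WindowsAzure.ServiceRuntime.RoleEnvironment]::Roles
foreach ($role in $roles.Values) {
    foreach ($instance in $role.Instances) {
        foreach ($ep in $instance.InstanceEndpoints.Values) {
            "{0} {1} -> {2}" -f $role.Name, $instance.Id, $ep.IPEndpoint
        }
    }
}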

In my current environment I am using a Worker Role as my front end for the Service (oh, gosh, not a Web Role!?).  Here is where stuff gets weird.  It has an https input endpoint.  (gasp!)

First I have to make this a TCP endpoint, since the rules of Visual Studio won’t let me use an input endpoint of https with a Worker Role.  Fair enough.  I can still have my certificate injected.

Now, I set it all up, I add my script, and my .CMD startup task (just as I have with other roles) and my script fails with this strange error:  Get-RoleInstance : role discovery data is unavailable 

All I can wonder is: what the heck?!  I RDP to the instance and try the cmdlet interactively and I get the same.  This makes no sense.  This works in my Web Role and my VM Role, why the problem here?

A bit of searching and I run across some WCF mentions and the local emulator and other things.  Okay, change the search string and try again.  After an hour or so I ran across a hidden property:      <Runtime executionContext="elevated" /> 

I added this to my Worker Role settings in the ServiceDefinition.csdef file and re-published.  And, hey, it works.
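For reference, here is a minimal sketch of where that element lands in the csdef; the role and endpoint names are placeholders, not the real ones from my service:

<WorkerRole name="MyFrontEnd" vmsize="Small">
  <Runtime executionContext="elevated" />
  <Endpoints>
    <InputEndpoint name="HttpsIn" protocol="tcp" port="443" />
  </Endpoints>
</WorkerRole>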

I can only speculate that this somehow affects the firewall and fencing set-up of the instance with the fabric, allowing both the input endpoint and the internal instance-to-fabric communication to share the same port.  A few articles that I found mentioned communication to the load balancers; however, I don’t think this is really it, as the Service Runtime should be querying the Fabric – though I am sure that a load balancer fencing rule was involved.

On to the next script bug.