ADFS Publishing via Azure Traffic Manager

Start with a connection from your workstation to the Traffic Manager; this may require a settings file if you are not configured for federated identity, which, assuming this is your first Federation Service to publish publicly, may well be the case.

Azure: Using a Microsoft Account

If you are using a Microsoft Account for your authentication, then we will need to get our Azure Publishing Settings file, and place it on your workstation. To locate this file, you can use the following command which will open your browser.

Once the browser is open, you will need to authenticate with your account details, after which you will be provided with a link to download the settings file.

PS> Get-AzurePublishSettingsFile

Continue to download the settings XML file to your machine, then return to the PowerShell prompt so that we can register this account information on our workstation.

PS> dir .\Downloads

    Directory: C:\Users\Damain\Downloads

Mode                LastWriteTime     Length Name
----                -------------     ------ ----
-a---         2/17/2015   8:55 AM       7443 2-17-2015-credentials.publishsettings


PS> Import-AzurePublishSettingsFile '.\2-17-2015-credentials.publishsettings' -Verbose

VERBOSE: Setting: Microsoft.Azure.Common.Extensions.Models.AzureSubscription as the default and current subscription.
To view other subscriptions use Get-AzureSubscription


Id          : d1d475c9-5a29-4a54-a36c-aa257705612f
Name        : My Sandbox
Environment : AzureCloud
Account     : Cxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx4
Properties  : {[SupportedModes, AzureServiceManagement], [Default, True]}

Id          : 3719257d-381a-45ca-a7aa-ea741ab966e1
Name        : My Production
Environment : AzureCloud
Account     : 3xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx4
Properties  : {[SupportedModes, AzureServiceManagement]}

Assuming there are no issues with the XML file, once it has been imported we should see any subscriptions associated with the account presented back to us, as in the example above.

Next, let’s check what accounts were imported as part of this process

PS> Get-AzureAccount

Id                                       Type        Subscriptions                          Tenants
--                                       ----        -------------                          -------
Cxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx4 Certificate d1d475c9-xxxx-xxxx-xxxx-xxxxxxxx612f
3xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx4 Certificate 3719257d-xxxx-xxxx-xxxx-xxxxxxxx66e1

Azure Traffic Manager

Setting up the Azure Traffic Manager is quite trivial: all we need to know is the FQDN under which we will publish the service. Note that this name will end with the mandatory trafficmanager.net suffix; we will alias it with our corporate DNS later.

Geo-Awareness

As Azure is globally distributed, we can take geo-balancing into account in our implementation by educating Azure about where our services are hosted relative to the Azure points of presence around the globe. To determine the current Azure locations we can use the following PowerShell command

PS> Get-AzureLocation | Select-Object Name

Name
----
West Europe
North Europe
East US 2
Central US
South Central US
West US
North Central US
East US
Southeast Asia
East Asia
Japan West
Japan East
Brazil South

Selecting the Subscription to work with

We will begin by selecting the Azure subscription to which we will be adding this service. This is of course redundant if you only have a single subscription, as it will be selected by default; but when you have multiple subscriptions, the correct one will not necessarily be selected automatically.

To select the subscription you are going to work with, we can use a simple PowerShell command

PS> Select-AzureSubscription -SubscriptionName "My Production"

Add a new Traffic Manager Service

We will continue to manage the Traffic Manager from PowerShell, as the current web UI does not permit us to manage services that use external endpoints; the portal does, however, offer the ability to monitor the services.

When creating a new Traffic Manager service, we begin with a profile, which we give a friendly name to help identify it. In addition we define what the service will be published as (the domain name), the load balancing method to use, a TTL for the service, and how to validate the keep-alive on the endpoints of the service.

PS> $profile = New-AzureTrafficManagerProfile -Name "Production ADFS" -DomainName "mysts.trafficmanager.net" -LoadBalancingMethod "Performance" -Ttl 30 -MonitorProtocol "Http" -MonitorPort 80 -MonitorRelativePath "/adfs/probe"

In the example above, the following options have been selected to support ADFS as a service

  • Name: Production ADFS
  • Domain Name: mysts.trafficmanager.net
  • Load Balancing Method: Performance
  • TTL: 30 seconds
  • Monitoring
    • Protocol: HTTP
    • TCP Port: 80
    • Relative Path: /adfs/probe

      This path is published over HTTP only by the WAP service, and requires at least the August 2014 update rollup for Windows Server 2012 R2 to be deployed.
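
Before relying on this monitor, it is worth confirming that the probe actually answers on each node; a quick check from PowerShell might look like the following, using the node name we will register as an endpoint shortly (a healthy WAP server returns an HTTP 200).

PS> Invoke-WebRequest -Uri "http://adfs-node1.diginerve.net/adfs/probe" -UseBasicParsing | Select-Object StatusCode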

We can check the new profile has been added, using the following command

PS> Get-AzureTrafficManagerProfile

Name                  DomainName                          Status
----                  ----------                          ------
Production ADFS       mysts.trafficmanager.net            Enabled

To get specific details, we can just look at the values which have been returned to our handling variable, which was called $profile

PS> $profile

TimeToLiveInSeconds : 300
MonitorRelativePath : /
MonitorPort         : 80
MonitorProtocol     : Http
LoadBalancingMethod : Performance
Endpoints           : {}
MonitorStatus       : Inactive
Name                : Production ADFS
DomainName          : mysts.trafficmanager.net
Status              : Enabled

Adding an endpoint to the new Service

With the service now registered, we can begin adding the endpoints of the services we are about to balance. As you add the endpoints, you will notice that you do not specify protocols or ports; this is not configurable in the current version of Traffic Manager.

Adding external endpoints is possible only from PowerShell. The following command adds an endpoint to the profile, sets its status to enabled, defines the domain name of the service we are adding (for example 'adfs-node1.diginerve.net'), and indicates where we are hosting this service relative to the Azure locations.

PS> Add-AzureTrafficManagerEndpoint -TrafficManagerProfile $Profile -Status "Enabled" -Type "Any" -DomainName "adfs-node1.diginerve.net" -Location "East US"

TimeToLiveInSeconds : 300
MonitorRelativePath : /adfs/probe
MonitorPort         : 80
MonitorProtocol     : Http
LoadBalancingMethod : Performance
Endpoints           : {adfs-node1.diginerve.net}
MonitorStatus       : Inactive
Name                : Production ADFS
DomainName          : mysts.trafficmanager.net
Status              : Enabled
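
The same command can be repeated to stage each additional node; for example, a second (hypothetical) node hosted in a different region:

PS> Add-AzureTrafficManagerEndpoint -TrafficManagerProfile $Profile -Status "Enabled" -Type "Any" -DomainName "adfs-node2.diginerve.net" -Location "West Europe"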

It is important to note that these changes are not actually deployed to Azure Traffic Manager yet; instead they are staged in your PowerShell session. This allows us to add multiple endpoints and check that the configuration is correct by inspecting the variable's values; for example, the following lets us look closer at the endpoints.

PS> $Profile.Endpoints

DomainName        : adfs-node1.diginerve.net
Location          : East US
Type              : Any
Status            : Enabled
MonitorStatus     : Online
Weight            : 1
MinChildEndpoints :

Publish the new Profile Settings

When we are ready to proceed with publishing the new settings, all we need to do is pass these details to the Set-AzureTrafficManagerProfile command, as illustrated below

PS> $Profile | Set-AzureTrafficManagerProfile -verbose

TimeToLiveInSeconds : 300
MonitorRelativePath : /adfs/probe
MonitorPort         : 80
MonitorProtocol     : Http
LoadBalancingMethod : Performance
Endpoints           : {adfs-node1.diginerve.net}
MonitorStatus       : CheckingEndpoints
Name                : Production ADFS
DomainName          : mysts.trafficmanager.net
Status              : Enabled

Query Azure Traffic Manager for its Current Configuration

Finally, we will loop back and ask our PowerShell session to query Azure for the current state of the Traffic Manager configuration.

PS> $Profile = Get-AzureTrafficManagerProfile -Name "Production ADFS"

And as before we can now inspect the values in the variable as returned from Azure's Traffic Manager

PS> $Profile

TimeToLiveInSeconds : 300
MonitorRelativePath : /adfs/probe
MonitorPort         : 80
MonitorProtocol     : Http
LoadBalancingMethod : Performance
Endpoints           : {adfs-node1.diginerve.net}
MonitorStatus       : Online
Name                : Production ADFS
DomainName          : mysts.trafficmanager.net
Status              : Enabled
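
All that remains is the corporate DNS alias mentioned at the start. On a Windows DNS server this is a single CNAME record pointing our published name at the Traffic Manager name; a minimal sketch, with hypothetical zone and record names:

PS> Add-DnsServerResourceRecordCName -ZoneName "diginerve.net" -Name "sts" -HostNameAlias "mysts.trafficmanager.net"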

Now, that was pretty easy, right?

TechDays Online UK 2015

Tuesday February 3rd 2015 sees the start of this year's FREE online (or in person) technical event hosted by the Microsoft UK team. This is my 3rd time working on the event, and this year, for the first time, I am dumping the Skype connection and joining the team live in Reading to present on Day 2, The Journey to the Cloud.

TechDays is an amazing event, covering both development and operations tracks, with some really great information on how to get to grips with the current generation of Microsoft technologies. Some very well-known names are getting involved, including Microsoft's very own Jeffrey Snover and Scott Hanselman; and if you pay any attention to the Microsoft rumour machine, then Mary Jo Foley is a name you will instantly recognise.

To get involved, all you need is a browser and some time; then OPEN THIS LINK

For more information on Day 1 and Day 3, please check this link for the latest details


The Day 2 agenda is as follows:

09:30-09:40: Overview of TechDays Online 2015 Day 2 (Andrew Fryer, Microsoft UK)
An overview of the sessions for Day 2 of TechDays Online 2015, and how to make the most of your participation using the online chat tool to interact with our experts, and where to find more information on the topics of particular interest to you.

09:40-10:15: What's new in Windows Server / Hyper-V - a technical preview (Gordon McKenna, Microsoft Most Valued Professional (MVP))
Coming Soon!

10:30-11:05: How to find out what's happening in your datacentre with Azure Insights (Sam Erskine, Microsoft Most Valued Professional (MVP))
How do you view and execute IT Service Management (ITSM) for your datacentre and cloud services? What if you are not yet in the cloud and just have a traditional data centre? This session provides some of the many answers to these questions by introducing you to Microsoft Azure Operational Insights (OpInsights). You will learn how you can start benefiting from this flagship Microsoft service with virtually no change to your datacentre. You will get a practical, scenario-based introduction to OpInsights and learn about the three setup/connection options available. You will explore the ITSM facets known as Intelligence Packs on multiple device interfaces. Finally, you will get an introduction to the multi-tenancy features. This session is for IT pros, managers and anyone with a vested interest in continual organisational IT service improvement.

11:20-11:55: Host your own cloud with the Windows Azure Pack (Damian Flynn, Microsoft Most Valued Professional (MVP))
Not using System Center? Unsure about Azure? Curious about cloud? In this session we will demonstrate the Windows Azure Pack and how to go from zero to cloud in little time, initially delivering self-service databases, websites, and finally infrastructure, in a consistent, service-orientated manner.

12:10-12:45: Taking scripting to the next level with Service Management / Azure Automation (Jonathan Noble, Microsoft Most Valued Professional (MVP))
By now you should be sold on the advantages of systems automation, and hopefully you're using PowerShell to help you with that. Now you can use your PowerShell experience to create workflow-based runbooks to add service automation to your private or public clouds using SMA or Azure Automation. DevOps-simplify your infrastructure to save money, improve reliability and get some of your life back!

13:30-14:05: A new home for your old applications (Susan Smith, Microsoft UK)
This session will focus on containerisation using Docker on Azure. The concept of containerisation will be introduced and compared with other virtualisation technologies, followed by an overview of Docker. The technical demos will introduce elementary Azure tasks, such as creating a Docker host on Azure. Then a step-by-step technical demo will illustrate how you can re-invent your legacy applications through migration to Docker on Azure, by deconstructing and then containerising your legacy app.

14:20-14:55: 20%+ of Azure runs on Linux - why is this important and how to do it well? (Boris Devouge, Microsoft UK)
With Microsoft Azure providing Infrastructure as a Service (IaaS), and with recent announcements showing that 20% of workloads in Azure run on Linux, this session will explore the best practices and different options to run and/or migrate Linux-based workloads in the cloud. We will also review the large subset of Open Source Software (OSS) and technologies available on Azure in areas such as Test/Dev, CI/CD, Big Data (Hadoop) and containerisation (Docker).

15:10-15:45: DevOps in Microsoft Azure with Chef and Puppet for heterogeneous cloud environments (Tarun Arora, Microsoft Most Valued Professional (MVP))

16:00-16:35: Make Azure your DMZ (Simon Skinner, Microsoft Most Valued Professional (MVP))
The advantages of using Azure today are more than beneficial to any organisation; however, not all understand the baby steps we can take to start this journey. We are in the mature era of the web, with a rapidly growing struggle to get representation or even a full-blown commerce site into full operation. Many companies want to have their site/s integrated into their internal systems, but without impacting their networks or bandwidth; both have a cost. The solution could be simpler than you think: Azure! Make Azure 'your' DMZ. There is more resilience and bandwidth than most DMZs, and your company stays in control. You choose the IP range and can even control the traffic through the firewall. Join my session and learn just how easy this can be.

16:50-17:25: Microsoft Corporate Keynote and Interview - Jeffrey Snover (Jeffrey Snover, Distinguished Engineer at Microsoft)
This session will focus on the open source interoperability that PowerShell provides for other platforms, and also provide an update on the Open Management Infrastructure (OMI) initiative that Microsoft has introduced with the Open Group.

17:25-17:35: Wrap-up of TechDays Online Day 2 (Andrew Fryer, Microsoft UK)

2015 Cisco Champion!

Well now I am totally lost for words!

Over the last few years I have focused primarily on the fabric which binds the magic and power of our clouds and services hosted within the data centre, with some emphasis on networking technologies and software-defined networks. Of course I have singled out some specific brands with which I have personal experience, but never did I expect to receive an award recognising the work I have done and/or shared on this platform.

But today that is exactly what has just happened!

Congratulations: Welcome to the Cisco Champions Program 2015!

Because of your excellent contributions to the IT community, you have been chosen out of hundreds of applicants, to be a member of the Cisco Champions team in 2015. Congratulations!

Cisco Champions are seasoned IT technical experts and influencers who enjoy sharing their knowledge, expertise, and thoughts across the social web and with Cisco. The Cisco Champions program encompasses different areas of interest, such as Data Center, Internet of Things, Enterprise Networks, Collaboration and Security. Cisco Champions are located all over the world.

This is amazing

SMA: Get-OrchestratorRunbook Cannot Bind

As you create and publish runbooks in SMA, some of these may need to execute runbooks which are hosted on Orchestrator. Over time these might all be working just as expected; however, one day out of the blue your latest masterpiece of orchestration and workflows may decide to turn evil and start failing with no logical explanation evident…

The only tell-tale sign is visible in the history log for the workflow which is failing, with this really ugly message appearing

The source of this error is the SMA command Get-OrchestratorRunbook, which in our workflow would be defined similarly to line 28 in the sample below.

So, what is the issue? All the parameters are correct:

  • $url – points to the correct location in Orchestrator ODATA Web Services
  • $PSUserCred – references real credentials with access to the Orchestrator ODATA Web Services
  • $MyRunbookPath – is the correct path to our target runbook

Everything checks out; we have many more runbooks hosted on the same Orchestrator environment, all working fine, and to make things worse, many of them are also being successfully called from SMA at this time…

Investigation

We begin the diagnostics by looking at the error log as shown above, but sadly it really does not offer a lot of detail; it is cryptic, to be honest. So we need to figure out what is happening at this call which results in the error.

To assist, we will create a function of our own to simulate what this SMA function is trying to do, so that we can get a view into the workings of this issue.

The function will attempt to establish a connection with the ODATA service, then search for our runbook; once located, the runbook will be checked to see if we have parameters defined on it. In the case where the runbook is not located, we will be informed that this is the case also.
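
The following is a minimal sketch of that idea, assuming the default Orchestrator web service path on port 81 and a simplified reading of the OData feed; it is illustrative rather than a drop-in replacement.

function Get-OrchestratorRunbook {
    param (
        [string] $RunbookName,
        [string] $Server,
        [System.Management.Automation.PSCredential] $Credentials
    )

    # Base URL of the Orchestrator ODATA web service (default installation path and port assumed)
    $url = "http://$($Server):81/Orchestrator2012/Orchestrator.svc"

    # Ask the web service for the runbook by name
    $runbooks = Invoke-RestMethod -Uri "$url/Runbooks?`$filter=Name eq '$RunbookName'" -Credential $Credentials

    if (-not $runbooks) {
        Write-Output "Runbook Not Found"
        return
    }

    foreach ($runbook in @($runbooks)) {
        # Report the runbook Id and its folder path
        $id   = $runbook.content.properties.Id.'#text'
        $path = $runbook.content.properties.Path
        Write-Output "$id ($path)"

        # Follow up with a second query for any parameters defined on the runbook
        $parameters = Invoke-RestMethod -Uri "$url/Runbooks(guid'$id')/Parameters" -Credential $Credentials
        foreach ($parameter in @($parameters)) {
            Write-Output "$($parameter.content.properties.Id.'#text') ($($parameter.content.properties.Name))"
        }
    }
}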

Calling this function is also very simple, and (on purpose) similar to the code we use in SMA; we pass in parameters including the location of the Orchestrator ODATA web service, credentials to authenticate with the service, and the name of the runbook in which we are interested.

As an example, let’s consider the query of an older runbook which is working fine at the moment.

PS> $myCreds = Get-Credential
PS> Get-OrchestratorRunbook -RunbookName "3.1.1. My Original Runbook" -Server "api.orchestrator.diginerve.net" -Credentials $myCreds
8204a38b-fdc9-4381-9368-8da459bb793d (\3. Management\3.1. Workitems\3.1.1. My Original Runbook)
c8a4550b-2328-4fae-9241-321ca17e3ab6 (Mode)
03637ac2-a875-4555-a300-93af698c4602 (WorkItemID)

From the output of this command, we can quickly see that the runbook is located, and that it has two parameters defined.

Now, let's try this again, this time using the name of the runbook which is giving us trouble from SMA.

PS> Get-OrchestratorRunbook -RunbookName "3.1.2. My New Runbook" -Server "api.orchestrator.diginerve.net" -Credentials $myCreds
Runbook Not Found

As you can see, the results are unexpected, with the function reporting that our runbook is not found… So at this point, we go back and triple-check the runbook name in Orchestrator, its permissions, and in general that everything looks correct.

Once satisfied, we can retest, and hopefully get the details of our runbook. But, technology being technology, we know that it's not going to be so simple; after all, SMA has been failing on this runbook consistently, so we really do have a reproduction of the problem with our test.

Deduction

With the data to confirm that our runbook is indeed created, named, published and delegated, we know that the work in Orchestrator has been done correctly.

Similarly, in SMA we know that it is also not broken; in this case we have confirmed that the error message reported is indeed valid, as the ODATA interface is not exposing our runbook.

Solution

With the issue clearly on the Orchestrator ODATA service, what we are experiencing here is a known issue with this web service, where it can fail to keep up-to-date with all the runbooks which are registered in our environment. To resolve this, we have some options, but the simplest and fastest method is to use the Orchestrator Health Checker.

Note: I have posted details on this tool – Install and Configure the Health Checker, followed by a post on Using the Health Checker

From the Options menu, we simply need to select the option Flush Web Service Cache and then select the appropriate option for rebuilding the ODATA endpoint; personally I will select Full Re-build. This will take a few moments depending on the size and complexity of our Orchestrator environment, but once complete a popup will be presented to confirm the action is done.

Now, finally, we can loop back and run our test function again. Assuming everything has worked out, we should now be able to locate the runbook and expose its parameters.

PS> Get-OrchestratorRunbook -RunbookName "3.1.2. My New Runbook" -Server "api.orchestrator.diginerve.net" -Credentials $myCreds

39314a8d-c89f-4fe0-adf9-3aade8ed7286 (\3. Management\3.1. Workitems\3.1.1. My New Runbook)
130e7f87-d08f-4201-b912-953459828b69 (WorkItemID)

With this now working, we can finally loop back to SMA, and validate the workflow is working correctly.


2015 Microsoft MVP

Wow, it's already mid-January and this poor blog has been getting a whole lot of neglect. Well, don't give up just yet, as things are about to get a little busy around here.

January 1st was once again a really great day with the arrival of my 5th confirmation mail from Microsoft, honoured to be endowed once again as a Cloud and Data Centre MVP.

So what happened to the 2014 content? Well, quite a lot of that ended up being posted on a 3rd party blog site, petri.com; but for 2015 I am back home where I really belong. With Windows 10 in the air I can confirm we will have a lot to cover; I have also nuked my lab over the last few weeks and begun a complete ground-up rebuild, which I have chosen to document in all its gory details, which should make for some very interesting how-to content.

On a personal note, after 17 years of being very happily married, last September we were endowed with the miracle of a little girl; to say she has turned my blogging and book-writing time upside down is nothing but an understatement, though I do think that there might be some form of routine peering through the cracks again – of course this is now jinxed…

 

PowerShell Deployment Toolkit V3.0 – Desired State Configuration

It should come as no surprise that I am an avid fan of the PowerShell Deployment Toolkit, given the number of posts I have published on this blog and over on petri.com.

A quick reminder of some of these posts will include:

But now, with the opening of TechEd Europe 2014, the world is about to be shaken all over again, as Desired State Configuration takes grip of one of my favourite utilities.

If you have not figured out what all the fuss is about, then please indulge yourself, as this is going nowhere, and we have a lot more amazing stuff to come. The following are some of the posts I published on the Petri site over the last few months, and they could not be more appropriate given the news we are about to discover today.

The guru that is Rob Willis (who is also about to get married in the next few days – congratulations Rob!) has been quietly rebuilding his work of art, the PowerShell Deployment Toolkit, to deliver today, as part of the Desired State Resource Kit Wave 8, the xDEPLOY resource; when you crack open this kit you will see the first of this amazing work materialise, with support for the following System Center components included

All of these components are supplied in the kit with individual component examples, in the Windows 2012 R2 series.

But that's not all: as we are now in the era of Threshold, I have been told by an informed insider that this kit also includes support to deploy System Center Technical Preview on Windows Server Technical Preview.

Go Download NOW!

And check back later for some samples!

Configuring Service Management Automation for Repeating Schedules

Service Management Automation is one of those tools which quickly becomes the central location to host and execute all our scripting activities. One of the more useful options of the service is the ability to schedule the execution of these scripts so that we can set and forget, allowing SMA to take over what was once the task of the Windows Scheduler.

However, the first time you are faced with the requirement to execute a flow on a 30-minute schedule, for example, you will quickly realise that this is still a V1 release, as the minimum granularity which we can define for execution is one day! OOPS!

Now What?!?

Never fear, there is an answer, but it is not so pretty. As we can see from the above screen grab, we can define the hour and minutes at which the schedule should occur daily; therefore we can easily create a schedule to trigger for each of our required intervals. In this case we would create a schedule, for example, to Execute Flow at 00:30, then save this and create another schedule, Execute Flow at 01:00, and repeat this cycle, resulting in no fewer than 48 schedule entries (one every 30 minutes across 24 hours) to tackle our requirement.

While this might not be ideal, using the web UI only adds to the anxiety, so I have chosen to use PowerShell to make our lives a whole lot less stressful.

PowerShell to the Rescue!

I have created a simple function in PowerShell which will loop through the process of creating each of our schedules, providing just some very basic details: the prefix for the schedule name, for example "Execute Flow at ", the interval in minutes at which these schedules should be created, and of course the HTTPS endpoint of our SMA service.

An example of how I call this function would be as follows

ADD-SMARecurringSchedule -Interval 20 -ScheduleNamePrefix "Execute Flow at " -WebServiceEndpoint https://pdc-sc-sma01

The result is a labour of love, with all the requested schedules created and ready for our consumption.

Workflows

We are not done yet, however; all that we have succeeded in doing so far is defining the schedules. Now we must proceed to associate some, or all, of these with the workflow which we require to be triggered.

On the web UI we can navigate to the workflow in question and, from the associated Schedule page, click on the Schedule icon in the drawer and select the option Use Existing, which will present the 'Select a schedule' dialog, offering only the options which have not already been associated with the workflow. From here we can select one of the schedules, click on the tick, and confirm our choice.

Again, you will quickly get the V1.0 feel for the portal when you realise that we cannot multi-select options here, and we are once again facing the daunting reality of having to repeat this exercise far too many times; thus, back to PowerShell.

PowerShell to Associate our Recurring Schedules to the Workflow

Leveraging the schedules we created in the previous step, I have created a second function which uses the same logic loop as before, but this time also accepts a runbook name, to which the function will add each of the recurring schedules in turn.

In a very similar experience to before, the command which I will issue would look as follows

ADD-SMARunbookRecurringSchedule -Interval 20 -ScheduleNamePrefix "Execute Flow at " -WebServiceEndpoint https://pdc-sc-sma01 -RunbookName "My Special Runbook"

Once completed, the result is just as we expect, and the previously defined schedules are now associated with the runbook, ensuring it will trigger at the desired interval.

The Module

Enough already, the following is the code which I have created to make our lives easier.

#
# Create-SMARecurringSchedules.ps1
#


function ADD-SMARecurringSchedule {
    param (
        $Interval = 30,
        $ScheduleNamePrefix = "Execute Flow at ",
        $WebServiceEndpoint
    )

    process {
        # One schedule per interval across the day, e.g. 1440 / 30 = 48
        $iterations   = 1440 / $Interval
        $endDate      = Get-Date -Month 12 -Day 31 -Year 9999 -Hour 5 -Minute 00 -Second 00
        $StartDateUTC = Get-Date -Hour 00 -Minute 00 -Second 00
        $StartDateEST = Get-Date -Hour 04 -Minute 00 -Second 00

        foreach ($iteration in 0..($iterations - 1)) {
            # Name each schedule after the (EST) time it represents, e.g. "Execute Flow at 04:30"
            $dateString = Get-Date $StartDateEST -Format HH:mm
            $Name = "$ScheduleNamePrefix $dateString"
            Write-Output $Name

            # Create a daily schedule in SMA which never expires
            Set-SmaSchedule -StartTime $StartDateUTC -ExpiryTime $endDate -Name $Name -ScheduleType "DailySchedule" -DayInterval 1 -WebServiceEndpoint $WebServiceEndpoint

            $StartDateEST = $StartDateEST.AddMinutes($Interval)
            $StartDateUTC = $StartDateUTC.AddMinutes($Interval)
        }
    }
}


function ADD-SMARunbookRecurringSchedule {
    param (
        $Interval = 30,
        $RunbookName,
        $ScheduleNamePrefix = "Execute Flow at ",
        $WebServiceEndpoint
    )

    process {
        $iterations   = 1440 / $Interval
        $StartDateUTC = Get-Date -Hour 00 -Minute 00 -Second 00

        foreach ($iteration in 0..($iterations - 1)) {
            # Rebuild the same set of schedule names created by ADD-SMARecurringSchedule
            $dateString = Get-Date $StartDateUTC -Format HH:mm
            $Name = "$ScheduleNamePrefix $dateString"
            Write-Output $Name

            # Associate the runbook with the existing schedule
            Start-SmaRunbook -Name $RunbookName -WebServiceEndpoint $WebServiceEndpoint -ScheduleName $Name

            $StartDateUTC = $StartDateUTC.AddMinutes($Interval)
        }
    }
}

Feel free to fork and update if you like.

Downloading SCOM Management Packs using PowerShell

One of the most important features of using SCOM is the vast knowledge which is offered through the use of the Management Packs. With power always comes challenges, and the key challenge in this case is locating and downloading all these packs, and also staying up to date.

As you are hopefully aware, my buddy Stan over at the CloudAdministrator blog has published and maintained a script on the TechNet Gallery designed to assist us in building this repository for our daily use.

His script parses the SCOM wiki page which the MS team maintain, listing all the new and updated management packs, and from this reaches out and grabs a nice local copy of the files for us.
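
In outline, the technique is to scrape the wiki page for links to downloadable files and fetch a local copy of each; the following is a rough, simplified sketch of that idea only (the URL and repository path are placeholders, and Stan's script does considerably more, including following the Microsoft download pages, version tracking and logging).

# Placeholders: point these at the MP wiki article and a local repository folder
$wikiUrl    = 'http://social.technet.microsoft.com/wiki/contents/articles/16174.microsoft-management-packs.aspx'
$repository = 'D:\MPRepository'

# Grab the wiki page and keep any links that point directly at downloadable files
$page  = Invoke-WebRequest -Uri $wikiUrl
$links = $page.Links | Where-Object { $_.href -match '\.(msi|zip|exe)$' }

foreach ($link in $links) {
    # Save each file into the local repository, named after the last segment of the URL
    $target = Join-Path $repository (Split-Path $link.href -Leaf)
    Invoke-WebRequest -Uri $link.href -OutFile $target
}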

Wanting to use this script to do some automation and notifications related to the packs which have been updated, added as new, or simply re-downloaded, I quickly became blocked due to its monolithic structure. As a bit of a fiend for PowerShell modules, I set about splitting the script into private functions, changing some of the parsing logic, and ultimately creating a single PowerShell cmdlet to do the work, while telling me of its progress and formatting the output as true objects which I can now use to group by MP status, and so on.

As I also spend a fair amount of time working with the rest of the System Center suite, I wanted to change the logging to the normal CMTrace format which I have become familiar with; but, so as not to upset current users of the script, I simply added a switch called -CMTrace to the command to have the new format enabled.
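
Usage then looks something along these lines; the cmdlet name below is purely illustrative (the real name ships with the script on the Gallery), the point being that the output is now objects which can be grouped and reused.

# Hypothetical invocation: refresh the local MP repository, logging in CMTrace format,
# then summarise the results by status (new, updated, re-downloaded, and so on)
$packs = Get-SCOMManagementPackDownload -Path 'D:\MPRepository' -CMTrace
$packs | Group-Object Status | Select-Object Count, Name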

CMTrace for Get-SCOM MP

The updated work, with the blessing and oversight of Stan, is back on the TechNet Gallery, ready for your consumption.

This is not the final destination for this script, however; I had a plan which drove me to this middle ground, so in the coming days expect to see yet another "edition" of the script, which will also be destined for the Gallery. More later.

Thanks to Stan and the community for the continued sharing, and we look forward to your comments and input.

Software-Defined Networking with Windows Server and System Center Jump Start

Free online event with live Q&A with the Microsoft Networking Team: http://aka.ms/SftDnet

Wednesday, March 19th from 8am – 1pm PST

Are you exploring new networking strategies for your datacenter? Want to simplify the process? Software-defined networking (SDN) can streamline datacenter implementation through self-service provisioning, take the complexity out of network management, and help increase security with fully isolated environments. Intrigued? Bring specific questions, and get answers from the team who built this popular solution!

Windows Server 2012 R2 and System Center 2012 R2 are being used with SDN implementations in some of the largest datacenters in the world, and this Jump Start can help you apply lessons learned from those networks to your own environment. From overall best practices to deep technical guidance, this demo-rich session gives you what you need to get started, plus in-depth Q&A with top experts who have real-world SDN experience. Don’t miss it!

Register here: http://aka.ms/SftDnet

Preview Azure with MPLS!

Wow, only yesterday I realised that I had missed the announcement, and today Microsoft takes another step and moves this from concept to reality by offering the solution in preview form!

You can get started using this link – http://www.windowsazure.com/en-us/services/preview/

The first question to come to mind will be price, which is rather encouraging: on a monthly plan a 1 Gbps port will cost only €224, with 15 TB outbound included (€0.027/GB after) and unlimited inbound, via an Exchange; or, if you are working with a Network Service Provider, a 100 Mbps port will cost €671 with unlimited inbound and outbound traffic.

These include dual ports as standard for redundancy, and are 50% discounted launch prices; but even at the full cost, these are numbers we can work with when we start comparing apples to apples.

Finally, Azure can begin to look like a true datacenter alternative; the next budgeting cycle will be interesting :)

To tease a little more, this is an extract from the overview posted here – http://www.windowsazure.com/en-us/services/expressroute/

Experience a faster, private connection to Windows Azure

Windows Azure ExpressRoute enables you to create private connections between Azure datacenters and infrastructure that’s on your premises or in a colocation environment. ExpressRoute connections do not go over the public Internet, and offer more reliability, faster speeds, lower latencies and higher security than typical connections over the Internet. In some cases, using ExpressRoute connections to transfer data between on-premises and Azure can also yield significant cost benefits.

With ExpressRoute, you can establish connections to Azure at an ExpressRoute location (Exchange Provider facility) or directly connect to Azure from your existing WAN network (such as a MPLS VPN) provided by a network service provider.

ExpressRoute connection options

Currently, ExpressRoute is available in the US through the following partners:


Making Incredible Technology, Incredibly Simple