A Look at Microsoft Azure Networking

Alongside its platform and infrastructure as a service offerings, Microsoft Azure comes replete with custom networking tools that make it simple to set up a virtual network and cross boundary connections between Microsoft's datacenters and your own network. Let's take a look.

Microsoft Azure virtual networks

Virtual networks are just that—a virtual, logical network you can configure within your Microsoft Azure tenant (account) that runs over the top of the physical network in Microsoft's datacenters. Virtual networks run IP only, and IP address assignment is handled by DHCP; static IPs are not supported. You would want to create an Azure virtual network for three main reasons (there may be others, but these are the most common scenarios):

  • Customizing the IP numbering scheme used instead of accepting the default numbering from Microsoft, which is useful if you have an overall addressing plan and want to use IP address management tools
  • Segregating services among virtual machines that run different tiers of a service—for example, the web tier and the middle tiers
  • Enabling virtual private networks, or VPNs, between either a single computer on your own network or your entire on premises network, and the Microsoft Azure datacenter network

Creating virtual networks is pretty simple.

  1. Open the Azure management portal.
  2. From the New menu, under Network Services, choose Virtual Network.
  3. Click Custom Create.
  4. On the Virtual Network Details page, enter a friendly name and the service region where this virtual network should be established. Click Next.
  5. On the DNS Servers and VPN Connectivity screen, enter the DNS server information that virtual machines hosted on this virtual network should use. This is essentially like filling out a DHCP scope with options. You can leave it blank and let Azure handle DNS, but if you want to create VPNs (covered later in this piece), you should enter the IPv4 addresses of your on premises DNS servers here. You can also click the check boxes to configure either point to site or site to site VPNs. More on those later, so click Next.
  6. The Virtual Network Address Spaces page appears. Here is where you determine what IP numbering scheme you will use. You can use any of the private address spaces you like. By default, Azure gives you 10.0.0.0/8 and automatically builds in a 10.0.0.0/11 subnet for future use. Click Next.
  7. You’re done. Now you can create virtual machines as you see fit and assign them to this virtual network.
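
If you prefer scripting to the portal, the classic (Service Management) Azure PowerShell module can export, inspect, and reapply the same virtual network configuration. The following is a minimal sketch rather than a definitive procedure; the file path and subscription name are placeholders you would replace with your own values.

    # Sign in and select the subscription that will hold the virtual network
    Add-AzureAccount
    Select-AzureSubscription -SubscriptionName "My Subscription"    # placeholder subscription name

    # Export the current virtual network configuration XML so you can review or version it
    Get-AzureVNetConfig -ExportToFile "C:\Azure\vnetconfig.xml"

    # List the virtual networks already defined in the subscription
    Get-AzureVNetSite

    # After editing the exported XML (address spaces, subnets, DNS servers), push it back
    Set-AzureVNetConfig -ConfigurationPath "C:\Azure\vnetconfig.xml"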

Tip: Be careful—it is very difficult, and in most cases impossible, to change the properties of a virtual network or to reassign virtual machines attached to one virtual network to another. Additionally, you have to create virtual networks before spinning up new virtual machines, since you cannot attach an existing virtual machine to a new network. So plan diligently and double-check your setup before committing; otherwise you may well have to start over if you make a mistake.

Point-to-site VPN

About a year or so ago, Microsoft introduced point-to-site VPN connectivity to Microsoft Azure, which essentially means individual computers can create VPN connections directly into the Microsoft Azure data center. The computer can then access virtual machines, databases, websites and other resources as if they were on the network local to the individual computer.

Why would you want to use a point to site VPN? If you are a developer or an IT pro looking to build out a lab environment or a testing virtual machine within Microsoft Azure, you can use a point to site VPN to have a "single network," which frees you from worrying about addressing problems, DNS, and other issues that crop up when moving a solution hosted on one network to another. It is also a cinch to set up: you can simply download a connectoid (yes, that is the preferred term for the preconfigured VPN "dialer" that is provided) package right from the Microsoft Azure management portal and double-click it to create the connection, which is secure by default. The VPN client included in the box with Windows 7 and 8 works just fine, and no other configuration or hardware devices are needed to create the VPN connection. It is a quick and dirty way to get a connection from, say, the PC on your desk to a virtual machine that is hosted in Microsoft Azure.

To set up a point to site VPN, follow the steps in the first section of this article, but in step 5, select the point to site VPN check box. The Point-to-Site Connectivity screen then appears, where you configure the address space the VPN clients will use. Azure automatically adds a 10.0.0.0/24 space, which gives you 254 potential addresses to hand out to connecting clients. Click Next, and follow the rest of the wizard as it appears.
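
Once you have connected with the downloaded client, a quick way to confirm the tunnel actually works is to look at the address you were handed from that pool and probe a virtual machine over its internal IP. The snippet below is only a sketch: the 10.0.1.4 address is a placeholder for one of your own VMs, the address filter assumes the default 10.0.0.0/24 client pool, and Test-NetConnection requires Windows 8.1 or later.

    # Show the address assigned from the point to site client pool (assumes the default 10.0.0.0/24 range)
    Get-NetIPAddress | Where-Object { $_.IPAddress -like "10.0.0.*" }

    # Probe a VM on the Azure virtual network over its internal IP; 3389 is the remote desktop port
    Test-NetConnection -ComputerName 10.0.1.4 -Port 3389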

Site-to-site VPN

A site to site VPN in Microsoft Azure lets you set up one network that connects your on premises network with a virtual network within Microsoft Azure, using the public Internet as the transport medium between the two locations. Once you have set up a site to site VPN, you can host virtual machines in the cloud that are numbered in the same scheme and connected to the same network as your on premises servers. You can host a domain controller in Azure to provide fault tolerance, and you can export and import VMs to your heart's content. This essentially makes Azure a large extension of your existing datacenter that you control.

Setting these up takes some doing, so we will cover site to site VPN configuration in the next piece.

This piece originally appeared with edits at the Petri IT Knowledgebase.

The Cloud Feature Gap

We all know the cloud is here to stay, and Microsoft is gearing up to be a huge player in this space. The Office 365 service is growing by double digits, for example. But what happens when cloud ambitions clash with commitments to customers who run software in their own datacenters and server closets?

Let’s look at a case study—two new features that were announced in early 2014 that were to land first in Office 365:

  • Office Graph. Office Graph is a machine learning, Big Data style technology that purports to bubble up, in a single view, the documents, messages, and people that are relevant to what you are working on. It uses a comprehensive view of your Exchange inbox as well as others' inboxes, SharePoint sites, Lync conversations, and activity on the Yammer corporate social network.
  • Clutter. Like Gmail's "priority inbox" feature, Clutter tries to distinguish legitimate, important mail from mail that is not outright spam or junk but is less useful to your daily activities. It then attempts to corral those messages into one place so that your regular view of your inbox is less cluttered and contains more actionable messages relevant to whatever you are working on.

Those IT professionals who attended the Microsoft Exchange Conference 2014 in Austin earlier this spring left with the implicit assumption that while Office 365 would receive these and other new features first, the bug fixes and improvements Microsoft was able to make from running the code at scale for so long would be baked into an eventual on premises, boxed product style server release. This has been referred to numerous times as the "virtuous cycle," where Microsoft learns how to write better code by running its own cloud service, which then makes its way into the core product itself, which then enables new scenarios for Azure and Office 365, which then goes right back into the server code, over and over again, iterating toward a much better product all around. Microsoft's Perry Clarke explicitly stated this virtuous cycle was their policy, and implicitly promised at MEC, when Office Graph was shown to the attendee audience, that this feature would make its way over to the server. While Microsoft did not promise a date or release interval for these features, they were touted as "cloud first," not "cloud only."

However, in May, Microsoft walked that back. The news emerged from Office general manager Julia White that Office Graph was not going to be in boxed product server releases. The excuse was that Office Graph is too compute intensive and depends on the ability of large companies like Microsoft to throw enough compute power at the task, and Microsoft was not convinced every customer would be in a position to deploy that sort of power. Around the same time, White confirmed on Twitter that the Clutter feature was not going to make the next release of Exchange Server either. No real reasoning was given here, but the turnaround and the walking back of the promise is interesting and perhaps unnerving.

Now White says that this is not a strategy to push more users to Office 365 by baking in really useful functionality that is only available in the cloud service. Sure, there may be some technical reasons why Office Graph and Clutter are harder to deliver when you do not control the total environment, as Microsoft does with Office 365, but here we have two promising new features where the company led customers to believe they would eventually be available no matter which deployment method they were using—and now those features are restricted to the cloud service. It is hard not to draw the conclusion that Microsoft cares more about the future of the cloud than it does about its on premises business. When you look at the majority's reaction to the messages from TechEd, you see the same sentiment: Microsoft is investing in the cloud, sometimes to the exclusion of on premises customers, and IT pros are wondering if they are next on the target list. (You don't need many administrators to add employees to a cloud service, of course.)

Essentially, the takeaway here is to no longer believe that on premises Exchange and Exchange Online and Office 365 are the same product. No longer is it safe to assume that something you see in Windows Azure or Office 365 will make it to Windows Server or Exchange Server or SharePoint Server or Lync Server. While they share a codebase, they are on two different development tracks—and Office 365 is going to end up with the good stuff. This creates a feature gap, and you do not have to be very cynical to believe that gap is only going to widen. While that is good for companies that have embraced the cloud, those organizations that cannot or will not move to hosted services will have to live with less—and perhaps much less. Be aware.

This article originally appeared, with edits, at the eNow Software blog.

Using the Microsoft Enhanced Mitigation Experience Toolkit

Sometimes, we all need someone to have our back. Applications, believe it or not, are the same way, and they don't always behave as we expect them to. While Windows has some built in protections against program misbehavior and exploitation, it can sometimes be difficult to tell whether those protections are active and what exactly they are defending. Enter EMET.

What is EMET?

The Enhanced Mitigation Experience Toolkit, or EMET, is basically a shield or a shell that runs over Windows applications and forces them, regardless of how those applications were actually coded by their developers, to take advantage of security defenses that are built into the Windows operating system. EMET is a wrapper that enables and enforces a set of protections that, when used together, really enhance the security posture of a machine and greatly reduce the chance that exploits can run against your machine and cause any harm—most will simply fail to execute completely. It is particularly helpful in protecting against exploits that have not yet been patched by a software developer, making it a key tool that should be in your security arsenal.

EMET relies on four main defenses to offer its protection, and these are based on the following security technologies:

  • Data execution prevention. Data Execution Prevention, or DEP, was introduced in Windows XP Service Pack 2 and has remained in Windows ever since. Essentially, DEP checks applications that are loaded in memory and identifies the pages that are explicitly marked as executable. If a piece of code attempts to run from a place in memory that is not explicitly designated as executable code space, Windows prevents the code from running (on systems with compatible CPUs; all modern CPUs from 2005 onward support this feature). DEP is always enabled on 64-bit versions of Windows for 64-bit applications, so there is nothing to configure and no added benefit from EMET there; 32-bit applications running on either a 32-bit or 64-bit version of Windows, however, can still misbehave in ways that DEP, enforced through EMET, helps prevent. (A quick way to check the system-wide DEP policy from PowerShell appears a little later in this section.)

  • Address space layout randomization. Address space layout randomization, or ASLR, is a technique primarily designed to blunt the old buffer overflow attack vector. ASLR scrambles the locations of an application's base executable, the stack, the heap, and the associated application libraries across all of the address space available to the operating system. The hope of ASLR—and this has borne out to be a valuable protection in production—is that even if a vulnerable piece of code is loaded into memory, an attacker cannot avail himself or herself of it by simply overflowing one buffer and jumping into the next space. The attacker basically has to guess where the vulnerable code lives in memory, and it is different each time the application is loaded. In addition, the application usually is not written to handle the guessing and will crash, further thwarting the attacker's exploit.

  • Structured exception handler overwrite protection. Structured exception handler overwrite protection, or SEHOP, is a protection scheme that defends against exploits that overwrite a return address on a thread's stack to point to bogus code that does bad things. This happens, among other times, with buffer overflows: the software's exception handling will perform some functions that can let an attacker point the exception dispatcher to an address that will run malware. SEHOP makes sure that the list of exception handlers is valid before an application can call them, and it works in conjunction with the aforementioned ASLR technology to prevent structured exception handling overwrites. (This is a pretty detailed, developer oriented technology, and I have tried to simplify it here while still explaining the overall attack and what is done to defend against it. For a great technical explanation of SEHOP, if you have the chops, look at http://blogs.technet.com/b/srd/archive/2009/02/02/preventing-the-exploitation-of-seh-overwrites-with-sehop.aspx.)

  • Certificate trust. Sometimes called "pinning," certificate trust is new to this version of EMET. Basically, EMET's certificate trust lets you create pinning rules that specify the certificates and root certificate authorities that your systems should trust. By doing this, you can prevent funny business with certificates: from man in the middle attacks that use untrusted certificates to impersonate legitimate services, to illegitimate certificates issued improperly by trusted certification authorities that have experienced a breach or a flawed issuance process. For example, Comodo, a certification authority, issued nine fraudulent digital certificates in March 2011. In November 2011, a Malaysian certificate authority issued 22 certificates with a very weak 512-bit key that is fairly easy to break. EMET's certificate pinning feature lets you take matters more fully into your own hands and decide specifically which certificates and which issuers you trust. Unfortunately, it is only useful if you primarily use Internet Explorer, as you cannot pin certificates for sites in Firefox, Google Chrome, or any other browser.

As it happens, the weaknesses in each of these technologies offset one another—where DEP is weak, ASLR is not, and vice versa, and so on. DEP acts as a kind of front door on attacks, making it harder to get them to execute, and ASLR backstops it by making a scrambled mess (from the point of view of the exploiter) out of the memory space a vulnerable application uses so that the attacker has a harder time carrying out their malfeasance. SEHOP works with ASLR to prevent random code from executing in specific address spaces the attacker wants. (Certificate pinning is a new feature not really related to the other technologies, but it still offers a posture improvement over plain vanilla Windows.)
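
Because part of the problem is simply knowing whether these protections are active in the first place, it helps that the system-wide DEP policy is exposed through a standard WMI property you can query without EMET installed. A quick sketch; per the Win32_OperatingSystem documentation, the policy value is 0 for AlwaysOff, 1 for AlwaysOn, 2 for OptIn (the usual client default), and 3 for OptOut.

    # Check the system-wide DEP policy and whether the hardware supports DEP at all
    Get-WmiObject Win32_OperatingSystem |
        Select-Object DataExecutionPrevention_SupportPolicy, DataExecutionPrevention_Available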

EMET’s current version, version 4.0, enables a pre-configured wrapper of protection for some default applications with known deficiencies and vulnerabilities, like Adobe Acrobat and the Java runtime, as well as common Microsoft applications like Office and Internet Explorer. This preconfigured protection is optional and you can configure custom protection on your own if you like, but the new defaults are a quick way to deploy some defenses sooner rather than later while you ramp up your custom deployment of the tool.

EMET is supported on Windows XP Service Pack 3 and later, Windows Vista Service Pack 1 and later, all levels of Windows 7, and Windows 8 on the client side. From the server operating system perspective, EMET is supported on Windows Server 2003 Service Pack 1 and later, all levels of Windows Server 2008, all levels of Windows Server 2008 R2, and Windows Server 2012 and Windows Server 2012 R2.

Setting Up EMET

The latest version of EMET is 4.0, and you can download it from the Microsoft website here [link: http://www.microsoft.com/en-us/download/confirmation.aspx?id=41138 ]. Once you have downloaded the tool and run through the simple MSI installation (it installs just like any other Windows application), load the program. You will see a window much like Figure 1.

Figure 1: the main screen from EMET

In the System Status area, you can see whether the overall protections of DEP, ASLR, SEHOP, and certificate trusts are enabled and active or not. The first step to actively protecting your system is to click the Configure System button, which displays the System Configuration window, as shown in Figure 2.

Figure 2: the System Configuration window in EMET

Within this window, you can choose from various profiles that are already set up. Since you are just starting out, go ahead and choose the Maximum Security Settings profile so you can get a sense of the defenses. Once you select this, you will be prompted for a reboot; do not reboot yet.

Instead, open a command prompt with administrative privileges and let's activate the default protection profiles.

  1. From the command prompt, type: cd "C:\Program Files (x86)\EMET 4.0"
  2. Type the following command:
    EMET_Conf.exe --import "Deployment\Protection Profiles\Recommended Software.xml"

You will see confirmation messages like the following:

EMET is importing configuration, please wait…
Processed 86 entries
The changes you have made may require restarting one or more applications

That is your signal the profiles are installed and are working.
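
If you want to double-check what the import actually configured, EMET's command line tool can also list and export the active settings. Treat the switches below as a sketch from memory; run EMET_Conf.exe with no arguments to see the exact options your build supports, and note that the export path is just an example. (The lines assume a PowerShell prompt in the same folder; at a classic command prompt, drop the .\ prefix.)

    # Still elevated, still in the EMET 4.0 folder
    .\EMET_Conf.exe --list                                   # show the applications EMET is now protecting
    .\EMET_Conf.exe --export "C:\Temp\emet-baseline.xml"     # keep a copy of the working configuration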

Now, you need to tell EMET what applications you typically use. You do this through the EMET graphical user interface, although as we just imported some default profiles through the command line, there will already be some existing entries in the interface. To continue setting up and adding applications:

  1. Open the EMET GUI from the Start menu.
  2. In the ribbon, within the Configuration section, click Apps.
  3. The Application Configuration screen appears. In the ribbon, within the Add/Remove section, click Add Application.
  4. A standard browse box appears. Browse to the location of the program you want to add and then click Open.
  5. The application will be added to the list on the Application Configuration screen. Each of the columns represents the defense technology and protection that should be applied to that particular executable: DEP, SEHOP, and so on. Start out with all of the protections enabled when you add executables.
  6. Launch the program. You may need to restart your computer first if this is the initial load of EMET on that particular system.

Fine Tuning with EMET

Since EMET acts as a shell around an application, that application's developer could not have anticipated some of the methods EMET uses to change how the application runs. This means that you will likely run into some compatibility issues as you start rolling out EMET across your organization. Some observations:

  • Begin with Internet facing applications. These applications are the ones most likely to be exploited and the ones most likely to cause damage by spreading, and your users probably spend a lot of time in them, so you will get the most "protection" bang for your buck here, and as a bonus, you can weed out terribly programmed Internet apps. Conversely, the custom line of business applications and software you wrote in house are unlikely to be exploited, and little value would be derived from trying to make them work with EMET if they do not by default, so I would not advise wasting your time there.
  • Understand ahead of time which applications are not likely to work well with EMET. Google Chrome is one that comes to mind immediately—I have had little success in running EMET with Chrome unless the Export Address Table Access Filtering defense (EAF, the fifth column from the left in the Application Configuration section of the EMET GUI) is disabled. Skype, which I used to use on a daily basis, is another application that needs EAF turned off. But essentially, as everyone uses a different set of programs with different risk profiles, you will need to play around and test to figure out exactly which security postures work for your organization. Start by disabling different threat protectors within the tool for programs that are acting wonky; only disable EMET for that particular application as a last resort.
  • Deploy EMET and manage it using Group Policy. Look inside the EMET folder after it is installed, copy out the EMET.ADMX and EMET.ADML files, and copy them into the \Windows\PolicyDefinitions and \Windows\PolicyDefinitions\en-US folders on a domain-joined system—but NOT a domain controller. This makes the EMET settings available in the Group Policy tools, so all you have to do is roll out the MSI to your workstations and let Group Policy handle the configuration for you. (A copy sketch follows this list.)
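
Here is a minimal PowerShell sketch of that ADMX copy. The source folder is an assumption based on a default EMET 4.0 installation; adjust it if your build keeps the Group Policy files somewhere else inside the EMET folder.

    # Copy the EMET administrative templates into the local policy definitions store
    $emet = "C:\Program Files (x86)\EMET 4.0\Deployment\Group Policy Files"    # assumed location; verify on your install
    Copy-Item "$emet\EMET.admx" -Destination "$env:windir\PolicyDefinitions"
    Copy-Item "$emet\EMET.adml" -Destination "$env:windir\PolicyDefinitions\en-US"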

The Last Word

The Microsoft Enhanced Mitigation Experience Toolkit is a complex but useful tool for protecting the integrity of your desktop and laptop machines—computers that would traditionally be used by employees. I recommend it in that context. It has the potential to cause more trouble than the protection it offers is worth on servers and on computers whose primary users are IT pros and power users who are less likely to download and activate malware, so proceed with caution in those scenarios. Overall, though, EMET provides an effective additional layer of defense against potentially enterprise crippling malware. You should evaluate it today.

This article originally appeared, with edits, at Computerworld.

The New Office 365 Trust Center

One of the big concerns many organizations and IT professionals have about moving to the cloud, especially for business critical workloads like e-mail and calendaring, is that you simply have to trust: you have to have faith in your provider that they will maintain their systems properly, assure uptime, not let anyone access or remove your data without your explicit consent, and generally keep your data that is physically located on their servers safe and secure. Various cloud providers have different levels of commitment to security and also different levels of transparency when it comes to sharing with customers what those providers are doing to fulfill those commitments.

At TechEd, Microsoft’s Julia White, a general manager in the Office 365 division, announced the Office 365 Trust Center (link: http://office.microsoft.com/en-us/business/office-365-trust-center-cloud-computing-security-FX103030390.aspx), a single place where the company reveals its efforts to keep individual organizations’ tenant data secure from both Internet based threats and also from governmental agencies and third parties that attempt to force Microsoft to turn over your data from a compliance perspective.

The Office 365 Trust Center stakes the service’s reputation on four pillars:

  • Security. Microsoft considers Office 365 security in four different ways: the security of the physical datacenters where the servers are housed; logical security, for restricting administrator rights and maintenance processes as well as application whitelisting on the servers themselves to prevent malicious code from running; data security, which involves encrypting data both when it is in transit to Office 365 and when it is at rest within the datacenters, as well as monitoring threats and electronic intrusions and preventing attacks; and administrator and user controls for customers, encompassing rights management, the ability to send encrypted e-mail, and data loss prevention features that keep your employees and users from leaking information in e-mail and attachments.

    Takeaway: It is probably fair to say that Microsoft engineers and facilities can carry out this responsibility at least as well as you and your IT department can, given the vast amount of resources the company has.

  • Privacy. The privacy space is where most reasonable objections to moving to the cloud come in. As the ability of governments to intercept and monitor traffic and data, both in transit and at rest, has taken center stage after the Edward Snowden / National Security Agency leaks, the question on many minds is whether cloud providers will stand up to law enforcement and intelligence agencies that attempt to gain access to customer data by asking the cloud provider, and not the business, for the keys to the kingdom. The only statement the Trust Center explicitly makes in writing regarding this phenomenon is an infirm one, if optimistic: "If a government approaches us for access to customer data, we encourage the inquiry to be made directly with you, the customer and will challenge attempts to prohibit disclosure in court."

    The other point made in this section is that your data is never used for advertising or data mining purposes, nor is it sold to outside parties, unlike what you might expect from Google Apps.

    Takeaway: The Trust Center will not assuage your concerns if you are worried about the government interfering with your data, even if Microsoft only fulfills the role of data custodian and processor. Microsoft makes no commitment to resist if the FBI shows up with a search warrant and demands the data from your tenant, or even the servers on which your tenant runs in the event that your tenant neighbor, not you, is under investigation. No assurances here.

  • Compliance. Exchange 2010 introduced several litigation related features, like eDiscovery and litigation hold, and those features carried over into Exchange 2013, the basis of the Exchange Online and Office 365 service. Microsoft reveals that the Office 365 service meets HIPAA (including a business associate agreement), ISO 27001, FISMA, and the EU Model Clauses, and is independently verified by a third party auditor. Microsoft also has a team evaluating regulatory standards in major markets around the world and how those standards are evolving, and that team makes design decisions for new controls based on those regulations that will eventually be integrated into the service (and, you would expect, into the on premises versions of the applicable software—eventually).

    Takeaway: If you are already invested in Microsoft Exchange, you don't lose any features when it comes to compliance, but you don't gain any, either. If you're moving from a competing e-mail solution, Office 365 delivers the compliance goods.

  • Transparency. In this pillar, Microsoft reveals that they will always let you know in what geographic region your data lives so that you can stay on top of regulatory data storage requirements, and they have all the usual support channels available as well as a commitment to 99.9% uptime (which, to their credit, they have exceeded in the last seven calendar quarters).

    Takeaway: The service is widely available and about as reliable as you will get. This is not news to anyone, although publishing the uptime statistics publicly and exactly may be new.

The Last Word

I am not sure the Office 365 Trust Center is going to change many minds about the cloud and its suitability for any particular implementation. While the whitepapers and videos that go into the behind-the-scenes detail of the service are interesting from the standpoint of seeing faces and imagery, there is not a lot revealed in the Trust Center materials that really stands out for business decision makers and IT professionals who are not already sold on Office 365.

I’d like to see Microsoft take a position on the following issues:

  • I’d like to see a firm commitment to not offer governments access to data unless you, the tenant and the owner of the data, are informed that data has been made available. No more secret subpoenas or “encouraging” law enforcement to let you know.
  • I’d like to see a timeframe for how the service will evolve with new features and updates—a standardized schedule and a better way administrators could trust that their end user experiences would not change.
  • I’d like data encryption to be on by default and turned off only through a series of dire-sounding warnings.

What would you like to see from Office 365 and the Office 365 Trust Center?

This article originally appeared, with edits, at the Petri IT Knowledgebase.

What’s New in Hybrid Exchange 2013 On Premises and Exchange Online Deployments

Now that Exchange Server 2013 Service Pack 1 is out, many organizations are turning to a serious evaluation of the latest iteration of Microsoft's messaging platform. Many of these same organizations may have divisions, offices, or even vast swaths of their company also using Exchange Online or Office 365, Microsoft's hosted Exchange platform. Helpfully, Service Pack 1 enables some key functionality in Exchange as it relates to hybrid on premises and cloud deployments. In this piece, I want to take a brief look at two important developments in hybrid Exchange deployments that are possible now that Exchange 2013 Service Pack 1 is available.

Hybrid Deployments with More than One Active Directory Forest

For larger deployments with multiple forests in Active Directory, the Hybrid Configuration Wizard now understands what is going on and can interoperate with Exchange Online across all of your forests.

First off, each forest you want to deploy in this way needs at least one Exchange Server 2013 machine with the Client Access and Mailbox server roles installed, running Service Pack 1. This server is the one that will receive mail from Exchange Online Protection, the mail hygiene service running in the cloud, and the one that will transmit outbound messages either to EOP for forwarding on to the big bad Internet or to Exchange Online mailboxes in the cloud. You also need to deploy Forefront Identity Manager 2010 R2 or later in order to enable the Active Directory synchronization required to match up with a single Office 365 tenant (a tenant is simply an organizational account).

After you have the hardware and software requirements out of the way, you will need to set up the right autodiscover DNS and/or SRV records within each of your forests' namespaces, and you have to have SMTP namespaces configured within Exchange on premises for each of these forests. The autodiscover records need to be accessible from outside the network, because Exchange Online will query them as part of regular service operation. Each forest also needs a secure digital certificate signed by a trusted third party certification authority—self signed certificates will not do the trick here—and each certificate has to be different: either the common name (the DNS name that the certificate authenticates) or the third party certification authority issuing the certificate must be different. Finally, you have to configure single sign on for all hybrid deployments with multiple forests, which lets you use Active Directory Users & Computers and other on premises tools to manage identities that are hosted in your company's Exchange Online tenant.
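
Because Exchange Online has to resolve each forest's autodiscover namespace from the outside, it is worth validating those records from a machine off your network before you run the Hybrid Configuration Wizard. A minimal sketch follows; contoso.com is a placeholder for one of your own SMTP namespaces, and both cmdlets ship with recent versions of Windows (Windows 8.1 / Server 2012 R2 has both).

    # Check the autodiscover records as the outside world sees them
    Resolve-DnsName -Name autodiscover.contoso.com -Type CNAME           # or -Type A, depending on how you published it
    Resolve-DnsName -Name _autodiscover._tcp.contoso.com -Type SRV       # the SRV alternative, if you use one

    # The endpoint must also answer HTTPS externally
    Test-NetConnection -ComputerName autodiscover.contoso.com -Port 443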

How does all of this come together? Users are able to use their current on premises username and password to access your organization, and Office 365 or Exchange Online will understand and accept those credentials. You can use multiple e-mail address domains (@company.com, @department.company.net, and so on) within your organization. Outbound mail heads to the Internet via your on premises servers, which also relay mail sent by users whose mailboxes live in Exchange Online. Calendar and free/busy information is shared freely between on premises and the cloud. Users have one URL for webmail regardless of on which system their mailbox actually lives.

If you have a large organization, want to move to the cloud, and have the right licenses, this could be a good model for you.

Exchange 2013 OAuth Support in Hybrid Deployments

OAuth is now supported as an authentication mechanism. Briefly, while hybrid deployments between Exchange 2007 or Exchange 2010 on premises servers and Exchange Online use a federation trust relationship to authenticate and authorize users, pure Exchange 2013 hybrid deployments use Exchange OAuth, which lights up three additional Exchange 2013-specific features not available to customers using down level versions: Exchange in-place archiving, Exchange in-place eDiscovery, and message rights management, or MRM. OAuth also manages the free and busy information shared between your network and Exchange Online, MailTips (the little pop-ups with information about recipients and their availability that your users get when they begin a draft of a message in the full Outlook client or Outlook Web Access), and the enhanced message tracking that lets administrators see the exact path a message takes through the system to its eventual recipient. With Service Pack 1, this OAuth-based configuration is now supported in a hybrid deployment of Exchange 2013 on premises and Exchange Online.

What is cool about OAuth? For one, it is a widely accepted authentication protocol that’s used primarily when two different applications want to share a common userbase. Microsoft already lets SharePoint 2013 and Lync 2013 use OAuth to talk to Exchange, mainly for extended support of eDiscovery, but now you could write native business applications that also speak OAuth and use this in conjunction with your hybrid deployment.
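
If you want to confirm that the OAuth handshake between your on premises servers and Exchange Online actually works once the hybrid configuration is in place, Exchange 2013 includes a test cmdlet for exactly that purpose. The sketch below is illustrative only; the endpoint URL and mailbox address are placeholders to replace with your own values.

    # Run from the Exchange Management Shell on an Exchange 2013 SP1 server
    Test-OAuthConnectivity -Service EWS `
        -TargetUri "https://outlook.office365.com/ews/exchange.asmx" `
        -Mailbox "user@contoso.com" -Verbose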

This article originally appeared, with edits, at the eNow Software blog.

Why Net Neutrality is the Most Important Thing You Don’t Have Time to Care About

Net neutrality as an issue has recently come to a head, but I talk to many IT managers and leaders, and it is clear not many have a true grasp of the implications this issue has for the Internet and for the marketplace as a whole. As a result, I wanted to spend some time breaking down the issue and pointing out that while you may not have cared about net neutrality before, you absolutely need to care now.

What is Net Neutrality?

Essentially, net neutrality at its core is the principle that a packet is a packet is a packet when it traverses the Internet. Internet service providers adhering to net neutrality standards do not particularly care whether a packet is streaming video, an e-mail, an upload to SharePoint, or an FTP download. It is just traffic, and to them, all traffic is the same. Beyond treating all traffic equally, net neutrality also means data is accessible to all customers with Internet access regardless of their provider, without the provider discriminating on access or service quality between sources of traffic.

The issue of net neutrality has been around for a while but it has become increasingly mainstream as more and more services are delivered over the Internet and world wide web and not over other types of networks and transmission systems. What you are finding is that so much content is being delivered over the Internet that it is saturating the existing interconnect capacity between network providers, and many network providers are reluctant to incur the expense of expanding that capacity to favor only a few sources of that congestion.

Combining Internet and Entertainment

A very worrying new development, at least in my personal opinion, is the merger of Comcast and Time Warner Cable. I have never had Comcast service, so I cannot attest personally to it, but the company is regularly voted The Most Hated Company in America, and results so consistent must somehow be grounded in reality. I harbor absolutely no love for Time Warner Cable either after two billing snafus nearly wrecked my recent home purchase and mortgage application (in the interests of full disclosure, both issues have been rectified). Putting the two together has got to be the worst idea for consumers in a very, very long time, and honestly I would be very skeptical the merger would even make it through government scrutiny were it not for the fact that the cable plants for each of the respective companies do not overlap at all due to the nature of the local municipality sanctioned business monopolies each enjoy.

The problem is that NBC Universal, a company created in 2004, merged with Comcast through a complicated buyout announced in 2009, resulting in a huge entertainment company that also serves millions and millions of broadband access customers through its Internet service provider business. Owning all of the movies that Universal produces, owning the popular major broadcast network NBC, and controlling the only home Internet access most of its customers have offers the combined company an unprecedented opportunity to control the distribution of information and entertainment to those customers. It is not hard to envision a world where Comcast promotes and offers NBC content preferentially to its customers, perhaps even to the detriment of other entertainment sources that its customers might choose but that would need to be delivered over its network.

In fact, Comcast is already putting this opportunity to work by striking deals with companies like Netflix to offer service with less interruption and more availability to its own customers. Previously, Comcast customers had complained that streaming Netflix video was choppy and at times unusable. Indeed, a Wall Street Journal analysis found that Netflix's primetime performance over major ISPs dwindled over a period of just three months from around 2 megabits per second to as low as 1.5 megabits per second, mostly due to congestion driven by the service's popularity. Instead of treating that as Comcast's problem to solve (it is, after all, already taking money from its customers to provide the service), Comcast refused to permanently fix the issue, announcing that it believed Netflix should bear the cost of delivering its content to Comcast customers. Netflix acquiesced, making a deal with Comcast that resolved the congestion. Reed Hastings, CEO of Netflix, said recently that he felt his company was forced to make this arrangement for faster delivery of its service "because the deal was better than the prior state. We felt we had to."

If you take even a couple of minutes to really think this situation through, it becomes clear that it is patently absurd from a logical perspective. Comcast's role as an Internet service provider is to provide Internet service to its customers, which it does in exchange for a monthly payment. To then strike deals to be compensated by companies on the other end of the pipe for alleviating bottlenecks on its own network is nothing less than greedy double-dipping. Comcast wants to be paid by customers for access to the Internet and then by Internet facing companies for access to its network customers. As Hastings says, "they want the whole Internet to pay them for when their subscribers use the Internet."

Payola

How is this a good development for the Internet? How does this not result in a “fast lane” on the Internet, where bigger players with big pocketbooks continually provide payola to ISPs to make their content available faster (or even at all) while smaller companies with a potential to disrupt—you know, the startups that make the Internet an amazing marketplace—are edged out because they cannot pay the toll?

This is a really big deal, folks. If you are in any sort of communications, entertainment, or service provider business, consider this risk here. Assume for a moment that Comcast decides in order for their customers—your customers, too, of course, but simply the set of your customers that use Comcast to access your services—to get unfettered access to your site or your service, you have to pay them a private peering fee. Or what if Comcast decides, with a broader stroke, to tier services so that your business is in a “category” that by default does not receive an allocation of bandwidth sufficient to properly provide access to your service for your customers? What if Comcast competes directly with your business and decides that no way no how are they going to let their customers even have access to your service?

This is the power that monopolies have, and while you could argue that perhaps over time they would not get away with this sort of nonsense because the regulators would eventually step in, your argument is weak. Our government can be very slow to respond. Lawsuits are not timely and they are very expensive. They require resources from many parties. Startups and other disruptive companies by their very nature seek to cause change massively and quickly. Their premise and promise could easily be shut down by a Comcast or a Time Warner Cable or an AT&T, and even if that behavior were later considered illegal, the damage would already have been done.

You might also argue that consumers have a power of choice if one ISP decided to act in this nefarious way; however, unlike other markets, most consumers have only one choice for home wired broadband connectivity and many wireless broadband solutions have data caps or are otherwise unsuitable for sustained regular use within the home.

Get Involved

So while the title of this piece indicates net neutrality is something that you do not have time to care about, you really need to make the time to be involved in this debate. What the issue boils down to is whether the companies that offer service on pipes on the Internet, the last mile of service between a user and the greater Internet, should be able to exert control over the traffic that goes over those pipes. Weigh in on this issue. Write your congressmen. Write to the Federal Communications Commission. Direct your business where you can to companies that favor the strongest exercise of net neutrality principles. Otherwise, you might find yourself competing to reach your customers not only within your own industry, but also with your customers’ service providers.

This article originally appeared, with edits, at CIO.com.

First Look: Microsoft Azure RemoteApp

It has always been part of the Microsoft vision: your users can work anywhere, on any device, on anything, and it will all be secure, synchronized, and responsive. But the execution on this vision has heretofore unfortunately had many holes in it: with Microsoft not playing nice in the licensing department, how was anyone to offer a truly mobile Windows desktop on a cost effective basis without actually paying for a device that ran Windows?

Those days may be over as Microsoft Azure RemoteApp comes to the forefront. Let’s take a look at this new service, how to get started with it, and what it means for the enterprise AND for the consumer space.

What is Microsoft Azure RemoteApp?

Do you remember the OnLive saga? Allow me to whisk you back to January 2012, where, at the Consumer Electronics Show, OnLive announced their service called OnLive Desktop, which was essentially a dedicated Windows 7 virtual machine, containing an installation of Microsoft Office, that you could log into from iPads and Android devices. This of course immediately set off alarm bells within Microsoft, as Windows 7 is not licensed to be used in a service provider setting like that—only Windows Server 2008, appropriately covered by a service provider licensing agreement, or SPLA, was permitted here. Threats ensued and the company quickly switched to a Windows Server-based deployment to settle these issues. But since then, no one has attempted to make Windows applications as a service and market it to the general public—at least no company of which I am aware. (For its part, OnLive Desktop is still around, only for tablets, and is working on clients for PCs, Macs, and your television.)

Essentially, Microsoft Azure RemoteApp is OnLive Desktop done straight from the horse’s mouth: a desktop application as a service offering, although I hope I did not coin an unfortunate nickname just then. The preview program now starts out with your traditional Microsoft Office 2013 applications, like Word, Excel, PowerPoint, Access, and Outlook, but you can also just log into a Windows desktop and use Internet Explorer and more.

The advantages are numerous:

  • Instead of investing in a fixed capacity Remote Desktop Services or Terminal Services deployment, including the associated volume licenses required to operate Office applications in this capacity, you can simply purchase the service from Microsoft and pay as you use it—you can scale up as needs require, or scale down as demand wanes, without having to buy hardware to satisfy your peak capacity only to have much of that capacity sit unused over time.
  • Your IT department can outsource this service to Microsoft, at least in one of the Azure RemoteApp deployment models, as Microsoft takes on all the responsibility for scheduled and ad hoc maintenance, software patching, security updates, hardware maintenance, and more. You will always be using the latest version of the software with essentially no further action on your part or the part of your IT administrators.
  • You can use this service to finally enable a productive experience for users who have brought their own personal devices to work or who prefer alternative platforms like iOS and Android when they are traveling or at home. Even on, say, an iPad, your users can work in native Microsoft Office and use your own line of business applications or homegrown programs without the cost of porting them natively to those alternative platforms.

Cloud-Only and Hybrid Deployments

There are two deployment methods for Microsoft Azure RemoteApp that offer a couple of different levels of responsibility and control. Depending on your need for either complete outsourcing or a healthy extension of your existing investments, you can choose from:

  • Cloud only deployments. In this model, Azure RemoteApp hosts services independently of any services you currently offer to your user base from resources in your own datacenters. Microsoft manages the service, including patching the applications, providing security updates, managing the virtual workloads hosting the applications, and more, so all you need to do is provision the service and use it. Starting a new instance is simply a matter of a few clicks. The whole enchilada is guarded from malware by the Microsoft hosted antimalware service, which is a part of the offering and enabled at no additional charge or license requirement.
  • A hybrid deployment lets you control which applications, which operating systems, and which settings are enabled as part of the service. In this scenario, you create the template image from which the service is offered and you upload it into Microsoft Azure. This image is joined to your on premises domain, and identities are either synchronized (where passwords are passed between the two copies of the directory and kept in sync) or federated (where ADFS passes secure tokens back and forth to enable access but your enterprise passwords never leave your on premises directory) with your local directory via Azure Active Directory. This means your existing user credentials will work fine for your template image and application offerings even though the actual hosting of the image and the service is done in Microsoft's datacenters.

Getting Started and Testing Out the Service

It is fairly easy to get started using Azure RemoteApp, especially in a cloud only deployment. Here is how to set up your first instances of the service in either of the two supported modes.

Setting Up Microsoft Azure RemoteApp in Cloud Only Mode

  1. Sign up for a subscription to Microsoft Azure.
  2. From the New menu in the lower left hand corner, select App Services, then select Remote App.
  3. Choose the Quick Create option.
  4. In the right hand pane, enter a simple, friendly name to help you identify this instance within the Azure management console, choose a service region closest to where either you or your users are geographically located on the planet, and then choose the template image from which this service instance should be created. As mentioned, during the preview phase, only the pre-created Windows Server 2012 R2 image along with Microsoft Office Professional Plus 2013 is available for cloud only deployments.
  5. Click the Create RemoteApp Service link.

The automated processes will then provision the service and you’ll be notified via a status update on the lower half of the Azure management console when the provisioning is complete and the service is ready to use. In my testing, this provisioning process took about eight minutes to complete.

When the service is provisioned, you need to assign users and give them permission to access the service. This involves either setting up Windows Azure Active Directory synchronization with your on premises servers, or having your users set up and configure a Microsoft account, the name of which you then give to the Azure RemoteApp service, which will use it for authentication and authorization purposes. On the RemoteApp dashboard within the Azure management console, click the Configure User Access button. You then enter your users' e-mail addresses, the accounts are matched up via whichever service applies, and you choose the individual programs those users will be able to access over the Azure RemoteApp connection.

How do you manage available programs? From the same RemoteApp dashboard within the Azure management console, click the Public RemoteApp Programs link, and you’ll be taken to a list of all of the programs that are configured for that particular Azure RemoteApp service instance. During the preview, you get Word, Excel, PowerPoint, Outlook, OneNote, Project, and Visio, as well as some true dyed in the wool programs like Paint (yes, the one that’s been around for ages) and Calc, the Windows system calculator. These are on the portal for test purposes.

Once all of your users are set up, direct them to http://www.remoteapp.windowsazure.com to download the RemoteApp client—it is a simple click-to-run installation and is available for Windows, Mac OS X, iOS, and Android platforms. They then accept a little invitation that is presented upon logging in and can use the apps as they would any other desktop application—except those applications are running in the Azure datacenter and the visuals are being shipped over the remote desktop protocol.

Setting Up Microsoft Azure RemoteApp in Hybrid Deployment Mode

Recall you will want to use hybrid mode to deploy your own operating system images, custom applications and home grown programs, and in general integrate more with your own on premises network. Since these images and the applications that run on them are domain joined, they have full security trimmed access to your regular network. This is accomplished by creating virtual networks that act as site to site VPNs. (We have covered how to set up site to site VPNs with Windows Azure in a previous article on Computerworld.com, so I will not repeat those instructions here.)

On top of these VPNs, Microsoft has created “VNets,” or virtual networks, that contain some performance improvements to make application usage seem more consistent over the VPN link. You give information to the VNet configuration manager like the cloud address space, your on premises address space, the IP address of your VPN router, and the DNS server for those local on premises addresses, and Azure handles the rest of the configuration.

Once this is set up, and you have created instances of the service using the instructions in the previous section, Azure RemoteApp will let you join the virtual machines running Remote Desktop Session Host (the old Terminal Services role, if you are familiar with Windows) to your local domain, so not only do you get Group Policy object replication and system management agents touching your Azure hosted applications, but you can also control software installation from a single console. All you need to do to enable this functionality is create a separate organizational unit (OU) within Active Directory that will contain only Azure RemoteApp VMs, and create a service account that has privileges to join machines to your domain. Keep in mind, however, that since these machines are under your control, you maintain the responsibility for patching and otherwise maintaining them—unlike the cloud only deployment, where the applications and attendant maintenance are taken care of by Microsoft as part of the service.

For custom images, you simply create new virtual machines on premises that are based on Windows Server 2012 R2 with the Remote Desktop Session Host role installed. You can then set them up and install applications as you wish. The Azure management portal has a UI where you can upload, resume uploading, and delete custom images and manage those images’ links to the various Azure RemoteApp instances that you have set up.
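
If you are building that template from scratch, the session host role can be added from PowerShell before you install your applications and generalize the image. A minimal sketch, on the assumption that you run it on the template VM itself and sysprep the image afterward as part of the usual preparation routine:

    # On the Windows Server 2012 R2 virtual machine that will become the template image
    Install-WindowsFeature RDS-RD-Server -IncludeManagementTools -Restart

    # Install your line of business applications next, then generalize the image before uploading it:
    # C:\Windows\System32\Sysprep\sysprep.exe /generalize /oobe /shutdown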

Pricing and Availability

Currently, the Azure RemoteApp service is free to use because it is in preview—essentially a no-cost public beta. By default, the service will be licensed per instance, where you get 10 users per service instance; to scale up, you simply add more service instances. You can get access to Azure RemoteApp in the US East and West, Europe North and West, and Asia Pacific East and Southeast service regions, and the preview is available today.

The Last Word

One thing Windows and Office have lacked for a long time is a way to be productive remotely on non-Windows machines, and particularly on mobile devices. Azure RemoteApp looks to solve that problem, and the preview release has promising potential, albeit with some significant limitations in terms of image flexibility at the time of this writing. The eventual success and adoption of the service will depend largely on how Microsoft prices it and whether the company will consider allowing license portability between enterprise agreement entitlements and the Azure RemoteApp service, so customers are not essentially charged twice for access to software they may have already paid for. But from a purely technical standpoint, there is not much not to like about this service, and you should make plans to evaluate it, as I believe it will play a big role in accessing desktop applications from anywhere in the future.

This article originally appeared, with edits, at Computerworld.

Is Hardware Back?

If you haven’t been living under a rock, you have heard pundits proclaiming the death of the PC. As consumers move to tablets and stop buying laptops and desktops, and as companies pinch their IT budgets even harder than they have in the past, it is easy to paint a doomsday scenario for hardware. Even the hardware companies play into this myth—look no further than AMD to see weakness portrayed. It seems that for many, hardware has lost its sexiness.

But I think hardware is back! Clearly the biggest two catalysts for hardware in this era are mobile devices and the explosion of cloud computing as a way to host applications, infrastructure, and services. Let’s unpack the promise of hardware from these two perspectives a little more.

The Cloud Perspective

It takes massive amounts of hardware to run cloud services at scale. That is what astute folks call an F-A-C-T fact. Interestingly, though, saving on hardware costs is what began the whole cloud wave.

Cloud services began as a natural offshoot of virtualization, which ironically was a way to squeeze more from your existing hardware and buy less of it in the future. The promise of virtualization was that once you abstract away the physical complexity and irregularity of the bare metal that makes up servers, you can deploy virtual machines across physical machines and generally orchestrate datacenter operations without really having to care that Dell Server A sits in rack P connected to switch W. The logical next step, once you have those orchestration capabilities, is to scale up your capacity and offer services to paying customers, either within your own organization through a chargeback scheme or to the general public as a public cloud service. Enter Amazon Web Services, Microsoft Azure, and, more recently, Google.

How important is hardware to the cloud? It is essential. It is the very lifeblood of these services, and to continue innovating and offering high quality, reliable services, you have to buy enormous amounts of hardware—to the point where the market threatens small and boutique cloud services firms. Just this week, Rackspace, an independent cloud computing company with a market capitalization of $4.4 billion, hired the investment banking firm Morgan Stanley to help it seek out “strategic alternatives,” which essentially means it is looking for a partner. Why? Because you have to have deep, deep pockets—a huge war chest—to compete at the scale at which the likes of Microsoft, Amazon, and Google are competing in the cloud.

Here are some interesting statistics that put this resource issue into perspective:

  • Various Microsoft sources have confirmed to me on the condition of anonymity that the scale of Microsoft Azure, the company’s cloud computing service, exceeds one million physical servers, and that the company is on a path to grow the service’s capacity by 200% within a few years. That is a lot of hardware.
  • Many of these same sources cite a mind-boggling statistic: over 17% of the server hardware produced on the planet is ordered by, and put into production by, Microsoft for its cloud services, including Azure and Office 365. That, too, is a lot of hardware.
  • But others may well have even more hardware than that. Gartner estimates that Amazon Web Services (AWS) has five times more cloud infrastructure, including servers, networking gear, and other hardware, than the next 14 competitors have combined. Indeed, Amazon adds as much new cloud infrastructure every single day as it took to run the entire Amazon business (including the book selling) back when it was a $7 billion company.
  • Netflix, which runs its service—well, at least the streaming video part of it—on the Amazon Web Services cloud, accounts for up to 33% of all Internet traffic (that’s all bits on the wire anywhere in the public Internet) during its peak times. That’s one third of all Internet activity on the entire planet.

A reasonably sized datacenter costs about a billion dollars to build, and only a handful of companies generate the kind of free cash flow required to develop multiple datacenters without first realizing a complete return on their investment. It is a very capital intensive business.

So is hardware back, from a cloud perspective? Absolutely. The savings and efficiency that we all realized from deploying virtualization in our own server closets and datacenters may have caused a temporary “softness” in the market for server hardware, because we could wring more out of the investments we had already made. And it’s true that as the public cloud continues to mature, we will all most likely be buying less hardware for our own resource pools, but the scale at which Microsoft, Amazon, Google, and even the smaller players in the industry are deploying new servers and purchasing hardware should more than make up for that shortfall.

From the Mobile Perspective

Many experts have prognosticated the demise of the corporate desktop or laptop. And it’s clear that, at least in the corporate world, hardware refresh cycles are being lengthened to three to five years, primarily because hardware capability has outpaced software innovation to the point that we simply don’t need more powerful processors and faster memory to run, say, Microsoft Office or a line of business application acceptably well. Consumers, meanwhile, are purchasing bottom-priced PCs or Macs but are also investing rather heavily in smartphones and tablets that run Android and Apple’s iOS operating system. Put simply, the stalling out of traditional desktops and laptops does not necessarily mean that hardware is dead.

Industry giant Microsoft has converted itself rather successfully, even in these early stages, into a company centered on devices—both the hardware it makes itself and the hardware it depends on third party ecosystem partners to manufacture—that tie into the all-encompassing services it offers to both enterprises and consumers. The Windows 8 and Windows 8.1 ecosystem is designed around tablets that function as content consumption devices as well as traditional desktop productivity machines. Apple launched the iPad juggernaut, which really propelled the entire industry into its current “post-PC” state. But smartphones and tablets are made of hardware, of course, and increasingly powerful hardware at that.

Now we also hear about the Internet of Things, or IoT, where everything from the tires on your car to your refrigerator has Internet-connected hardware sensors that feed data to a variety of services. That is hardware that connects to the cloud, and we have already established how hardware intensive that cloud hosting is.

So is hardware back, from a mobile perspective? I think so. In fact, I am not convinced hardware was ever really on the “outs” in this space. Smartphones will only get more powerful and cheaper, particularly now that we have Android—which has always been available at no cost to hardware original equipment manufacturers (OEMs)—and now Windows Phone available to OEMs without licensing cost as well. It will be possible to get an Internet-connected smartphone for $10-20 within a couple of years, I predict, and there is plenty of potential for penetration still remaining for tablets and smartphones alike in emerging markets.

This article originally appeared, with edits, at CIO.com.

WIMBoot in Windows 8.1 Update: A First Look

WIMBoot is a new feature, currently only available on the desktop side of Windows 8.1 Update, that lets computers boot directly off a Windows image file, or WIM file.

Why would you want this?

WIM files are tremendously compressed. They have been used in the last few releases of Windows as the method by which the operating system was distributed—on the DVDs and ISO files, you would see boot.wim and install.wim, two files that contained essentially the entire Windows system, highly compressed. What the installer was doing during the first phase was simply extracting the full set of files from these compressed WIM files, and then moving on into the next phase of registering DLLs and doing the housekeeping necessary to get from bare metal to a working Windows machine. Interestingly, the compressed WIM file stays on the Windows machine to allow for system file recovery, so you’re paying the disk space price twice: once to store the compressed version of all of the files, and a second time to store the fully expanded version of the files for normal daily use.
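You can see just how dramatic that compression is with the DISM PowerShell module that ships with Windows 8.1. The path below is only an example; point it at any install.wim and compare the file’s size on disk with the expanded image size that is reported.

# List the editions (indexes) packed into the WIM
Get-WindowsImage -ImagePath "C:\images\install.wim"

# Show the details of a single index, including its expanded (uncompressed) size
Get-WindowsImage -ImagePath "C:\images\install.wim" -Index 1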

WIMBoot turns this on its head, at least on PCs with solid state drives and UEFI firmware—in other words, this is not for traditional or legacy machines that boot from rotating-platter disk drives and use BIOS chips. Setup lays down a copy of the compressed WIM file on a special imaging partition, and then during installation, Windows simply creates pointers on the operating system partition to the compressed files in the partition where the WIM file is stored. The imaging partition also serves as the recovery partition, so it does double duty.

By enabling a scenario where you can boot into Windows directly off of a WIM file, you gain the ability to:

  • Have PCs that can use push-button restore while maintaining much more free disk space across the entire device.
  • Install Windows on devices with very small, fast drives like tablets, hybrid PCs, and repurposed machines to use as thin clients.
  • Install Windows much more quickly by eliminating a time consuming phase of the setup process.

How Do You Create WIMBoot Files?

You can use the Deployment Image Servicing and Management (DISM) tool to create WIMBoot images. Here is a procedure you can use to get started.

  1. Install Windows 8.1 Update on a PC.
  2. If you have existing Windows 8.1 images and Windows RE (Recovery Environment) images, slipstream the Windows 8.1 Update into those source files, too, so that you are working from the latest version of the operating system.
  3. Install the Windows 8.1 Update version of the Windows Assessment and Deployment Kit (ADK).
  4. Create a boot drive in Windows PE by following the instructions at http://technet.microsoft.com/en-us/library/dn613859.aspx.
  5. Copy the existing install.wim file into a new temporary file that you will use to create the WIMBoot image.
    copy c:\images\install.wim c:\images\install_wimboot.wim
  6. Mount the image.
    md c:\mnt\win
    dism /Mount-Image /ImageFile:C:\images\install_wimboot.wim /Index:1 /MountDir:c:\mnt\win
  7. Grab the hidden Windows RE image and unhide it so you can separate it out to save even more space.
    attrib -s -h c:\mnt\win\windows\system32\recovery\winre.wim
    move c:\mnt\win\windows\system32\recovery\winre.wim c:\images\winre.wim
  8. Unmount the image and save your changes with the commit command.
    dism /unmount-image /mountdir:c:\mnt\win /commit
  9. Boot into Windows PE (the boot drive you created in step four), apply the Windows image you just created, and then reboot. When Windows starts the out of box experience upon that next boot, hit Control, Shift, and F3 at the same time to get into audit mode.
  10. Clean up the image to save even more disk space.
    Dism /Online /Cleanup-Image /StartComponentCleanup
  11. Sysprep the PC as you normally would to scrub SIDs and make the PC ready for imaging.
    c:\windows\system32\sysprep\sysprep /generalize /shutdown /oobe
  12. When the PC reboots, boot into Windows PE again and then recapture the image, which will include the applied updates and remove superseded files to save space. Note the new WIMBoot switch, which instructs DISM to save the image in a format compatible with WIMBoot.
    DISM /Capture-Image /WIMBoot /ImageFile:"X:\Images\install_wimboot.wim" /CaptureDir:C: /Name:"WIMBoot Enterprise_x64 8.1 Update"

How Do I Deploy WIMBoot into Production?

To use the WIMBoot image you created above, you need to boot your target PCs into Windows PE and then create a partition layout that includes a system partition, a Windows partition, and an images partition. Here is a sample diskpart script to create the required layout (the shrink command frees up space at the end of the disk, which the second create partition primary command then uses for the images partition).

select disk 0
clean
convert gpt
create partition efi size=100
format quick fs=fat32 label="System"
create partition msr size=128
create partition primary
shrink minimum=10000
format quick fs=ntfs label="Windows"
assign letter=c
create partition primary
format quick fs=ntfs label="Images"
assign letter=m
set id="de94bba4-06d1-4d40-a16a-bfd50179d6ac"
gpt attributes=0x8000000000000001
list volume
exit

You can use the guidance at http://technet.microsoft.com/en-us/library/dn605112.aspx for very detailed sequences for deploying WIMBoot images to different devices, depending on whether or not you know in advance how big the images will be.
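The core of those sequences boils down to just a few commands. Here is a minimal sketch: the drive letters follow the diskpart script above, the file names follow the capture example earlier, and you would adjust both to your environment. The linked guidance also covers registering the recovery image you separated out earlier.

# Copy the captured WIMBoot image from your deployment media to the Images partition
md M:\Images
copy X:\Images\install_wimboot.wim M:\Images\

# Apply the image so that C: holds pointers into the compressed WIM rather
# than a fully expanded copy of the files
dism /Apply-Image /ImageFile:M:\Images\install_wimboot.wim /Index:1 /ApplyDir:C:\ /WIMBoot

# Create the boot files on the system partition (add /s and a drive letter
# if you need to target a specific system partition)
bcdboot C:\Windows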

This article originally appeared, with edits, at SearchEnterpriseDesktop.com.

Windows Azure Pack for Windows Server: A Primer

Windows Azure Pack brings the interface of Windows Azure—well, Microsoft Azure now—to your own servers and lets you create an infrastructure as a service offering for your internal customers, for external parties, or both. The best parts about it? The control of all of it rests with you, but the burden of requesting and provisioning can be delegated and automated. Let’s take a look.

How Windows Azure Pack Works

The real appeal of the Windows Azure Pack for Windows Server is that it gets you well on your way to creating your own “private cloud.” Using the very simple interface that the Azure infrastructure as a service (IaaS) offering provides, you can create offerings for your own organization’s departments and internal customers that put the power of self service IT at their fingertips.

Note: the Windows Azure Pack for Windows Server requires Windows Server 2012 or Windows Server 2012 R2 (no prior release is supported) to host the portal and services. The individual offerings and virtual machines that you make available within your portal, however, can run any version of Windows that is supported under the Hyper-V hypervisor.

For example, you could have a group of servers that makes up your private cloud. As the administrator, you can define offerings—for example, a small, medium, and heavy load service. Each service might consist of a certain virtual machine configuration, spread across different physical hosts for fault tolerance, a certain network setup, and a specific guarantee of service availability. These can all be offered through a portal, so when one department comes to you and needs a web site set up, you can point them to the portal and within a few clicks, they can get their service set up and running, with all the complexity of ordering and configuring hardware completely abstracted away.

You can also create website clouds, which remove even the provisioning of virtual machines from the equation—these shared websites can host ASP.NET, PHP, and Node.js web applications specifically, and the feature includes a gallery of common and popular open source web applications to make creating new sites simple, as well as integration with source control systems like Git. These website clouds exist just at the site level, with the actual hosting done across a set of virtual machines you configure once and then essentially forget about. It is quite handy, especially for businesses that create multiple websites all the time and need a way to get them spun up and ready quickly without a lot of involvement from the IT department.

Commercial hosters will also find the Windows Azure Pack to be a useful tool for offering infrastructure services out to the larger Internet. The portal works in the same way, but there is also a mechanism for charging back hosting costs, as well as the ability to run promotions and coupons that offer limited time upgrades and special services alongside your normal packages.

Getting Started with the Windows Azure Pack for Windows Server

Since the Windows Azure Pack is designed to bring the convenience of the super large Windows Azure service down to the quantity of servers and other iron that we mere mortals have in our own datacenters and server closets, it is safe to assume that there are some reasonably heavy hardware requirements. This is both for fault tolerance, as WAP will attempt to spread workloads around all of the available machines in a private cloud, and for capacity’s sake: the idea here is to host many workloads, not just one or two, so that you can fully reap the benefits of delegation and automation.

Luckily, Microsoft has made available the “Windows Azure Pack: Portal and API Express” package, which puts all of the components of the solution on one machine, making it great for testing and learning how WAP works. You can download the single machine stack installer at http://go.microsoft.com/?linkid=9832690.

For the full monty, you will essentially need to dedicate eight machines to your private cloud, if not more. These eight machines can be either physical or virtual, however. Make sure each machine has two processors, at least 4 GB of RAM, and 40 GB of available hard disk space. As noted, each machine needs to be running Windows Server 2012 or Windows Server 2012 R2. Then, you can download the Web Platform Installer, or WPI, which is a very handy toolkit for deploying greenfield installations of web applications that are based on IIS.

From there, it is a matter of playing around in the administrator portal and configuring your services and offerings. If you are familiar with Windows Azure, you will feel right at home—it is literally some of the same code and structures used in the mainstream Azure public cloud service. Try it out today.