Why Net Neutrality is the Most Important Thing You Don’t Have Time to Care About

Net neutrality as an issue has recently come to a head, but I talk to many IT managers and leaders and it is clear not many have a true grasp on the implications this issue really has on the Internet and on the marketplace as a whole. As a result, I wanted to spend some time breaking down the issue and pointing out that while you may not have cared about net neutrality before, you absolutely need to care now.

What is Net Neutrality?

Essentially, net neutrality at its core is the principle that a packet is a packet is a packet when it traverses the Internet. Internet service providers adhering to net neutrality standards do not particularly care whether a packet is streaming video, an e-mail, an upload to SharePoint, or an FTP download. It is just traffic, and to them, all traffic is the same. Beyond treating traffic equally, net neutrality also means that data is accessible to all customers with Internet access regardless of their provider, without the provider discriminating in access or service quality between sources of traffic.

The issue of net neutrality has been around for a while but it has become increasingly mainstream as more and more services are delivered over the Internet and world wide web and not over other types of networks and transmission systems. What you are finding is that so much content is being delivered over the Internet that it is saturating the existing interconnect capacity between network providers, and many network providers are reluctant to incur the expense of expanding that capacity to favor only a few sources of that congestion.

Combining Internet and Entertainment

A very worrying new development, at least in my personal opinion, is the merger of Comcast and Time Warner Cable. I have never had Comcast service, so I cannot attest personally to it, but the company is regularly voted The Most Hated Company in America, and results so consistent must somehow be grounded in reality. I harbor absolutely no love for Time Warner Cable either after two billing snafus nearly wrecked my recent home purchase and mortgage application (in the interests of full disclosure, both issues have been rectified). Putting the two together has got to be the worst idea for consumers in a very, very long time, and honestly I would be very skeptical the merger would even make it through government scrutiny were it not for the fact that the cable plants for each of the respective companies do not overlap at all due to the nature of the local municipality sanctioned business monopolies each enjoy.

The problem is that NBC Universal, a company created in 2004, merged with Comcast through a complicated buyout announced in 2009, resulting in a huge entertainment company that also serves millions and millions of broadband access customers with its Internet service provider business. Owning all of the movies that Universal produces, owning the popular major broadcast network NBC, and controlling the only home Internet access most of its customers have offers NBC Universal an unprecedented opportunity to control the distribution of information and entertainment to those customers. It is not hard to envision a world where Comcast promotes and offers NBC content preferentially to its customers, perhaps even to the detriment of other entertainment sources that its customers might choose but that would need to be delivered over its network.

In fact, Comcast is already putting this opportunity to work by striking deals with companies like Netflix to offer service with less interruption and more availability to its own customers. Previously, Comcast customers had complained that streaming Netflix video was choppy and at times unusable. Indeed, a Wall Street Journal analysis found that Netflix’s primetime performance over major ISPs dwindled over a period of just three months from around 2 megabits per second to as low as 1.5 megabits per second, mostly due to congestion driven by the service’s popularity. Instead of treating that as its own problem, since providing that service is what it already takes its customers’ money to do, Comcast refused to permanently fix the issue, announcing that it believed Netflix should bear the cost of delivering its content to Comcast customers. Netflix acquiesced, making a deal with Comcast that resolves this congestion. Reed Hastings, CEO of Netflix, said recently that he felt his company was forced to make this arrangement for faster delivery of its service “because the deal was better than the prior state. We felt we had to.”

If you take even a couple of minutes to really think this situation through, it becomes clear that it is patently absurd from a logical perspective. Comcast’s role as an Internet service provider is to provide Internet service to its customers, which it does in exchange for a monthly payment. To then strike deals to be compensated for alleviating bottlenecks on its own network by companies on the other end of the pipe is nothing less than greedy double-dipping. Comcast wants to be paid by customers for access to the Internet and then by Internet-facing companies for access to its network customers. As Hastings says, “they want the whole Internet to pay them for when their subscribers use the Internet.”

Payola

How is this a good development for the Internet? How does this not result in a “fast lane” on the Internet, where bigger players with big pocketbooks continually provide payola to ISPs to make their content available faster (or even at all) while smaller companies with a potential to disrupt—you know, the startups that make the Internet an amazing marketplace—are edged out because they cannot pay the toll?

This is a really big deal, folks. If you are in any sort of communications, entertainment, or service provider business, consider the risk here. Assume for a moment that Comcast decides that in order for their customers—your customers, too, of course, but simply the subset of your customers that use Comcast to access your services—to get unfettered access to your site or your service, you have to pay them a private peering fee. Or what if Comcast decides, with a broader stroke, to tier services so that your business falls into a “category” that by default does not receive an allocation of bandwidth sufficient to properly provide access to your service for your customers? What if Comcast competes directly with your business and decides that no way, no how are they going to let their customers even have access to your service?

This is the power that monopolies have, and while you could argue that perhaps over time they would not get away with this sort of nonsense because the regulators would eventually step in, your argument is weak. Our government can be very slow to respond. Lawsuits are not timely and they are very expensive. They require resources from many parties. Startups and other disruptive companies by their very nature seek to cause change massively and quickly. Their premise and promise could easily be shut down by a Comcast or a Time Warner Cable or an AT&T, and even if that behavior were later considered illegal, the damage would already have been done.

You might also argue that consumers have a power of choice if one ISP decided to act in this nefarious way; however, unlike other markets, most consumers have only one choice for home wired broadband connectivity and many wireless broadband solutions have data caps or are otherwise unsuitable for sustained regular use within the home.

Get Involved

So while the title of this piece indicates net neutrality is something that you do not have time to care about, you really need to make the time to be involved in this debate. What the issue boils down to is whether the companies that own the pipes, the last mile of service between a user and the greater Internet, should be able to exert control over the traffic that goes over those pipes. Weigh in on this issue. Write your congressmen. Write to the Federal Communications Commission. Direct your business where you can to companies that favor the strongest exercise of net neutrality principles. Otherwise, you might find yourself competing to reach your customers not only within your own industry, but also with your customers’ service providers.

This article originally appeared, with edits, at CIO.com.

First Look: Microsoft Azure RemoteApp

It has always been part of the Microsoft vision: your users can work anywhere, on any device, on anything, and it will all be secure, synchronized, and responsive. But the execution of this vision has heretofore unfortunately had many holes in it: with Microsoft not playing nice in the licensing department, how was anyone to offer a truly mobile Windows desktop on a cost effective basis without actually paying for a device that ran Windows?

Those days may be over as Microsoft Azure RemoteApp comes to the forefront. Let’s take a look at this new service, how to get started with it, and what it means for the enterprise AND for the consumer space.

What is Microsoft Azure RemoteApp?

Do you remember the OnLive saga? Allow me to whisk you back to January 2012, where, at the Consumer Electronics Show, OnLive announced their service called OnLive Desktop, which was essentially a dedicated Windows 7 virtual machine, containing an installation of Microsoft Office, that you could log into from iPads and Android devices. This of course immediately set off alarm bells within Microsoft, as Windows 7 is not licensed to be used in a service provider setting like that—only Windows Server 2008, appropriately covered by a service provider licensing agreement, or SPLA, was permitted here. Threats ensued and the company quickly switched to a Windows Server-based deployment to settle these issues. But since then, no one has attempted to make Windows applications as a service and market it to the general public—at least no company of which I am aware. (For its part, OnLive Desktop is still around, only for tablets, and is working on clients for PCs, Macs, and your television.)

Essentially, Microsoft Azure RemoteApp is OnLive Desktop done straight from the horse’s mouth: a desktop application as a service offering, although I hope I did not coin an unfortunate nickname just then. The preview program now starts out with your traditional Microsoft Office 2013 applications, like Word, Excel, PowerPoint, Access, and Outlook, but you can also just log into a Windows desktop and use Internet Explorer and more.

The advantages are numerous:

  • Instead of investing in a fixed capacity Remote Desktop Services or Terminal Services deployment, including the associated volume licenses required to operate Office applications in this capacity, you can simply purchase the service from Microsoft and pay as you use it—you can scale up as needs require, or scale down as demand wanes, without having to buy hardware to satisfy your peak capacity only to leave much of that capacity unused over time.
  • Your IT department can outsource this service to Microsoft, at least in one of the Azure RemoteApp deployment models, as they will have all the responsibility of scheduled and ad-hoc maintenance, software patching, security updates, hardware maintenance, and more. You will always be using the latest version of the software with essentially no further action on your part or the part of your IT administrators.
  • You can use this service to finally enable a productive experience for your users who have brought their own personal devices to work or prefer to use those alternative platforms like iOS and Android when they are traveling or at home. Even on, say, an iPad, your users can work in native Microsoft Office and use your own line of business application or homegrown programs without the cost of porting them natively to these alternative platforms.

Cloud-Only and Hybrid Deployments

There are two deployment methods for Microsoft Azure RemoteApp that offer a couple of different levels of responsibility and control. Depending on your need for either complete outsourcing or a healthy extension of your existing investments, you can choose from:

  • Cloud only deployments. In this model, Azure RemoteApp hosts services independently of any services you currently offer to your user base from resources in your own datacenters. Microsoft manages the service, including patching the applications, providing security updates, managing the virtual workloads hosting the applications, and more, so all you need to do is provision the service and use it. Starting a new instance is simply a matter of a few clicks. The whole enchilada is guarded from malware by the Microsoft hosted antimalware service, which is a part of the offering and enabled at no additional charge or license requirement.
  • A hybrid deployment lets you control which applications, which operating systems, and which settings are enabled as part of the service. In this scenario, you create the template image from which the service is offered and you upload it into Microsoft Azure. This image is joined to your on premises domain, and identities are synchronized (where passwords are passed between the two copies of the directory and kept in sync) or federated (where ADFS passes secure tokens back and forth to enable access but your enterprise passwords never leave your on premises directory) with your local directory via Azure Active Directory. This means your existing user credentials will work fine for your template image and application offerings even though the actual hosting of the image and the service is done in Microsoft’s datacenters.

Getting Started and Testing Out the Service

It is fairly easy to get started using Azure RemoteApp, especially in a cloud only deployment. Here is how to set up your first instances of the service in either of the two supported modes.

Setting Up Microsoft Azure RemoteApp in Cloud Only Mode

  1. Sign up for a subscription to Microsoft Azure.
  2. From the New menu in the lower left hand corner, select App Services, then select Remote App.
  3. Choose the Quick Create option.
  4. In the right hand pane, enter a simple, friendly name to help you identify this instance within the Azure management console, choose a service region closest to where either you or your users are geographically located on the planet, and then choose the template image from which this service instance should be created. As mentioned, during the preview phase, only the pre-created Windows Server 2012 R2 image along with Microsoft Office Professional Plus 2013 is available for cloud only deployments.
  5. Click the Create RemoteApp Service link.

The automated processes will then provision the service and you’ll be notified via a status update on the lower half of the Azure management console when the provisioning is complete and the service is ready to use. In my testing, this provisioning process took about eight minutes to complete.

When the service is provisioned, you need to assign users and give them permission to access the service. This involves either setting up Windows Azure Active Directory synchronization with your on premises servers, or having your users set up and configure a Microsoft account, the name of which you then give to the Azure RemoteApp service, which will use it for authentication and authorization purposes. On the RemoteApp dashboard within the Azure management console, click the Configure User Access button. You then enter your users’ e-mail addresses, the accounts are matched up via whichever identity service applies, and you choose the individual programs those users will be able to access over the Azure RemoteApp connection.
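
The Azure PowerShell module also picked up RemoteApp cmdlets over time; if your version of the module includes them, assigning a Microsoft account to a collection can be scripted along these lines. Treat this strictly as a sketch: the collection name and address are placeholders, and cmdlet names and parameters may vary by module version.

# Sign in, grant a Microsoft account access to a RemoteApp collection, then list assigned users
Add-AzureAccount
Add-AzureRemoteAppUser -CollectionName "Contoso" -Type MicrosoftAccount -UserUpn "user@outlook.com"
Get-AzureRemoteAppUser -CollectionName "Contoso"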

How do you manage available programs? From the same RemoteApp dashboard within the Azure management console, click the Public RemoteApp Programs link, and you’ll be taken to a list of all of the programs that are configured for that particular Azure RemoteApp service instance. During the preview, you get Word, Excel, PowerPoint, Outlook, OneNote, Project, and Visio, as well as some true dyed in the wool programs like Paint (yes, the one that’s been around for ages) and Calc, the Windows system calculator. These are on the portal for test purposes.

Once all of your users are set up, direct them to http://www.remoteapp.windowsazure.com to download the RemoteApp Client—it is a click-to-run simple installation and available for Windows, Mac OS X, iOS, and Android platforms. They then accept a little invitation that is presented to them upon logging in and can then use the apps as they would any other desktop application—except those applications are running in the Azure datacenter and the visuals are being shipped over the remote desktop protocol.

Setting Up Microsoft Azure RemoteApp in Hybrid Deployment Mode

Recall you will want to use hybrid mode to deploy your own operating system images, custom applications and home grown programs, and in general integrate more with your own on premises network. Since these images and the applications that run on them are domain joined, they have full security trimmed access to your regular network. This is accomplished by creating virtual networks that act as site to site VPNs. (We have covered how to set up site to site VPNs with Windows Azure in a previous article on Computerworld.com, so I will not repeat those instructions here.)

On top of these VPNs, Microsoft has created “VNets,” or virtual networks, that contain some performance improvements to make application usage seem more consistent over the VPN link. You give information to the VNet configuration manager like the cloud address space, your on premises address space, the IP address of your VPN router, and the DNS server for those local on premises addresses, and Azure handles the rest of the configuration.

Once this is set up, and you have created instances of the service using the instructions in the previous section, Azure RemoteApp will let you join the virtual machines running Remote Desktop Services Session Host (the old Terminal Services role, if you are familiar with Windows) to your local domain, so not only do you get Group Policy object replication and system management agents touching your Azure hosted applications, but you can also control software installation from a single console. All you need to do to enable this functionality is create a separate organizational unit (OU) within Active Directory that will contain only Azure RemoteApp VMs, and create a service account that has privileges to join machines to your domain. Keep in mind, however, that since these machines are under your control, you maintain the responsibility for patching them and otherwise maintaining them—unlike the cloud only deployment, where the applications and attendant maintenance are taken care of by Microsoft as part of the service.
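
As a rough illustration, the OU and the service account can be created ahead of time with the ActiveDirectory PowerShell module; the names below are placeholders for whatever conventions your own domain uses, and delegating the actual join permission to the account is a separate step not shown here.

# Run on a domain-joined management machine with the RSAT ActiveDirectory module
Import-Module ActiveDirectory

# A dedicated OU that will contain only the Azure RemoteApp session host VMs
New-ADOrganizationalUnit -Name "AzureRemoteApp" -Path "DC=contoso,DC=com"

# A service account that will be used to join those VMs to the domain
New-ADUser -Name "svc-remoteapp" -SamAccountName "svc-remoteapp" -AccountPassword (Read-Host -AsSecureString "Password") -Enabled $true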

For custom images, you simply create new virtual machines on premises that are based on Windows Server 2012 R2 with the Remote Desktop Session Host role installed. You can then set them up and install applications as you wish. The Azure management portal has a UI where you can upload, resume uploading, and delete custom images and manage those images’ links to the various Azure RemoteApp instances that you have set up.
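
For what it is worth, the one role the template image must carry can be added with a single line of PowerShell on the reference VM; this is a sketch that assumes you are building the image interactively on Windows Server 2012 R2.

# Add the Remote Desktop Session Host role to the reference image VM (a reboot will be required)
Add-WindowsFeature RDS-RD-Server -IncludeManagementTools

# Confirm the role landed before you capture and upload the image
Get-WindowsFeature RDS-RD-Server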

Pricing and Availability

Currently, the Azure RemoteApp service is free to use because it is in preview—essentially a no-cost public beta. By default, the service will be licensed per instance, where you get 10 users per service instance and to scale up, you simply add more service instances. You can get access to Azure RemoteApp in the US East and West, Europe North and West, and Asia Pacific East and Southeast service regions, and the preview is available today.

The Last Word

One thing Windows and Office have lacked for a long time is a way to be productive remotely on non-Windows machines, and particularly on mobile devices. Azure RemoteApp looks to solve that problem, and the preview release has promising potential, albeit with some significant limitations in terms of image flexibility at the time of this writing. The eventual success and adoption of the service will depend largely on how Microsoft prices the service and whether the company will consider allowing license portability between enterprise service agreement entitlements and the Azure RemoteApp service so customers are not essentially double charged for access to software they may have already paid for. But from a purely technical standpoint, there is not much not to like about this service, and you should make plans to evaluate it, as I believe it will play a big role in accessing desktop applications from anywhere in the future.

This article originally appeared, with edits, at Computerworld.

Is Hardware Back?

If you haven’t been living under a rock, you have heard pundits proclaiming the death of the PC. As consumers move to tablets and stop buying laptops and desktops, and as companies pinch their IT budgets even harder than they have in the past, it is easy to paint a doomsday scenario for hardware. Even the hardware companies play into this myth—look no further than AMD to see weakness portrayed. It seems that for many, hardware has lost its sexiness.

But I think hardware is back! Clearly the biggest two catalysts for hardware in this era are mobile devices and the explosion of cloud computing as a way to host applications, infrastructure, and services. Let’s unpack the promise of hardware from these two perspectives a little more.

The Cloud Perspective

It takes massive amounts of hardware to run cloud services at scale. That is what astute folks call an F-A-C-T fact. Interestingly, though, saving on hardware costs is what began the whole cloud wave.

Cloud services began as a natural offshoot of virtualization, which ironically was a way to squeeze more from your existing hardware and buy less of it in the future. The promise of virtualization was that, once we could abstract away the physical complexity and irregularity of the bare metal that makes up servers, we could deploy virtual machines across physical machines and generally orchestrate datacenter operations without really having to care that Dell Server A sits in rack P connected to switch W. The logical next step, once those orchestration capabilities exist, is to scale up your capacity and offer services to paying customers, either within your own organization through a chargeback scheme or to the general public as a public cloud service. Enter Amazon Web Services, Microsoft Azure, and recently, Google.

How important is hardware to the cloud? It is essential. It is the very lifeblood of these services, and to continue innovating and offering high quality, reliable services, you have to buy enormous amounts of hardware—to the point where the market has become threatening to small and boutique cloud services firms. Just this week, Rackspace, an independent cloud computing company with a market capitalization of $4.4 billion, hired the investment banking firm Morgan Stanley to help it seek out “strategic alternatives,” which essentially means it is looking for a partner. Why? Because you have to have deep, deep pockets—a huge war chest—to compete at the scale at which the likes of Microsoft, Amazon, and Google are competing in the cloud.

Here are some interesting statistics that put this resource issue into perspective:

  • Various Microsoft sources have confirmed to me on the condition of anonymity that the scale of Microsoft Azure, the company’s cloud computing service, exceeds one million physical servers, and the company is on the path to grow the service’s capacity by 200% within a timeframe of a few years. That is a lot of hardware.
  • Many of these same sources cite a mind boggling statistic: that over 17% of the server hardware produced on the planet is ordered by and put into production by Microsoft for their cloud services, including Azure and Office 365. That, too, is a lot of hardware.
  • But others may well have even more hardware than that. Gartner estimates that Amazon Web Services (AWS) has five times more cloud infrastructure, including servers, networking gear, and other hardware, than the next 14 competitors have combined. Indeed, Amazon adds as much new cloud infrastructure every single day as they used to run their entire Amazon business (including the book selling) when it was a $7 billion business.
  • Netflix, which runs its service on the Amazon Web Services cloud—well, at least the streaming video part of its service—accounts for up to 33% of all Internet traffic (that’s all bits on the wire anywhere in the public Internet) during its peak times. That’s one third of all Internet activity on the entire planet.

A reasonably sized datacenter takes about a billion dollars to build, and only a handful of companies generate the kind of free cash flow needed to develop multiple datacenters without realizing a complete return on their investment. It is a very capital intensive business.

So is hardware back, from a cloud perspective? Absolutely. The savings and the efficiency that we all realized from deploying virtualization in our own server closets and datacenters may have caused a temporary “softness” in the market for server hardware. We got the ability to wring out more efficiency from the investments we had already made. And it’s true that as the public cloud continues to mature, we will all most likely be buying less hardware for our own resource pools, but the scale at which Microsoft, Amazon, and Google, and even the more minor players in the industry, are deploying new servers and purchasing hardware should more than make up for that deficiency.

From the Mobile Perspective

Many experts have prognosticated the demise of the corporate desktop or laptop. And it’s clear that, at least in the corporate world, these hardware refresh cycles are being lengthened to three to five years, primarily because the pace of hardware capability innovation has outpaced the innovation of software such that we simply don’t need more powerful processors and faster memory to run, say, Microsoft Office, or a line of business application, acceptably well. In the consumer world, consumers are purchasing bottom-priced PCs or Macs but are also investing rather heavily into smartphones and tablets that run Android and Apple’s iOS operating system. Put simply, the stalling out of traditional desktops and laptops does not necessarily mean that hardware is dead.

Industry giant Microsoft has converted itself rather successfully, even in these early stages, into a company centered on a strategy that involves devices—both hardware it makes and what it depends on third party hardware ecosystem partners to manufacture—that link in with all-encompassing services that it offers to both enterprises and consumers. The Windows 8 and Windows 8.1 ecosystem is designed around tablets that function as content consumption devices as well as traditional desktop productivity machines. Apple launched the iPad juggernaut which really propelled the entire industry into its current “post-PC” state. But smartphones and tablets are made of hardware, of course, and increasingly powerful hardware at that.

Now we also hear about the Internet of Things, or IoT, where everything from the tires on your car to your refrigerator has Internet connected hardware sensors that will give data to a variety of services. That is hardware that connects to the cloud, and we have already identified how hardware intensive that cloud hosting is.

So is hardware back, from a mobile perspective? I think so. I am actually ambivalent as to whether hardware was ever really on the “outs” in this space. Smartphones will only get more powerful and cheaper, particularly now that we have Android—which has always been available at no cost to hardware original equipment manufacturers (OEMs)—and now Windows Phone available to OEMs without licensing cost as well. It will be possible to get an Internet-connected smartphone for $10-20 within a couple of years, I predict, and there is plenty of potential for penetration still remaining for tablets and smartphones alike in emerging markets.

This article originally appeared, with edits, at CIO.com.

WIMBoot in Windows 8.1 Update: A First Look

WIMBoot is a new feature, currently only available on the desktop side of Windows 8.1 Update, that lets computers boot directly off a Windows image file, or WIM file.

Why would you want this?

WIM files are tremendously compressed. They have been used in the last few releases of Windows as the method by which the operating system was distributed—on the DVDs and ISO files, you would see boot.wim and install.wim, two files that contained essentially the entire Windows system, highly compressed. What the installer was doing during the first phase was simply extracting the full set of files from these compressed WIM files, and then moving on into the next phase of registering DLLs and doing the housekeeping necessary to get from bare metal to a working Windows machine. Interestingly, the compressed WIM file stays on the Windows machine to allow for system file recovery, so you’re paying the disk space price twice: once to store the compressed version of all of the files, and a second time to store the fully expanded version of the files for normal daily use.
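
If you are curious, DISM will happily report what a given WIM contains and how the images inside it are laid out; a quick check against your installation media might look like the following, where the drive letter is just an example.

Dism /Get-ImageInfo /ImageFile:D:\sources\install.wim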

WIMBoot turns this on its head, at least on PCs with solid state drives and UEFI firmware—in other words, not on traditional or legacy machines that boot from rotating-platter disk drives or rely on BIOS chips. It lays down a copy of the compressed WIM file on a special imaging partition, and then during installation, Windows just creates pointers on the operating system partition to the compressed files in the partition where the WIM file is stored. The imaging partition is also used as the recovery partition, so it does double duty.

By enabling a scenario where you can boot into Windows directly off of a WIM file, you gain the ability to:

  • Have PCs that can use push-button restore while maintaining much more free disk space across the entire device.
  • Install Windows on devices with very small, fast drives like tablets, hybrid PCs, and repurposed machines to use as thin clients.
  • Install Windows much more quickly by eliminating a time consuming phase of the setup process.

How Do You Create WIMBoot Files?

You can use the Deployment Image Servicing and Management (DISM) tool to create WIMBoot images. Here is a procedure you can use to get started.

  1. Install Windows 8.1 Update on a PC.
  2. If you have Windows 8.1 images and Windows RE (Recovery Environment) images, slipstream the Windows 8.1 Update into those source files, too, so that you are working from the latest version of the operating system.
  3. Install the Windows 8.1 Update version of the Windows Assessment and Deployment Kit (ADK).
  4. Create a boot drive in Windows PE by following the instructions at http://technet.microsoft.com/en-us/library/dn613859.aspx.
  5. Copy the existing install.wim file into a new temporary file that you will use to create the WIMBoot image.
    copy c:\images\install.wim c:\images\install_wimboot.wim
  6. Mount the image.
    md c:\mnt\win
    dism /Mount-Image /ImageFile:"C:\images\install_wimboot.wim" /Index:1 /MountDir:c:\mnt\win
  7. Grab the hidden Windows RE image and unhide it so you can separate it out to save even more space.
    attrib -s -h c:\mnt\win\windows\system32\recovery\winre.wim
    move c:\mnt\win\windows\system32\recovery\winre.wim c:\images\winre.wim
  8. Unmount the image and save your changes with the commit command.
    dism /unmount-image /mountdir:c:\mnt\win /commit
  9. Boot into Windows PE (the boot drive you created in step four), apply the Windows image you just created, and then reboot. When Windows starts the out of box experience upon that next boot, hit Control, Shift, and F3 at the same time to get into audit mode.
  10. Clean up the image to save even more disk space.
    Dism /cleanup-image /online /startcomponentcleanup
  11. Sysprep the PC as you normally would to scrub SIDs and make the PC ready for imaging.
    c:\windows\system32\sysprep\sysprep /generalize /shutdown /oobe
  12. When the PC reboots, boot into Windows PE again and then recapture the image, which will include the applied updates and remove superseded files to save space. Note the new WIMBoot switch, which instructs DISM to save the image in a format compatible with WIMBoot.
    DISM /Capture-Image /WIMBoot /ImageFile:"X:\Images\install_wimboot.wim" /CaptureDir:C: /Name:"WIMBoot Enterprise_x64 8.1 Update"

How do I Deploy WIMBoot into Production?

To use the WIMBoot image you created above, you need to boot your target PCs into Windows PE and then create a partition layout that includes a system partition, a Windows partition, and the image partition. Here is a sample diskpart script to create the required layout.

select disk 0
clean
convert gpt
create partition efi size=100
format quick fs=fat32 label="System"
create partition msr size=128
create partition primary
shrink minimum=10000
format quick fs=ntfs label="Windows"
assign letter=c
create partition primary
format quick fs=ntfs label="Images"
assign letter=m
set id="de94bba4-06d1-4d40-a16a-bfd50179d6ac"
gpt attributes=0x8000000000000001
list volume
exit

You can use the guidance at http://technet.microsoft.com/en-us/library/dn605112.aspx for very detailed sequences of deploying WIMBoot images to different devices based on whether or not you know how big the images will be in advance.
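
As a rough sketch of the apply step itself, assuming the drive letters from the diskpart script above (C: for Windows, M: for the images partition) and that your captured WIM is reachable from Windows PE at a placeholder location N:, the deployment boils down to copying the WIM onto the images partition, applying it with the /WIMBoot switch so the files stay compressed, and then writing the boot files:

rem Copy the WIMBoot-formatted image to the images partition
md M:\Images
copy N:\Images\install_wimboot.wim M:\Images\install.wim

rem Apply the image to the Windows partition; files remain pointers into the WIM
Dism /Apply-Image /ImageFile:M:\Images\install.wim /Index:1 /ApplyDir:C:\ /WIMBoot

rem Create the boot entries on the system partition
C:\Windows\System32\bcdboot C:\Windows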

This article originally appeared, with edits, at SearchEnterpriseDesktop.com.

Windows Azure Pack for Windows Server: A Primer

Windows Azure Pack brings the interface of Windows Azure—well, Microsoft Azure now—to your own servers and lets you create an infrastructure as a service offering for your internal customers, for external parties, or both. The best parts about it? The control of all of it rests with you, but the burden of requesting and provisioning can be delegated and automated. Let’s take a look.

How Windows Azure Pack Works

The real appeal of the Windows Azure Pack for Windows Server is that it gets you well on your way to creating your own “private cloud.” Using the very simple interface that the Azure infrastructure as a service (IaaS) offering provides, you can create offerings for your own organization’s departments and internal customers that puts the power of self service IT at their fingertips.

Note: the Windows Azure Pack for Windows Server requires Windows Server 2012 or Windows Server 2012 R2, with no prior release being supported, to host the portal and services. The individual offerings and virtual machines that you expose through your portal, however, can run any version of Windows that is supported under the Hyper-V hypervisor.

For example, you could have a group of servers that makes up your private cloud. As the administrator, you can define offerings—for example, a small, medium, and heavy load service. Each service might consist of a certain virtual machine configuration, spread across different physical hosts for fault tolerance, a certain network setup, and a specific guarantee of service availability. These can all be offered through a portal, so when one department comes to you and needs a web site set up, you can point them to the portal and within a few clicks, they can get their service set up and running, with all the complexity of ordering and configuring hardware completely abstracted away.

You can also create website clouds, which even remove the provisioning of virtual machines from the equation—these shared web sites can host ASP.NET, PHP, and Node.js web applications specifically, and the feature includes a gallery of common and popular open source web applications to make creating new sites simple. It also integrates with source control systems like Git. These web site clouds exist just at the site level, with their actual hosting being done across a set of virtual machines you configure once and then essentially forget about. It is quite handy, especially for businesses that create multiple websites all the time and need a way to get them spun up and ready quickly without a lot of involvement from the IT department.

Commercial hosters will also find the Windows Azure Pack to be a useful tool for offering infrastructure services out to the larger Internet. The portal works in the same way, but there is also a mechanism for chargeback of hosting costs and also the ability to run promotions and coupons to offer limited time upgrades and special services along with your normal packages.

Getting Started with the Windows Azure Pack for Windows Server

Since the Windows Azure Pack is designed to bring the convenience of the super large Windows Azure service down to the quantity of servers and other iron that we mere mortals have in our own datacenters and server closets, it is safe to assume that there are some reasonably heavy hardware requirements. This is both for fault tolerance purposes, as the WAP will attempt to spread workloads around all of the available machines in a private cloud, and for capacity’s sake: the idea here is to host many workloads, not just one or two, so that you can fully reap the benefits of delegation and automation.

Luckily Microsoft has made available the “Windows Azure Pack: Portal and API Express” package which puts all of the components of the solution on one machine, which is great for testing and learning how the WAP works. You can download the single machine stack installer at http://go.microsoft.com/?linkid=9832690.

For the full monty, you will essentially need to dedicate eight machines to your private cloud, if not more. However, these eight machines can either be physical or virtual. Make sure each machine has two processors, 4 GB of RAM at least, and 40 GB of available hard disk space. As noted, each machine needs to be running Windows Server 2012 or Windows Server 2012 R2. Then, you can download the Web Platform Installer, or WPI, which is a very handy toolkit for deploying greenfield installations of web applications that are based on IIS.
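
Before running the installer, it can be worth a quick sanity check of each candidate machine against those minimums; here is a small PowerShell sketch that reports the operating system, logical processor count, memory, and free space on the system drive.

# Report OS, logical processors, RAM, and free space on C: for a quick prerequisite check
$os = Get-CimInstance Win32_OperatingSystem
$cs = Get-CimInstance Win32_ComputerSystem
"{0} | {1} logical processors | {2:N1} GB RAM" -f $os.Caption, $cs.NumberOfLogicalProcessors, ($cs.TotalPhysicalMemory / 1GB)
Get-Volume -DriveLetter C | Select-Object DriveLetter, @{n="FreeGB";e={[math]::Round($_.SizeRemaining / 1GB, 1)}}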

From there, it is a matter of playing around in the administrator portal and configuring your services and offerings. If you are familiar with Windows Azure, you will feel right at home—it is literally some of the same code and structures used in the mainstream Azure public cloud service. Try it out today.

Four Strategies for Dealing with the Effects of Cyberattacks

Cyberattacks are a threat to all of us. White House officials confirmed in March 2014 that federal agents told more than 3,000 US companies that their IT deployments had been hacked, according to the Washington Post. Bloomberg reports that the SEC is looking into the constant threats of cyberattacks against stock exchanges, brokerages, and other Wall Street firms.

These attacks are going to happen. Here are four strategies to put in place to help you deal with cyberattacks and their threats.

Have a Disclosure Plan

Many industries are regulated by state, local, and federal governments and have specific rules about what must be disclosed to consumers during a cyberattack. This is especially true of the health care and financial verticals, where sensitive customer information is involved. Yet sometimes in the wake of an attack, or even while an attack is still happening, the evolving situation can be murky enough that disclosure rules get broken or, at the very least, the disclosure process is delayed or confused. For that reason, it is important to plan ahead and develop an action framework for when events that trigger a disclosure response occur. Here are some considerations:

  1. Understand the applicable regulatory framework. For publicly traded companies, the SEC generally has disclosure guidelines and timeframes. For financial institutions, the Office of the Comptroller of the Currency (OCC) and the Federal Deposit Insurance Corporation (FDIC) handle this on the federal side. State regulations may vary.
  2. Engage your communications team. These employees are professionals who have developed relationships with media and other external stakeholders and can help you control the messaging and disclosures that you are required to make, as well as advise on the timing and breadth of those statements.
  3. Coordinate with the required departments. CIOs must coordinate with the individual IT teams responsible for the area under attack, outside contractors and vendors that are helping with the mitigation and recovery, and government agencies if applicable to keep the disclosure plan on track. Have key personnel identified ahead of time and make sure roles and next actions to carry out disclosure plans are known.

Understand What Targets of Yours are Valuable

The real question about cyberattacks is not when they will occur. Attackers are inventing new ways to do everything, connectivity to the Internet is becoming more pervasive, and it is easier and cheaper than ever to acquire a botnet to do your bidding if you are a malfeasant. Cyberattacks will happen to you, whether that is tonight, next week, next month, or next year.

The real question about cyberattacks is where they will occur. Traditional attacks have really gone after most of the low hanging fruit, like payment information—witness the recent Target breach—or just general havoc-wreaking, like the Syrian Electronic Army enjoys carrying out with distributed denial of service (DDoS) attacks. Many attacks have been motivated by political or moral issues, or have been relatively simple attempts to harvest payment information to carry out low level fraud.

But future attacks could have more significant ramifications, including the attempt to retrieve more dangerous identity information like social security numbers. In a recent panel discussion at the Kaspersky Lab enterprise IT summit, Steve Adegbite, senior vice president of enterprise information security oversight and strategy at Wells Fargo, hinted that attackers may well be attempting to penetrate where the data is, implying that new cloud technologies and data warehouses as well as weaknesses in emerging technologies embraced by larger companies could well be future targets for attackers.

Where they will occur also pertains to the location of your enterprise. Threats in the United States will have a different profile than threats in Europe. Location matters in this equation. Take some time with your team to assess where cyberattacks are likely to be directed across your enterprise. Understand what may now be at an increased risk of attack, especially relative to the past.

Lobby for Budget Dollars to Defend and Mitigate Against Cyberattacks

IT budgets are no goldmine. CIOs have been used to having to do more with less for a long time now, and if we have sung the praises to our management group about how we can save the company money by, for example, moving to the cloud, or consolidating and virtualizing many servers, we might find ourselves with reduced budgets and reduced headcounts—right as the storm of cyberattacks threatens us. This is not a preferred position in which we find ourselves.

Unfortunately, cyberattacks are not only damaging but also expensive, both in terms of the cost of services being down and in terms of the expense directly attributable to mitigating and defending against them. Vendors with experience in reacting in real time to cyberattacks and mitigating their effects are tremendously expensive, both at the time of the event and for hosting data during periods of inactivity in order to be prepared if and when an attack occurs. Purchasing the hardware and software necessary to properly harden your systems is expensive. This is an important line item in your budget for which you need to account. Consider it insurance on which you will almost certainly collect.

Also, look for products and technologies rated at EAL 6+, or High Robustness, which is a standard the government uses to protect intelligence information and other high value targets.

The bottom line: do not cannibalize your budget for proactive IT improvements and regular maintenance because you have failed to plan for a completely inevitable cyberattack.

In the Thick of an Attack, Ask for Help

When you’re experiencing an attack, you need good information you can rely on. Others have that information. In particular:

  • Join information sharing consortiums that can help you monitor both the overall threat level for cyberattacks and the different patterns that attack victims have noticed. For example, the National Retail Federation announced a new platform to share information and patterns that aim to arrest the data breaches the industry has recently suffered. Financial services companies have also set up an informational network, and other regulated industries often have a department of the governmental regulator that can serve as a contact point to help prevent this kind of illegal activity.
  • Develop a relationship with vendors with expertise on cyberattacks. It may be tempting to try to rely only on in house resources and talent, both as a way to control costs and to protect valuable information about your infrastructure. But many vendors and consulting companies have worked through multiple cyberattacks in the past and have tremendous experience under their belts. Hiring one of these companies may well stop a cyberattack before it does serious harm.
  • From the security technology you have in place, understand what readings are important and what may well be just noise. In an effort to impress and appear complete, many software vendors monitor every little thing under the sun and spin up a multitude of readings that can mask or inadvertently dilute the notifications of serious problems. Use your technology wisely and understand what notifications refer to high value targets so you can act earlier in the attack lifecycle.

Review: Goverlan Remote Administration Suite

I recently had the opportunity to spend some time evaluating systems management solutions that target a smaller price point and a smaller business set than Microsoft’s System Center product addresses, and in the process discovered a very nice product that I think can come in handy for many desktop administrators: the Goverlan Remote Administration Suite. In this review, I’ll go into more about Goverlan’s features, pricing, and drawbacks, and my recommendation.

Feature Overview

Goverlan Remote Administration Suite is delivered as a Windows application, which in my tests performed quickly and consistently, although that naturally rules out using non-Windows computers to perform administrative tasks and manage your network—something a web-based tool would be able to do, at least in a limited fashion. Installing and setting up Goverlan was reasonably fast, especially when installing client agents. Goverlan’s agent is only a few megabytes, has negligible memory requirements, and installs within seconds. The agent can be pushed to clients over the network without requiring a separate installation sequence, and is light enough to have no impact on system performance. The installation and uninstallation of agents from systems is very snappy—in some cases, I was able to remove and then redeploy agents in under five seconds.

I found Goverlan RAS to be quite full-featured when it comes to management, offering a central console from which to administer servers and clients. Goverlan’s interface has a distinct Office 2007/2010 look, with a task-based ribbon and logical organization, so it is not intimidating to get started with. One feature I did find highly useful is the Scope Actions tool, which allows you to easily define a scope of targets—they could be users, computers or groups—using powerful filtering and selection criteria. Then, you define the administrative actions to perform on that scope, which can essentially be anything Windows permits you to do, including Windows Management Instrumentation (WMI) queries, and then run those actions either on an ad-hoc or a scheduled or a recurring basis. I found the availability of options and the granularity of setting up a proper scope very robust with Goverlan—this is a feature that would pay off more and more as your network grows.

Goverlan also does a great job of helping administrators create custom WMI queries through its WMIX utility. It includes a WQL wizard that makes quick work of customizing WMI queries, and allows you to access even obscure classes and properties with a query that you can then export to other Windows administrative tools, like the Group Policy Management Console and Group Policy Object Editor. This can be a real boon to speeding up the creation of policies.
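
To give a flavor of what such a query looks like, here is a hypothetical WQL filter (say, one targeting 64-bit Windows 8.1 machines) tested from PowerShell before being exported to a Group Policy WMI filter:

# Test a WQL query locally before handing it to a GPO WMI filter
Get-CimInstance -Query 'SELECT Version, OSArchitecture FROM Win32_OperatingSystem WHERE Version LIKE "6.3%" AND OSArchitecture = "64-bit"'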

Remote control and desktop sharing is another area in which Goverlan excels, allowing administrators to assist and troubleshoot client machines without being physically present at that computer. Goverlan uses an agent that needs access to only a single TCP port for communications, and works with Windows, Mac OS X, and Linux computers. It encrypts the remote control traffic for added protection. Goverlan also offers administrators the option of masking a user’s screen while you are performing functions remotely, so the user cannot see potentially sensitive information while you are administering the client machine from a distance, and it does a nice job of organizing remote control targets in its interface. Goverlan also works well with virtual desktop infrastructure based deployments, allowing an administrator to shadow multiple client sessions on a single virtual host that uses Microsoft’s Terminal Services and Remote Desktop Services as well as Citrix’s Xen line of VDI products, just as simply as a console session on a physical computer. Finally, Goverlan provides screen capturing options, including video recording, alongside these remote session features.

Pricing and Support

Goverlan’s Remote Administration Suite is priced at $699 per administrator with no client node fee. Goverlan provides 90 days of maintenance support with every license, and you can purchase one or two year maintenance package extensions.

The Last Word

Goverlan’s Remote Administration Suite v7 lives up to its claims of automating and centralizing Windows administration and support in the enterprise. I believe Goverlan presents a robust, rich set of options and a strong overall value for the money, with a relatively low price point to begin with, a perpetual right to use the entire software suite by default, a unified set of capabilities in same product as opposed to edition thresholds to navigate, and an ability to manage an unlimited number of client machine targets per administrator.

This article was published with edits on SearchEnterpriseDesktop.com.

Windows Server 2012 R2 Work Folders: A Simple Guide, Part 2

Here, I continue with the process of setting up Work Folders. In part 1, we were working on the server, and that’s where we will pick up here in part 2.

Configuring SSL Connections

There is still more work to be done on the server; at this point, we need to configure SSL encryption. For lab deployments, you can use a self signed certificate if you are opposed to spending money; otherwise, generate a certificate request against your existing public key infrastructure (PKI) deployment or have one signed by a trusted third party certification authority. The feature works with either.

What names do you need to use? Work Folders client devices are hard coded to look for hosts answering at the following two subdomains:

SyncSvr.domain.tld
WorkFolders.domain.tld

You will of course need certificates covering those namespaces in order for the SSL connections to be authentic. If you are using self signed certificates, make sure to import those certificates into the local certificate stores on your client devices, so that Windows knows to trust those certificates when they are presented within the transaction. (Your normal PKI will probably not help here, since these client devices are generally going to be personal devices that are not domain members and do not exchange certificates as part of the domain join process.)
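
For a lab, a self signed certificate covering both names can be generated with one line of PowerShell on Windows Server 2012 R2; this is a sketch for testing only (substitute your own domain), not a recommendation for production use.

# Create a self signed certificate for both Work Folders hostnames (lab use only)
New-SelfSignedCertificate -DnsName "workfolders.domain.tld","syncsvr.domain.tld" -CertStoreLocation Cert:\LocalMachine\My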

Once you get a certificate or generate a self signed certificate, find the thumbprint for that certificate. Again, PowerShell rides to the rescue:

Get-ChildItem -Path Cert:\LocalMachine\My

Next, open up a regular command line session—not another instance of the PowerShell interface. Then, use netsh to bind that particular certificate to all hostnames on the server at the default IP SSL port of 443. The long appID attribute number is static, so feel free to cut and paste here if it suits you:

netsh http add sslcert ipport=0.0.0.0:443 certhash=<thumbprint_you_got_from_the_PowerShell_command_above> appid={CE66697B-3AA0-49D1-BDBD-A25C8359FD5D} certstorename=MY
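
You can then confirm the binding took hold with the corresponding show command:

netsh http show sslcert ipport=0.0.0.0:443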

Configure DNS Records

We’re almost to the finish line on the server side. We need to set up DNS A records for the following domains:

SyncSvr.domain.tld
WorkFolders.domain.tld

Use whatever DNS solution you have implemented to create these records, and make sure these records exist in your public namespace as well as the private namespace if you are doing what is commonly known as “split DNS.” The A records for each should resolve to the externally facing IP address of your sync server, assuming you want to enable syncing over the big bad wild Internet.
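
If your zones happen to live on Windows Server DNS, a quick sketch of creating the two records with the DnsServer module might look like this, where the zone name and the IP address are placeholders for your own values:

# Create the two A records Work Folders clients look for (placeholder zone and address)
Add-DnsServerResourceRecordA -ZoneName "domain.tld" -Name "workfolders" -IPv4Address "203.0.113.20"
Add-DnsServerResourceRecordA -ZoneName "domain.tld" -Name "syncsvr" -IPv4Address "203.0.113.20"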

That’s the end of the server process. Let’s look at what a client experience is like using Work Folders.

Work Folders on the Client Side

The client process is pretty simple, to be honest. The heavy lifting is done on the server side, and since the clients are hard coded to look for sync servers at the domain names above, as we discussed, there are not a lot of settings to configure on the client side.

To enroll a client in Work Folders:

  1. Open Control Panel, and click on System and Security.
  2. Click on Work Folders, and then click on Set up work folders.
  3. Enter the user’s e-mail address. The wizard will look for WorkFolders.domain.tld and then attempt to create a partnership. You will be prompted for credentials on non domain joined machines.
  4. The wizard will ask where on the local machine you would like to store synced content. It will also place a little Work Folders icon on the left side of an Explorer window.

That’s it. By default, work folders will attempt to sync every 5 minutes.

Where You Will Run Into Trouble with Work Folders

As I mentioned in Part 1, Work Folders joins an already crowded field of sync solutions, and for a variety of reasons, I suspect your use of it will be limited in the near term. Here is why.

  • Right now, you can only use Work Folders in conjunction with Windows 8.1 devices and Windows RT 8.1 devices. Clients for Windows 7 and Windows 8 devices (sorry, nothing for Windows XP!) are promised in the coming months. On the server side, Windows Server 2012 R2 is required to run the sync share, and there will not be any backporting to earlier server versions like Windows Server 2008, 2008 R2, or 2012.
  • Work Folders is pretty much an all or nothing affair. Except for being able to choose a specific subfolder under the shared sync folder, Work Folders syncs everything—you cannot select individual files. Devices with limited local storage, like tablets and phablets, could find themselves without free space very quickly with this solution unless it is carefully managed by the end user.

With these caveats, Work Folders is still an in-the-box solution for Windows 8.1 clients to help bridge the personal device-corporate device gap and enable users to get access to what they need from wherever they are and whatever device they are using. You should evaluate it sooner rather than later and find where it makes sense for your organization.

This article was published, with edits, on SearchWindowsServer.com.

Windows Server 2012 R2 Work Folders: A Simple Guide, Part 1

There are as many file sync technologies as there are days of the week, and yet Microsoft has introduced yet another sync solution to its portfolio alongside OneDrive, OneDrive for Business (nee SkyDrive Pro), and the Offline Files and Folders feature that has been baked into the operating system since Windows XP. Meet Work Folders, a sync solution that is really designed to bridge the gap between personally owned devices like tablets (and in the case of Work Folders, Windows RT and Windows 8.1 tablets only for now) and corporate owned data. This is a space no other sync technology from Microsoft is currently working in.

In this two-part series, I’ll show you how to set up the feature and get it working, and discuss some obstacles you will run into during your evaluation.

Installing the Work Folders Feature

First off, you need to install the Work Folders role service, which is an integrated part of Windows Server 2012 R2. Through PowerShell, a one-line command gets it done for you.

Add-WindowsFeature FS-SyncShareService

(What’s not to love about PowerShell?)

When the command finishes running, you’ll see output at the command line like this:

Success Restart Needed Exit Code      Feature Result
------- -------------- ---------      --------------
True    No             Success        {Work Folders, IIS Hostable Web Core}
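If you want to double-check that the role service actually landed before moving on, one more line will confirm it:

Get-WindowsFeature -Name FS-SyncShareService

The Install State column should read Installed.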

Set up the Sync Share

The sync share is key to the Work Folders feature, the central bit of how it all works together. The sync share is essentially how Windows Server 2012 R2 manages what files to sync for which users. You set up a sync share by establishing a folder locally accessible to a Windows Server 2012 R2 server and then pointing a wizard to that location. Within that shared folder, each user will have a subfolder that will contain the files that the Work Folders feature will host and sync.

  1. Open Server Manager.
  2. Navigate to File and Storage Services in the left section and Work Folders in the middle section.
  3. From the Tasks drop-down menu, select New Sync Share Wizard.
  4. On the first screen, you can either choose an existing file share, which may be appropriate if you redirect user profiles and documents folders to a central file server, or you can enter a local path to create the share. Click Next.
  5. The “Specify the structure for user folders” screen appears. Here you can choose how folders will be built under your sync share. You can either use:
    1. the “user alias” method, which uses only the users’ login names to create subfolders for each person under the sync share to house their folders, or
    2. the “user alias@domain” structure if you are a larger organization that may have multiple domains and therefore an increased chance of collision between alias names.

The “Sync only the following subfolder” feature can also be enabled, which lets you choose one individual subfolder to sync. This lets you avoid syncing music, pictures, and videos for every user. You may wish to sync only the Documents folder, or you may wish to create a new subfolder called “Work Folders” and then train your users to save documents they want synced into that folder. You can choose a subfolder name that does not exist, and the wizard will create it for each user that has the Work Folders feature enabled (which happens later in the wizard). Click Next.

  1. Enter a name for your sync share, and click Next.
  2. Choose which groups have access to this sync share. The best practice here is to put all users that should have access to this particular sync share into a specific security group, and then add that group here. If you want to let an administrator have access to documents within the work folders, uncheck the “Disable inherited permissions and grant users exclusive access to their files” box. Finally, click Next.
  3. The Specify Device Policies screen appears. Here you can choose whether to instruct devices that sync to this sync share to encrypt the files within the work folders space, and whether to require a PIN on these devices—much like Exchange ActiveSync if you are familiar. Click Next.
  4. A summary screen appears. Verify all of your selections, and then click Finish.

Windows Server 2012 R2 will apply your settings and create the sync share.

PowerShell-ing Work Folders

Because I really like PowerShell, I’m fond of learning how to perform tasks with the language, even if I ultimately decide to use the GUI, so that I fully understand my options. In the case of Work Folders, one command does all the work.

Take the following example. Entering the command below creates a new sync share named SalesShareCA, located at C:\syncshare\SalesCA, for the California_Sales security group, and requires devices to honor both an encryption policy and an automatic lock and password policy.

New-SyncShare SalesShareCA -Path C:\SyncShare\SalesCA -User DOMAIN\California_Sales -RequireEncryption $true -RequirePasswordAutoLock $true
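To confirm the share took, or to see what is already defined on a server you have inherited, the companion cmdlet is just as terse. This assumes the SalesShareCA share from the example above:

# List every sync share on this server
Get-SyncShare

# Look at just the one we created
Get-SyncShare SalesShareCA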

Coming Up

In part 2, I’ll cover more server side setup and then show you how to connect a client and administer the solution, as well as analyze where Work Folders makes sense today and in the near term.

This article was published, with edits, on SearchWindowsServer.com.

So You’re New to Azure, Part 1

You may have heard a lot about Windows Azure, soon to become Microsoft Azure. But you might not know exactly how the Azure service can fit into your plans. Windows Azure has come a long way from its initial introduction in the fall of 2008 and really has something for most businesses and organizations. Windows Azure can effectively function as

  • A complete datacenter for microbusinesses and small businesses
  • A disaster recovery solution for organizations of all sizes
  • A scalable way to host web sites outside of your existing infrastructure
  • A way to run lab and test scenarios using virtual machines nearly identical to the ones you can create in your office

If you’re new to Azure, it can be difficult to know where to begin. The idea behind this new series of articles is to take you, as someone brand new to Windows Azure, through the options and show you exactly how you can get set up and running, and how you can enable some of the cooler scenarios that Azure makes possible.

What Kinds of Things Can You Set Up in Windows Azure?

Once you have established your Windows Azure account, you can set up:

  • Web sites, which are basically managed Internet Information Services (IIS) accounts that run specific web applications in either free mode, which costs nothing but has significant capacity limitations, or paid mode, which assigns specific resources at a cost
  • Virtual machines, which we’ll talk about in the next section
  • SQL databases, which can be used by web sites or other cloud services
  • Storage, including virtual hard disks of any size depending on what kind of budget you have
  • Media Services, like streaming audio and video with probably better availability metrics than you would be able to achieve on your own
  • Visual Studio Online, which is essentially hosted source code control for your development teams, based on the well-regarded Team Foundation Service

The console (shown in Figure 1) is easy to use and shows all of these options in the left menu area.

Figure 1: The Windows Azure Management Console


Setting up a Virtual Machine

One of the best ways to use Microsoft Azure is as a test bed. In just a few clicks, you can spin up any number of virtual machines, customize their networking, and use them just as if you unboxed physical hardware and plugged it into your local switch. It’s great for testing scenarios, getting to know new software, working on certification exercises, and more. Let’s look at how to get started creating virtual machines.

  1. On the manage.windowsazure.com control panel, click Virtual Machines.
  2. In the middle pane, click the Create a Virtual Machine link.
  3. The UI will bring up a gray box with four columns, as shown in Figure 2. You can choose Quick Create, which lets you enter all of the options needed to spin up a VM on one screen to get you started as quickly as possible, or the From Gallery option, which lets you browse a library of virtual machine images covering all manner of operating systems, including Windows Server 2008 R2, Windows Server 2012, Windows Server 2012 R2, various flavors of Linux, Oracle, and pre-built application suites like SharePoint, BizTalk, and Dynamics NAV and GP. For the purposes of our example, we’ll just set up a Windows Server 2012 R2 virtual machine using the Quick Create option.
  4. In the rightmost column, enter:
    1. The name for the virtual machine (between three and 15 characters, and no funky symbols as it is a DNS name)
    2. The base OS image to use—if you’re in doubt, for now just create a Windows Server 2012 R2 machine to check things out
    3. The size, which is basically how many processor cores your VM will use and how much RAM will be dedicated to the machine (more cores and more memory cost more money, of course)
    4. The administrator’s username and password
    5. Which datacenter this virtual machine should run in; pick the closest one to your current location.
    6. Click Create Virtual Machine.

Figure 2: Creating a New Windows Azure Virtual Machine

At that point, the console will redirect you to the Virtual Machines page, and you will see the provisioning process take place under the status column, as shown in Figure 3. After five minutes or so, your machine should have a green check mark and the text “Running” listed, which means it is ready to go. Congratulations, you have an instance of Windows Server running in the cloud. (If you would rather script this step, there is a PowerShell sketch after Figure 3.)

Figure 3: Provisioning a new virtual machine
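If clicking through the portal gets old, the Windows Azure PowerShell module (the classic service management cmdlets) can do the same Quick Create in a couple of lines. Treat this as a sketch rather than gospel: the cloud service name, VM name, and credentials below are placeholders, and it assumes you have already connected your subscription with Add-AzureAccount or an imported publish settings file.

# Find a current Windows Server 2012 R2 image (image names change, so query rather than hard-code)
$image = Get-AzureVMImage | Where-Object { $_.ImageFamily -eq "Windows Server 2012 R2 Datacenter" } |
    Sort-Object PublishedDate -Descending | Select-Object -First 1

# Quick-create a small VM; the service name, VM name, and credentials are placeholders
New-AzureQuickVM -Windows -ServiceName "mytestsvc01" -Name "testvm01" `
    -ImageName $image.ImageName -AdminUsername "azureadmin" -Password "P@ssw0rd!2014" `
    -Location "West US" -InstanceSize "Small"

Once the command reports success, the new machine shows up on the Virtual Machines page exactly as if you had used Quick Create in the browser.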


Virtual machines are accessible from the Internet using a *.cloudapp.net address, where the asterisk is the name of the virtual machine you entered during setup. You will connect over Remote Desktop to manage your new virtual machine, which we will cover in the next section.

How to use Remote Desktop to Administer Your Azure Virtual Machine

As you might suspect, to manage virtual machines in Windows Azure, you use the Remote Desktop Protocol. Advantages? The client is built directly into Windows, so there is no need to download any additional software. Disadvantages? Not many, unless you’re on a Linux or Mac machine, where the clients are less full-featured.

To log in via Remote Desktop, go over to the Virtual Machines page, click on the virtual machine you want, and then, on the bottom control panel, click Connect. Your browser will prompt you to open or save an RDP file with the name of your virtual machine, as shown in Figure 4. Save this to your desktop or another location. Then open the RDP file, which will launch the Remote Desktop Connection client. Accept the security warning (it appears simply because the certificate presented by your new virtual machine is not trusted by your current machine), enter the username and password you configured when creating the machine, and you are in. (A PowerShell shortcut for grabbing the RDP file follows Figure 4.)

Figure 4: Downloading the RDP file from the management console
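The same classic Azure PowerShell module can fetch, and even launch, that RDP file for you, which is handy once you are juggling more than a couple of machines. A minimal sketch, reusing the placeholder service and VM names from the earlier example:

# Download the connection file and launch the Remote Desktop client in one step
Get-AzureRemoteDesktopFile -ServiceName "mytestsvc01" -Name "testvm01" -Launch

# Or just save it somewhere for later
Get-AzureRemoteDesktopFile -ServiceName "mytestsvc01" -Name "testvm01" -LocalPath "C:\temp\testvm01.rdp"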


What’s Next?

In the next post, I’ll show you how to make your Azure virtual machines a natural extension of your own on-premises network.