WIMBoot in Windows 8.1 Update: A First Look

WIMBoot is a new feature, currently only available on the desktop side of Windows 8.1 Update, that lets computers boot directly off a Windows image file, or WIM file.

Why would you want this?

WIM files are tremendously compressed. They have been used in the last few releases of Windows as the method by which the operating system was distributed—on the DVDs and ISO files, you would see boot.wim and install.wim, two files that contained essentially the entire Windows system, highly compressed. What the installer was doing during the first phase was simply extracting the full set of files from these compressed WIM files, and then moving on into the next phase of registering DLLs and doing the housekeeping necessary to get from bare metal to a working Windows machine. Interestingly, the compressed WIM file stays on the Windows machine to allow for system file recovery, so you’re paying the disk space price twice: once to store the compressed version of all of the files, and a second time to store the fully expanded version of the files for normal daily use.

WIMBoot turns this on its head, at least on PCs with solid state drives and UEFI firmware—in other words, not on legacy machines that boot from rotating-platter hard drives and rely on a traditional BIOS. It lays down a copy of the compressed WIM file on a special imaging partition, and then during installation, Windows simply creates pointers on the operating system partition to the compressed files in the partition where the WIM file is stored. The imaging partition is also used as the recovery partition, so it does double duty.

By enabling a scenario where you can boot into Windows directly off of a WIM file, you gain the ability to:

  • Have PCs that can use push-button restore while maintaining much more free disk space across the entire device.
  • Install Windows on devices with very small, fast drives like tablets, hybrid PCs, and repurposed machines to use as thin clients.
  • Install Windows much more quickly by eliminating a time-consuming phase of the setup process.

How Do You Create WIMBoot Files?

You can use the Deployment Image Servicing and Management (DISM) tool to create WIMBoot images. Here is a procedure you can use to get started.

  1. Install Windows 8.1 Update on a PC.
  2. If you have existing Windows 8.1 images and Windows RE (Recovery Environment) images, slipstream the Windows 8.1 Update into those source files, too, so that you are working from the latest version of the operating system.
  3. Install the Windows 8.1 Update version of the Windows Assessment and Deployment Kit (ADK).
  4. Create a Windows PE boot drive by following the instructions at http://technet.microsoft.com/en-us/library/dn613859.aspx.
  5. Copy the existing install.wim file into a new temporary file that you will use to create the WIMBoot image.
    copy c:\images\install.wim c:\images\install_wimboot.wim
  6. Mount the image.
    md c:\mnt\win
    dism /mount-image /imagefile:"C:\images\install_wimboot.wim" /Index:1 /MountDir:c:\mnt\win
  7. Grab the hidden Windows RE image and unhide it so you can separate it out to save even more space.
    attrib -s -h c:\mnt\win\windows\system32\recovery\winre.wim
    move c:\mnt\win\windows\system32\recovery\winre.wim c:\images\winre.wim
  8. Unmount the image and save your changes with the /commit switch.
    dism /unmount-image /mountdir:c:\mnt\win /commit
  9. Boot into Windows PE (the boot drive you created in step four), apply the Windows image you just created, and then reboot. When Windows starts the out-of-box experience (OOBE) on that next boot, press Ctrl+Shift+F3 to enter audit mode.
  10. Clean up the image to save even more disk space.
    Dism /cleanup-image /online /startcomponentcleanup
  11. Sysprep the PC as you normally would to scrub SIDs and make the PC ready for imaging.
    c:\windows\system32\sysprep\sysprep /generalize /shutdown /oobe
  12. When the PC reboots, boot into Windows PE again and then recapture the image, which will include the applied updates and remove superseded files to save space. Note the new WIMBoot switch, which instructs DISM to save the image in a format compatible with WIMBoot.
    DISM /Capture-Image /WIMBoot /ImageFile:"X:\Images\install_wimboot.wim" /CaptureDir:C: /Name:"WIMBoot Enterprise_x64 8.1 Update"

How Do I Deploy WIMBoot into Production?

To use the WIMBoot image you created above, you need to boot your target PCs into Windows PE and then create a partition layout that includes a system partition, a Windows partition, and the image partition. Here is a sample diskpart script to create the required layout.

select disk 0
clean
convert gpt
create partition efi size=100
format quick fs=fat32 label="System"
create partition msr size=128
create partition primary
shrink minimum=10000
format quick fs=ntfs label="Windows"
assign letter=c
create partition primary
format quick fs=ntfs label="Images"
assign letter=m
set id="de94bba4-06d1-4d40-a16a-bfd50179d6ac"
gpt attributes=0x8000000000000001
list volume
exit

You can use the guidance at http://technet.microsoft.com/en-us/library/dn605112.aspx for detailed deployment sequences for different devices, based on whether or not you know in advance how large the images will be.
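At its core, though, the deployment boils down to a handful of commands run from Windows PE. Here is a minimal sketch, assuming the diskpart layout above (C: for Windows, M: for Images) and that the captured install_wimboot.wim is available on a USB drive mounted as E:; adjust the drive letters and paths to match your own environment.

rem Copy the WIMBoot image onto the Images partition, where the pointer files will reference it
md M:\Images
copy E:\Images\install_wimboot.wim M:\Images\install_wimboot.wim

rem Apply the image to the Windows partition using pointers rather than extracting every file
dism /Apply-Image /ImageFile:M:\Images\install_wimboot.wim /Index:1 /ApplyDir:C:\ /WIMBoot

rem Configure the boot files on the system partition
bcdboot C:\Windows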

This article originally appeared, with edits, at SearchEnterpriseDesktop.com.

Windows Azure Pack for Windows Server: A Primer

Windows Azure Pack brings the interface of Windows Azure—well, Microsoft Azure now—to your own servers and lets you create an infrastructure as a service offering for your internal customers, for external parties, or both. The best parts about it? The control of all of it rests with you, but the burden of requesting and provisioning can be delegated and automated. Let’s take a look.

How Windows Azure Pack Works

The real appeal of the Windows Azure Pack for Windows Server is that it gets you well on your way to creating your own “private cloud.” Using the very simple interface that the Azure infrastructure as a service (IaaS) offering provides, you can create offerings for your own organization’s departments and internal customers that put the power of self-service IT at their fingertips.

Note: the Windows Azure Pack for Windows Server requires Windows Server 2012 or Windows Server 2012 R2 to host the portal and services; no prior release is supported. The individual offerings and virtual machines that you make available within your portal, however, can run any version of Windows that is supported under the Hyper-V hypervisor.

For example, you could have a group of servers that makes up your private cloud. As the administrator, you can define offerings—for example, a small, medium, and heavy load service. Each service might consist of a certain virtual machine configuration, spread across different physical hosts for fault tolerance, a certain network setup, and a specific guarantee of service availability. These can all be offered through a portal, so when one department comes to you and needs a web site set up, you can point them to the portal and within a few clicks, they can get their service set up and running, with all the complexity of ordering and configuring hardware completely abstracted away.

You can also create website clouds, which remove even the provisioning of virtual machines from the equation—these shared web sites can host ASP.NET, PHP, and Node.js web applications specifically, and include a gallery of common and popular open source web applications to make creating new sites simple. The feature also integrates with source control systems like Git. These web site clouds exist just at the site level, with their actual hosting being done across a set of virtual machines you configure once and then essentially forget about. It is quite handy, especially for businesses that create multiple websites all the time and need a way to get them spun up and ready quickly without a lot of involvement from the IT department.

Commercial hosters will also find the Windows Azure Pack to be a useful tool for offering infrastructure services out to the larger Internet. The portal works in the same way, but there is also a mechanism for chargeback of hosting costs, as well as the ability to run promotions and coupons that offer limited-time upgrades and special services alongside your normal packages.

Getting Started with the Windows Azure Pack for Windows Server

Since the Windows Azure Pack is designed to bring the convenience of the super large Windows Azure service down to the quantity of servers and other iron that we mere mortals have in our own datacenters and server closets, it is safe to assume that there are some reasonably heavy hardware requirements. This is both for fault tolerance, as WAP will attempt to spread workloads around all of the available machines in a private cloud, and for capacity’s sake: the idea here is to host many workloads, not just one or two, so that you can fully reap the benefits of delegation and automation.

Luckily, Microsoft has made available the “Windows Azure Pack: Portal and API Express” package, which puts all of the components of the solution on one machine; that makes it great for testing and learning how WAP works. You can download the single-machine stack installer at http://go.microsoft.com/?linkid=9832690.

For the full monty, you will essentially need to dedicate eight machines to your private cloud, if not more. However, these eight machines can be either physical or virtual. Make sure each machine has at least two processors, 4 GB of RAM, and 40 GB of available hard disk space. As noted, each machine needs to be running Windows Server 2012 or Windows Server 2012 R2. Then, you can download the Web Platform Installer, or WPI, which is a very handy toolkit for deploying greenfield installations of IIS-based web applications.
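If you want a quick sanity check of those minimums on a candidate machine before you start, a few lines of PowerShell will do it; this is just a sketch against the stated guidance (two processors, 4 GB of RAM, 40 GB free), and it assumes the system drive is C:.

# Report the basics relevant to the WAP hardware guidance on this machine
$cs   = Get-CimInstance Win32_ComputerSystem
$os   = Get-CimInstance Win32_OperatingSystem
$disk = Get-CimInstance Win32_LogicalDisk -Filter "DeviceID='C:'"

[pscustomobject]@{
    Name       = $cs.Name
    OSVersion  = $os.Caption
    Processors = $cs.NumberOfProcessors
    MemoryGB   = [math]::Round($cs.TotalPhysicalMemory / 1GB, 1)
    FreeDiskGB = [math]::Round($disk.FreeSpace / 1GB, 1)
}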

From there, it is a matter of playing around in the administrator portal and configuring your services and offerings. If you are familiar with Windows Azure, you will feel right at home—it is literally some of the same code and structures used in the mainstream Azure public cloud service. Try it out today.

Four Strategies for Dealing with the Effects of Cyberattacks

Cyberattacks are a threat to all of us. White House officials confirmed in March 2014 that federal agents told more than 3,000 US companies that their IT deployments had been hacked, according to the Washington Post. Bloomberg reports that the SEC is looking into the constant threats of cyberattacks against stock exchanges, brokerages, and other Wall Street firms.

These attacks are going to happen. Here are four strategies to put in place to help you deal with cyberattacks and their threats.

Have a Disclosure Plan

Many industries are regulated by state, local, and federal governments and have specific rules about what must be disclosed to consumers during a cyberattack. This is especially true of the health care and financial verticals, where sensitive customer information is involved. Yet sometimes in the wake of an attack, or even while an attack is still happening, the evolving situation can be murky enough that disclosure rules get broken, or at the very least the disclosure process is delayed or confused. For that reason, it is important to plan ahead and develop an action framework for when events that trigger a disclosure response occur. Here are some considerations:

  1. Understand the applicable regulatory framework. For publicly traded companies, the SEC generally has disclosure guidelines and timeframes. For financial institutions, the Office of the Comptroller of the Currency (OCC) and the Federal Deposit Insurance Corporation (FDIC) handle this on the federal side. State regulations may vary.
  2. Engage your communications team. These employees are professionals who have developed relationships with media and other external stakeholders and can help you control the messaging and disclosures that you are required to make, as well as advise on the timing and breadth of those statements.
  3. Coordinate with the required departments. CIOs must coordinate with the individual IT teams responsible for the area under attack, outside contractors and vendors that are helping with the mitigation and recovery, and government agencies if applicable to keep the disclosure plan on track. Have key personnel identified ahead of time and make sure roles and next actions to carry out disclosure plans are known.

Understand What Targets of Yours Are Valuable

The real question about cyberattacks is not when they will occur. Attackers are inventing new ways to do everything, connectivity to the Internet is becoming more pervasive, and it is easier and cheaper than ever to acquire a botnet to do your bidding if you are a malfeasant. Cyberattacks will happen to you, whether that is tonight, next week, next month, or next year.

The real question about cyberattacks is where they will occur. Traditional attacks have really gone after most of the low hanging fruit, like payment information—witness the recent Target breach—or just general havoc-wreaking, like the Syrian Electronic Army enjoys carrying out with distributed denial of service (DDoS) attacks. Many attacks have been motivated by political or moral issues, or have been relatively simple attempts to harvest payment information to carry out low level fraud.

But future attacks could have more significant ramifications, including the attempt to retrieve more dangerous identity information like social security numbers. In a recent panel discussion at the Kaspersky Lab enterprise IT summit, Steve Adegbite, senior vice president of enterprise information security oversight and strategy at Wells Fargo, hinted that attackers may well be attempting to penetrate where the data is, implying that new cloud technologies and data warehouses as well as weaknesses in emerging technologies embraced by larger companies could well be future targets for attackers.

Where they will occur also pertains to the location of your enterprise. Threats in the United States will have a different profile than threats in Europe. Location matters in this equation. Take some time with your team to assess where cyberattacks are likely to be directed across your enterprise. Understand what may now be at an increased risk of attack, especially relative to the past.

Lobby for Budget Dollars to Defend and Mitigate Against Cyberattacks

IT budgets are no goldmine. CIOs have been used to having to do more with less for a long time now, and if we have sung the praises to our management group about how we can save the company money by, for example, moving to the cloud, or consolidating and virtualizing many servers, we might find ourselves with reduced budgets and reduced headcounts—right as the storm of cyberattacks threatens us. This is not a preferred position in which we find ourselves.

Unfortunately, cyberattacks are not only damaging but also expensive, not only in terms of the cost of services being down but also in the expense directly attributable to mitigating and defending against them. Vendors with experience in reacting to cyberattacks in real time and mitigating their effects are tremendously expensive, both at the time of the event and for hosting data during periods of inactivity in order to be prepared if and when an attack occurs. Purchasing the hardware and software necessary to properly harden your systems is expensive. This is an important line item, an important sub-area, in your budget for which you need to account. Consider it insurance on which you will almost certainly collect.

Also, look for products and technologies rated at EAL 6+, or High Robustness, which is a standard the government uses to protect intelligence information and other high value targets.

The bottom line: do not cannibalize your budget for proactive IT improvements and regular maintenance because you have failed to plan for a completely inevitable cyberattack.

In the Thick of an Attack, Ask for Help

When you’re experiencing an attack, you need good information you can rely on. Others have that information. In particular:

  • Join information sharing consortiums that can help you monitor both the overall threat level for cyberattacks and the different patterns that attack victims have noticed. For example, the National Retail Federation announced a new platform to share information and patterns that aims to arrest the data breaches the industry has recently suffered. Financial services companies have also set up an informational network, and other regulated industries often have a department of the governmental regulator that can serve as a contact point to help prevent this kind of illegal activity.
  • Develop a relationship with vendors with expertise in cyberattacks. It may be tempting to try to rely only on in-house resources and talent, both as a way to control costs and as a way to protect valuable information about your infrastructure. But many vendors and consulting companies have worked through multiple cyberattacks in the past and have tremendous experience under their belts. Hiring one of these companies may well stop a cyberattack before it does serious harm.
  • From the security technology you have in place, understand what readings are important and what may well be just noise. In an effort to impress and appear complete, many software vendors monitor every little thing under the sun and spin up a multitude of readings that can mask or inadvertently dilute the notifications of serious problems. Use your technology wisely and understand what notifications refer to high value targets so you can act earlier in the attack lifecycle.

Review: Goverlan Remote Administration Suite

I recently had the opportunity to spend some time evaluating systems management solutions that target a smaller price point and a smaller business set than Microsoft’s System Center product addresses, and in the process discovered a very nice product that I think can come in handy for many desktop administrators: the Goverlan Remote Administration Suite. In this review, I’ll go into more about Goverlan’s features, pricing, and drawbacks, and my recommendation.

Feature Overview

Goverlan Remote Administration Suite uses a Windows application, which in my tests performed quickly and consistently, although it naturally eliminates the ability to use non-Windows computers to perform administrative tasks and manage your network—something a web-based tool would be able to do, at least in a limited fashion. Installing and setting up Goverlan was reasonably fast, especially when installing client agents. Goverlan’s agent is only a few megabytes, has negligible memory requirements, and installs within seconds. The agent can be pushed to clients over the network without requiring a separate installation sequence, and is light enough to have no impact on system performance. The installation and uninstallation of agents from systems is very snappy—in some cases, I was able to remove and then redeploy agents in under five seconds.

I found Goverlan RAS to be quite full-featured when it comes to management, offering a central console from which to administer servers and clients. Goverlan’s interface has a distinct Office 2007/2010 look, with a task-based ribbon and logical organization, so it is not intimidating to get started with. One feature I did find highly useful is the Scope Actions tool, which allows you to easily define a scope of targets—they could be users, computers or groups—using powerful filtering and selection criteria. Then, you define the administrative actions to perform on that scope, which can essentially be anything Windows permits you to do, including Windows Management Instrumentation (WMI) queries, and then run those actions either on an ad-hoc or a scheduled or a recurring basis. I found the availability of options and the granularity of setting up a proper scope very robust with Goverlan—this is a feature that would pay off more and more as your network grows.

Goverlan also does a great job of helping administrators create custom WMI queries through its WMIX utility. It includes a WQL wizard that makes quick work of customizing WMI queries, and allows you to access even obscure classes and properties with a query that you can then export to other Windows administrative tools, like the Group Policy Management Console and Group Policy Object Editor. This can be a real boon to speeding up the creation of policies.
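For context, the kind of query WMIX builds for you is plain WQL. The snippet below is a generic example run through PowerShell's Get-WmiObject rather than anything Goverlan-specific; it lists fixed local disks and their free space, the sort of inventory query you would then reuse in Group Policy filtering.

# A plain WQL query of the kind a WQL wizard produces: fixed local disks with size and free space
Get-WmiObject -Query "SELECT Name, Size, FreeSpace FROM Win32_LogicalDisk WHERE DriveType = 3" |
    Format-Table Name, Size, FreeSpace -AutoSize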

Remote control and desktop sharing is another area in which Goverlan excels, allowing administrators to assist and troubleshoot client machines without being physically present at that computer. Goverlan uses an agent that needs access to only a single TCP port for communications, and works with Windows, Mac OS X, and Linux computers. It encrypts the remote control traffic for added protection. Goverlan also offers administrators the option of masking a user’s screen while you are performing functions remotely, so the user cannot see potentially sensitive information while you are administering the client machine from a distance, and it does a particularly good job of organizing remote control targets in its interface. Goverlan also works well with virtual desktop infrastructure (VDI) deployments, allowing an administrator to shadow multiple client sessions on a single virtual host that uses Microsoft’s Terminal Services and Remote Desktop Services as well as Citrix’s Xen line of VDI products, just as simply as a console session on a physical computer. Finally, Goverlan provides screen capturing options, including video recording, alongside these remote session features.

Pricing and Support

Goverlan Remote Administration Suite is priced at $699 per administrator with no client node fee. Goverlan provides 90 days of maintenance support with every license, and you can purchase one- or two-year maintenance package extensions.

The Last Word

Goverlan’s Remote Administration Suite v7 lives up to its claims of automating and centralizing Windows administration and support in the enterprise. I believe Goverlan presents a robust, rich set of options and a strong overall value for the money, with a relatively low price point to begin with, a perpetual right to use the entire software suite by default, a unified set of capabilities in the same product as opposed to edition thresholds to navigate, and an ability to manage an unlimited number of client machine targets per administrator.

This article was published with edits on SearchEnterpriseDesktop.com.

Windows Server 2012 R2 Work Folders: A Simple Guide, Part 2

Here, I continue with the process of setting up Work Folders. In part 1, we were working on the server, and that’s where we will pick up here in part 2.

Configuring SSL Connections

There is still more work to be done on the server; at this point, we need to configure SSL encryption. For lab deployments, you can use a self-signed certificate if you are opposed to spending money, or you can generate a certificate request for your existing public key infrastructure (PKI) deployment and have it signed by a trusted certification authority. The feature works with either.

What names do you need to use? Work Folders client devices are hard coded to look for hosts answering at the following two subdomains:

SyncSvr.domain.tld
WorkFolders.domain.tld

You will of course need certificates covering those namespaces in order for the SSL connections to be authentic. If you are using self signed certificates, make sure to import those certificates into the local certificate stores on your client devices, so that Windows knows to trust those certificates when they are presented within the transaction. (Your normal PKI will probably not help here, since these client devices are generally going to be personal devices that are not domain members and do not exchange certificates as part of the domain join process.)
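For a lab, one quick way to produce such a self-signed certificate on the sync server is with PowerShell. A minimal sketch follows, with domain.tld standing in for your own namespace; the single certificate covers both hostnames via subject alternative names.

# Lab use only: create a self-signed certificate covering both Work Folders hostnames
New-SelfSignedCertificate -DnsName "workfolders.domain.tld", "syncsvr.domain.tld" -CertStoreLocation Cert:\LocalMachine\My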

Once you get a certificate or generate a self signed certificate, find the thumbprint for that certificate. Again, PowerShell rides to the rescue:

Get-ChildItem -Path cert:\LocalMachine\My

Next, open up a regular command line session—not another instance of the PowerShell interface. Then, use netsh to bind that particular certificate to all hostnames on the server at the default IP SSL port of 443. The long appID attribute number is static, so feel free to cut and paste here if it suits you:

netsh http add sslcert ipport=0.0.0.0:443 certhash=<thumbprint_you_got_from_the_PowerShell_command_above> appid={CE66697B-3AA0-49D1-BDBD-A25C8359FD5D} certstorename=MY

Configure DNS Records

We’re almost to the finish line on the server side. We need to set up DNS A records for the following domains:

SyncSvr.domain.tld
WorkFolders.domain.tld

Use whatever DNS solution you have implemented to create these records, and make sure these records exist in your public namespace as well as the private namespace if you are doing what is commonly known as “split DNS.” The A records for each should resolve to the externally facing IP address of your sync server, assuming you want to enable syncing over the big bad wild Internet.
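If your internal zone happens to live on a Windows DNS server, the records can be created with PowerShell as well. A sketch is below, assuming a zone named domain.tld and a sync server reachable at 203.0.113.10; substitute your own zone name and address.

# Create the two A records Work Folders clients look for (requires the DnsServer module on the DNS server)
Add-DnsServerResourceRecordA -ZoneName "domain.tld" -Name "WorkFolders" -IPv4Address "203.0.113.10"
Add-DnsServerResourceRecordA -ZoneName "domain.tld" -Name "SyncSvr" -IPv4Address "203.0.113.10"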

That’s the end of the server process. Let’s look at what a client experience is like using Work Folders.

Work Folders on the Client Side

The client process is pretty simple, to be honest. The heavy lifting is done on the server side, and since the clients are hard coded to look for sync servers at the domain names above, as we discussed, there are not a lot of settings to configure on the client side.

To enroll a client in Work Folders:

  1. Open Control Panel, and click on System and Security.
  2. Click on Work Folders, and then click on Set up work folders.
  3. Enter the user’s e-mail address. The wizard will look for WorkFolders.domain.tld and then attempt to create a partnership. You will be prompted for credentials on non-domain-joined machines.
  4. The wizard will ask where on the local machine you would like to store synced content. It will also add a Work Folders entry to the left pane of an Explorer window.

That’s it. By default, Work Folders will attempt to sync every five minutes.

Where You Will Run Into Trouble with Work Folders

As I mentioned in Part 1, Work Folders joins an already crowded field of sync solutions, and for a variety of reasons, I suspect your use of it will be limited in the near term. Here is why.

  • Right now, you can only use Work Folders in conjunction with Windows 8.1 devices and Windows RT 8.1 devices. Clients for Windows 7 and Windows 8 devices (sorry, nothing for Windows XP!) are due in the coming months. On the server side, Windows Server 2012 R2 is required to run the sync share, and there will not be any backporting to earlier server versions like Windows Server 2008, 2008 R2, or 2012.
  • Work Folders is pretty much an all-or-nothing affair. Except for being able to choose a specific subfolder on the path of the shared sync folder, Work Folders syncs everything—you can’t select single files. Devices with limited local storage, like tablets and phablets, could find themselves without free space very quickly with this solution unless it is carefully managed by the end user.

With these caveats, Work Folders is still an in-the-box solution for Windows 8.1 clients to help bridge the personal device-corporate device gap and enable users to get access to what they need from wherever they are and whatever device they are using. You should evaluate it sooner or later and find where it makes sense for your organization.

This article was published, with edits, on SearchWindowsServer.com.

Windows Server 2012 R2 Work Folders: A Simple Guide, Part 1

There are as many file sync technologies as there are days of the week, and yet Microsoft has introduced yet another sync solution to its portfolio alongside OneDrive, OneDrive for Business (née SkyDrive Pro), and the Offline Files and Folders feature that’s been baked into the operating system since Windows XP. Meet Work Folders, a sync solution that is really designed to bridge the gap between personally owned devices like tablets (and in the case of Work Folders, Windows RT and Windows 8.1 tablets only for now) and corporate owned data. This is space no other sync technology from Microsoft is currently working in.

In this two part series, I’ll show you how to set up the feature and get it working, and discuss some obstacles you will run into during your evaluation.

Installing the Work Folders Feature

First off, you need to install the Work Folders role service, which is an integrated part of Windows Server 2012 R2. Through PowerShell, a one-line command gets it done for you.

Add-WindowsFeature FS-SyncShareService

(What’s not to love about PowerShell?)

When the command is finished running, you’ll be prompted with a notification within the command line like this:

Success Restart Needed Exit Code      Feature Result
------- -------------- ---------      --------------
True    No             Success        {Work Folders, IIS Hostable Web Core}

Set up the Sync Share

The sync share is key to the Work Folders feature, the central bit of how it all works together. The sync share is essentially how Windows Server 2012 R2 manages what files to sync for which users. You set up a sync share by establishing a folder locally accessible to a Windows Server 2012 R2 server and then pointing a wizard to that location. Within that shared folder, each user will have a subfolder that will contain the files that the Work Folders feature will host and sync.

  1. Open Server Manager.
  2. Navigate to File and Storage Services in the left section and Work Folders in the middle section.
  3. From the Tasks drop-down menu, select the New Sync Share Wizard.
  4. On the first screen, you can either choose an existing file share, which may be appropriate if you redirect user profiles and documents folders to a central file server, or you can enter a local path to create the share. Click Next.
  5. The “Specify the structure for user folders” screen appears. Here you can choose how folders will be built under your sync share. You can either use:
    1. the “user alias” method, which essentially uses the users’ login names only to create subfolders for each person under the sync share to house their folders, or
    2. the “user alias@domain” structure if you are a larger organization that may have multiple domains and therefore an increased chance of collision between alias names.

The “Sync only the following subfolder” feature can also be enabled, which lets you choose one individual subfolder to sync. This lets you avoid syncing music, pictures, and videos for every user. You may wish to only sync the Documents folder, or you may wish to create a new subfolder called “Work Folders” and then train your users to save documents that they want synced into that folder. You can choose a subfolder name that does not exist, and the wizard will create it for each user that has the Work Folders feature enabled (which happens later in the wizard). Click Next.

  6. Enter a name for your sync share, and click Next.
  7. Choose which groups have access to this sync share. The best practice here is to assign all users that should have access to this particular sync share as members of a specific security group, and then add that group here. If you want to let an administrator have access to documents within the work folders, uncheck the “Disable inherited permissions and grant users exclusive access to their files” box. Finally, click Next.
  8. The Specify Device Policies screen appears. Here you can choose whether to instruct devices that sync to this sync share to encrypt the files within the work folders space, and whether to require a PIN on these devices—much like Exchange ActiveSync, if you are familiar with it. Click Next.
  9. A summary screen appears. Verify all of your selections, and then click Finish.

Windows Server 2012 R2 will apply your settings and create the sync share.

PowerShell-ing Work Folders

Because I really like PowerShell, I’m fond of learning how to perform tasks with the language, even if I ultimately decide to use the GUI, so that I fully understand my options. In the case of Work Folders, one command does all the work.

Take the following example. Entering the command below creates a new sync share named SalesShareCA, located at C:\SyncShare\SalesCA, for the California_Sales security group, and requires devices to honor both an encryption policy and a password/auto-lock policy.

New-SyncShare SalesShareCA -Path C:\SyncShare\SalesCA -User DOMAIN\California_Sales -RequireEncryption $true -RequirePasswordAutoLock $true
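To confirm the share came out the way you intended, the companion cmdlet below echoes back the sync shares defined on the server, including their paths, user groups, and device policy settings; this is just a quick verification step, not a required part of the setup.

# List the sync shares on this server and review the settings you just applied
Get-SyncShare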

Coming Up

In part 2, I’ll cover more server side setup and then show you how to connect a client and administer the solution, as well as analyze where Work Folders makes sense today and in the near term.

This article was published, with edits, on SearchWindowsServer.com.

So You’re New to Azure, Part 1

You may have heard a lot about Windows Azure, soon to become Microsoft Azure. But you might not know exactly how the Azure service can fit into your plans. Windows Azure has come a long way from its initial introduction in the fall of 2008 and really has something for most businesses and organizations. Windows Azure can effectively function as

  • A complete datacenter for microbusinesses and small businesses
  • A disaster recovery solution for organizations of all sizes
  • A scalable way to host web sites outside of your existing infrastructure
  • A way to run lab and test scenarios using virtual machines nearly identical to the ones you can create in your office

If you’re new to Azure, it can be difficult to know where to begin. The idea behind this new series of articles is to take you, as someone brand new to Windows Azure, through the options and show you exactly how you can get set up and running, and how you can enable some of the cooler scenarios that Azure makes possible.

What Kinds of Things Can You Set Up in Windows Azure?

Once you have established your Windows Azure account, you can set up:

  • Web sites, which are basically managed Internet Information Services (IIS) accounts that run specific web applications in either free mode, which operates at no cost to you but has significant limitations on capacity; or paid mode, which assigns specific resources at a cost
  • Virtual machines, which we’ll talk about in the next section
  • SQL databases, which can be used by web sites or other cloud services
  • Storage, including virtual hard disks of any size depending on what kind of budget you have
  • Media Services, like streaming audio and video with probably better availability metrics than you would be able to achieve on your own
  • Visual Studio Online, which is essentially hosted source code control for your development teams, based on the well regarded Team Foundation Service

The console (shown in Figure 1) is easy to use and shows all of these options in the left menu area.

Figure 1: The Windows Azure Management Console


Setting up a Virtual Machine

One of the best ways to use Microsoft Azure is as a test bed. In just a few clicks, you can spin up any number of virtual machines, customize their networking, and use them just as if you unboxed physical hardware and plugged it into your local switch. It’s great for testing scenarios, getting to know new software, working on certification exercises, and more. Let’s look at how to get started creating virtual machines.

  1. On the manage.windowsazure.com control panel, click Virtual Machines.
  2. In the middle pane, click the Create a Virtual Machine link.
  3. The UI experience will bring up a gray box in four columns, as shown in Figure 2. You can choose Quick create, which lets you input all of the options needed to spin up a VM on one screen to get you started as quickly as possible, or the From Gallery option, which lets you browse a library of virtual machine options with all manner of operating systems including Windows Server 2008 R2, Windows Server 2012, Windows Server 2012 R2, various flavors of Linux, Oracle, and pre-built application suites like SharePoint, BizTalk, and Dynamics NAV and GP. For the purposes of our example, we’ll just set up a quick Windows Server 2012 R2 virtual machine using the Quick Create option.
  4. In the rightmost column, enter:
    1. The name for the virtual machine (between three and 15 characters, and no funky symbols as it is a DNS name)
    2. The base OS image to use—if you’re in doubt, for now just create a Windows Server 2012 R2 machine to check things out
    3. The size, which is basically how many processor cores your VM will use and how much RAM will be dedicated to the machine—more cores and more memory costs money, of course
    4. The administrator’s username and password
    5. Which datacenter this virtual machine should run in—pick the closest one to your current location.
    6. Click Create Virtual Machine.

Figure 2: Creating a New Windows Azure Virtual Machine

At that point, the console will redirect you to the Virtual Machines page, and you will see the provisioning process take place under the status column, as shown in Figure 3. After five minutes or so, your machine should have a green check mark and the text “Running” listed, which means it is ready to go. Congratulations, you have an instance of Windows Server running in the cloud.

Figure 3: Provisioning a new virtual machine
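Incidentally, if you would rather script the whole thing than click through the portal, the Azure PowerShell module of this era can create an equivalent machine in a couple of lines. The sketch below is hedged: the service name, VM name, credentials, location, and the image name are all placeholders, so list the available images first and substitute real values for your subscription.

# Assumes the (classic) Azure PowerShell module is installed and your subscription is already connected
# Find a current Windows Server 2012 R2 image name
Get-AzureVMImage | Where-Object { $_.Label -like "*Windows Server 2012 R2*" } | Select-Object ImageName

# Quick-create a small VM (all names and credentials below are illustrative)
New-AzureQuickVM -Windows -ServiceName "myteststack01" -Name "testvm01" `
    -ImageName "<ImageName from the list above>" -AdminUsername "azureadmin" `
    -Password "P@ssw0rd!123" -InstanceSize "Small" -Location "West US"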


Virtual machines are accessible from the Internet using a *.cloudapp.net address, where the asterisk is the name of the virtual machine you entered during setup. You do need to enable Remote Desktop in order to get an RDP session going to your new virtual machine, which we will cover in the next section.

How to use Remote Desktop to Administer Your Azure Virtual Machine

As you might suspect, to manage virtual machines in Windows Azure, you use Remote Desktop Protocol. Advantages? The client is built directly into Windows, so there is no need to download any additional software. Disadvantages? Not many, unless you’re on a Linux or Mac machine, where the clients are less full featured.

To log in via remote desktop, go over to the Virtual Machines page, click on the virtual machine you want, and then on the bottom control panel, click Connect. Your browser will prompt you whether to open or save an RDP file with the name of your virtual machine, as shown in Figure 4. Save this to your desktop or other location. Then, open up the RDP file which will launch the Remote Desktop Connection client. Accept the security warning (this is simply because the certificate being presented by your new virtual machine is not trusted by your current machine), enter the username and password you configured when creating the machine, and then you are in.

Figure 4: Downloading the RDP file from the management console
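If you would rather skip the browser download entirely, you can also launch the built-in client directly from a command prompt. Note that classic Azure virtual machines expose RDP through an endpoint on a randomly assigned public port, so check the endpoint list for your VM first; the host name and port below are placeholders.

mstsc /v:testvm01.cloudapp.net:54321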


What’s Next?

In the next post, I’ll show you how to make your Azure virtual machines a natural extension of your own on-premises network.

The Hunt for the Data Scientist

It is hard to resist the sparkly nirvana that Big Data, leveraged appropriately, promises to those who choose to embrace it. You can transform your business, become more relevant to your customers, increase your profits, and target efficiencies in your market all by simply taking a look at the data you probably already have in your possession, but have been ignoring due to a lack of qualified talent to massage it.

Enter the data scientist, arguably one of the hottest job descriptions on the market. The perfect candidate is a numbers whiz, a savant at office politics, and plays statistical computing languages like a skilled pianist. But it can be hard to translate that ideal into an actionable job description and screening criteria. In this piece, I explain several points and virtues to look for when identifying suitable candidates for an open data scientist position on your team, and I also note some market dynamics to keep in mind when establishing compensation packages for data scientists. Let’s dig in.

Qualities to Look For in a Data Scientist

Since “data scientist” is a bit of a new concept, without a lot of proven job descriptions, you will want to work closely with your human resources department on the rubric and qualifications you are using to screen initial resumes and also set up a first round of interviews. However, here are some salient points to add to your list to qualify candidates that should prove useful during the process:

  • A good data scientist candidate will have a keen understanding of statistics and the law of large numbers. Trends are seen in numbers. For example, a good data scientist understands that “this many customers behave in this certain way” and “this many customers intersect with others at this many precise points.” Over large quantities of data, trends pop out of the numbers, and a great data scientist has the skillset to interrogate those large datasets, extract the trends, and then use predictive modeling techniques to anticipate behavior across the aggregate. Statistics are also helpful in preparing reports for management and prescribing recommended courses of action. While a mathematics degree would be ideal, many qualified candidates will have taken a slightly more practical academic path, and you should not be scared away by interviewees who do not have advanced mathematics credentials. A focus on statistics in a candidate’s academic career, whether at the bachelor level or above, would prove sufficient for this type of position.
  • A good data scientist will have a strong inquisitive sense. Part of the allure and mystique of big data is the art of teasing actionable conclusions from a giant haystack of (typically) unstructured data. It is generally not enough to know how to write queries to find specific information without being able to generate the context of what queries should be run, what data we would like to know, and what data we might not know we would like to know but that could possibly be of interest. Great data scientists execute queries and database runs, but they also design suggestions for architecting queries in ways that not only return a defined set of results to answer a question someone already asked, but that also reveal new insights into questions that have not yet been asked by an organization. This is where the real value of a data scientist will present itself over the coming years, I predict. While some might argue that this is a soft skill that is difficult to interview for, I disagree. Carefully crafted hypothetical scenarios can be presented to candidates during interviews that will help you understand their thought process, their approach to a problem, the various ways the candidate would attempt to glean the answers to the problem, and also what other questions the candidate could pose that would add value to the original query. Stress to candidates during the interviews that “outside the box” thinking is encouraged and limiting answers to only the problems posed is discouraged. 
  • A good data scientist will have a familiarity with database design and implementation. It is important for today’s data scientists to sit somewhere in the range between an inquisitive university research scientist (which is essentially what my last point is describing) and a software developer or engineer, someone who knows how to tune his lab and operate his machinery well. Even though much of what falls under the “big data” category is known as unstructured data, a fundamental understanding of both relational and columnar databases can really serve a data scientist well. Many corporate data warehouses are of the traditional row-based relational database sort, and while big data is new and alluring, much actionable data and trends can be teased out of traditional databases. Also, data scientists will play a key role in setting up analytics and production databases to take advantage of new techniques, and a history of working with databases would provide great context for setting up new systems in the new role. Additionally, we see many big data software developers attempting to use SQL-like languages in their products in an attempt to woo traditional database administrators who have no desire to learn a MapReduce-like language. A knowledge of traditional SQL will continue to pay dividends and will allow data scientists to play nicely and integrate well with other database professionals that you already have on your staff.
  • A good data scientist will have a baseline proficiency in a scripting language. Your most qualified candidates should be awarded extra points for knowing Python at least somewhat well. Many query jobs over vast quantities of unstructured data are issued in scripts and take quite some time to run. Python is generally accepted as the most compatible, most versatile scripting language for working with columnar databases, MapReduce-style queries, and other elements of the data scientist puzzle. Python is an open source language known to be fairly usable and easy to read, so it should not pose much of a hurdle for your base of data scientist candidates to overcome. You could also consider “pseudo code” skills, or the ability to write out almost in plain English how an algorithm or a query would work—such a test would show the quality of the thinking and the approach to a problem as well as how such a problem would begin to be solved by your applicant, regardless of whether he or she actually possesses the skills in any given language to pull it off.

Be Prepared to Pay

One note about compensation: as demand for data scientists increases, and as long as the supply of qualified candidates is outstripped by that demand, salaries will keep rising. In almost any metro market in the United States, data scientists are receiving six-figure base salaries—obviously higher in high-cost markets like the West Coast. In Silicon Valley in particular, multiple offers for a qualified candidate are not uncommon. Do not attempt to pay below market rates for this position. Even startups are paying data scientists comfortable wages plus the ability to work on challenging new products, unlike their traditional modus operandi of loading up on equity and paying measly wages. Put simply, don’t cheap out and expect great talent.

This article was published with edits on CIO.com.

What CIOs Can Learn from the Recent Target Payment Systems Breach

We are likely all familiar with the Target payment card breach in late 2013, wherein millions of payment card numbers were stolen through a huge hole in the company’s network, right down to the security of the PIN pads. As of March 5, 2014, the breach had cost the chief information officer of Target, Beth Jacob, her job. It was, and still is, a very serious matter.

Obviously Target is a very public company and this situation garnered a lot of attention. As a CIO, however, or a member of executive technical staff, there are some observations we can make about the situation. What can you take away from Target’s very public example of a breach? Here are four key lessons.

  • It is vitally important to know which alarms and alerts to respond to and which ones you can safely ignore as an organization. Security vulnerabilities are in all actuality a dime a dozen in this connected age. Different software has different risk profiles, and some vulnerabilities that affect certain organizations severely are already safely mitigated in other organizations simply by the structure of how components are set up. Performing a thorough threat analysis is crucial, but knowing how to manage the onslaught of event logs, audit logs, vendor vulnerability notifications, and intrusion prevention messages is just as critical. One best practice is to develop a rubric by which a weight is assigned to alerts about security vulnerabilities and attempted penetration. Depending on what business you are in, you can score this either by the system involved or by the source of the alert. Some considerations might include the following:
    • If you are a retail business, payment systems alerts should clearly be given priority. Typically these payment systems are segregated from other networks, but patching alerts from your vendors, security audit logs, and activity monitoring should be reviewed at a high frequency, and particular attention should be paid to anomalies that appear in these results. Internet-facing businesses should always ensure that fraud prevention measures are in place and that shopping cart and e-commerce software is patched and monitored.
    • Alerts from your intrusion detection system or honeypots should also be given a higher priority than other alerts. However, it may be necessary to fine tune thresholds so that one-off attempts do not raise alarms, but repeated attempts that display similar characteristics are evaluated for their consistency and then bubbled up to the appropriate levels for technical review and analysis.
    • Other regular software vulnerabilities, like those in file servers and desktop software, should be cataloged and analyzed but should fall below other riskier parts of your technology stack.

It is best to spend time now creating a judgment structure by which you can evaluate alerts and threat messages so that the signal to noise ratio when it comes to these notifications is as high as it can be. This way, “red alert” messages get the attention they deserve immediately, while “yellow alert” type messages can be analyzed with a less urgent pace.

  • As a CIO, it is important to fight to get funding for a Chief Information Security Officer position to handle these significant security and liability responsibilities. As the old saying goes, the buck must stop somewhere, and as with most things technology, the head of the information services organization is likely to get the blame. But CIOs are burdened with more areas of responsibility than ever before these days, from generally keeping the computers running, to creating new technology-driven lines of business that can actually represent a profit center for a company, to liaising with marketing and the executive suite to unlock the secrets that lie within the massive amounts of data stored in the corporate data warehouse. Security is an important part of all this, and creating a security regimen and implementing it throughout the organization is really best done by a dedicated CISO, someone whose sole job is to monitor the security posture of a business and then carefully and deliberately enhance it over time. A CIO is simply too rushed, spread too thin, to fully handle this responsibility—and Target shows why. It took several weeks to get to the bottom of the extent of the breach. According to multiple reports, it took days to even discover the breach before the media caught on to it. And as we all saw, it seemed Target discovered more and more about exactly what data was lost in the attack, judging from the trickled release of information to the public and to the media. One can imagine within Target the frenzy of getting to the bottom of what happened, reacting to it, preventing the situation from deteriorating, and activating response plans. The buck stopped with Beth Jacob, the Target CIO, and her response was left somewhat wanting. It is a very real possibility that she simply had too much on her plate. Additionally, the creation of a specific security head is a sign to the rest of the organization that security is serious business, and having such a position generally gives the CISO the autonomy required to put into place the right remedial measures to enhance security. Having to work through a chain of command not dedicated to security can delay or even jeopardize necessary technical improvements because of a lack of clear communication or an inability to convince others that some measures are necessary.
  • An incident response plan is absolutely key to a successful recovery in the event of a breach. In the hours and initial couple of days right after a breach has been discovered, there is usually only one priority: fix the breach, at all costs. Stop the bleeding. And this is a fine approach for the technical team. However, others in your organization need to at least be activated to begin planning a communications approach that keeps all stakeholders informed. Witness the somewhat haphazard way in which Target disclosed the breach. Were PINs compromised or just payment card numbers? Were PINs leaked? Were encrypted PINs leaked? Was anything leaked? The story seemed to change as the situation developed, which is a symptom of an incomplete crisis communications plan. I will note, however, that the PIN pads and perhaps other payment and point of sale equipment were replaced at my local Target location within a couple of days after the initial announcement of the breach. This is a sign of an excellent technical response plan.
  • The weakest point in your security posture is very likely an area you have not even considered yet. Consider the Target breach, which began with a heating, ventilation, and air conditioning contractor having access to a wireless network on the vulnerable side of the Target corporate firewall. It all began because something as innocuous as a thermostat was not functioning correctly. Remember that hackers and crackers are sophisticated, and at this level, are playing a long game to nail lucrative, high-value targets. They are looking where they think you are not looking. As a CIO, it is your job to direct your teams to batten down any and all hatches, procedural, technical, and otherwise. Provide the leadership and the ethos to make this type of watchful, deliberate security a priority.

This article was published with edits on CIO.com.

Top 5 Reasons to Run Windows 8.1 on Hyper-V

With Windows 8.1 generally available and on the streets now, the time has come to take a deeper look at its features. The way to do that for most IT folks is to install Windows 8.1 in a virtual machine. If Hyper-V happens to be your hypervisor of choice, then there is good news: Hyper-V and Windows 8.1 work really well together. Whether you are running Windows 8.1 on your desktop and Hyper-V for a couple of test machines, or you are deploying Windows 8.1 in a virtual desktop farm running Windows Server 2012 or Windows Server 2012 R2 on the backend, there is a lot to like about Hyper-V and Microsoft’s latest OS. Let’s dig in.

Generation Two Virtual Machines

Windows 8.1 is designed to work within what Microsoft calls a Generation 2 Virtual Machine. This is a new type of VM available exclusively under Hyper-V that essentially strips away all pretense of virtualizing legacy PC deployments. Generation 2 virtual machines are UEFI-based, rather than relying on a BIOS, and there are no emulated devices. These VMs boot off of virtual SCSI and network adapters directly and also support Secure Boot, the preferred way to ensure only signed kernels are permitted to boot within the virtual machine. The only supported guest operating systems are 64-bit versions of Windows 8 and Windows Server 2012 and later, but Windows 8.1 is easier to set up within one of these VMs because it includes the Gen 2 keyboard driver needed to type in a product key during setup (those pesky things!) – otherwise you’re flying blind, or pecking at the on-screen keyboard, or tearing your hair out.

Why a generation two virtual machine over the existing VM standard? For one, it gets rid of all of the goo that makes the VM look like a physical computer because unlike the late 90s, people virtualize first in most environments these days. Windows 8.1 has a deep knowledge of what it means to be on a virtual machine and so there is no need for the trappings and the pretense of trying to look like a physical computer and the performance overhead that comes with that.
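Creating one of these VMs is a one-liner with the Hyper-V PowerShell module. The sketch below uses placeholder names, paths, and sizes, so adjust them for your host; it creates a Generation 2 VM suitable for a 64-bit Windows 8.1 guest and attaches install media.

# Create a Generation 2 VM for a Windows 8.1 x64 guest (names and paths are placeholders)
New-VM -Name "Win81-Gen2" -Generation 2 -MemoryStartupBytes 2GB `
    -NewVHDPath "D:\VMs\Win81-Gen2.vhdx" -NewVHDSizeBytes 60GB -SwitchName "External"

# Attach the Windows 8.1 installation ISO as a virtual DVD drive (Gen 2 VMs boot from SCSI-attached media)
Add-VMDvdDrive -VMName "Win81-Gen2" -Path "D:\ISO\Windows81_x64.iso"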

For VDI Deployments, Deduplication

If you are considering running Windows 8.1 in a virtual desktop infrastructure (VDI) environment, with scores of hosts running Windows 8.1 guests in virtual machines that your users log in to as their daily drivers, then the Windows Server 2012 R2 deduplication feature can save an enormous amount of space and increase performance at the same time.

Instead of storing multiple copies of Windows 8.1 in your guest virtual machines’ disk drives, which is simply wasted space, Windows Server 2012 R2 supports deduplication—it reads the files on a volume and then optimizes by storing only one copy of a file and replacing the duplicate copies with “pointers” to that single stored instance. This deduplication feature supports open VHD or VHDX files, meaning it can deduplicate files on a volume with running virtual machines, so you don’t have to take your VDI farm down in order to begin using this feature. While deduplication was first introduced in Windows Server 2012 gold, the performance of the optimization algorithm has been improved, and it completes faster than it did in the previous release.

What’s the result? When the deduplication is finished, Windows actually is able to read those optimized files faster than if they were not deduplicated, and with a VDI deployment in particular as opposed to just a typical file server, you can gain space savings as high as 90% with minimal impact on performance.  Kind of a no-brainer to make your VDI farm boot faster and gain back 90% of your storage for that VDI farm at the same time, no?
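Turning this on for the volume that holds your VDI virtual disks takes only a couple of commands from the server's PowerShell prompt. The sketch below assumes the VHDX files live on drive D: and that you have not yet added the Data Deduplication role service; the VDI-aware usage type is what allows open VHDX files to be optimized.

# Install the Data Deduplication role service if it is not already present
Add-WindowsFeature FS-Data-Deduplication

# Enable deduplication on the VDI storage volume using the Hyper-V (VDI) profile, then check savings later
Enable-DedupVolume -Volume "D:" -UsageType HyperV
Get-DedupStatus -Volume "D:"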

Storage Quality of Service (QoS)

Much like network quality of service (QoS) allows administrators to define caps for certain types of network traffic in order to ensure enough bandwidth exists for other activities and no one type of traffic sucks up the entire network pipe, Hyper-V in Windows 8.1 and Windows Server 2012 R2 supports storage QoS. This feature lets you restrict disk throughput for overactive and disruptive virtual machines, which is great if you have a long-running process in one VM and you don’t want the overall I/O performance of your host machine dragged down. Since this feature is dynamically configurable, you can even adjust the QoS settings while the virtual machine is running so you do not interrupt the workload of a given VM.
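In the 2012 R2 wave, the cap is set per virtual hard disk rather than per VM. A minimal sketch with placeholder names is below; it limits a noisy VM's first SCSI-attached disk to 500 normalized IOPS while the VM keeps running, and setting the value back to 0 removes the cap.

# Cap a specific virtual hard disk at 500 IOPS; adjust the VM name and controller location to match your VM
Set-VMHardDiskDrive -VMName "NoisyVM" -ControllerType SCSI -ControllerNumber 0 -ControllerLocation 0 -MaximumIOPS 500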

Online VHDX Resize

Ever had a virtual machine run out of disk space? If you set up the virtual machine with a fixed-size virtual disk rather than a dynamically expanding disk, this could pose a problem. However, the new Hyper-V in Windows 8.1 and Windows Server 2012 R2 allows you to increase and decrease the size of virtual hard disks of a virtual machine while the virtual machine is running—a hot resize, if you will. The virtual machine in question can be running any guest operating system, so there is no limitation on running, say, Windows XP or even Linux with this feature, but the virtual hard disk file must be in the newer format (VHDX) as opposed to the older but more popular VHD format.
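The resize itself is a single cmdlet against the VHDX file. A sketch with a placeholder path follows; for an online grow, the disk needs to be a VHDX attached to the VM's SCSI controller, and you still extend the volume inside the guest afterward.

# Grow the running VM's data disk to 200 GB; then extend the partition inside the guest (Disk Management or Resize-Partition)
Resize-VHD -Path "D:\VMs\NoisyVM-Data.vhdx" -SizeBytes 200GB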

Enhanced VM Connect

Hyper-V in Windows 8.1 and Windows Server 2012 R2 offers what Microsoft calls “enhanced virtual machine connect,” which is essentially the ability to use the remote desktop protocol to connect to a virtual machine even if the network within the virtual machine is down. Hyper-V uses VMBus, the internal communications channel for virtual machines to the hypervisor, and transmits RDP over the VMBus independent of the network connection. Also as part of this enhanced mode, you can now drag and drop files between the host and the virtual machine (which makes use of the clipboard sharing capabilities) and you can redirect local resources like smart cards, printers, and USB devices right over that VMBus connection. This makes it much simpler to perform troubleshooting and simple administration. (Note, this enhanced mode is enabled by default if you’re running Hyper-V on Windows 8.1, but disabled by default if you are running it on Windows Server 2012 R2.)
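That server-side default is easy to flip. The one-liner below, run on a Windows Server 2012 R2 Hyper-V host, allows enhanced session (RDP-over-VMBus) connections from Virtual Machine Connection.

# Allow enhanced session mode connections on this Hyper-V host
Set-VMHost -EnableEnhancedSessionMode $true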

The Last Word

Windows 8.1 and Hyper-V are joined at the hip, and much of the goodness that exists in this latest hypervisor release is meant to work in conjunction with Windows 8.1. Take a look today.

This article was published with edits on SearchWindowsServer.com.