So You’re New to Azure, Part 1

You may have heard a lot about Windows Azure, soon to become Microsoft Azure. But you might not know exactly how the Azure service can fit into your plans. Windows Azure has come a long way from its initial introduction in the fall of 2008 and really has something for most businesses and organizations. Windows Azure can effectively function as

  • A complete datacenter for microbusinesses and small businesses
  • A disaster recovery solution for organizations of all sizes
  • A scalable way to host web sites outside of your existing infrastructure
  • A way to run lab and test scenarios using virtual machines nearly identical to the ones you can create in your office

If you’re new to Azure, it can be difficult to know where to begin. The idea behind this new series of articles is to take you, as someone brand new to Windows Azure, through the options and show you exactly how you can get set up and running, and how you can enable some of the cooler scenarios that Azure makes possible.

What Kinds of Things Can You Set Up in Windows Azure?

Once you have established your Windows Azure account, you can set up:

  • Web sites, which are basically managed Internet Information Services (IIS) accounts that run specific web applications in either free mode, which operates at no cost to you but has significant limitations on capacity, or paid mode, which assigns specific resources at a cost
  • Virtual machines, which we’ll talk about in the next section
  • SQL databases, which can be used by web sites or other cloud services
  • Storage, including virtual hard disks of any size depending on what kind of budget you have
  • Media Services, such as streaming audio and video, with probably better availability metrics than you would be able to achieve on your own
  • Visual Studio Online, which is essentially hosted source code control for your development teams, based on the well-regarded Team Foundation Service

The console (shown in Figure 1) is easy to use and shows all of these options in the left menu area.

Figure 1: The Windows Azure Management Console

 

Setting up a Virtual Machine

One of the best ways to use Microsoft Azure is as a test bed. In just a few clicks, you can spin up any number of virtual machines, customize their networking, and use them just as if you unboxed physical hardware and plugged it into your local switch. It’s great for testing scenarios, getting to know new software, working on certification exercises, and more. Let’s look at how to get started creating virtual machines.

  1. On the manage.windowsazure.com control panel, click Virtual Machines.
  2. In the middle pane, click the Create a Virtual Machine link.
  3. The UI will bring up a gray box divided into four columns, as shown in Figure 2. You can choose the Quick Create option, which lets you input all of the options needed to spin up a VM on one screen to get you started as quickly as possible, or the From Gallery option, which lets you browse a library of virtual machine images with all manner of operating systems, including Windows Server 2008 R2, Windows Server 2012, Windows Server 2012 R2, various flavors of Linux, Oracle, and pre-built application suites like SharePoint, BizTalk, and Dynamics NAV and GP. For the purposes of our example, we’ll just set up a quick Windows Server 2012 R2 virtual machine using the Quick Create option.
  4. In the rightmost column, enter:
    1. The name for the virtual machine (between three and 15 characters, and no funky symbols as it is a DNS name; see the quick validation sketch just after these steps)
    2. The base OS image to use—if you’re in doubt, for now just create a Windows Server 2012 R2 machine to check things out
    3. The size, which is basically how many processor cores your VM will use and how much RAM will be dedicated to the machine—more cores and more memory costs money, of course
    4. The administrator’s username and password
    5. Which datacenter this virtual machine should run in—pick the closest one to your current location.
    6. Click Create Virtual Machine.
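The naming rules in the first sub-step above are easy to trip over, so here is a minimal Python sketch, purely illustrative and not part of any Azure tooling, that checks a candidate name against the constraints described above (3 to 15 characters, DNS-safe, no funky symbols).

```python
import re

def is_valid_vm_name(name: str) -> bool:
    """Rough check of the quick-create naming rules described above:
    3-15 characters, letters/digits/hyphens only (the name becomes a DNS
    label), and no leading or trailing hyphen."""
    if not 3 <= len(name) <= 15:
        return False
    return re.fullmatch(r"[A-Za-z0-9](?:[A-Za-z0-9-]*[A-Za-z0-9])?", name) is not None

print(is_valid_vm_name("testlab-dc01"))  # True
print(is_valid_vm_name("my_server!"))    # False: underscore and '!' are not DNS-safe
print(is_valid_vm_name("vm"))            # False: too short
```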

Figure 2: Creating a New Windows Azure Virtual Machine

At that point, the console will redirect you to the Virtual Machines page, and you will see the provisioning process take place under the Status column, as shown in Figure 3. After five minutes or so, your machine should have a green check mark and the text “Running” listed, which means it is ready to go. Congratulations, you have an instance of Windows Server running in the cloud.

Figure 3: Provisioning a new virtual machine

 

Virtual machines are accessible from the Internet using a *.cloudapp.net address, where the asterisk is the name of the virtual machine you entered during setup. You do need to enable Remote Desktop in order to get an RDP session going to your new virtual machine, which we will cover in the next section.
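Before launching a client, you can confirm that the public name resolves and that the RDP endpoint answers. The following sketch uses nothing but the Python standard library; the host name testlab-dc01.cloudapp.net is a made-up example, and the public port may not be 3389 if Azure assigned the endpoint a different external port, so adjust both to match your own VM.

```python
import socket

def rdp_endpoint_reachable(host: str, port: int = 3389, timeout: float = 5.0) -> bool:
    """Resolve the *.cloudapp.net name and attempt a TCP connection to the
    RDP port; True means the handshake succeeded."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

# Hypothetical VM name from the walkthrough; substitute your own.
print(rdp_endpoint_reachable("testlab-dc01.cloudapp.net"))
```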

How to use Remote Desktop to Administer Your Azure Virtual Machine

As you might suspect, to manage virtual machines in Windows Azure, you use the Remote Desktop Protocol. Advantages? The client is built directly into Windows, so there is no need to download any additional software. Disadvantages? Not many, unless you’re on a Linux or Mac machine, where the clients are less full-featured.

To log in via Remote Desktop, go over to the Virtual Machines page, click the virtual machine you want, and then, on the bottom control panel, click Connect. Your browser will prompt you whether to open or save an RDP file with the name of your virtual machine, as shown in Figure 4. Save this to your desktop or another location. Then open the RDP file, which will launch the Remote Desktop Connection client. Accept the security warning (this appears simply because the certificate presented by your new virtual machine is not trusted by your current machine), enter the username and password you configured when creating the machine, and then you are in.

Figure 4: Downloading the RDP file from the management console

 

What’s Next?

In the next post, I’ll show you how to make your Azure virtual machines a natural extension of your own on-premises network.

The Hunt for the Data Scientist

It is hard to resist the sparkly nirvana that Big Data, leveraged appropriately, promises to those who choose to embrace it. You can transform your business, become more relevant to your customers, increase your profits, and target efficiencies in your market all by simply taking a look at the data you probably already have in your possession, but have been ignoring due to a lack of qualified talent to massage it.

Enter the data scientist, arguably becoming one of the hottest job descriptions on the market. The perfect candidate is a number whiz, a savant at office politics, and plays statistical computing languages like a skilled pianist. But it can be hard to translate that ideal into an actionable job description and screening criteria. In this piece, I explain several points and virtues to look for when identifying suitable candidates for an open data scientist position on your team, and also note some market dynamics when it comes to establishing compensation packages for data scientists. Let’s dig in.

Qualities to Look For in a Data Scientist

Since “data scientist” is a bit of a new concept, without a lot of proven job descriptions, you will want to work closely with your human resources department on the rubric and qualifications you use to screen initial resumes and to set up a first round of interviews. However, here are some salient points to add to your list for qualifying candidates, which should prove useful during the process:

  • A good data scientist candidate will have a keen understanding of statistics and the law of large numbers. Trends are seen in numbers. For example, a good data scientist understands, “this many customers behave in this certain way.” “This many customers intersect with others at this many precise points.” Over large quantities of data, trends pop out, and a great data scientist has the skillset to interrogate that data, extract the trends, and use predictive modeling techniques to anticipate behavior across the aggregate dataset. Statistics are also helpful in preparing reports for management and in prescribing recommended courses of action. While a mathematics degree would be ideal, many qualified candidates will have taken a slightly more practical academic path, and you should not be scared away by interviewees who do not have advanced mathematics credentials. A focus on statistics in a candidate’s academic career, whether at the bachelor level or above, would prove sufficient for this type of position.
  • A good data scientist will have a strong inquisitive sense. Part of the allure and mystique of big data is the art of teasing actionable conclusions from a giant haystack of (typically) unstructured data. It is generally not enough to know how to write queries that find specific information; the candidate must also be able to generate the context: which queries should be run, what data we would like to know, and what data we might not yet realize we would like to know but that could be of interest. Great data scientists execute queries and database runs, but they also design queries in ways that not only return a defined set of results to answer a question someone already asked, but also reveal new insights into questions the organization has not yet asked. This is where the real value of a data scientist will present itself over the coming years, I predict. While some might argue that this is a soft skill that is difficult to interview for, I disagree. Carefully crafted hypothetical scenarios can be presented to candidates during interviews that will help you understand their thought process, their approach to a problem, the various ways they would attempt to glean answers, and what other questions they could pose that would add value to the original query. Stress to candidates during the interviews that “outside the box” thinking is encouraged and limiting answers to only the problems posed is discouraged.
  • A good data scientist will have a familiarity with database design and implementation. It is important for today’s data scientists to sit somewhere in the range between an inquisitive university research scientist (which is essentially what my last point describes) and a software developer or engineer, someone who knows how to tune his lab and operate his machinery well. Even though much of what falls under the “big data” category is known as unstructured data, a fundamental understanding of both relational and columnar databases can really serve a data scientist well. Many corporate data warehouses are of the traditional row-based relational sort, and while big data is new and alluring, much actionable data and many trends can be teased out of traditional databases. Also, data scientists will play a key role in setting up analytics and production databases to take advantage of new techniques, and a history of working with databases provides great context for setting up new systems in the new role. Additionally, we see many big data software developers attempting to use SQL-like languages in their products in an attempt to woo traditional database administrators who have no desire to learn a MapReduce-like language. A knowledge of traditional SQL will continue to pay dividends and will allow data scientists to play nicely and integrate well with the other database professionals you already have on your staff.
  • A good data scientist will have a baseline proficiency in a scripting language. Your most qualified candidates should be awarded extra points for knowing Python at least somewhat well. Many query jobs over vast quantities of unstructured data are issued in scripts and take quite some time to run. Python is generally accepted as the most compatible, most versatile scripting language for working with columnar databases, MapReduce-style queries, and other elements of the data scientist puzzle. Python is an open source language known to be fairly usable and easy to read, so it should not pose much of a hurdle for your base of data scientist candidates to overcome. You could also consider “pseudo code” skills, or the ability to write out almost in plain English how an algorithm or a query would work; such a test shows the quality of the thinking and the approach to a problem, as well as how your applicant would begin to solve it, regardless of whether he or she actually possesses the skills in any given language to pull it off. (A small example of this kind of exercise appears just after this list.)
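To make those last two points concrete, here is the flavor of short exercise you might hand a candidate: plain Python, standard library only, aggregating a pile of semi-structured records and surfacing a simple trend. The field names and records are invented for illustration; what matters is the map-then-summarize thinking, not the specific data.

```python
from collections import Counter, defaultdict

# Invented sample of semi-structured customer event records
events = [
    {"customer": "A", "action": "purchase", "amount": 42.50},
    {"customer": "B", "action": "browse",   "amount": 0.00},
    {"customer": "A", "action": "purchase", "amount": 19.99},
    {"customer": "C", "action": "return",   "amount": -42.50},
    {"customer": "B", "action": "purchase", "amount": 5.00},
]

# "Map" step: tally actions and accumulate spend per customer
action_counts = Counter(e["action"] for e in events)
spend_by_customer = defaultdict(float)
for e in events:
    spend_by_customer[e["customer"]] += e["amount"]

# "Reduce"/summarize step: which behaviors dominate, and who drives revenue?
print("Actions:", dict(action_counts))
print("Net spend per customer:",
      dict(sorted(spend_by_customer.items(), key=lambda kv: -kv[1])))
```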

Be Prepared to Pay

One note about compensation: as demand for data scientists increases, and as long as that demand outstrips the supply of qualified candidates, salaries will keep rising. In almost any metro market in the United States, data scientists are receiving six-figure base salaries—obviously higher in high-cost markets like the West Coast. In Silicon Valley in particular, multiple offers for a qualified candidate are not uncommon. Do not attempt to pay below market rates for this position. Even startups are paying data scientists comfortable wages plus the ability to work on challenging new products, unlike their traditional modus operandi of loading up on equity and paying measly wages. Put simply, don’t cheap out and expect great talent.

This article was published with edits on CIO.com.

What CIOs Can Learn from the Recent Target Payment Systems Breach

We are likely all familiar with the Target payment card breach in late 2013, wherein millions of payment card numbers were stolen through a huge hole in the company’s network, right down to the security of the PIN pads. As of March 5, 2014, the breach had cost Target’s chief information officer, Beth Jacob, her job. It was, and still is, a very serious matter.

Obviously Target is a very public company, and this situation garnered a lot of attention. As a CIO or a member of executive technical staff, however, there are some observations you can make about the situation. What can you take away from Target’s very public example of a breach? Here are four key lessons.

  • It is vitally important to know which alarms and alerts to respond to and which ones you can safely ignore as an organization. Security vulnerabilities are in all actuality a dime a dozen in this connected age. Different software has different risk profiles, and some vulnerabilities that affect certain organizations severely are already safely mitigated in other organizations simply by the structure of how components are set up. Performing a thorough threat analysis is crucial, but knowing how to manage the onslaught of event logs, audit logs, vendor vulnerability notifications, and intrusion prevention messages is just as critical. One best practice is to develop a rubric by which a weight is assigned to alerts about security vulnerabilities and attempted penetration. Depending on what business you are in, you can score this either by the system involved or by the source of the alert. Some considerations might include the following:
    • If you are a retail business, payment systems alerts should clearly be given priority. Typically these payment systems are segregated from other networks, but patching alerts from your vendors, security audit logs, and activity monitoring should be reviewed at a high frequency, and particular attention should be paid to anomalies that appear in these results. Internet-facing businesses should always ensure that fraud prevention measures are in place and that shopping cart and e-commerce software is patched and monitored.
    • Alerts from your intrusion detection system or honeypots should also be given a higher priority than other alerts. However, it may be necessary to fine-tune thresholds so that one-off attempts do not raise alarms, but repeated attempts that display similar characteristics are evaluated for their consistency and then bubbled up to the appropriate levels for technical review and analysis.
    • Other regular software vulnerabilities, like those in file servers and desktop software, should be cataloged and analyzed but should fall below riskier parts of your technology stack in priority.

It is best to spend time now creating a judgment structure by which you can evaluate alerts and threat messages so that the signal-to-noise ratio of these notifications is as high as it can be. This way, “red alert” messages get the attention they deserve immediately, while “yellow alert” messages can be analyzed at a less urgent pace.
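As a purely illustrative sketch of what such a rubric could look like in code, the weights, sources, and thresholds below are invented assumptions rather than any standard; the point is that a few lines of Python are enough to turn a judgment structure into something a team can actually run against an alert feed.

```python
# Illustrative only: the weights and categories are assumptions, not a standard.
SOURCE_WEIGHT = {
    "payment_system": 50,
    "ids_honeypot": 35,
    "vendor_advisory": 20,
    "desktop_software": 10,
}

def score_alert(source: str, repeated: bool, anomalous: bool) -> int:
    """Combine where the alert came from with how it is behaving."""
    score = SOURCE_WEIGHT.get(source, 5)
    if repeated:    # repeated attempts with similar characteristics
        score += 25
    if anomalous:   # deviates from the baseline for that system
        score += 20
    return score

def triage(score: int) -> str:
    return "red alert" if score >= 70 else "yellow alert" if score >= 35 else "log and review"

for source, repeated, anomalous in [
    ("payment_system", True, True),
    ("desktop_software", False, False),
]:
    s = score_alert(source, repeated, anomalous)
    print(f"{source}: score {s} -> {triage(s)}")
```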

  • As a CIO, it is important to fight to get funding for a chief information security officer position to handle these significant security and liability responsibilities. As the old saying goes, the buck must stop somewhere, and as with most things technology, the head of the information services organization is likely to get the blame. But CIOs are burdened with more areas of responsibility than ever these days, from generally keeping the computers running, to creating new technology-driven lines of business that can actually represent a profit center for a company, to liaising with marketing and the executive suite to unlock secrets that lie within the massive amounts of data stored in the corporate data warehouse. Security is an important part of all this, and creating a security regimen and implementing it throughout the organization is really best done by a dedicated CISO, someone whose sole job is to monitor the security posture of a business and then carefully and deliberately enhance it over time. A CIO is simply too rushed, too spread thin, to fully handle this responsibility—and Target shows why. It took several weeks to get to the bottom of the extent of the breach. According to multiple reports, it took days to even discover the breach before the media caught on to it. And as we all saw, it seemed Target discovered more and more about exactly what data was lost in the attack, judging from the trickled release of information to the public and to the media. One can imagine the frenzy within Target of getting to the bottom of what happened, reacting to it, preventing the situation from deteriorating, and activating response plans. The buck stopped with Beth Jacob, the Target CIO, and her response was left somewhat wanting. It is a very real possibility that she simply had too much on her plate. Additionally, the creation of a specific security head is a sign to the rest of the organization that security is serious business, and having such a position generally gives the CISO the autonomy required to put in place the right remedial measures to enhance security. Having to work through a chain of command not dedicated to security can delay or even jeopardize necessary technical improvements because of a lack of clear communication or an inability to convince others that some measures are necessary.
  • An incident response plan is absolutely key to a successful recovery in the event of a breach. In the hours and initial couple of days right after a breach has been discovered, there is usually only one priority: fix the breach, at all costs. Stop the bleeding. And this is a fine approach for the technical team. However, others in your organization need to at least be activated to begin planning a communications approach that keeps all stakeholders informed. Witness the somewhat haphazard way in which Target disclosed the breach. Were PINs compromised or just payment card numbers? Were PINs leaked? Were encrypted PINs leaked? Was anything leaked? The story seemed to change as the situation developed, which is a symptom of an incomplete crisis communications plan. I will note, however, that the PIN pads and perhaps other payment and point-of-sale equipment were replaced at my local Target location within a couple of days after the initial announcement of the breach. This is a sign of an excellent technical response plan.
  • The weakest point in your security posture is very likely an area you have not even considered yet. Consider the Target breach, which began with a heating, ventilation, and air conditioning contractor having access to a wireless network on the vulnerable side of the Target corporate firewall. It all started because something as innocuous as a thermostat was not functioning correctly. Remember that hackers and crackers are sophisticated and, at this level, are playing a long game to nail lucrative, high-value targets. They are looking where they think you are not looking. As a CIO, it is your job to direct your teams to batten down any and all hatches, procedural, technical, and otherwise. Provide the leadership and the ethos to make this type of watchful, deliberate security a priority.

This article was published with edits on CIO.com.

Top 5 Reasons to Run Windows 8.1 on Hyper-V

With Windows 8.1 generally available and on the streets now, the time has come to take a deeper look at its features. The way to do that for most IT folks is to install Windows 8.1 in a virtual machine. If Hyper-V happens to be your hypervisor of choice, then there is good news: Hyper-V and Windows 8.1 work really well together. Whether you are running Windows 8.1 on your desktop and Hyper-V for a couple of test machines, or you are deploying Windows 8.1 in a virtual desktop farm running Windows Server 2012 or Windows Server 2012 R2 on the backend, there is a lot to like about Hyper-V and Microsoft’s latest OS. Let’s dig in.

Generation Two Virtual Machines

Windows 8.1 is designed to work within what Microsoft calls a Generation 2 virtual machine. This is a new type of VM available exclusively under Hyper-V that essentially strips away all pretense of virtualizing legacy PC deployments. Generation 2 virtual machines are UEFI based, rather than relying on a BIOS, and there are no emulated devices. These VMs boot off of virtual SCSI and network adapters directly and also support secure boot, the preferred way to ensure only signed kernels are permitted to boot within the virtual machine. The only supported guest operating systems are 64-bit versions of Windows 8 and Windows Server 2012 and later, but Windows 8.1 is easier to set up within one of these VMs because it includes the necessary Gen 2 keyboard driver needed to type in a product key for setup (those pesky things!) – otherwise you’re flying blind, or pecking on the on-screen keyboard, or tearing your hair out.

Why a generation two virtual machine over the existing VM standard? For one, it gets rid of all of the goo that makes the VM look like a physical computer because unlike the late 90s, people virtualize first in most environments these days. Windows 8.1 has a deep knowledge of what it means to be on a virtual machine and so there is no need for the trappings and the pretense of trying to look like a physical computer and the performance overhead that comes with that.

For VDI Deployments, Deduplication

If you are considering running Windows 8.1 in a virtual desktop infrastructure (VDI) environment, with scores of hosts running Windows 8.1 guests in virtual machines that your users log in to as their daily drivers, then the Windows Server 2012 R2 deduplication feature can save an enormous amount of space and increase performance at the same time.

Instead of storing multiple copies of Windows 8.1 in your guest virtual machines’ disk drives, which is simply wasted space, Windows Server 2012 R2 supports deduplication—it reads the files on a volume and then optimizes by storing only one copy of a file and replacing the duplicate copies with “pointers” to that single stored instance. This deduplication feature supports open VHD or VHDX files, meaning it can deduplicate files on a volume with running virtual machines, so you don’t have to take your VDI farm down in order to begin using this feature. While this feature was first introduced in Windows Server 2012 gold, the performance of the optimization algorithm has been improved, and it completes faster than it did in the previous release.
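The pointer idea is easier to picture with a toy model. The sketch below is emphatically not how the Windows Server 2012 R2 deduplication engine works internally (the real feature chunks data below the file level); it simply illustrates the store-one-copy, point-everything-else-at-it concept and the kind of savings that fall out when most of the data is identical.

```python
import hashlib

def deduplicate(files):
    """files: dict mapping file name -> bytes content.
    Returns (single-instance store, per-file pointers, fraction of space saved)."""
    chunk_store = {}   # content hash -> the single stored copy
    pointers = {}      # file name -> content hash (stand-in for a reparse point)
    for name, data in files.items():
        digest = hashlib.sha256(data).hexdigest()
        chunk_store.setdefault(digest, data)   # keep only the first copy seen
        pointers[name] = digest
    original = sum(len(d) for d in files.values())
    stored = sum(len(d) for d in chunk_store.values())
    return chunk_store, pointers, 1 - stored / original

# Ten pretend VDI guest disks whose contents are identical in this toy example
guests = {f"vm{i}.vhdx": b"windows 8.1 system files" * 1000 for i in range(10)}
_, _, saved = deduplicate(guests)
print(f"Space saved: {saved:.0%}")   # 90% here, echoing the figure cited below
```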

What’s the result? When the deduplication is finished, Windows is actually able to read those optimized files faster than if they were not deduplicated, and with a VDI deployment in particular, as opposed to just a typical file server, you can see space savings as high as 90% with minimal impact on performance. Kind of a no-brainer to make your VDI farm boot faster and gain back 90% of its storage at the same time, no?

Storage Quality of Service (QoS)

Much like network quality of service (QoS) allows administrators to define caps for certain types of network traffic in order to ensure enough bandwidth exists for other activities and no one type of traffic sucks up the entire network pipe, Hyper-V in Windows 8.1 and Windows Server 2012 R2 supports storage QoS. This feature lets you restrict disk throughput for overactive and disruptive virtual machines, which is great if you have a long-running process in one VM and you don’t want the overall I/O performance of your host machine dragged down. Since this feature is dynamically configurable, you can even adjust the QoS settings while the virtual machine is running so you do not interrupt the workload of a given VM.
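To see why a cap helps the quieter virtual machines, here is a deliberately simplified Python model of per-VM I/O limits. It is a generic rate-sharing sketch, not the actual Hyper-V storage QoS mechanism or its real settings, and the IOPS numbers are invented.

```python
def serve_io(demand_per_vm, cap_iops=None, host_iops=10_000):
    """demand_per_vm: dict of VM name -> IOPS the VM tries to issue.
    An optional per-VM cap limits each VM; if total demand still exceeds
    what the host can deliver, everyone is scaled back proportionally."""
    wanted = {vm: min(d, cap_iops) if cap_iops else d
              for vm, d in demand_per_vm.items()}
    scale = min(1.0, host_iops / sum(wanted.values()))
    return {vm: int(d * scale) for vm, d in wanted.items()}

demand = {"sql-vm": 9_000, "web-vm": 1_500, "file-vm": 1_500}
print("No cap: ", serve_io(demand))                  # the noisy VM squeezes the others
print("Capped: ", serve_io(demand, cap_iops=4_000))  # smaller VMs get their full demand
```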

Online VHDX Resize

Ever had a virtual machine run out of disk space? If you set up the virtual machine with a fixed-size virtual disk rather than a dynamically expanding disk, this could pose a problem. However, the new Hyper-V in Windows 8.1 and Windows Server 2012 R2 allows you to increase and decrease the size of virtual hard disks of a virtual machine while the virtual machine is running—a hot resize, if you will. The virtual machine in question can be running any guest operating system, so there is no limitation on running, say, Windows XP or even Linux with this feature, but the virtual hard disk file must be in the newer format (VHDX) as opposed to the older but more popular VHD format.

Enhanced VM Connect

Hyper-V in Windows 8.1 and Windows Server 2012 R2 offers what Microsoft calls “enhanced virtual machine connect,” which is essentially the ability to use the remote desktop protocol to connect to a virtual machine even if the network within the virtual machine is down. Hyper-V uses VMBus, the internal communications channel for virtual machines to the hypervisor, and transmits RDP over the VMBus independent of the network connection. Also as part of this enhanced mode, you can now drag and drop files between the host and the virtual machine (which makes use of the clipboard sharing capabilities) and you can redirect local resources like smart cards, printers, and USB devices right over that VMBus connection. This makes it much simpler to perform troubleshooting and simple administration. (Note, this enhanced mode is enabled by default if you’re running Hyper-V on Windows 8.1, but disabled by default if you are running it on Windows Server 2012 R2.)

The Last Word

Windows 8.1 and Hyper-V are joined at the hip, and much of the goodness that exists in this latest hypervisor release is meant to work in conjunction with Windows 8.1. Take a look today.

This article was published with edits on SearchWindowsServer.com. 

Windows 8.1 Device Encryption – The Skinny

In Windows 8.1, Microsoft has taken the NSA to heart. OK, perhaps not the National Security Agency, but it certainly has jumped on the encryption bandwagon, and now, as it already did with its Surface RT tablet line, Windows 8.1 automatically encrypts disk drives by default. This encryption is active when you first turn on a Windows 8.1 device, and a user would be hard-pressed to know anything was different.

But there is a difference: the protection is not complete until you go through the step of uploading the recovery key, which you would need to decrypt the files on the protected drive, either to a Microsoft account, where it is stored on SkyDrive and is accessible to the device owner from wherever there is an Internet connection, or to Active Directory, the way traditional BitLocker encryption in Windows Vista, Windows 7, and Windows 8 has worked.

Here are some details on looking at whether devices are encrypted, how to manage that encryption, and what devices currently support this encryption.

How to Tell if Devices are Encrypted

In Windows 8.1, there are two places you can look to see device encryption status:

  • In the Modern (formerly known as Metro) environment, you can check out Settings, drill down to PC and devices, and then PC info. There will be a section in the right pane called “Device encryption” that reports the current status of encryption on the device itself and also lets you know of any next steps to enable encryption should it not be on already. There is also a button to disable encryption on the device, although this is not a best practice and is not recommended.
  • In the old-fashioned Disk Management snap-in to the Microsoft Management Console, you can see in the bottom part of the window, where each physical disk is listed, the volumes that are on those disks. Encrypted volumes will read “BitLocker Encrypted” right beside the partition size and file system display.

You can only save recovery keys to a Microsoft account through the Modern interface and the PC Settings screen. There is no Control Panel applet in the traditional desktop interface for managing the encryption.
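If you are comfortable at a command prompt, the built-in manage-bde tool reports the same status. The wrapper below is a hedged sketch: it assumes manage-bde.exe is available on your edition, that you run it from an elevated prompt, and that the output contains the usual “Protection On” wording, which can vary by build and locale, so treat the string match as an approximation.

```python
import subprocess

def bitlocker_protection_on(drive: str = "C:") -> bool:
    """Run 'manage-bde -status <drive>' and look for the protection line.
    Requires elevation; the exact output wording may differ by locale."""
    result = subprocess.run(
        ["manage-bde", "-status", drive],
        capture_output=True, text=True,
    )
    return "Protection On" in result.stdout

print("C: protected:", bitlocker_protection_on("C:"))
```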

Group Policy Objects Used for BitLocker

As mentioned, while BitLocker can store recovery keys in SkyDrive now as part of the device’s Microsoft Account integration, enterprises are much more likely to be interested in using Active Directory and Group Policy to manage recovery keys. The necessary BitLocker GPOs to configure are located in

Computer Configuration\Administrative Templates\Windows Components\BitLocker Drive Encryption

To have recovery keys stored in the directory, use the following GPOs:

  • Choose how users can recover BitLocker-protected drives (Windows Server 2008 and Windows Vista)
  • Choose how BitLocker-protected operating system drives can be recovered
  • Choose how BitLocker-protected fixed drives can be recovered
  • Choose how BitLocker-protected removable drives can be recovered

You can use the following reference on Microsoft TechNet to prepare Active Directory to receive protected recovery keys.

http://technet.microsoft.com/en-us/library/dd875529(v=ws.10).aspx

Hardware Requirements for Always On Encryption

For a Windows 8.1 device to be encrypted by default, the following hardware needs to be present and the following conditions need to be true:

  • Secure Boot support, which requires both an x64 edition of Windows 8.1 (not your old-fashioned 32-bit version) and UEFI firmware instead of a BIOS.
  • A Trusted Platform Module, or TPM, chip, which has been standard issue on most business and corporate-oriented laptops since 2008. TPM chips are typically not used on inexpensive laptops marketed primarily toward consumer channels, although this will likely change as a result of the Windows 8.1 device specifications.
  • Support for connected standby, which is the mode where almost all of the device is put into an extremely low power state except the network device, which maintains a connection and wakes up every so often to receive push notifications, e-mails, and other network information. Connected standby requires a solid state drive rather than spinning media, soldered memory that does not come in pluggable SIMMs or DIMMs, and network cards compatible with NDIS 6.30.

Generally, the connected standby requirement is going to hold up pervasive deployment of device encryption, as it requires a tablet-like approach to designing a system. But then again, on-the-go devices are the ones that need encryption the most, are they not?

This article was published with edits at SearchEnterpriseDesktop.com. 

Network Virtualization Features in Windows Server 2012 R2

Virtualization has taken the IT world by storm. But now that hardware virtualization is mainstream, the industry is focusing its attention on a new target: the network. But how does that work? What does network virtualization entail, and what advantages does it provide? In this piece, I will explain the logic behind network virtualization, why it matters, some of its limits, and how Windows Server 2012 R2 implements some of these features.

What Is Network Virtualization?

At its core, software defined networking is virtualizing the network. Think of what the virtualization phenomenon has done for the productivity and efficiency of the everyday IT person.

  • Capacity is better used because entire machines no longer sit idle simply because they were only given two or three jobs. All of the hardware capacity of a machine can be carved up into virtual machines so that no part of the hardware is idle and the investment in the hardware is maximized.
  • Drivers, hardware bugs, firmware updates, and the like are drastically reduced because for every five operating systems you are handling, only one is running directly on the physical hardware and thus needs this level of maintenance attention. Virtual machines all use identical synthetic drivers.
  • This abstraction of the real hardware from the operating system instances running your business workload affords a great deal of flexibility when it comes to IT operations. You can cut, paste, move, delete, reimage, splice, duplicate, and otherwise mess around with virtual machines to your heart’s content because they are merely a set of files. Migrate machines, upgrade them, back them up—in the end it is all just a matter of getting the right files in the right places.

Now all of those advantages are tied to virtualizing hardware. Since all software is by definition virtual, there is no advantage to be gained by running synthetic instances of programs. So what is left to virtualize? The wires. The connectivity. The network.

If you live in the Microsoft space, you may have heard about the company pushing network virtualization—you may sometimes hear it called software defined networking—as part of its campaign to convince us all that clouds, both public and private, are the wave of the future. Virtualizing anything and everything possible is a requirement on the way to Cloudville, and the network is included in that. As we delve into what network virtualization makes possible, you will see that Microsoft’s Hyper-V product has network virtualization features that give administrators tremendous power and flexibility to move their operations around within their own systems, into public clouds, and in hybrid public-private cloud relationships. To see Microsoft delivering network virtualization features is expected, as Windows Server’s new raison d’etre is to be cloud ready in every way.

Advantages of Network Virtualization

By abstracting away the complexity of how wires are connected to switches and how switches are connected to routers and how routers are connected to backbones, you come away with several key advantages:

  • Deploying new services is easier across different geographies. When all of your network appears available to you for configuration across all of your properties in all of your regions, no matter the switch vendor, cabling, router manufacturer, telecom company, or service provider, you have an amazing ability to respond to business needs wherever they are. Move extra service capacity from your US datacenters to Singapore in response to business demands over there, and then move it back. Bring up additional servers on a Russian network from a single console in Washington.
  • Automatic reconfiguration of your network based on certain conditions and circumstances. Virtual networks can be spun up, taken down, merged, separated, moved into different datacenters, routed differently, and extended across premises, all with a few clicks of the mouse and without ever touching the underlying network cards, routers, switches, and cabling.
  • Self-service creation of services is made possible because the entire user-facing compute stack is virtual and abstract. You can set up recipes or portals that allow individual administrators or departments to create their own networks, consisting of any number of virtual machines, storage resources, and network connections. These virtual networks can be created automatically, addressed properly, and routed correctly without any human intervention, which is really powerful and a huge timesaver from the days when forms had to be filled out and workers had to traipse through wiring closets in datacenters trying to make sure individual servers were connected in the right way. Plus, end users can simply say, “I want this package of services,” rather than ordering distinct network components that may or may not be compatible. This is one part of the value of abstraction.
  • The network capacity in which you have already invested is better used and more efficiently connected. Gigabit and ten-gigabit connections between servers often sit idle. By stacking virtual networks on these connections, you can better use the connectivity you have already paid for. Additionally, the expensive telephone company and Internet service provider connections can be used much more optimally, because you can run a variety of “virtual tunnels” across single links rather than having to maintain dedicated pipes for each and every purpose that requires connectivity between sites.

Limitations of Network Virtualization

As with any solution, there are certain limitations and disadvantages of network virtualization. Three key downsides of software defined networking include the following:

  • Your existing cheap hardware is probably not compatible. Network virtualization and software defined networking rely on intelligent hardware underneath that knows how to configure and route the tags and other virtual instructions that help keep up the abstraction between the physical wire and the logical packets traversing it. Switches that do not have management capabilities—these are sometimes called dumb switches—are almost exclusively not compatible with network virtualization. Even some older managed switches are not one hundred percent compatible with network virtualization, although if your switches and other hardware support VLAN (virtual LAN) tagging, then there is a good chance that you can begin to deploy network virtualization and software defined networking without further investment in hardware.
  • Your setup can get pretty complex very quickly. You may have heard about virtual machine sprawl, which essentially is a symptom of the ability to create virtual machines with wild abandon because it is so easy. Likewise it can be very easy to create new virtual networks, and without a decent governance plan and proper procedures in place, the complexity of your virtual network deployment can grow endlessly. Good detailed documentation is key here.
  • Capacity usage has its limits. Just as you can only put so many virtual machines on one given host, you can only have so much traffic crossing one wire and traversing physical routers and switches. Network virtualization does not immediately multiply your bandwidth and throughput. It only helps you make more efficient use of what you already have. Additionally, some of the processes, like tagging and encapsulation, that are part and parcel of network virtualization have some overhead that may eventually slow down your network devices. There is, indeed, a cost to flexibility.

Network Virtualization in Windows Server 2012 R2

Up to this point we have discussed network virtualization in general. But how does Windows Server 2012 R2 play into this? What advantages does one get when using the software defined networking features of Windows Server 2012 R2? In fact, Hyper-V works hand in hand with the network stack in Windows Server 2012 R2 and enables a lot of these network virtualization scenarios.

Windows Server 2012 R2 can also partially absolve concerns over the age of your networking equipment. Rather than requiring the use of virtual LANs, or VLANs, which are complex and somewhat clumsy to manage, Microsoft uses network virtualization over GRE, or NVGRE, to manage virtualized networks and keep those traffic packets isolated on the network wire itself. GRE is an old protocol that is supported by many managed devices, and on many of these pieces of hardware NVGRE just rides on top of the existing configuration of the device, meaning you can get started reaping the benefits of network virtualization almost immediately without tearing your hair out reconfiguring Juniper and Cisco routers using IOS.

The use of the NVGRE protocol makes it possible to have a virtual network and whatever IP space is assigned to that network stretch across a huge number of devices—Hyper-V hypervisor hosts, physical IP subnets (the non virtualized real network configuration), and physical boundaries like the edges of datacenters. This means you can have a machine in Europe on the same virtual local network as a machine in Australia, with a consistent IP space and naming mechanism, and the ability to rapidly move and migrate virtual machines for fault tolerance purposes across huge distances—all because both of these machines appear to the operating system, and to you as the administrator for that matter, to be on the same local network. For all intents and purposes, these machines could be right next to each other connected to the same dinky little $20 Linksys switch. Indeed because of the consistent IP space, you can relocate entire networks. You can pick up a network full of virtual machines running, say, your accounting programs, and move them into a completely different datacenter on the opposite coast of the country and not have to do anything except finish the live migration wizard—the machines would live 2,400 miles from where they were operating before, but they would still be running and would require zero network reconfiguration. They would simply just work. That is the true power and flexibility of network virtualization in action.

In addition, in a highly virtualized environment, much of the traffic on the network wire consists of virtual machines and their attendant configuration and storage data being moved around from one physical host to another—because of downtime, reconfiguration, patching, or really any other system maintenance need. Previously, all of this migration traffic had to go out over the network, reducing the available throughput of the network and also slowing the traffic down from the hugely fast system bus speed (i.e., the speed at which the physical components of any given piece of hardware can talk to each other) to the highest speed the network hardware supports, which, no matter what the actual speed is, is significantly slower than staying within the confines of the same physical machine. Windows Server 2012 R2 lets you set up virtual switches so that VMs within the same host, or on hosts within the same datacenter rack or datacenter, can talk to each other at closer to that system bus speed rather than sending a bunch of data out on the wire—even across IP subnets.

In Windows Server 2012 R2, Hyper-V based network virtualization segregates all network traffic by virtual network. Each physical packet is tagged as it traverses the network so that only networks with tags in common can communicate with each other. (It is somewhat more complicated than that, but this explanation works well for the purposes of our overview.) This means you can have multiple networks running the same IP addresses in the same IP space with no conflicts, because a packet destined for, say, 192.168.0.100 with one tag does not interfere with another packet on another virtual network (i.e., with a different tag) destined for 192.168.0.100. This is great for hosters and other shared environments, because different customers can keep their preferred IP space assignments and deployments while still taking advantage of the infrastructure provided in your datacenters. It is also a real benefit for organizations that merge with or acquire other companies that already have pre-existing infrastructure—these companies can essentially absorb any existing services without having to perform very much, if any, network reconfiguration, while getting the benefits of private cloud style hosting and the ability to migrate around other parts of the network as well.
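A tiny Python model makes the same-address, different-tag point concrete. This is only a conceptual sketch of the isolation idea; real NVGRE encapsulates entire Ethernet frames and carries a 24-bit virtual subnet identifier, and the tag numbers and tenant names below are invented.

```python
# Forwarding is keyed by (virtual subnet tag, customer IP), not by IP alone.
lookup = {
    (5001, "192.168.0.100"): "Host-A: tenant Contoso's VM",
    (5002, "192.168.0.100"): "Host-B: tenant Fabrikam's VM",
}

def deliver(tag: int, customer_ip: str) -> str:
    """A packet only reaches endpoints that share its virtual subnet tag."""
    return lookup.get((tag, customer_ip), "dropped: no matching virtual network")

print(deliver(5001, "192.168.0.100"))  # reaches Contoso's VM
print(deliver(5002, "192.168.0.100"))  # same IP, different tag, different VM
print(deliver(5003, "192.168.0.100"))  # unknown tag: isolated, so the packet drops
```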

This article was published with edits for a special TechTarget offering.

Hyper-V in Windows Server 2012 R2 for Private Clouds: Why It May Make Sense For You

As the cloud moves from the sky to your datacenter, you might be wondering what Windows Server 2012 R2 and its Hyper-V hypervisor bring to the table in terms of features and capabilities specifically designed for building private clouds. In this piece, I’ve identified four key advantages for the new release.

  • Better live migration performance. One of the key tenets of a private cloud is the ability to move workloads around hosts with what amounts to impunity. When something goes wrong, the workload is what is important, and the ability to quickly get that task onto a healthy system ready for it without the client noticing any downtime is paramount. Otherwise, you are no more than a hoster. In Windows Server 2012 R2, improvements have been made to the Hyper-V live migration feature that really take the pressure off systems during migrations and make the entire process smoother. The contents of a virtual machine’s virtual memory are compressed and then sent over the wire, so you are reducing the amount of data that has to travel over the network by half or even more, depending on what the VM is running (a back-of-the-envelope sketch of what that buys you follows this list). Over the wire, R2-to-R2 migrations also use SMB 3.0, which is already quite a bit faster than its predecessor used in everything before Windows Server 2012, and SMB Direct—a feature on SANs and NASs that generally requires specialized hardware—can be used to juice transmission speeds even further, taking advantage of Remote Direct Memory Access (known as RDMA to storage wonks) to write directly to storage, bypassing the CPU of hosts.

 

  • A shared virtual hard disk for clustering. This is a pretty important point for building private clouds, because clouds involve fault tolerance, and fault tolerance in a Windows world means clusters. In Windows Server 2012 R2, you can use a single VHDX file as the required shared storage needed to build failover clusters. Clusters have to have a shared disk between all of the nodes for communication purposes, but now instead of having to create multiple virtual hard disks for use by virtual machines that are members of a cluster, you can now share out a single VHDX file and you are done. This VHDX file can even be hosted on shared storage, a cluster shared volume, or shared out over SMB.

 

  • Automatic virtual machine activation. Licensing and activation are both pesky topics that are essentially a tax imposed by Microsoft on administrators’ time. Microsoft has at least tried to make it easier in Windows Server 2012 R2 by taking away the messy product key requirements for virtual machines running on a host with Windows Server 2012 R2 that is properly activated—these machines will self-activate through communication with the host about their licensing and no keys or activation are required in order for these VMs to come up fully licensed and ready for workloads. You do not have to track spreadsheets of product keys and virtual machines, since this feature will also bubble up reports on usage and licensing in case you need it, but you do have to be running Windows Server 2012 R2 on both your hosts and your guests for this to work.

 

  • Windows Azure Pack. The Windows Azure Pack is a free additional download that touches the hypervisor and also integrates higher in the stack with the System Center suite that manages your datacenters and applications. Specifically, the Azure Pack will let you create service offerings for infrastructure, like Windows and Linux virtual machines, on demand, by working with Hyper-V and the Windows networking stack to carve up new virtual machines based on policies you control as the administrator. You can define particular types of VMs to offer, what their virtual memory and disk space sizes are, and how many of them can run at one time—and even which networks, physical or virtual, to connect them to. All of this is done through a pretty easy-to-use GUI that is essentially the exact same interface Microsoft uses on Windows Azure, just localized to work on your on-premises machines.
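Back to the live migration point in the first bullet above: the benefit of memory compression is easy to estimate on the back of an envelope. Every number below is an illustrative assumption (VM memory size, link speed, achievable compression ratio) rather than a Hyper-V measurement, and the model ignores protocol overhead and the re-copying of dirty pages.

```python
def migration_seconds(memory_gb: float, link_gbps: float, sent_fraction: float) -> float:
    """Rough time to push a VM's memory over the wire during live migration.
    sent_fraction is the share of the original size actually transmitted
    after compression; overhead and dirty-page re-copies are ignored."""
    gigabits_to_send = memory_gb * 8 * sent_fraction
    return gigabits_to_send / link_gbps

vm_memory_gb = 16   # assumed VM memory footprint
link_gbps = 10      # assumed 10 GbE link between hosts

print("Uncompressed: ", round(migration_seconds(vm_memory_gb, link_gbps, 1.0), 1), "s")
print("Half the data:", round(migration_seconds(vm_memory_gb, link_gbps, 0.5), 1), "s")
```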

This article was published with edits at SearchWindowsServer.com. 

The Pros and Cons of Relying on Startups for Key Business and IT Activities

It is hard to argue that technology startups are not sexy. They get all of the attention from the media, especially when investors give them a vote of confidence, they have the potential to make millionaires of people in one fell swoop, and sound business plans are typically optional—as are profits a lot of the time! As a CIO or senior IT leader, you may come across compelling products and services offered by startups. But should you bet any part of your business or operation on them?

Advantages

Businesses turn to startups all of the time for a variety of reasons. Key advantages include the following:

  • Startups generally tend to be disruptive, and the fruit of their disruptions can be quite valuable for businesses. The idea for a startup often revolves around taking an existing process or product that is “broken” and fixing it in a systematic, replicable way, and in often dramatic fashion. Think of some common business-related startups. Expensify saw a need to reform and recreate the often terrible process of submitting business expense reports for reimbursement. It created a system where employees can take pictures of receipts, have them imported directly into an expense report (even the amounts and descriptions on the receipts are read by optical character recognition technology), and submit them for approval, saving hours of manually creating spreadsheets or taping paper receipts together and mailing them in to corporate accounts payable departments. TripIt started because airlines and travel agencies were terrible at putting coherent, cohesive itineraries together for frequent travelers. TripIt makes assembling your confirmations and other travel documents a cinch by letting you forward those receipts via e-mail when you get them. TripIt scans them, puts them into a highly readable, accessible single itinerary, and then monitors your plans for changes. It disrupted a very inefficient process. Amazon started Amazon Web Services because it knew spare computing capacity, available by the hour and scalable as demand warranted, would be highly desirable for all sorts of businesses. It disrupted the process of buying expensive servers, storing them in expensive data centers, and using expensive IT administrators to manage those machines by making it as easy as clicking a few buttons and putting in a credit card number to access compute resources that would cost hundreds of thousands of dollars to actually own.

    In other words, startups are often very attractive because they find a way to solve complex problems or make inefficient scenarios much more efficient. There may be a significant payoff in using a startup’s product or services in terms of money saved and efficiency and productivity increased.

 

  • Startups can often be flexible, may offer value sooner, and will often ask “how high” when you request them to jump. Often in search of those first dollars of revenue, but even after the business finds its footing and starts working on its own, startups will court corporate IT departments and offer any number of service customizations, increased support availability, on-site setup, custom integration services with other line-of-business applications, and more, depending on what makes sense. Typically, when you are purchasing software or services from established vendors, such accommodations are either not possible or very hard on your budget. In addition, organizations using startups often have the ability to suggest features or otherwise influence the direction of the product. For midsize and large businesses, startups will often offer direct access to developers and test managers to get to the bottom of problems quickly and to understand where certain features and capabilities make sense. Larger and more established businesses will not generally offer this kind of access to the actual coders behind a product or service and may at best put your organization on some kind of customer advisory council or similar informal group.

    In many respects, working with a startup on these key activities is much like having an application developed in house, specifically for your organization, with a great support staff behind it and all the benefits thereof—the startup becomes a true partner, or an extension of your own staff development team.

Disadvantages

As always, there are cons to relying on startups. Among them are the following points:

  • The “always on” risk of capitalism may jeopardize the existence of your new vendor. Startups—well, IT startups especially—suffer from cash burn problems from the very beginning, almost invariably. To solve key technical challenges, they hire expensive experts and pay them Silicon Valley wages and grant them numerous options. Increasing headcount is generally seen as the way through problems and progress issues for these startups—the mantra is often, “just go hire some more developers,” as if just plucking developers from a tree like fruit automatically gives you more juice.  The problem is, most of those employees expect to get paid every two weeks, but they are creating this product or service out of nothing, and so there is no revenue to pay them. Usually there is some venture capital firm brave enough (or stupid enough) to give a startup with a decent idea one to two years’ worth of cash burn, accounting for growth. You see the issue here pretty plainly; expenses are very high and revenue is almost minimal, making the venture entirely dependent on the ability and appetite for risk that venture capital firms can muster.

 

  • The sale of a startup to another firm may alter the trajectory of the startup’s business plan in a way that is negative to your organization. It is a sad fact that startups are hoovered up by big companies that sometimes purchase the business only to shut it down. Perhaps the disruptive startup gets purchased by the incumbent in order to stymie the disruption. Or maybe a giant company like Google or Yahoo buys the startup to eventually add the startup’s key functionality or unique value proposition to its own software, but instead, through changes in leadership or simply the march of time, lets the startup languish until it dies. Good talent at the startup leaves. The company’s founders vanish. The acquiring company does not make a clear plan for the future of the company. Such is the world we live in, and it is smart to reflect on the potential “exit” or liquidity strategies of startups that will be key players in your ecosystem and ensure that you have at least considered the downside risk in the event that type of exit occurs.

 

  • Customizations and added features become available for everyone in many cases, and not just your company.  In a world where efficiency is a competitive advantage, having all of your added features and customizations put into a startup’s product or service may have the unintentional consequence of allowing your competitors to make use of the same feature. Of course, this risk depends upon your competitors also having some of the same problems you do and seeking the same solutions you do, but in particular if a startup’s offering becomes relied upon for your organization, competitors may well be able to take advantage of the same efficiencies you created.

This article was published with edits on CIO.com.

A Guide to the Web Application Proxy in Windows Server 2012 R2

With Windows Server 2012 R2, Microsoft has built into the operating system a reverse proxy feature. The Web Application Proxy role securely publishes internal resources out to the Internet to both corporate owned devices and untrusted machines like tablets and phones. Indeed, most deployments of, say, Work Folders or workplace join—key work anywhere features Microsoft put into Windows Server 2012 R2—demand a reverse proxy of some sort, so this requirement is likely to come up for you sooner or later.

As you may have heard, Microsoft killed its flagship reverse proxy product, Forefront Unified Access Gateway, back in December. Many organizations used UAG to create DirectAccess tunnels and also to deploy secure portals where applications could be securely accessed from all sorts of clients, both trusted and untrusted. While UAG’s capabilities were vast, it may have represented overkill for many applications, and Microsoft has built a capable, if less full featured, successor into Windows Server 2012 R2. That’s what this article is about.

Configuring the role, however, involves a number of moving parts, and in this piece I will walk through how to set up the Web Application Proxy role in your lab with either an application of your choosing or a freely available sample claims application that Microsoft publishes as part of one of its software development kits. Let’s begin.

Installing and Configuring Active Directory Federation Services

The steps for setting up the Web Application Proxy role in Windows Server 2012 R2 fall into three phases: first, getting Active Directory Federation Services (ADFS) installed and configured on one server; second, getting the Web Application Proxy (WAP) role installed and configured on a separate server; and third, setting up an application that can make use of the WAP role, which can be hosted on either of the two machines. For the purposes of our walkthrough, we will simply use virtual machines, but the roles do need to be on distinct copies of the operating system, as ADFS and WAP cannot function when installed together on the same machine. You will also need to join the machine running ADFS to a domain. You can run ADFS on a domain controller, but as a best practice it is not recommended.

Follow these steps to get started on the ADFS server.

  1. On the machine that will host the Active Directory Federation Services role, open Server Manager and go to Add Roles and Features, and then check the box for Active Directory Federation Services.
  2. Click through the rest of the wizard (these screens are just a formality) and complete it to get the role installed.
  3. Once the wizard finishes, click the yellow exclamation icon in Server Manager. This icon indicates that, even though the role is installed, ADFS is not functional yet; you need to configure the service further. Click the link within the status balloon that pops up from the yellow icon to go directly to the configuration interface.
  4. For this walkthrough, we can assume this is our first ADFS server in the environment, so choose the default option and click Next.
  5. On the account selection page, choose an account that has domain administrator permissions and then click Next.
  6. On the next screen, you need to select the secure certificate that ADFS will use for its connections. You cannot use certificates from IIS Manager here; the certificate must already have been imported into the computer’s certificate store through the Microsoft Management Console (MMC) snap-in. You can also import a new wildcard or subject alternative name (SAN) certificate right from this screen, but note that the wizard will automatically link the subject name of the certificate you are importing with the Federation Service Name, which is probably not what you want. Instead, for the Federation Service Name, type in the URL you want applications to use when they request a connection to ADFS: some folks choose adfs.domain.tld, others choose id.domain.tld, still others federation.domain.tld. The key is not to simply accept the default, and to make sure the Federation Service Name lines up with the URL you will use in any web application that needs to transact with your ADFS deployment.
  7. Leave the wizard where it is for the moment and switch over to whatever solution you use to host your Domain Name System (DNS) records, then add an A record for the Federation Service Name you just entered in step 6. Once you have created and saved that new record, come back to the ADFS configuration wizard.
  8. Back in the ADFS configuration wizard, add the Federation Service Display Name (this is just a friendly name, so there are no special rules about how it is formatted) and click Next.
  9. Select the account you will run the ADFS service under, and then click Next.
  10. On the database screen, you can either create a new Windows Internal Database (WID) instance for ADFS or point the service to an existing database running in SQL Server. Click Next.
  11. Review the options you selected to configure ADFS, and click Next to validate your choices. You can also take a look at the PowerShell script the wizard will actually run to perform the configuration; most of the wizards in Windows Server 2012 R2 are really just front ends that generate PowerShell. (A sketch of what that script looks like appears after these steps.)
  12. The checks should pass and now you can click Finish to actually configure the service.
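
If you would rather script the whole thing than click through the wizard, the configuration can be done in a few lines of PowerShell. What follows is only a rough sketch, not the exact script the wizard generates: the certificate thumbprint, federation service name, display name, DNS server name, and IP address are all placeholders you would swap for your own values.

# Install the ADFS role binaries (the equivalent of Add Roles and Features)
Install-WindowsFeature ADFS-Federation -IncludeManagementTools

# Configure the first federation server in a new farm. By default this uses
# the Windows Internal Database; add -SQLConnectionString to use SQL Server.
# List your real certificate thumbprints with: Get-ChildItem Cert:\LocalMachine\My
Install-AdfsFarm `
    -CertificateThumbprint 'qwerty87239874923hjdf0df9' `
    -FederationServiceName 'adfs.domain.tld' `
    -FederationServiceDisplayName 'Example Sign-In Service' `
    -ServiceAccountCredential (Get-Credential)

# If your DNS zone lives on a Windows DNS server, the A record from step 7
# can be added remotely too (server, zone, and address are placeholders)
Add-DnsServerResourceRecordA -ComputerName 'dns01' -ZoneName 'domain.tld' `
    -Name 'adfs' -IPv4Address '192.0.2.10'

Note that Add-DnsServerResourceRecordA requires the DNS Server management tools to be installed on the machine where you run it.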

Next up, you will need to spin up another client or server to test connectivity to the ADFS service. You can use any machine with a web browser that has access to the network on which your ADFS server is installed, as all you are doing here is browsing to a couple of pages on the ADFS deployment. On this separate machine (NOT on the ADFS server itself, where the test will fail), access the following URLs, which are the standard endpoints an ADFS deployment in Windows Server 2012 R2 exposes:

  • https://adfs.domain.tld/federationmetadata/2007-06/federationmetadata.xml
  • https://adfs.domain.tld/adfs/ls/idpinitiatedsignon.aspx

Of course, replace adfs.domain.tld with whatever DNS name resolves to your instance of ADFS. The point here is to make sure that the first URL returns the metadata from the federation server without any SSL or certificate validity errors, and that the second URL displays the standard default ADFS sign-on page. If both of these pages come up without errors, you have successfully installed ADFS.
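
If the test machine runs PowerShell 3.0 or later, you can also hit the metadata endpoint from a console rather than a browser. This is just a convenience check; the host name is the same placeholder used above.

# A healthy ADFS deployment returns status 200 and an XML document;
# certificate problems surface here as an SSL/TLS error instead.
Invoke-WebRequest -Uri 'https://adfs.domain.tld/federationmetadata/2007-06/federationmetadata.xml' -UseBasicParsing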

Back on the ADFS server, copy the SSL certificate you used in the configuration wizard, exported with its private key, to a network share or a thumb drive so that you can copy it to the server on which we will be installing the Web Application Proxy role in the next section.
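
If you prefer to script the export, Export-PfxCertificate will write the certificate and its private key out to a .pfx file. The thumbprint, path, and password below are placeholders.

# Run on the ADFS server. Replace the thumbprint, path, and password with your own.
$pfxPassword = ConvertTo-SecureString 'ReplaceThisPassword' -AsPlainText -Force
Export-PfxCertificate -Cert Cert:\LocalMachine\My\qwerty87239874923hjdf0df9 `
    -FilePath '\\fileserver\share\adfs-ssl.pfx' -Password $pfxPassword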

Installing the Web Application Proxy Role

Continuing in this process, you will need to create the second machine mentioned earlier, on which the Web Application Proxy role will be installed. This machine should NOT be joined to any domain; it can remain a standalone server.

First off, you need to install the certificate you just copied from the ADFS server by manually importing it into the Windows certificate store.

  1. On the Start menu, type MMC and press Enter.
  2. From the File menu, choose Add/Remove Snap-in.
  3. In the left pane under “available snap-ins,” choose Certificates, and click the Add button in the middle of the window.
  4. A window will pop up, asking which account this snap-in should manage. From the three choices, select Computer account at the bottom and then click Next.
  5. Choose the local computer option, and then click Finish.
  6. Click OK in the Add or Remove Snap-ins window, and you will be returned to the MMC console with the Certificates snap-in added.
  7. In the left pane, expand “Certificates (Local Computer)” and then click the Personal node.
  8. Right-click the Personal node and select Import from the All Tasks menu.
  9. Follow the wizard to select your certificate that you copied down and finish the import.
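
Those nine MMC steps can also be collapsed into a single PowerShell command if you find that easier; the .pfx path and password are the same placeholders used for the export.

# Run on the WAP server. Replace the .pfx path and password with your own.
$pfxPassword = ConvertTo-SecureString 'ReplaceThisPassword' -AsPlainText -Force
Import-PfxCertificate -FilePath '\\fileserver\share\adfs-ssl.pfx' `
    -CertStoreLocation Cert:\LocalMachine\My -Password $pfxPassword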

Now that the certificate is safely in the certificate store, you can add the Web Application Proxy role to this server. Open Server Manager, and then go to Add Roles and Features, and choose the Remote Access option. Follow the wizard through the confirmation screens until you are presented with a page where you are to select the remote access services you desire; here, check the box beside Web Application Proxy service. Click Add Features in the screen that pops up asking you about related services that must be installed at the same time, and then click on through until you are finished.

Within Server Manager, click the yellow warning icon and then follow the link to open the Web Application Proxy configuration wizard. Enter the same Federation Service Name that you configured in the ADFS wizard and for which you set up a DNS entry, and then enter credentials for an account with local administrator privileges on the ADFS server. Once you get to the certificate selection screen, choose the certificate that you imported earlier, review the PowerShell script that the wizard has again generated, and then click the Configure button to set things into motion.
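
Both the role installation and the configuration above can likewise be scripted. This is a minimal sketch that reuses the placeholder federation service name and certificate thumbprint from earlier; you will be prompted for the ADFS administrator credentials.

# Install the Web Application Proxy role service (part of Remote Access)
Install-WindowsFeature Web-Application-Proxy -IncludeManagementTools

# Point WAP at the ADFS deployment; the credential supplied must have
# local administrator rights on the ADFS server.
Install-WebApplicationProxy `
    -FederationServiceName 'adfs.domain.tld' `
    -FederationServiceTrustCredential (Get-Credential) `
    -CertificateThumbprint 'qwerty87239874923hjdf0df9'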

Note: if you get an SSL error about a failure to establish a trust relationship, make sure the root certificate authority that issued the ADFS certificate you imported into the WAP server’s certificate store is trusted by the WAP server. If it is not, import that root certificate using the previous procedure, only this time add it to the Trusted Root Certification Authorities node in the MMC snap-in rather than the Personal node.
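
If you have the root CA certificate handy as a .cer file, this too can be done in one line; the path is a placeholder.

# Run on the WAP server; replace the path with the location of your root CA certificate.
Import-Certificate -FilePath '\\fileserver\share\rootca.cer' `
    -CertStoreLocation Cert:\LocalMachine\Root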

Setting Up an Application to Use the WAP Role

Now that you have set up ADFS on one machine and the WAP role on another, you can publish a new application on the WAP server. You do this by creating publishing rules for the specific external URLs that should pass through the WAP to whatever server on your network you are proxying for.

For the purposes of this walkthrough, you can use the sample claims-based application that comes in the Windows Identity Foundation software development kit, or SDK. You can download it at http://www.microsoft.com/download/details.aspx?id=4451, and you can find instructions for setting up the prerequisites for this sample application at http://technet.microsoft.com/en-us/library/dn280939.aspx#BKMK_5. You can also use any other application you have at your disposal that works with federated identities, as long as it has an external URL endpoint; you will just need to know that endpoint.

To set up a new application, follow these steps:

  1. From the Start menu, open the Remote Access Management Console.
  2. From the Tasks section on the right of the Remote Access Management Console, click Publish.
  3. Click Next through the welcome screen.
  4. The Preauthentication screen appears. Select the first option, “Active Directory Federation Services (AD FS)” and click Next.
  5. The Relying Party page appears. Here, select the relying party for the application you are using and click Next.
  6. The Publishing Settings page appears. Enter a friendly name for your application, the URL at which external clients can access the application, the certificate that covers that name, and the URL of the backend server if different (for the purposes of this walkthrough, it is the same as the application’s external URL). Click Next.
  7. Confirm these settings, and click Publish.

Interestingly, you can wrap that entire sequence up in a single PowerShell command, which again is all the wizard does.

Add-WebApplicationProxyApplication `
    -BackendServerUrl 'https://www.domain.tld/yourappgoeshere' `
    -ExternalCertificateThumbprint 'qwerty87239874923hjdf0df9' `
    -ExternalUrl 'https://www.domain.tld/yourappgoeshere/' `
    -Name 'Test Application' `
    -ExternalPreAuthentication ADFS `
    -ADFSRelyingPartyName 'Test Party'

At this point, everything should be working, and you should be able to see your application being securely reverse-proxied through the WAP role to clients connecting from the wild Internet.
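
As a final sanity check, you can ask the WAP server what it is publishing. This is just a quick sketch; the selected properties are the commonly used ones, and the full output includes more.

# List every application currently published through the Web Application Proxy
Get-WebApplicationProxyApplication | Format-List Name, ExternalUrl, BackendServerUrl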

This article was published with edits on Computerworld.com.

Three Months Until The End of Support for Windows XP: A Last Minute Battle Plan

Support for Windows XP ends on April 8, 2014. That’s less than 90 days from now. You’ve heard about this deadline before, surely, but now is the time for final actions.

Unfortunately, by now, just 90 short days ahead of the deadline, most of the low-hanging fruit has already been harvested. Stragglers that can be convinced have either already migrated to Windows 7 or Windows 8 or are in the process of doing so now. What remains are the harder cases: customers who are simply resistant to change (most common in small business environments), customers with specialized hardware controlled by a Windows XP machine that cannot be upgraded or replaced, and larger customers that probably already have agreements in place with Microsoft for further support after the deadline.

So for these more difficult implementations, here are some steps for your last minute XP battle plan:

  1. Find out what the end-of-life plans for your antivirus and antimalware products are, and make sure they will still be effective. For standard end user desktops that do not have special requirements, you should be using a third-party antimalware product. Most of these third-party products have support beyond April 8, 2014, and even Microsoft has relented, saying its free Security Essentials product will have virus definition updates until July 2015, even after security patches cease being released. Verify these dates, install the latest releases of these products, and switch products if one provides lengthier or more comprehensive XP support than the one you are currently using. This is not an area in which cost should be the primary driver; effectiveness above all is what counts here, or you might as well not bother at all.
  2. Develop network segregation techniques to keep your XP machines isolated. Despite the recent allegations from Edward Snowden that the National Security Agency can penetrate the air gap around computers that are disconnected from all networks, the most significant attack vector for machines running Windows XP after the end-of-support date will be the Internet: specifically, users clicking dangerous things and surfing to nefarious websites. One effective, if blunt, method of mitigating this risk is to segment XP machines onto a separate network that has access only to the file shares necessary to work with line-of-business applications, and not to the big bad wild Internet. This may mean some network restructuring to ensure XP machines can still reach the very limited set of files and shares their applications need, but it is an effective way to guard against a big risk.
  3. For XP machines running on embedded systems or controlling expensive hardware, unplug them from the network if possible. If you have a lathe or CNC machine controlled by a Windows XP host, and all it does is control that machine, then disconnect it from the network. Don’t store data on it, don’t use it to browse the web, unjoin it from your domain if it was joined, and simply use it as a dumb hardware interface controller. There is no legitimate reason for the vast majority of these machines to have access to the Internet. If Internet access is required, use a kiosk program or Deep Freeze to lock down the desktop as much as possible, and use network sniffers and your firewall logs to determine exactly what Internet access is required for the machinery to function, and blacklist everything else. But if at all possible, just disconnect the network cable and sleep better at night.
  4. Look at virtual desktop infrastructure solutions for those Windows XP machines you just cannot get rid of. As mentioned, controlling user actions on XP machines is absolutely critical, but another factor is limiting access to those machines as much as you can. Hosting XP virtual machines on a central server, where you control which applications can be accessed and updated and where you have a known-good image from which you can restore a virtual machine in the event it does get infected, is a significant positive step if you must run XP past the support deadline. Even three- or four-year-old servers can host Windows XP virtual machines without too much trouble, so new servers are not required, especially if you simply need more time to effect an eventual migration and XP is not a permanent member of your technology portfolio.

This article was published with edits at SearchEnterpriseDesktop.com.