The Role of Agents in a Software Defined Network

An edited version of this story ran on Cisco.com.

In this article, I want to talk about network agents—what some in the industry refer to as the future of the software defined network. Agents are responsible for a great deal of flexibility and control, and the work they do can vastly impact how responsive and agile a software defined network can be. Let us dig in.

What are agents?

In almost all software defined networks, agents are the tool by which network applications expose their abilities so that network controllers can modify their configuration at any given time. Network agents can live on the software defined network controller itself, providing a mechanism for the controller to talk to various applications through the controller’s northbound interface. (These northbound interfaces are generally output only—they only communicate outward to the individual SDN-aware applications.) Network agents can also live on individual network elements, where together they make up a datapath. These elements work together to forward traffic through the datapath’s external interfaces or to process or terminate traffic within the element itself.

 

Where this all gets interesting is here: one or more of these datapaths (an agent and an engine) may live inside a single physical network element—a “bucket” of welded-together communications resources that the SDN sees and manages as a unit. But it goes further. A datapath may also be defined across multiple physical network elements. This logical datapath doesn’t care how it maps to actual physical assets, how those physical resources are managed or abstracted away, or how all of it is virtualized and made to interrelate and interoperate. It is a completely different thing from physical wires, ports, and switches. The agent itself is responsible for communicating with the SDN controller about what the datapath is supposed to represent and how its engine is supposed to behave, and the datapath network agents carry on this communication across the network controller’s southbound interface.

How are network agents deployed in production SDNs?

In some networks, administrators install virtual network agents on every hypervisor host (in this scenario, each hypervisor host is a network element). The virtual network agent on each host then directs traffic toward the virtual switch. If that virtual switch recognizes that particular flow of traffic and is able to match it with a prescribed action, then it can adjust the configuration of the virtual machines on that host so that the flow of traffic exits through the host’s physical network interface card. If the virtual switch doesn’t recognize the flow, it sends a mapping request back to the agent, which queries a mapping service in the network controller so it can figure out where that traffic is supposed to flow.

 

In other scenarios, network agents are key to implementing virtual functions on the network. Workloads like load balancers, firewalls, threat monitors, intrusion detection, packet capture, and packet and traffic forwarding can all be implemented by various network agents residing in different network elements. The controller can deploy, activate, deactivate, and redeploy these agents as traffic flows shift, and large-scale SDNs can deploy multiple copies of these objects for multitenant scenarios or to respond to a breach, a distributed denial of service attack, an unusually large load during a spiky peak period, or another one-time event.

Putting it all together

The important thing to remember about agents is also what distinguishes a software defined network from a physical network: the physical components, the discrete switches and patch cables and ports and whatnot, are completely abstracted away, and the only networking we care about, the networking these agents serve, consists of the traffic “flows” created by applications.

 

Imagine how this would apply to human travel. SDNs in that context are essentially the difference between worrying about interstates and side roads versus the concept of traveling from Los Angeles to San Francisco. SDN cares about the route, the flow, the purpose of the traffic and its outcome, rather than concerning itself with which exits to take off the interstate, where to turn right, what to do if there is a traffic backup, and the like. By focusing on the flows, the need of packets and data to get from one application to another, we get the concept of agents, the bits that carry out the instructions from the control plane and translate them into action on the data plane.

 

In conclusion, agents are key to the SDN experience and perform the vital role of translating instructions from the control plane to the data plane.

Exploring the Chocolatey Package Manager for Windows

Note: An edited version of this article appeared on Computerworld.com on November 13, 2018.

I’ve administered both Windows and Linux systems for close to two decades now, and honestly while Linux is a fantastic operating system and very appropriate in many respects for many applications, I’ve long preferred Windows for its generally better ease of use and polish. But that doesn’t mean Linux hasn’t had features I’ve lovingly pined for and miss on Windows – and a package management solution is one of them. Luckily, there are a couple of solutions to this, and best of all, both are open source and free. Read on for more.

The Premise Behind Package Management Systems

Linux distributions have had package management options for a while. You have probably heard of Red Hat’s RPM (Red Hat Package Manager) format, Debian’s apt-get, and yum, which has spread across many RPM-based distributions. At their core, these package management systems seek to achieve the same objective: analyze a system, determine what packages are necessary to run whatever software the user is requesting, find the latest compatible version of all of those packages, and install them in the correct order, ensuring they get laid down on the system successfully and that, after the 117 dependencies install, your text editor of choice is ready to run on your target system. I kid, but only a little bit.

Imagine bringing this over to Windows. Imagine the scenario: you are moving to a new system and setting it up properly, exactly how you like it. In the process, you are trying to find the latest version of Notepad++, for example, or any other reasonably popular utility. As it stands now, out of the box, you would Google for the site, find the download link, skip past all of the “featured offers” and near malware that most sites like to bundle with their downloads, and then run the installer. After that, you might even discover you downloaded a 64-bit version when you are running a 32-bit version of Windows. Or maybe you found an old download link, and there are two newer versions out there. That whole sequence is not exactly rocket science, but it is trying.

Imagine, instead, that you could simply say

choco install googlechrome

from a PowerShell command prompt and wind up with a completely functional installation of Google Chrome. That would save a lot of time, right?

And what if you had software like Google Chrome installed and then wanted to upgrade it? What if you could use a command like

choco upgrade googlechrome

…and get an instant upgrade?

That is the power of package management, and that is what the Chocolatey package manager brings to Windows: a large and growing selection of carefully curated and maintained software packages that can be brought down and installed on your system with a simple three-word command. As of this writing, there are 3,958 community maintained packages, and you can browse and search among them on the web at https://chocolatey.org/packages.

Where Chocolatey really comes in and shines is in the process of setting up and deploying new machines. If you have a fairly standard set of tools – Office 2013 or the Office 365 version of Office ProPlus, along with some other utilities like 7-Zip, your web browser of choice such as Mozilla Firefox or Google Chrome, and a few others – then you can absolutely script the setup of a new machine. Just deploy Windows through whatever method you care to use, and then once you have completed the installation and are at an administrative desktop, simply kick off a script that installs Chocolatey and then calls all of your software. Now you can get 70-90% of your software installs automated without having to do a bunch of imaging and packaging yourself, and all of the installations are done in a consistent, repeatable, reproducible way—another benefit of scripting in general.
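
To make that concrete, here is a minimal sketch of what such a setup script might look like, assuming Chocolatey itself is already installed (the install one-liner appears a little later in this article). The package IDs shown (7zip, firefox, googlechrome, notepadplusplus) are examples; verify them against the community feed at chocolatey.org before relying on them.

# setup.ps1 - run from an elevated PowerShell prompt after Chocolatey is installed
$packages = '7zip', 'firefox', 'googlechrome', 'notepadplusplus'
foreach ($package in $packages) {
    # -y answers every prompt so each install runs unattended
    choco install $package -y
}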

Understanding the Pieces of the Chocolatey Puzzle

First, let us understand how all of the pieces fit together in this package management puzzle.

  • Chocolatey is a package manager that works with Windows – specifically, that is any version of Windows 7 or later that also has PowerShell installed. This is the vast majority of clients in production environments today. It uses scripts, along with a specific software package format (more on that in the next bullet), to install applications on your system.
  • NuGet is that specific software package format. It has previously been used by Windows software developers to install software dependencies for their own development projects. These were typically libraries, bootstrap utilities, and more that existed as complete packages out on the Internet, ready to be re-used. NuGet was simply an easy way to get those dependencies put in the right place within individual products.
  • PowerShell is the command engine that makes the whole thing tick. It functions as the scripting language and execution environment for all of Chocolatey. Chocolatey uses PowerShell to grab software, packaged in NuGet format, to install applications for your entire system, not just development projects.

To get started on a test system, use one command, straight from the regular command prompt (be sure you have administrative rights to do this):

@powershell -NoProfile -ExecutionPolicy Bypass -Command "iex ((new-object net.webclient).DownloadString('https://chocolatey.org/install.ps1'))" && SET PATH=%PATH%;%ALLUSERSPROFILE%\chocolatey\bin

This one-liner actually does quite a few things, as you might expect. Let us walk through them individually.

  • It skips loading any custom PowerShell profiles you have configured (the -NoProfile switch), so the command runs in a clean, predictable environment.
  • It sets the execution policy for just this one command to Bypass, so that the script downloaded from the Internet (in the case of this command, that is https://chocolatey.org/install.ps1) can be executed rather than blocked as potentially malicious.
  • It downloads that .ps1 script from the Chocolatey website, which is a preconfigured set of instructions that PowerShell can use to install the Chocolatey system on your local machine.
  • It also sets a PATH entry to the default binary folder for the Chocolatey install, so that you do not have to enter the full path to the Chocolatey commands every time you would like to do any package management tasks. Instead you can just enter “choco” and then whatever you need, and Windows and PowerShell will know where to find it.

The Chocolatey installation process requires the .NET Framework version 4.0 and will attempt to install it automatically as part of that PowerShell script. I have set this up on Windows 7 and Windows 8.1 machines and that automatic installation of .NET was totally successful, although some administrators have a deep-rooted aversion to installing multiple versions of the .NET Framework on systems due to potential application compatibility issues and the attendant security patching that is required for each respective version. However, Chocolatey requires it, so this is an all or nothing proposition, and I think the benefits of Chocolatey outweigh the negatives of another framework version being deployed.
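
Incidentally, if you would rather launch the install from a PowerShell window that is already running elevated instead of from the regular command prompt, a roughly equivalent one-liner is sketched below; it bootstraps the same install.ps1 script, but double-check it against the current instructions on chocolatey.org before using it.

Set-ExecutionPolicy Bypass -Scope Process -Force; iex ((New-Object System.Net.WebClient).DownloadString('https://chocolatey.org/install.ps1'))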

Chocolatey in Action

Chocolatey is essentially an automated deployment wrapper. It does not intercept installations or somehow modify the setup process of an app. Rather, it automates the downloading of a setup file or package and then its execution, while turning off any sort of screen communication from the app (a silent install).  Here is an explanation direct from Chocolatey that goes into a little further detail:

“Chocolatey is a software management tool that is also a package manager. It functions fantastically well when the software is all included in the package and it doesn’t make use of native installers. However to approach the Windows ecosystem a package manager also needs to know how to manage actual software installations, thus why Chocolatey does that as well. For publicly available packages, copyright keeps [us] from having binaries embedded in packages, so Chocolatey is able to download from distribution points and checksum those binaries.”

Some tips:

  • The command cinst works instead of choco install to save you some keystrokes and avoid carpal tunnel syndrome. So choco install googlechrome could also be cinst googlechrome.
  • Choco uninstall packagename removes a package from your system, but only the instance that Chocolatey installed. If, for example, you already had Git on the machine, then installed Git again from Chocolatey, running choco uninstall git removes only the Chocolatey-installed copy and leaves your original installation alone.
  • Want to search for packages but do not want to browse the web? This is common on new server installs. Use choco search packagename to search the Chocolatey package feed.
  • The command choco outdated will give you a list of all of the packages on your system that have a newer version available from the repository.

The Downsides of Chocolatey

As with every tool, there are negatives. Here are the most important ones for Chocolatey.

  • There is a malware risk to Chocolatey, however slight. Unless you run your own packaging team and set up Chocolatey’s PowerShell scripts to only retrieve packages from your own private repository full of packages you have vetted and trust, you can never entirely shake the knowledge that you are downloading packages from a third party source. Those packages could well have malware in them, put there either on purpose by a nefarious actor on the contributor and maintenance team or, much more likely, inserted after the Chocolatey repository itself was hacked. To my knowledge and as of this writing, this type of breach has not happened to Chocolatey. But “has not” is not the same thing as “could not” or “will not,” and so there will always be a risk. On the flip side, Chocolatey is very popular and most of the package maintainers are very diligent about their work. Since the whole shebang is open source, the “many eyes” principle means that problems and breaches would be discovered and mitigated quickly, and the resulting reporting would be transparent. Caveat emptor.
  • Package maintenance is not always quick. Remember, Chocolatey depends on a team of volunteer contributors to package up applications. When new versions of the core applications are released, it is not an immediate process to package up the upgrade and make it available in the Chocolatey repository. For more popular applications like web browsers and common utilities, this is not a big issue: for one thing, that software generally can update itself, which makes Chocolatey’s upgrade function moot, and the more popular an application is, the more active its Chocolatey package maintainers generally are. For more obscure packages, the wait could be quite a bit longer, or a package could simply be abandoned after a few years.
  • Sometimes install locations get screwed up and there’s no way to track it. Because Windows lets software developers lay their bits down on users’ drives pretty much wherever they please, you wind up in situations where Chocolatey wants to put a particular piece of software in one location but the GUI installer you would download from the Web wants to put the software in another location. Herein lies the problem: if you install something outside of Chocolatey, especially if that software installs itself in a directory or folder Chocolatey does not know about, then Chocolatey will not think you have that software installed—so it will let you go choco install thatpieceofsoftware and suddenly you have two installations of the same product on the same system. Sometimes that is not a big deal, whereas in other situations, it cripples the software.

For Windows 10, There is PackageManagement

Part of the innovation happening in Windows 10 and Windows Server 2016 is the Windows Subsystem for Linux, or WSL. This is essentially a compatibility layer that runs Ubuntu and other popular Linux distributions as “layers” or subsystems underneath the regular Windows 10 user experience. Regular Linux binaries work, and you can even install the X Window System and get separate graphical shells running. (Who would have thought the year of the Linux desktop would actually be brought to us courtesy of Microsoft?)

Alongside that, Windows 10 also ships with a native package manager that works with PowerShell, opening up a much larger ecosystem of applications to install on Windows 10 machines. That package manager is called PackageManagement, creatively, and it works with Chocolatey repositories, too. In fact, it is more of a package manager manager, because it works with multiple repositories and brings all of their capabilities and inventory together with a single tool.

To verify that PackageManagement is set up on your system, open a PowerShell prompt with administrative credentials and then type in the following command:

Get-Command -Module PackageManagement

Then, take a look at the universe of commands you have available to you.

To find out which package sources are configured, call the following command:

Get-PackageSource

You will probably find that only the regular PowerShell Gallery is there. We can add Chocolatey support with a single command, which downloads and installs the Chocolatey package provider (you may be prompted to confirm):

Get-PackageProvider -Name Chocolatey

To use the Chocolatey source repository by default, use this command:

Set-PackageSource -Name chocolatey

Now let us try to add some software. We can continue to use Chrome as an example. From the Get-Command example above, we saw that there is a Find-Package command which ought to find a package in a repository. Let’s search for Chrome:

Find-Package -Name Chrome

In the results, you will see quite a few responses, all of them packages of Chrome that are available in Chocolatey repositories. You can install any one of them by using the install-package command:

Install-Package chrome
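
To put the whole PackageManagement workflow in one place, here is the sequence from this section sketched end to end. The prompts and output you see will vary by Windows 10 build, so treat this as an outline rather than a transcript.

# Inventory the PackageManagement cmdlets
Get-Command -Module PackageManagement

# See which package sources are registered (likely just the PowerShell Gallery at first)
Get-PackageSource

# Bootstrap the Chocolatey provider and select its repository
Get-PackageProvider -Name Chocolatey
Set-PackageSource -Name chocolatey

# Find and install a package
Find-Package -Name Chrome
Install-Package chrome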

Unfortunately, one of the really nice features of raw Chocolatey, the upgrade command, is not available with PackageManagement. Perhaps one day it will be, but for now, this is only useful for installing or removing packages. And again, PackageManagement (which, for future reference, was codenamed OneGet during Windows 10’s initial development) ships in the box only with Windows 10 and now Windows Server 2016; earlier versions of Windows can pick it up by installing a newer Windows Management Framework, but it does not come preinstalled there. It is also important to note that OneGet can co-exist with Chocolatey, so you can use whichever option makes the most sense. It is a very nice tool to have in the arsenal and one that, especially for power users and developers, can make quick work of application installation and maintenance.

The Last Word

I suppose the best way to sum up Chocolatey is: it’s one of those tools where if you need it, you know as soon as you see it that you need it, and you want it right now. I’ve long searched for a lightweight tool (read: NOT System Center) to script software installations and make it easier to stand up development and test environments with the right programs and utilities. Bonus points that Chocolatey uses PowerShell to get its work done. Check it out now. http://chocolatey.org.

Secrets of Building a Successful Big Data Team

Note: an edited version of this story appeared on the Hortonworks blog on November 10, 2017.

Having a dedicated big data team is critical to achieving successful outcomes with the data you’re collecting. This post will discuss the elements of building a successful big data team and how this team operates within your organization.

Step One: Achieve Executive Buy-In

Ask any business school professor or seasoned project manager what one has to do first and they will all say the same thing: obtain executive buy-in. The most fundamental part of any business transformation is buy-in and enrollment from the top. A top-down approach, with approval from the top, is a key step in building a big data story in your organization. Until this step is done, don’t pass go, don’t collect $200: keep working until you have executive approval.

Step Two: Set Up a Center of Excellence

Building a center of excellence, a shared employee facility that develops resources and best practices, is the next step in this process. This CoE could be as small as one person, or it could be as big as you want. The CoE should include a subject matter expert from each affected group across the organization so that the transformation is adequately represented. In this way, not only have you gotten buy-in from the executive level to start the overall transformation, but you also have participation across the organization, so all affected departments can feel heard and acknowledged.

Part of building a proper and maximally effective center of excellence is to welcome the naysayers. Platitudes are fantastic, and open-mindedness is a great thing to strive for, but the reality is that everyone has their own agenda and a different starting point of view. In building the CoE, you want cheerleaders as well as skeptics in order to keep debate alive and valuable. Ironically enough, the naysayers often end up leading the CoE; they become the most passionate members over time, because once their objections have been addressed, they understand why the transformation is so important.

Step Three: Building the Team

Once your center of excellence is in place, the bulk of building your big data team lies in finding individual employees to flesh out the team. This is about 75% art, 25% science. From the science perspective, you’ll want to screen workers through the job description, background requirements, and appropriate thresholds of experience. You will want workers with 7-12 years of experience in IT and some exposure to production Linux; data warehouse skills and experience are a very big plus. Unfortunately, this won’t get you all the way there: the industry at this point does not have a deep enough pool of people with this skill set to make it easy to find workers on those merits alone. The current H-1B situation is actively contributing to the dearth of objectively qualified candidates. It is akin to trying to find a needle in a haystack.

This is where the 75% art part comes in: you build your team from folks with personality and passion who also come from relevant backgrounds. How many of the candidates you interview are not only willing to learn Hadoop but genuinely passionate about doing so? The interview and subsequent in-person conversations are where you will find this passion and sense of opportunity. These soft skills have to be found in a face-to-face interview, and your interview process should dig into what the candidate’s exact experience translates into. You may also want to consider putting candidates through a live demo where they perform a real-world task, and then discussing how they solved the problems you staged. Many times candidates are unsuccessful at completing a demo, but the real key is, can they explain why?

You will also find the best candidates are either toward the beginning of their careers or toward the end, and not necessarily in the middle. More experienced resources have a familiarity with the “wild west” that is Big Data these days, because it bears a resemblance to IT 20 years ago, when things weren’t integrated and staff had to do the heavy lifting: build it, script it, troubleshoot it. That innate ability to self-start is an asset. Younger resources, for their part, are quick to adopt tools that script and automate, which is also a useful skill.

Leaders of these teams should be neutral parties in the organization with no self-serving interest other than helping the company change overall. If a leader comes from the department that is funding the project, that department’s interest will often eclipse the greater good. The ideal leader is an employee who is not fully integrated, almost a third party.

Finding Talent

Finding this talent is also a challenge. One way is through Hortonworks University, a program that 12-15 colleges nationwide provide in order to establish Hadoop as a fully accredited part of their curriculum. Hortonworks pays the costs the university incurs so long as the school makes the courses part of the curriculum. You might also consider recruitment days at local universities, asking professors: who stands out? Who solves things? Provide internship and trial opportunities to the names you receive.

Word of mouth is also a proven way to find candidates. The raw truth is that the Big Data community is a small enough group in the world that if you happen to be really good at Hadoop, then people know who you are.

The Last Word

Ultimately, a big data transformation is an enablement opportunity to get your entire organization to go learn. Over time, we can all get stale, but this transformation can be a driver of learning, a place to get hands dirty with something new, and an opportunity to create new subject matter experts. Don’t be afraid to use this rubric to build a successful team.

A First Look at Project Honolulu, the new Windows Server Management Experience

Note: an edited version of this story ran on Computerworld.com on November 2, 2017.

I’ve been on the PowerShell train for years now—so much so that I wrote a book on how to learn PowerShell, because to my mind PowerShell knowledge is required if you work in Windows administration in any reasonable capacity in this day and age. But PowerShell is not always the best tool for the job, and options are always good. Microsoft’s graphical user interface (GUI) tools have become somewhat long in the tooth especially over the last three to four years as the company invested a lot of time and energy into expanding the universe of things PowerShell commands could accomplish.

Traditional administrators saw all of this happening and thought to themselves, what of Windows itself? After all, if you want a command line, you can go with Linux—Windows was supposed to be point and click all along. To this group, Microsoft says, fear not: we now have something superior to the in-the-box GUI tools you know well. Enter the new Server Management Experience, codenamed Project Honolulu. It can manage Windows Server 2012, 2012 R2, and 2016 and future versions, all through a web browser (currently Edge and Google Chrome). It can even handle the free Microsoft Hyper-V Server, making this a no-brainer way to get a free hypervisor deployed in environments that don’t really need a full-fledged copy of Windows Server, without having to fight with Hyper-V Server’s cryptic command line interface.

Let’s take a look at how Honolulu works, how it’s built, what it can do, and perhaps most important, what it isn’t.

Project Honolulu

There are a lot of different tools you need to use to manage servers with the in-the-box utilities you get today, and Project Honolulu’s main mission is to pull all of these together in one place. All of the tools’ functionality is integrated on the left side of the screen—Hyper-V tools, disk management, file server management, and more. An intuitive, HTML5-based, touch-friendly interface lets you browse around and carry out just about any task that the previous tools would let you complete, all from a “single pane of glass,” to use the preferred industry term.

The great news is that this is a really lightweight install. If you want to run it on a server, it does not need to be a dedicated machine, and you can also deploy it onto a management workstation or laptop. You run a single MSI installer of about 30 megabytes, with no dependencies, and you can run it on any Windows 10 or Windows Server 2016 machine. It’s literally a few Next, Next, Next clicks in an installation wizard and then in about 30 seconds, the operation is complete. Then just type in a server name, Honolulu will find it, and bang, you are managing your machine remotely.

Some notes on behavior:

  • If you install Honolulu on a Windows 10 machine, you essentially get a browser-based single app that you can use to manage any number of remote servers. Honolulu will listen on port 6515 by default, although you can elect to change this.
  • If you install Honolulu on a Windows Server machine, it configures itself as a multi-user web app and any user can connect to it and customize their own configuration. You must specify the port that the service will use to answer requests, and you will need to either allow Honolulu to generate a self-signed certificate or install an appropriate certificate on the server before installation and point the Honolulu installer to that certificate’s thumbprint.

Once your installation is complete, the next step is to add a machine to manage. On the main screen, under All Connections, click the Add link.

From there, you can add a single server, a cluster of servers, or a hyperconverged cluster of servers. (For what it’s worth, to manage a hyperconverged cluster, you must be running the new Windows Server Semi-Annual Channel release, which rules out a lot of organizations at this point.) For our purposes, let’s just add a single server: we’ll select that option, type the server’s name, and then we are connected.

You can also have a list of servers in a text file, one per line, and then direct Honolulu to import those. Just click the Import Servers link and point Honolulu to that text file.
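
If your server inventory lives in Active Directory, one quick way to generate that text file is with a PowerShell query. This is a sketch that assumes the RSAT Active Directory module is available and that filtering on the OperatingSystem attribute matches how your computer objects are stamped; adjust the filter for your environment.

# Build a one-name-per-line text file for Honolulu's Import Servers feature
Import-Module ActiveDirectory
Get-ADComputer -Filter 'OperatingSystem -like "*Server*"' -Properties OperatingSystem |
    Select-Object -ExpandProperty Name |
    Out-File -FilePath .\servers.txt -Encoding ascii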

Perhaps the biggest point in favor of Project Honolulu is that it no longer requires you to use the awful Server Manager app that was introduced in Windows Server 2012. At first blush, I think we were all optimistic that that app would work better at managing multiple servers and making common administrative tasks easier, but most of us have discovered that the “new” Server Manager is simply a confusing mess. Now that a suitable replacement is around, we can banish Server Manager.

What else does Honolulu replace? The list as it currently stands:

  • Displaying resources like CPU, memory, and disk and their related utilization
  • Managing certificates, including installing new ones and managing expired and expiring certificates
  • Viewing events from event logs
  • Managing, uploading, and downloading files through File Viewer
  • Turning the Windows Firewall on and off and opening and closing ports
  • Configuring Local Users and Groups, including adding and deleting both
  • Network Settings, including IPv6 and IPv4 settings, network names, domain memberships, and more
  • Managing processes, including killing them and creating Process Dumps
  • Viewing and editing the Registry
  • Managing Windows Services, including viewing their current statuses, starting, and stopping them
  • Installing and removing Roles & Features a la Server Manager
  • Managing Hyper-V virtual machines and virtual switches
  • Managing disks and volumes, including creating, initializing, and setting drive letters
  • Managing Windows Update

You can easily imagine a scenario where you have 10, 15, or 20 different servers all under your purview. You might even want to bring up a new system, in which case you lay down a copy of Windows Server on it, get it connected to the network, and then add it to the Honolulu interface. At that point, you can use Honolulu to configure IP addressing or change the system name, add disks or initialize disks you have plugged in, and remove roles or add new ones like file server services or container services. Once that new system is up and you have 21 servers to manage, you can then see them all in one place. Is there, say, an errant task pegging a CPU? Just click into it from the screen, look at a process view, and end the troublesome task. Need to check whether an update is installed? Click into the Windows Update view. Want to block a patch by entering a new Registry key? Just click the Registry node.

One of the cooler bits is how you can essentially replace the staid Hyper-V Manager tool with Honolulu.

It is a very convenient tool with a very easy, intuitive interface. Click around and you will find that it largely just makes sense.

Is This Azure Server Management Tools?

What happened to Azure Server Management Tools? The idea behind the Azure SMT product, which was discontinued in June 2017, was access and management from anywhere, on any platform, and because it was hosted in Azure it could be updated and improved continuously. According to Microsoft, however, the top customer feedback was that many environments were not connected to the raw Internet (how do you manage those kinds of deployments over the Internet?) and that some customers needed fully deployed tools in their local environments. Partners didn’t like SMT because it was tied only to Azure and not to other public clouds, and sometimes a dependency on any cloud, Azure or otherwise, was not a natural fit. In short, the top request was to make SMT available on premises without an Internet connection. That is exactly what Project Honolulu is: you can manage from anywhere, including future integration with Azure, on any platform, since it is a browser-based solution; and it starts with core tools and ships independently (not in the OS image but as a separate download), which makes revisions possible on a much faster cadence.

Project Honolulu’s Architecture

Honolulu at its core is a lightweight web server and gateway. The web server does not depend on IIS; rather, it uses a self-hosted stack that leverages HTTPS. The gateway is the piece that lets you fan out to manage remote servers using PowerShell and Windows Management Instrumentation over WinRM, the remote management protocol. Crucially, this means there is no agent infrastructure for Honolulu, so nothing additional is required on the servers you want to manage; all of this instrumentation technology is already built into the operating system and is mostly standards based at that. (For Windows Server 2012 and Windows Server 2012 R2, you will need to upgrade the Windows Management Framework to version 5.0 or higher; the current version, 5.1, is available here [https://www.microsoft.com/en-us/download/details.aspx?id=54616].)
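
Because the gateway leans on WinRM and PowerShell remoting, a quick sanity check before adding a server is to confirm that remoting responds and, for Windows Server 2012 and 2012 R2, that the management framework is new enough. A rough sketch, using a hypothetical server name:

# From the machine running Honolulu: confirm the target's WinRM listener responds
Test-WSMan -ComputerName server01.contoso.local

# Check the remote PowerShell/WMF version; 2012 and 2012 R2 need WMF 5.0 or later
Invoke-Command -ComputerName server01.contoso.local -ScriptBlock { $PSVersionTable.PSVersion }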

What Project Honolulu is Not

Honolulu is great and off to a promising start, but it is important to understand its limitations.

First, Honolulu does not replace any functionality covered by the following in-the-box tools (list courtesy of Microsoft):

  • AD Administrative Center
  • AD Domains and Trusts
  • AD Module for Windows PowerShell
  • AD Site and Services
  • AD Users and Computers
  • ADSI Edit
  • Cluster Aware Updating
  • Component Services
  • DFS Management
  • DHCP
  • DNS
  • Failover Cluster Manager
  • File Server Resource Manager
  • Group Policy Management
  • iSCSI initiator
  • Network Load Balancing Manager
  • Performance Monitor
  • Print Management
  • Remote Access management
  • Routing and Remote Access
  • Shielding Data File Wizard
  • Task Scheduler
  • Volume Activation Tools
  • Windows Server Update Services

That list could very well change as development on the project continues, although some of these tools lend themselves more to remote management than others.

Second, Honolulu is not a finished product. It is in technical preview, which means anything and everything could change—though it probably won’t. The code is in pretty good shape for a preview product and is suitable for use in production, although Microsoft will not admit that nor support you if you do. But at any time features may be removed, added, changed, moved around, and so on. A final release is expected in the not-too-distant future, and frequent updates are expected since the utility can be delivered “out of band.”

Third, Project Honolulu is not a replacement for a third party systems management suite. It is designed for relatively lightweight server management and monitoring. It does not support older Windows releases, nor does it handle client management or software distribution. It also does a cursory job of monitoring but does not orchestrate an alerting workflow or any type of automated response, both of which are necessary to have in enterprise environments. Project Honolulu is available at no additional charge with a Windows license, which is really nice, but it does not purport to solve all of the issues that you would need a product like System Center Configuration Manager and Operations Manager for. It is a convenient add-on.

The Last Word

The GUI is not dead! There is no denying that however useful PowerShell is in a variety of applications, there remain scenarios where a visual approach is the right way to go. It’s very refreshing to see Microsoft embracing both types of tools and really investing in a useful, intuitive, and best of all free (with a license) way to manage more recent versions of Windows Server. As the tool grows to integrate resources in Azure as well as extensions from third parties, this is sure to be a useful set of bits to have around. Recommended.

Download Project Honolulu here: https://aka.ms/HonoluluDownload

Making Big Data More Accessible

An edited version of this article was published on the Hortonworks blog. Credit: Hortonworks

Big data, and storing and analyzing it, has a reputation for being complex and confusing. However, because of the great insight that can be drawn from big data, it’s important to make data science more easily accessible.

How can business leaders lower the barrier to entry to make data science more accessible to everyone? I see four ways this is possible.

Invest in natural language capabilities. Math sometimes has the tendency to scare people away, so if your plan is for everyone to learn Python and reduce their questions to a formula, I have news: that is not going to bode well for uptake. Instead, look for platforms that allow users to query data sets using standard English questions, like “how many more sales did I make this year than last year?” or “What are the top five things my customers purchase along with Widget Model A?” This type of natural language processing not only makes accessing large quantities of data easier and more approachable, it also has the side benefit of encouraging interest and curiosity in further analyzing data. As your users ask natural questions and receive straightforward answers, they think of more questions, which leads to more answers and more insight. Save the math and SQL-like queries for your developers and put a friendly front end on your data.

Make visualization an integral part of your analysis and presentation. You surely have heard the old phrase, “I’m a visual person.” Some people simply absorb and digest information better if it is presented visually. There are currently many platforms and frameworks on the market that can take a raw data query result and turn it into a rich graphical answer to a user’s question. Further, the more sophisticated of these platforms let you interact with subsections of a visualization, such as exploding a pie chart into smaller pieces or layering statistics onto a map of a certain geographic location. Even heat maps and gradients can make statistics pop to life and really enhance your users’ understanding of exactly what the data is telling them. Visualizations also encourage users to ask more questions and interrogate the data further by creating a rich, inviting atmosphere for discovering new ways of thinking about data.

Start small and then aim to grow your user base. Chances are, there is a vocal community of employees or contractors within your organization who are already clamoring to access new data science tools. Perhaps you have already harnessed their enthusiasm and enrolled them in a controlled pilot of your data deployment. If you have not, however, consider finding a group of individuals excited about the potential for big data and data science and invite them to preview your plans and test systems and provide very valuable feedback about what they like, do not like, and wish to see or need to have to augment their roles. After all, the best campfires start with proper kindling; a data portal project is no exception to this rule. Beginning with a small but dedicated group of users is a tried and true method of getting a successful project off the ground before expanding it so quickly that it outgrows its capabilities.

Embrace the power of data on the go. Mobility has changed the world around us, with computers more powerful than the space shuttle available in pocket-sized form off the shelf at Target for $80 or less. Many—I dare say most—professionals have smartphones or tablets, some either issued by or paid for by their employer. This means you have an audience that expects to be able to get answers on their mobile devices whenever and wherever they are. Your data science projects and deployments must be available in a mobile-friendly format, whether through responsive web design, native apps for the popular mobile platforms, or some combination of the two. That mobile app ought to also satisfy the points we have already established, including supporting natural language and providing rich visualization support with the capability to touch, drag, pinch, zoom, scroll, and more. The bottom line: in this day and age, you simply must make data accessible on mobile devices.

10 Free SharePoint Tools

An edited version of this article ran on Computerworld.com on September 20, 2017. Credit: Computerworld

The SharePoint software vendor ecosystem has produced many free tools that help you administer and configure a SharePoint farm or Office 365 deployment on a day-to-day basis. The vendors would, of course, love to have you upgrade to their for-pay software products, but until you do there is much utility in these free tools. Here are ten that many administrators find extremely useful.

1. Marco Wiedemeyer’s SPDeployment command line tool

Many companies develop solutions that live inside SharePoint, but those SharePoint developers have a hard time deploying those solutions to the right spots within the SharePoint hosting infrastructure, whether that is on premises or up in Office 365. Who wants to keep all of those details straight every time you make a change to a SharePoint solution? Developer Marco Wiedemeyer has built a tool that developers can run from the command line that reads a standard JavaScript Object Notation (JSON) file and automatically puts files where they should be and marks properties as they need to be. It handles the credentials for logging into SharePoint as well. You can even use it with multiple sites and environments and trigger deployments as granularly as your developers would need to. It gets installed via npm, the Node.js package manager, and works right from a command console. Hosted on GitHub. Free. https://github.com/mwiedemeyer/SPDeployment

2. The SharePoint Online Management Shell

If you have worked with Microsoft server products for any length of time in the past few years, you know that PowerShell is the way to get things done from an administrative perspective. The SharePoint Online Management Shell is a preconfigured PowerShell environment that loads all of the SharePoint Online (Office 365) cmdlets in one place; you can do basically anything from here, from creating content packages that migrate file share data to SharePoint Online, to creating new document libraries, to turning external access to certain SharePoint sites on and off. If you have even the slightest remit to manage Office 365, then you should grab this shell—it is a virtual certainty that having it will make your life easier. Be sure to right-click it after installation and run it as administrator or essentially nothing will work. Runs on your local machine. Free. http://go.microsoft.com/fwlink/?LinkID=617148&clcid=0x409
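
As a quick example of the shell in action, connecting to a tenant and listing its site collections looks roughly like the following; the admin URL is a hypothetical placeholder for your own tenant name.

# Connect to the tenant's SharePoint Online admin endpoint (you will be prompted for credentials)
Connect-SPOService -Url https://contoso-admin.sharepoint.com

# List every site collection in the tenant
Get-SPOSite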

3. Amrein Engineering Free Microsoft SharePoint Web Parts

Web parts have been around almost as long as the SharePoint product itself, but they have come in and out of “SharePoint fashion” as the product has matured over the years. Still, there are many tasks for which a web part—a pluggable component designed to run within SharePoint—is the best tool for the job. SharePoint 2016 comes with several built in, but groupware firm Amrein Engineering has developed around seventy web parts that do everything from reading Exchange conference room calendars to tracking individual stocks to performing overall task rollups across a given team. While your SharePoint developers can build complex solutions that ride atop the server, your users can grab these web parts and build simple pages and project sites themselves. Some of the roughly 70 web parts are free, the rest come with evaluation periods, and they are easy to license right from the page. http://www.amrein.com/apps/page.asp?Q=5728

4. ManageEngine’s Free SharePoint Health Monitor Tool

If you’ve not invested in a lot of systems monitoring tools, or you have a smaller SharePoint deployment, you might want a lightweight tool that gives you just an overall rollup of your SharePoint farm’s health status at a glance. The ManageEngine Free SharePoint Health Monitor fits this bill nicely, giving you a convenient dashboard view where you can see details about the CPU, memory and disk space usage for each server running SharePoint.

Then you can drill down into the SharePoint workload itself and see the response time, service status, Web server (Internet Information Services) process details, and even SQL Server details like free pages, cache memory and buffer cache hit ratio. While this tool won’t help you with Office 365 deployments, and it does not appear to be supported for SharePoint 2016 installations, it does indeed work with 2007, 2010, and 2013, which are still widely used. Free; runs locally. https://www.manageengine.com/free-sharepoint-monitor/free-sharepoint-health-monitor-index.html

5. Visual Studio Community Edition 2017

In a land long ago and far away, SharePoint Designer was the preferred tool for non-developers to use to reformat, restyle, and develop simple SharePoint solutions. Unfortunately, SharePoint Designer is no longer supported and only works with SharePoint 2013 and below. The tool of choice now is the surprisingly good and free Visual Studio Community Edition with the Office Developer Tools extension. The community edition is the free version of Microsoft’s very capable integrated development environment (IDE), and the Office Developer Tools plug-in lights up IntelliSense and debugging capabilities that let you run solutions right on the SharePoint server itself, remotely in Office 365, or in an Office web app. This tool works with essentially all versions of SharePoint no matter where they are hosted. Free; runs locally. https://www.visualstudio.com/vs/office-tools/#downloadvs

6. Office365Monitor

For those shops with significant deployments in Office 365, it can be really useful to have an eye on how the service is performing. Microsoft has promised at various points over the years more insight into the health of the service overall as well as its individual components, but we frequently see events that do not ever make it to a health dashboard. In the meantime, users blow up your phones asking what’s going on and where their files are. Ex-Microsoft employee Steve Peschka created Office365Monitor as a web service to gain deeper insight into each individual component of Office 365 and its uptime. You plug in the name of your tenant and the tool basically does the rest. There is a generous 90-day free trial and after that it is so inexpensive as to be effectively free. Web service; runs in the cloud; 90-day free trial and then starting from $19 per month. https://office365mon.com/

7. Veeam Explorer for Microsoft SharePoint

Veeam Explorer is basically Windows Explorer (or File Explorer in Windows 10, or Finder on the Mac OS X platform) for SharePoint. It lets you browse the database graphically, use full text search to find documents and items, restore individual items as well as their permissions if they have been backed up, export recovered items back into SharePoint directly or as e-mail attachments, and more.  It is also included in Veeam Backup Free Edition and can be used in conjunction with Veeam Endpoint Backup FREE, which makes this little tool extraordinarily useful. This works with on premises SharePoint 2010, 2013, and 2016 in all editions, but it does not work with Office 365. Free with backup product; standalone 30 day free trial; runs locally. https://www.veeam.com/microsoft-sharepoint-recovery-explorer.html

8. Veeam Backup Free Edition 9.5

Veeam has another really useful tool for those shops with investments in on premises servers. Sometimes your backup solution isn’t aware of SharePoint specifically, or maybe your backup just grabs virtual machines and copies them without doing anything intelligent on the processing side. Veeam’s free backup product is really quite good—I use it myself in my Hyper-V lab—and works with both Hyper-V and VMware. Picture your SharePoint VM farm: wouldn’t it be nice to clone, copy, export and manage those VMs? Sometimes wouldn’t it be useful to peek inside the VM to restore individual application items? Veeam Backup lets you do this on an unlimited number of ESXi and Hyper-V hosts or VMs. It is totally free and thus a great tool to have in your arsenal as part of a layered SharePoint on premises backup strategy. Free; runs locally. https://www.veeam.com/virtual-machine-backup-solution-free.html

9. Refactored SharePoint SUSHI

SharePoint SUSHI began as an open source project hosted on CodePlex that took the most common administrative tasks and put them in one powerful, user-friendly utility; you can think of SUSHI as a Swiss army knife for SharePoint. While the original version that supported only SharePoint 2007 languishes unloved on the deprecated CodePlex platform, Ivan Sanders, a SharePoint MCT, MCTS, MCITP, and MCSE, has refactored the tool for use with SharePoint 2013. It is unclear whether the tool works with SharePoint 2016, and it does not in any way interface with Office 365.

You can view the lists and sites any given user can access, which is really helpful for looking at effective permissions; upload user photos as profile images; back up and restore sites; apply a theme to a group of sites with one click and much more.

This is a Visual Studio solution that you download from GitHub and build yourself, or you can use the precompiled EXE, also available on GitHub. Free; runs locally. https://github.com/iasanders/sushi/tree/master/Releases/Release%204.0/bin

10. SharePoint Color Palette Tool

If you are not a web designer or artist, then coming up with aesthetically pleasing color palettes can be a real challenge. With SharePoint 2013, 2016, and now Office 365, branding is more possible than ever. Microsoft has a nice little tool to help you create polished, composed color choices. Free, runs locally. https://www.microsoft.com/en-us/download/details.aspx?id=38182

 

 

 

 

Next Generation Authentication for Windows Shops

An edited version of this story ran on Computerworld.com on September 13, 2017. Credit: Computerworld

Authentication. The act of proving one’s identity to the satisfaction of some central authority. To most, this process means typing in a user name and a password, and it’s been this way for years and years.

But passwords—especially the passwords that most enterprises require, which have to be complex, with long strings of numbers and specially cased phrases with some (but not all! Heavens no, not the one you want) symbols—are difficult to remember and often end up getting written down on sticky notes. Then you have to reset them every so often so that hackers and crackers are chasing moving targets. Passwords can be leaked or hacked from the inside as well, as we have seen with numerous credential dump attacks over the past few years. And users can accidentally disclose their passwords if they fall victim to increasingly sophisticated phishing attacks.

Luckily for Windows shops, Microsoft is innovating in this space and it has introduced an enterprise quality method of using biometric identification and authentication without requiring the purchase of specialized hardware—and it is baked right into Windows 10, which many shops are already beginning to deploy to replace Windows 7 and Windows 8 and 8.1. In this piece, I want to take a look at this innovation, called Windows Hello for Business, and show how it works and how to enable it to secure your enterprise while eliminating the need for your users to handle cumbersome passwords.

Windows Hello for Business

Windows Hello is the most common and most widely known of the biometric authentication schemes that Windows supports. Windows Hello for Business takes the Hello idea and bundles it with management tools and enforcement techniques that businesses and enterprises want to ensure a uniform security profile and enterprise security posture. Windows Hello for Business uses Group Policy or mobile device management (MDM) policies for management and enforcement, and uses key- and certificate-based authentication in most cloud focused scenarios for maximum protection.

Essentially, Windows Hello acts on two fronts: it can scan one’s fingerprint, and it can also take an infrared picture of a user’s face and perform analysis on it. It pairs these unique physical attributes of each user with cryptographic keys that replace passwords as authentication methods. These keys are stored within specialized security hardware, or are encrypted in software, and unlocked only after Windows deems them authentic. For organizations uninterested in biometrics, Windows Hello also supports PIN usage to replace passwords transmitted over the network.

Windows Hello protects Microsoft accounts (the accounts you use to log in to Microsoft cloud services, Xbox, Office 365, and the like), domain accounts that are part of a corporate Active Directory deployment, domain accounts joined to an Azure Active Directory domain (these are relatively new), and in the future, accounts protected by federated identity providers that will support the Fast IDentity Online (FIDO) 2.0 protocol.

Why is Windows Hello considered stronger than a traditional password? For one, security is always better in threes—the best method of authenticating is to prove something you have, something you know, and something you are. Windows Hello can satisfy all three: something you are (your face or your fingerprint, both exceedingly difficult to copy and use in a malicious way), something you know (the PIN that Windows Hello requires from the point of registration onward), and something you have (the enrolled device itself, which holds the cryptographic keys that stand in for your password).

What is most interesting is that all of these biometrics are stored on the local device only and are NOT centralized into the directory or some other authentication source; this means credential harvesting attacks are no good against Windows Hello-enabled accounts simply because the credentials do not exist in the place that would be hacked. While it is technically possible that an individual device’s trusted platform module, or TPM, could be hacked, an attacker would have to crack each user’s machine one by one rather than execute a single successful attack against one high-value machine: a vulnerable domain controller.

The security techniques involved in verifying the biometrics are rigid: special webcams or cameras designed to see in infrared can pick up the differences between a photograph of a person and the real presence of that person, and most laptop manufacturers now include Hello-compliant cameras in their corporate lines of devices. You can also purchase compliant cameras separately, making a staged rollout possible. Fingerprint readers are mature technology and have been around for years, but Microsoft indicates the newest generations of readers recognize prints on the first swipe far more reliably, eliminating the repeated swiping some previous models required; essentially all fingerprint readers compatible with any version of Windows can also be used with Windows Hello. It is important to note that you can use fingerprint readers, facial cameras, or both—whatever biometric you end up using is called the “gesture,” and the gesture is the action that begins the unlocking of public and private keys and the verification of a user’s identity.

The Registration Process

To use Windows Hello, you must register your user account so that Windows can generate the proper elements to replace the traditional password. First, the user configures an account on the device (or the administrator adds a user account to the device). The user authenticates the normal way during the registration process—using a user name and password—and the authentication source, most likely Active Directory, issues its standard yay or nay on that user’s credentials. The user can then enroll his or her PIN, which becomes inextricably linked to that device and that user account.

Windows then generates a key pair, a public half and a private half, and stores both either in the hardware TPM or, if the device does not have a TPM, encrypted in software. The authentication key is associated with just one biometric “gesture” at first—either a fingerprint, a face, or a PIN—and each subsequent gesture gets a different protector key that wraps around the authentication key. While the container is designed to hold only one authentication key, multiple copies of that single authentication key can be wrapped up with the different protector keys associated with the different gestures registered on the device. There is also an administrative key that Windows automatically generates so that credentials can be reset when necessary, and the TPM has its normal block of data as well, containing attestations and other TPM-related information.
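
To make the container idea concrete, here is a minimal conceptual sketch in Python, using the open source cryptography library, of one authentication key pair whose private half is wrapped by separate protector keys derived from each gesture. It is purely illustrative: the names, the key-derivation parameters, and the use of Fernet are my assumptions, and in a real deployment Windows performs the equivalent work inside the TPM rather than in application code.

    import base64
    import hashlib
    import os

    from cryptography.fernet import Fernet
    from cryptography.hazmat.primitives import serialization
    from cryptography.hazmat.primitives.asymmetric import rsa

    # The container's single authentication key pair (the public half goes to
    # the identity provider at registration; the private half stays local).
    auth_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
    auth_key_bytes = auth_key.private_bytes(
        serialization.Encoding.PEM,
        serialization.PrivateFormat.PKCS8,
        serialization.NoEncryption(),
    )

    def protector_for(gesture: bytes, salt: bytes) -> Fernet:
        # Derive a symmetric "protector key" from a gesture (PIN, fingerprint
        # template, face template); the parameters here are illustrative only.
        derived = hashlib.pbkdf2_hmac("sha256", gesture, salt, 200_000)
        return Fernet(base64.urlsafe_b64encode(derived))

    # One copy of the same authentication key, wrapped per registered gesture.
    container = {}
    for name, gesture in {"pin": b"482916", "fingerprint": b"<template bytes>"}.items():
        salt = os.urandom(16)
        container[name] = (salt, protector_for(gesture, salt).encrypt(auth_key_bytes))

Unlocking with any one gesture recovers the same authentication key, which is why registering a second gesture does not create a second identity.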

After the PIN is established and these keys are created as just described, the user can authenticate to the device in a trusted way, and Windows will then let him or her register a biometric gesture such as a fingerprint or a face.

Enforcing Windows Hello for Business through Group Policy

As you might imagine, you set up Windows Hello and enforce it throughout the enterprise organization through the use of Group Policy. Within the Group Policy Management Console, you can find policy settings under Policies / Administrative Templates / Windows Components / Windows Hello for Business in both the User Configuration and Computer Configuration nodes. The important policies to configure are:

  • Use Windows Hello for Business: set this to Enabled to get started with the deployment.
  • Use biometrics: set this to Enabled to allow gestures instead of supporting only a PIN.

Alternatively, if you already have a mobile device management solution deployed, then you can use MDM to force the deployment of Windows Hello. The policies use the PassportForWork configuration service provider, which is like a template of potential settings that you will need to import into the MDM solution before you can begin configuring and enforcing policies.
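
If you want to spot-check whether the policy has landed on a given workstation, a few lines of Python can read the policy registry location. This is a minimal sketch, and the registry path and value name are assumptions based on the PassportForWork policy provider; verify them against Microsoft's documentation for your Windows build before relying on this.

    import winreg

    # Assumed location of the Windows Hello for Business policy settings;
    # confirm the key and value names for your environment.
    POLICY_KEY = r"SOFTWARE\Policies\Microsoft\PassportForWork"

    try:
        with winreg.OpenKey(winreg.HKEY_LOCAL_MACHINE, POLICY_KEY) as key:
            enabled, _ = winreg.QueryValueEx(key, "Enabled")
            print("Windows Hello for Business policy enabled:", bool(enabled))
    except FileNotFoundError:
        print("No Windows Hello for Business policy applied to this machine.")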

Key Points to Consider

Some important points to remember:

  • Credentials enrolled in Windows Hello for Business can be bound to individual laptops, desktops, or devices, and the access token one gets after successful credential verification is also limited to that single device.
  • During an account’s registration process, Active Directory, Azure AD, or the Microsoft account service checks and authenticates the validity of the user and associates the Windows Hello public key with the user account. The keys—both the public and private halves—can be generated in a TPM (version 1.2 or 2.0) or they can live in software for devices without the right TPM hardware. The Windows Hello gesture does not roam between devices and is not shared with the server; it is stored locally on a device and never leaves it. When the PIN is entered or the face and/or fingerprint is applied, Windows 10 uses the private key stored in the TPM to sign data transmitted to the authentication source.
  • According to Microsoft: “Personal (Microsoft account) and corporate (Active Directory or Azure AD) accounts use a single container for keys. All keys are separated by identity providers’ domains to help ensure user privacy.” In practice, this means that keys get commingled within one secure container, although they are delineated by their native identity provider so that the wrong key is not sent to the wrong provider.

Sidebar: Why a PIN and not a password?

At first blush, a PIN seems like a password but worse: shorter, probably all one type of character (usually numbers), and most likely reused across a number of different places, including bank accounts, access to your mobile phone or tablet, and so on. However, how the two credentials are verified during authentication makes all the difference. Passwords are transmitted over the network to the authentication source, where they are validated and either accepted or rejected. Because this transmission happens over the network, anyone with the right tools can snoop in, capture the credentials, and reuse them anywhere. And as we discussed earlier, if all of the passwords are stored centrally, one attack can potentially compromise all of them. In Windows Hello for Business, the PIN is the gatekeeper that unlocks a cryptographic key bound to the TPM in one individual machine. The PIN only works on the local device and does not enable authentication of any kind from any other place.
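
The difference is easy to see in miniature. With a password, the secret itself travels to the server; with Windows Hello, the PIN merely unlocks a local private key, and only a signature over a server-supplied challenge crosses the wire. Here is a rough sketch of that challenge/response pattern in Python using the cryptography library; it shows the general idea, not Microsoft's exact protocol.

    import os

    from cryptography.hazmat.primitives import hashes
    from cryptography.hazmat.primitives.asymmetric import padding, rsa

    # Device side: a private key that the PIN unlocks; the public half was
    # registered with the identity provider ahead of time.
    private_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
    public_key = private_key.public_key()

    # Server side: issue a one-time challenge.
    challenge = os.urandom(32)

    # Device side: sign the challenge; the PIN and private key never leave the device.
    signature = private_key.sign(challenge, padding.PKCS1v15(), hashes.SHA256())

    # Server side: verify with the registered public key (raises if invalid).
    public_key.verify(signature, challenge, padding.PKCS1v15(), hashes.SHA256())
    print("challenge verified; no reusable secret ever crossed the network")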

Active Directory Requirements

Fully enabling Windows Hello for Business will most likely require you to add at least one Windows Server 2016 domain controller to your domain. While you do not have to raise your domain or forest functional level, the 2016 DC will light up some required authentication functionality. One alternative to shelling out for a 2016 license is to use Azure Active Directory to deploy Windows Hello.

There is detailed information about exactly what is required from a prerequisite standpoint on the Microsoft website: https://docs.microsoft.com/en-us/windows/access-protection/hello-for-business/hello-manage-in-organization. In particular, pay close attention to the key-based authentication requirements and the certificate-based authentication requirements; if you already have a public key infrastructure deployed in production, the certificate-based authentication method will be much easier to start with. If you are largely cloud oriented, then the key-based authentication method is the one to go with for your first Windows Hello deployments.

The Last Word

Security experts have been calling for the death of passwords as we know them for years, but that prediction has always been troubled by the lack of a seamless, affordable, user-friendly alternative for authenticating against systems. In practice, it was always going to take Microsoft putting biometric features inside Windows, the most popular operating system, to spur enough organizations to look into passwordless authentication, and it appears with Windows 10 that the Redmond software giant has done Just Enough to warrant the attention of enterprises everywhere. While it is unlikely your shop is in a position to remove passwords entirely, new machines you deploy can work with this option by default, and as you migrate to Windows 10 over time at your own pace, you can slowly but surely work Windows Hello for Business into your security profile.

A Look at the Microsoft Azure Internet of Things (IoT) Solution

An edited version of this story ran on Network World on September 7, 2017. Credit: Network World/Computerworld.

The Internet of Things, a vast network of connected microdevices, sensors, small computers, and more, throwing off data every second and sometimes even more often, is all around us. Whereas before we had dumb roads, dumb cities, dumb devices, and the like, now all of those things can talk to us and tell us how they are doing, what their current status is, what their environment is like, and even what other devices they know are nearby—and they chatter all of this back to you. All of this is really coming to a head now because sensors are cheap, their processors are cheap, wireless is everywhere and becoming less expensive, and there are tremendous resources for storage and analytics out there. So how do you deal with this phenomenon and take it by the horns to make it start working for you?

 

And deal with it you must, because all of that data is coming at you. Cisco research projects that there will be 50 billion connected devices by 2020, and all 50 billion of those will be sending off sensor data somewhere between once a second and once a minute. How much data is in each of those payloads? Assume it’s 500 bytes, to allow for some packet overhead: that means 25 terabytes of data being spun off somewhere between every second and every minute. What if Cisco is half wrong? What if Cisco is twice right? One thing is for certain: there will be a tremendous amount of data, and with that data come projections from Gartner, IDC, and Forrester that show a multi-trillion-dollar opportunity in both cost savings and new revenue from the IoT.
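
The back-of-the-envelope math is worth running yourself; a few lines of Python make the scale obvious (the device count and payload size are the assumptions above, not measurements):

    DEVICES = 50_000_000_000   # Cisco's 2020 projection
    PAYLOAD_BYTES = 500        # per report, including packet overhead

    per_interval_tb = DEVICES * PAYLOAD_BYTES / 1e12
    print(f"{per_interval_tb:.0f} TB per reporting interval")   # 25 TB
    print(f"{per_interval_tb * 86_400:,.0f} TB per day if every "
          f"device reports once a second")                      # about 2,160,000 TB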

 

One other factor is that IT is starting to fade into the background a bit, at least as the central place where technology projects begin. Often business units and individual departments are driving technology efforts themselves, made possible by the fact that there are now cloud products out there that let you plunk down a corporate purchasing card and get started in minutes. Hire a data scientist contractor, outfit your kit with some sensors, start up a cloud service account, and before long you could have several terabytes of sensor data and be ready to get fired up. Microsoft has been innovating in this particular area with its Azure service, and it has some compelling offerings.

 

The Microsoft Azure Internet of Things Story
The Azure IoT Suite is designed to be a quick-start portal—a true example of a platform as a service (PaaS) offering—that gives you the resources necessary to deal with all of the data being sent back to you while also allowing you to manipulate it, develop some level of understanding of it, and use it to either improve your business processes, solve some nagging problem that you have deployed a boatload of sensors to track and mitigate, or build a model of the way certain parts of your assets behave.

 

NOTE: there is also a newer software as a service (SaaS) offering called Microsoft IoT Central, which takes the platform level stuff away and focuses only on the software that powers the sensors and connects everything together. Manufacturers can build their own SaaS-based IoT solutions hosted on the Azure IoT cloud service and get their solutions to market more quickly without having to reinvent the plumbing, the platform, and more. There’s also the very new (as in spring 2017) Azure IoT Edge suite, which lets programmers develop logic for small computers and sensors on the “edge” of an IoT environment in convenient languages like Java and C# rather than assembly and other more obscure languages. In this story, however, I will focus on the Azure IoT Suite because I think it more clearly highlights the capabilities of the overall platform.

 

The Azure IoT Suite itself bundles together a bunch of relevant Microsoft Azure services into a surprisingly simple package. It starts off by letting you create a couple of ready-made IoT consumption scenarios, including predictive maintenance and remote monitoring, and automatically orchestrates the various Azure products, such as databases, websites, web services, the data ingestion point, and more, creating and linking them together so that you are ready to go from square one. For example, for the remote monitoring solution that you can start with as a preconfigured package, Azure selects and configures the following services, handling the provisioning process automatically:

  • Azure IoT Hub (1 high-frequency unit, also called an S2 unit)
  • Azure Stream Analytics (3 streaming units)
  • Azure DocumentDB (1 S2 instance)
  • Azure Storage (1 GRS standard, 1 LRS standard, 1 RA-GRS standard)
  • Azure App Services (2 S1 instances, 2 P1 instances)
  • Azure Event Hub (1 basic throughput unit)

 

Each of the other solutions has a different makeup, but you get the idea: everything you need with just a couple of clicks.

 

The pitch from Microsoft is that while you might have the internal resources to do a couple of IoT-style projects today, as you build on those developments, create new models, deploy more sensors, and in general double down on IoT in the future, you probably will not be able to handle all of that data, at least not cost effectively. You will either be forced to invest in expensive storage infrastructure on premises, or you will have to make problematic choices about what data to keep online, what data to roll up, summarize, and archive, and what data to discard. And of course, when you discard data, you cannot get it back, so you might be losing out on some predictive capability you do not know about yet; if you roll up and summarize data, you lose the granularity and resolution necessary to build some advanced mathematical models and use machine learning.

 

Instead, you can start right out in the cloud and take advantage of the tremendous scale of resources that Azure already has—and that is growing quite a bit each year. Instead of spending on disks and compute, you just pay for the Azure services and run time that you consume with your project, and you can scale up or scale down as your needs change. Even better, Microsoft is starting to make some of the glue work, so you can see the day when your Azure IoT data could be integrated with Power BI, for example, so that your regular knowledge workers (as opposed to trained mathematicians and data scientists) could query your data sets using natural language and get results back in a nice, graphical, easy-to-consume format. All of that glue and linkage would be much harder to create in an on-premises environment, and I think Microsoft here is betting that IoT initiatives are new and green enough in most enterprises that it is not difficult to start them off in the cloud—or at least not as difficult as, say, deciding to move SharePoint into the cloud. In fact, right now, the Azure IoT tools integrate with the Cortana Analytics solution, which provides data science, number crunching, and machine learning tools, and you can then inform your business processes of the insights you derive by linking Cortana Analytics with the Microsoft Dynamics suite of enterprise resource planning (ERP) tools.

 

Imagine this type of scenario: you operate a fleet of large transportation vehicles, each equipped with two or more really expensive engines. These engines can be instrumented with sensors that report quite a bit of information, including fan speed, oil pressure, ambient temperature, fuel pressure, thrust at any given moment, air quality, vibration, and more. You start collecting this data across the thousands of engines in your fleet and correlating it with maintenance records, failure notices, mechanical delays that interrupt the scheduled service you normally deliver with your fleet, and more. Over time and with some math, you are able to build a model showing that certain engine components are likely to fail after a certain number of cycles. You can learn which components those are, order those parts in advance, and adjust the fleet deployment schedule so that those parts can be replaced when the equipment is nearby, reducing interruptions and saving the cost of ferrying parts all around your locations. This is the kind of model you can build with Azure IoT Suite (and it happens to be one of the sample models you can run as you get started with your account).
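
For a sense of what “some math” can look like at its simplest, here is a hedged sketch: a plain scikit-learn classifier trained on hypothetical per-cycle engine telemetry joined to maintenance records. The file name, column names, and the 50-cycle horizon are all invented for illustration; inside Azure you would more likely do this with the platform's machine learning tooling than on a laptop.

    import pandas as pd
    from sklearn.linear_model import LogisticRegression
    from sklearn.model_selection import train_test_split

    # Hypothetical export: one row per engine per cycle, with sensor readings
    # plus a label marking whether that engine failed within the next 50 cycles.
    df = pd.read_csv("engine_cycles.csv")
    features = ["fan_speed", "oil_pressure", "ambient_temp",
                "fuel_pressure", "thrust", "vibration"]
    X, y = df[features], df["fails_within_50_cycles"]

    X_train, X_test, y_train, y_test = train_test_split(
        X, y, test_size=0.2, random_state=0)
    model = LogisticRegression(max_iter=1000).fit(X_train, y_train)
    print("holdout accuracy:", model.score(X_test, y_test))

    # Engines the model flags today become next month's parts order.
    at_risk = X_test[model.predict_proba(X_test)[:, 1] > 0.8]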

 

As far as the sensors go, last October Microsoft launched its Azure IoT Suite Device Catalog [https://catalog.azureiotsuite.com/], which showcases more than 500 devices from more than 200 partner manufacturers that are all certified to work with the Azure IoT suite. On the developer and software side, the Azure IoT suite is a full-scale member of the Azure service, and thus works with Visual Studio, Eclipse, Chef, Puppet, GitHub, PowerShell, Python, MongoDB, Hadoop, Ruby, Docker, MySQL, and anything else that is part of the set of compatible offerings and capabilities with Azure.

How It Works

You can get started by heading over to https://www.azureiotsuite.com and logging in with your Microsoft account. There you can either use your current MSDN Azure benefit or set up a new subscription, and then you’ll be presented with the Provisioned Solutions page, which is the first page of the Azure IoT Suite package itself. Then, follow these steps.

 

  1. Click Create a new solution to build your own IoT “workspace.”
  2. You can then choose from a couple of different preconfigured solution types, including “connected factory,” “predictive maintenance,” and “remote monitoring.” For this walkthrough, I’ll show you remote monitoring, so click that option.
  3. The Create “remote monitoring” solution screen appears. Here is where you enter a friendly name for this workspace, the Azure region in which all of this should be spun up (you would ideally want the region closest to either you or your sensors to reduce latency), and the Azure subscription to which all of this should be billed. You can find pricing information for all of the components of Azure that the IoT suite will provision at https://azure.microsoft.com/en-us/pricing.
  4. Click Create solution, and then grab a cup of coffee while Azure spins up all of the resources it outlined.
  5. After the provisioning is complete, you’ll be back at the Provisioned Solutions screen, and your friendly named workspace will be shown there with a green check mark. Click the Launch button to get inside.
  6. You’ll be greeted with the dashboard screen. This shows a map of the Seattle area with four sensors geoplotted, each with a colored indicator (green or red). These sensors are simulated, just to give you an idea of the type of dashboard you can build with your own sensor deployment. On the right side, you can see the Device to View menu, which gives you a drop-down selector where you can pick individual sensors to examine. On the lower left side, there’s the Alarm History section, which shows sensors that have crossed some predefined problem threshold, and on the lower right you see speedometer-style gauges that show various properties and values that the sensor fleet is reporting.
  7. On the left side, click Devices. This gives you a grid-style list of devices. You can use the “+” button in the lower left to add a new sensor, which can be either another simulated device or a physical device with a SIM card (ICC ID) for cellular connectivity or access to a wireless connection. You can also modify the properties the simulated sensor reports to the workspace, including model and revision number, asset tag, or anything else you like.
  8. On the left side, click Rules. You can add new rules that operate on the two existing data fields, temperature and humidity, and set the thresholds that apply to those rules. This area is what kicks off those alarm history items on the dashboard; note that if a device is alarming, its status on the map changes from green to red to make it easy to identify. (A minimal sketch of a device sending this kind of telemetry follows this list.)
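
To replace the simulated devices with real ones, each sensor just needs to push its readings to the solution’s IoT Hub. Here is a minimal sketch using today’s azure-iot-device Python SDK; the SDK choice and the placeholder connection string are my assumptions for illustration, and the preconfigured solution issues each registered device its own credentials.

    import json
    import random
    import time

    from azure.iot.device import IoTHubDeviceClient, Message

    # Placeholder: copy the real device connection string from the IoT Hub
    # device registry created by the preconfigured solution.
    CONNECTION_STRING = "HostName=<your-hub>.azure-devices.net;DeviceId=<id>;SharedAccessKey=<key>"

    client = IoTHubDeviceClient.create_from_connection_string(CONNECTION_STRING)

    # Send the same two fields the sample rules act on: temperature and humidity.
    for _ in range(10):
        payload = {"temperature": 20 + random.random() * 15,
                   "humidity": 40 + random.random() * 20}
        client.send_message(Message(json.dumps(payload)))
        time.sleep(5)

    client.disconnect()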

 

That’s a quick walk around the preconfigured solution, but the key thing to remember is that all of this is live in Azure. You can go and adjust any of it, from the configuration of the dashboard to the way resources talk to each other to anything else; you manage all of this from within the Azure portal, the same as any other Azure resource. If you’re looking for a remote monitoring solution, this preconfigured package saves you a lot of effort in getting the right pieces in place: start there, tailor it, and build on from there. There’s no additional charge beyond the resources the solution spins up to run itself; the design and code are free.

 

The Last Word

Microsoft has a robust set of tools for integrating all sorts of devices into an IoT solution. They have more scale than you do and work with a wide variety of devices. If you are building an IoT solution, then you owe it to yourself to play around with what Azure IoT can do. I have not seen a solution in this space where it is easier to get started and make your own tweaks to build your own workspace. Worth a look.

Is Windows to Go a Good Solution for the International Airline Laptop Bans?

An edited version of this story ran on Computerworld.com on August 8, 2017. Credit: Computerworld

Often Microsoft presents technological solutions to problems that only a tiny percentage of its customer base has. Windows to Go was just such a feature—a nice solution to a problem that was virtually non-existent back when it was first released in 2011. However, six years later, that non-existent problem could very well be widespread.

What is Windows to Go? It’s a way to take a Windows installation with you on a USB thumb drive. You pop that thumb drive into any computer, boot from the USB, and your personalized installation of Windows—with all of your applications and files and access to corporate resources—is there. When finished, shut down, unplug the USB thumb drive, and away you go. It’s essentially portable Windows.

Windows to Go becomes more attractive in a world that seems to find traveling with electronics to be a security threat. You probably recall the recent news of the ban on laptops on flights entering the United States from selected Middle Eastern countries as well as, more recently, flights coming from Europe. While this ban was lifted, more stringent security protocols are reportedly being developed for both domestic and international flights. We could soon be entering a world where laptops are either checked into the baggage hold at airports without fail or not brought on trips at all—or a world in which officers at ports of entry demand access to electronics for either cursory or in-depth examinations. Having a nice USB thumb drive tucked away somewhere could be a real asset.

Having laptops subject to examination, or possibly locked away outside of an employee’s purview, has obvious implications for enterprises around the world. Many organizations have security policies that prohibit employees from leaving their corporate laptops unattended. Many organizations do not, as a matter of policy, encrypt the local hard drives of laptops they issue to employees. (This is very obviously a mistake in today’s world, but that does not change the reality of the situation.) Many organizations send field workers into some very remote and insecure areas of the world, often with real business assets and trade secrets stored in digital form on workers’ laptops.

These types of security protocols make it more likely that you will be separated from your laptop. Your business travelers have to put notebooks with company secrets somewhere outside their direct control, and they have virtually no say in what happens to those notebooks when they are outside your travelers’ fields of vision. For most enterprises, this is far too much risk.

But that risk is a lot lower when you take Windows with you on a thumb drive and worry about the actual PC you use whenever you get to where you are going. Let’s learn a little more about Windows to Go.

What is Windows to Go?

Windows to Go was introduced in the Windows 8 release wave as an alternative to virtual desktop infrastructure: it is essentially a portable, entirely self-contained installation of Windows that you run from a USB thumb drive—that drive needs to be USB 3 in order to have the read, write, and data transmission speeds necessary for a modern computer to run an operating system off of it. But what you end up with, after you configure it properly, is an entirely self-contained, encrypted computing environment for a knowledge worker that fits in one’s pocket. You can pop it in your travel bag, in the car, even in your socks (if you are that type of person), and all you need to do is plug it into any reasonably modern PC, boot off the USB drive, and your OS, documents, wallpaper, personal settings, applications, and everything else is right there for you. This copy of the OS is managed by an IT department and thus can have VPN software on it, or, if you have configured DirectAccess, that copy of Windows can reach out over the Internet and retrieve its managed settings, Group Policy object configuration, and so on.

There are some key differences between Windows to Go, in its default configuration, and a similar copy of Windows installed on a regular fixed drive in a PC:

  • The local drive in the computer on which Windows to Go is run is hidden by default. This keeps whatever crap is on the local system from seeping its way onto the Windows to Go USB drive and helps users properly save and retrieve documents to the USB stick. You can disable this functionality, but it is more secure to leave the hiding feature on.
  • Upon the first boot on a new Windows to Go target computer (that is, the “guest hardware” into which you plug the Windows to Go USB stick), a process goes through and identifies the right hardware drivers for the target system and enables and installs them. This process may reboot the computer several times, after which the boot process will proceed straight into Windows.
  • Windows to Go detects drive removal. Windows in this configuration will pause the whole computer if it detects the USB drive is gone and then will shut itself down after 60 seconds if the USB drive is not reinserted into the target machine. This is to prevent folks from using their copy of Windows to Go at, say, an airport kiosk and then quickly just removing the stick without shutting down the computer—a scenario in which bad actors could then access a logged-in corporate desktop. With this feature, the whole computer shuts down rather than leave access open for others. If the USB drive is reinserted within 60 seconds, then operation continues as normal.
  • Access to the Windows Store is disabled by default, but it can be reenabled through a Group Policy object change.

 

Otherwise, Windows to Go behaves identically to Windows fully installed on a fixed computer. The added convenience is simply that you can unplug the stick and migrate it to any other device in the future.

Deploying Windows to Go

It is not much more work to deploy Windows to Go than it is to release images of any version of Windows these days—your current toolset, like DISM and ImageX, will work just fine. All you need is the correct USB drive hardware, a Windows Enterprise image, and a Windows Enterprise host computer to write and provision the Windows to Go image to the USB stick. It is possible to scale this deployment process using some PowerShell scripts so that you can make multiple sticks at once, in case these new regulations have caught you off guard and you need a solution, like, yesterday. There is a very comprehensive guide to deploying Windows to Go USB sticks on TechNet, including these scripts, and I heartily recommend walking through the process so you get a feel for the steps needed to complete the provisioning. [http://social.technet.microsoft.com/wiki/contents/articles/6991.windows-to-go-step-by-step.aspx]
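
If you want a feel for the core of what those scripts do, the provisioning boils down to applying an Enterprise image to the stick and writing boot files. Below is a rough sketch that wraps the two key commands from Python; the paths and drive letters are placeholders, partitioning with diskpart is omitted, and the TechNet guide’s PowerShell scripts remain the canonical route.

    import subprocess

    WIM_IMAGE = r"C:\images\install.wim"  # placeholder: Windows Enterprise image
    OS_DRIVE = "W:"                       # placeholder: NTFS partition on the USB 3 stick
    BOOT_DRIVE = "S:"                     # placeholder: small FAT32 system partition

    # Apply the Enterprise image to the stick's OS partition.
    subprocess.run(
        ["dism", "/Apply-Image", f"/ImageFile:{WIM_IMAGE}",
         "/Index:1", f"/ApplyDir:{OS_DRIVE}\\"],
        check=True,
    )

    # Write boot files so the stick starts on both BIOS and UEFI hardware.
    subprocess.run(
        ["bcdboot", f"{OS_DRIVE}\\Windows", "/s", BOOT_DRIVE, "/f", "ALL"],
        check=True,
    )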

After the sticks are created, you just hand them out to your users and tell them to boot off of the USB. You can see where this will come in handy in these banned laptop scenarios—there is no ban on a USB thumb drive, so you have a couple of options:

  • Take a loaner laptop with you that has no operating system installed at all—in other words, a bare metal laptop. You can allow that to be checked according to the airline’s procedure. When you arrive at your final work destination, plug in your thumb drive, which has never left your possession, and carry on. Of course it could also have a simple installation of Linux or Windows on it; it really does not matter as you would never boot into it.
  • Use Windows to Go in a business center at a hotel or convention center. Since the computer reboots to boot into Windows to Go, you don’t have to be concerned with software keyloggers or other runtime-based malware. Of course, it is possible for a hardware keylogger to be installed on a keyboard, so you must weigh your current acute need for computing access against the threat profile you and your business have identified.
  • Purchase burner equipment at your final destination and return with it or destroy it. You can pick up any cheap laptop at any office supply store and it would be sufficient to run Windows to Go. If you are going to a reasonably populated area, $200-300 can be invested in a cheap laptop into which you can then insert your bootable stick and be off to the races. You can then either bring the laptop home with you or dispose of it; for maximum security, disposal is the better option.

 

There is currently a list of officially supported Windows to Go USB drives, which you can find at the Microsoft website [https://technet.microsoft.com/en-us/library/f82d1a0a-d8f7-4e8a-86a6-704166969a42(v=ws.11)#wtg_hardware]. I can recommend the IronKey Workspace W300, W500, and W700 options in particular, as I have hands-on experience with those models, and they have additional security features like boot passwords and self-destruction capabilities for hard-core security buffs. However, you can use devices that are not officially certified, and most likely they will work fine as long as they are USB 3 devices. In fact, one of the officially certified devices—the Kingston DataTraveler—is off my recommended list because it became scorching hot in my tests after less than an hour of usage in a Windows to Go scenario.

Licensing Windows to Go

Of course, the technical brilliance of this solution is obscured by the money grab that is Microsoft licensing, except that, given recent events, businesses may have little choice but to pony up for the additional expense.

Windows to Go is part of the Software Assurance program, that bundle of additional benefits and license flexibility that you get by forking over about a 33-40% premium on top of the cost of the license in question. The benefits of SA differ depending on whether your license is for a consumer or a server operating system and also whether this is for server application software or business applications like Office.

For operating systems, Windows to Go is part of the Windows SA benefit package. But of course you also have to decide if you want to license per device or per user. If you are licensed per device with Windows SA, then you can use Windows to Go on any third-party device while off site. If you license Windows SA on a per-user basis, then you can use Windows to Go on any device. With both methods, you can also use Windows to Go on a personally owned device, but not while you are on a corporate campus. (This has to do with roaming benefits, or the ability to take a copy of the software you use at work and put it on your home machine.)

The Last Word

Windows to Go may have been ahead of its time, but it is certainly a competent solution for organizations that have more than a few regular international travelers getting caught up in these recent laptop bans. The great thing about using Windows to Go in these scenarios is that it maintains your security profile, is only minimally more inconvenient for your travelers, and is easy to retire if and when these bans are lifted. Give it a look.

Exploring SSH Key Rotation with Thycotic Secret Server

Everyone knows, or should be aware of, the huge security issues surrounding the Windows administrator account, and there is a ton of guidance on how to properly secure that particular account, how to avoid password compromises, and how to use alternatives. But there seems to be much less intense focus on securing privileged accounts on Linux, and especially on the idea that SSH keys are much like passwords in their absolute need to be protected at all costs.

SSH keys start out the security race ahead of passwords because they come in two parts that do not live in the same location all the time: a private key, which is often protected by a passphrase, and a public key. The server uses the public key to cryptographically verify that the connecting user holds the matching private key before authenticating that user to the system. This avoids the wholesale compromise of credentials just by gaining access to a system, since all that system would hold is the public keys for authorized users. The other half of the puzzle, those users’ private keys, does not exist on the system anywhere.

Unfortunately, out of convenience or perhaps, at the time, sheer necessity, the same private keys get reused across multiple machines. This might be to make it easier for developers and testers to connect to a farm of Linux or Unix machines before they are moved collectively into production, or it may be that a single administrator is responsible for all of them and feels that he can properly secure that one single, well-formulated private key. It may even be an artifact of cloning a bunch of new virtual machines from an existing “template” image, or a relic of being able to easily automate a bunch of actions across a number of different machines using a single private key.

But of course, there is more to it than just one unique individual private key per user per machine. Those keys ought to be rotated and changed out, just like passwords should. This is for a variety of reasons: first, systems that contain private keys can be compromised, or the passphrases that protect the private keys can be compromised. Especially as news breaks today that the WPA2 wireless encryption technology has been broken and unprotected resources behind a vulnerable access point could be compromised, it is more important than ever to never fully trust a static key, password, phrase, or anything else, and make sure that secret is changed regularly.

The process of rotating keys is simple, but carrying it out is definitely not easy. Essentially it is a four-part process: you need to generate a new key pair, install the new public key into the authorized_keys file on all of your machines, test the new key, and then remove the stale key from that same authorized_keys file.
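
To make those four parts concrete, here is a minimal sketch of a do-it-yourself rotation using the paramiko library. The hostnames, usernames, and file names are placeholders, there is no error handling, and this is exactly the sort of bookkeeping the rest of this piece argues you should hand off to a purpose-built tool.

    import paramiko

    HOSTS = ["10.0.0.11", "10.0.0.12"]   # placeholder fleet
    USER = "deploy"                      # placeholder account

    old_key = paramiko.RSAKey.from_private_key_file("old_id_rsa", password="old-passphrase")
    new_key = paramiko.RSAKey.generate(4096)
    new_key.write_private_key_file("new_id_rsa", password="new-passphrase")
    new_pub_line = f"ssh-rsa {new_key.get_base64()} rotated-{USER}"
    old_pub_b64 = old_key.get_base64()

    def run(client, command):
        # exec_command is asynchronous; wait for the exit status before moving on.
        _, stdout, _ = client.exec_command(command)
        stdout.channel.recv_exit_status()

    for host in HOSTS:
        # Steps 1 and 2: connect with the old key and install the new public key.
        ssh = paramiko.SSHClient()
        ssh.set_missing_host_key_policy(paramiko.AutoAddPolicy())
        ssh.connect(host, username=USER, pkey=old_key)
        run(ssh, f"echo '{new_pub_line}' >> ~/.ssh/authorized_keys")
        ssh.close()

        # Step 3: prove the new key works before removing anything.
        test = paramiko.SSHClient()
        test.set_missing_host_key_policy(paramiko.AutoAddPolicy())
        test.connect(host, username=USER, pkey=new_key)

        # Step 4: strip the stale key from authorized_keys (using '#' as the
        # sed delimiter because base64 text can contain '/').
        run(test, f"sed -i '\\#{old_pub_b64}#d' ~/.ssh/authorized_keys")
        test.close()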

Brave folks use Ansible to take care of system administration tasks like these in an automated way, but then you need to spend time thinking about playbooks and plays and trying to make all of those pieces play together nicely. That all works well enough, and for the smallest deployments with the most limited of budgets it is a reasonable solution. But what about when the powers that be want proof your Linux or Unix endpoints are correctly protected this way? What if you need to easily control who can access those keys from an administrative staffing perspective? You need a stronger tool, and the folks at Thycotic have a proposition: why not use software designed from the start to automate the protection of our most sensitive privileged accounts and let it take care of the dirty work for you?

Thycotic Secret Server to the Rescue

Enter their Secret Server product, and specifically its SSH Key Management function. This program will generate new keys and rotate them out either on demand or on a predetermined schedule; control, via a role-based access control scheme and robust permissions, which users have access to which keys; and provide a complete audit trail of which keys were sent where and when, and who used them to access which systems at what date and time. Cobbling that together with Ansible would be a lot tougher and not nearly as seamless.

Thycotic asked me to spend some time evaluating this aspect of Secret Server and share my thoughts on the product.

My objective: get SSH key rotation going among 25 different Ubuntu virtual machines I have running in Amazon Web Services as part of another project. I installed a trial version of Secret Server on a Windows Server 2012 R2 box and added Microsoft SQL Server 2014 Express at the prompting of the installer. After completing the setup wizard, which was reasonably simple (although I did have to close and restart it due to an HTTPS binding issue, which it automatically fixed), I went to the console, popped in my license information, and enabled the Remote Password Changing settings. Then I set out to create a new secret in Secret Server. Reading the Quick Start guide, I went with the Unix Account (SSH Key Rotation) template. On the template screen, I entered a friendly name for an account on one of those 25 machines as well as its username and password. I uploaded the private key file in .PEM format and entered its passphrase. I clicked the button and then watched for the Last Heartbeat field to say “Success” with an associated timestamp – that told me I was ready to go. I also liked that I could just launch PuTTY and get signed in with one click – very convenient.

After that, I went to the Remote Password Changing tab and clicked Change Password Remotely. I generated a random password and then clicked Generate New SSH Key to get a new key. I also clicked the Generate button next to the Next Private Key Passphrase field to get a new protective passphrase. I clicked Change at the bottom and that was it. The key was rotated.

I went ahead and added a few of the other machines I was working with on that project, and it all worked the same way as the first one: add the secret, kick off the change, be done. It was a little work to get it set up, but now I have a tool that lets me rotate keys with, what, three clicks? Four? All from a single point of management.

Pretty neat tool.