The Role of Agents in a Software Defined Network

An edited version of this story ran on Cisco.com.

In this article, I want to talk about network agents—what some in the industry call the future of the software defined network. Agents provide a great deal of an SDN's flexibility and control, and the work they do largely determines how responsive and agile a software defined network can be. Let us dig in.

What are agents?

In almost all software defined networks, agents are the mechanism by which network applications expose their capabilities so that network controllers can modify their configuration at any given time. Network agents can live on the software defined network controller itself, providing a mechanism for the controller to talk to various applications through the controller's northbound interface. (These northbound interfaces are generally output only—they communicate outward to the individual SDN-aware applications.) Network agents can also live on different network elements, together comprising a datapath. All of these elements work together to forward traffic through the datapath's external interfaces or to process or terminate traffic within the element itself.


Where this all gets interesting is that one or more of these datapaths (an agent plus an engine) may live inside one physical network element—a "bucket" of welded-together communications resources that the SDN sees and manages as a unit. But it goes further: a datapath may also be defined across multiple physical network elements. This logical datapath doesn't care how datapaths map to actual physical assets, how those physical resources are managed or abstracted away, or how all of it is virtualized and interoperates; it is a completely different thing from physical wires, ports, and switches. The agent itself is responsible for communicating with the SDN controller about what the datapath is supposed to represent and how its engine is supposed to behave, and the datapath network agents carry out this communication across the network controller's southbound interface.

How are network agents deployed in production SDNs?

In some networks, administrators install virtual network agents on every hypervisor host (in this scenario, each hypervisor host is a network element). The virtual network agent on each host then directs traffic toward the virtual switch. If the virtual switch recognizes that particular flow of traffic and can match it with a prescribed action, it adjusts the configuration of the virtual machines on the host so that the flow exits through the host's physical network interface card. If the virtual switch doesn't recognize the flow, it sends a mapping request back to the agent, which queries a mapping service in the network controller to figure out where that traffic is supposed to go.
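
To make that lookup sequence concrete, here is a minimal sketch of the agent-side logic in PowerShell; the flow object, local flow table, mapping-service URL, and function name are all hypothetical illustrations rather than any particular vendor's API:

function Resolve-FlowAction {
    param($Flow, $LocalFlowTable, $ControllerUri)

    # Known flow: the virtual switch already holds a prescribed action for it
    if ($LocalFlowTable.ContainsKey($Flow.Key)) {
        return $LocalFlowTable[$Flow.Key]
    }

    # Unknown flow: the agent sends a mapping request to the controller's mapping service
    $mapping = Invoke-RestMethod -Uri "$ControllerUri/mappings/$($Flow.Key)"

    # Cache the answer locally so subsequent packets in this flow match immediately
    $LocalFlowTable[$Flow.Key] = $mapping.Action
    return $mapping.Action
}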


In other scenarios, network agents are key to implementing virtual functions on the network. Workloads like load balancers, firewalls, threat monitors, intrusion detection, packet capture, and traffic forwarding can all be implemented by network agents residing in different network elements. The controller can deploy, activate, deactivate, and redeploy these agents as traffic patterns demand, and large-scale SDNs can deploy multiple copies of these objects for multitenant scenarios or to respond to a breach, a distributed denial of service attack, an unusually large load during a spiky peak period, or another one-time occurrence.

Putting it all together

The important thing to remember about agents is actually what distinguishes a software defined network from a physical network: the physical components, the discrete switches and patch cables and ports and whatnot, are completely abstracted away, and the only networking we care about, the networking these agents serve, is the set of traffic "flows" created by applications.


Imagine how this would apply to human travel. SDN in that context is essentially the difference between worrying about interstates and side roads versus the concept of traveling from Los Angeles to San Francisco. SDN cares about the route, the flow, the purpose of the traffic and its outcome, rather than concerning itself with which exits to take off the interstate, where to turn right, or what to do if there is a traffic backup. By focusing on the flows, the need of packets and data to get from one application to another, we arrive at the concept of agents: the bits that take instructions from the control plane and translate them into action on the data plane.


In conclusion, agents are key to the SDN experience and perform the vital role of translating instructions from the control plane to the data plane.

Exploring the Chocolatey Package Manager for Windows

Note: An edited version of this article appeared on Computerworld.com on November 13, 2018.

I’ve administered both Windows and Linux systems for close to two decades now, and honestly while Linux is a fantastic operating system and very appropriate in many respects for many applications, I’ve long preferred Windows for its generally better ease of use and polish. But that doesn’t mean Linux hasn’t had features I’ve lovingly pined for and miss on Windows – and a package management solution is one of them. Luckily, there are a couple of solutions to this, and best of all, both are open source and free. Read on for more.

The Premise Behind Package Management Systems

Linux distributions have had package management options for a while. You have probably heard of Red Hat's RPM (Red Hat Package Manager) format, Debian's apt-get, and the yum package manager used by Fedora and many other RPM-based distributions these days. At their core, these package management systems all seek to achieve the same objective: analyze a system, determine what packages are necessary to run whatever software the user is requesting, find the latest compatible version of all of those packages, and install them in the correct order, ensuring they get laid down on the system successfully and that, after the 117 dependencies install, your text editor of choice is ready to run on your target system. I kid, but only a little bit.

Imagine bringing this over to Windows. Imagine the scenario: you are moving to a new system and setting it up properly, exactly how you like it. In the process, you are trying to find the latest version of Notepad++, for example, or any other reasonably popular utility. As it stands now, out of the box, you would Google for the site, find the download link, skip past all of the "featured offers" and near-malware that many sites like to bundle with their downloads, and then run the installer. After that, you might even discover you downloaded a 64-bit version when you installed a 32-bit version of Windows. Or maybe you found an old download link, and there are two newer versions out there. That whole sequence is not exactly rocket science, but it is trying.

Imagine, instead, that you could simply say

choco install googlechrome

from a PowerShell command prompt and, a few moments later, have a completely functional installation of Google Chrome. That would save a lot of time, right?

And what if you had software like Google Chrome installed and then wanted to upgrade it? What if you could use a command like

choco upgrade googlechrome

…and get an instant upgrade?

That is the power of package management, and that is what the Chocolatey package manager brings to Windows: a large and growing selection of carefully curated and maintained software packages that can be brought down and installed on your system with a simple three-word command. As of this writing, there are 3,958 community-maintained packages, and you can browse and search among them on the web at https://chocolatey.org/packages.

Where Chocolatey really shines is in the process of setting up and deploying new machines. If you have a fairly standard set of tools – Office 2013 or the 365 version of Office ProPlus, along with some other utilities like 7-Zip and your web browser of choice, be it Mozilla Firefox or Google Chrome – then you can absolutely script the setup of a new machine. Just deploy Windows through whatever method you care to use, and then once you have completed the installation and are at an administrative desktop, simply kick off a script that installs Chocolatey and then calls all of your software. Now you can get 70-90% of your software installs automated without having to do a bunch of imaging and packaging yourself, and all of the installations are done in a consistent, repeatable way—another benefit of scripting in general.
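
Here is a minimal sketch of such a setup script; the package IDs (googlechrome, firefox, 7zip, notepadplusplus) come from the public Chocolatey feed as of this writing, so verify them at https://chocolatey.org/packages before relying on this:

# Run from an elevated PowerShell session
# 1. Bootstrap Chocolatey itself (the same install command discussed below)
Set-ExecutionPolicy Bypass -Scope Process -Force
iex ((New-Object System.Net.WebClient).DownloadString('https://chocolatey.org/install.ps1'))

# 2. Lay down the standard toolset, answering yes to all prompts
choco install googlechrome firefox 7zip notepadplusplus -y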

Understanding the Pieces of the Chocolatey Puzzle

First, let us understand how all of the pieces fit together in this package management puzzle.

  • Chocolatey is a package manager that works with Windows – specifically, any version of Windows 7 or later that also has PowerShell installed, which covers the vast majority of clients in production environments today. It uses scripts, along with a specific software package format (more on that in the next bullet), to install applications on your system.
  • NuGet is that specific software package format. It has previously been used by Windows software developers to install software dependencies for their own development projects. These were typically libraries, bootstrap utilities, and more that existed as complete packages out on the Internet, ready to be re-used. NuGet was simply an easy way to get those dependencies put in the right place within individual products.
  • PowerShell is the command engine that makes the whole thing tick. It functions as the scripting language and execution environment for all of Chocolatey. Chocolatey uses PowerShell to grab software, packaged in NuGet format, to install applications for your entire system, not just development projects.

To get started on a test system, use one command, straight from the regular command prompt (be sure you have administrative rights to do this):

@powershell -NoProfile -ExecutionPolicy Bypass -Command "iex ((new-object net.webclient).DownloadString('https://chocolatey.org/install.ps1'))" && SET PATH=%PATH%;%ALLUSERSPROFILE%\chocolatey\bin

This one line of PowerShell actually does quite a few things, as you might expect. Let us walk through them individually.

  • It skips loading any custom PowerShell profiles you have configured (that is the -NoProfile switch), ensuring the command runs in a clean, predictable environment.
  • It sets the execution policy for just this one command to Bypass, so that the script downloaded from the Internet (in the case of this command, https://chocolatey.org/install.ps1) can be executed rather than blocked as potentially malicious.
  • It downloads that .ps1 script from the Chocolatey website, which is a preconfigured set of instructions that PowerShell can use to install the Chocolatey system on your local machine.
  • It also adds the default binary folder for the Chocolatey install to your PATH, so that you do not have to enter the full path to the Chocolatey commands every time you perform a package management task. Instead you can just enter "choco" and then whatever you need, and Windows and PowerShell will know where to find it.

The Chocolatey installation process requires the .NET Framework version 4.0 and will attempt to install it automatically as part of that PowerShell script. I have set this up on Windows 7 and Windows 8.1 machines, and the automatic installation of .NET succeeded both times, although some administrators have a deep-rooted aversion to installing multiple versions of the .NET Framework on systems due to potential application compatibility issues and the attendant security patching required for each respective version. This is an all-or-nothing proposition, however, and I think the benefits of Chocolatey outweigh the negatives of another framework version deployed.

Chocolatey in Action

Chocolatey is essentially an automated deployment wrapper. It does not intercept installations or somehow modify the setup process of an app. Rather, it automates the downloading of a setup file or package and then its execution, while suppressing any on-screen prompts from the app (a silent install). Here is an explanation direct from Chocolatey that goes into a little more detail:

“Chocolatey is a software management tool that is also a package manager. It functions fantastically well when the software is all included in the package and it doesn’t make use of native installers. However to approach the Windows ecosystem a package manager also needs to know how to manage actual software installations, thus why Chocolatey does that as well. For publicly available packages, copyright keeps from having binaries embedded in packages, so Chocolatey is able to download from distribution points and checksum those binaries.”

Some tips:

  • The command cinst works instead of choco install to save you some keystrokes and avoid carpal tunnel syndrome. So choco install googlechrome could also be cinst googlechrome.
  • Choco uninstall packagename removes a package from your system, but only the Chocolatey-installed instance. If, for example, you had a preexisting Git installation, then installed Git again through Chocolatey, running choco uninstall git would remove only the copy Chocolatey installed; the original deployment stays put.
  • Want to search for packages but do not want to browse the web? This is common on new server installs. Use choco search packagename to search the Chocolatey package feed.
  • The command choco outdated will give you a list of all of the packages on your system that have a newer version available from the repository; pair it with the upgrade command shown below.
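
And building on choco outdated, a single command will upgrade every Chocolatey-managed package on the system in one pass; the -y switch answers yes to the confirmation prompts:

choco upgrade all -y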

The Downsides of Chocolatey

As with every tool, there are negatives. Here are the most important ones for Chocolatey.

  • There is a malware risk to Chocolatey, however slight. Unless you run your own packaging team and configure Chocolatey's PowerShell scripts to retrieve packages only from your own private repository full of packages you have vetted and trust, you can never quite shake the knowledge that you are downloading packages from a third-party source. Those packages could contain malware, put there either deliberately by a nefarious actor on the contributor and maintenance team or, much more likely, inserted after a compromise of the Chocolatey repository itself. To my knowledge and as of this writing, this type of breach has not happened to Chocolatey. But "has not" is not the same thing as "could not" or "will not," so there will always be some risk. On the flip side, Chocolatey is very popular, and most of the package maintainers are diligent about their work. Since the whole shebang is open source, the "many eyes" principle means that problems and breaches would likely be discovered and mitigated quickly, and the resulting reporting would be transparent. Caveat emptor.
  • Package maintenance is not always quick. Remember, Chocolatey depends on a team of volunteer contributors to package up applications. When new versions of the core applications are released, it is not an immediate process to package up the upgrade and make it available in the Chocolatey repository. For popular applications like web browsers and some utilities, this is not a big issue: for one, that software generally has the ability to update itself, making Chocolatey's upgrade function a moot point, and the more popular an application is, the more active its Chocolatey package maintenance team generally is. For more obscure packages, the wait can be quite a bit longer, and some packages end up abandoned after a few years.
  • Sometimes install locations get screwed up, and there is no way to track it. Because Windows lets software developers lay their bits down on users' drives pretty much wherever they please, you wind up in situations where Chocolatey wants to put a particular piece of software in one location but the GUI installer you would download from the web wants to put it in another. Herein lies the problem: if you install something outside of Chocolatey, especially if that software installs itself in a directory Chocolatey does not know about, then Chocolatey will not think you have that software installed—so it will let you go choco install thatpieceofsoftware, and suddenly you have two installations of the same product on the same system. Sometimes that is not a big deal; in other situations, it cripples the software.

For Windows 10, There is PackageManagement

Part of the innovation happening in Windows 10 and Windows Server 2016 is the Windows Subsystem for Linux, or WSL. This is essentially a compatibility layer that lets Ubuntu and other popular Linux distributions run as "layers" or subsystems underneath the regular Windows 10 user experience. Regular Linux binaries work, and you can even install the X Window System and get separate graphical shells happening. (Who would have thought the year of the Linux desktop would actually be brought to us courtesy of Microsoft?)

Windows 10 and Windows Server 2016 also ship with a native package manager that works with PowerShell. That package manager is called PackageManagement, creatively, and it works with Chocolatey repositories, too. In fact, it is more of a package manager manager, because it works with multiple repositories and brings all of their capabilities and inventory together under a single tool.

To verify that PackageManagement is set up on your system, open a PowerShell prompt with administrative credentials and then type in the following command:

Get-Command -Module PackageManagement

Then, take a look at the universe of commands you have available to you.
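
If you would rather see just the cmdlet names than the full table, one quick way to list them (a standard PowerShell idiom, nothing specific to this module):

Get-Command -Module PackageManagement | Select-Object -ExpandProperty Name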

To find out which package repositories are registered, call the following command:

Get-PackageSource

You will probably find that only the regular PowerShell Gallery is there. We can add support for the entire Chocolatey repository with one single command, which bootstraps the Chocolatey package provider:

Get-PackageProvider -Name Chocolatey

To mark the Chocolatey source as trusted, so you are not prompted about it on every operation, use this command:

Set-PackageSource -Name chocolatey -Trusted

Now let us try to add some software. We can continue to use Chrome as an example. From the Get-Command output above, we saw that there is a Find-Package cmdlet, which ought to find a package in a repository. Let's search for Chrome:

Find-Package -Name chrome

In the results, you will see quite a few responses, all of them Chrome packages available in Chocolatey repositories. You can install any one of them by using the Install-Package cmdlet:

Install-Package chrome
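
To round out the basic lifecycle, two more cmdlets from the same module are worth knowing; a quick sketch, reusing the same package name:

# List every package PackageManagement knows is installed on this system
Get-Package

# Remove the Chrome package again
Uninstall-Package -Name chrome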

Unfortunately, one of the really nice features of raw Chocolatey, the upgrade command, is not available through PackageManagement. Perhaps one day it will be, but for now, this tool is only useful for installing or removing packages. And again, PackageManagement (which, for future reference, was codenamed OneGet during Windows 10's initial development) ships in the box only with Windows 10 and now Windows Server 2016, although earlier versions of Windows can pick it up via the Windows Management Framework 5.1 update. It is also important to note that PackageManagement can coexist with Chocolatey, so you can use whichever option makes the most sense. It is a very nice tool to have in the arsenal, and one that, especially for power users and developers, can make quick work of application installation and maintenance.

The Last Word

I suppose the best way to sum up Chocolatey is: it’s one of those tools where if you need it, you know as soon as you see it that you need it, and you want it right now. I’ve long searched for a lightweight tool (read: NOT System Center) to script software installations and make it easier to stand up development and test environments with the right programs and utilities. Bonus points that Chocolatey uses PowerShell to get its work done. Check it out now. http://chocolatey.org.

Secrets of Building a Successful Big Data Team

Note: an edited version of this story appeared on the Hortonworks blog on November 10, 2017.

Having a dedicated big data team is critical to achieving successful outcomes with the data you’re collecting. This post will discuss the elements of building a successful big data team and how this team operates within your organization.

Step One: Achieve Executive Buy-In

Ask any business school professor or seasoned project manager what one has to do first and they will all say the same thing: obtain executive buy-in. The most fundamental part of any business transformation is buy-in and enrollment from the top. A top-down approach, with approvals from senior leadership, is the first key step in building a big data practice in your organization. Until this step is done, don't pass go and don't collect $200: keep working until you have executive approval.

Step Two: Set Up a Center of Excellence

Building a center of excellence, a shared employee facility that develops resources and best practices, is the next step in this process. This CoE could be as small as one person, or it could be as big as you want. Its membership should include a subject matter expert from each affected group across the organization in order to adequately represent how the transformation is happening. In this way, not only have you gotten buy-in from the executive level to start the overall transformation, but you have participation from across the organization, so all affected departments feel heard and acknowledged.

Part of building a proper and maximally effective center of excellence is to encourage the naysayers. Platitudes are fantastic, and open-mindedness is a great thing to strive for, but the reality is that everyone has their own agenda and a different starting point of view. In building the CoE, you want cheerleaders as well as skeptics in order to keep debate alive and valuable. Ironically enough, the naysayers often end up leading the CoE; they become the most passionate over time, because once their objections have been overcome, they understand why the transformation is so important.

Step Three: Building the Team

Once your center of excellence is in place, the bulk of building your big data team lies in finding individual employees to flesh out the team. This is about 75% art, 25% science. From the science perspective, you'll screen workers through the job description, background requirements, and appropriate thresholds of experience. You will want workers with 7-12 years of experience in IT and some exposure to production Linux; data warehouse skills and experience are a very big plus. Unfortunately, this won't get you all the way there: at this point in time, the industry simply does not have a deep enough pool of candidates with this skill set to hire on those criteria alone. The current H-1B situation is actively contributing to the dearth of objectively qualified candidates. It is akin to trying to find a needle in a haystack.

This is where the 75% art comes in: you build your team from folks with personality and passion who also come from relevant backgrounds. How many of the candidates you interview are not just willing to learn Hadoop but genuinely eager to do so? The interview and subsequent in-person conversations are where you will find this passion and sense of opportunity. These soft skills have to be assessed face to face, and your interview process should dig into what the candidate's exact experience translates into. You may also want to consider putting candidates through a live demo where they perform a real-world task, then discussing how they solved the problems you staged. Many times candidates are unsuccessful at completing the demo, but the real key is: can they explain why?

You will also find the best candidates are either toward the beginning of their careers or toward the end, and not necessarily in the middle. More experienced people have a familiarity with the "wild west" that is big data these days, because it resembles IT 20 years ago: things weren't integrated, and staff had to do the heavy lifting themselves: build it, script it, troubleshoot it. That innate ability to self-start is an asset. Younger people, meanwhile, are quick to adopt tools that build scripts and automate, which is also a useful skill.

Leaders of these teams should be neutral parties in the organization, with no self-serving interest other than helping the company change overall. If a leader does come from the department that is funding the project, that department's interest will often eclipse the greater good. The ideal leader is an employee who is not fully embedded in any one camp, almost a third party.

Finding Talent

Finding this talent is also a challenge. One avenue is Hortonworks University, a program that 12-15 colleges nationwide offer in order to establish Hadoop as a fully accredited part of their curriculum. Hortonworks pays the costs of the program incurred by the university so long as the school makes the courses part of the curriculum. You might also consider recruitment days at local universities, asking professors who stands out and who solves problems, and then providing internship and trial opportunities for the names you receive.

Word of mouth is also a proven way to find candidates. The raw truth is that the big data community is small enough that if you happen to be really good at Hadoop, people know who you are.

The Last Word

Ultimately, a big data transformation is an enablement opportunity to get your entire organization to go learn. Over time, we can all get stale, but this transformation can be a driver of learning, a place to get hands dirty with something new, and an opportunity to create new subject matter experts. Don’t be afraid to use this rubric to build a successful team.

A First Look at Project Honolulu, the new Windows Server Management Experience

Note: an edited version of this story ran on Computerworld.com on November 2, 2017.

I’ve been on the PowerShell train for years now—so much so that I wrote a book on how to learn PowerShell, because to my mind PowerShell knowledge is required if you work in Windows administration in any reasonable capacity in this day and age. But PowerShell is not always the best tool for the job, and options are always good. Microsoft’s graphical user interface (GUI) tools have become somewhat long in the tooth, especially over the last three to four years, as the company invested its time and energy into expanding the universe of things PowerShell commands could accomplish.

Traditional administrators saw all of this happening and thought to themselves, what of Windows itself? After all, if you want a command line, you can go with Linux—Windows was supposed to be point and click all along. To this group, Microsoft says, fear not: we now have something superior to the in-box GUI tools you know well. Enter the new Server Management Experience, codenamed Project Honolulu. It can manage Windows Server 2012, 2012 R2, and 2016, as well as future versions, all through a web browser (currently Edge and Google Chrome). It can even handle the free Microsoft Hyper-V Server, making it a no-brainer way to deploy a free hypervisor in environments that don't really need a full-fledged copy of Windows Server, without wrestling with Hyper-V Server's cryptic command line interface.

Let’s take a look at how Honolulu works, how it’s built, what it can do, and perhaps most important, what it isn’t.

Project Honolulu

There are a lot of different tools you need to use to manage servers with the in-box utilities you get today, and Project Honolulu's main mission is to pull all of them together in one place. All of the tools' functionality is integrated on the left side of the screen—Hyper-V tools, disk management, file server management, and more. An intuitive, HTML5-based, touch-screen-compatible interface lets you browse around and carry out just about any task the previous tools would let you complete, just all from a "single pane of glass," to use the preferred industry term.

The great news is that this is a really lightweight install. If you want to run it on a server, it does not need to be a dedicated machine, and you can also deploy it onto a management workstation or laptop. You run a single MSI installer of about 30 megabytes, with no dependencies, and you can run it on any Windows 10 or Windows Server 2016 machine. It’s literally a few Next, Next, Next clicks in an installation wizard and then in about 30 seconds, the operation is complete. Then just type in a server name, Honolulu will find it, and bang, you are managing your machine remotely.

Some notes on behavior:

  • If you install Honolulu on a Windows 10 machine, you essentially get a single browser-based app that you can use to manage any number of remote servers. Honolulu will listen on port 6515 by default, although you can elect to change this.
  • If you install Honolulu on a Windows Server machine, it configures itself as a multi-user web app, and any user can connect to it and customize their own configuration. You must specify the port that the service will use to answer requests, and you will need to either allow Honolulu to generate a self-signed certificate or install an appropriate certificate on the server before installation and point the Honolulu installer to that certificate's thumbprint. (A scripted install along these lines is sketched below.)
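
For unattended gateway deployments, the MSI can also be run silently from a command prompt. Here is a minimal sketch; the MSI file name is a placeholder, and the SME_PORT and SSL_CERTIFICATE_OPTION property names are the ones documented for later builds of this tool, so treat them as assumptions and verify them against the current documentation before scripting this:

msiexec /i HonoluluTechnicalPreview.msi /qn /L*v honolulu-install.log SME_PORT=6516 SSL_CERTIFICATE_OPTION=generate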

Once your installation is complete, the next step is to add a machine to manage. On the main screen, under All Connections, click the Add link.

From there, you can add a single server, a cluster of servers, or a hyperconverged cluster of servers. (For what it's worth, to manage a hyperconverged cluster, you must be running the new Windows Server Semi-Annual Channel release, which rules out a lot of organizations at this point.) For our purposes, let's just add a single server: we'll select that option, type the server's name, and then we are connected.

You can also have a list of servers in a text file, one per line, and then direct Honolulu to import those. Just click the Import Servers link and point Honolulu to that text file.

Perhaps the biggest point in favor of Project Honolulu is that it no longer requires you to use the awful Server Manager app that was introduced in Windows Server 2012. At first blush, I think we were all optimistic that that app would work better at managing multiple servers and making common administrative tasks easier, but most of us have discovered that the “new” Server Manager is simply a confusing mess. Now that a suitable replacement is around, we can banish Server Manager.

What else does Honolulu replace? The list as it currently stands:

  • Displaying resources like CPU, memory, and disk and their related utilization
  • Managing certificates, including installing new ones and managing expired and expiring certificates
  • Viewing events from event logs
  • Managing, uploading, and downloading files through File Viewer
  • Turning the Windows Firewall on and off and opening and closing ports
  • Configuring Local Users and Groups, including adding and deleting both
  • Network Settings, including IPv6 and IPv4 settings, network names, domain memberships, and more
  • Managing processes, including killing them and creating Process Dumps
  • Viewing and editing the Registry
  • Managing Windows Services, including viewing their current statuses, starting, and stopping them
  • Installing and removing Roles & Features a la Server Manager
  • Managing Hyper-V virtual machines and virtual switches
  • Managing disks and volumes, including creating, initializing, and setting drive letters
  • Managing Windows Update

You can easily imagine a scenario where you have 10, 15, or 20 different servers all under your purview. You might even want to bring up a new system, in which case you lay down a copy of Windows Server on it, get it connected to the network, and then add it to the Honolulu interface. At that point, you can use Honolulu to configure IP addressing or change the system name, add or initialize disks you have plugged in, and remove roles or add new ones like file server services or container services. Once that new system is up and you have 21 servers to manage, you can see them all in one place. Is there, say, an errant task pegging a CPU? Just click into the server from the main screen, look at the process view, and end the troublesome task. Need to check whether an update is installed? Click into the Windows Update view. Want to prevent a patch from installing by entering a new Registry key? Just click the Registry node.

One of the cooler bits is how you can essentially replace the staid Hyper-V Manager tool with Honolulu.

It is a very convenient tool with a very easy, intuitive interface. Click around and you will find that it largely just makes sense.

Is This Azure Server Management Tools?

What happened to Azure Server Management Tools? The idea behind the Azure SMT product, which was discontinued in June 2017, was access and management from anywhere on any platform; because it was hosted in Azure, it could be updated and improved continuously. However, according to Microsoft, the top customer feedback on the toolkit was that users had environments that were not connected to the raw Internet (how can you then manage those deployments over the Internet?), and that customers needed fully deployed tools in their local environments. Partners didn't like SMT because it was tied only to Azure and not to other public clouds, and sometimes a dependency on any cloud, Azure or otherwise, was not a natural fit. Project Honolulu answers exactly that feedback: you can manage from anywhere, including, in the future, integrating with Azure; you can manage from any platform, since it is a browser-based solution; and it starts with core tools and ships independently (not in the OS image but as a separate download), which makes revisions possible on a much faster cadence.

Project Honolulu’s Architecture

Honolulu at its core is a lightweight web server and gateway. The web server does not depend on IIS; rather, it uses a self-hosted stack served over HTTPS. The gateway is the piece that lets you fan out to manage remote servers using PowerShell and Windows Management Instrumentation over WinRM, the remote management protocol. Crucially, this means there is no agent infrastructure for Honolulu, so nothing additional is required on the servers you want to manage; all of this instrumentation technology is already built into the operating system, and is mostly standards-based at that. (For Windows Server 2012 and Windows Server 2012 R2, you will need to upgrade the Windows Management Framework to version 5.0 or higher; the current version, 5.1, is available here [https://www.microsoft.com/en-us/download/details.aspx?id=54616].)
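
Before adding an older server, you can confirm it is ready for this style of management with a couple of quick PowerShell checks; a minimal sketch, with SERVER01 standing in for your server's name:

# On the server to be managed: the reported version must be 5.0 or higher
$PSVersionTable.PSVersion

# From the machine running Honolulu: confirm the target's WinRM listener responds
Test-WSMan -ComputerName SERVER01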

What Project Honolulu is Not

Honolulu is great and off to a promising start, but it is important to understand its limitations.

First, Honolulu does not replace any functionality covered by the following in-box tools (list courtesy of Microsoft):

  • AD Administrative Center
  • AD Domains and Trusts
  • AD Module for Windows PowerShell
  • AD Site and Services
  • AD Users and Computers
  • ADSI Edit
  • Cluster Aware Updating
  • Component Services
  • DFS Management
  • DHCP
  • DNS
  • Failover Cluster Manager
  • File Server Resource Manager
  • Group Policy Management
  • iSCSI initiator
  • Network Load Balancing Manager
  • Performance Monitor
  • Print Management
  • Remote Access management
  • Routing and Remote Access
  • Shielding Data File Wizard
  • Task Scheduler
  • Volume Activation Tools
  • Windows Server Update Services

That list could very well change as development on the project continues, although some of these tools lend themselves more to remote management than others.

Second, Honolulu is not a finished product. It is in technical preview, which means anything and everything could change—though it probably won’t. The code is in pretty good shape for a preview product and is suitable for use in production, although Microsoft will not admit that nor support you if you do. But at any time features may be removed, added, changed, moved around, and so on. A final release is expected in the not-too-distant future, and frequent updates are expected since the utility can be delivered “out of band.”

Third, Project Honolulu is not a replacement for a third party systems management suite. It is designed for relatively lightweight server management and monitoring. It does not support older Windows releases, nor does it handle client management or software distribution. It also does a cursory job of monitoring but does not orchestrate an alerting workflow or any type of automated response, both of which are necessary to have in enterprise environments. Project Honolulu is available at no additional charge with a Windows license, which is really nice, but it does not purport to solve all of the issues that you would need a product like System Center Configuration Manager and Operations Manager for. It is a convenient add-on.

The Last Word

The GUI is not dead! There is no denying that however useful PowerShell is in a variety of applications, there remain scenarios where a visual approach is the right way to go. It’s very refreshing to see Microsoft embracing both types of tools and really investing in a useful, intuitive, and best of all free (with a license) way to manage more recent versions of Windows Server. As the tool grows to integrate resources in Azure as well as extensions from third parties, this is sure to be a useful set of bits to have around. Recommended.

Download Project Honolulu here: https://aka.ms/HonoluluDownload