A Look at the Microsoft Azure Internet of Things (IoT) Solution

An edited version of this story ran on Network World on September 7, 2017. Credit: Network World/Computerworld.

The Internet of Things, a vast network of connected microdevices, sensors, small computers, and more, all throwing off data every second and sometimes even more often, is all around us. Where before we had dumb roads, dumb cities, dumb devices, and the like, now all of those things can talk to us and tell us how they are doing, what their current status is, what their environment is like, and even what other devices they know are nearby. All of this is coming to a head now because sensors are cheap, their processors are cheap, wireless is everywhere and becoming less expensive, and there are tremendous resources for storage and analytics out there. So how do you take this phenomenon by the horns and make it begin working for you?


And deal with it you must, because that data is coming to you. Cisco research projects that there will be 50 billion connected devices by 2020. Assume all 50 billion of those send off sensor data between once a second and once a minute, and that each payload is 500 bytes, to allow for some packet overhead: that means 25 terabytes of data being spun off somewhere between every second and every minute. What if Cisco is half wrong? What if Cisco is twice right? One thing is certain: there will be a tremendous amount of data, and with that data come projections from Gartner, IDC, and Forrester showing a multi-trillion-dollar opportunity in both cost savings and new revenue from the IoT.
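That arithmetic is easy to sanity-check. A quick back-of-the-envelope sketch in Python, using the same assumed figures as above:

```python
# Back-of-the-envelope math for the projection above: 50 billion devices,
# each emitting a 500-byte payload per reporting interval.
devices = 50_000_000_000   # Cisco's 2020 projection
payload_bytes = 500        # assumed payload size, including packet overhead

total_bytes = devices * payload_bytes
total_tb = total_bytes / 10**12   # decimal terabytes

print(f"{total_tb:.0f} TB per reporting interval")  # 25 TB
```

If devices report once a second, that is 25 TB arriving every second; once a minute, 25 TB every minute. Either way, the scale of the ingestion problem is clear.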


One other factor is that IT is starting to fade into the background a bit, at least as the central place where technology projects begin. Business units and individual departments are increasingly driving technology efforts themselves, made possible by cloud products that let you plunk down a corporate purchasing card and get started in minutes. Hire a data scientist on contract, outfit your kit with some sensors, start up a cloud service account, and before long you could have several terabytes of sensor data and be ready to go. Microsoft has been innovating in this particular area with its Azure service, and it has some compelling offerings.


The Microsoft Azure Internet of Things Story
The Azure IoT Suite is designed to be a quick-start portal, a true example of a platform as a service (PaaS) offering. It gives you the resources necessary to deal with all of the data being sent back to you while also allowing you to manipulate it, develop some level of understanding of it, and use it to improve your business processes, solve some nagging problem you have deployed a boatload of sensors to track and mitigate, or build a model of how certain parts of your assets behave.


NOTE: there is also a newer software as a service (SaaS) offering called Microsoft IoT Central, which takes the platform level stuff away and focuses only on the software that powers the sensors and connects everything together. Manufacturers can build their own SaaS-based IoT solutions hosted on the Azure IoT cloud service and get their solutions to market more quickly without having to reinvent the plumbing, the platform, and more. There’s also the very new (as in spring 2017) Azure IoT Edge suite, which lets programmers develop logic for small computers and sensors on the “edge” of an IoT environment in convenient languages like Java and C# rather than assembly and other more obscure languages. In this story, however, I will focus on the Azure IoT Suite because I think it more clearly highlights the capabilities of the overall platform.


The Azure IoT Suite itself bundles a number of relevant Microsoft Azure services into a surprisingly simple package. It starts by allowing you to create a couple of ready-made IoT consumption scenarios, including predictive maintenance and remote monitoring, and it automatically orchestrates the various Azure products involved, such as databases, websites, web services, and the data ingestion point, creating and linking them together so that you are ready to go from square one. For example, for the remote monitoring solution that you can start with as a preconfigured package, Azure selects and configures the following services, handling the provisioning process automatically:

  • Azure IoT Hub (1 high-frequency unit, also called an S2 unit)
  • Azure Stream Analytics (3 streaming units)
  • Azure DocumentDB (1 S2 instance)
  • Azure Storage (1 GRS standard, 1 LRS standard, 1 RA-GRS standard)
  • Azure App Services (2 S1 instances, 2 P1 instances)
  • Azure Event Hub (1 basic throughput unit)


Each of the other solutions has a different makeup, but you get the idea: everything you need, with just a couple of clicks.


The pitch from Microsoft is that while you might have the internal resources to do a couple of IoT-style projects today, as you build on those developments, create new models, deploy more sensors, and in general double down on IoT in the future, you probably will not be able to handle all of that data, at least not cost-effectively. You will either be forced to invest in expensive on-premises storage infrastructure, or you will have to make problematic choices about what data to keep available, what data to roll up, summarize, and archive, and what data to discard. And of course, when you discard data, you cannot get it back, so you might be losing out on some predictive capability you do not know about yet; if you roll up and summarize data, you lose the granularity and resolution necessary to build advanced mathematical models and use machine learning.


Instead, you can start right out in the cloud and take advantage of the tremendous scale of resources that Azure already has, a pool that is growing quite a bit each year. Instead of spending on disks and compute, you just pay for the Azure services and run times that you consume with your project, and you can scale up or down as your needs change. Even better, Microsoft is starting to build some of the glue, so you can see the day when your Azure IoT data could be integrated with Power BI, for example, so that your regular knowledge workers (as opposed to trained mathematicians and data scientists) could query your data sets using natural language and get results back in a nice, graphical, easy-to-consume format. All of that glue and linkage would be much harder to create in an on-premises environment, and I think Microsoft is betting that IoT initiatives are new and green enough in most enterprises that it is not difficult to start them off in the cloud, or at least not as difficult as, say, deciding to move SharePoint into the cloud. In fact, right now the Azure IoT tools integrate with the Cortana Analytics solution, which provides data science, number crunching, and machine learning tools, and you can then inform your business processes with the insights you derive by linking Cortana Analytics to the Microsoft Dynamics suite of enterprise resource planning (ERP) tools.


Imagine this type of scenario: you operate a fleet of large transportation vehicles, each equipped with two or more really expensive engines. These engines can be instrumented with sensors that report quite a bit of information, including fan speed, oil pressure, ambient temperature, fuel pressure, thrust at any given moment, air quality, vibration, and more. You start collecting this data across the thousands of engines in your fleet and correlating it with maintenance records, failure notices, mechanical delays that interrupt the scheduled service you normally deliver, and more. Over time, and with some math, you are able to build a model showing that certain engine components are likely to fail after a certain number of cycles. You can learn which components those are, order those parts in advance, and adjust the fleet deployment schedule so that those parts can be replaced when the equipment is nearby, reducing interruptions and saving the cost of ferrying parts all around your locations. This is the kind of model you can build with Azure IoT Suite (and it happens to be one of the sample models you can run as you get started with your account).
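As a toy illustration of that idea (not the actual Azure model; the engine IDs, field names, and thresholds below are invented for the example, and a real model would be trained on your historical maintenance and failure records):

```python
# Toy sketch of the fleet-maintenance scenario above: flag engines whose
# cycle count AND sensor readings together suggest elevated failure risk.
ENGINES = [
    {"id": "ENG-001", "cycles": 4200, "vibration_mm_s": 2.1, "oil_psi": 58},
    {"id": "ENG-002", "cycles": 7900, "vibration_mm_s": 6.8, "oil_psi": 41},
    {"id": "ENG-003", "cycles": 8100, "vibration_mm_s": 2.4, "oil_psi": 57},
]

def at_risk(engine, max_cycles=7500, max_vibration=5.0, min_oil=45):
    """Flag an engine when the component is worn AND a sensor is out of band."""
    worn = engine["cycles"] > max_cycles
    abnormal = (engine["vibration_mm_s"] > max_vibration
                or engine["oil_psi"] < min_oil)
    return worn and abnormal

flagged = [e["id"] for e in ENGINES if at_risk(e)]
print(flagged)  # ['ENG-002']
```

The point of the cloud pitch is that the real version of this runs over thousands of engines and years of telemetry, which is where the scale argument from the previous section kicks in.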


As far as the sensors go, last October Microsoft launched its Azure IoT Suite Device Catalog [https://catalog.azureiotsuite.com/], which showcases more than 500 devices from more than 200 partner manufacturers, all certified to work with the Azure IoT Suite. On the developer and software side, the Azure IoT Suite is a full-scale member of the Azure service, and thus works with Visual Studio, Eclipse, Chef, Puppet, GitHub, PowerShell, Python, MongoDB, Hadoop, Ruby, Docker, MySQL, and anything else that is part of the set of compatible offerings and capabilities within Azure.

How It Works

You can get started by heading over to https://www.azureiotsuite.com and logging in with your Microsoft account. There you can either use your current MSDN Azure benefit or set up a new subscription, and then you'll be presented with the Provisioned Solutions page, which is the first page of the Azure IoT Suite package itself. Then, follow these steps.


  1. Click Create a new solution to build your own IoT “workspace.”
  2. You can then choose among several preconfigured solution types, including “connected factory,” “predictive maintenance,” and “remote monitoring.” For this walkthrough, I’ll show you remote monitoring, so click that option.
  3. The Create “remote monitoring” solution screen appears. Here is where you enter a friendly name for this workspace, the Azure region in which all of this should be spun up (you would ideally want the region closest to either you or your sensors to reduce latency), and the Azure subscription to which all of this should be billed. You can find pricing information for all of the components of Azure that the IoT suite will provision at https://azure.microsoft.com/en-us/pricing.
  4. Click Create solution, and then grab a cup of coffee while Azure spins up all of the resources it outlined.
  5. After the provisioning is complete, you’ll be back at the Provisioned Solutions screen, and your friendly named workspace will be shown there with a green check mark. Click the Launch button to get inside.
  6. You’ll be greeted with the dashboard screen. It shows a map of the Seattle area with four sensors geoplotted, each with a colored indicator (green or red). These sensors are simulated, just to give you an idea of the type of dashboard you can build with your own sensor deployment. On the right side is the Device to View menu, a drop-down selector where you can pick individual sensors to examine. On the lower left is the Alarm History section, which shows sensors that have crossed some predefined problem threshold, and on the lower right are speedometer-style gauges that show various properties and values the sensor fleet is reporting.
  7. On the left side, click Devices. This gives you a grid-style list of devices. You can use the “+” button in the lower left to add a new sensor, which can be either another simulated device or a physical device with a SIM card (ICC ID) for a cellular connection or access to a wireless connection. You can also modify the properties the simulated sensor reports to the workspace, including model and revision number, asset tag, or anything else you like.
  8. On the left side, click Rules. You can add new rules that operate on the two existing data fields, temperature and humidity, and set the thresholds that apply to those rules. This area is what kicks off those alarm history items on the dashboard, and note that if a device is alarming, its status on the map is changed from green to red to make it easy to identify.
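Conceptually, the rules in step 8 are just threshold checks against incoming telemetry. A minimal sketch in Python (the field names, operators, and values here are illustrative, not the suite's internal schema):

```python
# Sketch of the kind of rule the Rules page defines: each rule names a
# telemetry field and a limit, and a reading that crosses any limit raises
# an alarm (which is what flips a device from green to red on the map).
RULES = [
    {"field": "temperature", "threshold": 38.0},
    {"field": "humidity",    "threshold": 80.0},
]

def alarms(reading):
    """Return the fields of any rules this telemetry reading violates."""
    fired = []
    for rule in RULES:
        value = reading.get(rule["field"])
        if value is not None and value > rule["threshold"]:
            fired.append(rule["field"])
    return fired

print(alarms({"deviceId": "sim-01", "temperature": 41.2, "humidity": 63.0}))
# ['temperature']
```

The real suite evaluates rules like these in Azure Stream Analytics as the telemetry flows through IoT Hub, rather than in application code, but the logic is the same shape.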


That’s a quick walk around the preconfigured solution, but the key thing to remember is that all of this is live with Azure. You can go and adjust any of this, from the configuration of the dashboard to the way resources talk to each other to anything else; you manage all of it from within the Azure portal, the same as any other Azure resource. If you’re looking for a remote monitoring solution just to get started, this saves you a lot of effort in getting the right pieces in place—start there, tailor it, and build on from there. There’s no additional charge beyond the resources the solution spins up to run itself. The design and code are free.


The Last Word

Microsoft has a robust set of tools for integrating all sorts of devices into an IoT solution. They have more scale than you do and work with a wide variety of devices. If you are building an IoT solution, then you owe it to yourself to play around with what Azure IoT can do. I have not seen a solution in this space where it is easier to get started and make your own tweaks to build your own workspace. Worth a look.

Exploring SSH Key Rotation with Thycotic Secret Server

Everyone is, or should be, aware of the huge security issues surrounding the Windows administrator account, and there is a ton of guidance on how to properly secure that particular account, how to avoid password compromises, and how to use alternatives. But there seems to be much less focus on securing privileged accounts on Linux, and especially on the idea that SSH keys are much like passwords in their absolute need to be protected at all costs.

SSH keys start out the security race ahead of passwords because they require two parts that do not exist in the same location: a private key, often protected by a passphrase, is checked against a public key, and a cryptographic operation determines whether a user is authorized on a system. This avoids the wholesale compromise of credentials that comes from gaining access to a single system, since all that system holds is the public keys of authorized users. The other half of the puzzle, those users’ private keys, does not exist on the system at all.

Unfortunately, out of convenience or perhaps, at the time, sheer necessity, the same private keys are reused across multiple machines. This might be to make it easier for developers and testers to connect to a farm of Linux or Unix machines before they are moved collectively into production, or it may be that a single administrator is responsible for all of them and feels that he can properly secure that one single, well-formulated private key. It may even be an artifact of cloning a bunch of new virtual machines from an existing “template” image, or a relic of being able to easily automate actions across a number of different machines using a single private key.

But of course, there is more to it than just one unique private key per user per machine. Those keys ought to be rotated and changed out, just like passwords. This is for a variety of reasons: systems that contain private keys can be compromised, and the passphrases that protect those private keys can be compromised. Especially as news breaks today that the WPA2 wireless encryption protocol has been broken, and unprotected resources behind a vulnerable access point could be compromised, it is more important than ever to never fully trust a static key, password, or passphrase, and to make sure that secret is changed regularly.

The process of rotating keys is simple, but carrying it out is definitely not easy. Essentially it is a four-part process: generate a new keypair, install the new public key into the authorized_keys file on all of your machines, test the new key, and then remove the stale key from that same authorized_keys file.
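Sketched as code, those four steps look something like the following. This builds the per-host command sequence rather than executing anything, and the key paths, hostnames, and the authorized_keys cleanup pattern are all placeholders to adapt:

```python
# The four rotation steps above as a per-host command plan. Nothing here
# executes; adapt the placeholder paths and hosts, then run the commands
# with your own tooling.

def rotation_plan(host, new_key="~/.ssh/id_ed25519_new",
                  stale_comment="old-laptop-key"):
    return [
        # 1. Generate a fresh keypair (do this once, not per host).
        f"ssh-keygen -t ed25519 -f {new_key} -N 'passphrase-here'",
        # 2. Install the new public key into authorized_keys on the host.
        f"ssh-copy-id -i {new_key}.pub {host}",
        # 3. Test that the new key authenticates before removing anything.
        f"ssh -i {new_key} -o IdentitiesOnly=yes {host} true",
        # 4. Remove the stale public key (matched here by its comment)
        #    from authorized_keys.
        f"ssh -i {new_key} {host} "
        f"\"sed -i '/{stale_comment}/d' ~/.ssh/authorized_keys\"",
    ]

for command in rotation_plan("admin@web-01.example.com"):
    print(command)
```

Multiply that by every user and every machine, remember to do step 3 before step 4 every single time, and the "not easy" part becomes clear.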

Brave folks use Ansible to take care of system administration tasks like these in an automated way, but then you need to spend time thinking about playbooks and plays and trying to make all of those pieces play together nicely. That all works well enough, and for the smallest deployments with the most limited of budgets it is a reasonable solution. But what about when the powers that be want proof your Linux or Unix endpoints are correctly protected this way? What if you need to easily control who can access those keys from an administrative staffing perspective? You need a stronger tool, and the folks at Thycotic have a proposition: why not use software designed from the start to automate the protection of our most sensitive privileged accounts and let it take care of the dirty work for you?

Thycotic Secret Server to the Rescue

Enter their Secret Server product, and specifically its SSH Key Management function. This program will generate new keys and rotate them either on demand or on a previously determined schedule, control which users have access to which keys via a role-based access control scheme and robust permissions, and provide a complete audit trail of which keys were sent where and when, and who used them to access which systems at what date and time. Cobbling that together with Ansible would be a lot tougher and not nearly as seamless.

Thycotic asked me to spend some time evaluating this aspect of Secret Server and share my thoughts on the product.

My objective: get SSH key rotation going among 25 different Ubuntu virtual machines I have running in Amazon Web Services as part of another project. I installed a trial version of Secret Server on a Windows Server 2012 R2 box and added Microsoft SQL Server 2014 Express at the prompting of the installer. After completing the setup wizard, which was reasonably simple (although I did have to close and restart it due to an HTTPS binding issue, which it automatically fixed), I went to the console, popped in my license information, and enabled the Remote Password Changing settings. Then I set out to create a new secret in Secret Server. Following the Quick Start guide, I went with the Unix Account (SSH Key Rotation) template. On the template screen, I entered a friendly name for an account on one of those 25 machines as well as its username and password. I uploaded the private key file in .PEM format and entered its passphrase. I clicked the button and then watched for the Last Heartbeat field to say “Success” with an associated timestamp; that told me I was ready to go. I also liked that I could just launch PuTTY and get signed in with one click: very convenient.

After that, I went to the Remote Password Changing tab and clicked Change Password Remotely. I generated a random password and then clicked Generate New SSH Key to get a new key. I also clicked the Generate button next to the Next Private Key Passphrase field to get a new protective passphrase. I clicked Change at the bottom and that was it. The key was rotated.

I went ahead and added a few of the other machines I was working with on that project, and it all worked the same way as the first: add the secret, kick off the change, be done. It was a little work to get set up, but now I have a tool where, with what, three clicks? Four? I can rotate keys from a single point of management.

Pretty neat tool.

A Ransomware Removal Guide

An edited version of this article ran on the Netwrix blog on June 29, 2017. Credit: Netwrix

Ransomware is one of the biggest scourges we face as Internet citizens today. What happens when you have been struck by it? The most obvious option is to pay the ransom. If you did, you would not be alone: even large companies and nonprofits have had to pay up, or at least negotiate a ransom payment. But should that be your first option? Hardly. Here are some tips on how to start recovering from a ransomware attack.

Have good backups

It just so happens that the best defense against a ransomware attack that infects your network is also your best recovery tool afterward: having good backups. These can come in a couple of forms:

  • Shadow copies. You may be familiar, if you are a Windows administrator, with the Volume Shadow Copy Service, first introduced in Windows Server 2003, which takes snapshots of data on specifically configured volumes at predetermined points in time. This service powers the Previous Versions feature in the Windows client: if you do something stupid in a spreadsheet, for example, you can right-click the file on disk and choose to open a previous version made before your mistake. If you catch a ransomware infection early, chances are shadow copies are a good way to restore an unencrypted version of your files. If you are not using shadow copies, configure them today. Unfortunately, some variants of ransomware have caught on to this and, during their silent infection process, prior to encrypting files, delete all shadow copies found on a disk.
  • Regular backups that you restore from tape or archive disk. You are taking regular backups of your storage system, right? And you are regularly testing those restores so that you are able to verify you backed up good files and can restore them intact? If not, then stop reading right now and go configure a backup scheme. If you are, then rest a little easier. The worst case for a ransomware infection is to wipe your machines and put data back on them from restored backups. Sure, it is an investment of time, but you absolutely do not need to pay any ransom, and you just might be seen as a hero.

See if a free decryptor is available

If you do find yourself on the other end of a ransomware attack that has completed, you have a couple of options that don’t involve paying the ransom.

As governments and security researchers continue to make progress against the ransomware threat, these parties have managed to break the encryption schemes used by some variants of ransomware. It is important to keep in mind that not every variant of ransomware has been “broken” by the good guys, so you should not rely solely on the promise that some of these encryption schemes have been foiled and rest on your laurels when it comes to building defenses against this type of attack.

But if you have already been victimized, then head over to The No More Ransom Project at https://www.nomoreransom.org and look for the variant you have been hit with. (This site is sponsored jointly by the European Cybercrime Centre, Politie, Kaspersky Lab, and Intel Security.) On the site, there are currently decryptor tools available for the following variants:

  • Crysis
  • Marsjoke/Polyglot
  • Wildfire
  • Chimera
  • Teslacrypt
  • Shade
  • Coinvault
  • Rannoh
  • Rakhni

These folks are working on breaking other variants as well, but of course breaking good encryption takes time, and the malware creators also have a perverse incentive to make their encryption stronger and even more unbreakable. It is an unfortunate dance, but for now, you might be able to save yourself with the decryptor tools on the site. A big red flashing warning here, though, to beware of tools from other places—they may actually be ransomware disguised as a prevention tool.

Use the File Server Resource Manager to catch bad actors

Even if you have been infected by ransomware, it is not too late to prevent further damage. You will likely have some encrypted files, but the sooner you stop the spread of the infection, the fewer files end up being held hostage and the easier your cleanup task is. We have covered before on this blog how to use a built-in Windows Server tool, the File Server Resource Manager, to catch ransomware attacks as they happen. Essentially, you create a honeypot share with a dollar sign in front of the name to fool ransomware into starting with that particular share in its efforts to encrypt files. You let the Authenticated Users group have full control of the share so that any process that wants to write to it can (although you do not publicize this share to users; it exists only to catch things that should not be on your systems, not as a drop box for other files). There is no legitimate use of the honeypot share, so when a File Server Resource Manager file screen notices write activity within that share, it can safely assume that someone has been infected, and it will cut off that user’s access to all shares to stop the encryption attack in its tracks. There is a simple PowerShell script that can be fired by the File Server Resource Manager to accomplish this:

Get-SmbShare -Special $false | ForEach-Object { Block-SmbShareAccess -Name $_.Name -AccountName '[Source Io Owner]' -Force }


Once permissions have been removed, the ransomware has no files it can access to encrypt, and it basically just stops. You can then clean off the malware, restore the files that were written to in that time frame, and move on with your life.

For much more detail on this way to stop a pending attack, or an attack that is just beginning, check out Ransomware Protection Using FSRM and PowerShell [https://blog.netwrix.com/2016/04/11/ransomware-protection-using-fsrm-and-powershell/] on our blog.

A Look at the Network Controller in a Software Defined Network

An edited version of this article appears on Cisco.com, apparently launching on Friday, June 2, 2017. Credit: Cisco Systems Inc.

One cannot have a software defined network without the network controller: the component that has a single, complete view of a network and all of its components, and thus is the base for configuring the network, its paths, and, perhaps most importantly, the devices that lie in those paths. What are some important considerations when evaluating network controllers? What are some of the unique features and pitfalls that network controllers introduce? Let’s take a look.

Network Controller Extensibility

The beauty of the network controller role is that most products are designed to be extensible. Generally there are modules or plug-ins you can install into the network controller to light up more features and enable additional functionality. In fact, many controllers, including the Cisco Open SDN Controller, offer programmatic interfaces: REST APIs that let applications gain visibility into and integrate deeply with the network, Java APIs that let developers create custom functions to enable support for advanced scenarios, and “southbound” plug-ins that hook virtual networks up to physical networks so that heterogeneous network environments can take the leap into the SDN world.
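To make the REST idea concrete, here is a sketch of what a northbound API call to push a flow rule might look like. The endpoint path and payload schema below are invented for illustration; consult your controller's actual API reference before relying on any of it:

```python
import json

# Hypothetical northbound REST call: ask an SDN controller to install a
# flow rule on a given switch. The endpoint and schema are illustrative
# only, not any particular controller's real API.

def build_flow_request(controller, switch_id, match, action, priority=100):
    url = f"https://{controller}/api/v1/flows/{switch_id}"  # invented path
    payload = {
        "priority": priority,
        "match": match,        # which traffic the rule applies to
        "actions": [action],   # what the switch should do with it
    }
    return url, json.dumps(payload)

# Example: forward traffic destined for 10.0.0.5 out port 3 on switch
# "openflow:1".
url, body = build_flow_request(
    "controller.example.net", "openflow:1",
    match={"ipv4_dst": "10.0.0.5/32"},
    action={"output_port": 3},
)
print(url)  # https://controller.example.net/api/v1/flows/openflow:1
```

In a real deployment you would POST that body with an HTTP client and whatever authentication scheme the controller requires; the point is that the network becomes something applications can program, not just something operators configure.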

Open Source Network Controllers

Many network controllers are also open source, written and contributed to by a community of network professionals and talented developers. POX and Beacon are good examples of serviceable open source network controllers. Other network controllers are offered by enterprises and service companies to enable special integration with other types of network hardware and software. Both open source and proprietary network controllers have their roles. Open source controllers are a great way to ensure a standards-based network, especially when used with network devices from multiple vendors, and they often have a vibrant community enhancing the solution as the state of the SDN art advances. Proprietary network controllers, on the other hand, often offer increased speed and capability on vendor-specific hardware while also providing a support infrastructure for when things go haywire, as they eventually will.

Fault Tolerance and High Availability

Given that it is the single point of control in a software defined network, the network controller can also be a single point of failure. Simple clustering and replication of virtual machines and their state across the wire is not enough in an SDN environment, because network switches maintain a hard state that has to be preserved across restarts of the network controller. Good network controllers incorporate switch state into a transactional replication system, which ensures that instances of the controller can be replicated in the event of a fault while maintaining a consistent switch state, and without taking down the network while the switches establish a consensus on their state. Other implications of fault tolerance include the ability of the controller to continue managing all of the devices on the software defined network after a failover or failback procedure, and how well those fault tolerance procedures scale when your environment is not simply a corporate campus but a global-scale cloud datacenter with hundreds of thousands of customers and thousands of hosted network configurations. Will your network controller vendor measure up?

Network Controller Serviceability

Another differentiation point among network controllers from various vendors is how serviceable they are. Often the logs from a controller will be the first source to consult when troubleshooting network issues. Are the logs easy to find? Can they be shipped to a central logging facility, where an existing network monitoring environment can capture events as they happen and proceed through alerting workflows? Another servicing concern is patching the network controller, both for feature enhancements and to repair or mitigate security flaws. Since the network controller is a key role, most controllers have a way to be patched without being brought down.

Consequences of the Network Controller Role in Today’s Environments

What’s the implication for today’s software defined networking environments? What are the main consequences of more decision making and centralized management happening right inside the network controller? There are a couple of points to make here:

  • The benefit is potentially a single pane of glass (or, for complex networks, just a few panes of glass) where your entire network strategy can be handled, monitored, and, if need be, reconfigured. The network controller is the strategic control point of the network, the one place that handles all of the abstraction of resources.
  • The challenge is that one incurs more risk with many virtualized networks and components directing activity through and by the network controller: if the controller goes down, how limited is network functionality? The extent to which services are impaired really depends on each network configuration, but without fault tolerance and high availability capabilities built into your production SDN, the impact of a failed network controller could be quite severe.

Office 365: Configure User’s Password to Never Expire

An edited version of this article appeared on the Netwrix blog on May 25, 2017. Credit: Netwrix

One of the most frequently asked questions we come across comes from users and administrators who subscribe to Office 365 but tire of being prompted to change their passwords when they expire. These users might not be used to Microsoft’s default password expiration policy for the service, especially if they are coming from an in-house system with a more lenient expiration requirement. While I am not entirely convinced that setting passwords to never expire is a smart move, if you choose secure passwords and use multifactor authentication, I do not think you are opening up a big can of worms by disabling expiry.

Configuring Using PowerShell

As always, here is good old PowerShell coming to the rescue. First off, you need to set yourself up to connect to the service through PowerShell remoting. If you have not done this yet, you will need two pieces of software: the Microsoft Online Services Sign-In Assistant for IT Professionals RTW (yes, that’s the official name) and the Azure Active Directory Module for Windows PowerShell. Install both of those, then open a PowerShell session and type in:

Connect-MsolService

Then enter your credentials at the prompt.

Once you have successfully authenticated, then enter the following command to set one particular user’s password to never expire:

Set-MsolUser -UserPrincipalName <fullemailaddress@yourdomain.com> -PasswordNeverExpires $true

If you know a little bit about PowerShell, then you also know that if the verb in a command is Set, you can usually use Get to retrieve information or properties about the same kind of object. In this case, we can use Get-MsolUser to see whether the user’s password has already been configured to never expire, using the following command, which selects that one attribute among many to display as a response:

Get-MsolUser -UserPrincipalName <fullemailaddress@yourdomain.com> | Select PasswordNeverExpires

You can extrapolate this command to see the password expiration status of all users in your tenant by using the following command:

Get-MsolUser | Select UserPrincipalName, PasswordNeverExpires

You can also combine these two commands to set all of the users in your tenant to have passwords that never expire using the pipelining feature of PowerShell. Here we get a list of users from Get-MsolUser and then we pipe that information to Set-MsolUser, leaving out the specific reference to names since those will be fed into the new command from the pipeline, and we can leave the attribute configuration the same:

Get-MsolUser | Set-MsolUser -PasswordNeverExpires $true

Configuring using the Graphical User Interface (GUI)

If you’re afraid of the PowerShell command line, two words of advice: 1) do not fear it, for it is your friend, and 2) there is also a way to take care of this from within the web based Office 365 administration console. You will need to have administrator credentials for this.

  1. Sign in at https://portal.office.com/adminportal/home.
  2. From the Settings menu, select Security and privacy, and then click Edit.
  3. Under Password policy, set the switch labeled “Set user passwords to never expire” to On.