Making Big Data More Accessible

An edited version of this article was published on the Hortonworks blog. Credit: Hortonworks

Big data, and the work of storing and analyzing it, has a reputation for being complex and confusing. But because of the great insight that can be gleaned from big data, it is important to make data science more easily accessible.

How can business leaders lower the barrier to entry to make data science more accessible to everyone? I see four ways this is possible.

Invest in natural language capabilities. Math sometimes has the tendency to scare people away, so if your plan is for everyone to learn Python and reduce their questions to a formula, I have news: that is not going to bode well for a successful uptake. Instead, look for platforms that allow users to query data sets using standard English questions, like “How many more sales did I make this year compared to last year?” or “What are the top five things my customers purchase along with Widget Model A?” This type of natural language processing not only makes accessing large quantities of data easier and more approachable, but it also has the side benefit of encouraging interest and curiosity in further analyzing data. As your users ask natural questions and receive straightforward answers, they think of more questions, which leads to more answers and more insight. Save the math and SQL-like queries for your developers and put a friendly front end on your data.

Make visualization an integral part of your analysis and presentation. You surely have heard the old phrase, “I’m a visual person.” Some people simply absorb and digest information better if it is presented visually. There are currently many platforms and frameworks on the market that can take a raw data query result and turn it into a rich graphical answer to a user’s question. Further, the more sophisticated of these platforms let you interact with certain subsections of a visualization, such as exploding out a pie chart into smaller subpieces or layering statistics onto a map of a certain geographic location. Even heat maps and gradients can make statistics pop to life and really enhance your users’ understanding of exactly what the data is telling them. Visualizations also encourage users to ask more questions and interrogate the data further by creating a rich, inviting atmosphere for discovering new ways of thinking about data.

Start small and then aim to grow your user base. Chances are, there is a vocal community of employees or contractors within your organization who are already clamoring for access to new data science tools. Perhaps you have already harnessed their enthusiasm and enrolled them in a controlled pilot of your data deployment. If you have not, consider finding a group of individuals excited about the potential of big data and data science, invite them to preview your plans and test systems, and let them provide valuable feedback about what they like, what they do not like, and what they wish to see or need to have to augment their roles. After all, the best campfires start with proper kindling; a data portal project is no exception to this rule. Beginning with a small but dedicated group of users is a tried-and-true method of getting a successful project off the ground before expanding it so quickly that it outgrows its capabilities.

Embrace the power of data on the go. Mobility has changed the world around us, with computers more powerful than the space shuttle available in pocket-sized form off the shelf at Target for $80 or less. Many—I dare say most—professionals have smartphones or tablets, some either issued by or paid for by their employer. This means you have an audience that expects to be able to get answers on their mobile devices whenever and wherever they are. Your data science projects and deployments must be available in a mobile-friendly format, either with responsive web design, customized native apps for the popular mobile device platforms, or perhaps some combination of both paths. That mobile experience ought to also satisfy the points we have already established, including supporting natural language and providing rich visualization support with the capability to touch, drag, pinch, zoom, scroll, and more. The bottom line: in this day and age, you simply must make data accessible on mobile devices.

10 Free SharePoint Tools

An edited version of this article ran on Computerworld.com on September 20, 2017. Credit: Computerworld

The SharePoint software vendor ecosystem has produced many free tools that help you administer and configure a SharePoint farm or Office 365 deployment on a day-to-day basis. The vendors would, of course, love to have you upgrade to their for-pay software products, but until you do there is much utility in these free tools. Here are ten that many administrators find extremely useful.

1. Marco Wiedemeyer’s SPDeployment command line tool

Many companies develop solutions that live inside SharePoint, but their SharePoint developers have a hard time deploying those solutions to the right spots within the SharePoint hosting infrastructure, whether that is on premises or up in Office 365. Who wants to keep all of those details straight every time you make a change to a SharePoint solution? Developer Marco Wiedemeyer has built a tool that developers can run from the command line that reads a standard JavaScript Object Notation (JSON) file and automatically puts files where they should be and marks properties as they need to be. It handles the credentials of logging into SharePoint as well. You can even use it with multiple sites and environments and trigger deployments as granularly as your developers would need to. It gets installed via npm (the Node.js package manager) and works right from a command console. Hosted on GitHub. Free.  https://github.com/mwiedemeyer/SPDeployment

2. The SharePoint Online Management Shell

If you have worked with Microsoft server products for any length of time in the past few years, you know that PowerShell is the way to get things done from an administrative perspective. The SharePoint Online Management Shell is a preconfigured PowerShell environment that preloads all of the SharePoint Online (Office 365) cmdlets into one place; you can do basically anything from here, from creating content packages to migrate file share data to SharePoint Online, to creating new document libraries, to turning on and off external access to certain SharePoint sites. If you have even the slightest remit to manage Office 365, then you should grab this shell—it is a virtual certainty that having it will make your life easier. Be sure to right-click it after installation and run it as administrator or essentially nothing will work. Runs on your local machine. Free. http://go.microsoft.com/fwlink/?LinkID=617148&clcid=0x409
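To give a flavor of what the shell can do, here is a brief example—the tenant and site names are placeholders, so substitute your own—that connects to a tenant, lists every site collection, and turns off external sharing for one of them:

# Connect to the tenant admin endpoint (you will be prompted for credentials)
Connect-SPOService -Url https://contoso-admin.sharepoint.com
# Review every site collection and its current external sharing setting
Get-SPOSite -Limit All | Select-Object Url, SharingCapability
# Disable external sharing on one site
Set-SPOSite -Identity https://contoso.sharepoint.com/sites/finance -SharingCapability Disabled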

3. Amrein Engineering Free Microsoft SharePoint Web Parts

Web parts have been around almost as long as the SharePoint product itself, but they have come in and out of “SharePoint fashion” as the product has matured over the years. Still, there are many tasks for which a web part—a pluggable component designed to be run within SharePoint—is the best tool for the job. SharePoint 2016 comes with several built in, but the groupware firm Amrein Engineering has developed around seventy web parts that do everything from read Exchange conference room calendars to track individual stocks to perform overall task rollups across a given team. While your SharePoint developers can build complex solutions that ride atop the server, your users can grab these web parts and build simple pages and project sites themselves. Some of the 70 web parts are free, the rest come with evaluation periods, and they are easy to license right from the page. http://www.amrein.com/apps/page.asp?Q=5728

4. ManageEngine’s Free SharePoint Health Monitor Tool

If you’ve not invested in a lot of systems monitoring tools, or you have a smaller SharePoint deployment, you might want a lightweight tool that gives you just an overall rollup of your SharePoint farm’s health status at a glance. The ManageEngine Free SharePoint Health Monitor fits this bill nicely, giving you a convenient dashboard view where you can see details about the CPU, memory and disk space usage for each server running SharePoint.

Then you can drill down into the SharePoint workload itself and see the response time, service status, Web server (Internet Information Services) process details, and even SQL Server details like free pages, cache memory and buffer cache hit ratio. While this tool won’t help you with Office 365 deployments, and it does not appear to be supported for SharePoint 2016 installations, it does indeed work with 2007, 2010, and 2013, which are still widely used. Free; runs locally. https://www.manageengine.com/free-sharepoint-monitor/free-sharepoint-health-monitor-index.html

5. Visual Studio Community Edition 2017

In a land long ago and far away, SharePoint Designer was the preferred tool for non-developers to use to reformat, reshape, and develop simple SharePoint solutions. Unfortunately, SharePoint Designer is no longer supported and only works with SharePoint 2013 and below. The tool of choice now is the surprisingly good and free Visual Studio Community Edition with the Office Developer Tools extension. The community edition is the free version of Microsoft’s very capable integrated development environment (IDE), and the Office Developer Tools plug-in lights up IntelliSense and debugging capabilities that let you run solutions right on the SharePoint server itself, remotely in Office 365, or in an Office web app. This tool works with essentially all versions of SharePoint no matter where they are hosted. Free; runs locally. https://www.visualstudio.com/vs/office-tools/#downloadvs

6. Office365Monitor

For those shops with significant deployments in Office 365, it can be really useful to have an eye on how the service is performing. Microsoft has promised at various points over the years more insight into the health of the service overall as well as its individual components, but we frequently see events that do not ever make it to a health dashboard. In the meantime, users blow up your phones asking what’s going on and where their files are. Ex-Microsoft employee Steve Peschka created Office365Monitor as a web service to gain deeper insight into each individual component of Office 365 and its uptime. You plug in the name of your tenant and the tool basically does the rest. There is a generous 90-day free trial and after that it is so inexpensive as to be effectively free. Web service; runs in the cloud; 90-day free trial and then starting from $19 per month. https://office365mon.com/

7. Veeam Explorer for Microsoft SharePoint

Veeam Explorer is basically Windows Explorer (or File Explorer in Windows 10, or Finder on the Mac OS X platform) for SharePoint. It lets you browse the database graphically, use full text search to find documents and items, restore individual items as well as their permissions if they have been backed up, export recovered items back into SharePoint directly or as e-mail attachments, and more.  It is also included in Veeam Backup Free Edition and can be used in conjunction with Veeam Endpoint Backup FREE, which makes this little tool extraordinarily useful. This works with on premises SharePoint 2010, 2013, and 2016 in all editions, but it does not work with Office 365. Free with backup product; standalone 30 day free trial; runs locally. https://www.veeam.com/microsoft-sharepoint-recovery-explorer.html

8. Veeam Backup Free Edition 9.5

Veeam has another really useful tool for those shops with investments in on premises servers. Sometimes your backup solution isn’t aware of SharePoint specifically, or maybe your backup just grabs virtual machines and copies them without doing anything intelligent on the processing side. Veeam’s free backup product is really quite good—I use it myself in my Hyper-V lab—and works with both Hyper-V and VMware. Picture your SharePoint VM farm: wouldn’t it be nice to clone, copy, export and manage those VMs? Sometimes wouldn’t it be useful to peek inside the VM to restore individual application items? Veeam Backup lets you do this on an unlimited number of ESXi and Hyper-V hosts or VMs. It is totally free and thus a great tool to have in your arsenal as part of a layered SharePoint on premises backup strategy. Free; runs locally. https://www.veeam.com/virtual-machine-backup-solution-free.html

9. Refactored SharePoint SUSHI

SharePoint SUSHI is an open source project, originally hosted on CodePlex, that collects the most common administrative tasks into one powerful, user-friendly utility. You can think of SUSHI as a Swiss army knife for SharePoint. While the original version that supported only SharePoint 2007 languishes unloved on the deprecated CodePlex platform, Ivan Sanders, a SharePoint MCT, MCTS, MCITP, and MCSE, has refactored the tool for use with SharePoint 2013. It is unclear if the tool works with SharePoint 2016, and it does not in any way interface with Office 365.

You can view the lists and sites any given user can access, which is really helpful for looking at effective permissions; upload user photos as profile images; back up and restore sites; apply a theme to a group of sites with one click and much more.

This is a Visual Studio solution that you download from GitHub and build yourself, or you can use the precompiled EXE also available on GitHub. Free; runs locally. https://github.com/iasanders/sushi/tree/master/Releases/Release%204.0/bin

10. SharePoint Color Palette Tool

If you are not a web designer or artist, then coming up with aesthetically pleasing color palettes can be a real challenge. With SharePoint 2013, 2016, and now Office 365, branding is more possible than ever. Microsoft has a nice little tool to help you create polished, composed color choices. Free, runs locally. https://www.microsoft.com/en-us/download/details.aspx?id=38182


Next Generation Authentication for Windows Shops

An edited version of this story ran on Computerworld.com on September 13, 2017. Credit: Computerworld

Authentication. The act of proving one’s identity to the satisfaction of some central authority. To most, this process means typing in a user name and a password, and it’s been this way for years and years.

But passwords—especially the complex passwords most enterprises require, with long strings of numbers and specially cased phrases and some (but not all! Heavens no, not the one you want) symbols—are difficult to remember and often end up written down on sticky notes. Then you have to reset them every so often so that hackers and crackers are chasing moving targets. Passwords can be leaked or hacked from the inside as well, as we have seen with numerous credential dump attacks over the past few years. And users can accidentally disclose their passwords if they fall victim to increasingly sophisticated phishing attacks.

Luckily for Windows shops, Microsoft is innovating in this space and it has introduced an enterprise quality method of using biometric identification and authentication without requiring the purchase of specialized hardware—and it is baked right into Windows 10, which many shops are already beginning to deploy to replace Windows 7 and Windows 8 and 8.1. In this piece, I want to take a look at this innovation, called Windows Hello for Business, and show how it works and how to enable it to secure your enterprise while eliminating the need for your users to handle cumbersome passwords.

Windows Hello for Business

Windows Hello is the most common and most widely known of the biometric authentication schemes that Windows supports. Windows Hello for Business takes the Hello idea and bundles it with the management tools and enforcement techniques that businesses and enterprises want in order to ensure a uniform security profile and enterprise security posture. Windows Hello for Business uses Group Policy or mobile device management (MDM) policies for management and enforcement, and uses key- and certificate-based authentication in most cloud-focused scenarios for maximum protection.

Essentially, Windows Hello acts on two fronts: it can scan one’s fingerprint, and it can also take an infrared picture of a user’s face and perform analysis on it. It pairs these unique physical attributes of each user with cryptographic keys that replace passwords as authentication methods. These keys are stored within specialized security hardware, or are encrypted in software, and unlocked only after Windows deems them authentic. For organizations uninterested in biometrics, Windows Hello also supports PIN usage to replace passwords transmitted over the network.

Windows Hello protects Microsoft accounts (the accounts you use to log in to Microsoft cloud services, Xbox, Office 365, and the like), domain accounts that are part of a corporate Active Directory deployment, domain accounts joined to an Azure Active Directory domain (these are relatively new), and in the future, accounts protected by federated identity providers that will support the Fast IDentity Online (FIDO) 2.0 protocol.

Why is Windows Hello considered stronger than a traditional password? For one, security is always better in threes—the best method of authenticating is to prove something you have, something you know, and something you are. Windows Hello can satisfy all three: something you are (your face or fingerprint, which is exceedingly difficult to copy and use in a malicious way), something you know (the PIN that Windows Hello uses by default from the point of registration onward), and something you have (the enrolled device itself, whose keys never leave it).

What is most interesting is that all of these biometrics are stored on the local device only and are NOT centralized into the directory or some other authentication source; this means credential harvesting attacks are no good against Windows Hello-enabled accounts simply because the credentials do not exist in the place that would be hacked. While it is technically possible that each device’s trusted platform module, or TPM, could be hacked, an attacker would have to crack each individual user’s machine rather than simply executing one successful attack against a single machine: a vulnerable domain controller.

The security techniques involved in verifying the biometrics are rigid: special webcams or cameras designed to see in infrared can pick up the differences between a photograph of a person and the real presence of that person, and most laptop manufacturers are now including Hello-compliant cameras in their corporate lines of devices. You can also purchase compliant cameras separately, making a staged rollout possible. Fingerprint readers are mature technology and have been around for years, but Microsoft indicates the newest generations of readers pick up more detail on the first swipe, eliminating the need to swipe again and again as some previous models required; essentially all fingerprint readers compatible with any version of Windows can also be used with Windows Hello. It is important to note that you can use fingerprint readers, facial cameras, or both—whatever biometric you end up using is called the “gesture,” and the gesture is the action that begins the unlocking of public and private keys and the verification of a user’s identity.

The Registration Process

To use Windows Hello, you must register your user account so that Windows can generate the proper elements to replace the traditional password. First, the user configures an account on the device (or the administrator adds a user account to the device). The user authenticates the normal way during the registration process—using a user name and password—and the authentication source, most likely Active Directory, issues its standard yay or nay to that user’s credentials. The user can then enable his or her PIN, which becomes inextricably linked to that device and that user account.

Windows then generates a pair of keys, a public half and a private half, and stores them both either in the hardware TPM module or, if a device does not have a TPM, encrypted in software. This authentication key is associated with just one biometric “gesture”—either a fingerprint, a face, or a PIN. Each subsequent gesture has a different protector key that wraps around the authentication key. While the container is designed to hold only one authentication key, multiple copies of that single authentication key can be wrapped up with the different protector keys associated with the different gestures registered on the device. There is also an administrative key that Windows automatically generates so that credentials can be reset when necessary, and the TPM has its normal block of data as well that contains attestations and other TPM-related information.

After the PIN is established and these keys are created as I just described, the user can authenticate to the device in a trusted way, and Windows will then let him or her register a biometric gesture such as a fingerprint or face.
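Since the keys land in the TPM when one is available and fall back to software when not, it can be useful during rollout planning to know which machines actually have a usable TPM. One quick spot check, from an elevated PowerShell prompt on a Windows 10 machine (your systems management tooling can gather the same data at scale):

# Reports whether a TPM is present and ready for use on this machine
Get-Tpm | Select-Object TpmPresent, TpmReady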

Enforcing Windows Hello for Business through Group Policy

As you might imagine, you set up Windows Hello and enforce it throughout the enterprise organization through the use of Group Policy. Within the Group Policy Management Console, you can find policy settings under Policies / Administrative Templates / Windows Components / Windows Hello for Business in both the User configuration and Computer configuration hives. The important policies to configure are:

  • Use Windows Hello for Business: set this to Enabled to get started with the deployment.
  • Use biometrics: set this to Enabled to allow gestures instead of supporting only a PIN.

Alternatively, if you already have a mobile device management solution deployed, then you can use MDM to force the deployment of Windows Hello. The policies use the PassportForWork configuration service provider, which is like a template of potential settings that you will need to import into the MDM solution before you can begin configuring and enforcing policies.
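If you want to spot-check that these policies have actually landed on a client, one hedged approach is to look at the registry values the policy writes and at the device registration status. The registry path below is an assumption based on the PassportForWork policy/CSP name, so verify it on a test machine before relying on it:

# Assumed policy location; confirm in your own environment first
Get-ItemProperty -Path 'HKLM:\SOFTWARE\Policies\Microsoft\PassportForWork' -ErrorAction SilentlyContinue
# dsregcmd /status reports the device's registration and Hello (NGC) state
dsregcmd /status | Select-String -Pattern 'NgcSet', 'AzureAdJoined'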

Key Points to Consider

Some important points to remember:

  • Credentials enrolled in Windows Hello for Business can be bound to individual laptops, desktops, or devices, and the access token one gets after successful credential verification is also limited to that single device.
  • During an account’s registration process, Active Directory, Azure AD, or the Microsoft account service checks and authenticates the validity of the user and associates the Windows Hello public key to a user account. The keys—both the public and private halves—can be generated in TPM modules (versions 1.2 or 2.0) or they can live in software for devices without the right TPM hardware. The Windows Hello gesture does not roam between devices and is not shared with the server; it is stored locally on a device and never leaves the device. When the PIN is entered and the face and/or fingerprint is applied, Windows 10 uses the private key stored in the TPM to sign data transmitted to the authentication source.
  • According to Microsoft: “Personal (Microsoft account) and corporate (Active Directory or Azure AD) accounts use a single container for keys. All keys are separated by identity providers’ domains to help ensure user privacy.” In practice, this means that keys get commingled within one secure container, although they are delineated by their native identity provider so that the wrong key is not sent to the wrong provider.

Sidebar: Why a PIN and not a password?

At first blush, a PIN seems like a password but worse: shorter, probably all one type of character (usually numbers), and most likely reused among a number of different places, including bank accounts, access to your mobile phone or tablet, and so on. However, the technical execution of how passwords are verified in the authentication process makes all the difference. Passwords are transmitted over the network to the authentication source where they are validated and either accepted or rejected. Because this transmission happens over the network, anyone with the right tools can snoop in, capture the credentials, and reuse them anywhere. And as we discussed earlier, if all of the passwords are stored centrally, one attack can potentially compromise all of the passwords. In Windows Hello for Business, the PIN is the gatekeeper to unlock a cryptographic key that is bound to the TPM in one individual machine. The PIN only works on the local device and does not enable authentication of any other kind from any other place.

Active Directory Requirements

Fully enabling Windows Hello for Business will most likely require you to add at a minimum one Windows Server 2016 domain controller to your domain. While you do not have to raise your domain or forest functional level, the 2016 DC will light up some required authentication functionality. One alternative to shelling out for a 2016 license is to use Azure Active Directory to deploy Windows Hello.

There is detailed information about exactly what is required from a prerequisite standpoint on the Microsoft website: https://docs.microsoft.com/en-us/windows/access-protection/hello-for-business/hello-manage-in-organization. In particular, pay close attention to the key-based authentication requirements and the certificate-based authentication requirements; if you already have a public key infrastructure deployed in production, the certificate-based authentication method will be much easier to start with. If you are largely cloud oriented, then the key-based authentication method is the one to go with for your first Windows Hello deployments.

The Last Word

Security experts for years have been calling for the death of passwords as we know them, but that prediction has always been troubled by the lack of a seamless, affordable, user-friendly alternative for authenticating against systems. In practice, it was always going to take Microsoft putting biometric features inside Windows, the most popular operating system, to spur enough organizations to look into passwordless authentication, and it appears with Windows 10 that the Redmond software giant has done Just Enough to warrant the attention of enterprises everywhere. While it is unlikely your shop is in a position to remove passwords entirely, new machines you deploy can work with this option by default, and as you migrate to Windows 10 over time at your own pace, you can slowly but surely work Windows Hello for Business into your security profile.

A Look at the Microsoft Azure Internet of Things (IoT) Solution

An edited version of this story ran on Network World on September 7, 2017. Credit: Network World/Computerworld.

The Internet of Things, a vast network of connected microdevices, sensors, small computers, and more, throwing off data every second and sometimes even more often, is all around us. Whereas before we had dumb roads, dumb cities, dumb devices, and the like, now all of those things can talk to us and tell us how they are doing, what their current status is, what their environment is like, and even what other devices they know are nearby—and they chatter all of this back to you. All of this is coming to a head now because sensors are cheap, their processors are cheap, wireless is everywhere and becoming less expensive, and there are tremendous resources for storage and analytics out there. So how do you deal with this phenomenon and take it by the horns to make it begin working for you?

 

And deal with that data you must, because it is coming to you. Cisco research projects that there will be 50 billion connected devices by 2020, and all 50 billion of those will be sending off sensor data between once a second and once a minute. How much data is in each of those payloads? Assume it’s 500 bytes, to allow for some packet overhead: that means 25 terabytes of data being spun off somewhere between every second and every minute. What if Cisco is half wrong? What if Cisco is twice right? One thing is for certain: there will be a tremendous amount of data, and with that data come projections from Gartner, IDC, and Forrester that show a multi-trillion-dollar opportunity in both cost savings and new revenue from the IoT.

 

One other factor is that IT is starting to fade into the background a bit, at least as a central place where technology projects begin. Often business units and individual departments are driving technology efforts of their own, made possible by the fact that there are now cloud products out there that let you plunk down a corporate purchasing card and get started in minutes. Hire a data scientist contractor, outfit your kit with some sensors, start up a cloud service account, and before long you could have several terabytes of sensor data and be ready to get fired up. Microsoft has been innovating in this particular area with its Azure service, and it has some compelling offerings.

 

The Microsoft Azure Internet of Things Story

The Azure IoT Suite is designed to be a quick start type of portal—a true example of a platform-as-a-service (PaaS) offering—that gives you the resources necessary to deal with all of the data being sent back to you while also allowing you to manipulate it, develop some level of understanding of it, and use it to either improve your business processes, solve some nagging problem which you have deployed a boatload of sensors to track and mitigate, or build a model of the way certain parts of your assets behave.

 

NOTE: there is also a newer software as a service (SaaS) offering called Microsoft IoT Central, which takes the platform level stuff away and focuses only on the software that powers the sensors and connects everything together. Manufacturers can build their own SaaS-based IoT solutions hosted on the Azure IoT cloud service and get their solutions to market more quickly without having to reinvent the plumbing, the platform, and more. There’s also the very new (as in spring 2017) Azure IoT Edge suite, which lets programmers develop logic for small computers and sensors on the “edge” of an IoT environment in convenient languages like Java and C# rather than assembly and other more obscure languages. In this story, however, I will focus on the Azure IoT Suite because I think it more clearly highlights the capabilities of the overall platform.

 

The Azure IoT Suite itself bundles together a bunch of relevant Microsoft Azure services into a surprisingly simplified package. It starts off by allowing you to create a couple of ready made IoT consumption scenarios, including predictive maintenance and remote monitoring, and automatically orchestrates the various Azure products like databases, websites, web services, the data ingestion point, and more, creating and linking them together so that you are ready to go from square one. For example, for the remote monitoring solution that you can start with as a pre-configured package, Azure self-selects and configures the following services, handling the provisioning process automatically:

  • Azure IoT Hub (1 high-frequency unit, also called an S2 unit)
  • Azure Stream Analytics (3 streaming units)
  • Azure DocumentDB (1 S2 instance)
  • Azure Storage (1 GRS standard, 1 LRS standard, 1 RA-GRS standard)
  • Azure App Services (2 S1 instances, 2 P1 instances)
  • Azure Event Hub (1 basic throughput unit)

 

Each of the other solutions has a different makeup, but you get the idea: everything you need with just a couple of clicks.

 

The pitch from Microsoft is that while you might have the internal resources to do a couple of IoT style projects today, as you build on those developments, create new models, deploy more sensors, and in general double down on IoT in the future, you probably will not be able to (at least cost effectively) handle all of that data. You will either be forced to invest in expensive storage infrastructure on premises, or you will have to make problematic choices about what data to keep present, what data to roll up, summarize, and archive, and what data to discard. And of course, when you discard data, you cannot get it back, so you might be losing out on some predictive capability you do not know about yet; if you roll up and summarize data, you lose the granularity and resolution on that data necessary to build some advanced mathematical models and use machine learning.

 

Instead, you can start right out in the cloud and take advantage of the tremendous scale of resources that Azure already has—and that is growing quite a bit each year. Instead of spending on disks and compute, you just pay for the Azure services and run times that you consume with your project, and you can scale up or scale down as your needs change. Even better, Microsoft is starting to make some of the glue work so you can see the day when your Azure IoT data could be integrated within Power BI, for example, so that your regular knowledge workers (as opposed to trained mathematicians and data scientists) could query your data sets using natural language and get results back in a nice, graphical, easy to consume format. All of that glue and linkage would be much harder to create in an on premises environment, and I think Microsoft here is betting that IoT initiatives are new and green enough in most enterprises that it is not difficult to start them off in the cloud—or at least not as difficult as, say, deciding to move SharePoint into the cloud. In fact, right now, the Azure IoT tools integrate with the Cortana Analytics solution, which provides data science, number crunching, and machine learning tools, and you can then inform your business processes of the insights you derive by linking Cortana Analytics with the Microsoft Dynamics suite of enterprise resource planning (ERP) tools.

 

Imagine this type of scenario: you operate a fleet of large transportation vehicles, each equipped with two or more really expensive engines. These engines can be instrumented with sensors that report quite a bit of information, including fan speed, oil pressure, ambient temperature, fuel pressure, thrust at any given moment, air quality, vibration, and more. You start collecting this data across the thousands of engines that you have in your fleet and correlating that data against maintenance records, failure notices, mechanical delays that interrupt the scheduled service you normally deliver with your fleet, and more. Over time and with some math, you are able to build a model showing that certain engine components are likely to fail after a certain number of cycles. You can learn which components those are, order those parts in advance, and adjust the fleet deployment schedule so that those parts can be replaced when the equipment is nearby, reducing interruptions and saving the cost of ferrying parts all around your locations. This is the kind of model you can build with Azure IoT Suite (and it happens to be one of the sample models you can run as you get started with your account).

 

As far as the sensors go, last October Microsoft launched its Azure IoT Suite Device Catalog [https://catalog.azureiotsuite.com/], which showcases more than 500 devices from more than 200 partner manufacturers that are all certified to work with the Azure IoT suite. On the developer and software side, the Azure IoT suite is a full scale member of the Azure service, and thus works with Visual Studio, Eclipse, Chef, Puppet, GitHub, PowerShell, Python, MongoDB, Hadoop, Ruby, Docker, MySQL, and anything else that is part of the set of compatible offerings and capabilities with Azure.

How It Works

You can get started by heading over to https://www.azureiotsuite.com and logging in with your Microsoft account. There you can either use your current MSDN Azure benefit or set up a new subscription, and then you’ll be presented with the Provisioned Solutions page, which is the first page of the Azure IoT Suite package itself. Then, follow these steps.

 

  1. Click Create a new solution to build your own IoT “workspace.”
  2. You can then choose from a few different preconfigured solution types, including “connected factory,” “predictive maintenance,” and “remote monitoring.” For this walkthrough, I’ll show you remote monitoring, so click that option.
  3. The Create “remote monitoring” solution screen appears. Here is where you enter a friendly name for this workspace, the Azure region in which all of this should be spun up (you would ideally want the region closest to either you or your sensors to reduce latency), and the Azure subscription to which all of this should be billed. You can find pricing information for all of the components of Azure that the IoT suite will provision at https://azure.microsoft.com/en-us/pricing.
  4. Click Create solution, and then grab a cup of coffee while Azure spins up all of the resources it outlined.
  5. After the provisioning is complete, you’ll be back at the Provisioned Solutions screen, and your friendly named workspace will be shown there with a green check mark. Click the Launch button to get inside.
  6. You’ll be greeted with the dashboard screen. This shows a map of the Seattle area with four sensors geoplotted, each with a colored indicator (green or red). These sensors are simulated, just to give you an idea of the type of dashboard you can build with your own sensor deployment. On the right side, you can see the Device to View menu, which gives you a drop-down selector where you can pick individual sensors to examine. On the lower left side, there’s the Alarm History section, which shows sensors that are meeting some predefined problem threshold, and then on the lower right you see speedometer-style gauges that show various properties and values the sensor fleet is reporting.
  7. On the left side, click Devices. This gives you a grid-style list of devices. You can use the “+” button in the lower left to add a new sensor, which can be either another simulated device or a physical device with a SIM card (ICC ID) for cellular connectivity or access to a wireless connection. You can also modify the properties the simulated sensor reports to the workspace, including model and revision number, asset tag, or anything else you like.
  8. On the left side, click Rules. You can add new rules that operate on the two existing data fields, temperature and humidity, and set the thresholds that apply to those rules. This area is what kicks off those alarm history items on the dashboard, and note that if a device is alarming, its status on the map is changed from green to red to make it easy to identify.

 

That’s a quick walk around the preconfigured solution, but the key thing to remember is that all of this is live in Azure. You can go and adjust any of it, from the configuration of the dashboard to the way resources talk to each other to anything else; you manage all of this from within the Azure portal, same as any other Azure resource. If you’re looking for a remote monitoring solution, this preconfigured package saves you a lot of effort in getting the right pieces in place—start there, tailor it, and build from there. There’s no additional charge beyond the resources the solution spins up to run itself; the design and code are free.
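Because everything the suite provisions is ordinary Azure underneath, you can also inspect it from PowerShell. Here is a minimal sketch using the AzureRM cmdlets current as of this writing, assuming the resource group carries the same name you gave the solution (check the portal if yours differs):

# Sign in interactively, then list every resource the solution created
Login-AzureRmAccount
Get-AzureRmResource | Where-Object { $_.ResourceGroupName -eq 'MyIoTWorkspace' } | Select-Object Name, ResourceType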

 

The Last Word

Microsoft has a robust set of tools for integrating all sorts of devices into an IoT solution. They have more scale than you do and work with a wide variety of devices. If you are building an IoT solution, then you owe it to yourself to play around with what Azure IoT can do. I have not seen a solution in this space where it is easier to get started and make your own tweaks to build your own workspace. Worth a look.

Exploring SSH Key Rotation with Thycotic Secret Server

Everyone knows—or should be aware of—the huge security issues surrounding the Windows administrator account, and there is a ton of guidance on how to properly secure that particular account, how to avoid password compromises, and how to use alternatives. But there seems to be much less intense focus on securing privileged accounts on Linux, and especially on the idea that SSH keys are much like passwords in their absolute need to be protected at all costs.

SSH keys start out the security race ahead of passwords because they require two parts that do not exist in the same location all the time—a private key, which is often protected by a passphrase, is compared with a public key, and cryptographic analysis is carried out to determine if a user is authenticated to a system. This avoids the wholesale compromise of credentials just by gaining access to a system, since all that system would have is the public keys for authorized users. The other half of the puzzle, those users’ private keys, does not exist on the system anywhere.

Unfortunately, out of convenience—or perhaps, at the time, sheer necessity—the same private keys are often re-used across multiple machines. This might be to make it easier for developers and testers to connect to a farm of Linux or Unix machines before they are moved collectively into production, or it may be that a single administrator is responsible for all of them and feels that he can properly secure that one single, well-formulated private key. It may even be an artifact of cloning a bunch of new virtual machines from an existing “template” image, or a relic of being able to easily automate a bunch of actions across a number of different machines using a single private key.

But of course, there is more to it than just one unique individual private key per user per machine. Those keys ought to be rotated and changed out, just like passwords should. This is for a variety of reasons: systems that contain private keys can be compromised, and the passphrases that protect those private keys can be compromised as well. Especially as news breaks today that the WPA2 wireless encryption technology has been broken and unprotected resources behind a vulnerable access point could be compromised, it is more important than ever to never fully trust a static key, password, phrase, or anything else, and to make sure that secret is changed regularly.

The process of rotating keys is simple, but carrying it out is definitely not easy. Essentially it is a four-part process: you need to generate a new key pair, install the new public key into the authorized_keys file on all of your machines, test the new key, and then remove the stale key from that same authorized_keys file.
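To make those four steps concrete, here is a minimal PowerShell sketch, assuming the OpenSSH client tools (ssh-keygen and ssh) are available on the administrative workstation, the old key still works everywhere, and hosts.txt holds one hostname per line. The account, file names, and host list are all illustrative, and a real script would need better error handling:

$user    = 'ubuntu'                      # illustrative account name
$oldKey  = "$HOME\.ssh\id_old"           # the key being retired
$newKey  = "$HOME\.ssh\id_rotated"       # the replacement key

# 1. Generate the new key pair, protected by a passphrase
ssh-keygen -t ed25519 -f $newKey -N 'use-a-strong-passphrase-here'
$newPub  = Get-Content "$newKey.pub"
$oldBlob = (Get-Content "$oldKey.pub").Split(' ')[1]

foreach ($h in Get-Content .\hosts.txt) {
    # 2. Install the new public key, authenticating with the old key
    ssh -i $oldKey "$user@$h" "echo '$newPub' >> ~/.ssh/authorized_keys"
    # 3. Test the new key before removing anything
    ssh -i $newKey "$user@$h" 'true'
    if ($LASTEXITCODE -eq 0) {
        # 4. Strip the stale public key out of authorized_keys
        ssh -i $newKey "$user@$h" "grep -v '$oldBlob' ~/.ssh/authorized_keys > ~/.ssh/ak.tmp && mv ~/.ssh/ak.tmp ~/.ssh/authorized_keys"
    }
}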

Brave folks use Ansible to take care of system administration tasks like these in an automated way, but then you need to spend time thinking about playbooks and plays and trying to make all of those pieces play together nicely. That all works well enough, and for the smallest deployments with the most limited of budgets it is a reasonable solution. But what about when the powers that be want proof your Linux or Unix endpoints are correctly protected this way? What if you need to easily control who can access those keys from an administrative staffing perspective? You need a stronger tool, and the folks at Thycotic have a proposition: why not use software designed from the start to automate the protection of our most sensitive privileged accounts and let it take care of the dirty work for you?

Thycotic Secret Server to the Rescue

Enter Thycotic’s Secret Server product, and specifically its SSH Key Management function. This program will generate new keys and rotate them out either on demand or on a predetermined schedule; control, via a role-based access control scheme and robust permissions, which users have access to which keys; and provide a complete audit trail of which keys were sent where and when, and who used them to access which systems at what date and time. Cobbling that together with Ansible would be a lot tougher and not nearly as seamless.

Thycotic asked me to spend some time evaluating this aspect of Secret Server and share my thoughts on the product.

My objective: get SSH key rotation going among 25 different Ubuntu virtual machines I have running in Amazon Web Services as part of another project. I installed a trial version of Secret Server on a Windows Server 2012 R2 box and added Microsoft SQL Server 2014 Express at the prompting of the installer. After completing the setup wizard, which was reasonably simple (although I did have to close and restart it due to an HTTPS binding issue, which it automatically fixed), I went to the console, popped my license information in, and enabled the Remote Password Changing settings. Then, I set out to create a new secret in Secret Server. Following the Quick Start guide, I went with the Unix Account (SSH Key Rotation) template. On the template screen, I entered a friendly name for an account on one of those 25 machines as well as its username and password. I uploaded the private key file in .PEM format and entered its passphrase. I clicked the button and then watched for the Last Heartbeat field to say “Success” with an associated timestamp—that told me I was ready to go. I also liked that I could just launch PuTTY and get signed in with one click—very convenient.

After that, I went to the Remote Password Changing tab and clicked Change Password Remotely. I generated a random password and then clicked Generate New SSH Key to get a new key. I also clicked the Generate button next to the Next Private Key Passphrase field to get a new protective passphrase. I clicked Change at the bottom and that was it. The key was rotated.

I went ahead and added a few of the other machines I was working with on that project, and it all worked the same way as the first one—add the secret, kick off the change, be done. It was a little work to get set up, but now I have a tool where, with—what, three clicks? four?—I can rotate keys, all from a single point of management.

Pretty neat tool.

A Ransomware Removal Guide

An edited version of this article ran on the Netwrix blog on June 29, 2017. Credit: Netwrix

Ransomware is one of the biggest scourges we face as Internet citizens today. What happens when you have been struck by it? The most obvious option is to pay the ransom. If you did, you would not be alone: even large companies and nonprofits have had to pay up or at least negotiate a ransom payment. But should that be your first option? Hardly. Here are some tips on how to initially recover from a ransomware attack.

Have good backups

It just so happens that the best defense against a ransomware attack is something you put in place long before the infection hits your network: good backups. These can come in a couple of forms:

  • Shadow copies. You may be familiar, if you are a Windows administrator, with the Volume Shadow Copy Service, a piece of software first introduced in Windows Server 2003 that takes snapshots of data on specifically configured volumes at predetermined points in time. This service informs the Previous Versions feature in the Windows client, where if you do something stupid in a spreadsheet, for example, you can right-click the file on disk and choose to open a previous version made before your mistake. If you catch a ransomware infection early, chances are shadow copies are a good way to restore an unencrypted version of your files. If you are not using shadow copies, configure them today (a quick way to check what shadow copies already exist appears after this list). Unfortunately, some variants of ransomware have caught on to this procedure and, during their silent infection process, prior to encrypting files, they delete all shadow copies found on a disk.
  • Regular backups that you restore from tape or archive disk. You are taking regular backups of your storage system, right? And you are regularly testing those restores so that you are able to verify you backed up good files and can restore them intact? If not, then stop reading right now and go configure a backup scheme. If you are, then rest a little easier. The worst case for a ransomware infection is to wipe your machines and put data back on them from restored backups. Sure, it is an investment of time, but you absolutely do not need to pay any ransom, and you just might be seen as a hero.
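As promised above, here is a quick way to see whether shadow copies actually exist before you need them. Run these from an elevated prompt; the on-demand snapshot creation at the end is available on server SKUs:

# List existing shadow copies via WMI and via the built-in vssadmin tool
Get-CimInstance -ClassName Win32_ShadowCopy | Select-Object VolumeName, InstallDate
vssadmin list shadows
# Create a snapshot on demand (Windows Server)
vssadmin create shadow /for=C: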

See if a free decryptor is available

If you do find yourself on the other end of a ransomware attack that has completed, you have a couple of options that don’t involve paying the ransom.

As governments and security researchers continue to make progress against the ransomware threat, these parties have managed to break the encryption schemes used by some variants of ransomware. It is important to keep in mind that not every variant of ransomware has been “broken” by the good guys, so you should not rely solely on the promise that some of these encryption schemes have been foiled and rest on your laurels when it comes to building defenses against this type of attack.

But if you have already been victimized, then head over to The No More Ransom Project at https://www.nomoreransom.org and look for the variant you have been hit with. (This site is sponsored jointly by the European Cybercrime Center, Politie, Kaspersky Lab, and Intel Security.) On the site, there are currently decryptor tools available for the following variants:

  • Crysis
  • Marsjoke/Polyglot
  • Wildfire
  • Chimera
  • Teslacrypt
  • Shade
  • Coinvault
  • Rannoh
  • Rakhni

These folks are working on breaking other variants as well, but of course breaking good encryption takes time, and the malware creators also have a perverse incentive to make their encryption stronger and even more unbreakable. It is an unfortunate dance, but for now, you might be able to save yourself with the decryptor tools on the site. A big red flashing warning here, though, to beware of tools from other places—they may actually be ransomware disguised as a prevention tool.

Use the File Server Resource Manager to catch bad actors

Even if you have been infected by ransomware, it is not too late to prevent further damage. You will likely have some encrypted files, but the sooner you stop the spread of the infection, the fewer files end up being held hostage, and the easier your cleanup task is. We have covered using a built-in tool within Windows Server, called the File Server Resource Manager, on this blog before to catch ransomware attacks as they happen. Essentially you create a honeypot share with a dollar sign in front of the name to fool ransomware into starting with that particular share in its efforts to encrypt files. You let the group Authenticated Users have full control of this share so that any process that wants to write to that share can (although you do not publicize this share to users; it exists only to catch things that should not be on your systems, not as a drop box for other files). There is no legitimate use of the honeypot share, so when the File Server Resource Manager file screen notices write activity happening within that share, it can safely assume that someone has been infected, and it will cut off that user’s access to every share to stop the spread of the encryption attack in its tracks. There is a simple PowerShell script that can be fired by the File Server Resource Manager to accomplish this:

Get-SmbShare -Special $false | ForEach-Object { Block-SmbShareAccess -Name $_.Name -AccountName '[Source Io Owner]' -Force }
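For completeness, here is one way the surrounding FSRM pieces might be wired together with the FileServerResourceManager PowerShell module. The share path, script path, and names below are purely illustrative—adapt them to your environment—and note that FSRM substitutes [Source Io Owner] with the offending account when it invokes the command:

# Assumes the one-liner above is saved as C:\Scripts\Block-RansomwareUser.ps1,
# taking the account name as a parameter in place of the [Source Io Owner] literal
$group  = New-FsrmFileGroup -Name 'Honeypot - any file' -IncludePattern '*'
$action = New-FsrmAction -Type Command `
    -Command 'C:\Windows\System32\WindowsPowerShell\v1.0\powershell.exe' `
    -CommandParameters '-ExecutionPolicy Bypass -File C:\Scripts\Block-RansomwareUser.ps1 "[Source Io Owner]"' `
    -SecurityLevel LocalSystem
New-FsrmFileScreen -Path 'D:\Shares\$Honeypot' -IncludeGroup $group.Name -Notification $action -Active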

 

Once permissions have been removed, the ransomware has no files it can access to encrypt, and it basically just stops. You then could clean the malware off, restore the files that were written to in that timeframe, and move on with your life.

For much more detail on this way to stop a pending attack, or an attack that is just beginning, check out Ransomware Protection Using FSRM and PowerShell [https://blog.netwrix.com/2016/04/11/ransomware-protection-using-fsrm-and-powershell/] on our blog.

A Look at the Network Controller in a Software Defined Network

An edited version of this article appears on Cisco.com and appears to have launched on Friday, June 2, 2017. Credit: Cisco Systems Inc

One cannot have a software defined network without the network controller: the component that has a single, complete view of a network and all of its components, and thus is the base for configuring the network, its paths, and, perhaps most importantly, what devices lie in those paths. What are some important considerations when evaluating network controllers? What are some of the unique features and pitfalls that network controllers introduce? Let’s take a look.

Network Controller Extensibility

The beauty of the network controller role is that most products are designed to be extensible. Generally there are modules or plug-ins you can install into the network controller to light up more features and enable additional functionality. In fact, many controllers, including the Cisco Open SDN Controller, offer programmatic interfaces: REST APIs that let applications gain visibility into and integrate deeply with the network, Java APIs that let developers create custom functions to support advanced scenarios, and “southbound” plug-ins that hook virtual networks up to physical networks so that heterogeneous network environments can take the leap into an SDN world.
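To make the REST idea concrete, here is a purely illustrative sketch of pulling a controller’s inventory of network nodes over its northbound API. The URL, path, and response shape are hypothetical (loosely modeled on RESTCONF-style controllers); consult your controller’s API documentation for the real endpoints:

# Hypothetical endpoint; substitute your controller's documented URL and path
$cred   = Get-Credential
$invUri = 'https://sdn-controller.example.com:8443/restconf/operational/opendaylight-inventory:nodes'
$nodes  = Invoke-RestMethod -Uri $invUri -Credential $cred -Headers @{ Accept = 'application/json' }
$nodes.nodes.node | Select-Object id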

Open Source Network Controllers

Many network controllers are also open source, written and contributed to by a community of network professionals and talented developers. POX and Beacon are good examples of serviceable open source network controllers. Other network controllers are offered by enterprises and service companies to enable special integration with other types of network hardware and software. Both open source and proprietary network controllers have their role. Open source controllers are a great way to ensure a standards-based network, especially when used with network devices from multiple vendors, and they often have a vibrant community enhancing the solution as the state of the SDN art advances. On the other hand, proprietary network controllers often offer increased speed and capability on vendor-specific hardware while also providing a support infrastructure for when things go haywire, as they eventually will.

Fault Tolerance and High Availability

Given that it is the single point of control in a software defined network, the network controller can also be a single point of failure. Simple clustering and replicating virtual machines and state across the wire is not enough in an SDN environment, because network switches maintain a hard state that has to be handled across boot iterations of the network controller. Good network controllers incorporate switch state into a transactional replication system that ensures instances of the network controller can be replicated in the event of a fault while maintaining a consistent switch state—and without taking down the network just so the switches can re-establish a consensus about their state. Other implications of fault tolerance include the ability of the controller to continue managing all of the different devices on the software defined network after a failover or failback procedure, and how well these fault-tolerant procedures scale when your environment is not simply a corporate campus but a global-scale cloud datacenter with hundreds of thousands of customers and thousands of hosted network configurations. Will your network controller vendor measure up?

Network Controller Serviceability

Another differentiation point with network controllers from various vendors is how serviceable they are. Often the logs from a controller will be the first source to consult when troubleshooting network issues. Are the logs easy to find? Can they be shipped to central logging facilities where an existing network monitoring environment can capture events as they happen and proceed through alerting workflows? Another servicing concern is patching the network controller, both for feature enhancements and to repair or mitigate security flaws. Since the network controller is a key role, most controllers have a way to patch the role without bringing it down.

Consequences of the Network Controller Role in Today’s Environments

What’s the implication for today’s software defined networking environments? What are the main concerns of more decision making and centralized management happening right inside the network controller? There are a couple of points to make here:

  • The benefit is potentially a single pane of glass (or for a complex network, just a few panes of glass) where your entire network strategy can be handled, monitored, and, if need be, reconfigured. The network controller is the strategic control point of the network, the one place that handles all of the abstraction of resources.
  • The challenge is that one incurs more risk with many virtualized networks and components directing activity through and by the network controller: if the controller goes down, how limited is network functionality? The extent to which services are impaired really depends on each network configuration, but without fault tolerance and high availability capabilities built into your production SDN, the impact of a failed network controller could be quite severe.

Office 365: Configure User’s Password to Never Expire

An edited version of this article appeared on the Netwrix blog on May 25, 2017. Credit: Netwrix

One of the most frequent questions we come across here is from users or administrators who subscribe to Office 365 but tire of being prompted to change their passwords when they expire. These users might not be used to Microsoft’s default password expiration policy for the service, especially if they are coming from an in-house system that had a more lenient expiration requirement. While I am not entirely convinced setting passwords to never expire is a smart move, if you choose secure passwords and use multifactor authentication, I do not think you are opening up a big can of worms by disabling expiry.

Configuring Using PowerShell

As always, good old PowerShell comes to the rescue. First off, you need to set yourself up to connect to the service through PowerShell remoting. If you have not done this yet, then you will need two pieces of software: the Microsoft Online Services Sign-In Assistant for IT Professionals RTW (yes, that’s the official name), and the Azure Active Directory Module for Windows PowerShell. Install both of those, then open up a PowerShell session and type in

Connect-MsolService

And then enter your credentials at the prompt.

Once you have successfully authenticated, then enter the following command to set one particular user’s password to never expire:

Set-MsolUser -UserPrincipalName <fullemailaddress@yourdomain.com> -PasswordNeverExpires $true

If you know a little bit about PowerShell, then you also know that if the verb in a command is Set, then you can also use Get to get information or properties about a certain object. In this case, we can use Get-MsolUser to see if the user’s password has already been configured to never expire, and we do so using the following command which selects that one certain attribute among many to display as a response to our command:

Get-MsolUser -UserPrincipalName <fullemailaddress@yourdomain.com> | Select PasswordNeverExpires

You can extrapolate this command to see the password expiration status of all users in your tenant by using the following command:

Get-MSOLUser | Select UserPrincipalName, PasswordNeverExpires

You can also combine these two commands to set all of the users in your tenant to have passwords that never expire using the pipelining feature of PowerShell. Here we get a list of users from Get-MsolUser and then we pipe that information to Set-MsolUser, leaving out the specific reference to names since those will be fed into the new command from the pipeline, and we can leave the attribute configuration the same:

Get-MsolUser | Set-MsolUser -PasswordNeverExpires $true
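One caveat if you go the tenant-wide route: by default, Get-MsolUser returns only a limited first page of results, so for larger tenants include the -All switch to make sure every account is touched:

Get-MsolUser -All | Set-MsolUser -PasswordNeverExpires $true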

Configuring using the Graphical User Interface (GUI)

If you’re afraid of the PowerShell command line, two words of advice: 1) do not fear it, for it is your friend, and 2) there is also a way to take care of this from within the web based Office 365 administration console. You will need to have administrator credentials for this.

  1. Sign in at https://portal.office.com/adminportal/home.
  2. From the Settings menu, select Security and privacy, and then click Edit.
  3. Under Password policy, set the toggle labeled “Set user passwords to never expire” to On.