How to plan for an Azure cloud migration

The hype surrounding the public cloud is increasingly difficult for many in IT to overlook.

The promises of an Azure cloud migration were often overshadowed by fears related to security and loss of control. Over time, resistance to moving away from the data center thawed, as did negative attitudes toward the cloud. Microsoft’s marketing is making the Azure drumbeat harder to ignore, especially as the end of life for Windows Server 2008/2008 R2 draws closer and the company offers enticements such as Azure Hybrid Benefit.

Some of the traditional administrative chores associated with on-premises workloads will dissipate after a move to Azure, but this switch also presents new challenges. Administrators who’ve put in the work to ensure a smooth migration to the cloud will find they need to account for some gaps in cloud coverage and put measures in place to protect their organization from downtime.

Gauging the on-premises vs. cloud services switchover

A decision to move to the cloud typically starts with an on-site evaluation. The IT staff will take stock of its server workload inventory and then see if there’s a natural fit in a vendor’s cloud services portfolio. Administrators in Windows shops may gravitate toward the familiar and stay with Microsoft’s platform to avoid friction during the Azure cloud migration process.

Part of the benefit — and drawback — of the cloud is the constant innovation. New services and updates to existing ones arrive at a steady pace. Microsoft sells more than 200 Azure services, nearly 20 for storage alone, which can make it difficult to judge which service is the right one for a particular on-premises workload.

Is it time to take that on-premises file server — and all its hardware support headaches — and lift it into the Azure Files service? It depends: There will be some instances where an Azure service is not mature enough or is too expensive for some Windows Server roles.

Take steps to avoid downtime

It takes a lot of work to migrate multiple server roles into the cloud, including domain name services and print servers. But what happens when there’s an outage?

Major cloud providers offer a service-level agreement to guarantee uptime to their services, but problems can hit closer to home. Your internet service provider could botch an upgrade, or a backhoe could slice through an underground cable. In either scenario, the result is the same: Your business can’t access the services it needs to operate.

Outages happen all the time. There’s no way to avoid them, but you can minimize the effects. Ideally, you could flip a switch to turn on a backup of the infrastructure services and SaaS. But that type of arrangement is not financially possible for most organizations.

With a little advance preparation, you can limp along with some of the essential infrastructure services that moved to the cloud, such as print servers and domain name services. With a spare Hyper-V host or two, your company can power up a few VMs designed to keep core services running in an emergency.

What’s the difference between IaaS and PaaS?

Organizations can move workloads to the cloud a few different ways, but the two most common methods are a “lift and shift” IaaS approach or using the cloud provider’s PaaS equivalent.

Taking an app and its corresponding data in a virtual machine and uploading it to the cloud typically requires little to no reworking of the application. This is called a “lift and shift” due to the minimal effort required compared to other migration options. This approach offers the path of least resistance, but it might not be the optimal approach in the long run.

Traditional Windows Server administrators — and their organizations — might unlock more benefits if they run the application as part of the Azure PaaS. For example, rather than putting a SQL Server VM into Azure and continuing the administrative legacy of patching, upgrades and monitoring, the organization could switch to Azure SQL Database. This PaaS product takes away granular control, but the perks include lower operating costs and less chance of downtime through its geo-replication feature.

What to do when cloud isn’t an option

The cloud might not be the right destination for several reasons. It might not be technically feasible to move a majority of the on-premises workloads, or it just might not fit with the current budget. But it’s not a comfortable feeling when large numbers of vendors start to follow the greener pastures up into the cloud and leave on-premises support behind.

A forward-looking and resourceful IT staff will need to adapt to this changing world as their options for applications and utilities such as monitoring tools start to shrink. The risky proposition is to stay the course with your current tool set and applications and forgo the security updates. A better option is to take the initiative and look at the market to see whether an up-and-coming competitor can fill this vacuum.

Adobe brings graph database to customer journey touchpoints

Identity resolution is a difficult technology issue for marketers to solve, because current customer experience platforms have a hard time understanding when the same person contacts a company from multiple devices. Adobe’s Customer Journey Analytics, announced today, tackles the problem with a graph database.

Customer Journey Analytics is a feature subset of Adobe Analytics, itself part of the Adobe Experience Platform. It features an interface that closely resembles Photoshop’s layers, a UX model familiar to marketers and designers who typically use that application when creating marketing and sales content.

Combining a graph database — which makes more connections between data points than traditional relational databases — with analytics is a new way to solve the problem of identity resolution in the case of multiple customer journey touchpoints, said Nate Smith, Adobe Analytics product marketing manager.

Instead of creating new records when a customer who typically uses a smartphone app switches over to a desktop computer, for example, the graph database can connect the dots.

“It will tie those devices together to a unique ID,” Smith said.
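To make the idea concrete, here is a small illustrative sketch — not Adobe's actual implementation — of why a graph model helps: treat each device identifier and login as a node, treat known links between them as edges, and resolve every connected component to a single unique ID with union-find. All identifiers below are hypothetical.

```python
# Illustrative identity-resolution sketch: shared identifiers act as
# graph edges, and union-find collapses each connected component
# (one person's devices) into a single unique ID.

class IdentityGraph:
    def __init__(self):
        self.parent = {}

    def _find(self, x):
        self.parent.setdefault(x, x)
        while self.parent[x] != x:
            # Path compression keeps lookups near-constant time.
            self.parent[x] = self.parent[self.parent[x]]
            x = self.parent[x]
        return x

    def link(self, a, b):
        """Record that identifiers a and b belong to the same person."""
        self.parent[self._find(a)] = self._find(b)

    def unique_id(self, x):
        return self._find(x)

g = IdentityGraph()
# The same customer signs in from two devices with one login.
g.link("iphone-app-device-123", "login:jane@example.com")
g.link("desktop-cookie-456", "login:jane@example.com")

# Both devices now resolve to the same unique ID.
print(g.unique_id("iphone-app-device-123") == g.unique_id("desktop-cookie-456"))  # True
```

A production system would also weigh edge confidence and handle merges and splits over time; the sketch only shows how a graph ties devices to one ID where flat per-device records would create duplicates.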

Adobe Analytics adds deeper insights to its platform capabilities for mapping customer journey touchpoints.

Data science for marketers

It’s the latest chapter in a technology trend where customer experience platform vendors bring more data science capabilities to marketers, who aren’t typically data scientists. Using the metaphor of the customer journey, the features track the various stages of customer interaction with a company, from discovery to shopping to completing a purchase.

The idea is to subdivide the transaction process in order to find more opportunities for additional sales, upsells or retargeting. This becomes a more complex proposition as new customer journey touchpoints, such as social media mobile apps or even smart speakers such as Amazon’s Alexa, become popular among a company’s customers.

Smith said the “layers” approach enables customer experience teams to look for new potential revenue opportunities by mixing and matching different data sets, such as brick-and-mortar and website sales. Teams can also analyze trends to determine what’s behind issues such as customer attrition problems.

For customer experience teams employing data scientists, Adobe Analytics includes an advanced data analysis tool, Adobe Experience Platform Query Service.

The graph database component of Customer Journey Analytics pairs well with Adobe Sensei AI, according to Forrester analyst James McCormick. Together they can automate deduplication of records, a time-intensive manual task, closer to real time. The Photoshop-esque interface will help customers dive into the analytics tools more quickly, he added.

“These are iterative moves toward Adobe’s vision of creating a uniform user experience across a fully integrated Adobe Experience Cloud,” McCormick said. “This common approach will really help Adobe customers work with, and across, multiple products.”

New Sierra-Cedar HR Systems Survey uncovers lack of tech adoption data

LAS VEGAS — A new report emphasized that data-driven HR is a difficult goal to achieve. And it’s even more challenging because few companies track how much employees use — or don’t use — HR applications.

The numbers from the 21st annual Sierra-Cedar HR Systems Survey illustrated the gap: Fifty-two percent of respondents indicated that HR tech influences their business decisions, but less than a quarter of those people possess data on employee buy-in, as illustrated by HR tech adoption in their organization.

“You have to know how people use your tech,” said Stacey Harris, vice president of research and analytics at Sierra-Cedar, a tech consulting and managed services firm based in Alpharetta, Ga. “[Only] 10% of organizations are measuring HR technology adoption — how their technology is being used. That’s an issue.”

She presented the findings at the HR Technology Conference here this week. TechTarget, the publisher of SearchHRSoftware, is a media partner for the event.

Michael Krupa, senior director of digitization and business intelligence at networking giant Cisco, told Harris he is not surprised by the statistics. To measure adoption, a series of detailed steps is necessary, including documenting HR users, creating metrics based on those personas and then presenting the data in dashboards. Along the way, companies must also determine who monitors adoption data.

“You have to do all that,” Krupa said. “It’s hard.”

However, there is a statistical correlation between those who successfully track HR tech adoption and a 10% increase in favorable business outcomes, Harris said.

Methods to track employee buy-in and HR tech adoption

Sierra-Cedar HR Systems Survey respondents cited many ways to ascertain adoption and use, including the following:

  • measuring mobile and desktop logins;
  • determining average transactions completed during a period of time;
  • running Google Analytics reports;
  • tracking employee self-service volume;
  • talking to employees; and
  • receiving vendor reports on activity.
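As a minimal sketch of the first two bullets — every name and number here is hypothetical, not survey data — an adoption metric can be computed directly from raw login events:

```python
from datetime import date

# Hypothetical login events: (employee_id, login_date, channel).
logins = [
    ("emp-001", date(2018, 9, 3), "mobile"),
    ("emp-001", date(2018, 9, 4), "desktop"),
    ("emp-002", date(2018, 9, 10), "desktop"),
    ("emp-004", date(2018, 8, 28), "mobile"),   # outside the window
]
headcount = 4
window = (date(2018, 9, 1), date(2018, 9, 30))

# Share of the workforce that logged in at least once in the window.
active = {emp for emp, day, _ in logins if window[0] <= day <= window[1]}
adoption_rate = len(active) / headcount

# Average transactions (here, logins) per active user in the window.
in_window = [e for e in logins if window[0] <= e[1] <= window[1]]
avg_transactions = len(in_window) / len(active)

print(f"Adoption: {adoption_rate:.0%}, avg logins per active user: {avg_transactions:.1f}")
```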

Chatham Financial, a financial advisory and technology company based in Kennett Square, Pa., tracks logins and sends out satisfaction surveys to users, said Lindsay Evans, director of talent. Chatham’s approach is to think of employees as customers.

However, Evans — who appeared with Harris and Krupa — said it is not always a bad thing to find out employees don’t use an application.

“At my company, we use a time tracker, and people hate it,” she said. “I wish we hadn’t rolled it out. It’s not really saving us a lot of time.”

Data-driven HR raises data privacy concerns

The big picture of human capital management has changed within the last 15 years. Software from back then focused on processes, whereas HR professionals now use a company’s strategy, culture and data governance to evaluate technology, Harris said.

Stacey Harris, Michael Krupa and Lindsay Evans speak at the HR Technology Conference.

“Data is at the center of your HR technology conversation,” she added.

Broadly, data governance describes steps to ensure the availability, integrity and security of digital information. “Data governance is important, because we need to know where data is stored, how people are using it and where it’s moving,” Krupa said.

With the emphasis on data-driven HR comes the need for cybersecurity and data privacy, and the Sierra-Cedar HR Systems Survey uncovered an interesting twist in those duties where HRIT professionals are concerned. HRIT and HRIS roles are the top choices to handle data privacy and content security, with 48% of organizations with all-cloud HR systems using HRIT in this way.

However, for 47% of companies with on-premises HR systems, IT departments deal with data privacy and content security, while only 18% use HRIT.

“In the cloud environment, [HRIT workers] are the people standing between you and data privacy,” Harris said, adding that this rise in prominence for cloud-based HR indicates HRIT professionals are becoming more strategic in their duties.

Closing thoughts on HR cloud, mobile and spending

Beyond results on HR technology adoption, the Sierra-Cedar HR Systems Survey looked at a wide swath of HR tech issues, including these tidbits:

  • Cloud adoption of HR management systems continues to rise, with 68% of companies heading in that direction, compared with on-premises installations — an increase of 14% from last year’s Sierra-Cedar report.
  • Mobile HR has been adopted by 51% of organizations. So, if your company doesn’t use this tech, it lags behind, Harris said. However, this statistic came with a warning, too, as only 25% of companies have a BYOD policy, which hints at data privacy risks, she said.
  • For 2018, 42% of organizations reported plans to increase HR system spending, which is a 10% increase over 2017. “There is no return on investment with HR technology … but there is a return on value,” Harris said. “But you only get more [value] if people are using it.”

Helping customers shift to a modern desktop – Microsoft 365 Blog

IT is complex. And that means it can be difficult to keep up with the day-to-day demands of your organization, let alone deliver technological innovation that drives the business forward. In desktop management, this is especially true: the process of creating standard images, deploying devices, testing updates, and providing end user support hasn’t changed much in years. It can be tedious, manual, and time consuming. We’re determined to change that with our vision for a modern desktop powered by Windows 10 and Office 365 ProPlus. A modern desktop not only offers end users the most productive, most secure computing experience—it also saves IT time and money so you can focus on driving business results.

Today, we’re pleased to make three announcements that help you make the shift to a modern desktop:

  • Cloud-based analytics tools to make modern desktop deployment even easier.
  • A program to ensure app compatibility for upgrades and updates of Windows and Office.
  • Servicing and support changes to give you additional deployment flexibility.

Analytics to make modern desktop deployment easier

Collectively, you’ve told us that one of your biggest upgrade and update challenges is application testing. A critical part of any desktop deployment plan is analysis of existing applications—and the process of testing apps and remediating issues has historically been very manual and very time consuming. Microsoft 365 offers incredible tools today to help customers shift to a modern desktop, including System Center Configuration Manager, Microsoft Intune, Windows Analytics, and Office Readiness Toolkit. But we’ve felt like there’s even more we could do.

Today, we’re announcing that Windows Analytics is being expanded to Desktop Analytics—a new cloud-based service integrated with ConfigMgr and designed to create an inventory of apps running in the organization, assess app compatibility with the latest feature updates of Windows 10 and Office 365 ProPlus, and create pilot groups that represent the entire application and driver estate across a minimal set of devices.

The new Desktop Analytics service will provide insight and intelligence for you to make more informed decisions about the update readiness of your Windows and Office clients. You can then optimize pilot and production deployments with ConfigMgr. By combining data from your own organization with data aggregated from millions of devices connected to our cloud services, you can take the guesswork out of testing and focus your attention on key blockers. We’ll share more information about Desktop Analytics and other modern desktop deployment tools at Ignite.

Standing behind our app compatibility promise

We’re also pleased to announce Desktop App Assure—a new service from Microsoft FastTrack designed to address issues with Windows 10 and Office 365 ProPlus app compatibility. Windows 10 is the most compatible Windows operating system ever, and using millions of data points from customer diagnostic data and the Windows Insider validation process, we’ve found that 99 percent of apps are compatible with new Windows updates. So you should generally expect that apps that work on Windows 7 will continue to work on Windows 10 and subsequent feature updates. But if you find any app compatibility issues after a Windows 10 or Office 365 ProPlus update, Desktop App Assure is designed to help you get a fix. Simply let us know by filing a ticket through FastTrack, and a Microsoft engineer will follow up to work with you until the issue is resolved. In short, Desktop App Assure operationalizes our Windows 10 and Office 365 ProPlus compatibility promise: We’ve got your back on app compatibility and are committed to removing it entirely as a blocker.

Desktop App Assure will be offered at no additional cost to Windows 10 Enterprise and Windows 10 Education customers. We’ll share more details on this new service at Ignite and will begin to preview this service in North America on October 1, 2018, with worldwide availability by February 1, 2019.

Servicing and support flexibility

Longer Windows 10 servicing for enterprises and educational institutions
In April 2017, we aligned the Windows 10 and Office 365 ProPlus update cadence to a predictable semi-annual schedule, targeting September and March. While many customers—including Mars and Accenture—have shifted to a modern desktop and are using the semi-annual channel to take updates regularly with great success, we’ve also heard feedback from some of you that you need more time and flexibility in the Windows 10 update cycle.

Based on that feedback, we’re announcing four changes:

  • All currently supported feature updates of Windows 10 Enterprise and Education editions (versions 1607, 1703, 1709, and 1803) will be supported for 30 months from their original release date. This will give customers on those versions more time for change management as they move to a faster update cycle.
  • All future feature updates of Windows 10 Enterprise and Education editions with a targeted release month of September (starting with 1809) will be supported for 30 months from their release date. This will give customers with longer deployment cycles the time they need to plan, test, and deploy.
  • All future feature updates of Windows 10 Enterprise and Education editions with a targeted release month of March (starting with 1903) will continue to be supported for 18 months from their release date. This maintains the semi-annual update cadence as our north star and retains the option for customers that want to update twice a year.
  • All feature releases of Windows 10 Home, Windows 10 Pro, and Office 365 ProPlus will continue to be supported for 18 months (this applies to feature updates targeting both March and September).
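The servicing windows above are simple month arithmetic. The helper below is an illustrative sketch only; Microsoft publishes the authoritative end-of-servicing dates for each version:

```python
from datetime import date

def servicing_end(release: date, months_supported: int) -> date:
    """Add whole months to a release date (day clamped to the 1st,
    since the servicing policy works at month granularity)."""
    total = release.year * 12 + (release.month - 1) + months_supported
    return date(total // 12, total % 12 + 1, 1)

# Version 1809 (targeted for September 2018), 30 months of servicing:
print(servicing_end(date(2018, 9, 1), 30))  # 2021-03-01
# Version 1903 (targeted for March 2019), 18 months of servicing:
print(servicing_end(date(2019, 3, 1), 18))  # 2020-09-01
```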

In summary, our new modern desktop support policies—starting in September 2018—are:

  • Windows 10 Enterprise and Education, September-targeted feature updates: 30 months of servicing.
  • Windows 10 Enterprise and Education, March-targeted feature updates: 18 months of servicing.
  • Windows 10 Home, Windows 10 Pro, and Office 365 ProPlus, all feature updates: 18 months of servicing.

Windows 7 Extended Security Updates
As previously announced, Windows 7 extended support is ending January 14, 2020. While many of you are already well on your way in deploying Windows 10, we understand that everyone is at a different point in the upgrade process.

With that in mind, today we are announcing that we will offer paid Windows 7 Extended Security Updates (ESU) through January 2023. The Windows 7 ESU will be sold on a per-device basis and the price will increase each year. Windows 7 ESUs will be available to all Windows 7 Professional and Windows 7 Enterprise customers in Volume Licensing, with a discount to customers with Windows software assurance, Windows 10 Enterprise or Windows 10 Education subscriptions. In addition, Office 365 ProPlus will be supported on devices with active Windows 7 Extended Security Updates (ESU) through January 2023. This means that customers who purchase the Windows 7 ESU will be able to continue to run Office 365 ProPlus.

Please reach out to your partner or Microsoft account team for further details.

Support for Office 365 ProPlus on Windows 8.1 and Windows Server 2016
Office 365 ProPlus delivers cloud-connected and always up-to-date versions of the Office desktop apps. To support customers already on Office 365 ProPlus through their operating system transitions, we are updating the Windows system requirements for Office 365 ProPlus and revising some announcements that were made in February. We are pleased to announce the following updates to our Office 365 ProPlus system requirements:

  • Office 365 ProPlus will continue to be supported on Windows 8.1 through January 2023, which is the end of support date for Windows 8.1.
  • Office 365 ProPlus will also continue to be supported on Windows Server 2016 until October 2025.

Office 2016 connectivity support for Office 365 services
In addition, we are modifying the Office 365 services system requirements related to service connectivity. In February, we announced that starting October 13, 2020, customers will need Office 365 ProPlus or Office 2019 clients in mainstream support to connect to Office 365 services. To give you more time to transition fully to the cloud, we are now modifying that policy and will continue to support Office 2016 connections with the Office 365 services through October 2023.

Shift to a modern desktop

You’ve been talking, and we’ve been listening. Specifically, we’ve heard your feedback on desktop deployment, and we’re working hard to introduce new capabilities, services, and policies to help you on your way. The combination of Windows 10 and Office 365 ProPlus delivers the most productive, most secure end user computing experience available. But we recognize that it takes time to both upgrade devices and operationalize new update processes. Today’s announcements are designed to respond to your feedback and make it easier, faster, and cheaper to deploy a modern desktop. We know that there is still a lot of work to do. But we’re committed to working with you and systematically resolving any issues. We’d love to hear your thoughts and look forward to seeing you and discussing in more detail in the keynotes and sessions at Ignite in a few weeks!

Introduction to Azure Cloud Shell: Manage Azure from a Browser

Are you finding the GUI of Azure Portal difficult to work with?

You’re not alone, and it’s easy to get lost. There are so many changes and updates made every day, and the Azure overview blades can be pretty clunky to traverse. With Azure Cloud Shell, however, we can use PowerShell or Bash to manage Azure resources instead of clicking around in the GUI.

So what is Azure Cloud Shell? It is a web-based shell that can be accessed via a web browser. It automatically authenticates with your Azure sign-in credentials and lets you manage all the Azure resources your account has access to. This eliminates the need to load Azure modules on workstations. In situations where developers or IT pros require shell access to their Azure resources, Azure Cloud Shell can be a very useful solution, as they won’t have to remote into “management” nodes that have the Azure PowerShell modules installed on them.

How Azure Cloud Shell Works

As of right now, Azure Cloud Shell gives users two different environments. One is a Bash environment, which is essentially a terminal connection to a Linux VM in Azure that gets spun up; this VM is free of charge. The second is a PowerShell environment, which runs Windows PowerShell on a Windows Server Core VM. You will need some storage provisioned on your Azure account in order to create the $home directory, which acts as the persistent storage for the console session and allows users to upload scripts to run in the console.

Getting Started

To get started using Azure Cloud Shell, go to shell.azure.com. You will be prompted to sign in with your Azure account credentials:

Now we have some options. We can select which environment we prefer to run in: a Bash shell or PowerShell. Pick whichever one you’re more comfortable with. For this example, I’ve selected PowerShell.

Next, we get a prompt for storage, since we haven’t configured the shell settings with this account yet. Simply select the “Create Now” button to have Azure create a new resource group, or select “Show Advanced Settings” to configure those settings to your preference.

Once the storage is provisioned, we will wait a little bit for the console to finish loading, and then the shell should be ready for us to use!

In the upper left corner, we have all of the various controls for the console. We can reset the console, start a new session, switch to Bash, and upload files to our cloud drive.

As an example, I uploaded an activate.bat script file to my cloud drive. To access it, we simply reference $home and specify our CloudDrive.

Now I can see my script.

This will allow you to deploy your custom PowerShell scripts and modules in Azure from any device, assuming you have access to a web browser, of course. Pretty neat!

Upcoming Changes and Things to Note

  • On May 21st, Microsoft announced that it will use a Linux platform for both the PowerShell and Bash experiences. How is this possible? Essentially, a Linux container will host the shell, and Microsoft claims startup time will be much faster than in previous versions because of it. By default, PowerShell Core 6 will be the first experience. To switch from PowerShell to Bash in the console, simply type “bash”; to go back to PowerShell Core, just type “pwsh”.
  • Microsoft is planning on having “persistent settings” for Git and SSH tools so that the settings for these tools are saved to the CloudDrive and users won’t have to hassle with them all the time.
  • There is some ongoing pain with modules currently. Microsoft is still working on porting modules to .NET Core (for use with PowerShell Core), and there will be a transition period while this happens. The most commonly used modules are being ported first. In the meantime, there is one workaround that many people seem to forget: implicit remoting. This is the process of importing a module that is already installed on another endpoint into your PowerShell session, allowing you to call that module and have it execute remotely on the node where it is installed. It can be very useful until more modules are converted to .NET Core.
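To make the implicit remoting workaround concrete, here is a hedged PowerShell sketch; the computer name and module are hypothetical placeholders, not details from the article:

```powershell
# Connect to a node that already has the module installed
# ("mgmt01" and ActiveDirectory are placeholder examples).
$session = New-PSSession -ComputerName mgmt01

# Import the remote module's commands into the local session.
# The -Prefix avoids name clashes with any local commands.
Import-PSSession -Session $session -Module ActiveDirectory -Prefix OnPrem

# The imported commands look local but execute on mgmt01:
Get-OnPremADUser -Filter * | Select-Object -First 5

# Clean up when finished.
Remove-PSSession $session
```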

Want to Learn More About Microsoft Cloud Services?

The development pace of Azure is one of the most aggressive in the market today, and as you can see, Azure Cloud Shell is constantly being updated and improved. In the near future, it will most likely become one of the more common methods for interacting with Azure resources. It provides Azure customers with a seamless way of managing and automating their Azure resources without having to authenticate over and over again or install extra snap-ins and modules, and it will continue to shape the way we do IT.

What are your thoughts regarding the Azure Cloud Shell? Have you used it yet? What are your initial thoughts? Let us know in the comments section below!

Do you have interest in more Azure goodness? Are you wondering how to get started with the cloud and move some existing resources into Microsoft Azure? We have a panel-style webinar coming up in June that addresses those questions. Join Andy Syrewicze, Didier Van Hoye, and Thomas Maurer for a crash course on how you can plan your journey effectively and smoothly, utilizing the exciting cloud technologies coming out of Microsoft, including:

  • Windows Server 2019 and the Software-Defined Datacenter
  • New Management Experiences for Infrastructure with Windows Admin Center
  • Hosting an Enterprise Grade Cloud in your datacenter with Azure Stack
  • Taking your first steps into the public cloud with Azure IaaS

Emerging technologies to fuel collaboration industry growth

The future of any industry is not always certain, and it can be difficult to predict. However, some trends in the unified communications and collaboration industry indicate 2018 will be a strong year of growth.

Over the next two years, 80% of companies intend to adopt UCC tools, according to a survey published by market research firm Ovum. More importantly, 78% of the 1,300 global companies surveyed have already set aside budgets to adopt UCC tools — that’s a promising sign.

But what exactly will that growth in the unified communications and collaboration industry look like? What existing trends will continue? And what new trends will emerge?

The continued rise of APIs in the collaboration industry

As more companies emphasize streamlining their workflows, more IT departments will embed communication APIs into their existing applications. Integrating communication APIs is faster, easier and cheaper than a full internal development, which can take months. Additionally, deploying commercial software, which requires companies to run their own global infrastructure, can be burdensome.

In 2017, 25% of companies used APIs to embed UC features, according to a report from Vidyo, a video conferencing provider based in Hackensack, N.J. This trend is expected to continue: half of companies plan to deploy APIs this year, and 78% plan to integrate APIs for embedded video in the future.

Embedded communication APIs also provide contextual information for workflows. Information out of context does not exactly help organizations, and it provides users with a fragmented experience — even with a project management interface to organize workflows.

In 2018, look for new features to put more contextualized information at workers’ fingertips. For example, a sidebar during a video conference could offer users information, such as certain content to address during the meeting or tasks associated with the active speaker.

The AI party arrives in the collaboration industry

As we push into 2018, keep an eye on the emergence of AI in the unified communications and collaboration industry. Virtual assistants and bots, for example, use AI to enrich the meeting experience.

Imagine sitting through a long conference call when the discussion moves to a topic that interests you. You call out, “Start recording conversation,” and a virtual assistant immediately begins recording. Then, you say, “Send me a transcript of this conversation.” And at the end of the call, the virtual assistant sends you a transcript with an analysis of the conversation that you can replay with action items.

Emerging technology in the contact center

Unified communications apps are revolutionizing business in general. But I predict 2018 will be a banner year for the customer-support industry in particular. Some companies have already integrated click-to-call features into their chatbots, but the quality of those features to date has been subpar.

Companies will move from telephony to instant video calls when connecting customers with agents. Thanks to instant translation and transcription services, the video widget will include real-time subtitles translated into French, English, Spanish or whatever language the customer needs to understand the service agent.

The agent experience will also improve. We’ll start to see AI bots on the back end that transcribe conversations and index all the words, so agents can be prompted with special content as the conversation unfolds. Agents could then send information to customers on the spot with a voice command.
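As a rough illustration of the indexing idea described above, the Python sketch below builds a tiny inverted index over a transcribed conversation and surfaces canned agent prompts when trigger keywords appear. The keywords, prompts and function names here are all hypothetical, not any vendor's API.

```python
# Minimal sketch: index transcript words so agent prompts can be
# triggered as matching keywords appear in the conversation.
# All names and prompts are illustrative placeholders.

PROMPTS = {
    "refund": "Open the refunds workflow and confirm the order number.",
    "router": "Send the router self-test diagram to the customer.",
}

def index_words(transcript):
    """Build an inverted index: word -> positions in the transcript."""
    index = {}
    for pos, word in enumerate(transcript.lower().split()):
        index.setdefault(word.strip(".,?!"), []).append(pos)
    return index

def prompts_for(transcript):
    """Return agent prompts whose keywords occur in the transcript."""
    index = index_words(transcript)
    return [tip for kw, tip in PROMPTS.items() if kw in index]

if __name__ == "__main__":
    line = "My router keeps dropping the connection, can I get a refund?"
    for tip in prompts_for(line):
        print(tip)
```

A production bot would index a streaming transcript continuously rather than a finished string, but the lookup idea is the same.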

Customers and agents will also be able to illustrate what they’re talking about with augmented reality (AR). Imagine you’re on the phone with a Comcast agent, and you can show the agent your router with your iPhone. The agent could send you diagrams of what to do — superimposed onto your router in AR. This process is now possible, thanks to Google and Apple embracing AR toolkits.

These emerging technologies indicate a bright future for the unified communications and collaboration industry. Whatever the next year holds, good luck in your journey.

Stephane Giraudie is CEO and founder of Voxeet, a provider of voice over IP web conferencing software based in San Francisco. 

Five questions to ask before purchasing NAC products

As network borders become increasingly difficult to define, and as pressure mounts on organizations to allow many different devices to connect to the corporate network, network access control is seeing a significant resurgence in deployment.

Often positioned as a security tool for the bring your own device (BYOD) and internet of things (IoT) era, network access control (NAC) is also increasingly becoming a very useful tool in network management, acting as a gatekeeper to the network. It has moved away from being a system that blocks all access unless a device is recognized, and is now more permissive, allowing for fine-grained control over what access is permitted based on policies defined by the organization. By supporting wired, wireless and remote connections, NAC can play a valuable role in securing all of these connections.

Once an organization has determined that NAC will be useful to its security profile, it’s time for it to consider the different purchasing criteria for choosing the right NAC product for its environment. NAC vendors provide a dizzying array of information, and it can be difficult to differentiate between their products.

When you’re ready to buy NAC products and begin researching your options — and especially when speaking to vendors to determine the best choice for your organization — consider the questions and features outlined in this article.

NAC device coverage: Agent or agentless?

NAC products should support all devices that may connect to an organization’s network. This includes many different configurations of PCs, Macs, Linux devices, smartphones, tablets and IoT-enabled devices. This is especially true in a BYOD environment.

NAC agents are small pieces of software installed on a device that provide detailed information about the device — such as its hardware configuration, installed software, running services, antivirus versions and connected peripherals. Some can even monitor keystrokes and internet history, though that presents privacy concerns. NAC agents can either run scans as a one-off — dissolvable — or periodically via a persistently installed agent.

If the NAC product uses agents, it’s important that they support the widest variety of devices possible, and that other devices can use agentless NAC if required. In many cases, devices will require the NAC product to support agentless implementation to detect BYOD and IoT-enabled devices and devices that can’t support NAC agents, such as printers and closed-circuit television equipment. Agentless NAC allows a device to be scanned by the network access controller and be given the correct designation based on the class of the device. This is achieved with aggressive port scans and operating system version detection.

Agentless NAC is a key component in a BYOD environment, and most organizations should look at this as a must-have when buying NAC products. Of course, gathering information via an agent will provide more detail on the device, but an agent-only approach is not viable on a modern network that needs to support many different devices.
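To make the agentless approach concrete, here is a simplified Python sketch of the two steps described above: probing a device's TCP ports and mapping the open ports to a device class. The port-to-class rules are illustrative placeholders; real NAC products use far richer fingerprinting, including OS version detection.

```python
# Illustrative sketch of agentless classification: a NAC appliance
# scans a device's open ports and maps the results to a device class.
# The port-to-class rules below are simplified examples, not vendor logic.

import socket

def probe_port(host, port, timeout=0.5):
    """Return True if a TCP connection to host:port succeeds."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

def classify(open_ports):
    """Map a set of open TCP ports to a coarse device class."""
    if 9100 in open_ports or 515 in open_ports:   # raw printing / LPD
        return "printer"
    if 554 in open_ports:                          # RTSP video stream
        return "cctv-camera"
    if 3389 in open_ports:                         # RDP
        return "windows-host"
    if 22 in open_ports:                           # SSH
        return "linux-host"
    return "unknown"
```

In practice the scan results feed a policy engine that assigns the device to a network segment, rather than just returning a label.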

Does the NAC product integrate with existing software and authentication?

This is a key consideration before you buy an NAC product, as it is important to ensure it supports the type of authentication that best integrates with your organization’s network. The best NAC products should offer a variety of choices, such as 802.1X (through the use of a RADIUS server), Active Directory, LDAP or Oracle. NAC will also need to integrate with the way an organization uses the network. If the staff uses a specific VPN product to connect remotely, for example, it is important to ensure the NAC system can integrate with it.

Supporting many different security systems that do not integrate with one another can cause significant overhead. A differentiator between the different NAC products is not only what type of products they integrate with, but also how many systems exist within each category.

Consider the following products that an organization may want to integrate with, and be sure that your chosen NAC product supports the products already in place:

1. Security information and event management

2. Vulnerability assessment

3. Advanced threat detection

4. Mobile device management

5. Next-generation firewalls

Does the NAC product aid in regulatory compliance?

NAC can help achieve compliance with many different regulations and standards, such as the Payment Card Industry Data Security Standard (PCI DSS), HIPAA, ISO 27002 and National Institute of Standards and Technology (NIST) guidelines. Each of these stipulates certain controls regarding network access that should be implemented, especially around BYOD, IoT and rogue devices connecting to the network.

By continually monitoring network connections and performing actions based on the policies set by an organization, NAC can help with compliance with many of these regulations. These policies can, in many cases, be configured to match those of the compliance regulations mentioned above. So, when buying NAC products, be sure to have compliance in mind and to select a vendor that can aid in this process — be it through specific knowledge in its support team or through predefined policies that can be tweaked to provide the compliance required for your individual business.

What is the true cost of buying an NAC product?

The price of NAC products can be the most significant consideration, depending on the budget you have available for procurement. Most NAC products are charged per endpoint (device) connected to the network. On a large network, this can quickly become a substantial cost. There are often also hidden costs with NAC products that must be considered when assessing your purchase criteria.

Consider the following costs before you buy an NAC product:


1. Add-on modules. Does the basic price give organizations all the information and control they need? NAC products often have hidden costs, in that the basic package does not provide all the functionality required. The additional cost of add-on modules can run into tens of thousands of dollars on a large network. Be sure to look at what the basic NAC package includes and investigate how the organization will be using the NAC system. Specific integrations may be an additional cost. Is there extra functionality that will be required in the NAC product to provide all the benefits required?

2. Upfront costs. Are there any installation charges or initial training that will be required? Be sure to factor these into the calculation, on top of the price per endpoint — of course.

3. Support costs. What level of support does the organization require? Does it need one-off or regular training, or does it require 24/7 technical support? This can add significantly to the cost of NAC products.

4. Staff time. While not a direct cost of buying NAC products, consider how much monitoring an NAC system requires. Time will need to be set aside not only to learn the NAC system, but also to manage it on an ongoing basis and respond to alerts. Even the best NAC systems require trained staff, so that if problems occur, there are people available to address the issues.
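The four cost categories above can be folded into a simple back-of-the-envelope model. The Python sketch below is only illustrative; every dollar figure in it is a placeholder, not a real vendor quote.

```python
# Back-of-the-envelope NAC total-cost model covering per-endpoint
# licensing plus the "hidden" costs discussed above. All prices
# are fabricated placeholders for illustration.

def nac_total_cost(endpoints, price_per_endpoint, addons=0.0,
                   installation=0.0, training=0.0, annual_support=0.0,
                   years=3):
    """Estimate total cost of ownership over a number of years."""
    licensing = endpoints * price_per_endpoint * years
    one_time = addons + installation + training
    return licensing + one_time + annual_support * years

# Example: 5,000 endpoints at $15/endpoint/year over three years,
# plus $20k of add-on modules, $5k installation, $8k training and
# $10k/year of support.
total = nac_total_cost(5000, 15, addons=20000, installation=5000,
                       training=8000, annual_support=10000)
print(f"${total:,.0f}")  # $288,000
```

Note how the non-licensing items add roughly 28% on top of the headline per-endpoint price in this example, which is why they belong in the procurement calculation from the start.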

NAC product support: What’s included?

Support from the NAC manufacturer is an important consideration from the perspective of the success of the rollout and assessing the cost. Some of the questions that should be asked are:

  1. What does the basic support package include?
  2. What is the cost of extended support?
  3. Is support available at all times?
  4. Does the vendor have a significant presence in the organization’s region? For example, some NAC providers are primarily U.S.-based, and if an organization is based in EMEA, it may not provide the same level of support.
  5. Is on-site training available and included in the license?

Support costs can significantly drive up the cost of deployment and should be assessed early in the procurement process.

What to know before you buy an NAC system

When it comes to purchasing criteria for network access control products, it is important that not only is an NAC system capable of detecting all the devices connected to an organization’s network, but that it integrates as seamlessly as possible. The cost of attempting to shoehorn existing processes and systems into an NAC product that does not offer integration can quickly skyrocket, even if the initial cost is on the cheaper side.

NAC should also work for the business, not against it. In the days when NAC products only supported 802.1X authentication and blocked everything by default, NAC was seen as an annoyance that stopped legitimate network authentication requests. But, nowadays, a good NAC system provides seamless connections for employees, third parties and contractors alike, routing each to the correct area of the network to which they have access. It should also aid in regulatory compliance, an issue all organizations now need to deal with.

Assessing NAC products comes down to the key questions highlighted above. They are designed to help organizations determine what type of NAC product is right for them, and accordingly aid them in narrowing their choices down to the vendor that provides the product that most closely matches those criteria.

Once seldom used by organizations, endpoint protection is now a key part of IT security, and NAC products have a significant part to play in that. A well-implemented and well-managed NAC product can mean the difference between a successful network attack and total attack failure.

Hyper-converged infrastructures get a new benchmark

Hyper-converged infrastructures can be extremely difficult to manage, because everything is interconnected. Measuring performance in this type of infrastructure is just as challenging. And in the past, the available benchmarks only focused on one part of the system. Now, administrators have the ability to look at the infrastructure as a whole.

In November, the Transaction Processing Performance Council (TPC) announced the availability of TPCx-HCI, an application system-level benchmark for measuring the performance of hyper-converged infrastructures. With this benchmark kit, administrators can get a complete view of their virtualized hardware and converged storage, networking and compute platforms running a database application.

We spoke with Reza Taheri, chairman of the TPCx-HCI committee and principal engineer at VMware, who explained the new benchmark for hyper-converged infrastructures and how the council created it.

What was the process for developing the TPCx-HCI benchmark?

Reza Taheri: Originally, we developed a functional specifications document to leave people’s hands open to do any implementation [of the benchmark]. But over time, we realized that it actually made it very hard for people to implement. Not anybody could just go out and start learning the benchmark based on Transaction Processing Performance Council standards. So, we put out a benchmark kit that anybody can download, and it implements the benchmark, measurement, collection of data and all of that in the application kit itself.

The TPCx-V benchmark [for virtualization] was released a couple of years ago. The idea was to look at the performance of a virtualized server — so the hardware, hypervisor, storage and networking using the database workload. We wanted to compare different virtualization stacks using a very heavy business-critical database workload.


Earlier this year, we had a couple new members join the TPC, and they were HCI vendors — DataCore and Nutanix. They, along with other vendors, [started] asking about a benchmark for HCI systems. We looked at the TPCx-V benchmark kit and specifications and realized that we could very quickly repurpose that for hyper-converged infrastructures. We realized that the HCI market is hot and that there was a demand for a good benchmark.

Will this benchmark account for quality of service, in addition to price and performance?

Taheri: In a couple of ways, yes. One is that you need to have very strict response time performance. 

The other one is something that’s new in this benchmark: Combine performance with some notion of availability. Say you’re running on a four-node cluster. For the test, you limit the VMs on three of the nodes, but all four nodes supply data. At some point during the test, you kill the fourth node and run for a while, and then you turn it back on. You’re required to report the impact on performance during this run and also to report how long it took you to recover resilience and redundancy after the host came back on.
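The test sequence Taheri describes can be sketched as a simple measurement loop: sample throughput over the run, note when the node is killed and when redundancy is restored, then report the degradation and recovery time. The Python sketch below uses synthetic numbers and is not part of the actual TPCx-HCI kit.

```python
# Sketch of the availability measurement described above: compare
# throughput before and after a node is killed, and report the
# recovery window. All timings and throughput figures are synthetic.

def availability_report(samples, kill_time, recovered_time):
    """samples: list of (timestamp_sec, transactions_per_sec) tuples."""
    before = [tps for ts, tps in samples if ts < kill_time]
    during = [tps for ts, tps in samples if kill_time <= ts < recovered_time]
    baseline = sum(before) / len(before)
    degraded = sum(during) / len(during)
    return {
        "throughput_drop_pct": round(100 * (1 - degraded / baseline), 1),
        "recovery_seconds": recovered_time - kill_time,
    }

# Synthetic four-node run: the node dies at t=120s and redundancy
# is restored at t=240s.
samples = [(0, 1000), (60, 1010), (120, 700), (180, 750), (240, 990)]
print(availability_report(samples, kill_time=120, recovered_time=240))
```

The real benchmark reports both numbers in its disclosure, which is what lets buyers compare how gracefully different HCI stacks absorb a node failure.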

What types of applications do you use for benchmark testing?

Taheri: It’s an online transaction processing application — a database application — that runs on top of Postgres [an open source relational database management system] in a Linux VM. We use that to generate a realistic, very heavy workload that then runs on top of the hypervisor and virtualized storage, virtualized networking, the hardware and so on. The beauty of an application like that is that it really leaves nowhere to hide. Sometimes, for example, if it’s a very simple test of just IOPS, you can make up for low storage by using a lot of CPU or a lot of memory.

But you can’t do that with a high-level system benchmark like this, because if you make up for storage by using too much CPU in the HCI software itself or do caching and use memory, then the application suffers and your performance drops. So, to have good performance, you have to have good storage, memory, CPU and networking all at the same time.

Are all the tested systems running the same hypervisor? Can you accurately compare benchmark performance results for HCI systems that are running different hypervisors?

Taheri: Any hypervisor can be used for this benchmark. Different hyper-converged infrastructures might be running different software stacks besides different hypervisors. It might not be possible to state how much of a performance difference is solely due to the hypervisor. The TPCx-V benchmark is very similar to TPCx-HCI, but runs on one node and can use any type of storage. TPCx-V is a better tool for studying the performance of hypervisors.

Is there any way to compare this benchmark to something running in the cloud?

Taheri: Not directly, but the benchmark has many attributes of cloud-based applications, such as elasticity of load, virtualization and so on. Also, a sponsor might choose to run the benchmark on a cloud platform, which is allowed by the Transaction Processing Performance Council specifications.

As HCI is still evolving, are there plans to review and make changes to the benchmark at any point?

Taheri: We would need to. It was a quantum leap from the Iometer type of benchmarks — micro-benchmarks — to a system application benchmark like this. Going forward, these specs will evolve. Benchmarks … evolve in minor ways, and every few years we have to do a major change, which makes it incomparable to previous versions of the same benchmark.

Azure Resource Manager templates ease private cloud struggles

Private cloud improves certain management capabilities, but it’s difficult to control cloud applications, which often consist of multiple cloud services and resource instances. Azure is no different, though Microsoft hopes to change that with its management portal.

Azure Resource Manager (ARM) is a management portal that lets admins roll a cloud application’s components — virtual machines, storage instances, virtual networks, databases and third-party services — into a group for easier management. Introduced in 2014, ARM lets administrators deploy, change and delete cloud app components as a single template-driven task.

ARM also works with Azure Stack, which enables administrators to build and deploy Azure Resource Manager templates through Azure PowerShell, the Azure command-line interface (CLI), the Azure portal, the REST API and various development tools. The portal lets administrators and developers create Azure Resource Manager templates that deploy and manage cloud apps on premises or in Azure. By comparison, traditional tools, such as Microsoft System Center, lack native integration with Azure Stack. However, Microsoft Operations Management Suite (OMS) reportedly can work with Azure Stack when OMS agents are installed.


Azure and Azure Stack include networking resources, such as virtual networks and load balancers, as well as other resources, such as compute and storage instances, each with attributes unique to the particular resource. ARM gathers resources into resource groups; organizations use these templates to build the environment. Orchestration features in Azure Resource Manager templates enable users to call any combination of Azure resources as a single task and produce a desired operating state.
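As a rough illustration of what a template-driven deployment looks like, the Python sketch below assembles a minimal ARM template skeleton and serializes it to JSON. The storage account resource, its name and the API version are placeholders; check current Azure documentation for valid values before deploying anything like this.

```python
# Minimal skeleton of an ARM template, built in Python and serialized
# to JSON. The storage account, its name and the apiVersion are
# placeholders, not a tested deployment.

import json

template = {
    "$schema": "https://schema.management.azure.com/schemas/2019-04-01/deploymentTemplate.json#",
    "contentVersion": "1.0.0.0",
    "parameters": {},
    "variables": {},
    "resources": [
        {
            "type": "Microsoft.Storage/storageAccounts",
            "apiVersion": "2021-04-01",          # placeholder version
            "name": "examplestorage001",          # placeholder name
            "location": "[resourceGroup().location]",
            "sku": {"name": "Standard_LRS"},
            "kind": "StorageV2",
        }
    ],
}

print(json.dumps(template, indent=2))
```

In practice this JSON would be handed to Azure PowerShell or the Azure CLI (for example, `az deployment group create --template-file template.json` against a resource group), and ARM would create, update or delete the listed resources as one operation.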

ARM requires an Azure subscription, which provides access to role-based access control (RBAC) and provides a level of granular access to Azure resources. RBAC establishes roles and correlates those roles to scopes of action, resource groups or individual resources. For example, admins create an application — or a resource that an application uses — that only certain administrators can modify or delete to secure cloud deployments. They also can customize policies to tailor deployment behaviors, such as enforcing region limitations or naming conventions.

With ARM, admins assign advanced tracking tags to resources and resource groups. These tags organize resources and show business leaders how much a group of Azure or Azure Stack resources costs, which is helpful for budgeting. Audit features track resource activity so admins can monitor resource use and speed troubleshooting.
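A sketch of how such tags might roll costs up to business units follows; the resources, tag names and dollar figures below are fabricated for illustration.

```python
# Illustrative roll-up of resource costs by tag value. Resource
# names, tag keys and monthly costs are fabricated examples.

from collections import defaultdict

resources = [
    {"name": "vm-web-01",  "tags": {"costCenter": "marketing"}, "monthly_cost": 120.0},
    {"name": "vm-db-01",   "tags": {"costCenter": "finance"},   "monthly_cost": 310.0},
    {"name": "storage-01", "tags": {"costCenter": "marketing"}, "monthly_cost": 45.0},
]

def cost_by_tag(resources, tag):
    """Sum monthly costs grouped by the given tag's value."""
    totals = defaultdict(float)
    for r in resources:
        totals[r["tags"].get(tag, "untagged")] += r["monthly_cost"]
    return dict(totals)

print(cost_by_tag(resources, "costCenter"))
```

The same grouping logic is what a chargeback report does with real Azure billing exports, just at much larger scale.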

ARM is an integral part of Azure and Azure Stack — not a separate management tool such as System Center. Its APIs connect the varied interfaces, namely the Azure portal or Azure CLI, to Azure and Azure Stack. Those tools then connect to the underlying compute, storage, network and other resources and services.

Next Steps

How to effectively use Azure Resource Manager

Navigate through the Microsoft Azure portal

Build and manage Windows containers in Azure
