Tag Archives: businesses

IT infrastructure automation boosts digital initiatives

With businesses becoming more digitally dependent and IT responsibilities outpacing budgets, IT shops are being forced to evolve. This transformation requires not just a change in infrastructure technology, but in the organization of IT personnel as well — an organizational makeover that often determines the success of digital business.

As firms drive new digital initiatives, such as developing digital products and services, using analytics and investing in application development, IT services have started to have a more direct effect on revenue opportunities. As a result, IT must become more responsive in order to speed up the delivery of those new services.

To improve responsiveness, IT shops often shift personnel to work directly with line-of-business teams to better understand their demands. Companies then add budget and headcount to meet the increased IT demand of each new initiative, and more budget again for the additional infrastructure those initiatives require. Or they can find a new way to get the same results.

The new way

Ultimately, it’s the desire to find innovative ways to dramatically reduce the cost of routine IT maintenance and management that drives demand for infrastructure transformation. The end result is an as-a-service infrastructure that frees existing personnel to cover the added responsibilities and speed delivery of IT services. Several emerging technologies, such as flash storage, deliver transformational benefits in performance, efficiency and TCO that can help. Technologies like flash are only part of the story, however. Another possibility that’s just as beneficial is IT infrastructure automation.

Manual tasks inhibit digital business. Every hour a highly trained IT resource spends on a manual — and likely routine — task is an hour that could have been spent helping to drive a potential revenue-generating digital initiative. As businesses increase their IT infrastructure automation efforts, an emerging concept called composable infrastructure has gained interest.

With composable infrastructure, IT infrastructure is virtualized to dynamically and efficiently allocate resources to individual applications. Composable infrastructure also provides the necessary analytics to fine-tune infrastructure. Ideally, software ensures the right resources are available at the right time, new resources can be added on demand, and capacity or performance can be contracted when demand changes. Cisco, Hewlett Packard Enterprise, Kaminario and other vendors promote the composable infrastructure concept.

There are several factors to consider as composable infrastructure gains traction:

  • The intelligence to drive IT infrastructure automation: Arguably the first step in any effort to automate IT is knowing what to automate, along with when and how to do it efficiently. How much performance and capacity does each application need? How much can the infrastructure provide? How will these demands change over time? Providing this information requires the right level of intelligence and predictive analytics to understand the nature of each application’s demand. Done right, this results in more efficient infrastructure design and a reduction in capital investment. An even more valuable benefit is likely to come in personnel savings, as this intelligence enables automatic tuning of the infrastructure.
  • Granularity of control: Intelligence is important, but the ability to use that intelligence offers the most tangible benefits. Composable infrastructure products typically provide controls, such as APIs, to enable programmatic management. In some cases, this lets the application automatically demand resources when it identifies increasing demand. The more likely near-term scenario is that these controls will be used to automate planned manual tasks, such as standing up infrastructure for the deployment of a new application or expanding a virtual machine environment (see the hypothetical sketch after this list). As IT infrastructure automation efforts expand and the number of infrastructure elements — e.g., performance and capacity — that can be automatically controlled increases, the value of composable infrastructure grows.
  • Architectural scale: Every IT infrastructure option seems to be scalable these days. For composable infrastructure, capacity and even performance scalability are just part of the story. Necessary data services and data management must scale as well. In addition, for the infrastructure to support IT automation, a time element is added to that scale. So when a request for scale is made, the infrastructure must react in a timely and predictable manner. For this, composable infrastructure requires high-performing components and latency reduction across data interconnects.

    Nonvolatile memory express (NVMe) plays a role here. While some view NVMe as just faster flash, the low-latency interconnect is critical to a scalable IT infrastructure effort. Data services add latency, and reducing the latency of the data path lets these data services extend to a broader infrastructure. Additionally, flexible scale isn’t just about adding resources; it’s also about freeing up resources that can be better used elsewhere.
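To make the idea of programmatic control concrete, here is a purely hypothetical Python sketch of what requesting new capacity through a composable infrastructure API might look like. The endpoint, payload fields and authentication are invented for illustration; real products from the vendors named above expose their own APIs.

    import requests

    # Entirely hypothetical endpoint and payload, for illustration only;
    # real composable-infrastructure APIs differ in shape and authentication.
    API = "https://composer.example.com/api/v1"

    resp = requests.post(
        API + "/volumes",
        json={
            "capacity_gib": 512,            # how much capacity the application needs
            "performance_tier": "nvme",     # ask for a low-latency tier
            "attach_to": "app-cluster-01",  # where the new resource should be composed
        },
        headers={"Authorization": "Bearer <token>"},
        timeout=30,
    )
    resp.raise_for_status()
    print(resp.json())  # typically returns an ID the automation can poll until the volume is ready

Called from a deployment pipeline, this same pattern is what turns a manual provisioning ticket into an automated step.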

The end goal is to deliver an infrastructure that can respond effectively to automation and reduce the number of manual tasks that must be handled by IT. Composable infrastructure isn’t the only way to achieve IT infrastructure automation, however. Software-defined storage and converged infrastructure can also help automate IT and go a long way toward eliminating the enemy of digital business, manual IT tasks.

And the more manual your IT processes are, the less competitive you’ll be as a digital business. As businesses seek to build an as-a-service infrastructure, composable infrastructure is another innovative step toward creating an automated, on-demand data center.

Microsoft customers and partners envision smarter, safer, more connected societies – Transform

Organizations around the world are transforming for the digital era, changing how businesses, cities and citizens work. This new digital era will address many of the problems created in the earlier agricultural and industrial eras, making society safer, more sustainable, more efficient and more inclusive.

But an infrastructure gap is keeping this broad vision from becoming a reality. Digital transformation is happening faster than expected, but only in pockets. Microsoft and its partners seek to help cities and other public organizations close those gaps with advanced technologies in the cloud, data analytics, machine learning and artificial intelligence (AI).

Microsoft’s goal is to be a trusted partner to both public and private organizations in building connected societies. This summer, an IDC survey named Microsoft the top company for trust and customer satisfaction in enabling smart-city digital transformations.

Last week at a luncheon in New York City, Microsoft and executives from three organizations participating in the digital transformation shared how they are helping to close the infrastructure gap.

Arnold Meijer, TomTom’s strategic business development manager, at the Building Digital Societies salon lunch. (Photo by John Brecher)

TomTom NV, based in Amsterdam, traditionally focused on providing consumers with personal navigation. Now, “the need for locations surpasses the need for navigation — it’s everywhere,” said Arnold Meijer, strategic business development manager. “Managing a fleet of connected devices or ordering a ride from your phone — these things weren’t possible five years ago. We’re turning to cloud connectivity and the Internet of Things as tools to keep our maps and locations up to date.”

Sensors from devices and vehicles on the road deliver the condition and usage data that highway planners, infrastructure managers and fleet operators need to make well-informed decisions.

Autonomous driving is directly in TomTom’s sights, a way to cut down on traffic accidents, one of the top 10 causes of death worldwide, and to reduce emissions through efficient routing. “You probably won’t own a vehicle 20 years from now, and the one that picks you up won’t have a driver,” Meijer said. “If you do go out driving yourself, it will be for fun.”

With all that time freed up from driving, travelers can do something else such as relax or work. Either option presents new business opportunities for companies that offer entertainment or enable productivity for a mobile client, who is almost certainly connected to the internet. “There will be new companies coming out supporting that, and I definitely foresee Microsoft and other businesses active there,” Meijer said.

“Such greatly eased personal transport may decrease the need to live close to work or school, changing settlement patterns and reducing the societal impacts of mobility. All because we can use location and cloud technology,” he added.

George Pitagorsky, CIO for the New York City Department of Education Office of School Support Services. (Photo by John Brecher)

The New York City Dept. of Education is using Microsoft technology extensively in a five-year, $25-million project that will tell parents their children’s whereabouts while the students are in transit, increase use of the cafeterias and provide access to information about school sports.

The city’s Office of Pupil Transportation provides rides to more than 600,000 students per day, with more than 9,000 buses and vehicles. For a preliminary version of the student-tracking system, the city has equipped its leased buses with GPS devices.

“When the driver turns on the GPS and signs in his bus, we can find out where it is at any time,” said George Pitagorsky, executive director and CIO for the department’s Office of School Support Services. If parents know what bus their child is on, they can more easily meet it at the stop or be sure to be there when the child is brought home.

A next step will be GPS units that don’t require driver activation. To let the system track not just the vehicle but its individual occupants, drivers will still need to register students into the GPS when they get on the bus.

“Biometrics like facial recognition that automate check-in when a student steps onto a bus — we’re most likely going to be there, but we’re not there yet,” Pitagorsky said.

Further out within the $25-million Illumination Program, a new bus-routing tool will replace systems developed more than 20 years ago, allowing the creation of more efficient routes, making course corrections to avoid problems, easily gathering vehicle-maintenance costs and identifying problem vehicles.

Other current projects include a smartphone app to advise students of upcoming meal choices in the school cafeterias, with an eye to increasing cafeteria use, enhancing students’ nutritional intake and offering students a voice in entree choices. The department has also created an app that displays all high school sports games, locations and scores.

A new customer-relations management app will let parents update their addresses and request special transport services on behalf of their children, with no more need to make a special visit to the school to do so. A mobile app will allow parents and authorized others to locate their children or bus, replacing the need for a phone call to the customer service unit. And business intelligence and data warehousing will get a uniform architecture, to replace the patchwork data, systems and tools now in place.

Christy Szoke, CMO and co-founder of Fathym. (Photo by John Brecher)

Fathym, a startup in Boulder, Colorado, is directly addressing infrastructure gaps through a rapid-innovation platform intended to harmonize disparate data and apps and facilitate Internet of Things solutions.

“Too often, cities don’t have a plan worked out and are pouring millions of dollars into one solution, which is difficult to adjust to evolving needs and often leads to inaccessible, siloed data,” said co-founder and chief marketing officer Christy Szoke. “Our philosophy is to begin with a small proof of concept, then use our platform to build out a solution that is flexible to change and allows data to be accessible from multiple apps and user types.” Fathym makes extensive use of Azure services but hides that complexity from customers, she said.

To create its WeatherCloud service, Fathym combined data from roadside weather stations and sensors with available weather models to create a road weather forecast especially for drivers and maintenance providers, predicting conditions they’ll find precisely along their route.

“We’re working with at least eight data sets, all completely different in format, time intervals and spatial resolutions,” said Fathym co-founder and CEO Matt Smith. “This is hard stuff. You can’t have simplicity on the front end without a complicated back-end system, a lot of math, and a knowledgeable group of different types of engineers helping to make sense of it all.”
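As a purely illustrative sketch of the kind of harmonization Smith describes, the Python snippet below aligns two feeds that arrive at different time intervals. The column names and sample values are invented; Fathym’s actual pipeline is not public.

    import pandas as pd

    # Invented sample data: 5-minute road-sensor readings and hourly model output.
    sensors = pd.DataFrame({
        "ts": pd.date_range("2017-01-01", periods=12, freq="5min"),
        "road_temp_c": [1.2, 1.1, 1.0, 0.9, 0.9, 0.8, 0.8, 0.7, 0.7, 0.6, 0.6, 0.5],
    })
    model = pd.DataFrame({
        "ts": pd.date_range("2017-01-01", periods=2, freq="60min"),
        "forecast_temp_c": [1.0, 0.4],
    })

    # Join each sensor reading to the most recent model run at or before it.
    merged = pd.merge_asof(sensors.sort_values("ts"), model.sort_values("ts"), on="ts")
    print(merged.head())

Real road-weather data adds spatial alignment and quality control on top of this, which is where the hard engineering Smith mentions comes in.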

Despite the ease that cloud services have brought to application development, Smith foresees a need for experts to wrangle data even 20 years from now.

“When people say, ‘the Internet of Things is here’ and ‘the robots are going to take over,’ I don’t think they have the respect they should have for how challenging it will remain to build complex apps,” Smith said.

Added Szoke, “You can’t just say ‘put an AI on it’ or ‘apply machine learning’ and expect to get useful data. You will still need creative minds, and data scientists, to understand what you’re looking at, and that will continue to be an essential industry.”

Alexa for Business sounds promising, but security a concern

Virtual assistant technology, popular in the consumer world, is migrating toward businesses with the hopes of enhancing employee productivity and collaboration. Organizations could capitalize on the familiarity of home-based virtual assistants, such as Siri and Alexa, to boost productivity in the office and launch meetings quicker.

Last week, Amazon announced Alexa for Business, a virtual assistant that connects Amazon Echo devices to the enterprise. Alexa for Business allows organizations to equip conference rooms with Echo devices that can turn on video conferencing equipment and dial into a conference via voice commands.

“Virtual assistants, such as Alexa, greatly enhance the user experience and reduce the complexity in joining meetings,” Frost & Sullivan analyst Vaishno Srinivasan said.

Personal Echo devices connected to the Alexa for Business platform can also be used for hands-free calling and messaging, scheduling meetings, managing to-do lists and finding information on business apps, such as Salesforce and Concur.

Overcoming privacy and security hurdles

Before enterprise virtual assistants like Alexa for Business can see widespread adoption, they must overcome security concerns.

“Amazon and other providers will have to do some evangelizing to demonstrate to CIOs and IT leaders that what they’re doing is not going to compromise any security,” Gartner analyst Werner Goertz said.

Srinivasan said organizations may have concerns about Alexa for Business collecting data and sharing it in a cloud environment. Amazon has started to address these concerns, particularly when connecting personal Alexa accounts and home Echo devices to a business account.

Goertz said accounts are sandboxed, so users’ personal information will not be visible to the organization. The connected accounts must also comply with enterprise authentication standards. The platform also includes administrative controls that offer shared device provisioning and management capabilities, as well as user and skills management.

Another key challenge is ensuring a virtual assistant device, like the Amazon Echo, responds to a user with information that is highly relevant and contextual, Srinivasan said.

“These devices have to be trained to enhance its intelligence to deliver context-sensitive and customized user experience,” she said.

Integrating with enterprise IT systems

End-user spending on virtual assistant devices is expected to reach $3.5 billion by 2021, up from $720 million in 2016, according to Gartner. Enterprise adoption is expected to ramp up by 2019.

Goertz said Amazon had to do a lot of work “under the hood” to enable the integrations with business apps and vendors such as Microsoft, Cisco, Polycom and BlueJeans. Deep integration with enterprise IT systems is required to enable future capabilities, such as dictating and sending emails from an Echo device, he said.

Srinivasan said Alexa for Business can extend beyond conference rooms through APIs provided by Amazon’s Alexa Skills Kit for developers.

“Thousands of developers utilize these APIs and have created ‘skills’ that enable automation and increase efficiency within enterprises,” she said.
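For a sense of what building such a skill involves, here is a minimal Python sketch of an AWS Lambda handler that answers a single custom intent. The intent name and response text are invented, and a production Alexa for Business skill would add account linking and enterprise authentication on top of this.

    def lambda_handler(event, context):
        """Minimal Alexa skill handler: reply to one invented custom intent."""
        request = event.get("request", {})
        if request.get("type") == "IntentRequest" and \
                request.get("intent", {}).get("name") == "NextMeetingIntent":
            speech = "Your next meeting starts in ten minutes in conference room four."
        else:
            speech = "Sorry, I can't help with that yet."

        # Standard Alexa Skills Kit response envelope.
        return {
            "version": "1.0",
            "response": {
                "outputSpeech": {"type": "PlainText", "text": speech},
                "shouldEndSession": True,
            },
        }

In an enterprise deployment, the handler would typically call out to calendaring or CRM systems before composing the spoken response.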

Taking use cases beyond productivity tools

While enterprise virtual assistants could be deployed in any type of company looking to boost productivity, Alexa for Business has already seen deployments in industries such as hospitality.

Wynn Las Vegas is equipping its rooms with Amazon Echo devices, which are managed with Alexa for Business, Goertz said. Guests of the hotel chain can use voice commands, called skills, to turn on the lights, close the blinds or order room service.

Another industry that could see adoption of virtual assistants is healthcare. Currently, Alexa for Business supports audio-only devices. But the platform could potentially support devices with a camera and display that could add video conferencing and telemedicine capabilities, Goertz said.

Alexa for Business also has the potential to disrupt the huddle room market by turning Echo devices into stand-alone conference phones, Srinivasan said.

Amazon Echo prices range from $50 to $200, and the most recent generation of devices offers improved audio quality. The built-in virtual assistant, combined with Alexa for Business management and the developer ecosystem, fills a gap in the conference phone market, she wrote in a blog post.

“Amazon is well-positioned to grab this opportunity much ahead of Microsoft Cortana, Google Assistant and Apple’s Siri,” she said.

Explosion in unstructured data storage drives modernization

Digital transformation is the key IT trend driving enterprise data center modernization. Businesses today rapidly deploy web-scale applications, file sharing services, online content repositories, sensors for internet of things implementations and big data analytics. While these digital advancements facilitate new insights, streamline processes and enable better collaboration, they also increase unstructured data at an alarming rate.

Managing unstructured data and its massive growth can quickly strain legacy file storage systems, which are poorly suited to vast amounts of this data. Taneja Group investigated the most common of these file storage limitations in a recent survey. The study found the top challenges IT faces with traditional file storage are lack of flexibility, poor storage utilization, inability to scale to petabyte levels and failure to support distributed data. These obstacles often lead to high storage costs, complex storage management and limited flexibility in unstructured data storage.

So how are companies addressing the unstructured data management challenge? As with all things IT, it’s essential to have the right architecture. For unstructured data storage, this means a highly scalable, resilient, flexible, economical and accessible secondary storage environment.

Let’s take a closer look at modern unstructured data storage requirements and examine why distributed file systems and a scale-out object storage design, or scale-out storage, are becoming a key part of modern secondary storage management.

Scalability and resiliency

Given the huge amounts of unstructured data, scalability is undeniably the most critical aspect of modern secondary storage. This is where scale-out storage shines. It’s ideal for managing huge amounts of unstructured data because it easily scales to hundreds of petabytes simply by adding storage nodes. This inherent advantage over scale-up file storage appliances that become bottlenecked by single or dual controllers has prompted several data protection vendors to offer scale-out secondary storage platforms. Notable vendors with scale-out secondary storage offerings are Cohesity, Rubrik and — most recently — Commvault.

Attaining storage resiliency is another important requirement of modern secondary storage. Two key factors are required to achieve storage resiliency. The first is high fault tolerance. Scale-out storage is ideal in this area because it uses space-efficient erasure coding and flexible replication policies to tolerate site, multiple node and disk failures.
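A quick back-of-the-envelope comparison shows why erasure coding is described as space-efficient. The 10+4 scheme below is a hypothetical example; real products choose their own data-to-parity ratios.

    # Raw capacity needed to hold 100 TB of usable data (illustrative numbers only).
    usable_tb = 100

    # Three-way replication: every byte is stored three times.
    replication_raw_tb = usable_tb * 3          # 300 TB, survives the loss of two copies

    # Hypothetical 10+4 erasure coding: 10 data fragments plus 4 parity fragments.
    ec_raw_tb = usable_tb * (10 + 4) / 10       # 140 TB, survives the loss of any 4 fragments

    print(replication_raw_tb, ec_raw_tb)        # 300 140.0

In this illustration, erasure coding tolerates more simultaneous failures while consuming less than half the raw capacity of three-way replication.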

Rapid data recovery is the second key factor for storage resiliency. For near-instantaneous recovery times, IT managers should look for secondary storage products that provision clones from backup snapshots to recover applications in minutes or even seconds. Secondary storage products should allow administrators to run recovered applications directly on secondary storage until data is copied back to primary storage and be able to orchestrate the recovery of multi-tier applications.

Flexibility and cost

To handle multiple, unstructured data storage use cases, modern secondary storage must also be flexible. Central to flexibility is multiprotocol support. Scale-out storage should support both file and object protocols, such as NFS for Linux, SMB or CIFS for Windows and Amazon Simple Storage Service for web-scale applications. True system flexibility also requires modularity, or composable architecture, which enables multidimensional scalability and I/O flexibility. Admins must be able to quickly vary computing, network and storage resources to accommodate IOPS-, throughput- and capacity-intensive workloads.

Good economics is another requirement for modern secondary storage. Scale-out storage reduces hardware costs by enabling software-defined storage that uses standard, off-the-shelf servers. It’s also simple to maintain. Administrators can easily upgrade or replace computing nodes without having to migrate data among systems, reducing administration time and operating costs. Scale-out secondary storage also provides the option to store data in cost-effective public cloud services, such as Amazon Web Services, Google Cloud and Microsoft Azure.

Moreover, scale-out storage reduces administration time by eliminating storage silos and the rigid, hierarchical structure used in file storage appliances. It instead places all data in a flat address space or single storage pool. Scale-out secondary storage also provides built-in metadata file search capabilities that help users quickly locate the data they need.

Some vendors, such as Cohesity, offer full-text search that facilitates compliance activities by letting companies quickly find files containing sensitive data, such as passwords and Social Security numbers. Add to this support for geographically distributed environments, and it’s easy to see why scale-out storage is essential for cost-effectively managing large-scale storage environments.

Data management

The final important ingredient of modern secondary storage environments is providing easy access to services required to manage secondary data. As the amount of unstructured data grows, IT can make things easier for storage administrators and improve organizational agility by giving application owners self-service tools that automate the full data lifecycle. This means providing a portal or marketplace and predefined service-level agreement templates that establish the proper data storage parameters. These parameters include recovery points, retention periods and workload placement based on a company’s standard data policies. Secondary storage should also integrate with database management tools, such as Oracle Recovery Manager.

Clearly, distributed file systems and scale-out object storage architectures are a key part of modern secondary storage offerings. Secondary storage product portfolios are evolving to address the immense unstructured data storage needs of organizations in the digital era. So stay tuned: I expect nearly all major data protection vendors to introduce scale-out secondary storage products over the next 12 to 18 months.

Autotask Community Live 2017: MSP tools meet transformation

FORT LAUDERDALE, Fla. — The technology that managed service providers use to run their businesses can also help customers reinvent their operations.

That’s one takeaway from this week’s Autotask Community Live 2017 conference, the IT business management software vendor’s annual MSP meetup. While automated systems enable MSPs to deliver services efficiently and profitably, those tools can also free up time to deal with customers’ digital transformation initiatives, Autotask executives suggested.

Mark Cattini, president and CEO of Autotask, said software-driven digital transformation powers enterprises from Amazon to Uber, but he noted small and medium-sized businesses — a core target market for many channel partners — are also on notice to recast themselves to remain relevant. But MSPs may need to change to help clients transform.

“Many of you are going to have to think about being a business technologist,” Cattini told MSP attendees at the conference, which wraps up Sept. 19.

Mark Cattini, president and CEO, Autotask

He said customers are demanding digital transformation and if MSPs don’t deliver, those customers will turn elsewhere for services. Automation, however, can pave the way for MSPs to offer the more forward-looking services. In Autotask’s case, the company provides tools such as professional services automation (PSA), remote monitoring and management (RMM), file sync and share, and file backup.

Individual products, such as file sync and share, can directly contribute to a customer’s digital transformation. But the Autotask product line, as a whole, provides “a broad umbrella” that lets service providers automate and manage tactical, manual chores, so their personnel can play a more strategic role with customers, said Pat Burns, vice president of product management and strategy at the company.

Time-consuming management tasks can hinder service providers aspiring to offer higher value-added services. Indeed, Autotask’s annual IT service provider survey, dubbed Metrics that Matter, revealed many companies waste “up to 10 billable hours each week on manual processes that can be easily automated.” The survey, released at Autotask Community Live 2017, identified entering data into multiple systems and an inability to accurately capture billable hours among the top culprits. More than 1,030 service provider respondents participated in the survey.

Putting it all together

Product unification was another key theme at the conference, as it was at the 2016 event when Autotask unveiled Autotask Endpoint Backup as the fourth component of its product suite. The backup product lends channel partners the ability to offer backup services that work with other Autotask products, such as its PSA offering.

A year later, many of the company’s service provider customers have gone beyond one-product implementations. Cattini said more than half of MSPs reported using two or more Autotask products.

Users of multiple products have additional integrations on the horizon. Burns said the company’s approach is to pursue database- and interface-level integration, noting the first phase of unification focuses on PSA. “That’s because it is the most foundational piece of the platform,” he noted.

Integration initiatives, meanwhile, extend beyond the Autotask product set. Cattini said the company’s PSA software offers a significant footprint, but added there are areas outside the scope of a PSA that the company doesn’t cover. For those, Autotask continues to invest in integrations, he said, noting a total of 160 partner-built integrations.

“It’s about product adoption,” Cattini said. “We need to make it easier for you to adopt the products.”

Project UI revamp in the works

As for individual products, Autotask customers can expect the next major releases of the company’s PSA and RMM (Autotask Endpoint Management) products in early 2018, Burns said. File sync and share (Autotask Workplace) and Autotask Endpoint Backup will be up for major releases prior to the PSA and RMM updates, he said.

In another product move outlined at Autotask Community Live 2017, Autotask PSA’s project task component will get a new user interface (UI) along the lines of the much-anticipated revised ticket UI. Burns said the latest UI effort will ship considerably sooner because Autotask’s engineers will be able to take advantage of reusable frameworks and UI controls from the earlier ticket project.

Burns said the reusable components “will save a lot of time.”

MSPs can look for that UI development in 2018.

Salesforce Chatter comes to Windows 10 – The Fire Hose

The Salesforce Chatter app enables cross-company cooperation that helps businesses drive productivity, accelerate innovation and share knowledge. Fast.

Salesforce Chatter is a forum for insights; a means to motivate and engage employees; and an easy way to exchange files, data and ideas. With it, you can track teams and projects wherever they are – in the office, on the road, at a conference, etc.

Dozens of functions and decisions can be executed right in the app, from conversations and approvals to edits and notifications, without waiting for a desk or a meeting.

So bring your company together, then move forward with Salesforce Chatter, free to download from the Windows Store.

Also, keep up with what’s hot, new and trending in the Windows Store on Twitter and Facebook.

Athima Chansanchai
Microsoft News Center Staff

Tags: Apps, Salesforce, Salesforce Chatter, Windows 10, Windows Store

Azure IoT Hub Device Provisioning Service is now in public preview – Internet of Things

Setting up and managing Internet of Things (IoT) devices can be a challenge of the first order for many businesses. That’s because provisioning entails a lot of manual work, technical know-how, and staff resources. And certain security requirements, such as registering devices with the IoT hub, can further complicate provisioning.

During the initial implementation, for instance, businesses have to create unique device identities that are registered to the IoT hub and install individual device connection credentials, which enable revocation of access in the event of a compromise. IT staff may also want to maintain an enrollment list that controls which devices are allowed to provision automatically.

Wouldn’t it be great if there was a secure, automated way to remotely deploy and configure devices during registration to the IoT hub—and throughout their lifecycles? With Microsoft’s IoT Hub Device Provisioning Service (DPS), now in public preview, you can.

In a post on the Azure blog, Sam George explains how the IoT Hub Device Provisioning Service can provide zero-touch provisioning that eliminates configuration and provisioning hassles when onboarding IoT devices that connect to Azure services. This allows businesses to quickly and accurately provision millions of devices in a secure and scalable manner. In fact, IoT Hub Device Provisioning Service simplifies device lifecycle management through features that enable secure device management and device reprovisioning. Next year, we plan to add support for ownership transfer and end-of-life management.
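To illustrate what zero-touch provisioning looks like from the device side, here is a minimal sketch using the azure-iot-device Python SDK (which postdates this announcement). The ID scope, registration ID and key are placeholders, and symmetric keys are only one of the attestation methods DPS supports.

    from azure.iot.device import ProvisioningDeviceClient

    # Placeholder values; in practice these come from the DPS enrollment.
    provisioning_client = ProvisioningDeviceClient.create_from_symmetric_key(
        provisioning_host="global.azure-devices-provisioning.net",
        registration_id="my-device-001",
        id_scope="0ne00000000",
        symmetric_key="<device-symmetric-key>",
    )

    result = provisioning_client.register()
    if result.status == "assigned":
        # DPS has registered the device and tells it which IoT hub to talk to.
        print(result.registration_state.assigned_hub)
        print(result.registration_state.device_id)

On success, DPS returns the IoT hub the device has been assigned to, and the device can connect to that hub directly from then on.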

DPS is now available in the East US, West Europe and Southeast Asia regions. To learn more about how Azure IoT Hub Device Provisioning Service can take the pain out of deploying and managing an IoT solution in a secure, reliable way, read our blog post announcing the public preview. And for technical details, check out Microsoft’s DPS documentation center.

Tags: Announcement, Azure IoT Hub, Device Provisioning Service

Close ranks with key Office 365 security features

Businesses receive enormous convenience and cost control benefits from Office 365, but a move to the cloud also increases the company’s attack surface. This heightened exposure makes it imperative that administrators learn how best to implement the Office 365 security features.

Don’t sit back and expect adequate protection with the default security configurations in Office 365. Admins must tailor Office 365 security features to shield data on the platform from outside threats.

How does Office 365 affect business security?

Modern businesses cannot function as islands, surrounded by antimalware and antivirus software, a secure perimeter, and a demilitarized zone that lets external users access certain servers.

An enterprise that depends on Office 365 requires a more intelligent security approach that extends from the service provider to the users, who work on many different devices. Administrators need to discover and hold sensitive information, ensure compliance, prevent data loss and then identify and respond to potentially malicious traffic or use patterns quickly.

Advanced Office 365 security features include multifactor authentication; encryption to protect data at rest and in flight; and data loss prevention to stop users from sending sensitive material over email or saving it to unauthorized storage devices.

Office 365 meets the requirements for compliance certifications, including those imposed by the Health Insurance Portability and Accountability Act, the Federal Risk and Authorization Management Program and the International Organization for Standardization/International Electrotechnical Commission 27001.

Suspicious activity afoot?

Administrators can manage and audit Office 365 security features with remote PowerShell, but the Office 365 Security & Compliance Center provides a GUI tool to enforce corporate policy and monitor potential threats. The portal provides seven major pages related to security and compliance:

  • Alerts page: This section warns you when a user violates policies that IT creates. Administrators can also view alerts, understand how each was generated and take remedial action. Office 365 includes a series of default alerts and will inform you when a user receives administrative privileges and when it detects malware or unusual file activity.
  • Permissions page: Administrators can grant users various permissions in compliance-related areas, such as device management and data retention. Elevated users can perform only the tasks assigned by the administrator. IT can alter or rescind permissions as business needs change.
  • Threat Management page: Dashboard, Threat explorer and Incidents tools let administrators oversee risks detected within Office 365.
  • Data Governance page: This area enables admins to import data into Office 365; archive and retain important messages and attachments as part of content lifecycle management; and establish supervision policies that review both inter- and intraoffice messages for inappropriate or sensitive content.
  • Search and Investigation page: This allows administrators to locate messages and search audit logs. For example, use the content search to comb mailboxes, folders, SharePoint Online sites and OneDrive for Business content in the company’s Office 365 subscription. Export results to another computer for further examination. Use audit logging to view user and other administrative activities involving files, folders, sharing, SharePoint, Azure Active Directory, Sway and PowerBI.
  • Reports page: This enables administrators to follow application use, identify suspicious app activity and provide notifications and alerts about unusual app use. The page generates reports that show how the organization’s employees use Office 365.
  • Service Assurance page: This page provides details about Office 365 compliance efforts. These include Microsoft security practices for customer data stored in the messaging platform; third-party audit reports of security; and security, privacy and compliance controls used by Office 365.

Wasabi Technologies takes on Amazon S3 on price, performance

Daring businesses to switch from Amazon to a company they’ve never heard of for cloud storage is a bold challenge. But Wasabi Technologies’ founders were so encouraged by its product launch that they raised another $10.8 million to fund a second data center.

Wasabi CEO David Friend said he expected the free trial of 1 TB for 30 days to attract a few dozen prospects when it became available on May 3. When more than 500 signed up, the Boston-based startup had to waitlist new subscribers until the week of May 17 to keep up with the server capacity demand.

Friend said about 80 users have converted to paying customers, and Wasabi boosted the available storage capacity at its leased data center space in Ashburn, Va., from about 7 PB to more than 20 PB to stay 90 days ahead of demand.

Those customers are likely lured mostly by Wasabi’s claims that its cloud storage is significantly cheaper and faster than Amazon’s Simple Storage Service (S3). They may also find it encouraging that Wasabi founders Friend and CTO Jeff Flowers also started Carbonite, an early successful cloud storage player for consumers and small and medium-sized businesses.

Wasabi CEO David Friend

The founders also likely learned a few things from Flowers’ post-Carbonite efforts to build on-premises cold data storage for financial and security firms and service providers. Storiant, initially known as SageCloud, raised $14.8 million in equity and debt between August 2012 and May 2015. But Storiant shut down operations in November 2015 and sold off its intellectual property for a mere $90,000.

“They were selling hardware systems and ended up competing with EMC, Dell and HP, which I thought was a mistake,” said Friend, who was CEO and later executive chairman at Carbonite, as well as a director on Storiant’s board.

Wasabi Technologies raises $8.2 million in 2016

In 2016, Friend, Flowers and Storiant’s founding engineers shifted their focus back to public cloud storage at BlueArchive, now called Wasabi Technologies. The startup raised $8.2 million over two rounds in 2016 to get started.

Wasabi added $10.8 million through a convertible note that will become equity when the company decides to raise a Series B round of funding. That will help finance the West Coast expansion to a colocation facility in San Jose, Calif., or Seattle, according to Friend. That would allow Wasabi to add automatic replication across multiple geographies for compliance, and to mitigate the risk of having all customer data in a single data center. Wasabi is also investigating expansion into Europe, a prospect that Friend said he hadn’t planned to pursue until next year.

“I’m a cautious, conservative kind of guy, and I don’t like just spending money without knowing what I’m going to get for it. But at this point in time, the market is almost limitless for this,” Friend said. “Every day, new opportunities show up at the company for amounts of storage that are more than we had in our whole second-year projection. If any of these big deals start to come in our direction, it’s going to be pretty impressive.”

Speed ‘blows people away’

Friend said the speed at which Wasabi’s software can read and write data is “what really blows people away.” It offers performance that he said is generally achievable only at higher cost with on-premises data center hardware. He said the Wasabi software takes control of disk write heads and packs data onto storage drives more efficiently and at higher speed than Linux or Windows operating systems can.

“We get our speed by parallelizing. The speed comes from breaking the data up and reading it and writing it simultaneously to many drives at the same time,” Friend said. He added that the data is distributed with sufficient redundancy to enable 11 nines of data durability, as Amazon does.

Friend said Wasabi keeps costs low by buying directly from hard disk drive (HDD) manufacturers at about the same price as Amazon does in the low-margin HDD business. He said Wasabi’s technology also enables longer disk life.

Wasabi charges a flat 0.39 cents per GB per month for storage and 4 cents per GB for egress. Competing public clouds vary prices based on the amount of data stored or transferred, the type of storage service — such as cold or nearline — and the requests made, such as puts and gets.

“Our vision is that cloud storage is going to become a commodity that’s out there for everybody to use. You don’t need three plugs in the wall for good electricity, so-so electricity and crappy but cheap electricity. You don’t need all these different kinds of storage as well,” Friend said.

Wasabi vs. Amazon S3 and Glacier

Friend said he expects most potential customers to compare Wasabi to Amazon S3. But one trial participant, Phoenix-based WestStar Multimedia Entertainment Inc., pitted Wasabi against Amazon’s colder, cheaper Glacier, Backblaze and Google Coldline in addition to Amazon S3, Microsoft Azure Backup and Rackspace.

WestStar vice president of information technology Chris Wojno said his company had a pressing need to back up more than 26 TB of video with an estimated data growth rate of 2.7 TB per month. WestStar produces The Kim Komando Show, a syndicated digital lifestyle radio program, and operates a multimedia website.

Wojno calculated costs based on storing 39 TB of data and found Wasabi had the lowest per-month price per GB. If he chose Wasabi, his per-month cost would be $3,747.90 less than Rackspace, $1,590.80 less than Azure Backup, and $744.90 less than Amazon S3. The price differential was far less over Google Coldline ($120.90), Backblaze ($42.90) and Glacier ($3.90), according to his spreadsheet analysis.
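The arithmetic behind those monthly gaps is straightforward. The sketch below assumes roughly 39,000 GB and the then-current S3 standard rate of about $0.023 per GB per month, so treat the S3 figure as an approximation.

    capacity_gb = 39_000            # roughly 39 TB of backed-up video

    wasabi_rate = 0.0039            # Wasabi's flat rate: 0.39 cents per GB per month
    s3_rate = 0.023                 # assumed S3 standard rate at the time, per GB per month

    wasabi_monthly = capacity_gb * wasabi_rate   # $152.10
    s3_monthly = capacity_gb * s3_rate           # $897.00

    print(s3_monthly - wasabi_monthly)           # $744.90, the gap cited above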

Wojno also weighed the data recovery cost for 39 TB of backed-up video in the event of a disaster. Backblaze was least expensive at $780, compared to $1,560 for Wasabi and $3,900 for Glacier. But Wojno figured Backblaze’s higher per-month storage fee, compared with Wasabi, would negate the savings.

Based on Wojno’s calculations, WestStar selected Wasabi Technologies for cloud storage. Wojno admitted he would have been suspicious of the new company had he not been familiar with Friend through his work at Carbonite, a former sponsor of the radio show. Komando, an owner of WestStar, last month invested in Wasabi after her company became a paying customer.

Wojno said WestStar spent about two weeks backing up 26.5 TB of video over a 200 Mbps connection with backup software from Wasabi partner CloudBerry Lab. He noted that WestStar received a complimentary CloudBerry license for his participation in a webinar with the vendors.

Friend said migrating data through transfer to a storage appliance, such as Amazon Web Services (AWS) Snowball, and transport by truck to the cloud storage provider is “an idea whose time has come and gone.”

“It’s much cheaper to go and put in a 10 Gigabit [Ethernet] pipe for a month, move your data and then shut it off, assuming you’re in a metropolitan area where such things are available,” Friend said.

AWS remains a formidable Goliath

Stu Miniman, a senior analyst at Wikibon, said Wasabi faces a stiff challenge against Amazon, the clear No. 1 cloud storage player. He said Amazon could lower costs as it has done in the past, or improve performance to respond to any perceived threat. Plus, he hasn’t heard many public cloud users complaining that storage is a problem.

“Has Wasabi built a better mousetrap when people don’t realize they have a mouse problem? Or, is this a real issue?” Miniman said.

Miniman said users might look to the free 30-day trial for new applications. He said the question is how long they’ll stick with the service over the long haul, especially if the initial application runs for only a limited time.

Opportunities with AWS customers

Friend said Wasabi Technologies is going after AWS customers who want to save money on their long-term data storage or keep a second copy of their data with a different cloud provider. Wasabi provides a free tool that customers can install in Amazon Elastic Compute Cloud (EC2) to copy their S3-stored data to Wasabi automatically.
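Because Wasabi exposes an S3-compatible API, such a copy can also be scripted directly. The sketch below uses boto3 with invented bucket names and credentials, assumes Wasabi’s s3.wasabisys.com endpoint, and is not the vendor’s own migration tool.

    import boto3

    # Source: regular AWS S3. Destination: Wasabi via its S3-compatible endpoint.
    src = boto3.client("s3")
    dst = boto3.client(
        "s3",
        endpoint_url="https://s3.wasabisys.com",     # assumed Wasabi endpoint
        aws_access_key_id="<wasabi-access-key>",
        aws_secret_access_key="<wasabi-secret-key>",
    )

    # Copy every object from an AWS bucket into a Wasabi bucket (names are placeholders).
    paginator = src.get_paginator("list_objects_v2")
    for page in paginator.paginate(Bucket="my-aws-bucket"):
        for obj in page.get("Contents", []):
            body = src.get_object(Bucket="my-aws-bucket", Key=obj["Key"])["Body"].read()
            dst.put_object(Bucket="my-wasabi-bucket", Key=obj["Key"], Body=body)

This naive loop reads each object fully into memory, which is fine for a sketch; large video files would call for multipart or streaming copies.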

Friend said, thanks to Wasabi’s S3 compatibility, organizations using EC2 to host applications could leave the applications there and move data to Wasabi’s data center via Amazon’s Direct Connect, rather than store it in Amazon S3. He said Wasabi does not compete against Amazon’s Elastic Block Storage, which he said is designed for fast-moving data that doesn’t stay in memory long.

Friend said Wasabi uses immutable buckets to protect data against accidental deletion, sabotage, viruses, malware, ransomware or other threats. Customers can specify the length of time they want a data bucket to be immutable.

Stand up infrastructure on a budget with Azure DevTest Labs

Many businesses expect IT teams to do more without giving them more money — and, sometimes, cutting an already small budget. But new projects mean test and development — an expensive endeavor in the data center. One way to alleviate this financial strain is to move those test and development workloads into the cloud.


Running a test environment in the data center is expensive, with high costs connected to hardware, software, power and cooling — not to mention all the time and effort IT spends keeping everything running and properly updated. Instead, administrators can turn to the cloud to develop and test applications in Microsoft’s Azure DevTest Labs. This enables companies to trade in hardware expenses and switch to a pay-per-use model. Other features in the service, such as the auto shutdown for VMs, can further control costs.

In this first part of a two-part series, we explain the merits of using a test bed in Azure and configuring a VM for lab use. In part two, we explore ways to manage the VM in DevTest Labs, as well as benefits gained when a workload moves out of the data center.

What is Azure DevTest Labs?

Many businesses maintain an on-premises test environment that emulates the production environment, which lets development teams test code before it is pushed into production. This also enables other groups within the application development organization to perform usability and integration testing.

But a test environment can have slight variations from the production side. It might not have key updates or patches, or it could run on different hardware or software. These disparities can cause the application to fail when it hits the production environment. Azure DevTest Labs addresses these issues, enabling admins to build an infrastructure that is disposable and adaptable. If the test environment requires drastic changes, the team can remove it and build a new one with minimal effort. In contrast, a typical on-premises production setting generally cannot be offline for very long; the investment in hardware, software and other infrastructure requires lengthy deliberation before IT makes any changes.

The team can turn off DevTest Labs when the test period ends so that resources go away, and there are no costs until the service is needed again.

Creating another lab scenario to test a new feature removes the effort to twist and tweak an existing test environment to bring necessary components online, which can cause problems with other testing scenarios. An on-premises test environment requires sizable expense and effort to maintain and keep in sync with production. In contrast, admins can quickly configure a test setting in Azure DevTest Labs.

What are the benefits of Azure DevTest Labs?

The most noticeable benefits to DevTest Labs include:

  • Pay as you go pricing: The lab only incurs cost when a VM runs. If the VM is deallocated, there are no charges.
  • Specified shutdown: IT staff can configure DevTest Labs to shut down at a certain time and automatically disconnect users. Turning the service off — for example, shutting it down between 5 p.m. and 8 a.m. — saves money.
  • Role-based access: IT assigns certain access rights within the lab to ensure specific users only have access to the items they need.

How do I get started with Azure DevTest Labs?

To set up Azure DevTest Labs, you’ll need an Azure subscription. Sign up for a 30-day trial from the Microsoft Azure site. Go to the Azure Resource Management portal, and add the DevTest Labs configuration from the Azure Marketplace with these steps:

  • Select the New button at the top of the left column in the Azure portal. This will change the navigation pane to list available categories of services and the main blade to a blank screen. As you make selections, this will populate with related information.
  • In the search box, enter DevTest Labs, and press Enter.
  • In the blade that displays the search results, click on DevTest Labs. This will display more information about DevTest Labs and a Create button.
Figure 1. Find the option to add the Azure DevTest Labs to your subscription from the Azure Marketplace.

Click the Create button. Azure will prompt you to enter configuration settings for the instance, such as:

  • The name of the lab: The text box shows a green checkmark if the value is acceptable.
  • The Azure subscription to use
  • The region where the DevTest Lab will reside: Pick a region closest to user(s) for better performance.
  • If auto shutdown should be enabled: This is enabled by default; all VMs in the lab will shut down at a specified time.

Enter values for these options; items marked with a star are required. Click Create, and Azure will provision the DevTest Labs instance. This typically takes a few minutes to gather the background services and objects needed to build the lab. Click the bell icon in the header area of the Azure portal screen to see the progress for this deployment.

Figure 2. Click the bell-shaped icon in the Azure portal to check the provisioning progress of the DevTest Labs instance.

Once Azure provisions the lab, you can add objects and resources to it. Each lab gets a resource group within Azure to keep all the items packaged. The resource group takes the name of the lab with some random characters at the end. This keeps the resource group name unique and ensures the admin manages the lab’s resources through DevTest Labs.

To find the lab, select the option for DevTest Labs from the left navigation pane. For new users, it might be listed under More Services at the bottom. When the lab is located, scroll down to the Developer Tools section, and click the star icon next to the service name to pin DevTest Labs to the main navigation list.

Click DevTest Labs in the navigation list to open the DevTest Labs blade and list all the labs. Click on the name of the new lab: techTarget — for the purposes of this article.

Figure 3. After Azure provisions the lab, the administrator can add compute and other resources.

This opens the blade for that lab. The administrator can populate the lab with compute and other resources. New users should check the Getting Started section to familiarize themselves with the service.

What components can we put in the lab?

DevTest Labs creates sandbox environments to test applications in development or to see how a feature in Windows Server performs before moving it to a production environment.

Administrators can add components to each lab, including:

  • VMs: Azure uses VMs from the Marketplace or uploaded images.
  • Claimable VMs: The IT department provides a pool of VMs for lab users to select.
  • Data disks: You can attach these disks to VMs to store data within a lab.
  • Formulas: Reusable sets of default values, such as a base image and VM settings, that speed up the creation of new lab VMs.
  • Secrets: These are values, such as passwords or keys, the lab needs. These reside in a secure key vault within the Azure subscription.

Administrators can modify configuration values and policies related to the lab, change the auto startup and auto shutdown times and specify machine sizes that users can create. To find more information on these items, select My virtual machines under MY LAB in the navigation list. Click Add at the top of the blade to insert a VM.

Figure 4. Create a new VM with the Add button in the lab.

For the purposes of this article, select Windows Server 2016 Datacenter as the VM base image. The next blade shows the following items that are required to build the VM:

  • VM name: A unique name for the VM.
  • Username: The admin username for this VM — it cannot be administrator.
  • Disk type: Options include solid-state drive or hard disk drive — SSD provides better performance, but will raise the cost of operations slightly.
  • VM size: The number of CPU cores and amount of RAM — after selecting the one you want, click Select.
Figure 5. Make selections to build the VM for the lab. The blades show the options and prices based on the size of the VM.

You can also select artifacts to install when the VM is created, and configure advanced options for the resource. Find more information about artifacts at Microsoft’s Azure documentation site.

For labs with more complex needs, advanced settings let administrators adjust the VM’s networking settings and set the VM as claimable.

When you finish the lab VM configuration, click Create. Azure will do its work, which will take some time to complete.
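For teams that prefer scripting over the portal, a lab VM like the one configured above can also be created by calling the Azure Resource Manager REST API. The Python sketch below is an approximation: the subscription, resource group, image reference, property names and api-version are assumptions to verify against the current Microsoft.DevTestLab REST reference before use.

    import requests
    from azure.identity import DefaultAzureCredential

    # Placeholder identifiers; api-version and property names below are assumptions.
    sub, rg, lab, vm = "<subscription-id>", "<lab-resource-group>", "techTarget", "testvm01"

    token = DefaultAzureCredential().get_token("https://management.azure.com/.default").token
    url = ("https://management.azure.com/subscriptions/{}/resourceGroups/{}"
           "/providers/Microsoft.DevTestLab/labs/{}/virtualmachines/{}").format(sub, rg, lab, vm)

    body = {
        "location": "eastus",
        "properties": {
            "size": "Standard_DS2_v2",
            "userName": "labadmin",              # cannot be "administrator", per above
            "password": "<strong-password>",
            "galleryImageReference": {
                "publisher": "MicrosoftWindowsServer",
                "offer": "WindowsServer",
                "sku": "2016-Datacenter",
                "osType": "Windows",
                "version": "latest",
            },
        },
    }

    resp = requests.put(url, params={"api-version": "2018-09-15"},
                        headers={"Authorization": "Bearer " + token}, json=body)
    resp.raise_for_status()
    print(resp.status_code)   # 200 or 201 once Azure accepts the request

Either path, portal or script, ends with the same lab VM; scripting simply makes the step repeatable for future test environments.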

In the next installment of this article, we will look at VM management in Azure DevTest Labs and different testing scenarios within the service.

Next Steps

A Hyper-V lab can help with certification studies

Explore OpenStack’s capabilities with a virtual home lab

Keep a test VM from affecting the production environment
