Tag Archives: Cloud

On cloud-first policy, not all companies are equal

Cloud guru Daryl Plummer had some weighty numbers to share with the CIOs and other IT folks at Gartner’s Symposium/ITxpo in Orlando, Fla., earlier this month: By the end of the year, 77% of organizations will have adopted some type of cloud computing. And by 2019, nearly 90% will be using cloud.

“Cloud-first is the mantra for the day,” said Plummer, a Gartner fellow and research chief. He was referring to the policy many organizations are adopting that requires them to choose cloud computing for new tech initiatives unless there’s some reason, regulatory or otherwise, not to. “Maybe next it’ll be cloud-only. But today we’re still in a cloud-first mode.”

Well, some organizations are — and they’re blazing a path in the cloud, seeking and even reaping vaunted benefits such as lower overhead costs, near-zero maintenance, and the ability to quickly experiment and scale up or down in response to even the ficklest business demand.

Judging by IT leaders attending the conference, though, organizations today vary widely on whether they’ve adopted a cloud-first policy, the types of cloud services they’ve taken on, the pace at which they’re adopting them and how they’re rolling them out. And some are — to use Plummer’s term — lingering in the “cloud-maybe” phase.

Gartner analyst Daryl Plummer discusses the state of cloud computing in 2017 at Gartner Symposium/ITxpo in Orlando, Fla., on Oct. 2.

‘An enabler’

PBF Energy, for example, a petroleum refining company in Parsippany, N.J., is in a “very conservative industry,” said director of applications Yael Urman, and has just started moving pieces of its IT and business operations to the cloud.

I think cloud is not a target, but it’s an enabler. My target is to bring value or to add to the top line and the bottom line.
Yael Urman, director of applications, PBF Energy

The company uses a cloud infrastructure deployment by Amazon Web Services as an archive and moved to Workday’s cloud application for human resources a few months ago. Beyond that, business applications are on physical servers, and as yet there is no strategy for moving to cloud, Urman said.

“Previously we liked everything on prem, and now we understand that there are more options and we should consider them,” she said.

Indeed, PBF Energy is mulling a cloud-first policy. Now, for example, the company uses SAP’s enterprise resource planning system — traditional, server-bound software. But when the time comes to renew the hardware that software sits on, it may consider a switch to a cloud infrastructure service and building ERP on that.

Changing the company’s IT focus to cloud would suit PBF Energy, Urman said, because the company grows by acquisition, so computing power could be easily ramped up as refineries are added.

And cloud makes business sense, she said. Today, PBF Energy connects sensors on field equipment to the internet of things for information on machine wear and tear. In the future, technologies such as artificial intelligence, which requires the storage capacity and processing power cloud can provide, could be layered on top to refine data collection efforts and make smarter parts-replacement decisions, potentially saving hundreds of millions of dollars. Still, cloud for cloud’s sake is the wrong way to go, Urman argued.

“I think cloud is not a target, but it’s an enabler,” she said. “My target is to bring value or to add to the top line and the bottom line.”

Cloud exploration

Bill Schneider is also from the slow-to-cloud oil and gas industry. And like Urman, the vice president of IT at Pioneer Energy Services in San Antonio is eyeing cloud technologies to someday house and crunch vast amounts of sensor data — and tap new business value.

But the company, which offers drilling and well services, has a way to go. It has a private cloud implementation and is piloting applications in the public cloud “to get a better understanding of how we could move some of our critical infrastructure there.” As for current critical company data or applications in the public cloud: “We haven’t taken that leap yet,” Schneider said.

Pioneer Energy Services also hasn’t yet tapped the cloud for a popular use: trying new things. But Schneider is not blind to the wisdom of doing innovation in the cloud over the alternative: buying a server, setting up networking and building software.

“That’s where we would put it, honestly,” he said. “If we were to develop something, especially in the digital space, I think it has to live on the cloud.”

Cloud-first policy in motion

BECU, the fourth largest credit union in the U.S., is farther along on the journey to cloud. Julie Wesche, vice president of technology operations at the Tukwila, Wash., financial institution, is working to “get out of the data center business.”

That entails examining the whole range of as-a-service options — software, infrastructure and developer-centric platform — and determining where they might fit in BECU’s IT operations, Wesche said. The company is also looking at the possibility of using colocation facilities, or colos — data centers that rent physical space for servers, power and cooling — for other parts of its IT environment.

So far, Wesche sees a lot of room for more SaaS — human resources and recruiting are good matches, for example. Applications requiring colo cloistering are fewer. Those include systems with information that can’t move to the cloud because of regulatory compliance, because the information is too sensitive or because the technology is not compatible with a cloud provider’s.

“We’re having a lot of conversations with vendors,” Wesche said. The credit union has persuaded some vendors to develop applications for cloud. Legacy systems, though, are a no-go. “Some of them, because of what they are, the vendors will not support them.”

BECU is 26% in the cloud and more than 90% virtualized — and that’s helping it to become a cloud-centric, post-data-center organization, Wesche said. Could it ever hit the all-cloud mark or close to it?

“Out there, out there. I think we’ll end up with a majority in the cloud over the next 18 months,” she said. “But yes, I think it will be the exception that would be in a colo space.”

Public cloud security concerns wane as IaaS ramps, IAC rises

BOSTON — Enterprises have warmed up to the public cloud with the belief it can be at least as secure, if not more, than on-premises infrastructure. But IT teams still need to fortify their cloud apps, and some increasingly rely on automation and infrastructure as code to do the job.

It has taken a long time for businesses’ public cloud security concerns to subside. In fact, the security controls put in place on the public cloud are often more robust than a company’s on-premises setup, in part because enterprises can tap into the public cloud providers’ significant security investments, said Andrew Delosky, cloud architect at Coca-Cola Co.

“A hack on you is a hack on your vendors,” Delosky said here, during a presentation at Alert Logic’s Cloud Security Summit this week. “[Cloud providers] don’t want to be in the news just as much as you don’t want to be in the news.”

While public cloud security concerns, in general, have dwindled, IT security professionals still take the subject seriously, said Bob Moran, chief information security officer at American Student Assistance, a federal student loan organization based in Boston.

“I think security professionals are the ones that are uncomfortable with cloud security because they don’t understand it,” said Moran, whose company’s cloud deployment is mostly SaaS right now, but includes some trials with Amazon Web Services (AWS) infrastructure.

Adjust a security strategy for cloud

IT security professionals face a learning curve to evolve their practices and tool sets for cloud. For starters, they need to grasp the concept of shared responsibility — a model by which a public cloud provider and a customer divvy up IT security tasks.

In AWS’ shared responsibility model for infrastructure as a service (IaaS), the vendor assumes responsibility for everything from the hypervisor down, said Danielle Greshock, solutions architect at AWS, in a presentation. This means AWS secures the hardware that underpins its IaaS offering, which includes servers, storage and networks, as well as the physical security of its global data centers. AWS users are generally responsible for the security of their data, applications and operating systems, as well as firewall configurations.

However, the line between AWS’ security responsibilities and those of its users can blur and shift, depending on which services you use.

“[With AWS Relational Database Service], you don’t actually have access to the underlying server, so we patch the operating system for you,” Greshock said. This is different from a traditional IaaS deployment based on Elastic Compute Cloud instances, where users themselves are responsible for OS patches and updates.

“Knowing what part you need to worry about is probably the key to your success,” Greshock said.
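The split Greshock describes can be sketched as a simple lookup table. This is an illustrative Python sketch, not AWS documentation; the task names and boundaries are simplified stand-ins for the real shared responsibility model.

```python
# Illustrative sketch of the shared responsibility split for two AWS
# service models. Task names are simplified; consult AWS' own
# documentation for the authoritative boundaries.
RESPONSIBILITY = {
    "EC2": {  # traditional IaaS
        "aws":      ["hardware", "data_center", "hypervisor"],
        "customer": ["os_patching", "applications", "data", "firewall_config"],
    },
    "RDS": {  # managed database service: AWS also patches the OS
        "aws":      ["hardware", "data_center", "hypervisor", "os_patching"],
        "customer": ["applications", "data", "firewall_config"],
    },
}

def who_patches_os(service: str) -> str:
    """Return which party patches the OS under a given service model."""
    for party, tasks in RESPONSIBILITY[service].items():
        if "os_patching" in tasks:
            return party
    raise ValueError(f"no party owns OS patching for {service}")

print(who_patches_os("EC2"))  # customer
print(who_patches_os("RDS"))  # aws
```

Knowing which side of this table a given task falls on — per service, not per vendor — is exactly the "what part you need to worry about" question.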

Apart from reviewing shared responsibility models, IT teams can evolve their security strategies for public IaaS in other ways. Some tried-and-true tools and practices they’ve used on premises, such as user access controls, encryption and patch management, remain in play with cloud, albeit with some adjustments. For identity and access management, for example, teams will want to sync any on-premises systems, such as Active Directory, with those they use in the cloud. If they delete or alter a user ID on premises, they implement the change in the public cloud, as well.
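The identity sync discipline described above can be sketched in a few lines. The user sets below are hypothetical stand-ins; a real implementation would query the on-premises directory (such as Active Directory) and the cloud provider's IAM APIs instead of using literal values.

```python
# Minimal sketch: reconcile cloud IAM accounts against the on-premises
# directory and flag cloud IDs that should be deactivated because they
# no longer exist on premises. The literal sets are placeholders for
# real directory and cloud IAM queries.
def cloud_ids_to_deactivate(on_prem_users, cloud_users):
    """Return sorted cloud user IDs with no matching on-prem account."""
    return sorted(set(cloud_users) - set(on_prem_users))

on_prem = {"alice", "bob"}
cloud = {"alice", "bob", "carol"}  # carol was deleted on premises
print(cloud_ids_to_deactivate(on_prem, cloud))  # ['carol']
```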

But some organizations have adopted more emerging technologies or practices, such as infrastructure as code (IAC), to ease public cloud security concerns.

In traditional on-premises models, IT teams centralize control over any new resources or services that users deploy, and this should still be the case with public IaaS. But cloud’s self-service nature enables users to spin up resources on demand — sometimes without IT approval – and bypass that centralized control, said Jason LaVoie, vice president of global platform and delivery at SessionM, a customer engagement software provider based in Boston.

With on-prem, you have an IT team with keys to the kingdom. But it doesn’t always work that way with AWS.
Jason LaVoie, vice president of global platform and delivery, SessionM

“With on-prem, you have an IT team with keys to the kingdom,” said LaVoie, whose company uses Amazon Web Services. “But it doesn’t always work that way with AWS.”

SessionM uses IAC to minimize the security risks in cloud self-service. IAC introduces more frequent and formal code reviews, increased automation and other practices that minimize the “human element” of public cloud resource deployment, so it helps reduce risk, LaVoie said.
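LaVoie's point — that defining infrastructure in reviewable code makes every deployment repeatable — can be illustrated with a small sketch. The template fields below are invented for illustration; real shops would express this in a tool such as Terraform or CloudFormation rather than plain Python.

```python
# Sketch: an infrastructure template as reviewable code. Every
# environment gets identical security controls; only capacity differs.
# Field names are illustrative, not a real provider schema.
BASE_SECURITY = {
    "encryption_at_rest": True,
    "public_access": False,
    "allowed_ports": [443],
}

def render_stack(env: str, instance_count: int) -> dict:
    """Produce the full stack definition for one environment."""
    return {
        "environment": env,
        "instance_count": instance_count,
        "security": dict(BASE_SECURITY),  # same controls everywhere
    }

dev = render_stack("dev", 1)
prod = render_stack("prod", 6)
assert dev["security"] == prod["security"]  # identical controls
```

Because the template is code, a change to `BASE_SECURITY` goes through code review once and applies everywhere — the "human element" of hand-configuring each deployment drops out.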

Coca-Cola, which uses both AWS and Azure for IaaS, has adopted a similar approach.

“The whole infrastructure as code is such a revelation,” Delosky said. “Just being able to deploy the exact same application footprint, from an infrastructure level, every single time, no matter if you are in dev, test or production, with the same security controls … that’s a huge game-changer.”

Another way enterprises can evolve their security strategies for cloud is to appoint a dedicated IT staff member to oversee a cloud deployment, often with a specific focus on security or governance. Some organizations refer to this role as a cloud steward, said Adam Schepis, solutions architect at CloudHealth, a cloud management tool provider based in Boston.

Others, such as Coca-Cola, have created a Cloud Center of Excellence to unify IT and business leaders, as well as line-of-business managers, CISOs and others, to outline goals, discuss challenges and more.

“For us, that was probably the most critical thing we did,” Coca-Cola’s Delosky said.

Sibos 2017

The financial services industry banks on the Microsoft Cloud for digital transformation

Financial services organizations are at a transformational tipping point. Faced with fierce market pressures – nimble disruptors, complex regulations, digital native customers – technology transformation in the industry is essential, and it’s increasingly becoming a competitive edge. Firms leading the charge into this new era are transforming customer experiences, fostering a new culture of work, optimizing operations and driving product innovation, and they are using Microsoft cloud, AI and blockchain technologies in their journey to become digital leaders of the future. TD Bank, Sumitomo Mitsui and Credito Agricola are among the organizations that have embraced this digital transformation happening in every part of the world.


Arkadin unveils Cisco integrations for cloud telephony

Cloud communications provider Arkadin, based in Paris, has introduced a series of Cisco integrations that support cloud-based telephony with Cisco Spark and WebEx.

The Cisco integrations will use Arkadin’s PSTN service to support cloud-based calling. Arkadin will introduce Cisco Spark Calling for businesses deploying a cloud PBX or for businesses that want to use their existing on-premises telephony equipment.

Arkadin will also use Cisco’s Cloud Connected Audio – Service Provider, a partner architecture in which Cisco and its partners team up to provide customers with telephony connectivity. Arkadin’s service is branded Cloud Connected Audio – Arkadin and integrates with WebEx meetings. The service lets organizations lower audio conferencing costs by carrying audio over their existing networks, and it offers global toll, toll-free and callback services.

The Cloud Connected Audio service offers an Opex model and enables organizations to integrate with on-premises telephony with WebEx capabilities. Audio bridging for the service is managed in the WebEx cloud.

Box debuts machine learning tools

Cloud content manager Box has revealed Box Skills, a framework for applying machine learning tools to content stored in Box. The service will allow organizations to digitize and automate business processes.

The machine learning tools are powered by IBM Watson, Microsoft Azure and Google Cloud. These tools include video indexing, computer vision and sentiment analysis.

Box, based in Redwood City, Calif., also announced a set of developer resources to allow organizations to build custom machine learning skills, such as using a third-party machine learning platform for specific workflows or linking multiple skills to enable intelligent business processes.

At its annual customer conference this week, Box showcased three Box Skills that are currently in development:

  • Audio intelligence creates and indexes transcripts of audio files that can be searched.
  • Video intelligence offers features such as transcription and topic detection to allow users to find the information they need in a video.
  • Image intelligence detects individual objects and concepts in image files and automatically adds keyword labels to images.

The Box Skills framework and developer kit are expected to enter beta early next year.

8×8 offers new UC product suites

Unified communications as a service provider 8×8 Inc. has launched a suite of UC products called Virtual Office Editions. The new product suite offers enterprise-grade communications and mix-and-match pricing.

X8 offers unlimited calling to 45 countries and includes the full suite of Virtual Office features. The suite also includes integrations with Salesforce, Zendesk, NetSuite CRM and Salesforce analytics for call recording, call quality reporting and barge-monitor-whisper capabilities. The platform also integrates contact center capabilities from 8×8 ContactNow.

X5 offers unlimited calling to 32 countries, Virtual Office features such as meetings and HD voice and video, and integrations with Salesforce, Salesforce analytics, Zendesk and NetSuite CRM.

X2 offers unlimited calling to 14 countries, Virtual Office features such as HD voice and video, and integrations with Salesforce, Zendesk and NetSuite CRM.

The X8 version of Virtual Office is currently available in the U.S. and U.K., while the X5 and X2 versions are available in the U.S.

Sierra-Cedar unveils new survey of HR technology trends

LAS VEGAS — With their sights fixed firmly on the cloud, HR managers and IT purchasers are placing greater emphasis on data security, a personalized user experience, and improved learning and development. Those are among the top HR technology trends highlighted in the just-released Sierra-Cedar 2017-2018 HR Systems Survey, which for the first time also attempted to measure the impact of socially responsible policies on corporate bottom lines.

Sierra-Cedar Inc. has made a tradition of unveiling the survey results at the HR Technology Conference & Exposition, and it did so again here at the 2017 annual gathering. The survey celebrated its 20th anniversary this year.

“We’re having the same type of change today that we had 20 years ago,” said Stacey Harris, vice president of research and analytics for the consulting firm, which is based in Alpharetta, Ga. “The market is flipping on its head.”

Harris showed a chart of the top HR technology trends and purchase priorities in the survey’s history, noting that the HR automation and employee self-service of the early days are matched by today’s cloud, mobile and social HR in their ability to transform organizations.

She said the two-decade evolution in HR technology trends reflects HR’s own shift from a focus on its own processes to being more a driver of outcomes for the entire business. “It isn’t just about doing HR better,” she said. “It’s about doing HR for a purpose.”

But the practical aspects of delivering HR and talent management improvements remain at the forefront. “Integration strategies and risk and security strategies are roiling to the top,” Harris told conference attendees. “If you don’t have something going on in [those areas], you’re one of the few organizations that we talked to.”

Sierra-Cedar conducted the survey of 1,312 organizations last spring. Just over half of them were small organizations (2,500 or fewer employees), with medium-sized organizations (2,500 to 10,000 employees) representing around a quarter of respondents and large organizations (10,000-plus) the remaining quarter.

Social responsibility correlates with HR technology trends

In previous surveys, the consulting firm compared the primary criteria an organization uses to make decisions — categorizing it as either top-performing in the financial sense, talent-driven or data-driven — and then measured the effect of that management style on positive business outcomes, such as return on equity.

For this year’s survey, social responsibility was added for the first time as a decision-making style. Its impact on business outcomes proved significant.

Sierra-Cedar found organizations that emphasized diversity, wellness, flexible schedules, family leave and employee engagement performed 14% better than a control group. Talent-driven organizations — those with mature career- and succession-planning processes and which were rated strong in such talent metrics as employee retention and engagement — also saw a 14% advantage. Organizations that emphasized financial or data-driven decision-making performed 8% and 3% better, respectively.

“Social responsibility has become such a big issue, both in our headlines and in our talking points and in our businesses and in our brands,” Harris said. “You can’t ignore it. Some organizations are taking it to a whole new level,” and technology is helping them to address it.

Other survey data showed the most socially responsible organizations were much more likely to have highly rated HR processes for performance and compensation management and onboarding. 

HR technology trends reflected in purchase intentions

A significant portion of the survey deals with respondents’ purchase intentions. While all seven major HR technology categories that Sierra-Cedar tracks showed slight growth, the percentage growth over last year was greatest in talent management and in business intelligence and analytics.

One category getting significant attention from purchasers is learning and development tools. The survey report noted that learning management systems (LMSes) are among the oldest HR systems — second only to core HR and payroll — and have been installed for five years on average.

LMSes are being considered for change at a higher rate than other applications today.
Stacey Harris, vice president of research and analytics, Sierra-Cedar

“LMSes are being considered for change at a higher rate than other applications today, with 14% of organizations planning for replacement in the next 24 months, and 24% evaluating other solutions,” the report said. A better user experience, new functionality and improved integration were by far the top hopes for new LMSes.

The report also found a strong need to integrate the many HR systems and applications that most organizations have. The average organization has 18 integration touch points, though the number varies widely by size, with large organizations averaging 62 touch points and small ones only five. Of survey respondents, 20% had a major initiative to improve system integration, and 10% are working on one, while 17% already have a regularly updated enterprise integration strategy. But that still leaves around half with no real integration strategy, which Sierra-Cedar suggested is a missed opportunity, because organizations that have a strategy earn 21% higher ratings for their business outcomes.

Enterprise integration strategies also proved to be positive contributors in organizations with effective processes to protect HR data privacy and security. In addition, 70% of the organizations in the top tier for business outcomes have a regularly updated risk and security strategy, according to Sierra-Cedar.

The survey also tracked HR technology trends when it comes to the IT architectures that organizations planned to use to transform their HR systems. The results showed 22% planned the rip-and-replace approach, moving everything at once to the cloud, while 25% planned a hybrid setup, with talent management and workforce management typically in the cloud and the rest remaining on premises. Another 22% planned to run similar applications in parallel in both deployment models, while 19% chose to outsource HR to service providers.

The report provided further evidence of the inexorable march to SaaS HR applications and the increased importance of human capital management that is personalized for each employee. “If you haven’t talked about personalization yet, make sure you put that in your notes,” Harris said. “Personalization is going to be the next big thing.”

The 122-page report is free for download after registration and a short questionnaire.

How to bring Azure costs down to earth

The migration of virtual machines to the cloud sounds great — until your IT department is hit with a huge bill.


For every minute the VM runs and byte it uses, Microsoft adds charges to a monthly tab. How do you manage Azure costs? The formula is relatively simple — admins should understand the approximate price tag before workloads move to Azure and right-size VMs to reduce wasteful expenses.
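That "know the approximate price tag first" advice reduces to back-of-the-envelope arithmetic. The hourly and per-GB rates below are illustrative placeholders, not current Azure prices; real numbers come from the Azure pricing calculator.

```python
# Back-of-the-envelope monthly VM cost: runtime charges plus disk
# storage. Rates are illustrative placeholders -- look up real figures
# in the Azure pricing calculator before committing to a migration.
def monthly_vm_cost(hours_per_month, hourly_rate, disk_gb, gb_month_rate):
    """Estimate one VM's monthly bill from compute and storage rates."""
    return hours_per_month * hourly_rate + disk_gb * gb_month_rate

# A VM running around the clock (~730 hours/month) at a hypothetical
# $0.10/hour with a 128 GB disk at a hypothetical $0.05/GB-month:
cost = monthly_vm_cost(730, 0.10, 128, 0.05)
print(f"${cost:.2f}")  # $79.40
```

Multiply that estimate across every workload slated for migration and the monthly tab stops being a surprise.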

Find the right Azure region

The first step is to select the proper Azure region. Each region has different resources, capabilities and services; these facets — and the region’s location relative to the business — determine the cost per region. Not every region is available to every customer — availability depends on the organization’s location or subscription. For example, users in the United States cannot use Australian data centers without an Australian billing address.

A move to a less expensive Azure region makes a noticeable difference when it involves several dozen servers. However, a migration to a different Azure region affects the end-user experience with increased latency if applications move farther from users and customers. Admins can use Microsoft’s Azure latency test site to understand network performance per region.
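The price-versus-latency trade-off can be framed as a small selection problem: pick the cheapest region whose measured latency stays under an acceptable ceiling. The region names, relative prices and latencies below are invented for illustration.

```python
# Sketch: choose the cheapest region that still meets a latency
# ceiling. Prices (relative units) and latencies (ms) are invented
# for illustration; plug in real measurements and pricing.
REGIONS = {
    "East US":        {"price": 1.00, "latency_ms": 30},
    "West Europe":    {"price": 0.95, "latency_ms": 110},
    "Southeast Asia": {"price": 0.90, "latency_ms": 220},
}

def cheapest_acceptable(regions, max_latency_ms):
    """Return the lowest-price region under the latency ceiling."""
    ok = {name: r for name, r in regions.items()
          if r["latency_ms"] <= max_latency_ms}
    if not ok:
        return None
    return min(ok, key=lambda name: ok[name]["price"])

print(cheapest_acceptable(REGIONS, 150))  # West Europe
print(cheapest_acceptable(REGIONS, 50))   # East US
```

Note how tightening the latency ceiling forces the choice back to the pricier nearby region — which is exactly the trade-off described above.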

Don’t make one-size-fits-all VMs

To further reduce Azure costs, align VMs to the proper performance level. For example, differentiate between production and dev/test environments, and build VMs accordingly. Dev/test VMs don’t usually need the production specifications as they rarely require high availability. Reduce the resources — and their associated costs — for dev/test VMs so they get only what they need.
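One simple way to enforce that discipline is a sizing policy keyed on environment. The size names below follow Azure's naming style, but the mapping itself is an invented example, not a recommendation.

```python
# Sketch: a sizing policy that stops dev/test VMs from inheriting
# production specs. Size names follow Azure's naming convention, but
# the mapping is illustrative -- tune it to actual workload needs.
SIZE_POLICY = {
    "prod": "Standard_D4s_v3",  # production-grade capacity
    "test": "Standard_B2s",     # burstable, cheaper
    "dev":  "Standard_B1s",     # smallest burstable size
}

def vm_size_for(environment: str) -> str:
    """Look up the approved VM size, defaulting to the dev size."""
    return SIZE_POLICY.get(environment, SIZE_POLICY["dev"])

print(vm_size_for("prod"))     # Standard_D4s_v3
print(vm_size_for("sandbox"))  # Standard_B1s
```

Defaulting unknown environments to the smallest size means an unplanned sandbox costs pennies, not production money.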

Look at infrastructure as a service (IaaS) servers

In the web-based GUI wizard admins use to create servers, Azure presents the high-performance VMs as the default. Click on “View All” in the top right-hand corner of the dialog to reveal the range of server sizes. A0 is small and costs significantly less than Microsoft’s suggested options, which makes it ideal for experimentation.

Range of server sizes
Figure 1: The A0 server size is the smallest and least expensive option.

A0 is also oversubscribed, which means CPU performance varies based on other workloads in the node. The lower tiers also do not support load balancing and have other limitations, but the VMs in those levels make for ideal inexpensive test machines.

Admins also have a disk choice to limit Azure costs. To build an IaaS VM, there are two options: hard disk drives or solid-state drives (SSDs). Standard disks are good enough for most workloads with speeds up to 500 IOPS, depending on the configuration. If speed is not a concern, avoid the more expensive SSD choices.
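The disk decision reduces to a throughput question. The 500 IOPS figure comes from the paragraph above; the helper itself is just a sketch, and real disk limits vary by size and tier.

```python
# Sketch: pick the cheaper standard disk unless the workload needs
# more IOPS than it can deliver. 500 IOPS is the standard-disk figure
# cited above; actual limits depend on disk size and tier.
STANDARD_DISK_IOPS = 500

def disk_tier(required_iops: int) -> str:
    """Return the cheapest disk tier that satisfies the IOPS need."""
    if required_iops <= STANDARD_DISK_IOPS:
        return "Standard HDD"
    return "Premium SSD"

print(disk_tier(300))   # Standard HDD
print(disk_tier(4000))  # Premium SSD
```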

Aside from IaaS, there are other options that many users are unaware of or fail to understand.

Implement services as a service

Some administrators new to the cloud see it as pure IaaS where everything needs to run on its own VM. This is an option — but an expensive one.

A move to a less expensive Azure region makes a noticeable difference when it involves several dozen servers.

Instead, think of that SQL Server and all the associated costs for compute, storage and licensing. Why deal with the price and deployment headaches when you can use SQL Server as a service instead? It’s cheaper — a Standard_B4ms VM (four cores, 16 GB of RAM) with SQL Server Standard costs about $383 a month, while an Azure setup for multiple databases costs about $224 a month on a standard tier. Plus, SQL as a service saves the administrator from the patch and update process.
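Using the article's own figures, the savings arithmetic works out as follows:

```python
# The figures cited above: ~$383/month for a Standard_B4ms VM running
# SQL Server Standard vs. ~$224/month for the managed standard tier.
vm_with_sql = 383
managed_sql = 224

monthly_savings = vm_with_sql - managed_sql
annual_savings = monthly_savings * 12
print(monthly_savings, annual_savings)  # 159 1908
```

Roughly $159 a month, or about $1,900 a year, per server — before counting the admin time no longer spent on patching.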

Check your company’s security requirements to see if it clears the use of database servers in the cloud. Because these databases are on a shared resource with potentially hundreds of other companies, an exploit or misconfiguration could leak data outside the organization.

Analyze the cost of cloud resources

Admins must understand business requirements and know what costs they bring before a move to the cloud. On-premises compute has inefficiencies and sprawl that add expenses, but the lack of a monthly bill for most environments lets those costs fly under the radar.

By the same token, it’s vital to know the cloud environment’s requirements and the expenses for applications and infrastructure. Use Microsoft’s Azure calculator to work out the potential price tag.

Bundle resources for easier management

Admins should tap into resource groups to further control Azure costs. This feature collects the service resources, such as the VM, database and other assets, into a unit. Once the business no longer needs the service, the admins remove the resource group. This avoids a common housekeeping problem where the IT staff missed an item and the charges for it show up in the next bill.
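The cleanup benefit can be sketched as a toy model of grouped resources. The group and resource names are invented; in real Azure, a single `az group delete --name <group> --yes` removes everything in the group in one call.

```python
# Toy model of resource groups: deleting the group removes every
# resource inside it, so no stray item is left behind to accrue
# charges on next month's bill. Names are invented for illustration.
groups = {
    "demo-app": ["vm-web01", "sqldb-demo", "disk-web01", "ip-web01"],
    "payroll":  ["vm-pay01"],
}

def delete_group(groups: dict, name: str) -> list:
    """Remove a group and return the resources that went with it."""
    return groups.pop(name, [])

removed = delete_group(groups, "demo-app")
print(len(removed))  # 4
print(list(groups))  # ['payroll']
```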

Efficient code makes a difference

In an on-premises scenario, admins overcome inefficient code with additional resources. In the cloud, where every item has a cost per transaction or per second, better programming lowers expenses.

For example, an inexperienced database programmer who builds an additional temporary database costs the company more money each time a new one spins up in the cloud. As this inefficient practice multiplies with each deployed instance, so does the cost. A better programmer with a more thorough understanding of SQL avoids this waste and builds code that takes less time to run.
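That multiplication effect is easy to quantify. The per-database monthly figure below is hypothetical, chosen only to show how waste scales with instance count.

```python
# Sketch: an unnecessary temporary database created per deployed
# instance multiplies with scale. The $15/month-per-database figure
# is hypothetical.
TEMP_DB_MONTHLY_COST = 15

def wasted_spend(instances: int, temp_dbs_per_instance: int) -> int:
    """Monthly cost of temp databases that better code would avoid."""
    return instances * temp_dbs_per_instance * TEMP_DB_MONTHLY_COST

print(wasted_spend(40, 1))  # 600
```

One sloppy pattern at $15 a pop becomes $600 a month across 40 instances — the kind of line item that justifies a better-paid programmer.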

Good programmers require higher salaries, but for a company that uses the cloud to scale out, that expense is worth it. The business saves more in the long run because lower resource utilization — thanks to better code — results in a smaller bill from Microsoft.

Next Steps

How Azure users can avoid higher Oracle licensing bills

Five steps to control cloud storage costs

Azure Stack adopters might need a hand

SoftNAS Cloud architecture gets facelift, new features

SoftNAS redesigned its Cloud Data Platform, splitting it into three product editions that offer native cloud support for applications, faster data migration across multiclouds and more feature-rich data services for hybrid cloud storage deployments.

SoftNAS split its Cloud Data Platform into Cloud Enterprise and Cloud Essential editions, and it added a new Cloud Platinum version. SoftNAS Cloud Enterprise employs the vendor’s high-performance network attached storage (NAS). Cloud Essential is optimized for archiving, backup and big data.

SoftNAS Cloud Platinum, which is still in beta, is a replication product for cloud data management.

The new architecture is designed to deliver data services across multicloud and hybrid cloud configurations. 

“This is about delivering a richer set of data services to enable the hybrid cloud,” said Jeff Kato, senior storage analyst at Taneja Group. “Software-defined storage vendors are going natively into the cloud and they are providing more data services.”

SoftNAS Cloud Platinum includes wide area network (WAN) optimization for Amazon Web Services (AWS) and Microsoft Azure clouds. It has the company’s patent-pending UltraFast technology for faster bulk data movement in cloud, which is comparable to the IBM Aspera faspex for high-speed, global person-to-person file delivery of unlimited-sized digital packages.

“We both do this a bit differently, but it accomplishes the same results,” SoftNAS CEO Rick Braddy said. “We built UltraFast into the data platform so we can move massive amounts of data around the world. I like to say that it flattens the internet.

“This is a cloud data platform for any hybrid cloud data needs,” Braddy added. “Once the data is migrated, it keeps it live and it keeps updating it to keep it fresh.”

SoftNAS Cloud Enterprise is a rebranding of the company’s flagship NAS product for primary storage, which competes with NetApp filers. The software targets SaaS-enabled applications, business application migration, web server content and dev-test environments.

The SoftNAS Cloud Essential for secondary storage supports basic file services, file server consolidation, cloud backup, disaster recovery and tape-to-cloud archiving. The company’s ZFS-based NAS is layered over object storage and supports Amazon S3 and Microsoft Azure Blob Storage.

Braddy said SoftNAS Cloud Platinum supports Apache NiFi for data flow automation between systems, with a web-based interface to manage data flows in real time. The key technology within Platinum is UltraFast, which gives customers the ability to “lift and shift” applications from an on-premises location to the cloud and performs live migration of workloads.

The Platinum software handles one-to-many and many-to-one replication, multisite disaster recovery, and bulk data transfers. The product is delivered through a subscription-based software service.

SoftNAS has another patent-pending technology that speeds up workload performance. The ObjFast technology is integrated into the company’s NAS and gives customers block-storage performance at the price of object storage. The company also has a SoftNAS Cloud File Gateway virtual appliance that serves as a unified file system for VMware vSphere and Microsoft Hyper-V hypervisors for on-premises and hybrid cloud storage.

 “The company is providing a stack to ramp up cloud compatibility,” Kato said.

A pair of big companies explain why they jumped on board the Microsoft cloud


Microsoft Corporate VP of Azure Julia White (Microsoft)

In the cloud service market, Microsoft finds itself firmly in second place.

But in trying to catch up with market leader Amazon, the tech giant argues it has a distinct advantage: its long history in the business software game and its long-established relationships with companies of all sizes, it says, mean it knows how to meet companies’ needs.

That’s not just an idle boast, if my conversations with Geico and Dun & Bradstreet are any indication. Both companies recently turned to Microsoft’s Azure cloud service. And in both cases, the companies saw distinct benefits to using Azure over rival services.

“You can tell Microsoft is hungry,” said Pat McDevitt, the chief technology officer of Dun & Bradstreet, which recently started experimenting with Azure. “They are doing exactly what they need to do.”

Azure is in the spotlight this week thanks to Ignite, Microsoft’s annual developer conference. The company typically uses Ignite to promote its massive cloud computing platform. At this year’s event, Microsoft announced several tools to make it easier for large companies in particular to use Azure.

‘Essentially evacuating the data center’


Fikri Larguet, Geico’s director of IT (Geico)

One big company that’s already embraced Azure is Geico. The insurance giant began shifting over to Microsoft’s cloud service about three years ago, said Fikri Larguet, the company’s director of information technology. Geico’s rationale: Owning and operating data centers and servers is both expensive and outside its core area of expertise.

The company, which has more than 38,000 employees, is “essentially evacuating the data center,” Larguet said. Geico, a wholly-owned subsidiary of Berkshire Hathaway, has been moving a little bit at a time. But over 50% of the company’s core business services are already in the cloud, and its goal is to be “full cloud” by 2020, he said.

Larguet said his team has a mandate to “get out of the data center business as quickly as possible.”

Geico bet on Azure because Microsoft had already built into its cloud service the ability to interact with lots of different applications. That made it a smooth process for Geico to bring over its existing software, Larguet said. Similar support for newer tools and technologies also made it easier for Geico to add on things like software containers, a trendy new Silicon Valley technology for building large-scale apps.

The biggest challenge Geico’s faced hasn’t been technical, Larguet said. As the team adjusts to a post-data center era, Larguet is trying to teach it to “fail fast” and be unafraid of trying new things. For him, this is a chance for a fresh start in the software organization.

“We don’t want to carry over our bad habits,” he said.

‘We don’t need to be bleeding edge’


Dun & Bradstreet CTO Pat McDevitt (Dun & Bradstreet)


Dun & Bradstreet, a firm that’s provided data and analytics to businesses since 1841, is taking its cloud migration a little more slowly.

The firm “doesn’t need to be a pioneer, doesn’t need to be bleeding edge,” McDevitt, its CTO, said. Dun & Bradstreet has been around for the better part of two centuries; it can afford to experiment with the cloud rather than rushing to push everything over to it right away. And the firm has been experimenting; it moved some key applications to Amazon Web Services a few years ago.

About three months ago, though, the firm started experimenting with Azure. What appealed to Dun & Bradstreet were Microsoft’s tools for managing data. Those tools make it easy for companies to build cloud-based applications that read and write from their existing databases. With them, the firm could more quickly build mobile apps and other new-wave tools.

McDevitt asked one of the firm’s development teams in Austin to test Azure by using it to build a new mobile app for Dun & Bradstreet’s clients. Although these developers’ past experience was primarily with Amazon’s cloud service, they found it so easy to work with Azure that they finished ahead of schedule.

And Azure offered the firm another benefit. Because Microsoft has embraced technology like Docker software containers and the Linux operating system, Azure integrated better with Dun & Bradstreet’s existing systems than McDevitt had first thought. Originally, the firm was going to use Azure just for new apps. Now the firm is considering using it for older apps as well, he said.

Microsoft worked really hard to win his business, McDevitt said.


AWS cloud platform will share cloud computing heights, CEO Jassy says

ORLANDO, Fla. — Andy Jassy is confident that Amazon Web Services, Amazon’s cloud computing division, will remain the world’s top-selling cloud infrastructure provider.

But AWS won’t be the world’s only cloud provider, Jassy said.

“There’s not going to be only one successful company,” Jassy said at the Gartner Symposium/ITxpo here Monday. He predicts companies moving applications and data to the cloud will go with two or more of a cluster of large providers, which include Microsoft, Google and IBM.

But for most companies, Jassy said, multicloud won’t mean splitting workloads evenly among different providers. Many CIOs and IT leaders start off thinking they’ll do that, but “very few end up going that route.”

For one thing, multiple cloud platforms mean multiple systems for cloud teams to learn and keep straight, he said. And cloud providers offer volume discounts to companies: the more cloud they use, the better the price. If companies divvy up their workloads, they lose their buying power.
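The buying-power point can be made concrete with a sketch. The discount tiers and rates below are purely illustrative, not any provider’s actual pricing:

```python
def tiered_cost(monthly_spend):
    """Apply illustrative volume-discount tiers to a monthly cloud bill.

    Hypothetical tiers: no discount on the first $100k, 10% off the
    next $400k, 20% off everything above $500k.
    """
    tiers = [(100_000, 0.00), (400_000, 0.10), (float("inf"), 0.20)]
    cost, remaining = 0.0, monthly_spend
    for width, discount in tiers:
        slab = min(remaining, width)
        cost += slab * (1 - discount)
        remaining -= slab
        if remaining <= 0:
            break
    return cost

# One provider billed $1M vs. the same spend split evenly across two.
single = tiered_cost(1_000_000)
split = 2 * tiered_cost(500_000)
print(f"single provider: ${single:,.0f}  split across two: ${split:,.0f}")
```

Under these made-up tiers, the split bill comes out $60,000 a month higher, because neither half of the spend reaches the deepest discount tier.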

Instead, most companies pick one provider and then put a small percentage of their workloads in a second one. They do this to avoid vendor lock-in and give them the option to switch if the first vendor raises its prices or goes out of business, Jassy said.

The AWS cloud platform will be the primary cloud for many companies, he said, because the company has “so much more functionality than anybody else, a much larger and more mature ecosystem and then a much more mature platform.”

AWS was launched in 2006, and rivals have struggled to keep up with its fast growth and pace of innovation. But in August, AWS tied Microsoft’s Azure for best cloud platform in Gartner’s annual ranking of cloud providers.

Focus on customer needs

Gartner analyst Daryl Plummer interviewed Jassy onstage in front of thousands of CIOs and IT leaders, and asked Jassy about the factors that helped AWS grow so big so fast.

About 90% of what we build is driven by what customers tell us matters to them. The other 10% is listening to customers really carefully about what they’re trying to solve.
Andy Jassy, CEO, Amazon Web Services

“About 90% of what we build is driven by what customers tell us matters to them. The other 10% is listening to customers really carefully about what they’re trying to solve,” Jassy said. AWS puts this into practice in its product development: once it launches a new product or service, it examines customer feedback, makes adjustments and tries again.

Another way AWS distinguishes itself, Jassy said, is through its customer support function called Trusted Advisor, which surveys how customers are using AWS and then alerts them if they’re using it in a “suboptimal way.”

If, for example, a customer has a lot of virtual machines in the AWS cloud but isn’t using them, “we’ll send out a note saying, ‘Hey, maybe you want to stop spending money with us, and you can resume spending money when you need it.'”

AWS has sent millions of such notices out to customers over the last few years and as a result has saved them $500 million a year, Jassy said.
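The article doesn’t describe how Trusted Advisor’s checks work internally, but the idea behind an idle-resource notice can be sketched. Everything here — the 2% CPU threshold, the sample window, the fleet data — is invented for illustration:

```python
def flag_idle(instances, cpu_threshold=2.0, min_samples=24):
    """Flag instances whose average CPU utilization falls below
    cpu_threshold percent over the sampling window.

    `instances` maps an instance ID to a list of hourly CPU-utilization
    samples (percent). Instances with too few samples are skipped, so a
    freshly launched VM isn't flagged prematurely.
    """
    idle = []
    for instance_id, samples in instances.items():
        if len(samples) < min_samples:
            continue
        if sum(samples) / len(samples) < cpu_threshold:
            idle.append(instance_id)
    return idle

fleet = {
    "i-webserver": [35.0] * 24,  # busy all day
    "i-forgotten": [0.5] * 24,   # effectively idle
}
print(flag_idle(fleet))  # → ['i-forgotten']
```

A real check would pull utilization metrics from the provider’s monitoring service rather than a dictionary, but the flag-and-notify logic is the same shape.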

But AWS gets dinged by customers, too. In an unofficial survey of about 75 people, Plummer found that people were frustrated by the deluge of AWS cloud platform features — and how they’re accounted for in infamously confusing bills.

“Nobody knows what they’re paying for,” he said.

Jassy said that for some companies, keeping up with the number of new features and services is “sometimes daunting”: there were more than 1,000 in 2016, and 1,250 are expected by the end of the year. Customers are happy with the cost savings and innovation those features afford them, he said. But they’ve also told AWS it needs to simplify things.

“I think we’ve made a lot of progress there, but we still have work to do,” Jassy said.

A call to cloud

Jassy would find an ally in Sasi Pillay, who was in the audience. Now vice president of IT services and CIO at Washington State University, Pillay is a longtime advocate of cloud computing.

Pillay moved external websites to AWS as CTO at space agency NASA, and now he’s trying to do the same at WSU, where websites are hosted on premises.

“My long-term vision is to own as little IT assets as possible and instead focus on delivering solutions for our customers,” he said.

To determine what faculty, staff and students need from technology, Pillay has worked to streamline the governance structure through which IT discusses projects with working groups, deans and the university cabinet. Also in the works are a student survey, an interactive list of IT projects students can vote on and a hackathon in which they can develop mobile applications that IT can later deploy.

Pillay saw a clear parallel between the partnership he’s forming with WSU constituents and the one AWS has made with its customers.

“I think that’s what AWS is trying to do with its customers. Instead of being just a service provider, it’s becoming a strategic partner,” he said.

Cloud pricing models reignite IaaS provider feud

As the biggest cloud providers battle for customers, the main tactic always comes back to cost.

A few years ago, Amazon, Microsoft and Google engaged in a public cloud price war after Google Cloud Platform (GCP) entered the market and began to undercut the other two hyperscalers. Prices continue to drop, though the fanfare and rapid-fire back-and-forth have largely subsided. But in the past month, the public cloud war has reignited in a new way, as these vendors add more cloud pricing models to reduce users’ costs.

The first salvo in the latest round of one-upmanship came from Amazon Web Services (AWS), with last week’s long-anticipated departure from per-hour billing in response to the per-minute billing available on GCP and Microsoft Azure. Amazon jumped ahead with per-second billing, only to be matched days later by Google, which stated that its customers will feel less impact from the change than users of a certain unnamed vendor that used to charge on a per-hour basis, a thinly veiled shot at AWS.
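The difference billing granularity makes is easy to quantify. A sketch, assuming an illustrative $0.10/hour rate and the one-minute billing minimum AWS announced alongside per-second pricing:

```python
import math

RATE_PER_HOUR = 0.10  # illustrative on-demand rate, not a real price

def cost_hourly(seconds):
    """Per-hour billing: every started hour is charged in full."""
    return math.ceil(seconds / 3600) * RATE_PER_HOUR

def cost_per_second(seconds, minimum=60):
    """Per-second billing with a one-minute minimum charge."""
    return max(seconds, minimum) * RATE_PER_HOUR / 3600

# A 5-minute batch job, run 100 times a day:
runs, job_seconds = 100, 5 * 60
print(f"per-hour billing:   ${runs * cost_hourly(job_seconds):.2f}/day")
print(f"per-second billing: ${runs * cost_per_second(job_seconds):.2f}/day")
```

For a long-running VM the two models converge; it’s exactly the short-lived, frequently restarted workload above where per-second billing cuts the bill by an order of magnitude.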

Not to be outdone, Microsoft this week added Reserved VM Instances, through which users can purchase capacity in advance in one- and three-year increments and save up to 72% compared with the on-demand price. It’s roughly modeled after AWS EC2 Reserved Instances, but adds a decidedly Microsoft slant, with even bigger discounts for users that incorporate the Azure Hybrid Use Benefit to transfer Windows Server licenses to Azure.
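The break-even math behind such commitments is worth spelling out: a deep discount only pays off if the VM actually runs. The hourly rate below is illustrative; only the 72% figure comes from Microsoft’s announcement:

```python
def effective_savings(on_demand_hourly, discount, utilization):
    """Compare a one-year reserved commitment against on-demand use.

    utilization: fraction of the year the VM would actually run if
    billed on demand. The reservation is paid for the full term
    regardless of usage.
    """
    hours_per_year = 8760
    reserved = on_demand_hourly * (1 - discount) * hours_per_year
    on_demand = on_demand_hourly * utilization * hours_per_year
    return on_demand - reserved  # positive => the reservation wins

# Always-on VM at an assumed $0.20/hour with the quoted 72% discount:
print(f"always-on: ${effective_savings(0.20, 0.72, 1.00):,.0f}/year saved")
# The same VM running only a quarter of the time:
print(f"25% duty:  ${effective_savings(0.20, 0.72, 0.25):,.0f}/year saved")
```

With a 72% discount the break-even point is 28% utilization: below that, on-demand is cheaper, which is why reservations suit steady workloads and spot or per-second pricing suits bursty ones.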

The race around cloud costs has become less about direct cuts and more about cloud pricing models that give users a variety of ways to design their workloads, said Greg Arnette, CTO and founder of Sonian, an archival storage company in Waltham, Mass., whose service works with AWS, Azure and GCP.

“At some point, it feels like pricing has to bottom out, so it has to be about more creativity on how to design and develop software for how you use the cloud to find more savings,” Arnette said.

Microsoft may have trailed competitors in this area because these cloud pricing models differ from how it’s accustomed to selling to enterprises, but its customers likely see these options on AWS and GCP and ask why they can’t get the same thing on Azure, said Owen Rogers, who heads the Cloud Price Index at 451 Research.

“For the most part, Microsoft has been really slow to tackle the issue of cloud economics,” he said. “It’s almost like Azure is now playing catch up with Google and AWS when it comes to cloud economics, but they’re also trying to be more flexible.”

Between Microsoft’s quickened pace to adapt its pricing options and Google and Amazon’s shift to per-second billing, there’s constant pressure to show ongoing value to users, Rogers said.

The per-second shift likely won’t impact users much for now, particularly for VMs that run constantly, Rogers said. He does, however, see potential benefits in the future, as users move to short-lived workloads that run on containers or are constructed around serverless functions.

Microsoft’s me-too updates go beyond price

These types of discounts aren’t new. AWS and GCP have had spot instances for years, an option Microsoft finally added in May. AWS has built out its EC2 Reserved Instances program so extensively that some worry it’s on the brink of being too complicated. Google has a set of discounts for continued usage, and added its take on reserved instances earlier this year.

The me-too updates aren’t limited to cloud pricing models. Microsoft took its turn to play catch-up this week with a spread of important features to coincide with Ignite, one of its major annual IT conferences. Among the new tools, which have popular equivalents on other cloud platforms, is Azure Data Box, with which users mail up to 100 terabytes of data from private data centers to the cloud. Microsoft also added multiple availability zones within a region, another major upgrade for customers that want more resiliency and high availability. The service is available in two regions now (East US 2 and West Europe), with previews for additional zones in the US, Europe and Asia by the end of the year.
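Why mail disks at all? For datasets the size of a full Data Box, shipping beats most network links. A back-of-the-envelope comparison (the link speeds and 80% sustained-throughput assumption are mine, not from the article):

```python
def network_days(terabytes, gbps, efficiency=0.8):
    """Days to push `terabytes` of data over a link rated at `gbps`
    gigabits per second, assuming the link sustains `efficiency`
    of its rated throughput."""
    bits = terabytes * 1e12 * 8          # decimal terabytes to bits
    seconds = bits / (gbps * 1e9 * efficiency)
    return seconds / 86400

# Moving a full 100 TB Data Box worth of data over common links:
for gbps in (0.1, 1, 10):
    print(f"{gbps:>4} Gbps link: {network_days(100, gbps):6.1f} days")
```

Even a dedicated 1 Gbps link needs close to two weeks for 100 TB, so a box that arrives in a few days of shipping can genuinely be the faster pipe.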

Trevor Jones is a senior news writer with SearchCloudComputing and SearchAWS. Contact him at tjones@techtarget.com.