Tag Archives: promises

Google releases TensorFlow Enterprise for enterprise users

Google Wednesday launched TensorFlow Enterprise, which promises long-term support for previous versions of TensorFlow on its Google Cloud Platform.

The new product, which also bundles together some existing Google Cloud products for training and deploying AI models, is intended to aid organizations running older versions of TensorFlow.

The product is also designed to help “customers who are working with previous versions of TensorFlow and also those where AI is their business,” said Craig Wiley, director of product management for Google Cloud’s AI Platform.

Open sourced by Google in 2015, TensorFlow is a machine learning (ML) and deep learning framework widely used in the AI industry. TensorFlow Enterprise, available on the Google Cloud Platform (GCP), provides security patches and select bug fixes for certain older versions of TensorFlow for up to three years.

Also, organizations using TensorFlow Enterprise will have access to “engineer-to-engineer assistance from both Google Cloud and TensorFlow teams at Google,” according to an Oct. 30 Google blog post introducing the product.

“Data scientists voraciously download the latest version of TensorFlow because of the steady pace of new, valuable features. They always want to use the latest and greatest,” Forrester Research analyst Mike Gualtieri said.

Yet, he continued, “new versions don’t always work as expected,” so the “dive-right-in” approach of data scientists is often in conflict with an enterprise’s standards.

Google’s TensorFlow Enterprise support of prior versions back to three years will accelerate enterprise adoption.
Mike Gualtieri, analyst, Forrester Research

“That’s why Google’s TensorFlow Enterprise support of prior versions back to three years will accelerate enterprise adoption,” Gualtieri said. “Data scientists and ML engineers can experiment with the latest and greatest, while enterprise operations professionals can insist that versions that work will continue to be available.”
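
That split between experimentation and stability is easy to picture in practice. The sketch below shows one hypothetical guard an operations team might add to a production job so it only runs on an approved, long-term-supported TensorFlow release; the approved-version list is an assumption for illustration, not part of TensorFlow Enterprise itself.

```python
# Minimal sketch: refuse to run a production job on an unapproved
# TensorFlow release. The supported-version set is hypothetical; an
# operations team would maintain its own list.
import tensorflow as tf

SUPPORTED_VERSIONS = {"1.15", "2.1"}  # hypothetical long-term-support pins

major_minor = ".".join(tf.__version__.split(".")[:2])
if major_minor not in SUPPORTED_VERSIONS:
    raise RuntimeError(
        f"TensorFlow {tf.__version__} is not on the approved list; "
        f"expected one of {sorted(SUPPORTED_VERSIONS)}"
    )
print(f"Running with approved TensorFlow {tf.__version__}")
```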

TensorFlow Enterprise comes bundled with Google Cloud’s Deep Learning VMs, which are preconfigured virtual machine environments for deep learning, as well as the beta version of Google Cloud’s Deep Learning Containers.

To be considered for the initial rollout of TensorFlow Enterprise, however, organizations must have spent $500,000 annually, or commit to spending $500,000 annually on Google Cloud’s Deep Learning VMs, Deep Learning Containers, or AI Platform Training and Prediction products, or some combination of those systems.

Over the past several months, Google has made progress in a campaign to offer more tools on its Google Cloud Platform to train, test, and deploy AI models. In April 2019, the tech giant unveiled the Google Cloud AI Platform, a unified AI development platform that combined a mix of new and rebranded AI development products. At the time, analysts saw the release as a move to attract more enterprise-level customers to Google Cloud.

How to plan for an Azure cloud migration

The hype surrounding the public cloud is increasingly difficult for many in IT to overlook.

The promises of an Azure cloud migration were often overshadowed by fears related to security and loss of control. Over time, the resistance to moving away from the data center thawed, as did negative attitudes toward the cloud. And Microsoft’s marketing is making the Azure drumbeat harder to ignore, especially as the end-of-life for Windows Server 2008/2008R2 draws closer and the company offers enticements such as Azure Hybrid Benefit.

Some of the traditional administrative chores associated with on-premises workloads will dissipate after a move to Azure, but this switch also presents new challenges. Administrators who’ve put in the work to ensure a smooth migration to the cloud will find they need to account for some gaps in cloud coverage and put measures in place to protect their organization from downtime.

Gauging the on-premises vs. cloud services switchover

A decision to move to the cloud typically starts with an on-site evaluation. The IT staff will take stock of its server workload inventory and then see if there’s a natural fit in a vendor’s cloud services portfolio. Administrators in Windows shops may gravitate toward the familiar and stay with Microsoft’s platform to avoid friction during the Azure cloud migration process.

Part of the benefit — and drawback — of the cloud is the constant innovation. New services and updates to existing ones arrive at a steady pace. Microsoft sells more than 200 Azure services, nearly 20 for storage alone, which can make it difficult to judge which service is the right one for a particular on-premises workload.

Is it time to take that on-premises file server — and all its hardware support headaches — and lift it into the Azure Files service? It depends: There will be some instances where an Azure service is not mature enough or is too expensive for some Windows Server roles.
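
For teams that do decide a file server belongs in Azure Files, the move itself can be scripted. The snippet below is a minimal sketch using the azure-storage-file-share Python SDK; the connection string, share name and file name are placeholders, and the storage account and share are assumed to already exist.

```python
# Minimal sketch: copy a local file into an existing Azure Files share.
# Assumes the azure-storage-file-share package is installed; the
# connection string, share name and file names are placeholders.
from azure.storage.fileshare import ShareFileClient

conn_str = "<storage-account-connection-string>"  # placeholder

file_client = ShareFileClient.from_connection_string(
    conn_str,
    share_name="dept-share",       # existing Azure Files share (placeholder)
    file_path="q3-report.xlsx",
)

with open("q3-report.xlsx", "rb") as source:
    file_client.upload_file(source)  # uploads the local file's contents
```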

Take steps to avoid downtime

It takes a lot of work to migrate multiple server roles into the cloud, including domain name services and print servers. But what happens when there’s an outage?

Major cloud providers offer a service-level agreement to guarantee uptime to their services, but problems can hit closer to home. Your internet service provider could botch an upgrade, or a backhoe could slice through an underground cable. In either scenario, the result is the same: Your business can’t access the services it needs to operate.

Outages happen all the time. There’s no way to avoid them, but you can minimize the effects. Ideally, you could flip a switch to turn on a backup of the infrastructure services and SaaS. But that type of arrangement is not financially possible for most organizations.

With a little preparation in advance, you can limp along with some of the essential infrastructure services that moved to the cloud, such as print servers and domain name services. With a spare Hyper-V host or two, your company can power up a few VMs designed to keep a few core services running in an emergency.
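
As a rough illustration of that fallback, the sketch below shows one hypothetical way to script the emergency power-up from Python by calling the standard Hyper-V Start-VM cmdlet; the host setup and VM names are assumptions, with the VMs prepared in advance as stripped-down copies of core services.

```python
# Minimal sketch, assuming a Windows host with the Hyper-V PowerShell
# module and pre-built emergency VMs; the VM names are placeholders.
import subprocess

EMERGENCY_VMS = ["dns-backup", "print-backup"]  # hypothetical VM names

for vm in EMERGENCY_VMS:
    # Start-VM is the standard Hyper-V cmdlet for powering on a VM.
    subprocess.run(
        ["powershell.exe", "-NoProfile", "-Command", f"Start-VM -Name '{vm}'"],
        check=True,
    )
    print(f"Requested start of {vm}")
```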

What’s the difference between IaaS and PaaS?

Organizations can move workloads to the cloud a few different ways, but the two most common methods are a “lift and shift” IaaS approach or using the cloud provider’s PaaS equivalent.

Taking an app and its corresponding data in a virtual machine and uploading it to the cloud typically requires little to no reworking of the application. This is called a “lift and shift” due to the minimal effort required compared to other migration options. This approach offers the path of least resistance, but it might not be the optimal approach in the long run.

Traditional Windows Server administrators — and their organizations — might unlock more benefits if they run the application as part of the Azure PaaS. For example, rather than putting a SQL Server VM into Azure and continuing the administrative legacy of patching, upgrades and monitoring, the organization could switch to Azure SQL Database. This PaaS product takes granular control away, but some of the perks include cheaper operating costs and less chance of downtime through its geo-replication feature.
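
To make that trade-off concrete, the sketch below shows roughly what the application side of the switch can look like in Python with pyodbc; the server, database and credentials are placeholders, and the same query would run unchanged against an on-premises SQL Server, with only the connection string pointing at Azure SQL Database.

```python
# Minimal sketch, assuming the pyodbc package and ODBC Driver 17 for
# SQL Server are installed; the server, database and credentials are
# placeholders for an Azure SQL Database instance.
import pyodbc

conn = pyodbc.connect(
    "DRIVER={ODBC Driver 17 for SQL Server};"
    "SERVER=tcp:example-server.database.windows.net,1433;"  # placeholder
    "DATABASE=exampledb;UID=appuser;PWD=<password>;"
    "Encrypt=yes;TrustServerCertificate=no;Connection Timeout=30;"
)

cursor = conn.cursor()
cursor.execute("SELECT @@VERSION")  # same T-SQL as on premises
print(cursor.fetchone()[0])
conn.close()
```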

What to do when cloud isn’t an option

The cloud might not be the right destination for several reasons. It might not be technically feasible to move a majority of the on-premises workloads, or it just might not fit with the current budget. But it’s not a comfortable feeling when large numbers of vendors start to follow the greener pastures up into the cloud and leave on-premises support behind.

A forward-looking and resourceful IT staff will need to adapt to this changing world as their options for applications and utilities such as monitoring tools start to shrink. The risky proposition is to stay the course with your current tool set and applications and forgo the security updates. A better option is to take the initiative and look at the market to see whether an up-and-coming competitor can fill this vacuum.

IT pros navigate the software-defined data center market

Software-defined infrastructure promises flexibility and agility in the data center, but many IT pros still struggle with challenges such as cost concerns and implementation issues.

The software-defined data center (SDDC) aims to decouple hardware from software and automate networking, compute and storage resources through a centralized software platform. IT can implement this type of data center either in increments, deploying software-defined networking, storage and compute separately, or in one fell swoop. IT pros at Gartner’s data center conference this month in Las Vegas said their organizations are interested in SDDC to address changing storage needs.

In the beginning of SDDC’s foray into the IT landscape, IT pros generally used software-defined infrastructure for one application or region. But in the past 18 months or so, more organizations have been expanding the use of software-defined infrastructure from one application to everywhere, as general-purpose infrastructure, said Daniel Bowers, research director at Gartner.

“That’s a shift,” he said. “That means software-defined is going from a niche technology — great for certain applications — to the mainstream.”

Why SDDC?

As interest levels increase, adoption in the software-defined data center market is on the rise. By 2023, 85% of large global enterprises will require the programmatic capabilities of an SDDC, as opposed to 25% today, according to Gartner.

Some IT teams are evaluating the software-defined data center market as their higher-ups demand innovation, including one financial services company.

“Our CIO is increasingly demanding to move in a software-defined direction,” said an infrastructure architect at the company, who requested anonymity because he was not authorized to speak with the media.

The company’s IT strategy is to shift away from a traditional, scale-up, monolithic storage model and toward a scale-out storage model, which enables IT to buy more storage in smaller chunks. The company also aims to update its “big, flat network” through software-defined networking’s automation and orchestration capabilities, the infrastructure architect said.

Currently, the company’s IT department struggles to deliver adequate test environments to its developers. It aims to close those gaps by spinning up an entire test environment through APIs. When developers are finished testing, they can spin it down, rinse and repeat.

A software-defined data center is a perfect match for an API-driven infrastructure, the infrastructure architect said. With the click of a few buttons, programmers can provision the temporary development environments they need to build applications.
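
The pattern the architect describes boils down to a provision-and-teardown pair of API calls. The snippet below is a purely hypothetical sketch using Python and the requests library; the endpoint, payload fields and token are stand-ins for whatever automation API a given SDDC platform exposes.

```python
# Hypothetical sketch of the spin-up/spin-down pattern; the endpoint,
# payload fields and token are placeholders, not a real SDDC API.
import requests

BASE_URL = "https://sddc.example.com/api/v1"   # placeholder endpoint
HEADERS = {"Authorization": "Bearer <token>"}  # placeholder credential


def spin_up_test_env(name: str) -> str:
    """Request a temporary test environment and return its identifier."""
    resp = requests.post(
        f"{BASE_URL}/environments",
        json={"name": name, "profile": "dev-test"},
        headers=HEADERS,
        timeout=30,
    )
    resp.raise_for_status()
    return resp.json()["id"]


def spin_down_test_env(env_id: str) -> None:
    """Tear the environment down once testing is finished."""
    resp = requests.delete(
        f"{BASE_URL}/environments/{env_id}", headers=HEADERS, timeout=30
    )
    resp.raise_for_status()
```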

For others, software-defined infrastructure is a secondary solution to an accidental problem. Wayne Morse, a network administrator and systems analyst at Jacobs Technology, an IT services company based in Dallas, runs local storage across 24 servers.

“The problem is, we’re running out of disk space on any individual server, and we need to share those resources across multiple servers,” he said.

IT didn’t implement a SAN due to cost issues, Morse said. Now, the company needs distributed storage across the data center to share resources — and software-defined storage (SDS) is a way to achieve that.

SDDC challenges

But one of the most significant advantages of an SDDC — the ability to implement it gradually — can also be its biggest downfall.

“[Software-defined storage] needs to be a part of a bigger picture,” said Julia Palmer, a research director at Gartner. “It’s very difficult, because all of the components of software-defined are developed separately.”

For Morse, that means a limited network could hinder the capabilities of SDS. He is considering upgrading the company’s network to fully take advantage of the SDS’ storage-sharing features.

Other organizations see the advantages of software-defined, but costs keep actual adoption just out of reach.

The costs of implementing and purchasing the products to make [an SDDC] happen are greater than the actual need.
Walt Bainey, director of infrastructure operations, Kent State University

Walt Bainey, the director of infrastructure operations at Kent State University in Kent, Ohio, has looked at the software-defined data center market for years, but only from afar. That’s because his IT team doesn’t roll out a lot of compute storage or make constant changes to the network.

“We are more static,” Bainey said. “The costs of implementing and purchasing the products to make [an SDDC] happen are greater than the actual need.”

Still, one ideal use case for SDDC would be in the university’s research computing cluster, which provides the infrastructure that supports research needs of professors, researchers and students. There, the IT team could license a smaller footprint of hardware, software and networking components to cut costs, Bainey said. Through software and scripts, IT can provide resources such as servers and file shares and automate routine tasks to build out the environment’s compute, storage and networking components.

“We could have our faculty members and professors self-serve and dole out things they want by spinning them up and spinning them down,” Bainey said. “I think there’s a huge advantage in that type of scenario, but we’re not there yet.”

Microsoft Azure Stack has finally arrived — or has it?

ORLANDO, Fla. — After a year or more of previews, promises and a six-month delay, Microsoft finally has rolled out its much-anticipated Azure Stack hybrid cloud offering.

Well, sort of.

Microsoft officials declared the product ready to ship to corporate customers here at the company’s annual Ignite conference, but most of the five authorized hardware OEMs in attendance said they haven’t finished their tests to certify strict compatibility with their respective servers.

Executives among the authorized hardware OEMs said they expect their respective systems won’t be ready to ship until late October, or even as late as December. Some said their Azure testing has taken longer than initially expected because Microsoft continues to make minor changes to Azure Stack’s code, and the OEMs don’t feel comfortable shipping their systems until the final code is ready.

“We won’t complete the proper testing of all our servers for another 30 days and possibly longer,” said a senior executive at one of the hardware OEMs.

At this point, OEMs’ changes or additions to Azure Stack are minor, but they don’t want to ship until Microsoft tells them it’s ready. “I think the code will be locked down for everyone’s platform and ready to go in early October,” another OEM executive said.

In a session to discuss the technical aspects of Azure Stack, Microsoft presenters said they expected Huawei, which is not represented at the show, to ship Azure Stack sometime in next year’s first quarter. They also said Wortmann AG has signed up to sell the product; however, they offered no details on when that company might ship systems containing Azure Stack.

Microsoft last year authorized only three OEMs to bundle Azure Stack: Dell EMC, Hewlett Packard Enterprise and Lenovo. Microsoft later added Cisco, followed a few months later by Avanade and Huawei.

Microsoft Azure Stack, an extension of the Azure public cloud environment, is one of the company’s most strategically important cloud offerings, and it will go head-to-head against the major public cloud platforms, particularly those from competitors such as Amazon Web Services and Google. Azure Stack allows larger corporations to build and deploy applications using the same programming tools and APIs they would use to create cloud-based applications for Azure.
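
That "same tools and APIs" claim is the core of the pitch. As a rough, hedged sketch of what it can mean in practice, the Python snippet below constructs the same Azure SDK management client twice, once against public Azure and once against a placeholder Azure Stack endpoint; the endpoint, subscription ID and credential setup are assumptions, and real Azure Stack deployments typically also require their own credential scope and API profile, which are omitted here.

```python
# Minimal sketch, assuming the azure-identity and azure-mgmt-resource
# packages; the Azure Stack endpoint and subscription ID are placeholders,
# and credential scopes/API profiles are left out for brevity.
from azure.identity import DefaultAzureCredential
from azure.mgmt.resource import ResourceManagementClient

credential = DefaultAzureCredential()
subscription_id = "<subscription-id>"  # placeholder

# Public Azure: the SDK's default management endpoint.
public_client = ResourceManagementClient(credential, subscription_id)

# Azure Stack: the same client, pointed at the operator's local
# Azure Resource Manager endpoint.
stack_client = ResourceManagementClient(
    credential,
    subscription_id,
    base_url="https://management.local.azurestack.external",  # placeholder
)

# The calling code is identical either way.
stack_client.resource_groups.create_or_update("demo-rg", {"location": "local"})
```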

Microsoft has also delivered several updates to Azure, including a preview of Azure Machine Learning, a set of tools aimed at developers interested in creating artificial-intelligence-based applications that will work both in the cloud and on premises.

The company also unveiled the integration of its Azure Cosmos DB with its Azure Functions serverless offering. This marriage allows corporate and third-party developers to produce applications with only a few lines of code that react quickly to a range of events, from critical changes in databases to data updates from internet-of-things sensors.
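
A minimal sketch of what that integration looks like from the developer's side appears below: a Python Azure Functions handler invoked by the Cosmos DB change feed. The trigger binding itself is declared in the function's function.json (omitted here), and the logging details and field names are illustrative assumptions.

```python
# Minimal sketch of a Python Azure Functions handler driven by the
# Cosmos DB change feed; the cosmosDBTrigger binding lives in the
# function's function.json, which is omitted here.
import logging

import azure.functions as func


def main(documents: func.DocumentList) -> None:
    """Runs whenever documents change in the monitored Cosmos DB container."""
    if not documents:
        return
    logging.info("Change feed delivered %d document(s)", len(documents))
    for doc in documents:
        # Each document exposes the changed item's fields dict-style.
        logging.info("Changed document id: %s", doc.get("id"))
```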

Microsoft also has updated its Azure Security Center with new features to reduce vulnerabilities and improve threat protection, as well as tighten security for workloads in a hybrid cloud environment.

Microsoft Azure Stack’s arrival, along with the new development tools and the integration of some existing tools for Azure, should open up new opportunities for both corporate and third-party developers, according to Scott Guthrie, executive vice president for Microsoft’s cloud and enterprise group, in a keynote at this week’s conference.

“I think it will be easier for developers to build one application and have it run in Azure or locally on Azure Stack,” he said. “This should create new use cases, such as edge and disconnected solutions, that can meet regulatory requirements. Often, the most difficult thing to deal with in any applications is the data, and dealing with data in a hybrid application especially can be very expensive.”

Ed Scannell is a senior executive editor with TechTarget. Contact him at [email protected].

Atlassian chat tool revamp faces long odds in ChatOps shops

An Atlassian ChatOps product makes all the right promises, but IT pros are skeptical it will fare any better than Atlassian HipChat in the pursuit of rival Slack.

The Atlassian chat tool, Stride, combines voice, video and chat in one interface that can be used to make decisions and take action on those decisions from user-flagged messages within team discussions. Users can also mute notifications and incoming messages while in Focus Mode on Stride.

With Stride’s introduction last week, Atlassian specifically called out Slack and Microsoft’s Teams product, and dropped heavy hints that HipChat users will be pushed — the Stride website uses the word “encouraged” — to move to Stride soon. Enterprises that already use the SaaS version of HipChat are enthused about Stride, but there are plenty of skeptics on the sidelines about Atlassian chat tools’ quest to capture market share, particularly from Slack.

“From what I can tell, it’s mostly a rebranding effort to try to get people to use their product as a true alternative to Slack,” said Chris Moyer, vice president of technology at ACI Information Group, a content aggregator based in Ipswich, Mass. Moyer is also a TechTarget contributor who closely follows the ChatOps trend. “They’re adding some features to it for sure, but they’re just a little too late to the game.”

Moyer’s company uses Flowdock for chat, and the tool has stored the company’s entire chat history. Despite interesting features such as integrated voice and video collaboration available with Atlassian’s Stride, the firm will be loath to move away from Flowdock unless Atlassian provides import utilities to siphon such data out of competitors’ platforms, Moyer said. No such tools have been publicly discussed by Atlassian.

They’re adding some features to it for sure, but they’re just a little too late to the game.
Chris Moyer, vice president of technology, ACI Information Group

ChatOps tools are still emerging and market share is hard to pin down, but analysts said Slack has the early momentum.

To compete, Atlassian will surround Stride with integrations into the other products its customers already use, said Rob Stroud, an analyst at Forrester Research. Such integrations could include hooks into its own Confluence and JIRA, or the Kanban boards Atlassian acquired with Trello.

Atlassian chat tool’s cloudy dilemma

For existing enterprise customers, however, the drawback with Stride is that for the foreseeable future, it will be offered only as SaaS. Large companies strongly prefer on-premises deployments, and some of these customers perceive Atlassian as too focused on cloud-based products, which is why they lobbied for the HipChat Data Center product, released in June. There are hints that Stride will integrate with other Atlassian on-premises products such as JIRA Server, but the company is mum about any plans for a Stride Server product. Other recent products, such as Bitbucket Pipelines, are also SaaS-only.

Some enterprise Atlassian chat customers that currently use the SaaS version of HipChat said they are interested in Stride SaaS.

“[An] on-premises [version] would have some advantages, like legal control of our conversation content, but we could work with the cloud version, which is what we do with HipChat currently,” said Eric Hilfer, vice president of software engineering at Rosetta Stone, in Arlington, Va. The company uses Atlassian tools in its DevOps pipeline.

Rosetta Stone wants to integrate voice, video and screen-share meetings into text conversations that are linked into JIRA and Confluence workflows, Hilfer said. Right now the company uses Google Hangouts for video meetings, and has developer conversations in HipChat, so video meetings aren’t wired directly into JIRA issues as developers discuss them.

Sticking with a SaaS-only product could hurt Atlassian’s ChatOps ambitions in the long run, Moyer said.

“If they target more enterprise-level targets by offering on-premises versions, they’ll have a lot more luck — securing an on-premises application is much simpler,” he said.

Stride is still in preview and Atlassian has added customers to an early access waitlist. It doesn’t yet offer the kinds of integrations Hilfer wants, though that seems to be the plan. Meanwhile, as a relatively young IT software company, Atlassian has yet to discontinue a product such as HipChat, which will be “an interesting process to watch,” Stroud said.

Beth Pariseau is senior news writer for TechTarget’s Data Center and Virtualization Media Group. Write to her at [email protected] or follow @PariseauTT on Twitter.

Microsoft acquires Cycle Computing to accelerate Big Computing in the cloud – The Official Microsoft Blog

From finding a cure for cancer to making vehicles safer to fulfilling the promises of artificial intelligence, today’s complex problems require the ability to harness massive amounts of computing power. For too long, Big Computing has been accessible only to the most well-funded organizations. At Microsoft, we believe that access to Big Computing capabilities in the cloud has the power to transform many businesses and will be at the forefront of breakthrough experimentation and innovation in the decades to come. Thus far, we have made significant investments across our infrastructure, services and partner ecosystem to realize this vision.

As a further step in this direction, I’m pleased to share that we’ve acquired Cycle Computing, a leader in cloud computing orchestration, to help make it easier than ever for customers to use High-Performance Computing (HPC) and other Big Computing capabilities in the cloud. The cloud is quickly changing the world of Big Compute, giving customers the on-demand power and infrastructure necessary to run massive workloads at scale without the overhead. Your compute power is no longer measured or limited by the square footage of your data center.

Azure has a massive global footprint, larger than that of any other major cloud provider. It also has powerful infrastructure, InfiniBand support for fast networking and state-of-the-art GPU capabilities. Combining the most specialized Big Compute infrastructure available in the public cloud with Cycle Computing’s technology and years of experience with the world’s largest supercomputers, we open up many new possibilities. Most importantly, Cycle Computing will help customers accelerate their movement to the cloud, and make it easy to take advantage of the most performant and compliant infrastructure available in the public cloud today.

We’ve already seen explosive growth on Azure in the areas of artificial intelligence, the Internet of Things and deep learning. As customers continue to look for faster, more efficient ways to run their workloads, Cycle Computing’s depth and expertise around massively scalable applications make the team a great fit to join Microsoft. Their technology will further enhance our support of Linux HPC workloads and make it easier to extend on-premises workloads to the cloud.

Customers like the City of Hope and MetLife have already benefited from the flexibility and scalability of Azure HPC data-processing capabilities to achieve faster and more accurate results, while saving significant infrastructure costs. We look forward to hearing many more success stories from other customers as well, and we’re excited for you to put Azure to the test.

In the meantime, I’m excited to welcome the Cycle Computing team to Microsoft, and look forward to seeing the impact their technology and talent will have on Azure and the customer experience.

You can also read a blog post from Jason Stowe, founder and CEO of Cycle Computing, here.
