
IT pros look to VMware’s GPU acceleration projects to kick-start AI

SAN FRANCISCO — IT pros who need to support emerging AI and machine learning workloads see promise in a pair of developments VMware previewed this week to bolster support for GPU-accelerated computing in vSphere.

GPUs are uniquely suited to handle the massive processing demands of AI and machine learning workloads, and chipmakers like Nvidia Corp. are now developing and promoting GPUs specifically designed for this purpose.

A previous partnership with Nvidia introduced capabilities that allowed VMware customers to assign GPUs to VMs, but not more than one GPU per VM. The latest development, which Nvidia calls its Virtual Compute Server, allows customers to assign multiple virtual GPUs to a VM.

Nvidia’s Virtual Compute Server also works with VMware’s vMotion capability, allowing IT pros to live migrate a GPU-accelerated VM to another physical host. The companies have also extended this partnership to VMware Cloud on AWS, allowing customers to access Amazon Elastic Compute Cloud bare-metal instances with Nvidia T4 GPUs.

VMware gave the Nvidia partnership prime time this week at VMworld 2019, playing a prerecorded video of Nvidia CEO Jensen Huang talking up the companies’ combined efforts during Monday’s general session. However, another GPU acceleration project also caught the eye of some IT pros who came to learn more about VMware’s recent acquisition of Bitfusion.io Inc.

VMware acquired Bitfusion earlier this year and announced its intent to integrate the startup’s GPU virtualization capabilities into vSphere. Bitfusion’s FlexDirect connects GPU-accelerated servers over the network and provides the ability to assign GPUs to workloads in real time. The company compares its GPU virtualization approach to network-attached storage because it disaggregates GPU resources and makes them accessible to any server on the network as a pool of resources.

The software’s unique approach also allows customers to assign just portions of a GPU to different workloads. For example, an IT pro might assign 50% of a GPU’s capacity to one VM and 50% to another VM. This approach can allow companies to more efficiently use their investments in expensive GPU hardware, company executives said. FlexDirect also offers extensions to support field-programmable gate arrays and application-specific integrated circuits.
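FlexDirect’s partitioning mechanism is proprietary, but the idea of capping a workload at a fraction of a GPU has a rough analogy in mainstream frameworks. As a loose illustration only, not Bitfusion’s implementation, the TensorFlow 1.x API can limit one process to part of a GPU’s memory, mirroring the 50% split above:

import tensorflow as tf  # TensorFlow 1.x API

# Cap this process at roughly half the GPU's memory, leaving the
# remainder free for a second workload on the same card.
gpu_options = tf.GPUOptions(per_process_gpu_memory_fraction=0.5)
session = tf.Session(config=tf.ConfigProto(gpu_options=gpu_options))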

“I was really happy to see they’re doing this at the network level,” said Kevin Wilcox, principal virtualization architect at Fiserv, a financial services company. “We’ve struggled with figuring out how to handle the power and cooling requirements for GPUs. This looks like it’ll allow us to place our GPUs in a segmented section of our data center that can handle those power and cooling needs.”

AI demand surging

Many companies are only beginning to research and invest in AI capabilities, but interest is growing rapidly, said Gartner analyst Chirag Dekate.

“By end of this year, we anticipate that one in two organizations will have some sort of AI initiative, either in the [proof-of-concept] stage or the deployed stage,” Dekate said.

In many cases, IT operations professionals are being asked to move quickly on a variety of AI-focused projects, a trend echoed by multiple VMworld attendees this week.

“We’re just starting with AI, and looking at GPUs as an accelerator,” said Martin Lafontaine, a systems architect at Netgovern, a software company that helps customers comply with data locality laws.

“When they get a subpoena and have to prove where [their data is located], our solution uses machine learning to find that data. We’re starting to look at what we can do with GPUs,” Lafontaine said.

Is GPU virtualization the answer?

Recent efforts to virtualize GPU resources could open the door to broader use of GPUs for AI workloads, but potential customers should pay close attention to benchmark testing against bare-metal deployments in the coming years, Gartner’s Dekate said.

So far, he has not encountered a customer using these GPU virtualization tactics for deep learning workloads at scale. Today, most organizations still run these deep learning workloads on bare-metal hardware.

 “The future of this technology that Bitfusion is bringing will be decided by the kind of overheads imposed on the workloads,” Dekate said, referring to the additional compute cycles often required to implement a virtualization layer. “The deep learning workloads we have run into are extremely compute-bound and memory-intensive, and in our prior experience, what we’ve seen is that any kind of virtualization tends to impose overheads. … If the overheads are within acceptable parameters, then this technology could very well be applied to AI.”


Smart cloud storage tier management added to Druva cloud

IT pros could significantly reduce their monthly bills with the new intelligent data tiering feature from Druva.

Based on frequency of access, data can be considered cold, warm or hot, and Amazon Web Services (AWS) has respective storage tiers for them: Glacier Deep Archive, Glacier and S3. The hotter the tier, the more expensive it is to maintain.
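Druva’s tiering logic is proprietary, but the underlying AWS mechanics can be sketched with a plain S3 lifecycle rule that moves aging objects to colder tiers. A minimal sketch using boto3, with a hypothetical bucket name and illustrative day thresholds:

import boto3

s3 = boto3.client("s3")
s3.put_bucket_lifecycle_configuration(
    Bucket="example-backup-bucket",  # hypothetical bucket name
    LifecycleConfiguration={
        "Rules": [{
            "ID": "tier-colder-backups",
            "Filter": {"Prefix": "backups/"},
            "Status": "Enabled",
            "Transitions": [
                {"Days": 30, "StorageClass": "GLACIER"},        # warm to cold
                {"Days": 180, "StorageClass": "DEEP_ARCHIVE"},  # cold to coldest
            ],
        }]
    },
)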

Druva Cloud Platform, a software product built on AWS for cloud data protection and management, has a new functionality that uses machine learning algorithms to assess which tier backup data belongs in and automatically moves it there, optimizing cloud storage costs. Customers would also have oversight of the data and be able to manage it manually if they choose.

NetApp Cloud Volumes ONTAP and Hitachi Vantara have similar automated cloud tiering features to optimize data storage between on-premises environments and public clouds. Druva’s upcoming feature is different in that it works with all layers of AWS storage.

Druva is a privately held software company based in Sunnyvale, Calif., and completed a $130 million funding round earlier this year. It acquired CloudLanes around that same time, adding its on-premises data ingestion capability to Druva Cloud Platform. In 2018, Druva acquired CloudRanger, which provides data protection for AWS.

AWS supplies its customers with the tools to build tiered storage and prices the tiers competitively, according to Mike Palmer, chief product officer at Druva, but customers have to rely on their own know-how in order to stitch together a system to manage it. They’d have to build their own indexes for data visibility and develop a clear understanding of the pricing and benefits of each cloud storage tier, then come up with ways to determine where their data should go to maximize savings.

“Amazon makes the customer the systems integrator, and that’s by design. They’re providing the absolute best price and performance, but it’s your job to [put it together],” Palmer said.

Matching data to the wrong tier could be costly for a business. The deeper the archive, the larger the penalty for pulling data out early, and there are fees for putting data into AWS and taking it out. Palmer said many enterprises understand the potential benefits of the cloud, but they are also worried that mismanagement will wipe out any cost savings. Intelligent tiering provided by Druva cloud is designed to remove the need for this level of Amazon expertise.
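The tradeoff is easy to see in rough numbers. The sketch below uses placeholder per-GB prices, not current AWS list prices, to show how retrieval fees can erase the savings of a colder tier:

# Placeholder per-GB monthly prices, for illustration only.
TIERS = {
    "S3":                   {"store": 0.023,   "retrieve": 0.00},
    "Glacier":              {"store": 0.004,   "retrieve": 0.03},
    "Glacier Deep Archive": {"store": 0.00099, "retrieve": 0.10},
}

def monthly_cost(tier, gb_stored, gb_retrieved):
    price = TIERS[tier]
    return gb_stored * price["store"] + gb_retrieved * price["retrieve"]

# 10 TB stored, 500 GB pulled back in one month: the coldest tier can
# end up costing more than Glacier once retrieval fees are counted.
for tier in TIERS:
    print(tier, round(monthly_cost(tier, 10_000, 500), 2))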

“We’re going to make it easy for an untrained person to be able to clearly understand what’s happening to their data, how to get it back, and how to manage the cost,” Palmer said.

From a storage standpoint, the cloud has some benefits over on-premises options, said Steven Hill, senior analyst at IT market analyst firm 451 Research. Consumption-based pricing prevents overprovisioning (although storage as a service exists for on-premises infrastructure), and there are often five or more storage tiers with a wide range of response times.

Realizing those benefits can be difficult, as more choices also mean more complexity. Druva Cloud’s new automated tiering addresses the complexity of balancing the availability of data with the cost of storing it.

“Having more choices can be really cost-efficient, provided that you have an abstraction layer capable of placing data in the appropriate tiers efficiently and automatically,” Hill said.

Druva’s intelligent tiering not only works for existing storage tiers in AWS, but future ones as well. The machine learning algorithm works behind the scenes to predict data usage patterns and compares them against the costs of different AWS tiers.
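Druva has not published the algorithm, but the general shape of such a policy can be sketched: estimate future access frequency from history, then pick the cheapest tier that fits. A toy version with invented thresholds:

def predict_accesses(history, alpha=0.5):
    """Exponentially weighted average of past monthly access counts."""
    estimate = 0.0
    for count in history:
        estimate = alpha * count + (1 - alpha) * estimate
    return estimate

def choose_tier(history):
    predicted = predict_accesses(history)
    if predicted >= 1:    # still touched monthly: keep hot
        return "S3"
    if predicted >= 0.1:  # touched occasionally: cold
        return "Glacier"
    return "Glacier Deep Archive"  # rarely touched: coldest

print(choose_tier([4, 2, 1, 0, 0, 0]))  # cooling data drifts toward Glacier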

Hill said many organizations are understandably hesitant to allow automated processes to determine storage policy, which is why Druva’s new function allows users who know their data access patterns to set tiering themselves. However, as AWS and other cloud vendors introduce more tiers to their cloud storage services, the complexity will grow to a point where automated management is the only logical step.


“A growing number of customers won’t have the time or skills needed to manually make these decisions as they evolve,” Hill said.

Hill believes that, over time, the automation of hybrid storage tiering will become efficient and reliable enough that “it will gain the trust of even the most apprehensive storage or data protection administrator.”

Druva’s automated cloud tiering feature is currently in early access and will be available in September 2019. The feature will be free and can be enabled directly from Druva’s backup policy interface. After providing information about their data retention needs, customers will be presented with an adjusted bill for their existing Druva subscription that factors in the discounts from moving data to colder tiers. Customers only receive a bill from Druva, not AWS.


How to manage Windows with Puppet

IT pros have long aligned themselves with either Linux or Windows, but it has grown increasingly common for organizations to seek the best of both worlds.

For traditional Windows-only shops, the thought of managing Windows systems with a server-side tool made for Linux may be unappealing, but Puppet has increased Windows Server support over the years and offers capabilities that System Center Configuration Manager and Desired State Configuration do not.

Use existing Puppet infrastructure

Many organizations use Puppet to manage Linux systems and SCCM to manage Windows Servers. SCCM works well for managing workstations, but admins could manage Windows more easily with Puppet code. For example, admins can easily audit a system configuration by looking at code manifests.

Admins manage Windows with Puppet agents installed on Puppet nodes. They use modules and manifests to deploy node configurations. If admins manage both Linux and Windows systems with Puppet, it provides a one-stop shop for all IT operations.

Combine Puppet and DSC for greater support

Admins need basic knowledge of Linux to use a Puppet master service. They do not need to have a Puppet master because they can write manifests on nodes and apply them, but that is likely not a scalable option. For purely Windows-based shops, training in both Linux and Puppet will make taking the Puppet plunge easier. It requires more time to set up and configure Windows systems in Puppet the same way they would be configured in SCCM. Admins should design the code before users start writing and deploying Puppet manifests or DevOps teams add CI/CD pipelines.


DSC is one of the first areas admins look to when they manage Windows with Puppet code. The modules are written in C# or PowerShell. DSC has no native monitoring GUI, which makes getting an overall view of a machine’s configuration complex. Puppet, in its enterprise version, has native support for web-based reporting; admins can also use a free, open source option, such as Foreman.

Due to the number of community modules available on the PowerShell Gallery, DSC receives the most Windows support for code-based management, but admins can combine Puppet with DSC to get complete coverage for Windows management. Puppet contains native modules and a DSC module with PowerShell DSC modules built in. Admins may also use the dsc_lite module, which can use almost any DSC module available in Puppet. The dsc_lite modules are maintained outside of Puppet completely.

How to use Puppet to disable services

Administrators can use Puppet to ensure services are running or disabled. Using native Puppet support without a DSC Puppet module, admins could write a manifest that ensures the Netlogon, BITS and W3SVC services are always running when a Puppet run completes. Place the name of each Windows service in the Puppet array $svc_name:

$svc_name = ['netlogon', 'BITS', 'W3SVC']

# Ensure each service in the array is running after every Puppet run.
service { $svc_name:
  ensure => 'running',
}

In the next example, the Puppet DSC module ensures that the web server Windows feature is installed on the node and reboots if a pending reboot is required.

# Install the Web-Server feature through the Puppet DSC module.
dsc_windowsfeature { 'webserverfeature':
  dsc_ensure => 'present',
  dsc_name   => 'Web-Server',
}

# Reboot the node only when DSC reports a pending reboot.
reboot { 'dsc_reboot':
  message => 'Puppet needs to reboot now',
  when    => 'pending',
  onlyif  => 'pending_dsc_reboot',
}


AIOps platforms delve deeper into root cause analysis

The promise of AIOps platforms for enterprise IT pros lies in their potential to provide automated root cause analysis, and early customers have begun to use these tools to speed up problem resolution.

The city of Las Vegas needed an IT monitoring tool to replace a legacy SolarWinds deployment in early 2018 and found FixStream’s Meridian AIOps platform. The city introduced FixStream to its Oracle ERP and service-oriented architecture (SOA) environments as part of its smart city project, an initiative that will see municipal operations optimized with a combination of IoT sensors and software automation. Las Vegas is one of many U.S. cities working with AWS, IBM and other IT vendors on such projects.

FixStream’s Meridian offers an overview of how business process performance corresponds to IT infrastructure, which matters more as the city updates its systems more often and shortens each update as part of its digital transformation, said Michael Sherwood, CIO for the city of Las Vegas.

“FixStream tells us where problems are and how to solve them, which takes the guesswork, finger-pointing and delays out of incident response,” he said. “It’s like having a new help desk department, but it’s not made up of people.”

The tool first analyzes a problem and offers insights as to the cause. It then automatically creates a ticket in the company’s ServiceNow IT service management system. ServiceNow acquired DxContinuum in 2017 and released its intellectual property as part of a similar help desk automation feature, called Agent Intelligence, in January 2018, but it’s the high-level business process view that sets FixStream apart from ServiceNow and other tools, Sherwood said.

FixStream’s Meridian AIOps platform creates topology views that illustrate the connections between parts of the IT infrastructure and how they underpin applications, along with how those applications underpin business processes. This was a crucial level of detail when a credit card payment system crashed shortly after FixStream was introduced to monitor Oracle ERP and SOA this spring.

“Instead of telling us, ‘You can’t take credit cards through the website right now,’ FixStream told us, ‘This service on this Oracle ERP database is down,'” Sherwood said.

This system automatically correlated an application problem to problems with deeper layers of the IT infrastructure. The speedy diagnosis led to a fix that took the city’s IT department a few hours versus a day or two.
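FixStream’s algorithm is proprietary, but the correlation pattern Sherwood describes can be sketched as a walk down an application-to-infrastructure dependency graph, surfacing the deepest unhealthy component. A toy illustration with hypothetical component names:

# Toy dependency graph; names are invented to mirror the incident above.
DEPENDS_ON = {
    "credit-card-payments": ["oracle-erp-app"],
    "oracle-erp-app": ["oracle-erp-db"],
    "oracle-erp-db": ["storage-array"],
}
HEALTHY = {
    "credit-card-payments": False,
    "oracle-erp-app": False,
    "oracle-erp-db": False,
    "storage-array": True,
}

def root_cause(component):
    # Follow failing dependencies downward; the deepest unhealthy
    # component is the likely root cause.
    for dep in DEPENDS_ON.get(component, []):
        if not HEALTHY[dep]:
            return root_cause(dep)
    return component

print(root_cause("credit-card-payments"))  # -> oracle-erp-db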

AIOps platform connects IT to business performance


Some IT monitoring vendors associate application performance management (APM) data with business outcomes in a way similar to FixStream. AppDynamics, for example, offers Business iQ, which associates application performance with business performance metrics and end-user experience. Dynatrace offers end-user experience monitoring and automated root cause analysis based on AI.

The differences lie in the AIOps platforms’ deployment architectures and infrastructure focus, said Nancy Gohring, an analyst with 451 Research who specializes in IT monitoring tools and wrote a white paper that analyzes FixStream’s approach.

“Dynatrace and AppDynamics use an agent on every host that collects app-level information, including code-level details,” Gohring said. “FixStream uses data collectors that are deployed once per data center, which means they are more similar to network performance monitoring tools that offer insights into network, storage and compute instead of application performance.”

FixStream integrates with both Dynatrace and AppDynamics to join its infrastructure data to the APM data those vendors collect. Its strongest differentiation is in the way it digests all that data into easily readable reports for senior IT leaders, Gohring said.

“It ties business processes and SLAs [service-level agreements] to the performance of both apps and infrastructure,” she said.

OverOps fuses IT monitoring data with code analysis

While FixStream makes connections between low-level infrastructure and overall business performance, another AIOps platform, made by OverOps, connects code changes to machine performance data. So, DevOps teams that deploy custom applications frequently can understand whether an incident is related to a code change or an infrastructure glitch.

OverOps’ eponymous software has been available for more than a year, and larger companies, such as Intuit and Comcast, have recently adopted it. OverOps identified the root cause of a problem with Comcast’s Xfinity cable systems as related to fluctuations in remote-control batteries, said Tal Weiss, co-founder and CTO of OverOps, based in San Francisco.

OverOps uses an agent that can be deployed on containers, VMs or bare-metal servers, in public clouds or on premises. It monitors the Java Virtual Machine or Common Language Runtime interface for .NET apps. Each time code loads into the CPU via these interfaces, OverOps captures a data signature and compares it with code it’s previously seen to detect changes.
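OverOps has not published its signature format, but the change-detection idea is simple to sketch: fingerprint each code unit as it loads and flag fingerprints that differ from earlier loads. A conceptual analogy only:

import hashlib

seen = {}  # code unit name -> last observed signature

def on_code_load(unit_name, code_bytes):
    """Conceptual analogy: hash loaded code and flag changes between loads."""
    signature = hashlib.sha256(code_bytes).hexdigest()
    if unit_name in seen and seen[unit_name] != signature:
        print(f"{unit_name}: code changed since last load")
    seen[unit_name] = signature

on_code_load("BillingService", b"class BillingService { ... }")
on_code_load("BillingService", b"class BillingService { /* new */ }")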

[Screenshot: OverOps exports reliability data to a Grafana dashboard for visual display.]

From there, the agent produces a stream of log-like files that contain both machine data and code information, such as the number of defects and the developer team responsible for a change. The tool is primarily intended to catch errors before they reach production, but it can be used to trace the root cause of production glitches, as well.

“If an IT ops or DevOps person sees a network failure, with one click, they can see if there were code changes that precipitated it, if there’s an [Atlassian] Jira ticket associated with those changes and which developer to communicate with about the problem,” Weiss said.

In August 2018, OverOps updated its AIOps platform to feed code analysis data into broader IT ops platforms with a RESTful API and support for StatsD. Available integrations include Splunk, ELK, Dynatrace and AppDynamics. In the same update, the OverOps Extensions feature also added a serverless AWS Lambda-based framework, as well as on-premises code options, so users can create custom functions and workflows based on OverOps data.
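StatsD itself is a simple UDP line protocol, which is part of what makes it a convenient export target. A minimal sketch of emitting a counter; the metric name is hypothetical, not an actual OverOps metric:

import socket

def emit_statsd_counter(metric, value, host="localhost", port=8125):
    # StatsD counter wire format: "<name>:<value>|c" sent over UDP.
    payload = f"{metric}:{value}|c".encode()
    socket.socket(socket.AF_INET, socket.SOCK_DGRAM).sendto(payload, (host, port))

emit_statsd_counter("overops.new_errors", 1)  # hypothetical metric name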

“There’s been a platform vs. best-of-breed tool discussion forever, but the market is definitely moving toward platforms — that’s where the money is,” Gohring said.

For Sale – 13” MacBook Pro 2017 with AppleCare until Sept 2020 – £1100

Just in case there weren’t enough MacBook Pros for sale in the classifieds

Selling my 13” MacBook Pro I traded here some weeks back. It’s a 2017 model with 8GB RAM and 256GB SSD. It also has AppleCare until Sept 2020.

The MacBook is as new and comes boxed with the original power adapter and a blue hard shell case. It has been used very little, battery count is only 8. No marks or scratches I can see. It has been very well looked after.

I’m selling to fund a MacBook Pro with a bigger SSD. I may be interested in a trade for a 2017/2018 MacBook with 512GB SSD.

Price is £1100 inc delivery.

Price and currency: £1100
Delivery: Delivery cost is included within my country
Payment method: BACS/PPG
Location: London
Advertised elsewhere?: Advertised elsewhere
Prefer goods collected?: I have no preference




IT infrastructure management software learns analytics tricks

IT infrastructure management software has taken on a distinctly analytical flavor, as enterprise IT pros struggle to keep up with the rapid pace of DevOps and technology change.

Enterprise IT vendors that weren’t founded with AIOps pedigrees have added data-driven capabilities to their software in 2018, while startups focused on AI features have turned heads, even among traditional enterprise companies. IT pros disagree on the ultimate extent of AI’s IT ops automation takeover. But IT infrastructure management software that taps data analytics for decision-making has replaced tribal knowledge and manual intervention at most companies.

For example, Dolby Laboratories, a sound system manufacturer based in San Francisco, replaced IT monitoring tools from multiple vendors with OpsRamp’s data-driven IT ops automation software, even though Dolby is wary of the industry’s AIOps buzz. OpsRamp monitors servers and network devices under one interface, and it can automatically discover network configuration information, such as subnets and devices attached to the network.

“You can very easily get a system into the monitoring workflow, whereas a technician with his own separate monitoring system might not take the last step to monitor something, and you have a problem when something goes down,” said Thomas Wong, Dolby’s senior director of enterprise applications. OpsRamp’s monitoring software alerts are based on thresholds, but they also suggest remediation actions.

Dolby’s “killer app” for OpsRamp’s IT ops automation is to patch servers and network devices, replacing manual procedures that required patches to be downloaded separately and identified by a human as critical, Wong said.

Still, Wong said Dolby will for now avoid OpsRamp version 5.0, which introduced new AIOps capabilities in June 2018.

“We’re staying away from all of that,” he said. “I think it’s just the buzz right now.”

Data infiltrates IT infrastructure management software

While some users remain cautious or even skeptical of AIOps, IT infrastructure management software of every description — from container orchestration tools to IT monitoring and incident response utilities — now offers some form of analytics-driven automation. That ubiquity indicates at least some user demand, and IT pros everywhere must grapple with AIOps, as tools they already use add AI and analytics features.

PagerDuty, for example, has concentrated on data analytics and AI additions to its IT incident response software in 2017 and 2018. A new AI feature added in June 2018, Event Intelligence, identifies patterns in human incident remediation behavior and uses those patterns to understand service dependencies and communicate incident response suggestions to operators when new incidents occur.

“The best predictor of what someone will do in the future is what they actually do, not what they think they will do,” said Rachel Obstler, vice president of products at PagerDuty, based in San Francisco. “If a person sees five alerts and an hour later selects them together and says, ‘Resolve all,’ that tells us those things are all related better than looking at the alert payloads or the times they were delivered.”
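PagerDuty’s model is proprietary, but the signal Obstler describes — alerts an operator resolves together are probably related — can be sketched as simple co-resolution counting. A toy version:

from collections import Counter
from itertools import combinations

co_resolved = Counter()

def record_bulk_resolve(alert_ids):
    # Alerts selected and resolved in one action are treated as
    # evidence that they are related.
    for a, b in combinations(sorted(alert_ids), 2):
        co_resolved[(a, b)] += 1

def likely_related(a, b, threshold=3):
    return co_resolved[tuple(sorted((a, b)))] >= threshold

record_bulk_resolve(["db-latency", "api-errors", "queue-backlog"])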

PagerDuty users are intrigued by the new feature, but skeptical about IT ops automation tools’ reach into automated incident remediation based on such data.

“I can better understand the impact [of incidents] on our organization, where I need to make investments and why, and I like that it’s much more data-driven than it used to be,” said Andy Domeier, director of technology operations at SPS Commerce, a communications network for supply chain and logistics businesses based in Minneapolis.

SPS has built webhook integrations between PagerDuty alerts and AWS Lambda functions to attach documentation to each alert, which saves teams the time of searching a company wiki for information on how to resolve an alert. This integration also facilitates delivery of recent change information.
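SPS has not published its integration code; the sketch below shows one plausible shape for such a Lambda handler, using PagerDuty’s v2 webhook payload and its incident notes endpoint. Treat the field names, wiki URL and token handling as assumptions to verify:

import json
import urllib.request

API_TOKEN = "REPLACE_ME"  # hypothetical; store real tokens in a secrets manager
RUNBOOK_URL = "https://wiki.example.com/runbooks/{service}"  # hypothetical wiki

def handler(event, context):
    """Receive a PagerDuty webhook and attach a runbook link as an incident note."""
    body = json.loads(event["body"])
    for message in body.get("messages", []):
        incident = message["incident"]
        service = incident["service"]["name"]
        note = {"note": {"content": f"Runbook: {RUNBOOK_URL.format(service=service)}"}}
        request = urllib.request.Request(
            f"https://api.pagerduty.com/incidents/{incident['id']}/notes",
            data=json.dumps(note).encode(),
            headers={
                "Authorization": f"Token token={API_TOKEN}",
                "Content-Type": "application/json",
                "From": "ops@example.com",  # PagerDuty requires a requester email
            },
            method="POST",
        )
        urllib.request.urlopen(request)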

“But if I want to do something meaningful in response to alerts, I have to be inside my network,” Domeier said. “I don’t think PagerDuty would be able to do that kind of thing at scale, because everyone’s environment is different.”

From IT ops automation to AIOps

AIOps is far from mainstream, but more companies aspire to full data-driven IT ops automation. In TechTarget’s 2018 IT Priorities Survey, nearly as many people said they would adopt some form of AI (13.7%) as would embrace DevOps (14.5%). And IT infrastructure management software vendors have wasted no time to serve up AIOps features, as AI and machine learning buzz crests in the market.

Dynatrace’s IT monitoring tool performs predictive analytics and issues warnings to IT operators in shops such as Barbri, which offers legal bar review courses in Dallas.

“We just had critical performance issues surface recently that Dynatrace warned us about,” said Mark Kaplan, IT director at Barbri, which has used Dynatrace for four years. “We were able to react before our site went down.”


The monitoring vendor released Dynatrace Artificial Virtual Intelligence System, or DAVIS, an AI-powered digital virtual assistant for IT operators, in early 2017. And Barbri now uses it frequently for IT root-cause analysis and incident response. Barbri will also evaluate Dynatrace log analytics features to possibly replace Splunk.

Kaplan has already grown accustomed to daily reports from DAVIS and wants it to do more, such as add a voice interface similar to Amazon Echo’s Alexa and automated incident response.

“We can already get to the point of self-remediation if we make the proper scripts in a convoluted setup,” he said. “But we see something smoother coming in the future.”

Since Barbri rolled out DAVIS, IT ops pros have embraced a more strategic role as infrastructure architects, rather than putting out fires. Nevertheless, enterprises still insist on control. Even as AIOps tools push the boundaries of machine control over other machines, unattended AI remains a distant concept for IT infrastructure management software, if it ever becomes reality.

“No one’s talking about letting AI take over completely,” Kaplan said. “Then, you end up in a HAL 9000 situation.”

The future of AI looks very human

Konica Minolta Inc., a Japanese digital systems manufacturing company, teamed up with AIOps startup ScienceLogic for a new office printer product, called Workplace Hub, which can also deliver IT management services for SMB customers. ScienceLogic’s AIOps software will be embedded inside Workplace Hub and used on the back end at Konica Minolta to manage services for customers.

But AI will only be as valuable as the human decisions it enables, said Dennis Curry, executive director and deputy CTO at Konica Minolta. He, too, is skeptical of AI that functions unattended by humans and instead expects AI to augment human intelligence both inside and outside of IT environments.

“AI is not a sudden invention — I worked in the mid-1990s for NATO on AI and neural networks, but there wasn’t a digital environment then where it could really flourish, and we have that now,” Curry said. “It’s just an evolution of the standard statistics we’ve always used, and that evolution is much more human than most people believe.”

Azure PaaS strategy homes in on hybrid cloud, containers


Microsoft’s PaaS offerings might have a leg up in terms of support for hybrid deployments, but the vendor still faces tough competition in a quickly evolving app-dev market.


The Azure PaaS portfolio continues to offer a compelling story for companies that need a development environment where legacy applications can move freely between on premises and the cloud. But even as the vendor increasingly embraces hybrid cloud, open source and emerging technologies, such as containers and IoT, it still faces tough competition from the likes of Google and AWS.

Strong foundation

Azure App Service is Microsoft’s flagship PaaS offering, enabling developers to build and deploy web and mobile applications in a variety of programming languages — without having to manage the underlying infrastructure.

But App Service represents just one of many services that Microsoft has rolled out over the years to help developers create, test, debug and extend application code. The company’s Visual Studio line, for example, now includes four product families: Visual Studio Integrated Development Environment, Visual Studio Team Services, Visual Studio Code and Visual Studio App Center, which includes connections to GitHub, Bitbucket and VSTS repositories to support continuous integration.

Microsoft has also created a vast independent software vendor and developer community, and has tightly integrated many of its development tools, according to Jeffrey Kaplan, managing director at THINKstrategies, Inc. Visual Studio and SQL Server, for example, support common design points and feature high levels of integration with App Service.

Microsoft’s Azure PaaS strategy is also unique in its focus on hybrid cloud deployments. Through its Hybrid Connections feature, for example, developers can build and manage cloud applications that access on-premises resources. What’s more, Azure App Service is also available for Azure Stack — Microsoft’s integrated hardware and software platform designed to bring Azure public cloud services to enterprises’ local data centers and simplify the deployment and management of hybrid cloud applications.

Missing pieces

But despite its broad portfolio and hybrid focus, Azure PaaS is not a panacea. While many traditional IT departments have embraced the offering, it hasn’t been as popular in business units, which now drive development initiatives in many organizations, according to Larry Carvalho, research director for IDC’s PaaS practice.

What’s more, organizations that don’t have a large footprint of legacy systems often prefer open source development tools, rather than tools like Visual Studio. Traditionally, Microsoft hasn’t offered support for open source technology as quickly as other cloud market leaders, such as AWS, according to Carvalho. This is likely because competitors like AWS are not weighed down by support for legacy products.

But while, historically, Microsoft’s business model has been antithetical to the open source approach, that’s started to change. The company has made an effort to embrace more open source technologies and recently purchased GitHub, a version control platform founded on the open source code management system Git.

The evolving face of PaaS

The PaaS landscape is evolving rapidly. Rather than traditional VMs, developers increasingly focus on containers, and interest in DevOps continues to rise. In an attempt to align with these trends, Microsoft now offers a managed Kubernetes service on its public cloud and recently added Azure Container Instances to enable developers to spin up new container workloads without having to manage the underlying server infrastructure.

Additionally, enterprises have a growing interest in application development for AI, machine learning and IoT platforms. And while Azure PaaS tools offer support for these technologies, Microsoft still needs to compete against fellow market leaders, AWS and Google — the latter of which has garnered a lot of attention for its development of TensorFlow, an open source machine learning framework.
