SolarWinds has beefed up its cloud monitoring platform with tools that allow managers to track both application performance and infrastructure components in a single view.
The upgrades to SolarWinds’ Cloud software-as-a-service portfolio include a new application, as well as updates to two existing products.
The new network device monitoring application, AppOptics, uses a common dashboard to track application performance metrics and network component health, both within the enterprise network and across a public cloud provider’s network.
The software combines two existing SolarWinds cloud monitoring apps, Librato and TraceView, into a single network device monitoring product, said Christoph Pfister, executive vice president of products at the company, based in Austin, Texas. Initially, AppOptics will support both Amazon Web Services and Microsoft Azure; support for other providers could be added at a later date, Pfister said.
“Infrastructure and application monitoring are now in separate silos,” he said. “We are trying to integrate them. The digital experience has become very important. But behind the scenes, applications have become very complex, making monitoring and troubleshooting challenging.”
AppOptics uses downloaded agents to collect tracing, host and infrastructure monitoring metrics that feed a common dashboard, through which managers can keep tabs on network devices and application behavior and take appropriate steps in the wake of performance degradation.
In addition to launching AppOptics, SolarWinds added a more powerful search engine and more robust analytics to Papertrail, its log management application. And it added capabilities to Pingdom, a digital experience measurement tool, to allow enterprises to react more quickly to issues that might affect user engagement with a website or service.
Both AppOptics and Papertrail are available Nov. 20; SolarWinds will release Pingdom Nov. 27. All are available as downloads from SolarWinds. The cloud monitoring platform is priced at $7.50 per host, per month.
Ruckus launches high-speed WLAN switches
Ruckus Wireless Inc. introduced a new group of wireless LAN switches engineered to support network edge and aggregation functions.
The new switches, the ICX 7650 series, come in three models: a multi-gigabit access switch that supports both 2.5 Gbps and 5 Gbps throughput; a core switch with Layer 3 features and up to 24 10-Gbps and 24 1-Gbps fiber ports; and a high-performance gigabit switch that can be deployed as a stack of up to 12 switches.
“As more wireless users access cloud and data-intensive applications on their devices, the demand for high-speed, resilient edge networks continues to increase,” said Siva Valliappan, vice president of campus product management at Ruckus, based in Sunnyvale, Calif., in a statement. “The ICX 7650 switch family captures all these requirements, enabling users to scale and future-proof their network infrastructure to meet the increasing demand of wired and wireless network requirements for seven to 10 years,” he added.
The switches, available early next year, are priced starting at $11,900, Ruckus said.
DDoS attacks on rise, thanks to IoT
Distributed denial-of-service, or DDoS, attacks have risen sharply in the past year, according to a new security report from Corero Network Security.
The firm, based in Marlborough, Mass., said Corero enterprise customers experienced an average of 237 DDoS attempts each day during the third quarter of 2017, a 35% increase from the year-earlier period and almost double what they experienced in the first quarter of 2017.
The company attributed the growth in attacks to DDoS for-hire services and the proliferation of unsecured internet of things (IoT) devices. One piece of malware, dubbed the Reaper, has already infected thousands of IoT gadgets, Corero said.
In addition, Corero’s study found that hackers are using multiple ways to penetrate an organization’s security perimeter. Some 20% of attacks recorded in the second quarter of 2017 used multiple attack vectors, the company said.
Finally, Corero said ransom-oriented DDoS attacks also rose in the third quarter, attributing many of them to one group, the Phantom Squad, which targeted companies across the United States, Europe and Asia.
A few weeks ago at BoxWorks 2017, Scott Guthrie, EVP of Microsoft’s Cloud and Enterprise group, joined our CEO Aaron Levie to announce some exciting news: Box using Azure will be generally available in November. The day has come!
What is Box using Azure?
Box using Azure is the first product milestone in the expanded partnership between Box and Microsoft. Now customers can benefit from combining Box’s cloud content management platform with Microsoft’s global-scale Azure cloud platform, to:
- Simplify cross-company collaborative processes between employees and external stakeholders.
- Securely manage content for the enterprise, with integrations for 1,400 best-of-breed SaaS apps, including Office 365 apps, while allowing users to work in their familiar productivity and line-of-business tools.
- Bring Box cloud content management capabilities to their own custom applications that deliver new digital content experiences and streamline business processes for their employees, customers and partners.
Today thousands of businesses get work done using Box with Microsoft Office 365, including the new Microsoft Teams. This new integration with Azure is another step toward delivering a great user experience for our customers using Box with the Microsoft stack.
“Flex has successfully been using Box as our primary platform for digital content sharing, storage and collaboration globally. We also use Microsoft Azure as one of our cloud computing services for our global IT infrastructure,” said Gus Shahin, CIO of Flex. “We look forward to seeing how Box and Microsoft Azure Cognitive Services work together to deploy next generation A.I. and machine learning capabilities.”
What’s coming next?
Microsoft and Box engineering teams are working hard to build out even more capabilities over the coming months, such as:
- Powering Box content with intelligent capabilities from Microsoft Cognitive Services that enable customers to automatically identify and categorize content, trigger workflows and tasks, and make content more discoverable for users.
- Leveraging Azure’s broad global footprint to meet data sovereignty requirements and ensure compliance with industry regulations or corporate policies.
“The integration of Box and Azure services is a welcome development for our digital transformation journey as a company. This can help deliver a more streamlined approach to our content management and ensures that Schneider Electric employees can securely and quickly work together and with customers and partners in a much more productive way, adding more value to our use of Box and Microsoft solutions,” said Herve Coureil, Chief Digital Officer, Schneider Electric.
Box using Azure is currently available with content storage in US data centers. Box add-on packages can be used with Box using Azure, including: information governance to meet all your organization’s security requirements and compliance standards, customer-managed encryption keys to take ownership over your encryption keys, and workflow automation to streamline business processes.
How do I get started?
If you’re interested in Box using Azure, learn more or get in touch with Box Sales.
Once again, Department of Defense data was found publicly exposed in cloud storage, but it is unclear how sensitive the data may be.
Chris Vickery, cyber risk analyst at UpGuard, based in Mountain View, Calif., found the exposed data in publicly accessible Amazon Web Services (AWS) S3 buckets. This is the second time Vickery found exposed data from the Department of Defense (DoD) on AWS. The previous exposure was blamed on government contractor Booz Allen Hamilton; UpGuard said a now defunct private-sector government contractor named VendorX appeared to be responsible for building this database. However, it is unclear if VendorX was responsible for exposing the data. Vickery also previously found exposed data in AWS buckets from the Republican National Committee, World Wrestling Entertainment, Verizon and Dow Jones & Co.
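Exposures like these typically come down to S3 access control lists that grant read access to the global “AllUsers” group. The sketch below shows the general technique for flagging such grants; the ACL dictionary mirrors the response shape of boto3’s `get_bucket_acl`, but the sample grants themselves are hypothetical, not taken from the buckets Vickery found.

```python
# Flag world-readable grants in an S3 bucket ACL. The dict shape mirrors
# boto3's get_bucket_acl response; the sample ACL below is hypothetical.
PUBLIC_GROUPS = {
    "http://acs.amazonaws.com/groups/global/AllUsers",
    "http://acs.amazonaws.com/groups/global/AuthenticatedUsers",
}

def public_permissions(acl: dict) -> list:
    """Return the permissions granted to any world-accessible group."""
    found = []
    for grant in acl.get("Grants", []):
        grantee = grant.get("Grantee", {})
        if grantee.get("Type") == "Group" and grantee.get("URI") in PUBLIC_GROUPS:
            found.append(grant["Permission"])
    return found

# Hypothetical ACL resembling an exposed bucket: anyone on the internet
# may list and read its contents.
sample_acl = {
    "Grants": [
        {"Grantee": {"Type": "CanonicalUser", "ID": "bucket-owner"},
         "Permission": "FULL_CONTROL"},
        {"Grantee": {"Type": "Group",
                     "URI": "http://acs.amazonaws.com/groups/global/AllUsers"},
         "Permission": "READ"},
    ]
}
print(public_permissions(sample_acl))  # ['READ']
```

An empty result from a scan like this does not prove a bucket is private — bucket policies can also open access — but a non-empty one is an immediate red flag.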
According to Dan O’Sullivan, cyber resilience analyst at UpGuard, Vickery found three publicly accessible DoD buckets on Sept. 6, 2017.
“The buckets’ AWS subdomain names — ‘centcom-backup,’ ‘centcom-archive’ and ‘pacom-archive’ — provide an immediate indication of the data repositories’ significance,” O’Sullivan wrote in a blog post. “CENTCOM refers to the U.S. Central Command, based in Tampa, Fla. and responsible for U.S. military operations from East Africa to Central Asia, including the Iraq and Afghan Wars. PACOM is the U.S. Pacific Command, headquartered in Aiea, Hawaii and covering East, South and Southeast Asia, as well as Australia and Pacific Oceania.”
UpGuard estimated the total exposed data in the AWS buckets amounted to “at least 1.8 billion posts of scraped internet content over the past eight years.” The exposed data was all scraped from public sources including news sites, comment sections, web forums and social media.
“While a cursory examination of the data reveals loose correlations of some of the scraped data to regional U.S. security concerns, such as with posts concerning Iraqi and Pakistani politics, the apparently benign nature of the vast number of captured global posts, as well as the origination of many of them from within the U.S., raises serious concerns about the extent and legality of known Pentagon surveillance against U.S. citizens,” O’Sullivan wrote. “In addition, it remains unclear why and for what reasons the data was accumulated, presenting the overwhelming likelihood that the majority of posts captured originate from law-abiding civilians across the world.”
Importance of the exposed DoD data
Vickery found references in the exposed data to the U.S. Army “Coral Reef” intelligence analysis program, which is designed “to better understand relationships between persons of interest,” but UpGuard ultimately would not speculate on why the DoD gathered the data.
Ben Johnson, CTO at Obsidian Security, said such a massive data store could be very valuable if processed properly.
“Data often provides more intelligence than initially accessed, so while this information was previously publicly available, adversaries may be able to ascertain various insights they didn’t previously have,” Johnson told SearchSecurity. “What’s more of a problem than the data itself in this case is that this is occurring at all — showcasing that there’s plenty of work to do in safeguarding our information.”
Rebecca Herold, president of Privacy Professor, noted that just because the DoD collected public data doesn’t necessarily mean the exposed data includes accurate information.
“Sources of, and reliability for, the information matters greatly. Ease of modifying even a few small details within a large amount of data can completely change the reality of the topic being discussed. Those finding this information need to take great caution to not simply assume the information is all valid and accurate,” Herold told SearchSecurity. “Much of this data could have been manufactured and used for testing, and much of it may have been used to lure attention, as a type of honeypot, and may contain a great amount of false information.”
Herold added that the exposed data had worrying privacy implications.
“Just because the information was publicly available does not mean that it should have been publicly available. Perhaps some of this information also ended up being mistakenly made publicly available because of errors in configurations of storage servers, or of website errors,” Herold said. “When we have organizations purposefully taking actions to collect and inappropriately (though legally in many instances) use, share and sell personal information, and then that information is combined with all these freely available huge repositories of data, it can provide deep insights and revelations for specific groups and individuals that could dramatically harm a wide range of aspects within their lives.”
The city of Atlanta is moving to Oracle’s full cloud for human resources, procurement and finance. To help win the deal, the city released eyebrow-raising cost estimates that show the city has little choice but to move to Oracle’s ERP/HCM Cloud platform.
Atlanta is a longtime Oracle user. Its last big ERP upgrade was around 2007. This time, it was planning on a hybrid cloud adoption, keeping some systems on premises and others in Oracle ERP Cloud. The city didn’t believe all of the Oracle ERP Cloud offerings were on par with the on-premises systems, hence the hybrid approach. This view changed as the planning progressed.
For Oracle, getting customers to migrate to its cloud platform is a top priority. But the financial incentives behind these deals are rarely disclosed, at least until Atlanta offered a glimpse at some of the cost estimates.
The Atlanta City Council finance committee was shown a series of slides that sketched out the financial case for a full cloud approach. Officials were told that the 10-year total cost of ownership difference between Oracle’s E-Business Suite (EBS)/HCM Cloud and Oracle’s full ERP/HCM Cloud was $26 million. That’s how much more the city would spend over a 10-year period if it went with a hybrid approach.
Oracle’s licensing terms for the two platforms were starkly different. Under the hybrid approach, annual licensing would see a “4% increase per year for EBS/HCM Cloud hybrid (until year 10) vs. 0% increase until year 5” for the ERP Cloud. The full ERP Cloud option carried a one-time 3% increase in year six of the 10-year agreement.
Analysts and consultants who have seen the slides say there’s not enough information to tell, exactly, how these estimates were calculated. However, the differences in licensing costs among hybrid, on-premises and full cloud ERP deliver a clear message.
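The gap between a 4%-per-year escalation and a nearly flat subscription compounds quickly over a decade. The sketch below runs the arithmetic on the two schedules described to the council; the $10 million year-one cost is a hypothetical placeholder, since the city’s actual base figures were not disclosed.

```python
# Illustrative only: compare 10-year license spend under the two escalation
# schedules described to the Atlanta finance committee. The $10M year-one
# cost is hypothetical; only the percentage schedules come from the slides.
def total_cost(base, yearly_increase_pct):
    """Sum 10 annual payments; yearly_increase_pct[i] is the percentage
    increase applied in year i+1, compounding on the prior year's cost."""
    total, annual = 0.0, float(base)
    for pct in yearly_increase_pct:
        annual *= 1 + pct / 100
        total += annual
    return total

BASE = 10_000_000  # hypothetical year-1 annual cost for both options

# Hybrid EBS/HCM Cloud: 4% increase every year through year 10.
hybrid = total_cost(BASE, [0] + [4] * 9)

# Full ERP Cloud: flat through year 5, a one-time 3% bump in year 6, then flat.
full_cloud = total_cost(BASE, [0] * 5 + [3] + [0] * 4)

print(f"hybrid: ${hybrid:,.0f}")
print(f"full cloud: ${full_cloud:,.0f}")
print(f"difference: ${hybrid - full_cloud:,.0f}")
```

Even at this hypothetical base, the compounding 4% schedule costs roughly 18% more over ten years than the near-flat one, which shows how a $26 million spread can emerge from the licensing terms alone.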
“I assumed that this was Oracle’s way of financially motivating the decision they wanted,” said Marc Tanowitz, a managing director at Pace Harmon, a management consultancy that advises firms making similar decisions.
The Atlanta mayor’s office declined to make an official available for an interview or to answer written questions. A spokeswoman said the city would not comment. In addition, Oracle said it couldn’t discuss a specific customer agreement.
At the start of this project, Oracle’s HCM Cloud was described by Atlanta officials as mature and ready for full cloud deployment. The city initially concluded that there were functionality gaps in the finance system, and it intended to keep Oracle’s R12 financials on premises. That changed.
“Over the last six to eight months, Oracle has released new functionality to where we feel like those gaps will be addressed,” said John Gaffney, the city’s deputy chief financial officer, in a video of the Oct. 25 city council finance committee meeting (discussion begins at about 2:17). That meant recommending a full cloud option.
Atlanta City Council members at the meeting didn’t probe the licensing difference. Gaffney, in presenting the savings, told them that “you’ve got lower costs that are primarily driven by your subscription cost being lower. You also don’t have to pay any hosting fees.”
Tanowitz said some things about Atlanta’s Oracle ERP Cloud project were clear: the apparent 10-year agreement, for instance. Vendors have generally been seeking longer terms.
“That piece of it didn’t surprise us,” he said.
The first-year implementation costs for hybrid and full Oracle ERP Cloud were roughly equal, at about $19 million. That figure also wasn’t surprising to Tanowitz because there is a cost to migration. But Tanowitz said he struggles to understand why the on-premises deployment is escalating in cost faster than the full Oracle ERP Cloud deployment.
“If you think about the cloud cost, what are you paying for in a cloud subscription? You’re paying for some intellectual property and you’re paying for some hosting,” said Tanowitz. “That’s what’s under that number, if you peel it apart.”
“Why would an environment that I’m hosting on my own — presumably with the EBS deal — be going up at this rate?” he asked.
There has been a long-standing debate in IT about whether on premises is less costly than full cloud approaches. Frank Scavo, president of Computer Economics, a research firm, said the decision on these approaches can go either way.
“If the data center is underutilized, adding another application may not add much cost,” he said. “But if I need to build a new data center or add significant capacity, it will be much more costly. There is no right answer.”
Cloud computing technology is creating business opportunities so radically new and different that they can be built only if we junk much of what we know, how we operate and even how we think — everywhere in the enterprise, not just within IT. In other words, transform or die.
That was the emphatic, no-nonsense message delivered by Ashish Mohindroo, vice president of Oracle Cloud, and Bill Taylor, co-founder and founding editor of Fast Company magazine. They spoke at the Boston stop of the 2017-2018 Oracle Cloud Day roadshow in November.
Legacy data centers won’t help, said Mohindroo. Neither will recreating on-premises complexity in the cloud. It’s time to think in new ways, as is typified by Uber and Lyft redefining transportation and Airbnb transforming the hospitality industry.
During a time of disruption, don’t let what you know limit what you can imagine, warned Taylor, giving a combination of scared-straight and do-it-now-or-else advice to an audience of about 400 IT professionals.
IT is currently in the midst of a once-every-20-years tectonic shift, according to Mohindroo. The most recent, the 1990s shift from client/server computing to the internet, is now being supplanted by the transition to cloud computing. The upheaval is far-reaching and impossible to avoid.
“No industry is immune,” Mohindroo said, citing key cloud computing technology drivers that include artificial intelligence, machine learning, blockchain, autonomous software, the internet of things and advances in human interface design.
A potentially debilitating problem that businesses face today is that existing legacy IT infrastructures and strategies were not built to leverage new technologies, support new business models, offer adequate control and do it all quickly. Traditional data centers, Mohindroo said, were constructed in a siloed manner, built for maximum capacity and peak loads, but not designed to be elastic, integrated or flexible.
Complicating matters is that each siloed service doesn’t talk to others and may have been built to differing standards. Integrating them can be difficult when incompatible standards, including authentication, database design or communications protocols, get in the way.
Though Mohindroo’s presentation eventually led into a sales pitch for Oracle’s cloud computing technology platforms, the underlying message was vendor neutral and clear: For businesses to exist, they must undergo a cloud transformation consisting of essential foundational services: data as a service (DaaS), software as a service (SaaS), platform as a service (PaaS), and infrastructure as a service (IaaS). Those services, he said, need to be based on open technologies and standards, including SQL and NoSQL databases.
Six journey paths
Oracle defines six distinct pathways into the cloud. Each offers differing appeal depending on the age of the company, its compute workload and compliance mandates, among other factors. The six options include the following:
1. Optimize an existing on-premises data center with plans to migrate later.
2. Install a complete cloud infrastructure on premises behind the corporate firewall. The advantages of this are behind-the-firewall security and a pay-as-you-go model for usage.
3. Move existing workloads into a cloud infrastructure with minimal optimization, often referred to as lift and shift. Mohindroo said the key challenge with this popular scenario is dealing with I/O bottlenecks.
4. Create all new, cloud-resident applications, developed using PaaS and IaaS technology, to fully replace outmoded legacy applications. DaaS replaces the legacy on-premises database. Advantages of this model include the availability of a wide variety of open source languages and services for application development, data management, analytics and integration, along with support for virtual machines, containerization for portability and Kubernetes for orchestration.
“The whole concept behind this is to make it easy for you to run your business,” Mohindroo said.
One way to utilize this option is through Oracle’s advanced AI and machine learning cloud technology. For example, Oracle offers an autonomous database that Mohindroo claims is self-running — managed, patched and tuned in real time without human intervention.
5. Replace the core legacy application base with subscription-based, third-party SaaS counterparts. Similar to option four, this model offers application development tools for customization, along with the same AI and machine learning technology.
6. Choose a born-in-the-cloud model, which would be the logical choice for new companies that have no legacy IT operation or applications, Mohindroo said.
Change the way you think
Mohindroo’s presentation was crafted to deliver a purely cloud computing technology message.
Taylor’s talk, which largely avoided tech speak, still targeted IT managers, application developers and operations personnel, saying their collective efforts can benefit from understanding the human side of the user experience. To do that, he said, requires becoming fully immersed in every nuance of what it means to be a customer.
Taylor suggested that IT employees expand their view beyond the technology.
“Are you determined to make sure that what you know doesn’t limit what you can imagine going forward?” he said. “Are you … learning as fast as the world is changing?”
Taylor’s message can be taken two ways: Gain insight into the people who use the cloud applications you build or learn about each new cloud computing technology and programming language or risk being left behind.
Taylor cited San Antonio-based USAA, the financial services company that serves military families, as an example of a leader in technology-driven disruption that immerses every employee — even highly skilled application developers — in understanding the customer experience. USAA gives new employees a packet called a virtual overseas deployment. The idea is to spend a day role-playing as a member of the Army Reserve or National Guard suddenly called up to active duty.
“You’ve got four weeks to get your financial affairs together,” Taylor said.
The exercise forces the role-player to go through credit card statements, bank statements, life insurance and car payments — all to help USAA employees understand what their customers need.
“They’re not early adopters of technology because they love technology per se; it’s because they’re so committed to their identity in the sense of impacting customers in their marketplace,” Taylor said.
Druva moved to help manage data protection in the cloud with its latest Apollo software as a service, which helps protect workloads in Amazon Web Services through the Druva Cloud Platform.
The company’s new service provides a single control plane to manage infrastructure-as-a-service and platform-as-a-service cloud workloads.
Druva, based in Sunnyvale, Calif., sells two cloud backup products, Druva InSync and Druva Phoenix, for its Druva Cloud Platform. The enterprise-level Druva InSync backs up endpoint data across physical and public cloud storage. The Druva Phoenix agent backs up and restores data sets in the cloud for distributed physical and virtual servers. Phoenix applies global deduplication at the source and points archived server backups to the cloud target.
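Source-side global deduplication, as described for Phoenix, rests on a simple principle: split data into chunks, fingerprint each chunk, and send only chunks whose fingerprints are not already in a global index shared across all sources. The sketch below illustrates that general technique, not Druva’s actual implementation; the tiny chunk size is for demonstration only.

```python
import hashlib

# Minimal sketch of source-side, hash-based deduplication: only chunks whose
# fingerprints are absent from the shared global index get "uploaded" to the
# cloud target. Illustrates the general technique, not Druva's implementation.
CHUNK_SIZE = 4  # tiny for demonstration; real systems use KB-to-MB chunks

def dedupe_upload(data: bytes, global_index: dict) -> int:
    """Register new chunks in the index; return how many bytes were uploaded."""
    uploaded = 0
    for i in range(0, len(data), CHUNK_SIZE):
        chunk = data[i:i + CHUNK_SIZE]
        fingerprint = hashlib.sha256(chunk).hexdigest()
        if fingerprint not in global_index:  # never seen, from any source
            global_index[fingerprint] = chunk
            uploaded += len(chunk)
    return uploaded

index = {}
first = dedupe_upload(b"ABCDABCDABCDXYZ!", index)  # "ABCD" repeats three times
second = dedupe_upload(b"ABCDXYZ!", index)         # everything already known
print(first, second)  # 8 0
```

Because the index is global, a second server backing up the same content uploads nothing new, which is what makes the approach attractive for distributed physical and virtual servers.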
Apollo enables data management of Druva Cloud Platform workloads under a single control plane so administrators can do snapshot management for backup, recovery and replication of Amazon Web Services instances. It automates service-level agreements with global orchestration that includes file-level recovery. It also protects Amazon Elastic Compute Cloud instances.
Druva Apollo is part of an industrywide trend among data protection vendors to bring all secondary data under global management across on-premises and cloud storage.
“There is a big change going on throughout the industry in how data is being managed,” said Steven Hill, senior storage analyst for 451 Research. “The growth is shifting toward secondary data. Now, secondary data is growing faster than structured data, and that is where companies are running into a challenge.”
“Apollo will apply snapshot policies,” said Dave Packer, Druva’s vice president of product and alliance marketing. “It will automate many of the lifecycles of the snapshots. That is the first feature of Apollo.”
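The kind of snapshot lifecycle automation Packer describes usually reduces to a retention rule: keep recent snapshots, expire old ones. The sketch below shows one hypothetical rule (keep the newest N snapshots plus anything inside a retention window); it illustrates the general idea, not Apollo’s actual policy engine, and the snapshot IDs are invented.

```python
from datetime import datetime, timedelta

# Hypothetical snapshot lifecycle rule: always keep the most recent `keep_last`
# snapshots, plus any snapshot newer than the retention window; expire the rest.
# Illustrates the general idea, not Apollo's actual policy engine.
def snapshots_to_expire(snapshots, now, retention_days=30, keep_last=3):
    """snapshots: list of (snapshot_id, created_at) tuples."""
    by_age = sorted(snapshots, key=lambda s: s[1], reverse=True)  # newest first
    cutoff = now - timedelta(days=retention_days)
    expired = []
    for rank, (snap_id, created_at) in enumerate(by_age):
        if rank >= keep_last and created_at < cutoff:
            expired.append(snap_id)
    return expired

now = datetime(2017, 11, 20)
snaps = [("snap-a", datetime(2017, 11, 19)),
         ("snap-b", datetime(2017, 11, 1)),
         ("snap-c", datetime(2017, 9, 1)),
         ("snap-d", datetime(2017, 8, 1))]
print(snapshots_to_expire(snaps, now))  # ['snap-d']
```

In a real deployment, the IDs returned would then be fed to the cloud provider’s snapshot-deletion API on a schedule, which is the automation a tool like Apollo takes off administrators’ hands.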
Automation for discovery, analysis and information governance is on the Druva cloud roadmap, Packer said.
Druva last August pulled in $80 million in funding, bringing total investments into the range of $200 million for the fast-growing vendor. Druva claims to have more than 4,000 worldwide customers that include NASA, Pfizer, NBCUniversal, Marriott Hotels, Stanford University and Lockheed Martin.
Druva has positioned its data management software to go up against traditional backup vendors Commvault and Veritas Technologies, which also are transitioning into broad-based data management players. It’s also competing with startups Rubrik, which has raised a total of $292 million in funding since 2015 for cloud data management, and Cohesity, which has raised $160 million.
New tools to help increase developer productivity and simplify app development for intelligent cloud and edge, across devices, platforms or data sources
NEW YORK — Nov. 15, 2017 — Wednesday at Connect(); 2017, Microsoft Corp.’s annual event for professional developers, Executive Vice President Scott Guthrie announced new Microsoft data platform technologies and cross-platform developer tools. Guthrie outlined the company’s vision and shared what’s next for developers across a broad range of Microsoft and open source technologies, and how Microsoft is helping them get more done across apps or platforms. He also touched on key application scenarios and ways developers can use built-in artificial intelligence (AI) to support continuous innovation and continuous deployment of today’s intelligent applications.
“With today’s intelligent cloud, emerging technologies like AI have the potential to change every facet of how we interact with the world,” Guthrie said. “Developers are in the forefront of shaping that potential. Today at Connect(); we’re announcing new tools and services that help developers build applications and services for the AI-driven future, using the platforms, languages and collaboration tools they already know and love.”
Across devs, apps, data, platforms
Microsoft is continuing its commitment to delivering open technologies and contributing to and partnering with the open source community. New tools and partnerships are designed to help developers build intelligent, enterprise-ready and cloud-scale apps regardless of their platform, and to give them peace of mind with the built-in security, performance and compliance features, support and SLAs available in Azure.
- Designed in collaboration with the founders of Apache Spark, the preview of Azure Databricks is a fast, easy and collaborative Apache Spark-based analytics platform that delivers one-click setup, streamlined workflows and an interactive workspace. Native integration with Azure SQL Data Warehouse, Azure Storage, Azure Cosmos DB, Azure Active Directory and Power BI simplifies the creation of modern data warehouses that enable organizations to provide self-service analytics and machine learning over all data with enterprise-grade performance and governance.
Microsoft Joins MariaDB Foundation
- Microsoft joins MariaDB Foundation as a platinum member and announces the upcoming preview of Azure Database for MariaDB for a fully managed MariaDB service in the cloud.
Azure Cosmos DB with Apache Cassandra API
- The preview expands on the multimodel capabilities of Azure Cosmos DB to offer Cassandra as a service over turnkey global distribution, multiple consistency levels and industry-leading SLAs.
GitHub Roadmap for Git Virtual File Systems (GVFS)
- Microsoft and GitHub will further their open source partnership to extend GVFS support to GitHub. GVFS is an open source extension to the Git version control system developed by Microsoft to support the world’s largest repositories.
Helping developers get more done
Microsoft is releasing tools designed to help developers, development teams and data scientists collaborate and work together more efficiently for application development, deployment and management. New tools and feature improvements help streamline essential tasks, so developers can focus more on getting apps to market across multiple platforms, and for any scenario — whether cloud, mobile or AI.
Visual Studio App Center General Availability
- New cloud service for developers to ship higher-quality applications more frequently. Objective-C, Swift, Android Java, Xamarin and React Native developers can use App Center to increase productivity and accelerate the application lifecycle, freeing them to spend more time on new features and better user experiences.
Visual Studio Live Share
- Unique new capability for developers to collaborate in a seamless and secure way with full project context. With this preview, developers can share projects with teammates, or other developers, to edit and debug the same code in their personalized editor or IDE.
Azure DevOps Projects
- The preview lets developers configure a full DevOps pipeline and connect to Azure Services within five minutes for faster app development and deployment. With just a few clicks in the Azure portal, developers can set up Git repositories, wire up completely automated builds and release pipelines without any prior knowledge of how to do so.
Transforming business through analytics and AI
Advances in AI and machine learning are placing the seemingly impossible within reach. The combination of cloud services, infrastructure and tools from Microsoft are designed to help any developer embrace AI and create apps across the cloud and the edge, harnessing the power of data and AI.
Azure IoT Edge
- Azure IoT Edge preview availability, enabling AI, advanced analytics and machine learning at the Internet of Things (IoT) edge.
Azure Machine Learning updates
- Integration with Azure IoT Edge and AI deployment on iOS devices with Core ML, bringing AI everywhere from the cloud to the IoT edge of devices.
Visual Studio Tools for AI
- Developers and data scientists can develop AI models with all the productivity of Visual Studio, across frameworks and languages. Updates to .NET make it easier for .NET developers to consume AI models from their applications.
Azure SQL Database Machine Learning services preview
- Support for R models inside SQL Database lets data scientists develop and train models in Azure Machine Learning and deploy them directly to Azure SQL Database, where predictions are scored close to the data.
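The pattern the preview embodies, train a model in one environment, store it, and score it next to the data, can be illustrated without SQL at all. In the Python sketch below, a hand-rolled least-squares fit stands in for Azure Machine Learning and a pickled blob stands in for the model stored in a database table (all names are illustrative):

```python
import pickle

def train(xs, ys):
    """Fit y = a*x + b by ordinary least squares (closed form)."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    a = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / \
        sum((x - mx) ** 2 for x in xs)
    b = my - a * mx
    return {"a": a, "b": b}

def serialize(model):
    # In the SQL Database preview, a trained R model is stored in a
    # table column; pickling to bytes plays the same role here.
    return pickle.dumps(model)

def score(blob, x):
    """Deserialize the shipped model and predict next to the data."""
    m = pickle.loads(blob)
    return m["a"] * x + m["b"]

# Train "elsewhere", ship the bytes, score "in the database".
blob = serialize(train([1, 2, 3, 4], [3, 5, 7, 9]))
print(score(blob, 10.0))  # 21.0
```

The speed claim in the announcement comes from this separation: the model travels to the data, so each prediction is a local computation rather than a round trip to an external scoring service.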
For the next three days, Microsoft is streaming more than 36 live, engineering-led training sessions that are designed to give developers hands-on experience with the tools and technologies featured throughout the keynote presentations.
More about Connect(); 2017 announcements can be found on the Microsoft News Center.
Microsoft (Nasdaq “MSFT” @microsoft) is the leading platform and productivity company for the mobile-first, cloud-first world, and its mission is to empower every person and every organization on the planet to achieve more.
For more information, press only:
Microsoft Media Relations, WE Communications, (425) 638-7777, email@example.com
Note to editors: For more information, news and perspectives from Microsoft, please visit the Microsoft News Center at http://news.microsoft.com. Web links, telephone numbers and titles were correct at time of publication, but may have changed. For additional assistance, journalists and analysts may contact Microsoft’s Rapid Response Team or other appropriate contacts listed at http://news.microsoft.com/microsoft-public-relations-contacts.
On Monday, cloud CMS vendor Acquia Inc. announced that Michael Sullivan, former Hewlett Packard Enterprise senior vice president and general manager for SaaS, has been named the company’s new CEO. Sullivan will move into the position next month.
SearchContentManagement interviewed Sullivan and Acquia co-founder Dries Buytaert, who was also the lead developer on the open source Drupal content management system, upon which the Acquia CMS is based. Buytaert remains Acquia’s CTO and also takes over as board chair.
Dries, according to your blog, there were more than 140 candidates to succeed longtime CEO Tom Erickson. How did Acquia choose Michael Sullivan?
Dries Buytaert: There are a lot of reasons. First of all, there’s a very good fit with Mike. That’s not just a good fit between him and me, but also to our culture and personality and how we think about different things, like the importance of cloud and open source. I also felt Mike was really well-prepared to lead our business. Mike has 25 years [of] experience with software as a service, enterprise content management and content governance. Mike has worked with small companies, as well as larger companies.
At HP Enterprise and Micro Focus [which acquired HPE’s software business], Mike was responsible for managing more than 30 SaaS products. Acquia is evolving its product strategy to go beyond Drupal and the cloud to become a multiproduct company with Acquia Digital Asset Manager and Acquia Journey. So, our own transformation as a company is going from a single-product company to a multiproduct company. Mike is uniquely qualified to help us with that, based on his experience.
Mike, why was it a fit for you, and what excites you about the market position of the Acquia CMS and the company’s future as a cloud CMS provider?
Michael Sullivan: I’ve been involved in both [enterprise] content management and web content management during the course of my career, so it’s not new to me. I’ve always found it interesting and have had a lot of success in this space, broadly. There’s a fundamental shift that’s occurring in the content management world, where people are moving from static web presence to a different model of engaging with their customers — an intelligent digital experience.
Companies will need to compete on that basis in the future, and they need to have personalized experiences and work with customers through lots of channels, not just the website. Acquia sits at the intersection of a lot of these technologies — Drupal, open source, SaaS, DevOps, machine learning, predictive analytics. If you look at what Acquia’s already done and what they’re working on, this is a company that has the right vision and a proven ability to execute … and a history of winning. That was important to me; it makes it believable to me this company will succeed.
What do you see as Acquia’s biggest challenges moving forward the next few years?
Sullivan: There’s a lot of work to do. We have to move fast; we have to execute well. Our challenge is execution — we know what we want to build, [and] we know where we want to go. The question is: How do we get there, and how do we get there efficiently?
What is the role of AI in the future of content management and the Acquia CMS?
Buytaert: There’s a big future for AI in our space; it’s something we’re investing in, with a team of six people working on machine learning solutions. We believe we are in the early stages of what will be a pretty big transformation of the web, or digital.
Historically, the web has been pull-based: You have to go to the web and search for information. We believe, in the future, more of those experiences will become push-based: Information will start to find you. The Holy Grail is delivering customers the right information for the right service at the right time, in the right context, on the right channel — web, mobile, chatbots or voice assistants. That’s a pretty big vision.
To [accomplish] that, you need to build systems that are smart and can predict what users want at what point in time. If you can do that, you can really change the customer experience. Instead of having the customer find the information, it increasingly comes to you.
There are a lot of early examples of that; a simple one is [music streaming services] Spotify and Pandora. The old pull-based model is turning the knob on your radio to find the music that you want; Spotify and Pandora push you music that you like, so you don’t have to go look for it. We think that will happen across every industry, and the Acquia platform will help companies build these digital experiences.
Dries, Acquia is expanding past the original concept of Drupal with headless CMS and all of these new SaaS offerings and CRM-style tools to help companies service customers. What will become of Drupal?
Buytaert: One of the great things about Drupal is that there aren’t a lot of technologies that remain relevant for 18 years [since Drupal debuted]. The reason Drupal has been successful is that we’ve literally reinvented ourselves more than 10 times. Drupal is evolving quite rapidly; I would argue we’re ahead — an API-first player, compared to our proprietary competitors.
Drupal is evolving from a website management system to a digital experience platform; it’s becoming a content repository, where you can manage content and can feed that content into a variety of different touchpoints or channels. It’s not just specialized in creating HTML output for webpages, but we have integrations with Alexa, chatbots, digital kiosks, [and] we have a long list of customers who come to us because they want to move beyond building websites.
We’ve been investing in headless Drupal for four years, since before it was called headless. I feel like we spotted those trends and have done a pretty good job going after them earlier than our competitors.
Mike, what will the Acquia CMS look like in five years?
Sullivan: We have big ambitions for this space. Some of these pieces we already have plans for. I think we’ll be in the position to do acquisitions over time. Obviously, I haven’t had my first day yet, so it’s hard to say for sure, but we think we are well-positioned to fill in all these pieces [to build the next-generation digital experience platform]. Five years is a long time; I’d like to think that we’ll be able to do it a lot sooner than that.
Using a mix of data protection software, hardware and cloud services from different vendors, Amvac Chemical Corp. found itself in a cycle of frustration. Backups failed at night, then had to be rerun during the day, and that brought the network to a crawl.
The Los Angeles-based company found its answer with Quorum’s one-stop backup and disaster recovery appliances. Quorum OnQ’s disaster recovery as a service (DRaaS) combines appliances that replicate across sites with cloud services.
The hardware appliances are configured in a hub-and-spoke model with an off-site colocation data center. The appliances perform full replication to the cloud, which backs up data after hours.
“It might be overkill, but it works for us,” said Rainier Laxamana, Amvac’s director of information technology.
Quorum OnQ may be overkill, but Amvac’s previous system underwhelmed. Amvac’s earlier strategy chained disk backup to early cloud services to tape, but the core problem remained: failed backups. The culprit was Veritas Backup Exec, whose failures the Veritas support team, while still part of Symantec, could not explain. A big part of the Backup Exec problem was application support.
“The challenge was that we had different versions of an operating system,” Laxamana said. “We had legacy versions of Windows servers so they said [the backup application] didn’t work well with other versions.
“We were repeating backups throughout the day and people were complaining [that the network] was slow. We repeated backups because they failed at night. That slowed down the network during the day.”
Quorum OnQ provides local and remote instant recovery for servers, applications and data. The Quorum DRaaS setup combines backup, deduplication, replication, one-click recovery, automated disaster recovery testing and archiving. Quorum claims OnQ is “military-grade” because it was developed for U.S. Naval combat systems and introduced into the commercial market in 2010.
Amvac develops crop protection chemicals for agricultural and commercial purposes. The company has a worldwide workforce of more than 400 employees in eight locations, including a recently opened site in the Netherlands. Quorum OnQ protects six sites, moving data to the main data center. Backups are done during the day on local appliances. After hours, the data is replicated to a DR site and then to another DR site hosted by Quorum.
“After the data is replicated to the DR site, the data is replicated again to our secondary DR site, which is our biggest site,” Laxamana said. “Then the data is replicated to the cloud. So the first DR location is our co-located data center and the secondary DR [site is] our largest location. The third is the cloud because we use Quorum’s DRaaS.”
Amvac’s previous data protection configuration included managing eight physical tape libraries.
“It was not fun managing it,” Laxamana said. “And when we had legal discovery, we had to go through 10 years of data. We kept tapes at Iron Mountain, but it became very expensive so we brought it on premises.”
Laxamana said he looked for a better data protection system for two years before finding Quorum. Amvac looked at Commvault but found it too expensive and not user-friendly enough. Laxamana and his team also looked at Unitrends. At the time, Veeam Software only supported virtual machines, and Amvac needed to protect physical servers. Laxamana said Unitrends was the closest that he found to Quorum OnQ.
“The biggest [plus] with Quorum was that the interface was much more user-friendly,” he said. “It’s more integrated. With Unitrends, you need a third party to integrate Microsoft Exchange.”