Tag Archives: Public

What are some considerations for a public folders migration?


A public folders migration from one version of Exchange to another can tax the skills of an experienced administrator — but there’s another level of complexity when cloud enters the mix.

A session at last week’s Virtualization Technology Users Group event in Foxborough, Mass., detailed the nuances of Office 365 subscription offerings and the migration challenges administrators face. Microsoft offers a la carte choices for companies that wish to sign up for a single cloud service, such as Exchange Online, and move the messaging platform into the cloud, said Michael Shaw, a solution architect for Office 365 at Whalley Computer Associates in Southwick, Mass., in his presentation.

Microsoft offers newer collaboration services in Office 365, but some IT departments cling to one holdover that the company cannot extinguish — public folders. This popular feature, introduced in 1996 with Exchange 4.0, gives users a shared location to store documents, contacts and calendars.

For companies on Exchange 2013/2016, Microsoft did not offer a way to move “modern” public folders — called “public folder mailboxes” after an architecture change in Exchange 2013 — to Office 365 until March 2017. Prior to that, many organizations either developed their own public folders migration process, used a third-party tool or brought in experts to help with the transition.

Organizations that want to use existing public folders after a switch from on-premises Exchange to Office 365 should be aware of the proper sequence to avoid issues with a public folders migration, Shaw said.

Most importantly, public folders should migrate over last. That’s because mailboxes in Office 365 can access a public folder that is on premises, but a mailbox that is on premises cannot access public folders in the cloud, Shaw said.

“New can always access old, but old can’t access new,” he said.

IT admins should keep in mind, however, that Microsoft dissuades customers from using public folders for document use due to potential issues when multiple people try to work on the same file. Instead, the company steers Office 365 shops to SharePoint Online for document collaboration, and the Groups service for shared calendars and mobile device access.

In another attempt to prevent public folders migration to Office 365, Microsoft caps public folder mailboxes in Exchange Online at 1,000. They also come with a limit of 50 GB per mailbox in the lower subscription levels and a 100 GB quota in the higher E3 and E5 tiers. Public folder storage cannot exceed 50 TB.

Still, support for public folders has no foreseeable end despite Microsoft’s efforts to eradicate the feature. Microsoft did not include public folders in Exchange Server 2007, but reintroduced it in a service pack after significant outcry from customers, Shaw said. Similarly, there was no support for public folders when Microsoft introduced Office 365 in 2011, but it later buckled to customer demand.

Putting data on the beat — public safety intelligence-led policing

While most public safety agencies operate in a fog of big data, it doesn’t have to be this way. There are approaches to improving data integration and analytics that substantially enhance policing. Broadly these initiatives fall into three categories:

Knowing before:

Hindsight, as they say, is 20/20. But a retrospective view only gets you so far in terms of applicable intelligence. Machine learning can be your force multiplier—because it offers the possibility of actual foresight in real-time situations. Using predictive, cloud-based analytics, it is possible to identify subtle patterns in data streams that lead to advanced awareness of crimes about to be committed or emergencies about to occur. In this way, the sort of intuition that a seasoned police officer has can be extended to provide an always-on view. For example, individual activities that seem innocuous might collectively trigger suspicion or flag an increased risk when aggregated and analyzed by machine learning algorithms—such as shifts in travel or purchase patterns, or social media activity.

Knowing in the moment:

No doubt every public safety agency wishes it had an omniscient, 360-degree view of the scene it is encountering. Today, sensors coupled with real-time data ingestion and analysis (performed in a secure cloud environment) can greatly enhance this situational intelligence; combined with geospatial information, that intelligence allows first responders to correlate data and execute an appropriate response. The relevant technologies include:

  • Connected devices: Synchronized feeds from CAD; RMS; body-worn cameras and in-vehicle camera systems; CCTV; chemical, biological, radiological and nuclear (CBRN) sensors; automated license plate readers (ALPR); acoustic listening devices; and open-source intelligence (OSINT) all help to capture a detailed picture of the event.
  • Geo-spatial awareness: Event information, as well as objects of potential interest nearby, is mapped, providing an enhanced view of the environment. For example, additional sensors are monitored, and nearby schools and businesses identified, along with egress routes, traffic patterns, and hospitals.
  • Other relevant information and histories: By using address-specific identity and licensing data, past calls for service, and other active calls in the area, pertinent information about the residence (such as any weapons or chemicals on the premises) can be instantly surfaced. In the event of fire, chemical or environmental disasters, weather information can be overlaid to help predict at-risk areas.

Knowing after:

As any seasoned investigator can attest, reconstructing events afterwards can be a time-consuming process, with the potential to miss key evidence. Highly integrated data systems and machine learning can significantly reduce the man-hours required of public safety agencies to uncover evidence buried across disparate data pools.

The promise of technology—what’s next?

To learn more about the future of intelligence-led policing and law enforcement in the twenty-first century, download the free whitepaper.

NIST botnet security report recommendations open for comments

The Departments of Commerce and Homeland Security opened public comments on a draft of their botnet security report before the final product heads to the president.

The report was commissioned by the cybersecurity executive order published by the White House on May 11, 2017. DHS and the National Institute of Standards and Technology (NIST), a unit of the Department of Commerce, were given 240 days to complete a report on improving security against botnets and other distributed cyberattacks, and they took every minute possible, releasing the draft botnet security report on Jan. 5, 2018.

The public comment period ends Feb. 12, 2018, and industry experts are supportive of the contents of the report. According to a NIST blog post, the draft report was a collaborative effort.

“This draft reflects inputs received by the Departments from a broad range of experts and stakeholders, including private industry, academia, and civil society,” NIST wrote. “The draft report lays out five complementary and mutually supportive goals intended to dramatically reduce the threat of automated, distributed attacks and improve the resilience of the ecosystem. For each goal, the report suggests supporting activities to be taken by both government and private sector actors.”

The blog post listed the goals for stakeholders laid out by the draft botnet security report as:

  1. Identify a clear pathway toward an adaptable, sustainable, and secure technology marketplace.
  2. Promote innovation in the infrastructure for dynamic adaptation to evolving threats.
  3. Promote innovation at the edge of the network to prevent, detect, and mitigate bad behavior.
  4. Build coalitions between the security, infrastructure, and operational technology communities domestically and around the world.
  5. Increase awareness and education across the ecosystem.

Rodney Joffe, senior vice president, technologist and fellow at Neustar, Inc., an identity resolution company headquartered in Sterling, Va., said NIST and DHS took the right approach in putting together the report.

“The Departments of Commerce and Homeland Security worked jointly on this effort through three approaches — hosting a workshop, publishing a request for comment, and initiating an inquiry through the President’s National Security Telecommunications Advisory Committee (NSTAC),” Joffe told SearchSecurity. “We commend the administration for working with and continuing to seek private sector advice on the best path forward.”

A good start, but… 

Experts, like Michael Patterson, CEO of Plixer, a network traffic analysis company based in Kennebunk, Maine, generally applauded the draft botnet security report as being an in-depth starting point that is missing some key features.

“The report offers a comprehensive framework for threat intelligence sharing, and utilizing NIST to work with a variety of industry groups to establish tighter security protocols and best practices while outlining government and industry transformations to protect the internet,” Patterson told SearchSecurity. “However, it is missing the required teeth to propel industry action. Without a mechanism to define a specific compliance standard, service providers will not have enough incentive to take the steps required to mitigate these risks.”

Stephen Horvath, vice president of strategy and vision for Telos Corporation, a cybersecurity company located in Ashburn, Va., applauded the draft botnet security report for balancing “high level explanations along with some technical details of merit.”

“This report will hopefully drive improvements and awareness of the issues surrounding botnets. Given a few of the more important recommendations are taken and funded, the establishment of an IoT [cybersecurity framework] profile for example, a general overall improvement across all domains should be felt in the next few years,” Horvath told SearchSecurity. “I believe stronger improvements would be possible more quickly if the recommendations included greater focus on enforcing hard requirements rather than incentives.”

Gavin Reid, chief security architect at Recorded Future, a threat intelligence company headquartered in Somerville, Mass., said NIST’s goals are “laudable and the paper takes the approach of providing as comprehensive of a solution as is possible given the transient nature of attacks.”

“It does not address how the goals and technology approach keep up with and change to match changes to the attack vectors,” Reid told SearchSecurity. “The paper also conflates all botnets with IoT botnets. Bots resulting in automated controlled attacks and toolkits are not limited to IoT but have a much wider footprint covering all IT ecosystems.”

The IoT question

Following highly publicized botnet attacks like Mirai, which preyed on insecure IoT devices, the draft report focused on these issues and even noted that “IoT product vendors have expressed desire to enhance the security of their products, but are concerned that market incentives are heavily weighted toward cost and time to market.”

Luke Somerville, manager of special investigations at Forcepoint Security Labs, said the goals and actions within the draft botnet security report are “a good starting point, but the effectiveness of ideas such as baseline security standards for IoT devices will depend entirely on the standards themselves and how they are implemented.”

“Any standards would need to be backed up robustly enough to overcome the strong market incentives against security which exist at present,” Somerville told SearchSecurity. “Increasing awareness and security education is also discussed — something that has been a goal of the security industry for a long time. Ultimately, insecure systems don’t fix themselves, and nor do they make themselves insecure in the first place. By focusing on the human point of contact with data and systems — be that point of contact the developers writing the code controlling the systems, the end-users configuring the systems, or even prospective users in the process of making a purchasing decision — we can attempt to build security in throughout the design and usage lifecycle of a product.”

Botnet security report outcomes

While experts were generally favorable to the draft botnet security report, some were less optimistic about real-world changes that might come from such a report.

Jeff Tang, senior security researcher at Cylance, said he was “not convinced this report will make any significant strides towards deterring the spread of botnets.”

“Trying to develop an accepted security baseline through a consensus-based process when one of your stakeholder’s primary goal is to sell you a new shiny IoT device every year is only going to result in watered-down standards that will be ineffective. As the recent spectacle of CPU bugs has shown, speed is the enemy of security. If you’re rushing to release a new device every year, security is going to be nonexistent,” Tang told SearchSecurity. “Additionally, secure development best practices haven’t changed much in the last decade, but judging by the reports of various device vulnerabilities, manufacturers have not voluntarily adopted these best practices.”

This is not the work of a moment; this is evolution over thousands of software design lifecycles.
Pam Dingle, principal technical architect at Ping Identity

Pam Dingle, principal technical architect at Ping Identity, an identity security company headquartered in Denver, said “changing ecosystems is difficult” and it will take a concerted effort by vendors and CISOs alike to make the change real, otherwise “the effects will likely be limited.”

“It is up to those who see the value in the recommended actions to put the manpower into participating in standards groups, collaborating with adjacent vendor spaces to make integration easier and more pattern-based, and demanding that a shared defense strategy stay high in priority lists,” Dingle told SearchSecurity. “This is not the work of a moment; this is evolution over thousands of software design lifecycles, and even then, the mass of legacy devices out there with no update capabilities will be shackles on our collective legs for a long time to come. We have to start.”

Frost Science Museum IT DR planning braced for worst, survived Irma

When you open a large public facility right on the water in Miami, a good disaster recovery setup is an essential task for an IT team. Hurricane Irma’s assault on Florida in September 2017 made that clear to the Phillip and Patricia Frost Museum of Science team.

The expected Category 5 hurricane moving in on Florida had the new Frost Science Museum square in its sights. Irma turned out to be less threatening to Miami than feared, and the then-4-month-old building suffered no major damage. Still, the museum’s vice president of technology said he felt prepared for the worst with his IT DR planning.

When preparing to open the museum at its 250,000-square-foot location on the Miami waterfront, technology chief Brooks Weisblat installed a new Dell EMC SAN in a fully redundant data center and set up a colocation site in Atlanta as part of the museum’s disaster recovery plan. The downgraded Category 4 hurricane dumped water into the building, but did no serious damage and caused no downtime.

Frost Science Museum's Brooks Weisblat

The new Frost Science Museum building features three diesel generators and redundant power, including 20 minutes of backup power in the battery room that should provide enough juice until the backup generators come online. While much of southern Florida lost power during Irma, the museum did not.

“We’re sitting right on the water. It was supposed to be a major hurricane coming straight through Miami. But six hours before hitting, it veered off, so it wasn’t a direct hit,” Weisblat said. “We have two weather stations on the building, and we recorded force winds of 90 to 95 miles per hour. It could have been 190 mile-per-hour winds, and that would have been a different story.”

Advance warning of the hurricane prompted the museum’s team to bolster its IT DR planning.

“The hurricane moved us to get all of our backups in order,” Weisblat said. “Opening the building was intensive. We had backups internally, but we didn’t have off-site backups yet. It pushed us to get a colocated data center in Atlanta when the hurricane warnings came about a week before. At least we had a lot of advance notice for this one. Except for some water here and there, the museum did well.”

The Frost Science Museum raised $330 million in funding to build the new center in downtown Miami, closing its Coconut Grove site in August 2015. Museum organizers said they hoped to attract 750,000 visitors in the first year at the new site. From its May opening through Oct. 31, more than 525,000 people visited the museum.

Shifting to SAN, all-flash

When moving, Frost Science installed a dual-controller Dell EMC SC9000 — formerly Compellent — all-flash array, with 112 TB of capacity connected to 10 Dell EMC PowerEdge servers virtualized with VMware. As part of its IT DR planning, the museum uses Veeam Software to back up virtual machines to a Dell PowerEdge R530 server, with 40 TB of hard disk drive storage on site, and it replicates those backups to another PowerEdge server in the Atlanta location.

The hurricane moved us to get all of our backups in order.
Brooks Weisblat, vice president of technology, Frost Science Museum

“If something happens at this site, we’re able to launch a limited number of VMs to power finance, ticketing and reporting,” Weisblat said. “We can control those servers out of Atlanta if we’re unable to get into the building.”

Before opening the new building, Weisblat’s team migrated all VMs between the old and new sites. The process took three weeks. “We had to take down services, copy them to drives a few miles away, then bring those into the new environment and do an import into a new VM cluster,” he said.

The data center sits on the third floor of the new building, 60 feet above sea level. It takes up 16 full cabinets, plus eight racks for networking, Weisblat said.

Frost Science Museum had no SAN in the old building. Its IT ran on 23 servers. Weisblat said he migrated the stand-alone servers into the VMware cluster on the Compellent array before moving. “That way, when the new system came online, it would be easy to move those servers over as files, and we would not have to do migrations into VMware in the new building during the crush time for our opening,” he said.

The Dell EMC SAN runs all critical applications, including the customer relationship management system, exhibit content management, property management system software, the museum website, online ticketing and building security management systems. The security system controls electricity, lights, solar power, centralized antivirus deployments and network access control. “Everything is powered off this one system,” Weisblat said.

The SAN has two Brocade — now Broadcom — Fibre Channel switches for redundancy. “We can unplug hosts; everything keeps running,” Weisblat said. “We can unplug one of the storage arrays, and everything keeps running. The top-of-rack 10-gig [Extreme Avaya Ethernet] switches are also fully redundant. We can lose one of those.”

He said since installing the new array, one solid-state drive went out. “The SSD sent us an alert, and Dell had parts to us in two hours. Before I knew something was wrong, they contacted me.”

Whether it’s a failed SSD or an impending hurricane, early alerts and IT DR planning certainly help when dealing with disasters.

Google Cloud Platform services engage corporate IT

Google continues to pitch its public cloud as a hub for next-generation applications, but in 2017, the company took concrete steps to woo traditional corporations that haven’t made that leap.

Google Cloud Platform services still lag behind Amazon Web Services (AWS) and Microsoft Azure, and Google’s lack of experience with enterprise IT is still seen as GCP’s biggest weakness. But the company made important moves this year to address that market’s needs, with several updates around hybrid cloud, simplified migration and customer support.

The shift to attract more than just the startup crowd has steadily progressed since the hire of Diane Greene in 2015. In 2017, her initiatives bore their first fruit.

Google expanded its Customer Reliability Engineering program to help new customers — mostly large corporations — model their architectures after Google’s. The company also added tiered support services for technical and advisory assistance.

Other security features included Google Cloud Key Management Service and the Titan chip, which takes security down to the silicon. Dedicated Interconnect taps directly into Google’s network for consistent and secure performance. Several updates and additions highlighted Google’s networking capabilities, which it sees as an advantage over other platforms, such as a slower and cheaper networking tier that Google claims is still on par with the competition’s best results for IT shops.

Google Cloud Platform services also expanded into hybrid cloud through separate partnerships with Cisco and Nutanix, with products from each partnership expected to be available in 2018. The Cisco deal involves a collection of products for cloud-native workloads and will lean heavily on open source projects Kubernetes and Istio. The Nutanix deal is closer to the VMware on AWS offering as a lift-and-shift bridge between the two environments.

And for those companies that want to move large amounts of data from their private data centers to the cloud, Google added its own version of AWS’ popular Snowball device. Transfer Appliance is a shippable server that can be used to transfer up to 1 PB of compressed data to Google cloud data centers.

In many ways, GCP is where Microsoft Azure was around mid-2014, as it tried to frame its cloud approach and put together a cohesive strategy, said Deepak Mohan, an analyst with IDC.

The price point is fantastic and the product offering is fantastic, but they need to invest in finding how they can approach the enterprise at scale.
Deepak Mohan, analyst, IDC

“They don’t have the existing [enterprise] strength that Microsoft did, and they don’t have that accumulated size that AWS does,” he said. “The price point is fantastic and the product offering is fantastic, but they need to invest in finding how they can approach the enterprise at scale.”

To help strengthen its enterprise IT story, Google infused its relatively small partner ecosystem — a critical piece to help customers navigate the myriad low- and high-level services — through partnerships forged with companies such as SAP, Pivotal and Rackspace. Though still not in the league of AWS or Azure, Google also has stockpiled some enterprise customers of its own, such as Home Depot, Coca-Cola and HSBC, to help sell its platform to that market. And it also hired former Intel data center executive Diane Bryant as COO in November.

GCP also more than doubled its global footprint, with new regions in Northern Virginia, Singapore, Sydney, London, Germany, Brazil and India.

Google Cloud Platform services

Price and features still matter for Google

Price is no longer the first selling point for Google Cloud Platform services, but it remained a big part of the company’s cloud story in 2017. Google continued to drop prices across various services, and it added a Committed Use Discount for customers that purchase a certain monthly capacity for one to three years. Those discounts were particularly targeted at large corporations, which prefer to plan ahead with spending when possible.

There were plenty of technological innovations in 2017, as well. Google Cloud Platform was the first to use Intel’s next-gen Skylake processors, and several more instance types were built with GPUs. The company also added features to BigQuery, one of its most popular services, and improved its interoperability with other Google Cloud Platform services.

Cloud Spanner, which sprang from an internal Google tool, addresses challenges with database applications on a global scale that require high availability. It provides the consistency of transactional relational databases with the distributed, horizontal scaling associated with NoSQL databases. Cloud Spanner may be too advanced for most companies, but it made enough waves that Microsoft soon followed with its Cosmos DB offering, and AWS upgraded its Aurora and DynamoDB services.

That illustrates another hallmark of 2017 for Google’s cloud platform: On several fronts, the company’s cloud provider competitors came around to Google’s way of thinking. Kubernetes, the open source tool spun out of Google in 2014, became the de facto standard in container orchestration. Microsoft came out with its own managed Kubernetes service this year, and AWS did the same in late November — much to the delight of its users.

Machine learning, another area into which Google has pushed headlong for the past several years, also came to the forefront, as Microsoft and Amazon launched — and heavily emphasized — their own new products that require varying levels of technical knowhow.

Coming into this year, conversations about the leaders in the public cloud centered on AWS and Microsoft, but by the end of 2017, Google managed to overtake Microsoft in that role, said Erik Peterson, co-founder and CEO of CloudZero, a Boston startup focused on cloud security and DevOps.

“They really did a good job this year of distinguishing the platform and trying to build next-generation architectures,” he said.

Azure may be the default choice for Windows, but Google’s push into cloud-native systems, AI and containers has planted a flag as the place to do something special for companies that don’t already have a relationship with AWS, Peterson said.

Descartes Labs, a geospatial analytics company in Los Alamos, N.M., jumped on Google Cloud Platform early on partly because of Google’s activity with containers. Today, about 90% of its infrastructure is on GCP, said Tim Kelton, the company’s co-founder and cloud architect. He is pleased not only with how Google Container Engine manages its workloads and responds to new features in Kubernetes, but how other providers have followed Google’s lead.

“If I need workloads on all three clouds, there’s a way to federate that across those clouds in a fairly uniform way, and that’s something we never had with VMs,” Kelton said.

Kelton is also excited about Istio, an open source project led by Google, IBM and Lyft that sits on top of Kubernetes and creates a service mesh to connect, manage and secure microservices. The project looks to address issues around governance and telemetry, as well as things like rate limits, control flow and security between microservices.

“For us, that has been a huge part of the infrastructure that was missing that is now getting filled in,” he said.

Trevor Jones is a senior news writer with SearchCloudComputing and SearchAWS. Contact him at tjones@techtarget.com.

Azure feature updates in 2017 play catch up to AWS

Microsoft Azure already solidified its position as the second most popular public cloud, and critical additions in 2017 brought the Azure feature set closer to parity with AWS.

In some cases, Azure leapfrogged its competition. But a bevy of similar products bolstered the platform as a viable alternative to Amazon Web Services (AWS). Some Microsoft initiatives broadened the company’s database portfolio. Others lowered the barrier to entry for Azure, and pushed further into IoT and AI. And the long-awaited, on-premises machine, Azure Stack, seeks to tap surging interest to make private data centers obsolete.

Like all the major public cloud providers, Microsoft Azure doubled down on next-generation applications that rely on serverless computing and machine learning. Among the new products are Machine Learning Workbench, intended to improve productivity in developing and deploying AI applications, and Azure Event Grid, which helps route and filter events built in serverless architectures. Some important upgrades to Azure IoT Suite included managed services for analytics on data collected through connected devices, and Azure IoT Edge, which extends Azure functionality to connected devices.

Many of those Azure features are too advanced for most corporations that lack a team of data scientists. However, companies have begun to explore other services that rely on these underlying technologies in areas such as vision, language and speech recognition.

AvePoint, an independent software vendor in Jersey City, N.J., took note of the continued investment by Microsoft this past year in its Azure Cognitive Services, a turnkey set of tools to get better results from its applications.

“If you talk about business value that’s going to drive people to use the platform, it’s hard to find a more business-related need than helping people do things smartly,” said John Peluso, Microsoft regional director at AvePoint.

Microsoft also joined forces with AWS on Gluon, an open source, deep learning interface intended to simplify the use of machine learning models for developers. And the company added new machine types that incorporate GPUs for AI modeling.

Azure compute and storage get some love, too

Microsoft’s focus wasn’t solely on higher-level Azure services. In fact, the areas in which it caught up the most with AWS were in its core compute and storage capabilities.

The B-Series are the cheapest machines available on Azure and are designed for workloads that don’t always need great CPU performance, such as test and development or web servers. But more importantly, they provide an on-ramp to the platform for those who want to sample Azure services.

Other Azure feature additions included the M-Series machines, which can support SAP workloads with up to 20 TB of memory; a new bare-metal VM; and the incorporation of Kubernetes into Azure’s container service.

“I don’t think anybody believes they are on par [with AWS] today, but they have momentum at scale and that’s important,” said Deepak Mohan, an analyst at IDC.

In storage, Managed Disks is a new Azure feature that handles storage resource provisioning as applications scale. Archive Storage provides a cheap option to house data as an alternative to Amazon Glacier, as well as a standard access model to manage data across all the storage tiers.

Reserved VM Instances emulate AWS’ popular Reserved Instances to provide significant cost savings for advance purchases and deeper discounts for customers that link the machines to their Windows Server licenses. Azure also added low-priority VMs — the equivalent of AWS Spot Instances — that can provide even further savings, but they should be limited to batch-type projects because they can be preempted.

It looks to me like Azure is very much openly and shamelessly following the roadmap of AWS.
Jason McKay, senior vice president and CTO, Logicworks

The addition of Azure Availability Zones was a crucial update for mission-critical workloads that need high availability. It brings greater fault tolerance to the platform through the ability to spread workloads across physically separate zones within a region and achieve a guaranteed 99.99% uptime.

“It looks to me like Azure is very much openly and shamelessly following the roadmap of AWS,” said Jason McKay, senior vice president and CTO at Logicworks, a cloud managed service provider in New York.

And that’s not a bad thing, because Microsoft has always been good at being a fast follower, McKay said. There’s a fair amount of parity in the service catalogs for Azure and AWS, though Azure’s design philosophy is a bit more tightly coupled between its services. That means potentially slightly less creativity, but more functionality out of the box compared to AWS, McKay said.

Databases and private data centers

Azure Database Migration Service has helped customers transition from their private data centers to Azure. Microsoft also added full compatibility between SQL Server and the fully managed Azure SQL database service.

Azure Cosmos DB, a fully managed NoSQL cloud database, may not see a wave of adoption any time soon, but has the potential to be an exciting new technology to manage databases on a global scale. And in Microsoft’s continued evolution to embrace open source technologies, the company added MySQL and PostgreSQL support to the Azure database lineup as well.

The company also improved management and monitoring, which incorporates tools from Microsoft’s acquisition of Cloudyn, as well as added security. Azure confidential computing encrypts data while in use, in addition to encryption options at rest and in transit, while Azure Policy added new governance capabilities to enforce corporate rules at scale.

Other important security upgrades include Azure App Service Isolated, which made it easier to install dedicated virtual networks in the platform-as-a-service layer. The Azure DDoS Protection service aims to protect against DDoS attacks, new capabilities put firewalls around data in Azure Storage, and endpoints within the Azure virtual network limit the exposure of data to the public internet when accessing various multi-tenant Azure services.

Azure Stack’s late arrival

Perhaps Microsoft’s biggest cloud product isn’t part of its public cloud. After two years of fanfare, Azure Stack finally went on sale in late 2017. It brings many of the tools found on the Azure public cloud into private facilities, for customers that have higher regulatory demands or simply aren’t ready to vacate their data centers.

“That’s a huge area of differentiation for Microsoft,” Mohan said. “Everybody wants true compatibility between services on premises and services in the cloud.”

Rather than build products that live on premises, AWS joined with VMware to build a bridge for customers that want their full VMware stack on AWS either for disaster recovery or extension of their data centers. Which approach will succeed depends on how protracted the shift to public cloud becomes — and a longer delay in that shift favors Azure Stack, Mohan said.

Trevor Jones is a senior news writer with SearchCloudComputing and SearchAWS. Contact him at tjones@techtarget.com.

Azure migration takes hostile approach to lure VMware apps

The two biggest public cloud providers have set their sights on VMware workloads, though they’re taking different approaches to accommodate the hypervisor heavyweight and its customers.

A little over a year after Amazon Web Services (AWS) and VMware pledged to build a joint offering to bridge customers’ public and private environments, Microsoft this week introduced a similar service for its Azure public cloud. There’s one important distinction, however: VMware is out of the equation, a hostile move met with equal hostility from VMware, which said it would not support the service.

Azure Migrate offers multiple ways to get on-premises VMware workloads to Microsoft’s public cloud. Customers now can move VMware-based applications to Azure with a free tool to assess their environments, map out dependencies and migrate using Azure Site Recovery. Once there, customers can optimize workloads for Azure via cost management tools Microsoft acquired from Cloudyn.

This approach eschews the VMware virtualization and adapts these applications into a more cloud-friendly architecture that can use a range of other Azure services. A multitude of third-party vendors offer similar capabilities. It’s the other part of the Azure migration service that has drawn the ire of VMware.

VMware virtualization on Azure is a bare-metal subset of Azure Migrate that can run a full VMware stack on Azure hardware. It’s expected to be generally available sometime next year. This offering is a partnership with unnamed VMware-certified partners and VMware-certified hardware, but it notably cuts VMware out of the process, and out of the revenue stream.

In response, VMware criticized Microsoft’s characterization of the Azure migration service as part of a transition to public cloud. In a blog post, Ajay Patel, VMware senior vice president, cited the lack of joint engineering between VMware and Microsoft and said the company won’t recommend or support the product.

This isn’t the first time these two companies have butted heads. Microsoft launched Hyper-V almost a decade ago with similar aggressive tactics to pull companies off VMware’s hypervisor, said Steve Herrod, who was CTO at VMware at the time. Herrod is currently managing director at venture capital firm General Catalyst.

Part of the motivation here could be Microsoft posturing either to negotiate a future deal with VMware or to ensure it doesn’t lose out on these types of migration, Herrod said. And of course, if VMware had its way, its software stack would be on all the major clouds, he added.

Jeff Kato, analyst

VMware on AWS, which became generally available in late August, is operated by VMware, and through the company’s Cloud Foundation program ports its software-defined data centers to CenturyLink, Fujitsu, IBM Cloud, NTT Communications, OVH and Rackspace. The two glaring holes in that swath of partnerships are Azure and Google Cloud, widely considered to be the second and third most popular public clouds behind AWS.

Companies have a mix of applications: some are well-suited to transition to the cloud, while others must stay inside a private data center or can’t be re-architected for the cloud. Hence, a hybrid cloud strategy has become an attractive option, and VMware’s recent partnerships have made companies feel more comfortable with the public cloud and with curbing the management of their own data centers.

“I talk to a lot of CIOs and they love the fact that they can buy VMware and now feel VMware has given them the all-clear to being in the cloud,” Herrod said. “It’s purely promise that they’re not locked into running VMware in their own data center that has caused them to double down on VMware.”

The fact that they have to offer VMware bare metal to accelerate things tells you there are workloads people are reluctant to move to the public cloud, whether that’s on Hyper-V or even AWS.
Jeff Kato, analyst, Taneja Group

VMware virtualization on Azure is also an acknowledgement that some applications are not good candidates for the cloud-native approach, said Jeff Kato, an analyst at Taneja Group in Hopkinton, Mass.

“The fact that they have to offer VMware bare metal to accelerate things tells you there are workloads people are reluctant to move to the public cloud, whether that’s on Hyper-V or even AWS,” he said.

Some customers will prefer VMware on AWS, but it won’t be a thundering majority, said Carl Brooks, an analyst at 451 Research. There’s also no downside for Microsoft to support what customers already do, and the technical aspect of this move is relatively trivial, he added.

“It’s a buyer’s market, and none of the major vendors are going to benefit from trying to narrow user options — quite the opposite,” Brooks said.

Perhaps it’s no coincidence that Microsoft debuted the Azure migration service in the days leading up to AWS’ major user conference, re:Invent, where there is expected to be more talk about the partnership between Amazon and VMware. It’s also notable that AWS is only a public cloud provider, so it doesn’t have the same level of competitive friction as there has been historically between Microsoft and VMware, Kato said.

“Microsoft [is] trying to ride this Azure momentum to take more than their fair share of [the on-premises space], and in order to do that, they’re going to have to come up with a counter attack to VMware on AWS,” he said.

Despite VMware’s lack of support for the Azure migration service, it’s unlikely it can do anything to stop it, especially if it’s on certified hardware, Kato said. Perhaps VMware could somehow interfere with how well the VMware stack integrates with native Azure services, but big enterprises could prevent that, at least for their own environments.

“If the customer is big enough, they’ll force them to work together,” Kato said.

Trevor Jones is a senior news writer with SearchCloudComputing and SearchAWS. Contact him at tjones@techtarget.com.

Microsoft launches public preview of Azure Location Based Services with TomTom | ZDNet

Microsoft is launching a public preview of a location-based Azure cloud service that’s designed to integrate well with Internet of things deployments and asset tracking.

Azure Location Based Services will be powered by TomTom’s Online APIs, but can leverage other location technologies in the future. Azure LBS will use the same billing, account and APIs as other Azure services.

Microsoft’s aim is to give cloud developers geospatial data that can be integrated with smart city and Internet of things deployments. Target industries include manufacturing, automotive, logistics, smart cities and retail. A year ago, Microsoft laid out plans to integrate geographic data with Azure.

Sam George, partner director of Microsoft Azure IoT, said Microsoft LBS is aimed at providing one dashboard to manage services and templates enterprises can use to track assets. “As cloud and IoT transform businesses, geospatial data capabilities are needed for connected devices and assets,” said George. “Many of these assets move and monitoring and viewing them in a location is important. It’s part of a broader IoT digital feedback loop.”

The capabilities in Azure LBS–mapping, search, routing, traffic and time zones–are designed to be used for everything from asset tracking for transportation fleets to autonomous driving.

Cubic Telecom, an Irish telecommunications company for the automotive industry, built a proof of concept that uses Azure LBS to visualize existing locations of electric vehicle charging stations. Here’s a look at Cubic Telecom’s charging station finder.


Fathym, an IoT company, is using Azure LBS to visualize road conditions for Alaska’s department of transportation. Fathym’s road and route weather forecasting will be introduced at the LA Auto Show.

Azure LBS can be used as part of a broader suite or as a standalone service. Azure LBS will have consumption-based pricing, and George noted that enterprise location data is private. For the public preview, Azure Location Based Services will offer a two-tiered pricing model: a set of free transactions per account and then 1,000 transactions for $0.50.

General availability will be in calendar 2018.

Announcing Azure Location Based Services public preview

Today we announced the Public Preview availability of Azure Location Based Services (LBS). LBS is a portfolio of geospatial service APIs natively integrated into Azure that enable developers, enterprises and ISVs to create location-aware apps and IoT, mobility, logistics and asset tracking solutions. The portfolio currently comprises services for Map Rendering, Routing, Search, Time Zones and Traffic. In partnership with TomTom and in support of our enterprise customers, Microsoft has added native location capabilities to the Azure public cloud.

Azure LBS has a robust set of geospatial services atop a global geographic data set. These services comprise five primary REST services and a JavaScript Map Control. Each service has a unique set of capabilities atop the base map data, and all are built in unison and in accordance with Azure standards, making it easy to work interoperably across the services. Additionally, Azure LBS is fully hosted and integrated into the Azure cloud, meaning the services are compliant with all Azure fundamentals for privacy, usability, global readiness, accessibility and localization. Users can manage all Azure LBS account information from within the Azure portal and are billed like any other Azure service.

Azure LBS uses key-based authentication. To get a key, go to the Azure portal and create an Azure LBS account. Creating an Azure LBS account automatically generates two Azure LBS keys, and both keys will authenticate requests to the various Azure LBS services. Once you have your account and your keys, you’re ready to start accessing Azure Location Based Services. The API model is simple to use: simply parameterize your URL request to get rich responses from the service:

Sample Address Search Request: atlas.microsoft.com/search/address/json?api-version=1&query=1 Microsoft Way, Redmond, WA
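
As a rough sketch of what such a call looks like in practice, the JavaScript below issues the address search request above from the browser. The subscription-key query parameter and the shape of the JSON response are assumptions based on the key-based authentication described earlier, not confirmed details of the preview API.

// Minimal sketch: call the Azure LBS Search (address geocoding) service.
// Assumes the account key is passed as a "subscription-key" query parameter.
var AZURE_LBS_KEY = "[AZURE_LBS_KEY]"; // from your Azure LBS account in the Azure portal

function callAzureLbs(path, params) {
  var query = Object.keys(params)
    .map(function (k) { return k + "=" + encodeURIComponent(params[k]); })
    .join("&");
  return fetch("https://atlas.microsoft.com/" + path +
      "?api-version=1&subscription-key=" + AZURE_LBS_KEY + "&" + query)
    .then(function (response) { return response.json(); });
}

callAzureLbs("search/address/json", { query: "1 Microsoft Way, Redmond, WA" })
  .then(function (result) {
    console.log(result); // inspect the returned matches and their coordinates
  });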

Azure LBS enters public preview with five distinct services: Render (for maps), Route (for directions), Search, Time Zones and Traffic, plus a JavaScript Map Control. Each of these services is described in more detail below.

Azure Map Control

The Azure Map Control is a JavaScript web control with built-in capabilities for fetching Azure LBS vector map tiles, drawing data atop them and interacting with the map canvas. The Azure Map Control allows developers to layer their data atop Azure LBS Maps in both vector and raster layers. That means enterprise customers who have coordinates for points, lines and polygons, or who have geo-annotated maps of a manufacturing plant, a shopping mall or a theme park, can overlay these rasterized maps as a new layer atop the Azure Map Control. The map control has listeners for clicking the map canvas and getting coordinates from the pixels, allowing customers to send those coordinates to the services to search for businesses around that point, find the nearest address or cross street to that point, generate a route to or from that point, or even connect to their own database of information to find geospatially referenced information important to their business that is near that point.

Azure Location Based Services Map Control

The Azure Map Control makes it simple for developers to jumpstart their development. By adding a few lines of code to any HTML document, you get a fully functional map.

<!DOCTYPE html>
<html>
<head>
    <title>Hello Azure LBS</title>
    <!-- The SDK URLs and map options below follow the Azure LBS public preview; check the current documentation if they differ. -->
    <link rel="stylesheet" href="https://atlas.microsoft.com/sdk/css/atlas.min.css?api-version=1" type="text/css" />
    <script src="https://atlas.microsoft.com/sdk/js/atlas.min.js?api-version=1"></script>
    <style>
        #map { width: 100%; height: 400px; }
    </style>
</head>
<body>
    <h1>Hello Azure LBS</h1>
    <div id="map"></div>
    <script>
        // Create a map in the "map" div, authenticated with your Azure LBS key.
        var map = new atlas.Map("map", {
            "subscription-key": "[AZURE_LBS_KEY]"
        });
    </script>
</body>
</html>

In the above code sample, be sure to replace [AZURE_LBS_KEY] with your actual Azure LBS Key created with your Azure LBS Account in the Azure portal.

Render Service

The Azure LBS Render Service is used for fetching maps. The Render Service is the basis for maps in Azure LBS and powers the visualizations in the Azure Map Control. Users can request vector-based map tiles to render data and apply styling on the client. The Render Service also provides raster maps if you want to embed a map image into a web page or application. Azure LBS maps have high-fidelity geographic information for over 200 regions around the world and are available in 35 languages and two versions of neutral ground truth.

Azure Location Based Services Render Service

The Azure LBS cartography was designed from the ground up and created with the enterprise customer in mind. There are lower amounts of information at lower levels of delineation (zooming out) and higher-fidelity information as you zoom in. The design is meant to inspire enterprise customers to render their data atop Azure LBS Maps without additional detail bleeding through and disrupting the value of customer data.

Routing Service

The Azure LBS Routing Service is used for getting directions, but not just point A to point B directions. The Azure LBS Routing Service has a slew of map data available to the routing engine, allowing it to modify the calculated directions based on a variety of scenarios. First, the Routing Service provides customers the standard routing capabilities they would expect, with a step-by-step itinerary. Routes can be calculated as the fastest or the shortest, or to avoid highly congested roads or traffic incidents. Traffic-based routing comes in two flavors: “historic,” which is great for future route planning scenarios when users would like to have a general idea of what traffic tends to look like on a given route; and “live,” which is ideal for active routing scenarios when a user is leaving now and wants to know where traffic exists and the best ways to avoid it.

Azure LBS Routing will allow for commercial vehicle routing, providing alternate routes made just for trucks. The commercial vehicle routing supports parameters such as vehicle height, weight, the number of axles and hazardous material contents, all to choose the best, safest and recommended roads for transporting the haul. The Routing Service provides a variety of travel modes, including walking, biking, motorcycling, taxiing or van routing.

Azure Location Based Services Route Service

Customers can also specify up to 50 waypoints along their route if they have pre-determined stops to make. If customers are looking for the best order in which to stop along their route, they can have Azure LBS determine the best order in which to route to multiple stops by passing up to 20 waypoints into the Routing Service where an itinerary will be generated for them.

Using the Azure LBS Route Service, customers can also specify arrival times when they need to be at a specific location by a certain time. Using its massive amount of traffic data (almost a decade of probe data captured per road geometry at high-frequency intervals), Azure LBS can let customers know the best time of departure for a given day of the week and arrival time. Additionally, Azure LBS can use current traffic conditions to notify customers of a road change that may impact their route and provide updated times and/or alternate routes.

Azure LBS can also take into consideration the engine type being used. By default, Azure LBS assumes a combustion engine is being used; however, if an electric engine is in use, Azure LBS will accept input parameters for power settings and generate the most energy-efficient route.

The Routing Service also allows for multiple, alternate routes to be generated in a single query, which saves on over-the-wire transfer. Customers can also specify that they would like to avoid specific route types, such as toll roads, freeways, ferries or carpool roads.

Sample Commercial Vehicle Route Request: atlas.microsoft.com/route/directions/json?api-version=1&query=52.50931,13.42936:52.50274,13.43872&travelMode=truck
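
Continuing the sketch from the Search example and reusing its callAzureLbs() helper, the snippet below requests truck directions between the two coordinate pairs in the sample request above. Reading the first itinerary from a routes[0].summary property is an assumption about the response shape, not documented preview behavior.

// Sketch: truck routing between two points, reusing the callAzureLbs() helper
// from the earlier Search example. travelMode=truck comes from the sample above.
var origin = "52.50931,13.42936";
var destination = "52.50274,13.43872";

callAzureLbs("route/directions/json", {
  query: origin + ":" + destination,
  travelMode: "truck"
}).then(function (result) {
  // Assumed response shape: an array of routes, each carrying a summary.
  var summary = result.routes && result.routes[0] && result.routes[0].summary;
  console.log(summary); // e.g. route length and travel time for the recommended roads
});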

Search Service

The Azure LBS Search Service provides the ability for customers to find real-world objects and their respective locations. The Search Service provides three major functions:

  1. Geocoding: Finding addresses, places and landmarks
  2. POI Search: Finding businesses based on a location
  3. Reverse Geocoding: Finding addresses or cross streets based on a location

Azure Location Based Services Search Service

With the Search Service, customers can find addresses and places from around the world. Azure LBS supports address-level geocoding in 38 regions, cascading to house-number, street-level and city-level geocoding for other regions of the world. Customers can pass addresses into the service in a structured address form, or they can use an unstructured form when they want to allow their customers to search for addresses, places or businesses in a single query. Users can restrict their searches by region or bounding box and can supply a specific coordinate to influence the search results and improve quality. By reversing the query and providing a coordinate, say from a GPS receiver, customers can get the nearest address or cross street returned from the service.

The Azure LBS Search Service also allows customers to query for business listings. The Search Service contains hundreds of categories and hundreds of sub-categories for finding businesses or points of interest around a specific point or within a bounding area. Customers can query for businesses based on brand name or general category and filter those results based on location, bounding box or region.

Sample POI Search Request (Key Required): atlas.microsoft.com/search/poi/category/json?api-version=1&query=electric%20vehicle%20station&countrySet=FRA
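
As with the other sketches, the example below reuses the callAzureLbs() helper to run the category search from the sample request and list the matches. The results array and its poi and position fields are assumptions about the response shape, shown for illustration only.

// Sketch: find electric vehicle charging stations in France by POI category,
// mirroring the sample request above.
callAzureLbs("search/poi/category/json", {
  query: "electric vehicle station",
  countrySet: "FRA"
}).then(function (result) {
  // Assumed response shape: a results array of matches with poi and position fields.
  (result.results || []).forEach(function (match) {
    console.log(match.poi && match.poi.name, match.position);
  });
});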

Time Zone Service

The Azure LBS Time Zone Service is a first of its kind, providing the ability to query time zones and times for locations around the world. Customers can now submit a location to Azure LBS and receive the respective time zone, the current time in that time zone and the offset to Coordinated Universal Time (UTC). The Time Zone Service provides access to historical and future time zone information, including changes for daylight saving time. Additionally, customers can query for a list of all the time zones and the current version of the data, allowing customers to optimize their queries and downloads. For IoT customers, the Azure LBS Time Zone Service allows for POSIX output, so users can download information to devices that only infrequently access the internet. Additionally, for Microsoft Windows users, Azure LBS can transform Windows time zone IDs to IANA time zone IDs.

Sample Time Zone Request (Key Required): atlas.microsoft.com/timezone/byCoordinates/json?api-version=1&query=32.533333333333331,-117.01666666666667
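
A time zone lookup follows the same pattern. The sketch below reuses the callAzureLbs() helper and simply logs the response, which, per the description above, should carry the time zone, the local time and the UTC offset; the exact field names are not shown here because they are not confirmed.

// Sketch: look up the time zone for a coordinate, reusing the callAzureLbs() helper.
callAzureLbs("timezone/byCoordinates/json", {
  query: "32.533333333333331,-117.01666666666667"
}).then(function (result) {
  // Expect the time zone ID, the local time and the offset to UTC in the response.
  console.log(result);
});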

Traffic Service

The Azure LBS Traffic Service provides our customers with the ability to overlay and query traffic flow and incident information. In partnership with TomTom, Azure LBS will have access to a best-in-class traffic product with coverage in 55 regions around the world. The Traffic Service provides the ability to natively overlay traffic information atop the Azure Map Control for a quick and easy means of viewing traffic issues. Additionally, customers have access to traffic incident information: real-time issues happening on the road, collected through probe information from the roads. The traffic incident information provides additional detail, such as the type of incident and its exact location. The Traffic Service will also provide our customers with details of incidents and flow, such as the distance and time from one’s current position to the “back of the line” and, once a user is in the traffic congestion, the distance and time until they’re out of it.

Azure Location Based Services Traffic Service

Sample Traffic Flow Segment Request: atlas.azure-api.net/traffic/flow/segment/json?api-version=1&unit=MPH&style=absolute&zoom=10&query=52.41072,4.84239
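
The traffic sample above uses a different host (atlas.azure-api.net), so the sketch below builds the URL directly instead of reusing the helper, while still relying on the AZURE_LBS_KEY variable from the first sketch. Passing the key as a subscription-key parameter is, again, an assumption for illustration.

// Sketch: query traffic flow for the road segment nearest a coordinate,
// following the sample request above. Note the atlas.azure-api.net host.
var trafficUrl = "https://atlas.azure-api.net/traffic/flow/segment/json" +
  "?api-version=1&unit=MPH&style=absolute&zoom=10" +
  "&query=" + encodeURIComponent("52.41072,4.84239") +
  "&subscription-key=" + AZURE_LBS_KEY;

fetch(trafficUrl)
  .then(function (response) { return response.json(); })
  .then(function (flow) {
    console.log(flow); // current and free-flow conditions for the matched segment
  });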

Azure Location Based Services are available now in public preview via the Azure portal. Get your account created today.

Datos IO RecoverX backup gets table-specific

Datos IO RecoverX software, designed to protect scale-out databases running on public clouds, now offers query-specific recovery and other features to restore data faster.

RecoverX data protection and management software is aimed at application architects, database administrators and development teams. Built for nonrelational databases, it protects and recovers data locally and on software-as-a-service platforms.

Datos IO RecoverX works with scale-out databases, including MongoDB, Amazon DynamoDB, Apache Cassandra, DataStax Enterprise, Google Bigtable, Redis and SQLite. It supports Amazon Web Services, Google Cloud Platform and Oracle Cloud. RecoverX also protects data on premises.

RecoverX provides semantic deduplication for storage space efficiency and enables scalable versioning for flexible backups and point-in-time recovery.

More security, faster recovery in Datos IO RecoverX 2.5

The newly released RecoverX 2.5 gives customers the ability to recover data by querying specific tables, columns and rows within databases to speed up the restore process. Datos IO calls this feature “queryable recovery.” The software’s advanced database recovery function also includes granular and incremental recovery by selecting specific points in time.

The latest Datos IO RecoverX version also performs streaming recovery for better error-handling. The advanced database recovery capability for MongoDB clusters enables global backup of sharded or partitioned databases. The geographically dispersed shards are backed up in sync to ensure consistent copies in the recovery. Administrators can do local restores of the shards or database partitions to speed recovery.

RecoverX 2.5 also supports Transport Layer Security and Secure Sockets Layer encryptions, as well as X.509 certificates, Lightweight Directory Access Protocol authentication and Kerberos authentication.

With the granular recovery, you can pick and choose what you are looking for. That helps the time to recovery.
Dave Russell, distinguished analyst, Gartner

Dave Russell, distinguished analyst at Gartner, said Datos IO RecoverX 2.5 focuses more on greater control and faster recovery with its advanced recovery features.

“Some of these next-generation databases are extremely large and they are federated. The beautiful thing about databases is they have structure,” Russell said. “Part of what Datos IO does is leverage that structure, so you can pull up the [exact] data you are looking for. Before, you had to back up large databases, and in some cases, you had to mount the entire database to fish out what you want.

“With the granular recovery, you can pick and choose what you are looking for,” he said. “That helps the time to recovery.”

Peter Smails, vice president of marketing and business development at Datos IO, based in San Jose, Calif., said the startup is trying to combine the granularity of traditional backup with the visibility into scale-out databases that traditional backup tools lack.

“With traditional backup, you can restore at the LUN level and the virtual machine level. You can get some granularity,” Smails said. “What you can’t do is have the visibility into the specific construct of the database, such as what is in each row or column. We know the schema.

“Backup is not a new problem,” Smails said. “What we want to do through [our] applications is fundamentally different.”