Google Cloud security gets boost with Secret Manager

Google has added a new managed service called Secret Manager to its cloud platform amid a climate increasingly marked by high-profile data breaches and exposures.

Secret Manager, now in beta, builds on existing Google Cloud security services by providing a central place to store and manage sensitive data such as API keys or passwords.

The system employs the principle of least privilege, meaning only a project’s owners can look at secrets without explicitly granted permissions, Google said in a blog post. Secret Manager works in conjunction with the Cloud Audit Logging service to create access audit trails. These data sets can then be moved into anomaly detection systems to check for breaches and other abnormalities.

All data is encrypted in transit and at rest with AES-256 encryption keys. Google plans to add support for customer-managed keys later, according to the blog.
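
For developers, the workflow is a pair of API calls: create a secret, then add a version containing the actual material. The following is a minimal sketch using the google-cloud-secret-manager Python client (recent releases use the request-dict calling style shown here); the project ID, secret name and payload are placeholders.

    from google.cloud import secretmanager

    client = secretmanager.SecretManagerServiceClient()
    parent = "projects/my-project"  # placeholder project ID

    # Create the secret container; the replication policy controls where it is stored.
    secret = client.create_secret(
        request={
            "parent": parent,
            "secret_id": "db-password",
            "secret": {"replication": {"automatic": {}}},
        }
    )

    # Add the actual secret material as a new version.
    client.add_secret_version(
        request={"parent": secret.name, "payload": {"data": b"s3cr3t-value"}}
    )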

Until now, Google Cloud customers have managed sensitive data with Berglas, an open source project that runs from the command line; Secret Manager adds a layer of abstraction on top through a set of APIs.

Berglas can be used on its own going forward, as well as directly through Secret Manager beginning with the recently released 0.5.0 version, Google said. Google also offers a migration tool for moving sensitive data out of Berglas and into Secret Manager.

Secret Manager builds on the existing Google Cloud security lineup, which also includes Key Management Service, Cloud Security Command Center and VPC Service Controls.

With Secret Manager, Google has introduced its own take on products such as HashiCorp Vault and AWS Secrets Manager, said Scott Piper, an AWS security consultant at Summit Route in Salt Lake City.

A key management service is used to keep an encryption key and perform encryption operations, Piper said. “So, you send them data, and they encrypt them. A secrets manager, on the other hand, is really no different than a database, but just with more audit logs and access checking. You request a piece of data from it — such as your database password — and it returns it back to you. The purpose of these solutions is to avoid keeping secrets in code.”
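
Piper’s description maps directly to code: rather than hardcoding a database password, an application asks the service for it at runtime. Below is a minimal sketch, assuming the same google-cloud-secret-manager Python client; the project and secret names are placeholders.

    from google.cloud import secretmanager

    def get_secret(project_id: str, secret_id: str, version: str = "latest") -> str:
        """Fetch a secret at runtime instead of keeping it in source code."""
        client = secretmanager.SecretManagerServiceClient()
        name = f"projects/{project_id}/secrets/{secret_id}/versions/{version}"
        response = client.access_secret_version(request={"name": name})
        return response.payload.data.decode("UTF-8")

    # Hand the value to a database driver rather than committing it to the repo.
    db_password = get_secret("my-project", "db-password")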

Indeed, Google’s Key Management Service and Secret Manager target two different audiences within enterprise IT, said Doug Cahill, an analyst at Enterprise Strategy Group in Milford, Mass.

“The former is focused on managing the lifecycle of data encryption keys, while the latter is focused on securing the secrets employed to securely operate API-driven infrastructure-as-code environments,” Cahill said.

As such, data security and privacy professionals and compliance officers are the likely consumers of a key management offering, whereas secret management services are targeted toward DevOps, Cahill added.

Meanwhile, it is surprising that the Google Cloud security portfolio didn’t already have something like Secret Manager, but AWS only released its own version in mid-2018, Piper said. Microsoft released Azure Key Vault in 2015 and has positioned it as appropriate for managing both encryption keys and other types of sensitive data.

Pricing for Secret Manager during the beta period has two components: Google charges $0.03 per 10,000 operations, plus $0.06 per active secret version per regional replica, per month.
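
To make the two charges concrete, here is a back-of-the-envelope calculation for a hypothetical workload; the secret counts and access volume are invented for illustration.

    # Hypothetical: 10 secrets, each with one active version in 3 regional replicas,
    # accessed 1,000,000 times in a month, at the beta prices quoted above.
    version_cost = 10 * 3 * 0.06                    # $1.80 per month
    operation_cost = (1_000_000 / 10_000) * 0.03    # $3.00 per month
    monthly_total = version_cost + operation_cost   # $4.80 per month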


Citrix’s performance analytics service gets granular

Citrix introduced an analytics service to help IT professionals better identify the cause of slow application performance within its Virtual Apps and Desktops platform.

The company announced the general availability of the service, called Citrix Analytics for Performance, at its Citrix Summit, an event for the company’s business partners, in Orlando on Monday. The service carries an additional cost.

Steve Wilson, the company’s vice president of product for workspace ecosystem and analytics, said many IT admins must deal with performance problems as part of the nature of distributed applications. When they receive a call from workers complaining about performance, he said, it’s hard to determine the root cause — be it a capacity issue, a network problem or an issue with the employee’s device.

Performance, he said, is a frequent pain point for employees, especially remote and international workers.

“There are huge challenges that, from a performance perspective, are really hard to understand,” he said, adding that the tools available to IT professionals have not been ideal in identifying issues. “It’s all been very technical, very down in the weeds … it’s been hard to understand what [users] are seeing and how to make that actionable.”

Part of the problem, according to Wilson, is that traditional performance-measuring tools focus on server infrastructure. Keeping track of such metrics is important, he said, but they do not tell the whole story.

“Often, what [IT professionals] got was the aggregate view; it wasn’t personalized,” he said.

When the aggregate performance of the IT infrastructure is “good,” Wilson said, that could mean that half an organization’s users are seeing good performance, a quarter are seeing great performance, but a quarter are experiencing poor performance.

With its performance analytics service, Citrix is offering a more granular picture of performance by providing metrics on individual employees, beyond those of the company as a whole. That measurement, which Citrix calls a user experience or UX score, evaluates such factors as an employee’s machine performance, user logon time, network latency and network stability.
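
Citrix has not published the exact formula behind the UX score. Purely to illustrate the idea of rolling several per-user measurements into a single number, a hypothetical weighted composite might look like the sketch below; the factor names, normalization and weights are invented and are not Citrix’s method.

    def ux_score(machine_perf: float, logon_time: float,
                 latency: float, stability: float) -> float:
        """Hypothetical composite: each factor normalized to 0-100, higher is better."""
        weights = {"machine": 0.25, "logon": 0.25, "latency": 0.25, "stability": 0.25}
        return (weights["machine"] * machine_perf
                + weights["logon"] * logon_time
                + weights["latency"] * latency
                + weights["stability"] * stability)

    # A fast machine with a slow logon and a shaky network still scores poorly overall.
    print(ux_score(machine_perf=90, logon_time=40, latency=55, stability=60))  # 61.25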

“With this tool, as a system administrator, you can come in and see the entire population,” Wilson said. “It starts with the top-level experience score, but you can very quickly break that down [to personal performance].”

Wilson said IT admins who had tested the product said this information helped them address performance issues more expeditiously.

“The feedback we’ve gotten is that they’ve been able to very quickly get to root causes,” he said. “They’ve been able to drill down in a way that’s easy to understand.”

A proactive approach

Eric Klein, analyst at VDC Research Group Inc., said the service represents a more proactive approach to performance problems, as opposed to identifying issues through remote access of an employee’s computer.

“If something starts to degrade from a performance perspective — like an app not behaving or slowing down — you can identify problems before users become frustrated,” he said.

Klein said IT admins would likely welcome any tool that, like this one, could “give time back” to them.

“IT is always being asked to do more with less, though budgets have slowly been growing over the past few years,” he said. “[Administrators] are always looking for tools that will not only automate processes but save time.”

Enterprise Strategy Group senior analyst Mark Bowker said in a press release from Citrix announcing the news that companies must examine user experience to ensure they provide employees with secure and consistent access to needed applications.

“Key to providing this seamless experience is having continuous visibility into network systems and applications to quickly spot and mitigate issues before they affect productivity,” he said in the release.

Wilson said the performance analytics service was the product of Citrix’s push to the cloud during the past few years. One of the early benefits of that process, he said, has been in the analytics field; the company has been able to apply machine learning to the data it has garnered and derive insights from it.

“We do see a broad opportunity around analytics,” he said. “That’s something you’ll see more and more of from us.”


Box vs. Dropbox outages in 2019

In this infographic, we present a timeline of significant service disruptions in 2019 for Box vs. Dropbox.

Cloud storage providers Box and Dropbox self-report service disruptions throughout each year. In 2019, Dropbox posted publicly about eight incidents; Box listed more than 50. But the numbers don’t necessarily provide an apples-to-apples comparison, because each company gets to choose which incidents to disclose.

This infographic includes significant incidents that prevented users from accessing Box or Dropbox in 2019, or at least from uploading and downloading documents. It excludes outages that appeared to last 10 minutes or fewer, as well as incidents labeled as having only “minor” or “medium” impact.
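
Those filtering rules are simple to express programmatically. The sketch below applies them to hypothetical incident records; the field names and sample data are invented for illustration.

    incidents = [
        {"provider": "Box", "minutes": 8, "impact": "minor", "blocked_access": True},
        {"provider": "Box", "minutes": 95, "impact": "major", "blocked_access": True},
        {"provider": "Dropbox", "minutes": 45, "impact": "major", "blocked_access": False},
    ]

    significant = [
        i for i in incidents
        if i["minutes"] > 10                        # drop outages of 10 minutes or fewer
        and i["impact"] not in ("minor", "medium")  # drop minor/medium-impact incidents
        and i["blocked_access"]                     # keep only access/transfer failures
    ]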

To view the full list of 2019 incidents for Box vs. Dropbox, visit status.box.com and status.dropbox.com.


Cisco 2020: Challenges, prospects shape the new year

Cisco finished 2019 with a blitz of announcements that recast the company’s service provider business. Instead of providing just integrated hardware and software, Cisco became a supplier of components for open gear.

Cisco enters the new decade with rearchitected silicon tailored for white box routers favored by cloud providers and other organizations with hyperscale data centers. To add punch to its new Silicon One chipset, Cisco plans to offer high-speed integrated optics from Acacia Communications. Cisco expects to complete its $2.6 billion acquisition of Acacia in 2020.

Cisco is aiming its silicon-optics combo at Broadcom. The chipmaker has been the only significant silicon supplier for white box routers and switches built on specifications from the Open Compute Project. The specialty hardware has become the standard within the mega-scale data centers of cloud providers like AWS, Google and Microsoft; and internet companies like Facebook.

“I think the Silicon One announcement was a watershed moment,” said Chris Antlitz, principal analyst at Technology Business Research Inc. (TBR).

Cisco designed Silicon One so white box manufacturers could program the hardware platform for any router type. Gear makers like Accton Technology Corporation, Edgecore Networks and Foxconn Technology Group will be able to use the chip in core, aggregation and access routers. Eventually, they could also use it in switches.

Cisco 2020: Silicon One in the 5G market

Cisco is attacking the cloud provider market by addressing its hunger for higher bandwidth and lower latency. At the same time, the vendor will offer its new technology to communication service providers. Their desire for speed and higher performance will grow over the next couple of years as they rearchitect their data centers to deliver 5G wireless services to businesses.

For the 5G market, Cisco could combine Silicon One with low-latency network interface cards from Exablaze, which Cisco plans to acquire by the end of April 2020. The combination could produce exceptionally fast switches and routers to compete with other telco suppliers, including Ericsson, Juniper Networks, Nokia and Huawei. Startups are also targeting the market with innovative routing architectures.

“Such a move could give Cisco an edge,” said Tom Nolle, president of networking consultancy CIMI Corp., in a recent blog. “If you combine a low-latency network card with the low-latency Silicon One chip, you might have a whole new class of network device.”

Cisco 2020: Trouble with the enterprise

Cisco will launch its repositioned service provider business, while contending with the broader problem of declining revenues. Cisco could have difficulty reversing that trend, while also addressing customer unhappiness with the high price of its next-generation networking architecture for enterprise data centers. 

“I do think 2020 is likely to be an especially challenging year for Cisco,” said John Burke, an analyst at Nemertes Research. “The cost of getting new goodies is far too high.”

Burke said he had spoken to several people in the last few months who had dropped Cisco gear from their networks to avoid the expense. At the same time, companies have reported using open source network automation tools in place of Cisco software to lower costs.

Cisco software deemed especially expensive includes its Application Centric Infrastructure (ACI) and DNA Center, Burke said. ACI and DNA Center are at the heart of Cisco’s modernized approach to the data center and campus network, respectively.

Both offer significant improvements over Cisco’s older network architectures. But they require businesses to purchase new Cisco hardware and retrain IT staff.

John Mulhall, an independent contractor with 20 years of networking experience, said any new generation of Cisco technology requires extra cost analyses to justify the price.

“As time goes on, a lot of IT shops are going to be a little bit reluctant to just go the standard Cisco route,” he said. “There’s too much competition out there.”

Cisco SD-WAN gets dinged

Besides getting criticized for high prices, Cisco also took a hit in 2019 for the checkered performance of its Viptela software-defined WAN, a centerpiece for connecting campus employees to SaaS and cloud-based applications. In November, Gartner reported that Viptela running on Cisco’s IOS-XE platform had “stability and scaling issues.”

Also, customers who had bought Cisco’s ISR routers during the last few years reported the hardware didn’t have enough throughput to support Viptela, Gartner said.

The problems convinced the analyst firm to drop Cisco from the “leaders” ranking of Gartner’s latest Magic Quadrant for WAN Edge Infrastructure.

Gartner and some industry analysts also knocked Cisco for selling two SD-WAN products — Viptela and Meraki — with separate sales teams and distinct management and hardware platforms.

The approach has made it difficult for customers and resellers to choose the product that best suits their needs, analysts said. Other vendors use a single SD-WAN product to address all use cases.

“Cisco’s SD-WAN is truly a mixed bag,” said Roy Chua, principal analyst at AvidThink. “In the end, the strategy will need to be clearer.”

Antlitz of TBR was more sanguine about Cisco’s SD-WAN prospects. “We see no reason to believe that Cisco will lose its status as a top-tier SD-WAN provider.”


Logically acquires Carolinas IT in geographic expansion

Logically Inc., a managed service provider based in Portland, Maine, has acquired Carolinas IT, a North Carolina MSP with cloud, security and core IT infrastructure skills.

The deal continues Logically’s geographic expansion. The company launched in April, building upon the 2018 merger of Portland-based Winxnet Inc. and K&R Network Solutions of San Diego. Logically in August added a New York metro area company through its purchase of Sullivan Data Management, an outsourced IT services firm.

Carolinas IT, based in Raleigh, provides a base from which Logically can expand in the region, Logically CEO Christopher Claudio said. “They are a good launching pad,” he said.

Carolinas IT’s security and compliance practice also attracted Logically’s interest. Claudio called security and compliance a growing market and one that will continue to expand, given the challenges organizations face with regulatory obligations and the risk of data breaches.

“You can’t be an MSP without a security and compliance group,” he said.

Carolinas IT’s security services include risk assessment, HIPAA consulting and auditing, security training and penetration testing. The company’s cloud offerings include Office 365 and private hosted cloud services. And its professional services personnel possess certifications from vendors such as Cisco, Citrix, Microsoft, Symantec and VMware.

Claudio cited Carolinas IT’s “depth of talent,” recurring revenue and high client retention rate as some of the business’s favorable attributes.

Mark Cavaliero, Carolinas IT’s president and CEO, will remain at the company for the near term but plans to move on and will not have a long-term leadership role, Claudio said. But Cavaliero will have an advisory role, Claudio added, noting “he has built a great business.”

Logically’s Carolinas IT purchase continues a pattern of companies pulling regional MSPs together to create national service providers. Other examples include Converge Technology Solutions Corp. and Mission.


AWS, Azure and Google peppered with outages in same week

AWS, Microsoft Azure and Google Cloud all experienced service degradations or outages this week, an outcome that suggests customers should accept that cloud outages are a matter of when, not if.

In AWS’s Frankfurt region, EC2, Relational Database Service, CloudFormation and Auto Scaling were all affected Nov. 11, with the issues now resolved, according to AWS’s status page.

Azure DevOps services for Boards, Repos, Pipelines and Test Plans were affected for a few hours in the early hours of Nov. 11, according to its status page. Engineers determined that the problem had to do with identity calls and rebooted access tokens to fix the system, the page states.

Google Cloud said some of its APIs in several U.S. regions were affected, and others experienced problems globally on Nov. 11, according to its status dashboard. Affected APIs included those for Compute Engine, Cloud Storage, BigQuery, Dataflow, Dataproc and Pub/Sub. Those issues were resolved later in the day.

Google Kubernetes Engine also went through some hiccups over the past week, in which nodes in some recently upgraded container clusters experienced high rates of kernel panics. Roughly the Unix-world equivalent of Windows’ “blue screen of death,” a kernel panic is a condition in which a system’s OS can’t recover from an error quickly or easily.

The company rolled out a series of fixes, but as of Nov. 13, the status page for GKE remained in orange status, which indicates a small number of projects are still affected.

AWS, Microsoft and Google have yet to provide the customary post-mortem reports on why the cloud outages occurred, although more information could emerge soon.

Move to cloud means ceding some control

The cloud outages at AWS, Azure and Google this week were far from the worst experienced by customers in recent years. In September 2018, severe weather in Texas caused a power surge that shut down dozens of Azure services for days.

Cloud providers have aggressively pursued region and zone expansions to help with disaster recovery and high-availability scenarios. But customers must still architect their systems to take advantage of the expanded footprint.
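
Taking advantage of that footprint is ultimately the customer’s job. As one small illustration, an AWS user can enumerate a region’s Availability Zones with boto3 before deciding how to spread instances across them; the region below matches the Frankfurt example mentioned earlier.

    import boto3

    # List the Availability Zones available in a region so workloads can be
    # distributed across them rather than concentrated in one zone.
    ec2 = boto3.client("ec2", region_name="eu-central-1")  # Frankfurt
    zones = ec2.describe_availability_zones(
        Filters=[{"Name": "state", "Values": ["available"]}]
    )
    print([z["ZoneName"] for z in zones["AvailabilityZones"]])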

Still, customers have much less control when it comes to public cloud usage, according to Stephen Elliot, an analyst at IDC. That reality requires some operational sophistication.

“Networks are so interconnected and distributed, lots of partners are involved in making a service perform and available,” he said. “[Enterprises] need a risk mitigation strategy that covers people, process, technologies, SLAs, etc. It’s a myth that outages won’t happen. It could be from weather, a black swan event, security or a technology glitch.”

This fact underscores why more companies are experimenting with and deploying workloads across hybrid and multi-cloud infrastructures, said Jay Lyman, an analyst at 451 Research. “They either control the infrastructure and downtime with on-premises deployments or spread their bets across multiple public clouds,” he said.

Ultimately, enterprise IT shops can weigh the challenges and costs of running their own infrastructure against public cloud providers and find it difficult to match, said Holger Mueller, an analyst at Constellation Research.

“That said, performance and uptime are validated every day, and should a major and longer public cloud outage happen, it could give pause among less technical board members,” he added.


Datrium opens cloud DR service to all VMware users

Datrium plans to open its new cloud disaster recovery as a service to any VMware vSphere users in 2020, even if they’re not customers of Datrium’s DVX infrastructure software.

Datrium released disaster recovery as a service with VMware Cloud on AWS in September for DVX customers as an alternative to potentially costly professional services or a secondary physical site. DRaaS enables DVX users to spin up protected virtual machines (VMs) on demand in VMware Cloud on AWS in the event of a disaster. Datrium takes care of all of the ordering, billing and support for the cloud DR.

In the first quarter, Datrium plans to add a new Datrium DRaaS Connect for VMware users who deploy vSphere infrastructure on premises and do not use Datrium storage. Datrium DRaaS Connect software would deduplicate, compress and encrypt vSphere snapshots and replicate them to Amazon S3 object storage for cloud DR. Users could set backup policies and categorize VMs into protection groups, setting different service-level agreements for each one, Datrium CTO Sazzala Reddy said.
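
Datrium has not published a public API for DRaaS Connect, but conceptually each protection group pairs a set of VMs with its own policy and SLA. The sketch below is purely illustrative of that idea; the structure, field names and values are invented and are not Datrium’s interface.

    # Illustrative only -- not Datrium's actual API or schema.
    protection_groups = [
        {
            "name": "tier1-databases",
            "vms": ["sql-prod-01", "sql-prod-02"],
            "snapshot_interval_minutes": 15,            # tighter RPO for critical systems
            "retention_days": 30,
            "replica_target": "s3://dr-bucket/tier1/",  # placeholder bucket
        },
        {
            "name": "general-app-servers",
            "vms": ["app-01", "app-02", "app-03"],
            "snapshot_interval_minutes": 240,
            "retention_days": 7,
            "replica_target": "s3://dr-bucket/general/",
        },
    ]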

A second Datrium DRaaS Connect offering will enable VMware Cloud users to automatically fail over workloads from one AWS Availability Zone (AZ) to another if an Amazon AZ goes down. Datrium stores deduplicated vSphere snapshots on Amazon S3, and the snapshots are replicated to three AZs by default, Datrium chief product officer Brian Biles said.

Speedy cloud DR

Datrium claims system recovery can happen on VMware Cloud within minutes from the snapshots stored in Amazon S3, because it requires no conversion from a different virtual machine or cloud format. Unlike some backup products, Datrium does not convert VMs from VMware’s format to Amazon’s format and can boot VMs directly from the Amazon data store.

“The challenge with a backup-only product is that it takes days if you want to rehydrate the data and copy the data into a primary storage system,” Reddy said.

Although the “instant RTO” that Datrium claims to provide may not be important to all VMware users, reducing recovery time is generally a high priority, especially to combat ransomware attacks. Datrium commissioned a third party to conduct a survey of 395 IT professionals, and about half said they experienced a DR event in the last 24 months. Ransomware was the leading cause, hitting 36% of those who reported a DR event, followed by power outages (26%).

The Orange County Transportation Authority (OCTA) information systems department spent a weekend recovering from a zero-day malware exploit that hit nearly three years ago on a Thursday afternoon. The malware came in through a contractor’s VPN connection and took out more than 85 servers, according to Michael Beerer, a senior section manager for online system and network administration of OCTA’s information systems department.

Beerer said the information systems team restored critical applications by Friday evening and the rest by Sunday afternoon. But OCTA now wants to recover more quickly if a disaster should happen again, he said.

OCTA is now building out a new data center with Datrium DVX storage for its VMware VMs and possibly Red Hat KVM in the future. Beerer said DVX provides an edge in performance and cost over alternatives he considered. Because DVX disaggregates storage and compute nodes, OCTA can increase storage capacity without having to also add compute resources, he said.

Datrium cloud DR advantages

Beerer said the addition of Datrium DRaaS would make sense because OCTA can manage it from the same DVX interface. Datrium’s deduplication, compression and transmission of only changed data blocks would also eliminate the need for a pricy “big, fat pipe” and reduce cloud storage requirements and costs over other options, he said. Plus, Datrium facilitates application consistency by grouping applications into one service and taking backups at similar times before moving data to the cloud, Beerer said.

Datrium’s “Instant RTO” is not critical for OCTA. Beerer said anything that can speed the recovery process is interesting, but users also need to weigh that benefit against any potential additional costs for storage and bandwidth.

“There are customers where a second or two of downtime can mean thousands of dollars. We’re not in that situation. We’re not a financial company,” Beerer said. He noted that OCTA would need to get critical servers up and running in less than 24 hours.

Reddy said Datrium offers two cost models: a low-cost option with a 60-minute window and a “slightly more expensive” option in which at least a few VMware servers are always on standby.

Pricing for Datrium DRaaS starts at $23,000 per year, with support for 100 hours of VMware Cloud on-demand hosts for testing, 5 TB of S3 capacity for deduplicated and encrypted snapshots, and up to 1 TB per year of cloud egress. Pricing was unavailable for the upcoming DRaaS Connect options.

Other cloud DR options

Jeff Kato, a senior storage analyst at Taneja Group, said the new Datrium options would open up to all VMware customers a low-cost DRaaS offering that requires no capital expense. He said most vendors that offer DR from their on-premises systems to the cloud force customers to buy their primary storage.

George Crump, president and founder of Storage Switzerland, said data protection vendors such as Commvault, Druva, Veeam, Veritas and Zerto also can do some form of recovery in the cloud, but it’s “not as seamless as you might want it to be.”

“Datrium has gone so far as to converge primary storage with data protection and backup software,” Crump said. “They have a very good automation engine that allows customers to essentially draw their disaster recovery plan. They use VMware Cloud on Amazon, so the customer doesn’t have to go through any conversion process. And they’ve solved the riddle of: ‘How do you store data in S3 but recover on high-performance storage?’ “

Scott Sinclair, a senior analyst at Enterprise Strategy Group, said using cloud resources for backup and DR often means either expensive, high-performance storage or lower cost S3 storage that requires a time-consuming migration to get data out of it.

“The Datrium architecture is really interesting because of how they’re able to essentially still let you use the lower cost tier but make the storage seem very high performance once you start populating it,” Sinclair said.


Experts on demand: Your direct line to Microsoft security insight, guidance, and expertise – Microsoft Security

Microsoft Threat Experts is the managed threat hunting service within Microsoft Defender Advanced Threat Protection (ATP) that includes two capabilities: targeted attack notifications and experts on demand.

Today, we are extremely excited to share that experts on demand is now generally available and gives customers direct access to real-life Microsoft threat analysts to help with their security investigations.

With experts on demand, Microsoft Defender ATP customers can engage directly with Microsoft security analysts to get guidance and insights needed to better understand, prevent, and respond to complex threats in their environments. This capability was shaped through partnership with multiple customers across various verticals by investigating and helping mitigate real-world attacks. From deep investigation of machines that customers had a security concern about, to threat intelligence questions related to anticipated adversaries, experts on demand extends and supports security operations teams.

The other Microsoft Threat Experts capability, targeted attack notifications, delivers alerts that are tailored to organizations and provides as much information as can be quickly delivered to bring attention to critical threats in their network, including the timeline, scope of breach, and the methods of intrusion. Together, the two capabilities make Microsoft Threat Experts a comprehensive managed threat hunting solution that provides an additional layer of expertise and optics for security operations teams.

Experts on the case

By design, the Microsoft Threat Experts service has as many use cases as there are unique organizations with unique security scenarios and requirements. One particular case showed how an alert in Microsoft Defender ATP led to informed customer response, aided by a targeted attack notification that progressed to an experts on demand inquiry, resulting in the customer fully remediating the incident and improving their security posture.

In this case, Microsoft Defender ATP endpoint protection capabilities recognized a new malicious file in a single machine within an organization. The organization’s security operations center (SOC) promptly investigated the alert and developed the suspicion it may indicate a new campaign from an advanced adversary specifically targeting them.

Microsoft Threat Experts, who are constantly hunting on behalf of this customer, had independently spotted and investigated the malicious behaviors associated with the attack. With knowledge about the adversaries behind the attack and their motivation, Microsoft Threat Experts sent the organization a bespoke targeted attack notification, which provided additional information and context, including the fact that the file was related to an app that was targeted in a documented cyberattack.

To create a fully informed path to mitigation, experts pointed to information about the scope of compromise, relevant indicators of compromise, and a timeline of observed events, which showed that the file executed on the affected machine and proceeded to drop additional files. One of these files attempted to connect to a command-and-control server, which could have given the attackers direct access to the organization’s network and sensitive data. Microsoft Threat Experts recommended full investigation of the compromised machine, as well as the rest of the network for related indicators of attack.

Based on the targeted attack notification, the organization opened an experts on demand investigation, which allowed the SOC to have a line of communication and consultation with Microsoft Threat Experts. Microsoft Threat Experts were able to immediately confirm the attacker attribution the SOC had suspected. Using Microsoft Defender ATP’s rich optics and capabilities, coupled with intelligence on the threat actor, experts on demand validated that there were no signs of second-stage malware or further compromise within the organization. Since, over time, Microsoft Threat Experts had developed an understanding of this organization’s security posture, they were able to share that the initial malware infection was the result of a weak security control: allowing users to exercise unrestricted local administrator privilege.

Experts on demand in the current cybersecurity climate

On a daily basis, organizations have to fend off an onslaught of increasingly sophisticated attacks that present unique security challenges: supply chain attacks, highly targeted campaigns and hands-on-keyboard attacks. With Microsoft Threat Experts, customers can work with Microsoft to augment their security operations capabilities and increase confidence in investigating and responding to security incidents.

Now that experts on demand is generally available, Microsoft Defender ATP customers have an even richer way of tapping into Microsoft’s security experts, gaining access to the skills, experience, and intelligence necessary to face adversaries.

Experts on demand provides insights into attacks, technical guidance on next steps, and advice on risk and protection. Experts can be engaged directly from within the Windows Defender Security Center, so they are part of the existing security operations experience.

We are happy to bring experts on demand within reach of all Microsoft Defender ATP customers. Start your 90-day free trial via the Microsoft Defender Security Center today.

Learn more about Microsoft Defender ATP’s managed threat hunting service here: Announcing Microsoft Threat Experts.


Google Cloud tackles Spark on Kubernetes

An early version of a Google Cloud service that runs Apache Spark on Kubernetes is now available, but more work will be required to flesh out the container orchestration platform’s integrations with data analytics tools.

Kubernetes and containers haven’t been renowned for their use in data-intensive, stateful applications, including data analytics. But there are benefits to using Kubernetes as a resource orchestration layer under applications such as Apache Spark, rather than the Hadoop YARN resource manager and job scheduler with which Spark is typically associated. Developers and IT ops gain the advantages containers bring to any application, such as portability across systems, consistency in configuration and better resource efficiency than virtual or bare metal machines, along with automated provisioning and scaling handled in the Kubernetes layer or by Helm charts.

“Analytical workloads, in particular, benefit from the ability to add rapidly scalable cloud capacity for spiky peak workloads, whereas companies might want to run routine, predictable workloads in a virtual private cloud,” said Doug Henschen, an analyst at Constellation Research in Cupertino, Calif. 

Google, which offers managed versions of Apache Spark and Apache Hadoop that run on YARN through its Cloud Dataproc service, would prefer to use its own Kubernetes platform to orchestrate resources — and to that end, released an alpha preview integration for Spark on Kubernetes within Cloud Dataproc this week. Other companies, such as Databricks (run by the creators of Apache Spark) and D2iQ (formerly Mesosphere), support Spark on Kubernetes, but Google Cloud Dataproc stands to become the first of the major cloud providers to include it in a managed service.

Apache Spark has had a native Kubernetes scheduler since version 2.3, and Hadoop added native container support in Hadoop 3.0.3, both released in May 2018. However, Hadoop’s container support is still tied to HDFS and is too complex, in Google’s view.
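
The native scheduler can be exercised from PySpark configuration alone. The snippet below is a minimal client-mode sketch, assuming a reachable Kubernetes API endpoint, a prebuilt Spark container image and a service account with the necessary RBAC permissions; all of those names are placeholders.

    from pyspark.sql import SparkSession

    # Executors are launched as Kubernetes pods instead of YARN containers.
    spark = (
        SparkSession.builder
        .master("k8s://https://kubernetes.api.example:6443")   # placeholder API server
        .appName("spark-on-k8s-sketch")
        .config("spark.kubernetes.container.image", "example/spark-py:2.4.4")
        .config("spark.kubernetes.namespace", "analytics")
        .config("spark.kubernetes.authenticate.driver.serviceAccountName", "spark")
        .config("spark.executor.instances", "3")
        .getOrCreate()
    )

    print(spark.range(1_000_000).count())
    spark.stop()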

“People have gotten Docker containers running on Hadoop clusters using YARN, but Hadoop 3’s container support is probably about four years too late,” said James Malone, product manager for Cloud Dataproc at Google. “It also doesn’t really solve the problems customers are trying to solve, from our perspective — customers don’t care about managing [Apache data warehouse and analytics apps] Hive or Pig, and want to use Kubernetes in hybrid clouds.”

Spark on Kubernetes only scratches the surface of big data integration

Cloud Dataproc’s Spark on Kubernetes implementation remains in a very early stage, and will require updates upstream to Spark as well as Kubernetes before it’s production-ready. Google also has its sights set on support for more Apache data analytics apps, including the Flink data stream processing framework, Druid low-latency data query system and Presto distributed SQL query engine.

“It’s still in alpha, and that’s by virtue of the fact that the work that we’ve done here has been split into multiple streams,” Malone said. One of those workstreams is to update Cloud Dataproc to run Kubernetes clusters. Another is to contribute to the upstream Spark Kubernetes operator, which remains in the experimental stage within Spark Core. Finally, Cloud Dataproc must brush up performance enhancement add-ons such as external shuffle service support, which aids in the dynamic allocation of resources.

For now, IT pros who want to run Spark on Kubernetes must assemble their own integrations among the upstream Spark Kubernetes scheduler, supported Spark from Databricks, and Kubernetes cloud services. Customers that seek hybrid cloud portability for Spark workloads must also implement a distributed storage system from vendors such as Robin Systems or Portworx. All of it can work, but without many of the niceties of fully integrated cloud platform services that would make life easier.

For example, running Spark on Kubernetes with Python, rather than the Scala programming language, is a bit trickier.

“The Python experience of Spark in Kubernetes has always lagged the Scala experience, mostly because deploying a compiled artifact in Scala is just easier logistically than pulling in dependencies for Python jobs,” said Michael Bishop, co-founder and board member at Alpha Vertex, a New York-based fintech startup that uses machine learning deployed in a multi-cloud Kubernetes infrastructure to track market trends for financial services customers. “This is getting better and better, though.”

There also remain fundamental differences between Spark’s job scheduler and Kubernetes that must be smoothed out, Bishop said.

“There is definitely an impedance [between the two schedulers],” he said. “Spark is intimately aware of ‘where’ is for [nodes], while Kubernetes doesn’t really care beyond knowing a pod needs a particular volume mounted.”

Google will work on sanding down these rough edges, Malone pledged.

“For example, we have an external shuffle service, and we’re working hard to make it work with both YARN and Kubernetes Spark,” he said.


HashiCorp Consul plays to multi-platform strength with Azure

Microsoft Azure users will get a hosted version of the HashiCorp Consul service mesh as multi-platform interoperability becomes a key feature for IT shops and cloud providers alike.

Service mesh is an architecture for microservices networking that uses a sidecar proxy to orchestrate and secure network connections among complex ephemeral services. HashiCorp Consul is one among several service mesh control planes available, but its claim to fame for now is that it can connect multiple VM-based or container-based applications in any public cloud region or on-premises deployment, through the Consul Connect gateway released last year.
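
Consul’s mesh wiring starts with service registration against a local agent. The sketch below registers a service with a Connect sidecar proxy over the agent’s HTTP API, assuming a Consul agent listening on localhost:8500; the service name and port are placeholders.

    import requests

    # Register a service and ask the local agent to manage a Connect sidecar proxy,
    # which handles mutual-TLS connections to other services in the mesh.
    payload = {
        "Name": "web",                       # placeholder service name
        "Port": 8080,
        "Connect": {"SidecarService": {}},
    }
    resp = requests.put("http://127.0.0.1:8500/v1/agent/service/register", json=payload)
    resp.raise_for_status()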

HashiCorp Consul Service on Azure (HCS), released to private beta this week, automatically provisions clusters that run Consul service discovery and service mesh software within Azure. HashiCorp site reliability engineers also manage the service behind the scenes, but it’s billed through Azure and provisioned via the Azure console and the Azure Managed Applications service catalog.

The two companies unveiled this expansion to their existing partnership this week at HashiConf in Seattle, and touted their work together on service mesh interoperability, which also includes the Service Mesh Interface (SMI), released in May. SMI defines a set of common APIs that connect multiple service mesh control planes such as Consul, Istio and Linkerd.

Industry watchers expect such interconnection — and coopetition — to be a priority for service mesh projects, at least in the near future, as enterprises struggle to make sense of mixed infrastructures that include legacy applications on bare metal along with cloud-native microservices in containers.

“The only way out is to get these different mesh software stacks to interoperate,” said John Mitchell, formerly chief platform architect at SAP Ariba, a HashiCorp Enterprise shop, and now an independent digital transformation consultant who contracts with HashiCorp, among others. “They’re all realizing they can’t try to be the big dog all by themselves, because it’s a networking problem. Standardization of that interconnect, that basic interoperability, is the only way forward — or they all fail.”

Microsoft and HashiCorp talked up multi-cloud management as a job for service mesh, but real-world multi-cloud deployments are still a bleeding-edge scenario at best among enterprises. However, the same interoperability problem faces any enterprise with multiple Kubernetes clusters, or assets deployed both on premises and in the public cloud, Mitchell said.

“Nobody who’s serious about containers in production has just one Kubernetes cluster,” he said. “Directionally, multiplatform interoperability is where everybody has to go, whether they realize it yet or not.”

The tangled web of service mesh interop

For now, Consul has a slight edge over Google and IBM’s open source Istio service mesh control plane, in the maturity of its Consul Connect inter-cluster gateway and ability to orchestrate VMs and bare metal in addition to Kubernetes-orchestrated containers. Clearly, it’s pushing this edge with HashiCorp Consul Service on Azure, but it won’t be long before Istio catches up. Istio Gateway and Istio Multicluster projects both emerged this year, and the ability to integrate virtual machines is also in development. Linkerd has arguably the best production-use bona fides in VM-based service mesh orchestration. All the meshes use the same Envoy data plane, which will make differentiation between them in the long term even more difficult.

“Service mesh will become like electricity, just something you expect,” said Tom Petrocelli, an analyst at Amalgam Insights in Arlington, Mass. “The vast majority of people will go with what’s in their preferred cloud platform.”

HCS could boost Consul’s profile, given Microsoft’s strength as a cloud player — but it will depend more on how the two companies market it than its technical specifications, Petrocelli said. At this stage, Consul doesn’t interoperate with Azure Service Fabric, Microsoft’s original hosted service mesh, which is important if it’s to get widespread adoption, in Petrocelli’s view.

“I’m not really going to get excited about something in Azure that doesn’t take advantage of Azure’s own fabric,” he said.

Without Service Fabric integration to widen Consul’s appeal to Azure users, it’s unlikely the market for HCS will pull in many new customers, Petrocelli said. Also, whether Microsoft positions HCS as its service mesh of choice for Azure, or makes it one among many hosted service mesh offerings, will decide how widely used it will be, in his estimation.

“If [HCS] is one of many [service mesh offerings on Azure], it’s nice if you happen to be a HashiCorp customer that also uses Azure,” Petrocelli said.
