8×8 Video Meetings replaces outdated offering in X Series

8×8 has replaced its old online meetings product with a new one that offers more features and is easier to use. The platform is available only to businesses subscribed to 8×8’s unified communications suite, but the company said it may make the service available as a stand-alone app in the future.

The old 8×8 video conferencing product — built with a mix of technologies, including some from Vidyo — didn’t cut it for many customers, executives said. The company used the Jitsi open source video conferencing software that it acquired from Atlassian last year to build the new product, called 8×8 Video Meetings.

“The feedback that we heard from our customers, and where we saw the market going, really necessitated that we … swap out our whole meetings product for a new and modern video communications solution,” said Meghan Keough, 8×8’s vice president of marketing.

The new platform lets guests join meetings without having to install a plug-in or download an app. 8×8 follows the lead of Cisco, BlueJeans, Highfive and others in embracing WebRTC, an open standard for real-time communications in the browser.

8×8 Video Meetings also gives users their own virtual meeting rooms and lets them live-stream meetings to YouTube. Other new features include more detailed analytics and the ability to remotely control a user’s desktop while in a meeting, which could be useful for IT troubleshooting.

“I like what 8×8 has done,” said Wayne Kurtzman, analyst at IDC. “They basically updated the system, made it more usable in more places in an enterprise and are not charging more for it.”

8×8 also rolled out an early access program for software to manage video conferencing hardware in conference rooms. The product offers a way to connect third-party video cameras from vendors like Logitech and Crestron, powered by Mac and Intel minicomputers, to 8×8’s video services.

In July, Gartner named 8×8 one of four leaders in the unified communications as a service (UCaaS) market, alongside Microsoft, Cisco and RingCentral. But the research firm previously cautioned that 8×8 offered an unintuitive video conferencing platform with a limited set of features.

8×8 Video Meetings is part of the vendor’s X Series offering, which combines calling, messaging, meetings and contact center.

The company has attempted to differentiate itself from competitors by building its own technology rather than relying on partners. RingCentral, nearly twice as large as 8×8 by revenue, relies on Zoom for video calling and Nice inContact for contact center.

“8×8 is trying to be a complete one-cloud solution for communication and collaboration,” Kurtzman said.


Cohesity Agile Dev and Test quickly delivers clean test data

Cohesity wants to make it easier for developers to get copies of data quickly.

Cohesity Agile Dev and Test, released into beta this week, is an add-on to Cohesity DataPlatform. The feature makes clones of data sets stored by DataPlatform, allowing test/dev teams to access data without going through other teams.

Cohesity Agile Dev and Test allows DevOps teams to provision backup data without having to go through a typical request-fulfill model.

Usually, when developers need a copy of the business’s data for testing or development, they have to request it from the production or backup teams. This data needs to be accurate and up to date for ideal test results, but it also has to be scrubbed of personally identifiable information (PII) and otherwise masked to prevent exposing the test/dev teams to compliance issues. The process could take weeks, which is too long for time-sensitive development such as Agile projects and anything to do with machine learning.

Cohesity claims its software performs data masking before it is pulled to ensure test/dev teams have a “clean” copy to work with.
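Masking here means replacing identifying values with realistic stand-ins before the data ever reaches developers. As a generic illustration only — this is not Cohesity’s implementation, and the file and column names are hypothetical — the idea looks like this in PowerShell:

# Generic PII-masking sketch (hypothetical file and column names).
# Replace identifying values before handing the data set to test/dev.
Import-Csv .\customers.csv | ForEach-Object {
    $_.Email = 'user{0}@example.com' -f (Get-Random -Maximum 100000)
    $_.SSN   = 'XXX-XX-' + $_.SSN.Substring($_.SSN.Length - 4)
    $_       # emit the masked row
} | Export-Csv .\customers_masked.csv -NoTypeInformation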

Similar products that use backup copies of data for test/dev purposes already exist, such as Actifio Sky and Cohesity’s own CloudSpin. The difference with Cohesity Agile Dev and Test is the data doesn’t need to be stood up in its own environment — it doesn’t create a separate silo of data. Sky and CloudSpin spin up the data into another environment, such as a physical server or virtual machine.

Rather than making data copies, Cohesity Agile Dev and Test creates data clones, which don’t need to be stood up in another environment.

Old idea, new implementation

Christophe Bertrand, senior analyst at IT research firm Enterprise Strategy Group, said Cohesity Agile Dev and Test doesn’t solve a technology problem so much as a workflow one. Making copies is nothing new, but a streamlined way to get those copies into the hands of test/dev people is.

“We’ve known for a very long time how to replicate data, but the workflow behind it is not really in place,” said Bertrand. “This is just the next step. It’s exactly what they [Cohesity] should be doing.”

Bertrand said his research shows that many enterprises want to make use of copies of data, to transform from simply having backup copies toward a model of intelligent data management. Businesses want interconnectivity across the organization to ensure any IT operation can access and make use of data. Bertrand said the market is headed in that direction as organizations that can develop faster are inherently at an advantage.

The test/dev people want data that is fresh, compliant, secure and not corrupted. They want to be able to access this data quickly and independently, in order to rapidly turn around their projects. It’s a different audience from the backup admin group, said Bertrand, as they’re not worried about things like availability and RTO/RPO.


“The DevOps person doesn’t know or care about the intricacies of DR or backup,” Bertrand said.

Bertrand said Cohesity Agile Dev and Test’s main value proposition is that it gives test/dev teams the ability to get great data instantly. They don’t need to know everything about that data.

Cohesity Agile Dev and Test is scheduled for release in the Pegasus 6.4.1 update, expected in late 2019. It will be sold as an add-on capability on Cohesity DataPlatform, and customers will be charged on a usage-based pricing model.


New integration brings Fuze meetings to Slack app

An integration unveiled this week will make it easier for Slack users to launch and join Fuze meetings. Zoom Video Communications Inc. rolled out a similar integration with Slack over the summer.

Slack is increasingly making it clear that it intends to incorporate voice and video capabilities into its team messaging app through integrations and partnerships, rather than by attempting to build the technology on its own.

Fuze Inc.’s announcement also underscores how big a player Slack has become in the business collaboration industry. Fuze, a cloud unified communications (UC) provider, opted to partner with Slack, even though it sells a team messaging app with the same core capabilities.

The integration lets users launch Fuze meetings by clicking on the phone icon in Slack, instead of typing a command. They will also see details about an ongoing meeting, such as how long it’s been going on and who’s participating.

Furthermore, Slack’s Microsoft Outlook and Google Calendar apps will let users join scheduled Fuze meetings with one click. Slack previously announced support for that capability with Zoom, Cisco Webex and Skype for Business.

No formal partnership

Slack gave Fuze special access to the set of APIs that made the latest integrations possible, said Eric Hanson, Fuze’s vice president of marketing intelligence. But the companies later clarified there was no formal partnership between them.

The vendors apparently miscommunicated about how to frame this week’s announcement. Within hours on Tuesday, Fuze updated a blog post to remove references to a “partnership” with Slack, instead labeling it as an “integration.”

In contrast, Slack and Zoom signed a contract to align product roadmaps and marketing strategies earlier this year.

In the future, Fuze hopes to give users the ability to initiate phone calls through Slack. Previously, Slack said it would enable such a feature with Zoom Phone, the video conferencing provider’s new cloud calling service.

Slack declined to comment on any plans to expand the Fuze integration.

“There are still some things that Slack hasn’t made available through this set of APIs yet,” Hanson said. “They have a roadmap in terms of where they want to take this.”

Making it easier for users to pick and choose

The voice and video capabilities natively supported in Slack are far less advanced than those available from main rival Microsoft Teams, an all-in-one suite for calling, messaging and meetings. But users want to be able to easily switch between messaging with someone and talking to them in real time.

By integrating with cloud communications vendors like Fuze and Zoom, Slack can focus on what it does best — team-based collaboration — while still connecting to the real-time communications services that customers need, said Mike Fasciani, analyst at Gartner.

“One of Slack’s advantages over Microsoft Teams is its ability and willingness to integrate with many business and communications applications,” Fasciani said.

Fuze also competes with Microsoft Teams. Integrations with Slack should help cloud UC providers sell to the vendor’s rapidly expanding customer base. Slack now has more than 100,000 paid customers, including 720 enterprises that each contribute more than $100,000 per year in revenue.

“Even though Fuze has its own [messaging] app, it doesn’t have anywhere near the market share of Slack,” said Irwin Lazar, analyst at Nemertes Research. “I think this shows Slack’s continued view that they don’t want to compete directly with the voice/meeting vendors.”


Data ethics issues create minefields for analytics teams

GRANTS PASS, Ore. — AI technologies and other advanced analytics tools make it easier for data analysts to uncover potentially valuable information on customers, patients and other people. But, too often, consultant Donald Farmer said, organizations don’t ask themselves a basic ethical question before launching an analytics project: Should we?

In the age of GDPR and like-minded privacy laws, though, ignoring data ethics isn’t a good business practice for companies, Farmer warned in a roundtable discussion he led at the 2019 Pacific Northwest BI & Analytics Summit. IT and analytics teams need to be guided by a framework of ethics rules and motivated by management to put those rules into practice, he said.

Otherwise, a company runs the risk of crossing the line in mining and using personal data — and, typically, not as the result of a nefarious plan to do so, according to Farmer, principal of analytics consultancy TreeHive Strategy in Woodinville, Wash. “It’s not that most people are devious — they’re just led blindly into things,” he said, adding that analytics applications often have “unforeseen consequences.”

For example, he noted that smart TVs connected to home networks can monitor whether people watch the ads in shows they’ve recorded and then go to an advertiser’s website. But acting on that information for marketing purposes might strike some prospective customers as creepy, he said.

Shawn Rogers, senior director of analytic strategy and communications-related functions at vendor Tibco Software Inc., pointed to a trial program that retailer Nordstrom launched in 2012 to track the movements of shoppers in its stores via the Wi-Fi signals from their cell phones. Customers complained about the practice after Nordstrom disclosed what it was doing, and the company stopped the tracking in 2013.

“I think transparency, permission and context are important in this area,” Rogers said during the session on data ethics at the summit, an annual event that brings together a small group of consultants and vendor executives to discuss BI, analytics and data management trends.

AI algorithms add new ethical questions

Being transparent about the use of analytics data is further complicated now by the growing adoption of AI tools and machine learning algorithms, Farmer and other participants said. Increasingly, companies are augmenting — or replacing — human involvement in the analytics process with “algorithmic engagement,” as Farmer put it. But automated algorithms are often a black box to users.

Mike Ferguson, managing director of U.K.-based consulting firm Intelligent Business Strategies Ltd., said the legal department at a financial services company he works with killed a project aimed at automating the loan approval process because the data scientists who developed the deep learning models to do the analytics couldn’t fully explain how the models worked.


And that isn’t an isolated incident in Ferguson’s experience. “There’s a loggerheads battle going on now in organizations between the legal and data science teams,” he said, adding that the specter of hefty fines for GDPR violations is spurring corporate lawyers to vet analytics applications more closely. As a result, data scientists are focusing more on explainable AI to try to justify the use of algorithms, he said.

The increased vetting is driven more by legal concerns than data ethics issues per se, Ferguson said in an interview after the session. But he thinks that the two are intertwined and that the ability of analytics teams to get unfettered access to data sets is increasingly in question for both legal and ethical reasons.

“It’s pretty clear that legal is throwing their weight around on data governance,” he said. “We’ve gone from a bottom-up approach of everybody grabbing data and doing something with it to more of a top-down approach.”

Jill Dyché, an independent consultant who’s based in Los Angeles, said she expects explainable AI to become “less of an option and more of a mandate” in organizations over the next 12 months.

Code of ethics not enough on data analytics

Staying on the right side of the data ethics line takes more than publishing a corporate code of ethics for employees to follow, Farmer said. He cited Enron’s 64-page ethics code, which didn’t stop the energy company from engaging in the infamous accounting fraud scheme that led to bankruptcy and the sale of its assets. Similarly, he sees such codes having little effect in preventing ethical missteps on analytics.

“Just having a code of ethics does absolutely nothing,” Farmer said. “It might even get in the way of good ethical practices, because people just point to it [and say], ‘We’ve got that covered.'”

Instead, he recommended that IT and analytics managers take a rules-based approach to data ethics that can be applied to all three phases of analytics projects: the upfront research process, design and development of analytics applications, and deployment and use of the applications.


End users will make or break an Office 365 migration

An Office 365 migration can improve an end user’s experience by making it easier to work in a mobile environment while also keeping Office 365 features up to date. But if the migration is done without the end users in mind, it can lead to headaches for IT admins.

At a Virtual Technology User Group (VTUG) event in Westbrook, Maine, about 30 attendees piled into a Westbrook Middle School classroom to hear tips on how to transition to Office 365 smoothly.

Office 365 is Microsoft’s subscription-based line of Office applications, such as Word, PowerPoint, Outlook, Teams and Excel. Rather than being installed on a single PC, Office 365 apps run in the cloud, enabling users to access their files wherever they are.

“As IT admins, we need to make the digital transformation technology seem easy,” said Jay Gilchrist, business development manager for Presidio Inc., a cloud, security and digital infrastructure vendor in New York and a managed service provider for Microsoft. Gilchrist and his Presidio colleague, enterprise software delivery architect Michael Cessna, led the session, outlining lessons they’ve learned from previous Office 365 migrations.

Importance of communication and training

Their first lessons included communicating with end users, keeping a tight migration schedule and providing training.

“You want to make it clear that you’re not just making a change for change’s sake,” Gilchrist said. “Communicate these changes as early as possible and identify users who may need a little more training.”

One practical tip he offered is to reserve the organization’s name in Office 365 early to ensure it’s available.


Conducting presentations, crafting targeted emails and working to keep the migration transparent can help IT admins keep end users up to date and enthused about the transition.

“End users are not information professionals,” Cessna said. “They don’t understand what we understand and these changes are a big deal to them.”

Cessna and Gilchrist said that if IT admins want end users to adopt apps in Office 365, they’ll need to provide the right level of training. IT admins can do that by providing internal training sessions, using external resources such as SharePoint Training Sites, as well as letting users work with the apps in a sandbox environment. Training will help end users get used to how the apps work and address questions end users may have in real time, thereby reducing helpdesk tickets once the Office 365 migration is completed. 

Governance and deployment

Before an Office 365 migration, IT admins need to have an application governance and deployment plan in place.

“Governance built within Microsoft isn’t really there,” Cessna said. “You can have 2,000 users and still have 4,500 Teams sessions, and now you have to manage all that data. It’s good to take care of governance at the beginning.”

Deployment of Office 365 is another aspect that IT admins need to tackle at the start of an Office 365 migration. They need to determine what versions are compatible with the organization’s OS and how the organization will use the product.

“It’s important to assess the digital environment, the OSes, what versions of Office are out there and ensure the right number of licenses,” Cessna said.

Securing and backing up enterprise data

One existing concern for organizations migrating from on-premises to an Office 365 cloud environment is security.

Microsoft provides tools that can help detect threats and secure an organization’s data. Microsoft offers Office 365 Advanced Threat Protection (ATP), a cloud-based email filtering service that helps protect against malware; Windows Defender ATP, an enterprise-grade tool to detect and respond to security threats; and Azure ATP, which accesses the on-premises Active Directory to identify threats.

Microsoft has also added emerging security capabilities such as passwordless login, single sign-on and multifactor authentication to ensure data or files don’t get compromised or stolen during an Office 365 migration.

Regulated organizations such as financial institutions that need to retain data for up to seven years will need to back up Office 365 data, as Microsoft provides limited data storage capabilities, according to Cessna.

Microsoft backs up data within Office 365 for up to two years in some cases, and only for one month in other cases, leaving the majority of data backup to IT.

“[Microsoft] doesn’t give a damn about your data,” he said. “Microsoft takes care of the service, but you own the data.”

Picking the right license

Once the organization is ready for the migration, it’s important to choose the right Office 365 license, according to Gilchrist.

There are several ways for an organization to license an Office 365 subscription. Gilchrist said choosing the right one depends on the size of the organization and the sophistication of the organization’s IT department.

When deciding which Office 365 subscription to license, it’s important to examine the size and scope of your organization and decide which offering works best for you.

Smaller businesses can choose licenses for 300 or fewer users, as well as options for add-ons like a desktop version of Office and advanced security features. The cost for enterprise licenses differs depending on the scope and number of licenses needed, and educational and nonprofit discounts on licenses are offered as well.

Other licensing options include Microsoft 365 bundles, which combine Office 365 with a Windows 10 deployment, or organizations could use Microsoft as a Cloud Solution Provider and have the company handle the heavy lifting of the Office 365 migration.

“There are different ways to do it. You just have to be aware of the best way to license for your business,” Gilchrist said.

Measuring success and adoption

Once the migration is completed, IT still has one more objective: proving the worth of the Office 365 migration.

“This is critical and these migrations aren’t cheap,” Cessna said. “You want to show back to the business the ROI and what this new world looks like.”

To do that, IT admins will have to circle back to their end users. They can use tools such as Microsoft’s Standard Office 365 Usage Reports, Power BI Adoption reports or other application measurement software to pin down end user adoption and usage rates. They can provide additional training, if necessary.
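The data behind those reports can also be pulled programmatically. As a minimal sketch — assuming $token holds a valid Microsoft Graph access token with the Reports.Read.All permission — the active user report for the last 30 days can be retrieved like this:

# Pull the Office 365 active user detail report (last 30 days) from
# Microsoft Graph. Assumes $token is a valid access token with the
# Reports.Read.All permission.
$uri = "https://graph.microsoft.com/v1.0/reports/getOffice365ActiveUserDetail(period='D30')"
Invoke-RestMethod -Uri $uri -Headers @{ Authorization = "Bearer $token" } -OutFile ActiveUsers.csv
(Import-Csv ActiveUsers.csv).Count   # number of users in the report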

“Projects fail because the end users aren’t happy,” Cessna said. “We don’t take them into account enough. Our end users are our customers and we need to make sure they’re happy.”


Netflix launches tool for monitoring AWS credentials

LAS VEGAS — A new open source tool looks to make monitoring AWS credentials easier and more effective for large organizations.

The tool, dubbed Trailblazer, was introduced during a session at Black Hat USA 2018 on Wednesday by William Bengtson, senior security engineer at Netflix, based in Los Gatos, Calif. During his session, Bengtson discussed how his security team took a different approach to reviewing AWS data in order to find signs of potentially compromised credentials.

Bengtson said Netflix’s methodology for monitoring AWS credentials was fairly simple and relied heavily on AWS’ own CloudTrail log monitoring tool. However, Netflix couldn’t rely solely on CloudTrail to effectively monitor credential activity; Bengtson said a different approach was required because of the sheer size of Netflix’s cloud environment, which is 100% AWS.

“At Netflix, we have hundreds of thousands of servers. They change constantly, and there are 4,000 or so deployments every day,” Bengtson told the audience. “I really wanted to know when a credential was being used outside of Netflix, not just AWS.”

That was crucial, Bengtson explained, because an unauthorized user could set up infrastructure within AWS, obtain a user’s AWS credentials and then log in using those credentials in order to “fly under the radar.”

However, monitoring credentials for usage outside of a specific corporate environment is difficult, he explained, because of the sheer volume of data regarding API calls. An organization with a cloud environment the size of Netflix’s could run into challenges with pagination for the data, as well as rate limiting for API calls — which AWS has put in place to prevent denial-of-service attacks.

“It can take up to an hour to describe a production environment due to our size,” he said.

To get around those obstacles, Bengtson and his team crafted a new methodology that didn’t require machine learning or any complex technology, but rather a “strong but reasonable assumption” about a crucial piece of data.

“The first call wins,” he explained, referring to when a temporary AWS credential makes an API call and grabs the first IP address that’s used. “As we see the first use of that temporary [session] credential, we’re going to grab that IP address and log it.”

The methodology, which is built into the Trailblazer tool, collects the first API call IP address and other related AWS data, such as the instance ID and assumed role records. The tool, which doesn’t require prior knowledge of an organization’s IP allocation in AWS, can quickly determine whether the calls for those AWS credentials are coming from outside the organization’s environment.
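As a rough illustration of that “first call wins” bookkeeping — not Trailblazer’s actual code, and the record fields here are assumptions — the core idea fits in a few lines of PowerShell:

# Rough sketch of 'first call wins' (not Trailblazer's actual code).
# $cloudTrailRecords is assumed to hold parsed CloudTrail events with
# AccessKeyId and SourceIPAddress fields.
$firstSeen = @{}
foreach ($record in $cloudTrailRecords) {
    if (-not $firstSeen.ContainsKey($record.AccessKeyId)) {
        # The first use of a temporary credential pins its expected IP
        $firstSeen[$record.AccessKeyId] = $record.SourceIPAddress
    }
    elseif ($firstSeen[$record.AccessKeyId] -ne $record.SourceIPAddress) {
        Write-Warning "$($record.AccessKeyId) called from $($record.SourceIPAddress); first seen at $($firstSeen[$record.AccessKeyId])"
    }
}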

“[Trailblazer] will enumerate all of your API calls in your environment and associate that log with what is actually logged in CloudTrail,” Bengtson said. “Not only are you seeing that it’s logged, you’re seeing what it’s logged as.”

Bengtson said the only requirement for using Trailblazer is a high level of familiarity with AWS — specifically how AssumeRole calls are logged. The tool is currently available on GitHub.

Vendors race to adopt Google Contact Center AI

Google has released a development platform that will make it easier for businesses to deploy virtual agents and other AI technologies in the contact center. The tech giant launched the product in partnership with several leading contact center vendors, including Cisco and Genesys. 

The Google Contact Center AI platform includes three main features: virtual agents, AI-powered assistance for human agents and contact center analytics. Google first released a toolkit for building conversational AI bots in November and updated the platform this week, with additional tools for contact centers.

The virtual agents can help resolve common customer inquiries using Google’s natural language processing platform, which recognizes voice and textual inputs. Genesys, for example, demonstrated how the chatbot could help a customer return ill-fitting shoes before passing the phone call to a human agent, who could help the customer order a new pair.

Google’s agent assistance system scans a company’s knowledge bases, such as FAQs and internal documents, to help agents answer customer questions faster. The analytics tool reviews chats and call recordings to identify customer trends, assisting in the training of live agents and the development of virtual agents.

Vendors rush to adopt Google Contact Center AI

Numerous contact center vendors that directly compete with one another sent out strikingly similar press releases on Tuesday about their adoption of Google Contact Center AI. The Google platform is available through partners Cisco, Genesys, Mitel, Five9, RingCentral, Vonage, Twilio, Appian and Upwire.

“I don’t think I’ve ever heard of a launch like this, where almost every player — except Avaya — is announcing something with the same company,” said Jon Arnold, principal analyst of Toronto-based research and analysis firm J Arnold & Associates.

Avaya was noticeably absent from the list of partners. The company spent most of 2017 in bankruptcy court and was previously faulted by critics for failing to pivot to the cloud quickly enough. The company said at a conference earlier this year it was developing AI capabilities internally, said Irwin Lazar, an analyst at Nemertes Research, based in Mokena, Ill.

An Avaya spokesperson said its platforms integrated with a range of AI technologies from vendors, including Google, IBM, Amazon and Nuance. “Avaya does have a strong relationship with Google, and we continue to pursue opportunities for integration on top of what already exists today,” the spokesperson said.

Google made headlines last month with the release of Google Duplex, a conversational AI bot targeting the consumer market. The company demonstrated how the platform could pass as human during short phone conversations with a hair salon and restaurant. Google’s Contact Center AI was built on some of the same infrastructure, but it’s a separate platform, the company said.

“Google has been pretty quiet. They are not a contact center player. But as AI keeps moving along the curve, everyone is trying to figure out what to do with it. And Google is clearly one of the strongest players in AI, as is Amazon,” Arnold said.

Because it relies overwhelmingly on advertising revenue, Google doesn’t need its Contact Center AI to make a profit. Google will be able to use the data that flows through contact centers to improve its AI capabilities. That should help it compete against Amazon, which entered the contact center market last year with the release of Amazon Connect.

The contact center vendors now partnering with Google had already been racing to develop or acquire AI technologies on their own, and some highlighted how their own AI capabilities would complement Google’s offering. Genesys, for example, said its Blended AI platform — which combines chatbots, machine learning and analytics — would use predictive routing to transfer calls between Google-powered chatbots and live agents.  

“My sense with AI is that it will be difficult for vendors to develop capabilities on their own, given that few can match the computing power required for advanced AI that vendors like Amazon, Google and Microsoft can bring to the table,” Lazar said.

SaaS activity alerts can mitigate manual misconfigurations

External threats can actually be easier to combat than the potential of an insider stealing data, which makes access management and awareness vital for IT.

More and more sensitive data is being stored in the cloud, and improper access controls or limited visibility can lead to unintended data exposures or even insider theft. However, better SaaS activity alerts can help mitigate these issues.

BetterCloud CEO and founder David Politis spoke with SearchSecurity about the dangers of cloud misconfigurations and having too many admins, as well as how SaaS activity can be monitored automatically to avoid security breaches.

Editor’s note: This conversation has been edited for length and clarity.

You have said that it is functionally impossible to monitor SaaS activity manually, so what are the programmatic options for security?

David Politis: The most important thing we have is this framework that we recreated with our customers. The first step is centralizing all of the data that you have across these applications because data sprawl is one of the biggest issues.


Once you’ve centralized that data, programmatically you have to go into all the different APIs that are available from these applications and you need to bring all the settings and the configuration and the entitlements and everything into a single place because part of the problem is going app by app. That’s not scalable.

Once you’ve centralized all of that, you need to be able to go and discover against that centralized repository of all the entitlements and settings you have, because once you centralize, what you’ll find is you have, depending on the size of your organization, millions — I’m not exaggerating — millions of data points that you’re having to report against or audit.

So you centralize then you do discovery and discovery means: Let me look at all my groups or email distribution lists that are set like this, or I have a rule in my organization where I need to be able to see all the files that are shared in this way. Now, still, that’s a massive data set and somehow you need that to be surfaced more real time because the changes in the settings and the entitlements are changing all the time. They’re literally changing every day, all day. People are working in these applications; they’re sharing files; they’re creating Slack channels; they’re adding folders in Dropbox; they’re doing X, Y, Z in Salesforce. It’s changing on a regular basis.

So after centralizing and being able to discover — that really helps you retroactively — then you need something that surfaces the insights on a more regular basis that says, ‘Hey, when we catch this needle in the haystack, surface that.’

The last step is you want to be able to do something about that because if you’re just surfacing data all day long, what we hear from IT is that they have this kind of fatigue of alerts, they have a fatigue of trying to put out fires all day long. And so there needs to be a system that not only brings all the data, centralizes it, makes it discoverable, surfaces insight and the items that need to get the exposures, the risk, and then ultimately be able to remediate that and take some kind of an action against that and enforce that.

What are the new features BetterCloud is introducing to enable SaaS activity monitoring?

Politis: The new service that we’re launching now, that we just started layering into the product, is our activity-based alerting. Basically, all the things that you and I just talked about for the last 20 minutes are based on what I would call ‘state-based’ settings, configurations or entitlements — is a user set as an end user or an admin? Is this email distribution list set to public or is it set to private? — that’s the state that it is in.

We are now starting to do ‘activity-based’ monitoring and alerting and triggers for our workflows, and that is at a completely different level. If somebody just downloads 500 files in a matter of 30 minutes, that’s a next level deeper in terms of looking at user behavior and user activities within these platforms. Did somebody just create 100 users that are all super admin? Were there suspicious logins to this platform outside of the IP range?

So, you start getting more into the activity-based stuff, which is either a faster indicator of misconfigurations that are mistakes, or that’s actually a faster indication — and probably more likely, frankly — of malicious behavior. And so we really extended the platform to start looking at user behavior, user activity in these platforms.

The number one request I’ve gotten for the last year from customers is: I want to know when people are downloading files from Dropbox, Box, Slack, Salesforce [and/or] Google. File downloads have been the number one requested activity to monitor for as long as I can remember because, as you can imagine, that starts to be a little bit more malicious. And that’s when IP can really be taken out of an organization.

I think the Uber/Waymo example is a great one. That is just someone at Waymo, at Google downloading a bunch of files out of Google Drive and leaving. Now, if you were looking at their activity in Google Drive, you would have noticed that they downloaded all the files from the confidential folder, and you can flag that, you could block, you could follow up with security.

It’s as it’s happening, versus the state that things are in. A file download is not a state the file has. So by looking at all the states of the file, you don’t know that it was downloaded 100 times by this person in a 30-minute window; by seeing that someone successfully logged in, you don’t see that there were 100 failed logins from 100 different IP addresses.
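That kind of sliding-window alert can be sketched in a few lines. A minimal illustration, assuming the activity stream has already been collected into $downloadEvents objects with UserName and Timestamp fields (both names are hypothetical):

# Alert when any user downloads 500 or more files within a 30-minute
# window. $downloadEvents is assumed; field names are hypothetical.
$threshold = 500
$window = New-TimeSpan -Minutes 30
foreach ($user in ($downloadEvents | Group-Object UserName)) {
    $times = $user.Group.Timestamp | Sort-Object
    for ($i = 0; $i + $threshold - 1 -lt $times.Count; $i++) {
        # If the Nth download after this one falls inside the window, alert
        if (($times[$i + $threshold - 1] - $times[$i]) -le $window) {
            Write-Warning "$($user.Name) downloaded $threshold files in under 30 minutes"
            break
        }
    }
}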

What platforms do you support with these SaaS activity alerts?

Politis: We have it fully integrated for Okta, Dropbox and Google. We’re layering it in for Box and Salesforce, so over the next couple months we’ll have the same functionality available across all the applications that we support.

And, this is actually an interesting indication because a lot of the SaaS platforms that we work with, five years ago, three years ago, they didn’t make this kind of activity streams available via their API. Now they’re making it available because how do companies protect themselves against this stuff? The only way is for the platforms themselves to make this information available via API, make this information available programmatically to their customers, to their partners. And so we’re taking advantage of that. Dropbox’s API that we’re using is a new API available for their enterprise customers for exactly this purpose, but their customers don’t know how to utilize that. What we’re doing is we’re doing that for the customer, we’re going out to the different SaaS platforms connecting to these activity streams, and then making sense of them. Otherwise, it’s just a stream of data.

But to that first part of the discussion: People keying in on this is what I’ve been waiting for, for many years, because people have been [saying], ‘OK, I don’t see this problem in the news.’ And now it’s starting.

I think it’s only the beginning. I think you’re going to see what I’m seeing with some of our really large organizations that these misconfigurations are going to come out more and more and more and the impact that they’re having on organizations is bigger than people know yet.

Manage all your Hyper-V snapshots with PowerShell


It’s much easier to manage Hyper-V snapshots using PowerShell than a GUI because PowerShell offers greater flexibility. Once you’re familiar with the basic commands, you’ll be equipped to oversee and change the state of the VMs in your virtual environment.

PowerShell not only reduces the time it takes to perform a task using a GUI tool, but it also reduces the time it takes to perform repeated tasks. For example, if you want to see the memory configured on all Hyper-V VMs, a quick PowerShell command or script is easier to execute than checking VMs one by one. Similarly, you can perform operations related to Hyper-V snapshots using PowerShell.
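For example, a one-liner like the following lists the memory settings of every VM on the local host, using standard properties of the objects Get-VM returns:

# Show memory configuration for all VMs on this Hyper-V host
Get-VM | Select-Object Name, DynamicMemoryEnabled, MemoryStartup, MemoryAssigned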

A snapshot — or checkpoint, depending on which version of Windows Server you have — is a point-in-time picture of a VM that you can use to restore that VM to the state it was in when the snapshot was taken. For example, if you face issues when updating Windows VMs and they don’t restart properly, you can restore VMs to the state they were in before you installed the updates.

Similarly, developers can use checkpoints to quickly perform application tests.

Before Windows Server 2012 R2, Microsoft didn’t support snapshots for production use. But starting with Windows Server 2012 R2, snapshots have been renamed checkpoints and are well-supported in a production environment.
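On Windows Server 2016 and later, you can also choose the checkpoint type per VM. A quick example using the -CheckpointType parameter of Set-VM:

# Configure SQLVM to take production checkpoints by default
Set-VM -Name SQLVM -CheckpointType Production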

PowerShell commands for Hyper-V snapshots and checkpoints

Microsoft offers a few PowerShell commands to work with Hyper-V checkpoints and snapshots, such as Checkpoint-VM, Get-VMSnapshot, Remove-VMSnapshot and Restore-VMSnapshot.

If you want to retrieve all the Hyper-V snapshots associated with a particular VM, all you need to do is execute the Get-VMSnapshot command with the -VMName parameter. For example, the PowerShell command below lists all the snapshots associated with SQLVM:

Get-VMSnapshot -VMName SQLVM

There are two types of Hyper-V checkpoints available: standard and production checkpoints. If you just need all the production checkpoints for a VM, execute the PowerShell command below:

Get-VMSnapshot -VMName SQLVM -SnapshotType Production

To list only the standard checkpoints, execute the following PowerShell command:

Get-VMSnapshot -VMName SQLVM -SnapshotType Standard

When it comes to creating Hyper-V checkpoints for VMs, use the Checkpoint-VM PowerShell command. For example, to take a checkpoint for a particular VM, execute the command below:

Checkpoint-VM -Name TestVM -SnapshotName TestVMSnapshot1

The above command creates a checkpoint for TestVM on the local Hyper-V server, but you can use the following command to create a checkpoint for a VM located on a remote Hyper-V server:

Get-VM SQLVM -ComputerName HyperVServer | Checkpoint-VM

There are situations where you might want to create Hyper-V checkpoints of VMs in bulk. For example, before installing an update on production VMs or upgrading line-of-business applications in a VM, you might want to create checkpoints to ensure you can successfully restore VMs to ensure business continuity. But if you have several VMs, the checkpoint process might take a considerable amount of time.

You can design a small PowerShell script to take Hyper-V checkpoints for VMs specified in a text file, as shown in the PowerShell script below:

$ProdVMs = "C:\Temp\ProdVMs.TXT"
Foreach ($ThisVM in Get-Content $ProdVMs)
{
    $ChkName = $ThisVM + "_BeforeUpdates"
    Checkpoint-VM -Name $ThisVM -SnapshotName $ChkName
}
Write-Host "Script finished creating Checkpoints for Virtual Machines."

The above PowerShell script gets VM names from the C:\Temp\ProdVMs.TXT file one by one and then runs the Checkpoint-VM PowerShell command to create the checkpoints.

To remove Hyper-V snapshots from VMs, use the Remove-VMSnapshot PowerShell command. For example, to remove a snapshot called TestSnapshot from a VM, execute the following PowerShell command:

Get-VM SQLVM | Remove-VMSnapshot -Name TestSnapshot

To remove Hyper-V checkpoints from VMs in bulk, use a script much like the one you used to create the checkpoints. Let’s assume all the VMs are working as expected after installing the updates and you would like to remove the checkpoints. Simply execute the PowerShell script below:

$ProdVMs = "C:\Temp\ProdVMs.TXT"
Foreach ($ThisVM in Get-Content $ProdVMs)
{
    $ChkName = $ThisVM + "_BeforeUpdates"
    Get-VM $ThisVM | Remove-VMSnapshot -Name $ChkName
}
Write-Host "Script finished removing Checkpoints for Virtual Machines."

To restore Hyper-V snapshots for VMs, use the Restore-VMSnapshot PowerShell cmdlet. For example, to restore or apply a snapshot to a particular VM, use the following PowerShell command:

Restore-VMSnapshot -Name "TestSnapshot1" -VMName SQLVM -Confirm:$False

Let’s assume your production VMs aren’t starting up after installing the updates and you would like to restore the VMs to their previous states. Use the PowerShell script below and perform the restore operation:

$ProdVMs = "C:\Temp\ProdVMs.TXT"
Foreach ($ThisVM in Get-Content $ProdVMs)
{
    $ChkName = $ThisVM + "_BeforeUpdates"
    Restore-VMSnapshot -Name $ChkName -VMName $ThisVM -Confirm:$False
}
Write-Host "Script finished restoring Checkpoints for Virtual Machines."

Note that, by default, the command asks for confirmation when restoring a checkpoint for a VM. To avoid the confirmation prompt, add the -Confirm:$False parameter to the command, as shown above.

Recruiting on LinkedIn adds analytics and pointed questions

It will be easier for someone recruiting on LinkedIn to poach talent once the social network giant releases its new analytics platform near the end of September. This may seem startling, but LinkedIn is not shying away from this outcome.

The platform, LinkedIn Talent Insights, is intended to simplify the ability of recruiters to get competitive intelligence and target potential candidates. Users will be able to look at, for instance, the number of software engineers employed by a firm, parse it down by city, see the growth in hiring and note the attrition rate.

In beta testing for LinkedIn Talent Insights, some of the participating users were able to identify firms in some markets that have people with sought-after skill sets, such as software engineers, and then target them. The workers can then be identified using tools for recruiting on LinkedIn.

Poaching talent questioned

Eric Owski, the head of product for Talent Insights at LinkedIn, outlined the forthcoming tool at the recent Society for Human Resource Management conference in Chicago. In a live demo before his audience, he showed how to assemble a competitive analysis in minutes.

During an audience Q&A, one woman in attendance asked Owski about the ethics of using this analytics tool to raid a competitor.


The attendee asked: “Does that set up an environment for poaching talent?” And then she immediately answered her own question. “I think the answer is yes. And so why would I sign off on that?”

Owski agreed that using the new tool for recruiting on LinkedIn made poaching possible but argued that there was nothing wrong with making this data available.

Internally, the LinkedIn team on the project had many “philosophical” discussions about the use of this data, Owski said. But the team concluded that “the world is becoming more transparent,” and “very sophisticated teams at large companies were able to figure out a lot of the calculations that we’re making available in this product,” he said. 

“We think by packaging it up nicely, it levels the playing field,” Owski said. “We feel like we’re on safe ground.”

LinkedIn draws line on available data

But LinkedIn is drawing a line on what data it makes available.

Owski said LinkedIn can determine with up to 93% accuracy the gender diversity of workers at a firm by analyzing first names. But the company isn’t making company-specific gender data available in the search tool because it is “very highly sensitive data” that can open up questions of discrimination. LinkedIn will make that information available at a market or broader level.

LinkedIn Talent Insights uses data from its 560 million global members. The site has 15 million open jobs at any given time and recognizes some 23,000 standardized job titles. The analytics platform is global and not dependent on government data, Owski said.

The tool’s ease of use was a key point for Owski. The interface appeared to be no more complicated than Google’s advanced search feature. It asked the user to input skills to include and exclude, along with job title, location and industry. It then quickly produced a list of firms with employees who have those skills, along with hiring trends and attrition rates.

One attendee, Kevin Cottingim, senior vice president of HR at Employbridge, a staffing firm, said in an interview he was “excited” about trying the analytics platform for recruiting on LinkedIn.

Cottingim said his firm has 500 branches around the country and the recruiting analytics tool will help them understand if there are more positions available than candidates in any given market. With that data, he can strategize his plans for more targeted advertising, as well as consider paying a salary premium.

In terms of seeing the attrition rates at other firms, Cottingim said, “I would love to be able to benchmark that against my competitors.”

Quality of data questioned

Some in the audience raised questions about the quality of the data, and whether, for instance, profile changes are a good enough indicator of attrition. An attendee asked if LinkedIn continued to appeal to a full demographic range of people, particularly millennials.

Owski said there’s a potential for noise in the data, but he believes they have enough representation of professionals to “cancel out the noise.”

As far as competitors to LinkedIn, Owski said, unlike Facebook, it doesn’t have Snapchat-type rivals. Some industry observers believe Snapchat, which tends to appeal to younger users, is a potential Facebook threat. Owski’s point is that LinkedIn doesn’t have similar competitors.

Product pricing will be available in July, and the vendor may bundle LinkedIn Talent Insights for people who are already recruiting on LinkedIn. An upcoming feature will be an API that allows users to take the data and use it in their own dashboards.

Another attendee, Melvin Jones, the workforce strategy branch chief at the National Oceanic and Atmospheric Administration (NOAA), said the LinkedIn Talent Insights tool may help the agency improve the targeting of its job advertising and figure out what job markets are best for certain skills. 

It will also enable the agency to know how private sector firms view NOAA’s workforce, Jones said in an interview.

“It’s good to have validation of the data and see how other people are viewing us,” Jones said. “In military terms, it’s good to see what the enemy sees.”