Tag Archives: plans

Ytica acquisition adds analytics to Twilio Flex cloud contact center

Twilio has acquired the startup Ytica and plans to embed its workforce optimization and analytics software into Twilio Flex, a cloud contact center platform set to launch later this year. Twilio will also sell Ytica’s products to competing contact center software vendors.

Twilio declined to disclose how much it paid for Ytica, but said the deal wouldn’t significantly affect its earnings in 2018. Twilio plans to open its 17th branch office in Prague, where Ytica is based.  

The acquisition comes as AI analytics has emerged as a differentiator in the expanding cloud contact center market and as Twilio — a leading provider of cloud-based communications tools for developers — prepares for the general release of its first prebuilt contact center platform, Twilio Flex.

Founded in 2017, Ytica sells a range of real-time analytics, reporting and performance management tools that contact center vendors can add to their platforms. In addition to Twilio, Ytica has partnerships with Talkdesk and Amazon Connect that are expected to continue.

Twilio is targeting Twilio Flex at large enterprises looking for the flexibility to customize their cloud contact centers. The platform launched in beta in March and is expected to be commercially released later this year.

The vendor’s communications platform as a service already supports hundreds of thousands of contact center agents globally. Twilio Flex places those same developer tools into the shell of a contact center dashboard preconfigured to support voice, text, video and social media channels.
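For a sense of what those developer tools look like, the sketch below sends a single SMS through Twilio's public REST API using the company's standard Python helper library. The account credentials and phone numbers are placeholders, and Flex layers a full agent desktop on top of primitives like this rather than exposing them directly.

```python
# Minimal sketch: one outbound SMS via Twilio's Python helper library.
# The account SID, auth token and both phone numbers are placeholders.
import os
from twilio.rest import Client  # pip install twilio

client = Client(os.environ["TWILIO_ACCOUNT_SID"], os.environ["TWILIO_AUTH_TOKEN"])

message = client.messages.create(
    to="+15005550006",      # customer's number (placeholder)
    from_="+15005550001",   # your Twilio number (placeholder)
    body="Thanks for contacting support. An agent will be with you shortly.",
)
print(message.sid)  # Twilio's unique ID for the queued message
```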

The native integration of Ytica’s software should boost Twilio Flex’s appeal as businesses look for ways to save money and increase sales by automating the monitoring and management of contact center agents. 

Ytica’s portfolio includes speech analytics, call recording search, and real-time monitoring of calls and agent desktops. Businesses could use the technology to identify customer trends and to give feedback to agents.

Contact center vendors tout analytics in cloud

The marketing departments of leading contact center vendors have placed AI at the center of sales pitches this year, even though analysts say much of the technology is still in the early stages of usefulness.

This summer, Google unveiled an AI platform for building virtual agents and automating contact center analytics. Twilio was one of nearly a dozen vendors to partner with Google at launch, along with Cisco, Genesys, Mitel, Five9, RingCentral, Vonage, Appian and Upwire.

Within the past few months Avaya and Nice inContact have also updated their workforce optimization suites for contact centers with features including speech analytics and real-time trend reporting.

Enterprise technology buyers say analytics will be the most important technology for transforming customer experiences in the coming years, according to a recent survey of 700 IT and business leaders by Nemertes Research Group Inc., based in Mokena, Ill.

For Sale – STRIX RX 570 OC 4GB/ROG STRIX-GTX1050TI-4G

Hi, this is a new card for a friend's build, but there was a change of plans. It has been tested to confirm it's working and then put back in the box. It still has the protective wrap on the card and comes boxed as new. The best 1050 Ti you can get: RGB lighting, backplate, excellent quiet cooler. I will assist with any warranty issues.

Added an RX 570 4GB. It's in mint condition and comes boxed with accessories. Just over a year's warranty left, which I'm happy to help out with.

Price and currency: £115 for the 1050 Ti; £140 for the RX 570 (sold to davidaw)
Delivery: Delivery cost is included within my country
Payment method: BT
Location: Carlisle
Advertised elsewhere?: Advertised elsewhere
Prefer goods collected?: I have no preference

Windstream SD-WAN gets help connecting to the cloud

Network service provider Windstream Communications plans in August to extend its SD-WAN Cloud Connect service to applications running on Microsoft Azure. The product is designed to provide a reliable connection between the Windstream SD-WAN and public clouds.

Windstream introduced the service in July, with initial support limited to Amazon Web Services, and plans to add support for other cloud providers over time.

Connecting corporate employees to application services running in a public cloud is not a trivial matter. Corporate IT has to know the performance requirements of cloud-based applications and the expected usage patterns to estimate network bandwidth capacity. Engineers also have to identify potential bottlenecks and plan for monitoring network traffic and network connection endpoints after deploying applications in the cloud.
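As a rough illustration of that sizing exercise, the sketch below estimates the WAN bandwidth to provision for a cloud-hosted application from a few assumed inputs: concurrent users, per-session throughput, a burst multiplier and growth headroom. The formula and numbers are hypothetical back-of-the-envelope figures, not a Windstream tool.

```python
# Rough capacity estimate for a cloud-hosted application: a hypothetical
# illustration of the sizing exercise described above.

def estimate_bandwidth_mbps(concurrent_users: int,
                            kbps_per_session: float,
                            peak_factor: float = 1.5,
                            headroom: float = 0.3) -> float:
    """Return the WAN bandwidth (Mbps) to provision for a cloud app.

    concurrent_users  -- expected simultaneous sessions at the busiest site
    kbps_per_session  -- average throughput each session consumes
    peak_factor       -- multiplier for bursty usage patterns
    headroom          -- extra fraction reserved for growth and monitoring
    """
    baseline_mbps = concurrent_users * kbps_per_session / 1000
    return baseline_mbps * peak_factor * (1 + headroom)


if __name__ == "__main__":
    # e.g. 400 concurrent users averaging 120 kbps each
    print(f"Provision ~{estimate_bandwidth_mbps(400, 120):.0f} Mbps")
```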

Windstream’s virtual edge device

Windstream’s latest Cloud Connect service is designed to eliminate some of the hassles of connecting to the public cloud. The service connects through a virtual edge device that communicates with the Windstream SD-WAN Concierge offering, which is a premises-based version of VMware’s VeloCloud.

Windstream can deploy the edge device in its data center or on a customer’s virtualized server. After installing the software, Windstream activates it and handles all management chores as part of the customer’s Windstream SD-WAN service.

Windstream provides an online portal for creating, deploying and managing SD-WAN routing and security policies. The site includes a console for accessing real-time intelligence on link performance.

Windstream’s partnership with an SD-WAN vendor is not unique. Many service providers have announced such deals to compete for a share of the fast-growing market. Other alliances include Comcast Business and CenturyLink with Versa Networks; Verizon with Viptela, which is owned by Cisco; and AT&T and Sprint with VeloCloud.

Windstream, which serves mostly small and midsize enterprises, has grown its network service business through acquisition. In January, Windstream announced it would acquire Mass Communications, a New York-based competitive local exchange carrier. In 2017, Windstream completed the acquisitions of Broadview and EarthLink.

Juniper preps 400 GbE across PTX, MX and QFX hardware

Juniper plans to add 400 Gigabit Ethernet across its PTX and MX routers and QFX switches as internet companies and cloud providers gear up for the higher throughput needed to meet global demand from subscribers.

Juniper said this week it would roll out higher-speed ports in the three product series over the next 12 months. The schedule is in line with analysts' predictions that vendors would start shipping 400 GbE devices this year.

Juniper will market the devices for several uses, including a data center backbone, internet peering, data center interconnect, a metro core, telecommunication services and a hyperscale data center IP fabric.

The announcement comes a month after Juniper released the 400 GbE-capable Penta, a 16-nanometer packet-forwarding chipset that consumes considerably less energy than Juniper's other silicon. Juniper designed the Penta for carriers rearchitecting their data centers to deliver 5G services.

Penta is destined for some of the new hardware, which will help Juniper meet carrier demand for more speed, said Eric Hanselman, an analyst at New York-based 451 Research.

“Juniper has such a strong base with service providers and network operators and they’re already seeing strong pressure for higher capacity,” Hanselman said. “Getting the Penta silicon out into the field on new platforms could help to move Juniper forward [in the market].”

The upcoming hardware will also use a next-generation ExpressPlus chipset and Q5 application-specific integrated circuit. The Juniper silicon will provide better telemetry and support for VXLAN and EVPN, the company said.

Cloud developers use EVPN, VXLAN and the Border Gateway Protocol to set up a multi-tenancy network architecture that supports multiple customers. The design isolates customers so data and malware can’t travel between them.
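The sketch below illustrates that isolation idea in miniature: each tenant maps to its own VXLAN network identifier (VNI), and traffic is forwarded only when source and destination share a VNI. The tenant names and the forwarding check are invented for the example and are not Juniper's implementation.

```python
# Minimal sketch of the tenant isolation behind VXLAN/EVPN designs: each
# tenant gets its own VXLAN network identifier (VNI), and traffic is only
# forwarded within a single VNI. Names and the check are illustrative.

from dataclasses import dataclass

@dataclass(frozen=True)
class Endpoint:
    tenant: str
    vni: int          # VXLAN network identifier assigned to the tenant
    ip: str

TENANT_VNIS = {"acme": 10010, "globex": 10020}

def make_endpoint(tenant: str, ip: str) -> Endpoint:
    return Endpoint(tenant, TENANT_VNIS[tenant], ip)

def may_forward(src: Endpoint, dst: Endpoint) -> bool:
    """Packets are only switched within one VNI, so tenants stay isolated."""
    return src.vni == dst.vni

if __name__ == "__main__":
    a = make_endpoint("acme", "10.0.0.5")
    b = make_endpoint("acme", "10.0.0.9")
    c = make_endpoint("globex", "10.0.0.5")   # same IP, different tenant
    print(may_forward(a, b))   # True  -- same tenant, same VNI
    print(may_forward(a, c))   # False -- cross-tenant traffic is dropped
```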

For the IP transport layer, Juniper plans to introduce in the second half of the year the 3-RU PTX10003 Packet Transport Router for the backbone, internet peering and data center interconnect applications. The hardware supports 100 and 400 GbE and plugs into an existing multirate QSFP-DD fiber connector system for a more straightforward speed upgrade. The Juniper system provides MACsec support for 160 100 GbE interfaces and FlexE support for 32 400 GbE interfaces. The upcoming ExpressPlus silicon powers the device.

Also in the second half of the year, Juniper plans to release the QFX10003 switch for the data center. The system packs 32 400 GbE interfaces into 3-RU hardware that can scale up to 160 100 GbE interfaces. The next-generation Q5 chip will power the system.

In the first half of next year, Juniper expects to release the QFX5220 switch, which will offer up to 32 400 GbE interfaces in a 1-RU system. The Q5-powered hardware also supports a mix of 50, 100 and 400 GbE for server and inter-fabric connectivity.

Finally, for wide-area network services, Juniper plans to release Penta-powered 400 GbE MPC10E line cards for the MX960, MX480 and MX240 routers, with availability expected early next year.

Juniper is likely to face stiff competition in the 400 GbE market from Cisco and Arista. Initially, prices for the high-speed interfaces will be too high for many companies. However, Hanselman expects that to change over time.

“The biggest challenge with 400 GbE is getting interface prices to a point where they can open up new possibilities,” he said. “[But] healthy competition is bound to make this happen.”

Indeed, in 2017, competition for current hardware drove Ethernet bandwidth costs down to a six-year low, according to analyst firm Crehan Research Inc., based in San Francisco. By 2022, 400 GbE will account for the majority of Ethernet bandwidth from switches, Crehan predicts.

Missions acquisition will simplify Slack integrations

Slack plans to use the technology gained from its acquisition of Missions, a division of the startup Robots & Pencils, to make it easier for non-developers to customize workflows and integrations within its team collaboration app.

A Slack user with no coding knowledge can use Missions to build widgets for getting more work done within the Slack interface. For example, a human resources department could use a Missions widget to track and approve interviews with job applicants.

The Missions tool could also power an employee help desk system within Slack, or be used to create an onboarding bot that keeps new hires abreast of the documents they need to sign and the orientations they must attend. 
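As a rough sketch of what such an onboarding bot automates, the example below posts a checklist of unsigned documents to a channel through Slack's standard Web API (chat.postMessage). The bot token, channel name and document list are placeholders; this is hand-written code, not the Missions no-code tool itself.

```python
# Hand-rolled sketch of an onboarding reminder a Missions-style widget could
# automate, using Slack's standard Web API. Token, channel and documents are
# placeholders.

import os
import requests

SLACK_TOKEN = os.environ["SLACK_BOT_TOKEN"]   # hypothetical bot token
CHANNEL = "#new-hires"                        # hypothetical channel

def remind_new_hire(name: str, pending_docs: list[str]) -> None:
    """Post a checklist of unsigned documents for a new hire."""
    text = f"Welcome {name}! Documents still to sign: " + ", ".join(pending_docs)
    resp = requests.post(
        "https://slack.com/api/chat.postMessage",
        headers={"Authorization": f"Bearer {SLACK_TOKEN}"},
        json={"channel": CHANNEL, "text": text},
        timeout=10,
    )
    resp.raise_for_status()
    if not resp.json().get("ok"):
        raise RuntimeError(resp.json().get("error", "unknown Slack API error"))

if __name__ == "__main__":
    remind_new_hire("Jordan", ["NDA", "benefits enrollment", "security policy"])
```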

“In the same way that code libraries make it easier to program, Slack is trying to make workflows easier for everyone in the enterprise,” said Wayne Kurtzman, an analyst at IDC. “Without training, users will be able to create their own automated workflows and integrate with other applications.”

Slack said it would take a few months to add Missions to its platform. It will support existing Missions customers for free during that time. In a note to its 200,000 active developers, Slack said the Missions purchase would benefit them too, by making it easier to connect their Slack integrations to other apps.

Slack integrations help startup retain market leadership

The acquisition is Slack’s latest attempt to expand beyond its traditional base of software engineers and small teams. More than 8 million people in 500,000 organizations now use the platform, which was launched in 2013, and 3 million of those users have paid accounts.

With more than 1,500 third-party apps available in its directory, Slack has more outside developers than competitors such as Microsoft Teams and Cisco Webex Teams. The vendor has sought to capitalize on that advantage by making Slack integrations more useful.

Earlier this year, Slack introduced a shortcut that lets users send information from Slack to business platforms like Zendesk and HubSpot. The shortcut could be used, for example, to create a Zendesk ticket asking the IT department for a new desktop monitor.

The automation of workflows, including through chatbots, is becoming increasingly important to enterprise technology buyers, according to Alan Lepofsky, an analyst at Constellation Research, based in Cupertino, Calif.

But it remains to be seen whether the average Slack user with no coding experience will take advantage of the Missions tool to build Slack integrations.

“I believe the hurdle in having regular knowledge workers create them is not skill, but rather even knowing that they can, or that they should,” Lepofsky said.

New Elastifile CEO intensifies startup’s cloud focus

New Elastifile CEO Erwan Menard said he plans to intensify the startup’s focus on scale-out, enterprise-grade file storage for the public cloud, as he tries to fuel the company’s growth phase.

The stronger public cloud emphasis will mean changes to the product strategy that Elastifile initially laid out when emerging from stealth in April 2017. For instance, Elastifile designed its distributed file system to run on flash storage. But, Menard said, Elastifile’s software will be available with spinning HDDs and SSDs in public clouds, although on-premises deployments will continue to require flash.

Prior to joining Elastifile, Menard was president and COO at object storage vendor Scality. He previously held the same positions at DataDirect Networks, a storage vendor that caters to high-performance computing. Menard also served as vice president and general manager of Hewlett-Packard’s communications and media solutions business unit and in various leadership roles at Alcatel-Lucent.

The newly appointed Elastifile CEO recently replaced founder Amir Aharoni, who remains with the startup as chairman. Aharoni was unable to relocate from Israel to the United States, “where we want the growth to be led from,” Menard said as part of this Q&A. Elastifile’s sales and marketing office is located in Santa Clara, Calif., and its research and development arm is in Herzliya, Israel.

What are your primary areas of focus for the next year and beyond?

Erwan Menard: We were born upon the idea that file storage is here to stay, because a number of workloads in enterprises rely on it, and that file storage should be addressed in a software-defined manner designed for flash. That was the initial DNA of the company, from a product point of view.

Now, if we look at the market, we’re observing a growing demand for enterprise-class file storage in the cloud. If you look at the data that’s going into public clouds, there’s either very cold data for archival or disaster recovery purposes, or there’s hot data in very small quantities for workloads that are compute-centric. But there is a huge piece missing, which is all the data residing on NAS in the data center. Why aren’t those data and associated workloads in the cloud yet? Because there’s no decent enterprise-grade file storage service in public clouds.

At Elastifile, we spent four years developing a modern-age, software-defined file system for flash. And we’re taking that intellectual property and focusing on adding a strong, enterprise-grade file system to Amazon and Google and Azure. It’s two clicks on Google Launcher, which is their marketplace. We automatically provision a scale-out file system. We definitely aim at doing the same thing on the other clouds if customers choose Azure and Amazon. This is going to happen in the next few months.

Elastifile has a flash requirement with on-premises deployments. Is flash a requirement in the public cloud?

Menard: We designed for flash, because silicon is taking over infrastructure. But you can effectively run it on classic disk. In Google terminology, you can run on so-called PDs, [or] persistent disks, which are groups of SSDs at Google Cloud. Or, you can run it on standard PDs, [or] standard persistent disks, which are effectively classic HDDs. We run on both.

The good thing about designing for flash is that we’re able to provide significantly better performance than other solutions out there in the cloud. For example, we are able to provide much better performance than Amazon Elastic File [System] storage. I want to think that’s because we designed for the flash era.

Does the ability to run on HDDs extend to on-premises Elastifile deployments?

Menard: No. The on-prem deployment option is to run on bare-metal SSDs.

What significant features are in the Elastifile 2.7 release?

Menard: We are updating the [Google] Launcher experience. That experience is going to be significantly simpler in the way you install. The people who are touching our products in the data center are typically storage admins. In the cloud, sometimes it’s an application developer who happens to need storage, or someone who is even less technical. And the first impression people have with the product is extremely important in their decision to adopt it or not.

Also part of the package is what we call CloudConnect. It’s a tool that allows you to migrate your data from any NAS in your data center to any cloud. When people are absolutely convinced about the benefits of running stuff in the cloud, they often struggle with moving the data to the cloud. Most of the tools on the market tend to go from one certain type of NAS to one certain type of cloud destination. We’ve done a tool to go from any to any, and that tool is part of the subscription to our product.
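As a toy illustration of the copy job such a migration tool automates, the sketch below walks a mounted NAS export and uploads each file to a Google Cloud Storage bucket with the standard google-cloud-storage client. The mount point and bucket name are placeholders, and this is not CloudConnect itself, which handles any-to-any migrations at much larger scale.

```python
# Toy NAS-to-cloud copy: walk a mounted NAS export and upload each file to a
# cloud bucket. Paths and bucket name are placeholders; not CloudConnect.

import pathlib
from google.cloud import storage  # pip install google-cloud-storage

NAS_MOUNT = pathlib.Path("/mnt/nas-export")   # hypothetical NFS mount point
BUCKET = "my-migration-target"                # hypothetical destination bucket

def migrate(mount: pathlib.Path, bucket_name: str) -> None:
    bucket = storage.Client().bucket(bucket_name)
    for path in mount.rglob("*"):
        if path.is_file():
            blob = bucket.blob(str(path.relative_to(mount)))  # keep layout
            blob.upload_from_filename(str(path))
            print(f"copied {path}")

if __name__ == "__main__":
    migrate(NAS_MOUNT, BUCKET)
```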

Can users buy CloudConnect as a separate product?

Menard: No. Our goal isn’t to become a data-mover company. Our goal is to facilitate adoption of the cloud. The Elastifile software is available as a subscription. And, as part of that, you get CloudConnect.

Can we expect more partnerships, such as Elastifile’s OEM deal with Dell EMC signed last year? Do customers want to pick their own hardware and take a do-it-yourself approach, or do they prefer to buy your storage software bundled with hardware?

Menard: I think people want to buy software only, because that unlocks the value chain and allows them to commoditize the hardware and separate software and hardware from a procurement point of view. I think there’s a market for software only in the data center — do it yourself — that is for sophisticated organizations who decided to continue developing their data center for whatever strategic or regulatory reasons.

That being said, I think the overall trend is effectively slightly different. At the whole market level, the trend is to go to the cloud. The data center is less and less an area where you want to experiment with complicated things. If anything, you want to consume very simple offerings.

So, I think those two trends coexist — sometimes in the same enterprise account. Frankly, our focus is on the cloud, because this is the next frontier. We’re much more involved in conversations around lifting and shifting stuff to the cloud.

Do people want to move everything to the cloud, or do you think the hybrid model will win out?

Menard: I’m not comfortable with the word ‘hybrid,’ because I’m not sure people are clear on what it means. If hybrid means I have a full stack — application, infrastructure — that’s delivering a certain business outcome in the data center, and I want to replicate that in the cloud, that scenario does exist.

We have a customer in common with Google, called eSilicon. They are doing chipset design. They’ve augmented the capacity of their data center on a per-project basis. They don’t size for the peak. They size for a lower load. And they run the peak activities in the cloud. They did it with us because they didn’t need to modify their application at all when running it in the cloud. That’s a bursting scenario. I run peak activities in the cloud and continue running baseline activities in the data center.

Another scenario we see happening is people who are lifting and shifting an entire workload to the cloud. And that creates a period of time where both workloads are in the data center and in the cloud — the target being to run everything in the cloud. If we want to call that hybrid, then hybrid does exist.

Do you think you may have customers that run your software only in the cloud?

Menard: Absolutely. Four years ago, when we were all focusing on the software-defined data center, we were all undersizing the speed at which workloads could move to the cloud.

Is that why you plan to focus less on OEM partnerships and more on getting your software to work better with more clouds?

Menard: Absolutely.

Are customers moving their applications to the public cloud? Or, are they just moving their data and leaving the applications running on premises?

Menard: I think the only case where it makes sense to move the data without the application is when you’re looking at archiving or disaster recovery. The object stores of public clouds do a great job at that. When you talk about hot data, having an application running in the data center and tapping into a data pool in the cloud may look great on a slide, but I don’t think it makes economic sense.

Which vendors do you go up against in competitive scenarios?

Menard: In the cloud right now, the de facto standard — but it’s a fairly low one — is Amazon EFS [Elastic File System]. Another option is, of course, the status quo: using the same vendor you’ve been using for decades in the data center and trying to make that work in the cloud. We’ve seen announcements by the likes of NetApp in that regard. While it’s probably a good defensive play, it’s very hard with products designed many years ago for the data center to truly take advantage of the cloud. It’s going to come with a level of complexity and cost that’s probably not viable in the long run.

Cisco to merge Viptela, DNA Center for campus networking

ORLANDO, Fla. — Cisco plans to merge its Viptela SD-WAN management software into DNA Center over the next 18 months, providing customers with a single view of their LAN, WAN and campus networks.

During interviews this week at the Cisco Live conference, company executives said the integration would take place after Cisco builds a cloud-based version of DNA Center for campus networking. Companies would then have the option of accessing DNA Center as a service from Cisco or a managed service provider. DNA Center is a centralized software console for managing campus networks built on top of Cisco’s Catalyst 9000 switches.

“At that point, it may make logical sense to bring the two solutions together,” said Scott Harrell, general manager of Cisco’s enterprise networking business.

Waiting for a cloud-based version of DNA Center makes sense, because Viptela’s management application, vManage, is an online service. In a separate interview, Kiran Ghodgaonkar, senior marketing manager for Cisco’s enterprise products, said integrating vManage into DNA Center would occur over the next 12 to 18 months.

Merging the two products will tie the Viptela SD-WAN into other technologies wrapped into DNA Center, such as SD-Access, which lets engineers set access policies that follow employees wherever and however they want to enter the corporate network, Ghodgaonkar said. The SD-Access integration is essential, because Viptela routes traffic to and from business applications running on SaaS and IaaS platforms.

One view of LAN, WAN and campus networking

Overall, merging Viptela technology into DNA Center would simplify network management by treating the LAN, WAN and campus networking as a “single entity,” Ghodgaonkar said. Cisco wants to make SD-WAN management part of a single workflow within DNA Center.

Until then, development of Viptela’s SD-WAN and vManage products would continue “full-bore,” Harrell said. Slowing down the current pace of upgrades would risk falling behind rivals adding security, analytics, load balancing and other features to their software.

“Right now, we want to be able to iterate and make innovations as fast as possible,” Harrell said.

Enhancements planned for Viptela include making the 4000 Series Integrated Services Routers for the branch manageable through vManage, Harrell said. “That’ll be this summer.”

To make that happen, Viptela would run as a software image on ISR, Ghodgaonkar said. Cisco plans to release the image as a software upgrade for the router starting in July.

Cisco customers currently use the ISR to run the vendor's legacy SD-WAN product, Intelligent WAN (IWAN). IWAN's complexity prevented it from becoming a successful product, so many analysts have predicted Cisco would slowly migrate customers to Viptela.

Since acquiring Viptela a year ago, Cisco has increased sales of the company’s SD-WAN product to more than 800 customers globally, according to Ghodgaonkar. He declined to say how many customers Viptela had when Cisco bought the company.

The global market for SD-WAN, which includes revenue from vendors and managed service providers, will grow by nearly 70% annually through 2021, when it could reach $8 billion, according to IDC.

LiveAction buys Savvius to combine packet monitoring and NPM

LiveAction has acquired Savvius and plans to combine its packet monitoring software with LiveAction’s technology for measuring network performance.

LiveAction announced the acquisition this week, but it did not release financial terms. The vendor said it would use Savvius’ products to broaden LiveAction’s offerings for enterprise networks.

“LiveAction and Savvius will deliver a powerful set of capabilities in a single platform that will simplify our customers’ ability to manage their networks, while preparing for the ever-greater demands of software-defined infrastructure,” Brooks Borcherding, CEO at LiveAction, based in Palo Alto, Calif., said in a statement.

Buying Savvius will make it possible for LiveAction to combine two types of products many network operators typically buy from separate vendors, said Shamus McGillicuddy, an analyst at Enterprise Management Associates, based in Boulder, Colo. Engineers often use a network performance monitor to spot problems and then switch to a packet monitoring product to perform in-depth analyses to pinpoint causes.

“A combined solution can deliver a lot of value,” McGillicuddy said. “Having flow and packet monitoring side by side in one console will be very valuable to LiveAction users.”

The majority of network managers use more than four separate tools to monitor and troubleshoot networks, McGillicuddy said. “This means that they spend a lot of time going from one tool to the next, trying to piece together answers.”

Flow monitoring — the core feature in LiveAction — refers to tools that tap into the NetFlow data collection component built into routers and switches from Cisco and other manufacturers. The software uses the data to determine packet loss, delay and round-trip time, while also showing network administrators how well the network is delivering applications and services.
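A toy version of that flow analysis appears below: simplified NetFlow-style records are aggregated per application to derive packet loss and average round-trip time. The record fields are invented for the example; real NetFlow/IPFIX exports and LiveAction's analytics carry far more detail.

```python
# Toy illustration of flow-based monitoring: aggregate simplified
# NetFlow-style records per application and derive loss and RTT figures.

from collections import defaultdict

records = [
    # (app, packets_sent, packets_received, rtt_ms)
    ("voip", 1000,  992, 34.0),
    ("voip", 1200, 1199, 31.5),
    ("crm",   500,  500, 88.0),
]

def summarize(flows):
    stats = defaultdict(lambda: {"sent": 0, "recv": 0, "rtt": []})
    for app, sent, recv, rtt in flows:
        s = stats[app]
        s["sent"] += sent
        s["recv"] += recv
        s["rtt"].append(rtt)
    for app, s in stats.items():
        loss_pct = 100.0 * (s["sent"] - s["recv"]) / s["sent"]
        avg_rtt = sum(s["rtt"]) / len(s["rtt"])
        print(f"{app}: loss={loss_pct:.2f}%  avg RTT={avg_rtt:.1f} ms")

if __name__ == "__main__":
    summarize(records)
```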

LiveAction is known for premium pricing

LiveAction analyzes NetFlow records and also sells a module called LiveSensor, which provides packet analysis and performance metrics for network components that lack NetFlow data collection. In July, LiveAction launched machine-learning-driven analytics and expanded its device support to 107 networking vendors.

The company’s annual revenue from its network performance monitoring products is between $11 million and $25 million, Gartner reported in its February Magic Quadrant report on the NPM market. The report pointed out that LiveAction mostly focuses on Cisco infrastructure and “is frequently cited by end users as offering a premium-priced solution.”

Formerly called WildPackets, Savvius, based in Walnut Creek, Calif., had embarked on a channel turnaround after being primarily a direct seller for more than 25 years. The vendor had set a goal of generating up to 95% of its revenue from channel partners by the end of the third quarter of 2018. LiveAction sales are mostly through channel partners.

Companies use Savvius packet monitoring for more than break-fix scenarios, according to the company. Its technology is also used to bolster security investigations. As a result, the company has linked its products to security offerings from Cisco, Fortinet and Palo Alto Networks.

Apple plans to disable Facebook web tracking capabilities

Apple plans to disable some Facebook web tracking capabilities in the next version of iOS and Mac operating systems.

At the Apple Worldwide Developers Conference (WWDC), Craig Federighi, the company's senior vice president of software engineering, explained the new antitracking features that will be rolled out in the next iteration of Apple's Safari web browser. The features are meant to prevent Facebook and other companies from collecting user data automatically.

Specifically, Federighi called out the “Like” and “Share” buttons that appear on countless websites. To use either of those buttons, or to leave a comment in the comments section, the user has to be logged into Facebook. But even if the user never clicks the buttons, they can still track that person simply because the webpage loaded them.

“We’ve all seen these like buttons and share buttons,” Federighi said on stage at WWDC. “Well, it turns out these can be used to track you, whether you click on them or not. So this year, we’re shutting that down.”

With the Facebook web tracking features disabled, Safari users will see a pop-up on sites with the Facebook buttons that will ask if they want to allow ‘facebook.com’ — or any other site with web trackers enabled — to use cookies and website data. Users will be able to opt out of tracking and keep their browsing activity private. Safari will change how it loads websites so that it requires users to consent to their data being tracked.
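The sketch below shows, in miniature, why an embedded button can track a visit without a click: the browser's request for the button carries the social network's own cookie plus the referring page, which is enough for the third party to log that this user viewed this page. The endpoint and response are invented for illustration and are not Facebook's actual widget code; Safari's new prompt interrupts exactly this kind of silent request.

```python
# Minimal sketch of a third-party "button" server: the request for the button
# artwork arrives with the tracker's own cookie (who) and the Referer (where),
# which is all the tracking requires. Purely illustrative.

from http.server import BaseHTTPRequestHandler, HTTPServer

class ButtonTracker(BaseHTTPRequestHandler):
    def do_GET(self):
        user_cookie = self.headers.get("Cookie", "")    # identifies logged-in user
        visited_page = self.headers.get("Referer", "")  # page embedding the button
        print(f"track: cookie={user_cookie!r} visited={visited_page!r}")
        self.send_response(200)
        self.send_header("Content-Type", "image/svg+xml")
        self.end_headers()
        self.wfile.write(b'<svg xmlns="http://www.w3.org/2000/svg"/>')  # the button artwork

if __name__ == "__main__":
    HTTPServer(("127.0.0.1", 8080), ButtonTracker).serve_forever()
```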

Facebook web tracking was called out specifically by Federighi, but Google has similar tracking abilities and will also be affected. Both Facebook and Google use web tracking to deliver targeted ads to users and collect data.

In macOS Mojave, Apple will also disable what it calls “fingerprinting” by data companies. These companies collect details about a device's configuration, such as the fonts it has installed and the plug-ins that are enabled, to create a unique device profile and then use that profile to track the device from site to site.

“With Mojave, we’re making it much harder for trackers to create a unique fingerprint,” Federighi said. “We’re presenting webpages with only a simplified configuration system. We show them only built-in fonts. And legacy plug-ins are no longer supported, so those can’t contribute to a fingerprint. And as a result, your Mac will look like everyone else’s Mac, and it will be dramatically more difficult for data companies to uniquely identify your device and track you.”
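The sketch below shows the fingerprinting idea Federighi described: hash a device's reported configuration details, such as installed fonts and plug-ins, into a stable pseudo-identifier. The attribute set is invented for illustration; Mojave's countermeasure works by reporting the same simplified configuration for every Mac, so hashes like this stop being unique.

```python
# Sketch of browser fingerprinting: combine configuration details into a
# stable hash. The attribute set is invented for illustration.

import hashlib
import json

def fingerprint(device: dict) -> str:
    """Hash a device's reported configuration into a pseudo-identifier."""
    canonical = json.dumps(device, sort_keys=True)
    return hashlib.sha256(canonical.encode()).hexdigest()[:16]

device_a = {
    "fonts": ["Helvetica Neue", "Comic Sans MS", "Source Code Pro"],
    "plugins": ["WidevineCdm", "LegacyPDFViewer"],
    "timezone": "Europe/Prague",
    "screen": "2560x1600",
}
device_b = dict(device_a, fonts=["Helvetica Neue"])  # slightly different config

if __name__ == "__main__":
    print(fingerprint(device_a))  # differs from device_b -> trackable
    print(fingerprint(device_b))
```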

These are not the first steps Apple has taken to reduce web tracking. At the 2017 WWDC, the company introduced Intelligent Tracking Prevention, which limited the capabilities of third-party trackers and their use of cookies. However, this is the first time Apple has directly called out and taken steps to prevent tracking by Facebook and Google specifically.

In other news

  • The U.S. Department of Defense (DoD) is looking to purchase and set up a cloud browser for its employees. According to a request for information (RFI) from the Defense Information Systems Agency, the DoD intends to have its 3.1 million employees move to a cloud browser because the department believes it would be more secure to have employees browse the web via a remote server that operates outside the DoD network than to have browsing happen on their own devices. The RFI calls the technique “cloud-based internet isolation,” and it has been gaining interest among enterprises. In 2017, security company Symantec acquired the company Fireglass with the intention of bolstering its browser isolation capabilities.
  • The email and password data of 92 million users of the genealogy website MyHeritage was exposed in a data breach, according to the company. A security researcher found a file named ‘myheritage’ on a private server not connected to MyHeritage that contained the email addresses and hashed passwords of users who had signed up before October 26, 2017, the date of the breach. In a statement, MyHeritage said the hackers don’t have the actual passwords and there was no evidence that any of the information had been used. “We believe the intrusion is limited to the user email addresses. We have no reason to believe that any other MyHeritage systems were compromised,” the company said in a blog post. MyHeritage added that credit card data is stored with third-party providers and that actual DNA and family-related data reside on segregated systems, so they weren’t affected by the breach.
  • The malware VPNFilter targets more devices than previously thought, according to updated research from Cisco Talos. VPNFilter was previously found to be infecting small office and home office routers and network-attached storage devices from several different vendors. Now, the researchers at Cisco Talos believe the malware is targeting more makes and models of those devices, and doing so with additional capabilities. Targeted devices now include models from Asus, D-Link, Huawei, Ubiquiti, Upvel, ZTE, Linksys, MikroTik, Netgear and TP-Link. VPNFilter also now has the ability to deliver exploits to endpoints using a man-in-the-middle attack. “With this new finding, we can confirm that the threat goes beyond what the actor could do on the network device itself, and extends the threat into the networks that a compromised network device supports,” Cisco Talos’ William Largent wrote in the blog post detailing the new findings.