
8×8 X Series combines UC and contact center

Unified-communications-as-a-service vendor 8×8 pushed further into the cloud contact center market this week with the release of X Series, an offering that combines voice, video, collaboration and contact center functions in a single platform.

Combining UC and contact center makes it easier for agents to get in touch with the right people when handling customer queries, said Meghan Keough, vice president of product marketing at 8×8, based in San Jose, Calif. For example, a company could set up shared rooms within a team collaboration app where agents and knowledge workers can chat or video conference.

The 8×8 X Series will also help companies better track customer contacts, because the same back-end infrastructure will handle calls to a local retail store and the customer service line at headquarters, Keough said. 

8×8 highlighted the platform’s ability to federate chats between leading team collaboration apps, such as Slack, Microsoft Teams and Cisco Webex Teams, allowing users of those cloud services to communicate with each other from their respective interfaces.

Technology from 8×8’s 2017 acquisition of Sameroom powers that federation and is available as a stand-alone product. The vendor also released its collaboration platform, 8×8 Team Messaging, in beta this week, with features such as persistent chat rooms, presence and file sharing.
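To illustrate how cross-platform chat federation can work in principle, here is a minimal relay sketch in Python. The `Room` class and its methods are hypothetical stand-ins for illustration only, not Sameroom’s actual API, which is not public.

```python
class Room:
    """Minimal stand-in for one platform's chat room (hypothetical API)."""
    def __init__(self, platform):
        self.platform = platform
        self.messages = []

    def post(self, user, text):
        self.messages.append((user, text))


def federate(source, targets, user, text):
    """Post a message in its source room, then mirror it into every
    federated room, tagging the originating platform."""
    source.post(user, text)
    for room in targets:
        room.post(f"{user} (via {source.platform})", text)


slack, teams = Room("Slack"), Room("Microsoft Teams")
federate(slack, [teams], "agent1", "Customer needs a refund approval")
print(teams.messages)
# [('agent1 (via Slack)', 'Customer needs a refund approval')]
```

A real federation layer would also have to map user identities, threads and rich formatting between platforms, but the core idea is the same: each message is mirrored into every federated room with its origin preserved.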

The vendor is offering several subscription tiers for the 8×8 X Series. The more expensive plans include calling capabilities in 47 countries, as well as AI features, such as speech analytics.

Cloud fuels convergence of UC, contact center in 8×8 X Series

UC and contact center technologies used to live in “parallel universes,” said Jon Arnold, principal analyst of Toronto-based research and analysis firm J Arnold & Associates. But the cloud delivery model has made it easier to combine the platforms, which lets customers use the same over-the-top service for geographically separate office locations.

Many UCaaS vendors have added contact centers to their cloud platforms in recent years. While some, including 8×8, developed or acquired contact center suites, others — such as RingCentral and Fuze — partner with contact-center-as-a-service specialists, like Five9 and Nice InContact.

Legacy vendors are also taking steps to enhance their cloud contact center offerings. Cisco is planning to use the CC-One cloud platform it recently acquired from BroadSoft to target the midmarket, for example. Avaya, meanwhile, bought contact-center-as-a-service provider Spoken Communications earlier this year to fill a gap in its portfolio.

For many businesses, a cloud subscription to the 8×8 X Series will be cheaper than purchasing UC and contact center platforms separately, analysts said. Also, 8×8’s multi-tiered pricing model should appeal to organizations that are looking to transition to the cloud gradually.

8×8 is not the only vendor capable of offering integrated UC and contact center services, Arnold said. But the vendor has done a good job of marketing and packaging its products to make it easy for buyers and channel partners, he said.

“It’s all part of one large integrated family of services, and you can cherry-pick along the way what level is best for you,” Arnold said of the 8×8 X Series. “So, it kind of simplifies the roadmap [to the cloud] for companies.”

Google’s partner ecosystem remains a work in progress

To gauge the success of a cloud vendor, look no further than the company it keeps.

Public cloud providers have a symbiotic relationship with their partner ecosystems — a marriage of marketing and technological convenience that links these massive platforms to thousands of ancillary products. These arrangements helped propel AWS deeper into the enterprise market and helped Microsoft parlay its precloud partnerships into more revenue from its cloud.

For Google, however, these relationships remain a work in progress.

Google Cloud Platform (GCP) lags behind AWS and Azure in a number of areas, and its partner ecosystem’s relative lack of depth and breadth is no exception. This is a particularly acute shortcoming for a company known more for its technology than its rapport with traditional corporate IT, but there are signs that Google has begun to address the problem.

The turning point for GCP, according to industry observers, was the late-2015 hiring of VMware co-founder Diane Greene, who has emphasized success in the enterprise market. Partners and other vendors said it was difficult to coordinate any collaborative efforts with GCP in the past, but that process has slowly improved since Greene took over.

CenturyLink has worked with Google for more than two years to provide networking and application migration services that link to GCP. In the early stages of that relationship, discussions centered on how many petabytes of data GCP could ingest or the size of a Hadoop cluster.

“That would resonate maybe with data scientists working on big data analytics or cutting-edge app developers, but it didn’t resonate with the broader IT department,” said Chris McReynolds, vice president of core network services at CenturyLink.

Google has made great strides in this area, but it must continue to translate its broad set of technologies into products and services that meet customers’ specific needs, he added.

“I don’t think Google can do that alone,” McReynolds said.

Google has scored some victories in this space in the past 18 months, such as prominent deals with Cisco, SAP and Salesforce. It also streamlined its partner application process, added incentives for partners and provided categories to help customers find particular types of partners. But some notable gaps remain in its partner ecosystem, including Oracle and VMware. And its roughly 13,000 partners fall far short of AWS’s roster, which grew by more than 10,000 partners in 2017 alone.

Google also has increased its financial commitment to this effort. The company’s channel and partner team is as much as 10 times the size it was 18 months ago, said Nan Boden, head of global alliances for Google Cloud. That’s a significant leap, but it’s hard to gauge the exact impact, because Google wouldn’t provide details about the actual size of the organization.

Part of that extended outlay is proactive outreach to partners to speed up adoption. Rather than wait for third parties to come to them, Google has invested to train channel providers, so they can get certified and well-versed in the platform. That avoids a scenario where independent software vendors and enterprises wait for the other to jump first.

Clouds and their partners: Can’t have one without the other


By some estimates, a public cloud partner ecosystem will generate as much revenue as the clouds themselves in the coming years, mainly driven by technology or consulting companies. The vibrancy of a given cloud’s marketplace is particularly important to enterprise clients. Corporations want assurances that their third-party software licenses are supported on the platform, or they require assistance to architect and manage their cloud assets. If a cloud lacks the appropriate scaffolding to support either scenario, IT shops may look elsewhere.

IT vendors tend to follow their customers’ lead with public clouds and extend to other platforms only when it becomes worth the investment in software and staff. Most third parties started with AWS and later expanded their support to Microsoft Azure. That shift to Azure several years ago was an early indicator that Azure was firmly established as the No. 2 public cloud on the market.

By contrast, Google’s mantra, “if you build it, they will come,” in the early years of GCP emphasized the company’s technical prowess and spoke directly to developers and data scientists with the message that they could operate just like Google. That grassroots momentum ultimately stalled, and Google remained an afterthought for most large enterprises.

“We watched the early interest in Google, and there wasn’t enough momentum for us to spend time and effort developing a value proposition there,” said Joe Kinsella, CTO and founder at CloudHealth Technologies. “But there’s been a lot of fast-growing momentum lately. We’re kind of being pulled into Google [by our customers].”

CloudHealth, based in Boston, provides cloud management and optimization and has worked with Google for two years. It already has support for AWS and Azure and plans to roll out a GCP service in the coming months. In conversations with corporate executives this past year, Kinsella noticed a curious trend in how the three hyperscale platforms crossed paths within an enterprise.

A given company likely deploys AWS and Azure, but the usage is often disconnected, as different teams work independently. However, a company typically operates with Google’s cloud in concert with AWS, which could give GCP an edge going forward, Kinsella said.

“What most enterprises are realizing is there is a lot more compatibility across stacks, from the migration services that Google offers to compute, database, storage and even their developer tools,” he said. “They’re more apples-to-apples compatible.”

AWS clearly has a broader, more mature set of tools available to its users, but Google has checked off equivalents for the top 10 or so features, which account for 95% of the revenue for these vendors anyway, Kinsella said. He said he sees GCP in the same position AWS was in 2015, when it started to turn the corner with partners and enterprise clients.

Ping adds AI-driven API protection with Elastic Beam acquisition

BOSTON — Ping Identity is moving beyond single sign-on and further into API security with its latest acquisition.

At the Identiverse 2018 conference on Tuesday, the Denver-based identity and access management (IAM) provider announced the acquisition of Elastic Beam, a Redwood City, Calif., cybersecurity startup that uses artificial intelligence to monitor and protect APIs. Terms of the deal were not disclosed.

Ping CEO Andre Durand has previously discussed the importance of API protection as part of the company’s “intelligent identity” strategy. The company, which specializes in IAM services such as single sign-on, had already introduced PingAccess for API management and security.

Elastic Beam, which was founded in 2014, will become part of Ping’s new API protection offering, dubbed PingIntelligence for APIs. Elastic Beam’s API Behavioral Security (ABS) automatically discovers an organization’s APIs and monitors the activity using AI-driven behavioral analysis.

“The moment it detects abnormal activity on an API, it automatically blocks that API,” said Bernard Harguindeguy, founder of Elastic Beam.

Harguindeguy, who joined Ping as its new senior vice president of intelligence, said ABS’ use of AI is ideal for API monitoring and defense, because there are simply too many APIs and too much data around them for human security professionals to effectively track and analyze on their own.

“API security is a very hard problem. You cannot rely on roles and policies and attacker patterns,” he said. “We had to use AI in a very smart way.”
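The behavioral approach Harguindeguy describes can be illustrated with a toy baseline-and-deviation check. The sketch below is a hypothetical simplification using a z-score on per-API request rates; Elastic Beam’s actual AI models are not public and are certainly more sophisticated than a single statistical threshold.

```python
import statistics

def build_baseline(history):
    """Learn a per-API baseline (mean, stdev) from historical request rates."""
    return {
        api: (statistics.mean(rates), statistics.stdev(rates))
        for api, rates in history.items()
    }

def is_abnormal(baseline, api, rate, threshold=3.0):
    """Flag an API's traffic as abnormal when its current request rate
    deviates more than `threshold` standard deviations from its baseline."""
    mean, stdev = baseline[api]
    if stdev == 0:
        return rate != mean
    return abs(rate - mean) / stdev > threshold

# Hypothetical per-minute request counts for one API endpoint.
history = {"/orders": [100, 110, 95, 105, 90]}
baseline = build_baseline(history)
print(is_abnormal(baseline, "/orders", 102))   # normal traffic -> False
print(is_abnormal(baseline, "/orders", 5000))  # sudden burst -> True
```

In a production system, a `True` result would trigger the automatic blocking Harguindeguy describes; the hard part, and the reason for machine learning, is building baselines rich enough to catch subtle abuse rather than only obvious bursts.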

Durand said the explosion of APIs in both cloud services and mobile applications has expanded the attack surface for enterprises and demanded a new approach to managing and securing APIs. While Durand acknowledged the potential for AI systems to make mistakes, he said improving API protection can’t be done without the help of machine learning and AI technology.

“We’re in the early stages of applying AI to the enormity of traffic that we have access to today,” he said. “We want to limit the space and time that users have access to, but there’s no policy that can do that. I don’t think there’s a way to have that breakthrough without machine learning, big data and AI.”

PingIntelligence for APIs is currently in private preview, and it will be generally available in the third quarter of this year.

Microsoft Research Montreal welcomes Fernando Diaz, Principal Researcher and lead of the new Montreal FATE Research Group


Microsoft Research Montreal further bolsters its research force this month, welcoming Fernando Diaz to the Montreal FATE (Fairness, Accountability, Transparency and Ethics in AI) research group as Principal Researcher.

Diaz’s research focuses on the design of information access systems, including search engines, music recommendation services and crisis response platforms; he is particularly interested in understanding and addressing the societal implications of artificial intelligence more generally. Immediately before joining Microsoft Research Montreal, he was Director of Research at Spotify in New York. Before that, he was a senior researcher with Microsoft Research New York City, where he founded the FATE Research Group alongside Kate Crawford and Hanna Wallach. Joining Microsoft Research reunites him with many former FATE collaborators.

The world is beginning to harness the power of AI, machine learning, and data science across many aspects of society. Indeed, these research areas form core components of many Microsoft systems and products.

But these techniques also raise complex ethical and social questions: How can we best use AI to assist users and offer people enhanced insights, while avoiding exposing them to different types of discrimination in health, housing, law enforcement, and employment? How can we balance the need for efficiency and exploration with fairness and sensitivity to users? As we move toward relying on intelligent agents in our everyday lives, how do we ensure that individuals and communities can trust these systems?

The FATE research group at Microsoft studies the complex social implications of AI, machine learning, data science, large-scale experimentation and increasing automation. The aim is to develop computational techniques that are both innovative and ethical, while drawing on the deeper context surrounding these issues from sociology, history and science and technology studies. A relatively new group, FATE is currently working on collaborative research projects that address the need for transparency, accountability and fairness in AI and ML systems. FATE publishes across a variety of disciplines, including machine learning, information retrieval, systems, sociology, political science and science and technology studies.
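One concrete fairness notion studied in this research area is demographic parity: a classifier’s positive-prediction rate should not differ sharply across demographic groups. The sketch below is a generic illustration of that metric, not a method attributed to the FATE group; the group names and predictions are invented.

```python
def positive_rate(predictions):
    """Fraction of predictions that are positive (1)."""
    return sum(predictions) / len(predictions)

def demographic_parity_gap(preds_by_group):
    """Largest difference in positive-prediction rate across groups.
    A gap of 0 means every group receives positive predictions
    at exactly the same rate."""
    rates = [positive_rate(p) for p in preds_by_group.values()]
    return max(rates) - min(rates)

# Hypothetical binary predictions (e.g., loan approvals) for two groups.
preds = {"group_a": [1, 1, 0, 1], "group_b": [0, 1, 0, 0]}
print(demographic_parity_gap(preds))  # 0.75 - 0.25 = 0.5
```

Metrics like this are only a starting point; much of the research the article describes concerns when such formal criteria are appropriate, and what they miss about the social context of a deployed system.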

“I’m thrilled to welcome Fernando back to Microsoft Research. Fernando is an immensely talented leader in information retrieval, machine learning and the new field of FATE,” said Jennifer Chayes, Technical Fellow and Managing Director of Microsoft Research New England, New York City and Montreal labs. “I’m also excited and proud to announce the creation of the Montreal FATE research group. This group will work on how to increase the fairness of data sets and AI algorithms, transparency and interpretability of the output of AI algorithms, accountability of this output in fairness and transparency, and ethical questions on AI and society.”

In addition to an impressive research and academic portfolio (including a PhD and Masters in Computer Science from the University of Massachusetts Amherst), Diaz brings a passion for disseminating his work outside of the research community. He works closely with product teams at Microsoft, focusing on relevant and impactful research. He also has taught graduate level courses at New York University, introducing students to the realities of production systems.

A major draw for Diaz was Microsoft Research’s promise of considerable freedom to work on a wide range of interesting problems. While his research will continue to include fundamental work on information access algorithms, Diaz will also focus on building a multidisciplinary group studying the societal implications of artificial intelligence.

“Increasingly, we are noticing the profound societal implications of integrating artificial intelligence into everyday life. MSR Montreal—and Montreal as a city—has amongst the strongest researchers in artificial intelligence, making it the ideal location to study and understand its societal implications from a technical perspective. At the same time, this research requires a broad, multidisciplinary strength found both in Canada and at Microsoft Research, more generally,” said Diaz.

“Work in FATE is crucial for ensuring that artificial intelligence becomes an essential and positive part of our lives, and Fernando is a leader both in FATE and in connecting FATE to other disciplines,” added Geoff Gordon, Microsoft Research Montreal Lab Director. “I am thrilled about the opportunity to work closely with him on a daily basis.”

Yoshua Bengio, Scientific Director at the Montreal Institute for Learning Algorithms (MILA) also expressed his encouragement. “Ethical and social issues associated with AI are really important and that is why MILA has put it in its mission to contribute to AI for the benefits of all and to collective discussions about the use of AI,” he said. “There already are strong collaborations between MILA and Microsoft Research Montreal and I’m delighted at the perspective of expanding this collaboration with the new FATE group which Fernando Diaz will head. This is clearly a great move for Microsoft as well as for the Montreal AI community.”

Indeed, the Microsoft Research FATE team will continue to expand over the summer with impressive postdoctoral researcher talent, including Luke Stark, who returns to his native Canada following fellowships at the Department of Sociology at Dartmouth College and the Berkman Klein Center for Internet and Society at Harvard University.

The French version of this blog post can be found on the Microsoft News Center Canada.

Dell i7 920, W10, 128GB SSD, 500GB HDD, 6GB RAM, 20″ Monitor

Need a computer for the kids or the office? Look no further as this is a complete system, with everything you need to get you started and so much faster than a standard desktop due to the SSD. For sale as a package only.

A Dell Studio XPS 435mt configured with the following:

Intel Core i7-920 @ 2.67GHz
6GB DDR3 Crucial RAM
ATI Radeon 4550
OCZ Agility 3 128GB SSD
Samsung 500GB SATA HDD
Optiarc SATA CD/DVD Reader/Writer
Wi-Fi Card
Multi Card Reader
MS IntelliMouse
Dell Keyboard


Kaspersky sheds more light on Equation Group malware detection

Kaspersky Lab published a lengthy report that shed further light on its discovery of Equation Group malware and its possession of classified U.S. government materials.

The antivirus company, which has been under intense scrutiny by government officials and lawmakers this year, disclosed that classified materials were transmitted to Kaspersky’s network between September 11, 2014 and November 17, 2014. In a previous explanation, the company said Kaspersky antivirus software detected malware on a computer located in the greater Baltimore area. Kaspersky later discovered a 7zip archive on the computer that had Equation Group malware and other materials with U.S. government classified markings.

Kaspersky’s new investigation details were issued in response to several media reports that claimed Russian state-sponsored hackers used Kaspersky’s antivirus software to identify and locate U.S. government data. The reports claimed that in 2015 an NSA contractor’s system was compromised by Russian hackers using Kaspersky antivirus scans, which led to a massive leak of confidential NSA files and Equation Group malware. The news reports also claimed Israeli intelligence penetrated Kaspersky’s network in 2014 and found classified NSA materials on its network.

The Equation Group was an APT group that was first identified by Kaspersky researchers in 2015 and later linked to the U.S. National Security Agency (NSA) in 2016 following disclosures by the hacking group known as the Shadow Brokers.

New details in Kaspersky’s investigation

Thursday’s report provided new details about the computer with Equation Group malware, which was believed to be the NSA contractor’s system. Kaspersky did not confirm or deny these reports, saying its software anonymizes users’ information and divulging details about the specific user in this case would violate its ethical and privacy standards.

The Kaspersky investigation revealed the suspected NSA contractor’s computer was “compromised by a malicious actor on October 4, 2014” as a result of a backdoor Trojan known as Smoke Loader or Smoke Bot. The compromise occurred during the nearly two-month span, from Sept. 11 to Nov. 17, 2014, in which Kaspersky software was scanning the computer.

Kaspersky said it believes the user turned Kaspersky’s antivirus software off at some point during that time frame in order to install a pirated version of Microsoft Office, which allowed Smoke Loader to activate. The report also noted Smoke Loader was attributed to a Russian hacker in 2011 and was known to be distributed on Russian hacker forums.

Kaspersky said once the classified markings were discovered in the 7zip archive materials, all data except the malware binaries was deleted under order of CEO Eugene Kaspersky. The company also said it “found no indication the information ever left our corporate networks.”

Kaspersky’s report appeared to suggest the threat actors who reportedly found the classified NSA data and Equation Group malware likely did so by hacking the computer directly with Smoke Loader and not, as media reports claimed, by hacking into Kaspersky’s network and abusing the company’s antivirus technology.

The company also said it’s possible the computer had other malware on it that Kaspersky didn’t detect.

“Given that system owner’s potential clearance level, the user could have been a prime target of nation states,” the report stated. “Adding the user’s apparent need for cracked versions of Windows and Office, poor security practices, and improper handling of what appeared to be classified materials, it is possible that the user could have leaked information to many hands. What we are certain about is that any non-malware data that we received based on passive consent of the user was deleted from our storage.”

Thursday’s report followed comments from Jeanette Manfra, assistant secretary for cybersecurity and communications at the U.S. Department of Homeland Security, who told the House Science, Space and Technology Oversight Subcommittee earlier this week that there was no conclusive evidence that Kaspersky software had been exploited to breach government systems.

Policy changes

The report also contained new information about how Kaspersky responded to the 2014 Equation Group malware discovery and the company policy changes that followed.

“The reason we deleted those files and will delete similar ones in the future is two-fold; We don’t need anything other than malware binaries to improve protection of our customers and secondly, because of concerns regarding the handling of potential classified materials,” the report states. “Assuming that the markings were real, such information cannot and will not [be] consumed even to produce detection signatures based on descriptions.”

Kaspersky said that those concerns led to the adoption of a new policy for the company that requires all analysts to “delete any potential classified materials that have been accidentally collected during anti-malware research or received from a third party.”

The report didn’t say whether or not Kaspersky ever notified the NSA or other government agencies about the Equation Group malware it discovered or the classified data contained in the 7zip archive. In a previous statement on the situation, the company stated, “As a routine procedure, Kaspersky Lab has been informing the relevant U.S. government institutions about active APT infections in the USA.” It’s also unclear why, after finding the classified U.S. government files, the company never disclosed Equation Group was connected to the NSA.

Kaspersky has not responded to requests for comment on these questions.

The company responded to media reports that claimed threat actors used Kaspersky antivirus scans to hunt for classified markings.

“We have done a thorough search for keywords and classification markings in our signature databases,” Kaspersky said. “The result was negative: we never created any signatures on known classification markings.”

Kaspersky did, however, acknowledge that a malware analyst created a signature for the word “secret” based on the discovery of the TeamSpy malware in 2013, which used a wildcard string pattern based on several keywords, including “secret.” The company hypothesized that a third party may have either misinterpreted the malware signature or maliciously used it against Kaspersky to spread false allegations.
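To illustrate what a wildcard string pattern of this kind looks like, here is a hypothetical example using Python’s `fnmatch` module. The actual TeamSpy signature is not public; the pattern and filenames below are invented for illustration.

```python
import fnmatch

# Hypothetical pattern in the spirit of the reported TeamSpy-era
# signature: a wildcard match on filenames that combine a keyword
# such as "secret" with an archive extension.
PATTERN = "*secret*.zip"

candidates = [
    "project_secret_plans.zip",
    "holiday_photos.zip",
    "top-secret-report.zip",
]

# fnmatchcase performs a case-sensitive wildcard match, where
# "*" matches any run of characters.
matches = [name for name in candidates if fnmatch.fnmatchcase(name, PATTERN)]
print(matches)  # ['project_secret_plans.zip', 'top-secret-report.zip']
```

The point of contention in the Kaspersky case was precisely that such a keyword-based wildcard, written to catch one malware family’s file-naming habits, could incidentally match files containing classification-related words.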