
MemVerge pushes big memory computing for every app

Startup MemVerge is making its “Big Memory” software available in early access, allowing organizations to test-drive AI applications in fast, scalable main memory.

MemVerge’s Memory Machine software virtualizes underlying dynamic RAM (DRAM) and Intel Optane devices into large persistent memory lakes. It provides data services from memory, including replication, snapshots and tiered storage.

The vendor, based in Milpitas, Calif., also closed institutional funding of $19 million led by Intel Capital, with participation from Cisco Investments, NetApp and SK Hynix. That adds to $24.5 million MemVerge received from a consortium of venture funds at launch.

The arrival of Intel Optane memory devices is moving storage class memory (SCM) from a fringe technology to one showing up in major enterprise storage arrays. Other than in-memory databases, however, most applications are not designed to run efficiently in volatile DRAM. Application code first needs to be rewritten for memory, which does not natively include data services or enable data sharing by multiple servers.

MemVerge Memory Machine at work

Memory Machine software will usher in big memory computing to serve legacy and modern applications and break memory-storage bottlenecks, MemVerge CEO Charles Fan said.


“MemVerge Memory Machine is doing to persistent memory what VMware vSphere did to CPUs,” he said.

Prior to launching MemVerge, Fan spent seven years as head of VMware’s storage business unit. He helped create VMware vSAN hyper-converged software. He previously started file virtualization vendor Rainfinity and sold it to EMC in 2005. MemVerge cofounder Shuki Bruck helped start XtremIO, an early all-flash storage array that now is part of Dell EMC’s midrange storage portfolio. Bruck was also a founder of Rainfinity.

MemVerge has revised its product since emerging from stealth in 2019. The startup initially planned to virtualize memory and storage in Intel two-socket servers and scale up to 128 nodes. Fan said the company decided instead to offer Memory Machine solely as a software subscription for x86 servers. Financial services, AI, big data and machine learning are expected use cases.

MemVerge plans to introduce full storage services in 2021. That would allow programming of Intel Optane cards as low-latency block storage and tiering of data to back-end SAS SSDs.

“Our first step is to target in-memory applications and memory-intensive applications that have minimal access to storage. And in this case, we intercept all of the memory services and deliver value through the memory interface for existing applications,” Fan said.

Phase two of Memory Machine’s development will include a software development kit to program modern applications that require “zero I/O persistence,” Fan said.

The combination of Intel Optane with Memory Machine vastly increases the byte-addressable storage capacity of main memory, said Eric Burgener, a vice president of storage at IT analysis firm IDC.

“This is super interesting for AI, big data analytics and things like that, where you can load a large working set in main memory and run it much faster than using block-addressable NVMe flash storage,” Burgener said.

“As long as you have a bunch of Optane cards and the MemVerge software layer running on the server, you can take any application and run it at memory speeds, without rewrites.”

Memory as storage: Gaining traction?

The MemVerge release underscores a flurry of new activity surrounding the use of persistent memory for disaggregated compute and storage.

Startup Alluxio in April reached a strategic partnership with Intel to implement its data orchestration software with Intel Optane cards in Intel Xeon Scalable-powered servers. The combination allows disaggregated cloud storage to efficiently use file system semantics, as well as tap into DRAM or SSD media as buffer or page caches, said Alluxio CEO Haoyuan Li.

Meanwhile, semiconductor maker Micron Technology — which partnered with Intel to initially develop the 3D XPoint media used in Optane devices — recently introduced an open source object storage engine geared for flash and persistent memory. Micron said Red Hat is among the partners helping to fine-tune Heterogeneous Memory Storage Engine for upstream inclusion in the Linux kernel.


New AI tools in the works for ThoughtSpot analytics platform

The ThoughtSpot analytics platform has been available for only six years, but since 2014 the vendor has quickly gained a reputation as an innovator in the field of business intelligence software.

ThoughtSpot, founded in 2012 and based in Sunnyvale, Calif., was an early adopter of augmented intelligence and machine learning capabilities, and even as other BI vendors have begun to infuse their products with AI and machine learning, the ThoughtSpot analytics platform has continued to push the pace of innovation.

With its rapid rise, ThoughtSpot attracted plenty of funding, and an initial public offering seemed like the next logical step.

Now, however, ThoughtSpot is facing the same uncertainty as most enterprises as COVID-19 threatens not only people’s health around the world, but also organizations’ ability to effectively go about their business.

In a recent interview, ThoughtSpot CEO Sudheesh Nair discussed all things ThoughtSpot, from the way the coronavirus is affecting the company to the status of an IPO.

In part one of a two-part Q&A, Nair talked about how COVID-19 has changed the firm’s corporate culture in a short time. Here in part two, he discusses upcoming plans for the ThoughtSpot analytics platform and when the vendor might be ready to go public.

One of the main reasons the ThoughtSpot analytics platform has been able to garner respect in a short time is its innovation, particularly with respect to augmented intelligence and machine learning. Along those lines, what is a recent feature ThoughtSpot developed that stands out to you?


Sudheesh Nair: One of the main changes that is happening in the world of data right now is that the source of data is moving to the cloud. To deliver the AI-based, high-speed innovation on data, ThoughtSpot was really counting on running the data in a high-speed in-memory database, which is why ThoughtSpot was mostly focused on on-premises customers. One of the major changes that happened in the last year is that we delivered what we call Embrace. With Embrace we are able to move to the cloud and leave the data in place. This is critical because as data is moving, the cost of running computations will get higher because computing is very expensive in the cloud.

With ThoughtSpot, what we have done is we are able to deliver this on platforms like Snowflake, Amazon Redshift, Google BigQuery and Microsoft Synapse. So now with all four major cloud vendors fully supported, we have the capability to serve all of our customers and leave all of their data in place. This reduces the cost to operate ThoughtSpot — the value we deliver — and the return on investment will be higher. That’s one major change.

Looking ahead, what are some additions to the ThoughtSpot analytics platform customers can expect?

Nair: If you ask people who know ThoughtSpot — and I know there are a lot of people who don’t know ThoughtSpot, and that’s OK — … if you ask them what we do they will say, ‘search and AI.’ It’s important that we continue to augment on that; however, one thing that we’ve found is that in the modern world we don’t want search to be the first thing that you do. What if search became the second thing you do, and the first thing is that what you’ve been looking for comes to you even before you ask?

What if search became the second thing you do, and the first thing is that what you’ve been looking for comes to you even before you ask?
Sudheesh Nair, CEO, ThoughtSpot

Let’s say you’re responsible for sales in Boston, and you told the system you’re interested in figuring out sales in Boston — that’s all you did. Now the system understands what it means to you, and then runs multiple models and comes back to you with questions you’ll be interested in, and most importantly with insights it thinks you need to know — it doesn’t send a bunch of notifications that you never read. We want to make sure that the insights we’re sending to you are so relevant and so appropriate that every single one adds value. If one of them doesn’t add value, we want to know so the system can understand what it was that was not valuable and then adjust its algorithms internally. We believe that the right action and insight should be in front of you, and then search can be the second thing you do prompted by the insight we sent to you.

What tools will be part of the ThoughtSpot analytics platform to deliver these kinds of insights?

Nair: There are two features we are delivering around it. One is called Feed, which is inspired by social media, curating insights, conversations and opinions around facts. Right now social media is all opinion, but imagine a fact-driven social media experience where someone says they had a bad quarter and someone else says it was great and then data shows up so it doesn’t become an opinion based on another opinion. It’s important that it should be tethered to facts. The second one is Monitor, which is the primary feature where the thing you were looking for shows up even before you ask in the format that you like — could be mobile, could be notifications, could be an image.

Those two features are critical innovations for our growth, and we are very focused on delivering them this year.

The last time we spoke we talked about the possibility of ThoughtSpot going public, and you were pretty open in saying that’s something you foresee. About seven months later, where do plans for going public currently stand?

Nair: If you had asked me before COVID-19 I would have had a bit of a different answer, but the big picture hasn’t changed. I still firmly believe that a company like ThoughtSpot will tremendously benefit from going public because our customers are massive customers, and those customers like to spend more with a public company and the trust that comes with it.

Having said that, I talked last time about building a team and predictability, and I feel seven months later that we have built the executive team that can be the best in class when it comes to public companies. But going public also requires being predictable, and we’re getting in that right spot. I think that the next two quarters will be somewhat fluid, which will maybe set us back when it comes to building a plan to take the company public. But that is basically it. I think taken one by one, we have a good product market, we have good business momentum, we have a good team, and we just need to put together the history that is necessary so that the business is predictable and an investor can appreciate it. That’s what we’re focused on. There might be a short-term setback because of what the coronavirus might throw at us, but it’s going to definitely be a couple of more quarters of work.

Does the decline in the stock market related to COVID-19 play into your plans at all?

Nair: It’s absolutely an important event that’s going on and no one knows how it will play out, but when I think about a company’s future I never think about an IPO as a few quarters event. It’s something we want to do, and a couple of quarters here or there is not going to make a major difference. Over the last couple of weeks, we haven’t seen any softness in the demand for ThoughtSpot, but we know that a lot of our customers’ pipelines are in danger from supply impacts from China, so we will wait and see. We need to be very close to our customers right now, helping them through the process, and in that process we will learn and make the necessary course corrections.

Editor’s note: This interview has been edited for clarity and conciseness.


Announcing PowerShell 7.0

Joey Aiello

Today, we’re happy to announce the Generally Available (GA) release of PowerShell 7.0! Before anything else, we’d like to thank our many, many open-source contributors for making this release possible by submitting code, tests, documentation, and issue feedback. PowerShell 7 would not have been possible without your help.

Alongside a slew of new cmdlets/APIs and bug fixes, we’re introducing a number of new features, including the following (a short tour of the new syntax appears just after the list):

  • Pipeline parallelization with ForEach-Object -Parallel
  • New operators:
    • Ternary operator: a ? b : c
    • Pipeline chain operators: || and &&
    • Null coalescing operators: ?? and ??=
  • A simplified and dynamic error view and Get-Error cmdlet for easier investigation of errors
  • A compatibility layer that enables users to import modules in an implicit Windows PowerShell session
  • Automatic new version notifications
  • The ability to invoke DSC resources directly from PowerShell 7 (experimental)
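
To make the list concrete, here is a short, illustrative tour of the new syntax; the paths and values are made up, and each snippet assumes a PowerShell 7 session:

```powershell
# Pipeline parallelization: process items concurrently, up to the throttle limit
1..5 | ForEach-Object -Parallel { "Processing $_"; Start-Sleep -Seconds 1 } -ThrottleLimit 5

# Ternary operator: a compact if/else expression
$status = (Test-Path $PROFILE) ? 'profile exists' : 'no profile'

# Pipeline chain operators: && runs the next command only on success, || only on failure
Get-Date && Write-Host 'succeeded'
Get-Item '/no/such/path' -ErrorAction SilentlyContinue || Write-Host 'fell back'

# Null coalescing: ?? picks the first non-null value; ??= assigns only when the variable is null
$appName = $env:MY_APP_NAME ?? 'default-app'
$cache ??= @{}

# Detailed, structured view of the most recent error
Get-Error
```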

For a more complete list of features and fixes, check out the PowerShell 7.0 release notes.

The shift from PowerShell Core 6.x to 7.0 also marks our move from .NET Core 2.x to 3.1. .NET Core 3.1 brings back a host of .NET Framework APIs (especially on Windows), enabling significantly more backwards compatibility with existing Windows PowerShell modules. This includes many modules on Windows that require GUI functionality like Out-GridView and Show-Command, as well as many role management modules that ship as part of Windows. For more info, check out our module compatibility table showing how you can use the latest, up-to-date modules that work with PowerShell 7.

If you weren’t able to use PowerShell Core 6.x in the past because of module compatibility issues, this might be the first time you get to take advantage of some of the awesome features we already delivered since we started the Core project!

PowerShell 7 is available for Windows, macOS, and Linux. Depending on the version of your OS and preferred package format, there may be multiple installation methods.

If you already know what you’re doing, and you’re just looking for a binary package (whether it’s an MSI, ZIP, RPM, or something else), hop on over to our latest release tag on GitHub.

Additionally, you may want to use one of our many Docker container images. For more information on using those, check out our PowerShell-Docker repo.

PowerShell 7 officially supports the following operating systems on x64, including:

  • Windows 7, 8.1, and 10
  • Windows Server 2008 R2, 2012, 2012 R2, 2016, and 2019
  • macOS 10.13+
  • Red Hat Enterprise Linux (RHEL) / CentOS 7+
  • Fedora 29+
  • Debian 9+
  • Ubuntu 16.04+
  • openSUSE 15+
  • Alpine Linux 3.8+

Additionally, we support ARM32 and ARM64 flavors of Debian and Ubuntu, as well as ARM64 Alpine Linux.

While not officially supported, the community has also provided packages for Arch and Kali Linux.

If you need support for a platform that wasn’t listed here, please file a distribution request on GitHub (though it should be noted that we’re ultimately limited by what’s supported by .NET Core 3.1).

Much like what .NET decided to do with .NET 5, we feel that PowerShell 7 marks the completion of our journey to maximize backwards compatibility with Windows PowerShell. To that end, we consider PowerShell 7 and beyond to be the one, true PowerShell going forward.

PowerShell 7 will still be noted with the edition “Core” in order to differentiate 6.x/7.x from Windows PowerShell, but in general, you will see it denoted as “PowerShell 7” going forward.

For more information on using the Windows PowerShell compatibility layer, check out the Import-Module documentation.
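As a quick, hedged example of that compatibility layer on Windows, you can explicitly proxy a Windows PowerShell-only module into a PowerShell 7 session (the module shown is just an illustration):

```powershell
# Loads the module in a background Windows PowerShell 5.1 session and
# proxies its cmdlets into the current PowerShell 7 session.
Import-Module -Name ScheduledTasks -UseWindowsPowerShell
Get-ScheduledTask | Select-Object -First 3
```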

For those modules still incompatible, we’re working with a number of teams to add native PowerShell 7 support, including Microsoft Graph, Office 365, and more.

Azure Cloud Shell has already been updated to use PowerShell 7, and others like the .NET Core SDK Docker container images and Azure Functions will be updated soon.

PowerShell 7.0 will be supported for approximately three years from December 3, 2019 (the release date of .NET Core 3.1).

You can find more info about PowerShell’s support lifecycle at https://aka.ms/pslifecycle

If you run into any issues, let us know by filing an issue on the main PowerShell repository. For issues related to specific modules (e.g., PSReadLine or PowerShellGet), make sure to file in the appropriate repository.


Apache Kafka version 2.4 improves streaming data performance

Apache Kafka version 2.4 became generally available this week, bringing with it a host of new features and improvements for the widely deployed open source distributed streaming data technology.

The popularity of Kafka has put it at the center of event processing infrastructure, which is used by organizations of all sizes to stream messages and data. Kafka is often used as a technology that brings data into a database or a data lake, where additional processing and analytics occur. Optimizing performance for globally distributed Kafka deployments has long been a challenge, but the new features in Apache Kafka 2.4 could help to further its popularity, with improved performance and lower latency.

“Kafka has become the default for new messaging selection decisions,” Gartner analyst Merv Adrian said. “Legacy message broker choices are in place in some shops, but even then, some people are switching.”

Apache Kafka version 2.4 improves replication features

Adrian added that, from his perspective, the improvements to the MirrorMaker functionality in Kafka 2.4 are valuable additions. MirrorMaker is used to replicate topics between clusters, a key component for both performance and scalability. It has been challenging to handle replication in multi-cluster enterprise environments, he said, and that’s where the commercially useful growth for Kafka is likely to be.

According to Tim Berglund, senior director of developer experience at Confluent, one of the key features in the Apache Kafka 2.4 release is the ability to allow consumers to fetch data from the closest replica. The feature is formally known as Kafka Improvement Proposal (KIP) 392.

Kafka has become the default for new messaging selection decisions. Legacy message broker choices are in place in some shops, but even then, some people are switching.
Merv Adrian, Analyst, Gartner

“For the many organizations that are distributing application functionality across data centers, clouds and between regions and availability zones, the capability of closest-replica fetching makes Kafka better equipped to handle cloud-native deployments,” Berglund said. Confluent is one of the leading commercial backers of Kafka and has its own enterprise platform that makes use of the open source project.
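As a rough sketch of how closest-replica fetching is wired up: the broker opts in with a replica selector, and each consumer declares its location. The configuration keys below come from KIP-392; the client code assumes the confluent-kafka Python package built on librdkafka 1.2 or later, and all names and addresses are illustrative:

```python
# Broker side (server.properties): allow consumers to fetch from the replica
# in their own rack or zone instead of always reading from the partition leader:
#   replica.selector.class=org.apache.kafka.common.replica.RackAwareReplicaSelector

from confluent_kafka import Consumer

consumer = Consumer({
    'bootstrap.servers': 'broker-1:9092',
    'group.id': 'clickstream-readers',
    'client.rack': 'us-east-1a',  # consumer's zone; brokers use it to pick the closest replica
})
consumer.subscribe(['clickstream'])

while True:
    msg = consumer.poll(timeout=1.0)
    if msg is None:
        continue
    if msg.error():
        raise RuntimeError(msg.error())
    print(msg.value())
```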

Another interesting feature in Apache Kafka version 2.4 is a new administrative API for replica reassignment. The feature makes APIs within Kafka more familiar, Berglund explained, and thus easier for the everyday developer to use. It replaces an existing Apache ZooKeeper-based API that had some limitations and complexity.

“It consists of new methods added onto the AdminClient class, which is a popular API that’s likely already in most complex applications,” Berglund said. “The old way required interfacing with Zookeeper directly, which involved more moving parts, imposed burdens on unit and integration testing, and needed a separate, specialized API that solely existed to talk to ZooKeeper.”

What’s coming to Kafka in 2020 

Application architectures and deployment modalities have changed significantly since Kafka’s original release nearly a decade ago, according to Berglund. As such, he expects that Kafka will continue evolving to be more compatible with today’s emerging trends in IoT, machine learning and hybrid cloud.

“In the near term, look out for more KIPs that tackle the substantial task of eliminating ZooKeeper and ones that make Kafka feel more at home in the cloud,” Berglund said. “As the second-most active Apache Software Foundation project, we see it constantly improving, thanks to the passionate community of committers backing it.”


How should organizations approach API-based SIP services?

Many Session Initiation Protocol features are now available through open APIs for a variety of platforms. While voice over IP refers only to voice calls, SIP encompasses the setup and release of all calls, whether they are voice, video or a combination of the two.

Because SIP establishes and tears down call sessions, it brings multiple tools into play. SIP services enable the use of multimedia, VoIP and messaging, and can be incorporated into a website, program or mobile application in many ways.

The APIs available range from application-specific APIs to native programming languages, such as Java or Python, for web-based applications. Some newer interfaces are operating system-specific for Android and iOS. SIP is an open protocol, which makes most features available natively regardless of the SIP vendor. However, the features and implementations for SIP service APIs are specific to the API vendor. 

Some of the more promising features include the ability to create a call during the shopping experience or from the shopping cart at checkout. This enables customer service representatives and customers to view the same product and discuss and highlight features within a browser, creating an enhanced customer shopping experience.
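To make that concrete, a click-to-call integration typically reduces to one backend request to the SIP provider’s API. The sketch below is hypothetical; the endpoint, field names and response shape are invented for illustration, since they differ from vendor to vendor:

```python
import requests

def start_checkout_call(cart_id: str, customer_sip_uri: str) -> str:
    """Hypothetical sketch: ask an API-based SIP service to bridge a
    customer viewing the shopping cart with a customer service agent."""
    resp = requests.post(
        'https://api.sip-provider.example.com/v1/calls',  # invented endpoint
        json={
            'from': customer_sip_uri,
            'to': 'sip:support@shop.example.com',
            'metadata': {'cart_id': cart_id},  # lets the agent pull up the same cart
        },
        timeout=10,
    )
    resp.raise_for_status()
    return resp.json()['call_id']  # invented response field
```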

The type of API will vary based on which offerings you use. Before issuing a request for a quote, issue a request for information (RFI) to learn what kinds of SIP service APIs a vendor has to offer. While this step takes time, it will allow you to determine what is available and what you want to use. You will want to determine the platform or platforms you wish to support. Some APIs may be more compatible with specific platforms, which will require some programming to work with other platforms.

Make sure to address security in your RFI.  Some companies will program your APIs for you. If you don’t have the expertise, or aren’t sure what you’re looking for, then it’s advantageous to meet with some of those companies to learn what security features you need. 


Citrix brings Workspace and micro apps to Google Cloud

The Citrix Workspace platform for Google Cloud is now generally available. In an announcement, Citrix said the move would simplify tasks for IT professionals and users alike by using micro apps and unifying tasks in a single work feed.

The partnership underscores Citrix’s commitment to keep its services agnostic to support its customers’ choice in cloud providers, according to analysts.

Eric Kenney, a senior product manager at Citrix, said IT professionals are, at present, responsible for wrangling a variety of disparate products. These applications may, for example, govern security, file synchronization, file sharing and virtual desktops, and all of them could have different portals and login screens. Citrix Workspace is designed to make it easier to administer a range of end-user computing applications.

“It’s really difficult to manage all of these different vendors and resources,” he said. “With Workspace, IT professionals are able to bring these solutions together, with one partner, to deliver them to users.”

Putting these solutions and the options to manage them in one place helps both desktop administrators and users, Kenney said.

Although Workspace provides a centralized place through which Citrix products, such as Citrix Virtual Apps and Desktops, Citrix Virtual Desktops and Citrix ADC, may be launched, Kenney said the platform goes beyond that. The intent, he said, is to provide a home for whatever application a company wants to deliver to its users, including homegrown and cloud-hosted offerings.

One way Workspace acts to simplify employee workloads is through the use of micro apps, or small programs that can accomplish simple tasks quickly, according to Kenney.

“An analogy we use is the office copier; it has a ton of buttons on it,” he said, noting that, with knowledge of those functions, people can collate, print double-sided copies and perform any number of specialized tasks. Most people, though, only use the big green button. “That’s a way of looking at enterprise applications; you’re using them a lot, but only for a small sliver of their functionality.”

Employees approving an expense report, for example, typically must go into a separate application to review and OK the document. Kenney said that process is less streamlined than it could be and that micro apps can integrate multiple tasks of approving an expense report into one feed, enabling workers to accomplish in seconds what used to take minutes.

“You could review and approve [the report] and never have to leave Workspace,” he said.

Workspace’s new availability also provides Citrix greater integration with Google Cloud services, among them Google’s G Suite, a collection of productivity apps. Kenney said a new cloud service, Citrix Access Control, provides administrators additional control over user actions on Google Drive documents.

For example, if a malware link is inadvertently added to a document, the Access Control settings could ensure the link is opened in an isolated browser that is safely disposed of at the end of a user session. Access Control can also restrict “copy and paste” functionality in certain documents.

Workspace isn’t just for IT

Ulrik Christensen, principal infrastructure engineer at Oncology Venture, said Citrix services, including Workspace, have made things easier for his firm. The drug development company is a global operation with offices and labs in both Denmark and the U.S., and manufacturing operations in India.

“I have four to five people in the U.S., and they’re not even in the same office,” he said, adding that the complexity of supporting the different hardware they use, including Apple machines, Windows machines and Chromebooks, has proven difficult in the past.

Moving to the kind of standardized system offered by Citrix has improved the user experience and lessened the burden on IT, Christensen said.

“It’s a lot easier if something doesn’t work,” he said. “We can help because we know the whole platform… It also made it a lot easier for IT to provide users new applications and updates.”

Security had improved as well, Christensen said. With only one way to access the company’s network, it is at less risk and the firm can be more confident that its data is protected.

Citrix continues to support cloud choice

Andrew Hewitt, an analyst at Forrester Research, said the partnership with Google Cloud makes sense for Citrix, as it bolsters one of the key tenets of its pitch to customers.


“Citrix’s core messaging is around experience, choice and security,” he said. “This announcement sits squarely in its desire to be an agnostic player in the [end-user computing market] that can enable enterprises to pick and choose whatever technologies they want to deploy to their end users.”

Citrix’s core messaging is around experience, choice and security.
Andrew Hewitt, Analyst, Forrester Research

The move, Hewitt said, seems like a logical extension of past partnerships with Google.

“For example, Citrix has full API access to manage Chromebooks; it supports all the management models for Android Enterprise and provides Citrix Receiver for virtualization support on Chromebooks,” he said. “This announcement is just further deepening of the relationship with Google.”


Enterprise Strategy Group senior analyst Mark Bowker said the partnership is good for Google as well.

“Google is trying to make inroads into the enterprise,” he said, noting pushes with Chromebooks and the Chrome browser.

Bowker added, though, that enterprises must still interact with Windows frequently. By working with Citrix, then, Google can provide its users with easier access to Windows-based services.

Citrix recognizes the importance of being able to provide its services on its customers’ cloud of choice, including a recent announcement of deeper ties with AWS. Still, its closest ties are with Microsoft, Bowker said. “The strength of their integration is ultimately with Microsoft, and always has been,” he said.


From search to translation, AI research is improving Microsoft products

The evolution from research to product

It’s one thing for a Microsoft researcher to use all the available bells and whistles, plus Azure’s powerful computing infrastructure, to develop an AI-based machine translation model that can perform as well as a person on a narrow research benchmark with lots of data. It’s quite another to make that model work in a commercial product.

To tackle the human parity challenge, three research teams used deep neural networks and applied other cutting-edge training techniques that mimic the way people might approach a problem to provide more fluent and accurate translations. Those included translating sentences back and forth between English and Chinese and comparing results, as well as repeating the same translation over and over until its quality improved.

“In the beginning, we were not taking into account whether this technology was shippable as a product. We were just asking ourselves if we took everything in the kitchen sink and threw it at the problem, how good could it get?” Menezes said. “So we came up with this research system that was very big, very slow and very expensive just to push the limits of achieving human parity.”

“Since then, our goal has been to figure out how we can bring this level of quality — or as close to this level of quality as possible — into our production API,” Menezes said.

Someone using Microsoft Translator types in a sentence and expects a translation in milliseconds, Menezes said. So the team needed to figure out how to make its big, complicated research model much leaner and faster. But as they were working to shrink the research system algorithmically, they also had to broaden its reach exponentially — not just training it on news articles but on anything from handbooks and recipes to encyclopedia entries.

To accomplish this, the team employed a technique called knowledge distillation, which involves creating a lightweight “student” model that learns from translations generated by the “teacher” model with all the bells and whistles, rather than the massive amounts of raw parallel data that machine translation systems are generally trained on. The goal is to engineer the student model to be much faster and less complex than its teacher, while still retaining most of the quality.

In one example, the team found that the student model could use a simplified decoding algorithm to select the best translated word at each step, rather than the usual method of searching through a huge space of possible translations.
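Here is a minimal sketch of those two ideas, assuming PyTorch and toy logits; the team’s production recipe is not public:

```python
import torch
import torch.nn.functional as F

def distillation_loss(student_logits, teacher_logits, T=2.0):
    """Train the student to match the teacher's output distribution
    (soft targets) rather than learning from raw parallel text alone;
    the temperature T smooths the distribution."""
    return F.kl_div(
        F.log_softmax(student_logits / T, dim=-1),
        F.softmax(teacher_logits / T, dim=-1),
        reduction='batchmean',
    ) * (T * T)

def greedy_decode_step(student_logits):
    """Simplified decoding: pick the single best word at each step
    instead of searching a huge space of candidate translations."""
    return torch.argmax(student_logits, dim=-1)
```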

The researchers also developed a different approach to dual learning, which takes advantage of “round trip” translation checks. For example, if a person learning Japanese wants to check and see if a letter she wrote to an overseas friend is accurate, she might run the letter back through an English translator to see if it makes sense. Machine learning algorithms can also learn from this approach.

In the research model, the team used dual learning to improve the model’s output. In the production model, the team used dual learning to clean the data that the student learned from, essentially throwing out sentence pairs that represented inaccurate or confusing translations, Menezes said. That preserved a lot of the technique’s benefit without requiring as much computing.
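In sketch form, this production-side use of dual learning is a data-cleaning filter; translate_en_zh, translate_zh_en and similarity below are assumed helper functions, not real APIs:

```python
def round_trip_filter(pairs, translate_en_zh, translate_zh_en,
                      similarity, threshold=0.75):
    """Keep a sentence pair only if its round-trip translation stays
    close to the original, a proxy for the pair being clean training data."""
    clean = []
    for english, chinese in pairs:
        back = translate_zh_en(translate_en_zh(english))
        if similarity(back, english) >= threshold:
            clean.append((english, chinese))
    return clean
```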

With lots of trial and error and engineering, the team developed a recipe that allowed the machine translation student model — which is simple enough to operate in a cloud API — to deliver real-time results that are nearly as accurate as the more complex teacher, Menezes said.

Arul Menezes, Microsoft distinguished engineer and founder of Microsoft Translator. Photo by Dan DeLong.

Improving search with multi-task learning

In the rapidly evolving AI landscape, where new language understanding models are constantly introduced and improved upon by others in the research community, Bing’s search experts are always on the hunt for new and promising techniques. Unlike the old days, in which people might type in a keyword and click through a list of links to get to the information they’re looking for, users today increasingly search by asking a question — “How much would the Mona Lisa cost?” or “Which spider bites are dangerous?” — and expect the answer to bubble up to the top.

“This is really about giving the customers the right information and saving them time,” said Rangan Majumder, partner group program manager of search and AI in Bing. “We are expected to do the work on their behalf by picking the most authoritative websites and extracting the parts of the website that actually shows the answer to their question.”

To do this, not only does an AI model have to pick the most trustworthy documents, but it also has to develop an understanding of the content within each document, which requires proficiency in any number of language understanding tasks.

Last June, Microsoft researchers were the first to develop a machine learning model that surpassed the estimate for human performance on the General Language Understanding Evaluation (GLUE) benchmark, which measures mastery of nine different language understanding tasks ranging from sentiment analysis to text similarity and question answering. Their Multi-Task Deep Neural Network (MT-DNN) solution employed both knowledge distillation and multi-task learning, which allows the same model to train on and learn from multiple tasks at once and to apply knowledge gained in one area to others.

Bing’s experts this fall incorporated core principles from that research into their own machine learning model, which they estimate has improved answers in up to 26 percent of all questions sent to Bing in English markets. It also improved caption generation — or the links and descriptions lower down on the page — in 20 percent of those queries. Multi-task deep learning led to some of the largest improvements in Bing question answering and captions, which have traditionally been done independently, by using a single model to perform both.

For instance, the new model can answer the question “How much does the Mona Lisa cost?” with a bolded numerical estimate: $830 million. In the answer below, it first has to know that the word cost is looking for a number, but it also has to understand the context within the answer to pick today’s estimate over the older value of $100 million in 1962. Through multi-task training, the Bing team built a single model that selects the best answer, whether it should trigger and which exact words to bold.

This screenshot of Bing search results illustrates how natural language understanding research is improving the way Bing answers questions like “How much does the Mona Lisa cost?” A new AI model released this fall understands the language and context of the question well enough to distinguish between the two values in the answer — $100 million in 1962 and $830 million in 2018 — and highlight the more recent value in bold. Image by Microsoft.

Earlier this year, Bing engineers open sourced their code to pretrain large language representations on Azure.  Building on that same code, Bing engineers working on Project Turing developed their own neural language representation, a general language understanding model that is pretrained to understand key principles of language and is reusable for other downstream tasks. It masters these by learning how to fill in the blanks when words are removed from sentences, similar to the popular children’s game Mad Libs.

“You take a Wikipedia document, remove a phrase and the model has to learn to predict what phrase should go in the gap only by the words around it,” Majumder said. “And by doing that it’s learning about syntax, semantics and sometimes even knowledge. This approach blows other things out of the water because when you fine-tune it for a specific task, it’s already learned a lot of the basic nuances about language.”
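That fill-in-the-blanks objective can be sketched in a few lines; the masking rate and mask token follow common practice, not necessarily Project Turing’s exact settings:

```python
import random

def make_mad_libs_example(tokens, mask_token='[MASK]', mask_rate=0.15):
    """Hide a fraction of the tokens; the model must predict the hidden
    words from the surrounding context alone."""
    masked, targets = [], []
    for token in tokens:
        if random.random() < mask_rate:
            masked.append(mask_token)
            targets.append(token)   # loss is computed only on these positions
        else:
            masked.append(token)
            targets.append(None)    # ignored by the loss
    return masked, targets

# e.g. ['The', 'Mona', 'Lisa', 'hangs', 'in', 'the', 'Louvre'] might become
#      ['The', 'Mona', '[MASK]', 'hangs', 'in', 'the', 'Louvre']
```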

To teach the pretrained model how to tackle question answering and caption generation, the Bing team applied the multi-task learning approach developed by Microsoft Research to fine tune the model on multiple tasks at once. When a model learns something useful from one task, it can apply those learnings to the other areas, said Jianfeng Gao, partner research manager in the Deep Learning Group at Microsoft Research.

For example, he said, when a person learns to ride a bike, she has to master balance, which is also a useful skill in skiing. Relying on those lessons from bicycling can make it easier and faster to learn how to ski, as compared with someone who hasn’t had that experience, he said.

“In some sense, we’re borrowing from the way human beings work. As you accumulate more and more experience in life, when you face a new task you can draw from all the information you’ve learned in other situations and apply them,” Gao said.
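The basic shape of such a multi-task model, in an illustrative PyTorch sketch (the layer sizes and task heads here are assumptions, not Bing’s actual architecture):

```python
import torch.nn as nn

class MultiTaskModel(nn.Module):
    """One shared text encoder feeding two task-specific heads, so answer
    selection and caption scoring learn from each other's training signal."""
    def __init__(self, encoder, hidden_size=768):
        super().__init__()
        self.encoder = encoder                         # shared language representation
        self.answer_head = nn.Linear(hidden_size, 2)   # start/end scores for an answer span
        self.caption_head = nn.Linear(hidden_size, 1)  # relevance score for a caption passage

    def forward(self, token_embeddings, task):
        hidden = self.encoder(token_embeddings)
        if task == 'question_answering':
            return self.answer_head(hidden)
        return self.caption_head(hidden)
```

Training alternates batches from each task, so improvements learned on one task flow into the shared encoder used by the other.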

Like the Microsoft Translator team, the Bing team also used knowledge distillation to convert their large and complex model into a leaner model that is fast and cost-effective enough to work in a commercial product.

And now, that same AI model working in Microsoft Search in Bing is being used to improve question answering when people search for information within their own company. If an employee types a question like “Can I bring a dog to work?” into the company’s intranet, the new model can recognize that a dog is a pet and pull up the company’s pet policy for that employee — even if the word dog never appears in that text. And it can surface a direct answer to the question.

“Just like we can get answers for Bing searches from the public web, we can use that same model to understand a question you might have sitting at your desk at work and read through your enterprise documents and give you the answer,” Majumder said.

Top image: Microsoft investments in natural language understanding research are improving the way Bing answers search questions like “How much does the Mona Lisa cost?” Image by Musée du Louvre/Wikimedia Commons. 


Jennifer Langston writes about Microsoft research and innovation. Follow her on Twitter.


Wanted – portable hard drives

I have a Toshiba Canvio 3TB available. It’s opened but unused. Was planning to use for archiving purposes but went cloud-based instead.

Also have a 500GB Seagate drive that’s been swapped into a 2TB Seagate portable enclosure. It was an immediate swap into a new laptop, and has been used only as an iTunes library backup so only written to a couple of times.

Both USB3.

£60 for the 3TB + £10 for the 500GB if bought together.


Salesforce Trailhead to roll out live training videos

Salesforce is promoting customer success by rolling out two new Trailhead features that will be available by the end of this year.

Salesforce will introduce live video trainings on Trailhead Live and new features to Trailblazer.me, the online resume feature designed to help job-seekers show off their skills and accomplishments using Trailhead.

Trailblazer.me already features badges and certifications achieved using Trailhead. The new version will also highlight a person’s activity throughout the Salesforce ecosystem, such as contributions to user groups, what apps users download from the Salesforce AppExchange and reviews that users have posted.

Trailblazer.me should help employers that want to be able to quantify whether job applicants have the skills they say they have, said Maribel Lopez, founder and principal analyst at Lopez Research.

“People used to be able to just say, ‘I know Salesforce,’ on their resume,” Lopez said. “I think one of the hardest things for employers is to understand whether anyone they hire is actually qualified in the things they say they are qualified in.”

Trailhead Live brings video instruction

Trailhead Live offers a new way for Salesforce users to learn with additional elements of community. Like other Trailhead courses, Trailhead Live courses are free.

The initial set of courses will include live coding and Salesforce certification preparation for administrators and others. Within two months of launch later this year, Salesforce said it expects Trailhead Live to offer more than 100 live and on-demand training courses. This will also include courses in so-called “soft skills,” such as how to interview for a job and public speaking.

Salesforce plans to roll out live video training on Trailhead Live by year’s end.

Salesforce plans to have a big Trailhead presence at Dreamforce in San Francisco from Nov. 19 to 22, where the new Trailhead features will be on display.

Salesforce is doing this as an acknowledgment that people learn differently, Lopez said.

“There are multiple ways people like to engage,” Lopez said. “It used to be you had a whiteboard and people took notes, but now we’re in a much more visual era and you want to be sure you’re reaching everyone.”

Inspired by Peloton

Salesforce said the design of Trailhead Live was inspired in part by Peloton, the company that offers live and on-demand fitness classes via an internet-connected bicycle.

Seeing how people can engage with others without having to go to a classroom was an inspiration.
Kris Lande, Vice President of Marketing, Salesforce

“We definitely looked at consumer applications like Peloton,” said Kris Lande, vice president of marketing at Salesforce. “Seeing how people can engage with others without having to go to a classroom was an inspiration.”

There is a community aspect to Trailhead Live, as users will be able to see who else is taking the class with them, Lande said. It’s also more personalized, as the instructor verbally welcomes each participant by name.

Like Peloton, which features certified trainers, Trailhead Live will feature experts in different topic areas from the Salesforce community. If you miss a class or need more time to complete different skills tests, each class will also be available online on demand. If there are 15 people taking an hour-long course on how to create Lightning Web Components, for example, the instructor will give a set period of time for users to complete tasks in their own virtual workspace. A user who needs to finish any part of the course for certification can return and learn from an on-demand review of it.

Earlier this year, there were 1.2 million people using the Trailhead platform, according to Salesforce. That number has grown to 1.7 million and is expected to grow to 1.8 million by Dreamforce, with a total of 17 million badges earned since its launch. Trailhead users earn badges each time they show mastery of specific skills.

New Salesforce Trailhead trainings introduced this past year include cybersecurity and Apple iOS.


Windows Virtual Desktop is now generally available worldwide

Today, we’re excited to announce that Windows Virtual Desktop is now generally available worldwide. Windows Virtual Desktop is the only service that delivers simplified management, a multi-session Windows 10 experience, optimizations for Office 365 ProPlus, and support for Windows Server Remote Desktop Services (RDS) desktops and apps. With Windows Virtual Desktop, you can deploy and scale your Windows desktops and apps on Azure in minutes.

Since we announced Windows Virtual Desktop last September, and through the public preview announced in March, thousands of customers have piloted the service and taken advantage of the Windows 10 multi-session capability—validating the importance of this feature as a core part of the service. Customers also represented all major industries and geographies, helping us get feedback from different customer types and locations. As a result, as of today the service is available in all geographies. In addition, the Windows Virtual Desktop client is available across Windows, Android, Mac, iOS, and HTML5.

“Windows Virtual Desktop allows our employees to work in a secure manner wherever they are. Windows Virtual Desktop provides the Windows 10 desktop experience that our employees are familiar with across a variety of devices or web browsers.”
—Jake Hovermale, Chief Technical Officer, BEI Networks

With the end of extended support for Windows 7 coming in January 2020, we also understand some customers need to continue to support Windows 7 legacy applications as they migrate to Windows 10. To support this need, you can use Windows Virtual Desktop to virtualize Windows 7 desktops with free Extended Security Updates (ESU) until January 2023. If you’re in the process of migrating to Windows 10 and need app compatibility assistance, read more about how we can help with the Desktop App Assure program.

To help increase productivity, we invested heavily in the Office experience in a virtualized environment with native improvements, as well as through the acquisition of FSLogix. In July, we made the FSLogix technology available to Microsoft 365, Windows 10 Enterprise, and RDS customers. Today, all FSLogix tools are fully integrated into Windows Virtual Desktop, enabling you to have the smoothest, most performant Office virtualization experience available today.

In addition to the significant architectural improvements for deployment and management, we’re also simplifying app delivery by supporting MSIX-packaged apps that can be dynamically “attached” to a virtual machine instead of installed permanently. This is important because it significantly decreases storage requirements and makes it easier for the admin to manage and update apps, while creating a seamless experience for the user.
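In rough PowerShell terms, app attach amounts to mounting a package disk from a share and registering the app for the signed-in user rather than copying and installing it. The snippet below is only a sketch under assumptions; the paths are invented, and the production flow is handled by Windows Virtual Desktop tooling:

```powershell
# Mount the expanded MSIX package disk from a file share; nothing is copied to the VM.
Mount-DiskImage -ImagePath '\\fileshare\apps\ContosoApp.vhdx' -NoDriveLetter -PassThru

# Register the staged package for the user; removal is effectively just a dismount.
Add-AppxPackage -Register '\\mountpoint\ContosoApp\AppxManifest.xml' -DisableDevelopmentMode
```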

Check out the new video from Scott Manchester, Principal Engineering Lead for Windows Virtual Desktop, where he does a great job of walking you through the app “attach” experience.


Extending Windows Virtual Desktop

We also worked closely with our partner ecosystem to help our customers extend Windows Virtual Desktop and get the most out of existing virtualization investments.

  • Starting today, Citrix can extend Windows Virtual Desktop worldwide, including support for Windows 10 multi-session, Windows 7 with free Extended Security Updates for up to three years, and support for Windows Server 2008 R2 with free Extended Security Updates on Azure.
  • Later this year, VMware Horizon Cloud on Microsoft Azure will extend Windows Virtual Desktop and its benefits, such as Windows 10 Enterprise multi-session and support for Windows 7 with free Extended Security Updates for up to three years. Preview will be available by the end of the calendar year.
  • We also engaged with hardware partners, system integrators (SI), who provide turnkey desktop-as-a-service (DaaS) offerings, and value-added solution providers, who add capabilities such as printing, application layering, assessment, and monitoring on Azure Marketplace. Learn more about Windows Virtual Desktop partners on the documentation page.

General availability of Windows Virtual Desktop is just the beginning. We’ll continue to rapidly innovate and invest in desktop and app virtualization. We look forward to sharing more with you in the coming months. In the meantime, learn more on our product page and get started with Windows Virtual Desktop today.

If you’re a partner and want to learn more about Windows Virtual Desktop, visit the Azure Partner Zone page for Windows Virtual Desktop.
