
Microsoft Dynamics 365 AI going hard after Salesforce

Microsoft and Salesforce are attacking each other again. Microsoft unveiled forthcoming Dynamics 365 AI tools that will beef up sales, marketing and, most of all, service and support, just one day after Salesforce announced Quip Slides, a PowerPoint competitor.

Salesforce appears to be annexing Microsoft’s business-productivity territory, while Microsoft is rolling its forces deeper into Salesforce’s CRM domain by more tightly connecting Teams collaboration with its CRM suite, freshened up with new AI capabilities.

“You’ve got Salesforce announcing Quip Slides, and you’ve got Microsoft doing a whole bunch of integration between Teams and Dynamics … who’s going after whose market?” said Alan Lepofsky, analyst at Constellation Research.

In a media briefing ahead of its Ignite user conference, the tech giant took some direct shots at rival Salesforce in introducing Microsoft Dynamics 365 AI tools that buttress CRM processes. Of particular note was Dynamics 365 AI for Customer Service, which adds out-of-the-box virtual agents.

Assistive AI for contact centers


Virtual agents can take several forms. Two of the most common are chatbots, which converse with customers on behalf of humans, and assistive bots, which prompt human agents with suggested answers for live customer engagements on voice or text channels.

New Microsoft bots, built on Azure Cognitive Services, won’t require the code-intensive development or consultant services that other vendors’ CRM tools do, claimed Alysa Taylor, Microsoft corporate vice president of business applications and global industry. She singled out Salesforce as a CRM competitor in her comments.

“Many vendors offer [virtual agents] in a way that is very cumbersome for organizations to adopt,” Taylor said. “It requires a large services engagement; Salesforce partners with IBM Watson to be able to deliver this.”

Either way, the bots will require training. Microsoft Dynamics 365 AI-powered bots can be trained by call center managers, asserted Navrina Singh, Microsoft AI principal product lead, during a demo.

Microsoft CEO Satya Nadella is taking on Salesforce with new CRM AI tools.

The bots can tap into phone log transcriptions, email and other contact center data stores to shape answers to customer problems and take some of the workload off of overburdened contact center agents, Singh said.

The virtual agent introductions were significant enough that Microsoft brought out CEO Satya Nadella for a cameo with Singh during the briefing.

“The thing that’s most exciting to me,” Nadella said, “… is that [Microsoft] can make every company out there an AI-first company. They already have customers, they already have data. If you can democratize the use of AI tools, every company can harness the power of AI.”

Other Dynamics 365 AI tools for CRM

Sales and marketing staffs get their own Dynamics 365 AI infusion, too.

Dynamics 365 AI for Sales brings Microsoft in line with Salesforce Einstein tools, which use AI to prioritize lead pipelines and manage sales-team performance.

Microsoft Dynamics 365 AI for Market Insights plumbs marketing, social media and other customer engagement data to improve customer relations and “engage in relevant conversations and respond faster to trends,” Taylor wrote in a blog post announcing the new system.

While the Microsoft moves appear effective, industry observers questioned whether Microsoft can make an impression against Salesforce's massive market footprint, even if its tools are easier to use, more economical and more intuitive than Salesforce's.

Lepofsky said he isn’t sure, because of the sheer numbers. The 150,000-strong Dreamforce user conference is at the same time as Ignite, and the latter will likely draw only about a sixth of the Dreamforce crowd. And Salesforce likely won’t be resting on its AI credentials either.

“I think you can speculate that Salesforce will also be talking about AI improvements at Dreamforce, so perhaps it’s not that differentiating for Dynamics,” Lepofsky said.

While Microsoft announced no release date for its AI tools, a preview site will go online this fall, Singh said.

Microsoft announces quarterly dividend increase

Annual shareholders meeting set for Nov. 28, 2018

REDMOND, Wash. — Sept. 18, 2018 — Microsoft Corp. on Tuesday announced that its board of directors declared a quarterly dividend of $0.46 per share, reflecting a 4 cent or 9.5 percent increase over the previous quarter’s dividend. The dividend is payable Dec. 13, 2018, to shareholders of record on Nov. 15, 2018. The ex-dividend date will be Nov. 14, 2018.

In addition, the company announced the date for the 2018 Annual Shareholders Meeting, to be held Nov. 28, 2018. Shareholders at the close of business on Sept. 26, 2018, the record date, will be entitled to vote at the Annual Shareholders Meeting.

Microsoft (Nasdaq “MSFT” @microsoft) enables digital transformation for the era of an intelligent cloud and an intelligent edge. Its mission is to empower every person and every organization on the planet to achieve more.

For more information, financial analysts and investors only:

Investor Relations, Microsoft, (425) 706-4400

For more information, press only:

Microsoft Media Relations, WE Communications, (425) 638-7777, rrt@we-worldwide.com

Note to editors: For more information, news and perspectives from Microsoft, please visit the Microsoft News Center at http://www.microsoft.com/news. Web links, telephone numbers, and titles were correct at time of publication, but may since have changed. Shareholder and financial information is available at http://www.microsoft.com/en-us/investor.

Microsoft seeks broader developer appeal with Azure DevOps

Microsoft has rebranded its primary DevOps platform as Azure DevOps to reach beyond Windows and Visual Studio developers and appeal to those who just want a solid DevOps platform.

Azure DevOps encompasses five services that span the breadth of the development lifecycle. The services aim to help developers plan, build, test, deploy and collaborate to ship software faster and with higher quality. These services include the following:

  • Azure Pipelines is a CI/CD service.
  • Azure Repos offers source code hosting with version control.
  • Azure Boards provides project management with support for Agile development using Kanban boards and bug tracking.
  • Azure Artifacts is a package management service for creating, hosting and sharing packages.
  • Azure Test Plans lets developers define, organize, and run test cases and report any issues through Azure Boards.
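To illustrate how the first of those services is typically wired up, here is a minimal, hypothetical Azure Pipelines configuration, an azure-pipelines.yml file committed to the root of a repository. The trigger branch, agent image and build commands are illustrative assumptions, not details from Microsoft's announcement:

```yaml
# Hypothetical azure-pipelines.yml, for illustration only.
# Builds and tests on every push to master, then publishes the output
# as a pipeline artifact that later release stages can consume.

trigger:
  - master                    # branch name is an assumption

pool:
  vmImage: 'ubuntu-16.04'     # example Microsoft-hosted Linux agent

steps:
  - script: make build        # placeholder build command
    displayName: 'Build'

  - script: make test         # placeholder test command
    displayName: 'Run tests'

  - task: PublishBuildArtifacts@1
    inputs:
      pathToPublish: '$(Build.ArtifactStagingDirectory)'
      artifactName: 'drop'
```

With a file like this in place, a commit to the configured branch queues the pipeline automatically; the published artifact can feed a later deployment stage, and work items in Azure Boards can be linked to the same commits and builds.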

Microsoft customers wanted the company to break up the Visual Studio Team Services (VSTS) platform so they could choose individual services, said Jamie Cool, Microsoft’s program manager for Azure DevOps. By doing so, the company also hopes to attract a wider audience that includes Mac and Linux developers, as well as open source developers in general, who avoid Visual Studio, Microsoft’s flagship development tool set.

Open source software continues to achieve broad acceptance within the software industry. However, many developers who switch to Git for source control don't want to stay with VSTS for everything else. Over the past few years, Microsoft has technically separated some of its developer tool functions.

But the company has struggled to convince developers about Microsoft’s cross-platform capabilities and that they can pick and choose areas from Microsoft versus elsewhere, said Rockford Lhotka, CTO of Magenic, an IT services company in St. Louis Park, Minn.

Rockford Lhotka, CTO of Magenic

“The idea of a single vendor or single platform developer is probably gone at this point,” he said. “A Microsoft developer may use ASP.NET, but must also use JavaScript, Angular and a host of non-Microsoft tools, as well. Similarly, a Java developer may well be building the back-end services to support a Xamarin mobile app.”

Most developers build for a lot of different platforms and use a lot of different development languages and tools. However, the features of Azure DevOps will work for everyone, Lhotka said.

Azure DevOps is Microsoft’s latest embrace of open source, from participating in open source development to integrating tools and languages outside its own ecosystem, said Mike Saccotelli, director of modern apps at SPR, a digital technology consulting firm in Chicago.

In addition to the rebranded Azure DevOps platform, Microsoft also plans to provide free CI/CD technology for any open source project, including unlimited compute on Azure, with the ability to run up to 10 jobs concurrently, Cool said. Microsoft has also made Azure Pipelines the first of the Azure DevOps services to be available on the GitHub Marketplace.

Microsoft shuts down zero-day exploit on September Patch Tuesday

Microsoft shut down a zero-day vulnerability disclosed by a Twitter user in August, along with a denial-of-service flaw, on September Patch Tuesday.

A security researcher identified by the Twitter handle SandboxEscaper shared a zero-day exploit in the Windows task scheduler on Aug. 27. Microsoft issued an advisory after SandboxEscaper uploaded proof-of-concept code on GitHub. The company fixed the ALPC elevation of privilege vulnerability (CVE-2018-8440) with its September Patch Tuesday security updates. A malicious actor could use the exploit to gain elevated privileges in unpatched Windows systems.

“[The attacker] can run arbitrary code in the context of local system, which pretty much means they own the box … that one’s a particularly nasty one,” said Chris Goettl, director of product management at Ivanti, based in South Jordan, Utah.

The vulnerability requires local access to a system, but the public availability of the code increased the risk. One attacker used the code in targeted spam that, if successful, installed a two-stage backdoor on a system.

“Once enough public information gets out, it may only be a very short period of time before an attack could be created,” Goettl said. “Get the Windows OS updates deployed as quickly as possible on this one.”

Microsoft addresses three more public disclosures

Administrators should prioritize patching three more public disclosures highlighted in September Patch Tuesday.

Microsoft resolved a denial-of-service vulnerability (CVE-2018-8409) in ASP.NET Core applications. An attacker could cause a denial of service with a specially crafted request to the application. Microsoft fixed how the framework handles web requests, but developers must also build the update into vulnerable .NET Core and ASP.NET Core applications.

Chris Goettl of Ivanti

A remote code execution vulnerability (CVE-2018-8457) in the Microsoft Scripting Engine opens the door to a phishing attack, where an attacker uses a specially crafted image file to compromise a system and execute arbitrary code. A user could also trigger the attack if they open a specially constructed Office document.

“Phishing is not a true barrier; it’s more of a statistical challenge,” Goettl said. “If I get enough people targeted, somebody’s going to open it.”

This exploit is rated critical for Windows desktop systems using Internet Explorer 11 or Microsoft Edge. Organizations that practice least privilege principles can mitigate the impact of this exploit.

Another critical remote code execution vulnerability in Windows (CVE-2018-8475) allows an attacker to send a specially crafted image file to a user, who would trigger the exploit if they open the file.

September Patch Tuesday issues 17 critical updates

September Patch Tuesday addressed more than 60 vulnerabilities, 17 of them rated critical, with many of the fixes focused on browser and scripting engine flaws.

“Compared to last month, it’s a pretty mild month. The OS and browser updates are definitely in need of attention,” Goettl said.

Microsoft closed two critical remote code execution flaws (CVE-2018-0965 and CVE-2018-8439) in Hyper-V and corrected how the Microsoft hypervisor validates guest operating system user input. On an unpatched system, an attacker could run a specially crafted application on a guest operating system to force the Hyper-V host to execute arbitrary code.

Microsoft also released an advisory (ADV180022) for administrators to protect Windows systems from a denial-of-service vulnerability named "FragmentSmack" (CVE-2018-5391). An attacker can target the IP stack by sending eight-byte IP fragments and withholding the last fragment, triggering full CPU utilization and making systems unresponsive.

Microsoft also released an update to a Microsoft Exchange 2010 remote code execution vulnerability (CVE-2018-8154) first addressed on May Patch Tuesday. The fix corrects the faulty update that could break functionality with Outlook on the web or the Exchange Control Panel. 

“This might catch people by surprise if they are not looking closely at all the CVEs this month,” Goettl said.

Playing to the crowd and other social media mandates with Dr. Nancy Baym – Microsoft Research

Dr. Nancy Baym, Principal Researcher from Microsoft Research

Episode 41, September 12, 2018

Dr. Nancy Baym is a communication scholar, a Principal Researcher in MSR’s Cambridge, Massachusetts, lab, and something of a cyberculture maven. She’s spent nearly three decades studying how people use communication technologies in their everyday relationships and written several books on the subject. The big take away? Communication technologies may have changed drastically over the years, but human communication itself? Not so much.

Today, Dr. Baym shares her insights on a host of topics ranging from the arduous maintenance requirements of social media, to the dialectic tension between connection and privacy, to the funhouse mirror nature of emerging technologies. She also talks about her new book, Playing to the Crowd: Musicians, Audiences and the Intimate Work of Connection, which explores how the internet transformed – for better and worse – the relationship between artists and their fans.



Nancy Baym: It’s not just that it’s work, it’s that it’s work that never, ever ends. Because your phone is in your pocket, right? So, you’re sitting at home on a Sunday morning, having a cup of coffee and even if you don’t do it, there’s always the possibility of, “Oh, I could Tweet this out to my followers right now. I could turn this into an Instagram story.” So, the possibility of converting even your most private, intimate moments into fodder for your work life is always there, now.

Host: You’re listening to the Microsoft Research Podcast, a show that brings you closer to the cutting-edge of technology research and the scientists behind it. I’m your host, Gretchen Huizinga.

Host: Dr. Nancy Baym is a communication scholar, a Principal Researcher in MSR’s Cambridge, Massachusetts, lab, and something of a cyberculture maven. She’s spent nearly three decades studying how people use communication technologies in their everyday relationships and written several books on the subject. The big take away? Communication technologies may have changed drastically over the years, but human communication itself? Not so much.

Today, Dr. Baym shares her insights on a host of topics ranging from the arduous maintenance requirements of social media, to the dialectic tension between connection and privacy, to the funhouse mirror nature of emerging technologies. She also talks about her new book, Playing to the Crowd: Musicians, Audiences and the Intimate Work of Connection, which explores how the internet transformed – for better and worse – the relationship between artists and their fans. That and much more on this episode of the Microsoft Research Podcast.

Host: Nancy Baym, welcome to the podcast.

Nancy Baym: Nice to be here.

Host: So, you’re a principal researcher at the MSR lab in Cambridge, Massachusetts, not to be confused with the one in Cambridge, England. Give our listeners an overview of the work that goes on in New England and of your work in particular. What are the big issues you’re looking at? Why is the work important? Basically, what gets you up in the morning?

Nancy Baym: So, the lab in New England is one of Microsoft’s smaller research labs. We’re very interdisciplinary, so, we have people in my basic area which is social media and social issues around technology from humanistic and social scientific perspectives. And we have that alongside people working on machine learning and artificial intelligence, people working on economics, people working on cryptography, people working on math and complexity theory, people doing algorithmic game theory, and then we also have a bioinformatics and medicine component to this program also. So, we’re really interested in getting people from very different perspectives together and listening to each other and seeing what kinds of new ideas get sparked when you get people from radically different disciplines together in the same environment and you give them long periods of time to get to know one another and get exposed to the kinds of work that they do. So, that’s the lab as a whole. My group is… we call ourselves the Social Media Collective, which is a, sort of, informal name for it. It’s not an official title but it’s sort of an affectionate one. There are three core people here in our New England lab, which would be me, Mary Gray and Tarleton Gillespie, and then we have a postdoc and we have, in the summer, PhD interns, we have a research assistant, and we’re all interested in questions around how people use technologies, the kinds of work that people do through technologies, the kinds of work that technologies create for people, and the ways that that affects them, their identities, their relationships, their communities, societies as a whole.

Host: You know, as you talk about the types of researchers that you have there, I wonder, is New England unique among the labs at Microsoft?

Nancy Baym: I think we are, in that we are more interdisciplinary than many of them. I mean our Redmond lab, obviously, has got people from a huge range of disciplines, but it’s also got a huge number of people, whereas we’re a much smaller group. We’re on one floor of a building and there are, you know, anywhere from twenty to fifty of us, depending on how many visitors are in the lab and how many interns are around or what not, but that’s still a really small fraction of the Redmond group. So, I think anybody in a particular field finds themselves with many fewer colleagues from their own field relative to their colleagues as a whole in this lab. Whereas, I think most of our labs are dominated much more by people from computer science. Obviously, computer science is well-represented here, but we have a number of other fields as well. So, I think that foregrounding of interdisciplinarity is unique to this lab.

Host: That’s great. So, the social science research in the context of social computing and social media, it’s an interesting take on research in general at Microsoft, which is a high-tech company. How do you think the work that you do informs the broader work of Microsoft Research and Microsoft in general?

Nancy Baym: I would like to think that the kinds of work that I do, and that my colleagues are doing, are helping the company, and technology companies in general, think in more sophisticated ways about the ways that the technologies that we create get taken up and get used and with what consequences. I think that people who build technologies, they really want to help people do things. And they’re focused on that mission. And it can be difficult to think about, what are all the ways that that might get taken up besides the way that I imagine it will get taken up, besides the purpose that I’m designing it for? So, in some sense, I think part of our group is here to say, here’s some unexpected things you might not be thinking about. Here’s some consequences, or in the case of my own work, I’d like to think about the ways that technologies are often pushing people toward more connection and more time with others and more engagement and more sharing and more openness. And yet, people have very strong needs for privacy and for distance and for boundaries and what would it mean, for example, to think about how we could design technologies that helped people draw boundaries more efficiently rather than technologies that were pushing them toward openness all the time?

Host: I love that. And I’m going to circle back, in a bit, to some of those issues of designing for dialectic and some of the issues around unintended consequences. But first, I want to talk about a couple books you wrote. Before we talk about your newest book, I want to spend a little time talking about another book you wrote called Personal Connections in the Digital Age. And in it, you challenge conventional wisdom that tends to blame new technologies for what we might call old problems. Talk a little bit about Personal Connections in the Digital Age.

Nancy Baym: That book came out of a course that I had been teaching for, oh gosh, fifteen, sixteen, seventeen years, something like that, about communication and the internet, and one of the things that tends to come up is just what you’re talking about. This idea that people tend to receive new technologies as though this is the first time these things have ever been disrupted. So, part of what that book tries to do is to show how the way that people think and talk about the internet has these very long histories in how people think and talk about other communication technologies that have come before. So, for example, when the telephone was invented, there was a lot of concern that the telephone was going to lead to social disengagement, particularly among women, who would spend all the time talking on the phone and would stop voting. Um… (laughter) which doesn’t sound all that different from some contemporary ways that people talk about phones! Only now it’s the cell phones that are going to cause all that trouble. It’s that, but it’s also questions around things like, how do we present ourselves online? How do we come to understand who other people are online? How does language change when it’s used online? How do we build relationships with other people? How do we maintain relationships with people who we may have met offline? And also, how do communities and social networks form and get maintained through these communication technologies? So, it’s a really broad sweep. I think of that book as sort of the “one stop shop” for everything you need to know about personal connections in the digital age. If you just want to dive in and have a nice little compact introduction to the topic.

Host: Right. There are other researchers looking into these kinds of things as well. And is your work sort of dovetailing with those findings in that area of personal relationships online?

Nancy Baym: Yeah, yeah. There’s quite a bit of work in that field. And I would say that, for the most part, the body of work which I review pretty comprehensively in Personal Connections in the Digital Age tends to show this much more nuanced, balanced, “for every good thing that happens, something bad happens,” and for all of the sort of mythologies about “its destroying children” or “you can’t trust people you meet online,” or “people aren’t their real selves” or even the idea that there’s something called “real life,” which is separate from what happens on the internet, the empirical evidence from research tends to show that, in fact, online interaction is really deeply interwoven with all of our other forms of communication.

Host: I think you used the word “moral panic” which happens when a new technology hits the scene, and we’re all convinced that it’s going to ruin “kids today.” They won’t have manners or boundaries or privacy or self-control, and it’s all technology’s fault. So that’s cool that you have a kind of answer to that in that book. Let’s talk about your new book which is super fascinating: Playing to the Crowd: Musicians, Audiences and the Intimate Work of Connection. Tell us how this book came about and what was your motivation for writing it?

Nancy Baym: So, this book is the result of many years of work, but it came to fruition because I had done some early work about online fan community, particularly soap opera fans, and how they formed community in the early 1990s. And then, at some point, I got really interested in what music fans were doing online and so I started a blog where I was posting about music fans and other kinds of fans and the kinds of audience activities that people were doing online and how that was sort of messing with relationships between cultural producers and audiences. And that led to my being invited to speak at music industry events. And what I was seeing there was a lot of people with expertise saying things like, “The problem is, of course, that people are not buying music anymore, so the solution to this problem is to use social media to connect with your audience because if you can connect with them, and you can engage them, then you can monetize them.” And then I was seeing the musicians ask questions, and the kinds of questions that they were asking seemed very out-of-step with the kind of advice that they were being given. So, they would be asking questions like, do I have to use all of the sites? How do I know which ones to use? So, I got really interested in this question, of sort of, what, from the point of view from these people who were being told that their livelihood depends on creating some kind of new social relationship using these media with audiences, what is this call to connect and engage really about? What does it feel like to live with that? What are the issues it raises? Where did it come from? 
And then this turned into a much larger-scoped project thinking about musicians as a very specific case, but one with tremendous resonance for the ways that so many workers in a huge variety of fields now, including research, feel compelled to maintain some kind of visible, public persona that engages with and courts an audience so that when our next paper comes out, or our next record drops, or our next film is released or our next podcast comes out, the audience is already there and interested and curious and ready for it.

Host: Well let me interject with a question based on what you said earlier. How does that necessarily translate into monetization? I can see it translating into relationship and, you know, followership, but is there any evidence to support the you know…?

Nancy Baym: It’s magic, Gretchen, magic!

Host: OK. I thought so! I knew it!

Nancy Baym: You know, I work with economists and I keep saying, “Guys, let’s look at this. This is such a great research problem.” Is it true, right? Because you will certainly hear from people who work at labels or work in management who will say, “We see that our artists who engage more do better.” But in terms of any large scale “what works for which artists when?” and “does it really work across samples?” is, the million-dollar question that you just asked, is does it actually work? And I don’t know that we know the answer to that question. For some individuals, some of the time, yes. For the masses, reliably, we don’t know.

Host: Well and the other thing is, being told that you need to have this social media presence. It’s work, you know?

Nancy Baym: That’s exactly the point of the book, yeah. And it’s not just that it’s work, it’s that it’s work that never, ever ends. Because your phone is in your pocket, right? So, you’re sitting at home on a Sunday morning, having a cup of coffee, and even if you don’t do it, there’s always the possibility of, “Oh, I could tweet this out to my followers right now. I could turn this into an Instagram story.” So, the, the possibility of converting even your most private, intimate moments into fodder for your work life is always there, now. And the promise is, “Oh, if you get a presence, then magic will happen.” But first of all, it’s a lot of work to even create the presence and then to maintain it, you have to sell your personality now. Not just your stuff. You have to be about who you are now and make that identity accessible and engaging and what not. And yet it’s not totally clear that that’s, in fact, what audiences want. Or if it is what audiences want, which audiences and for which kinds of products?

(music plays)

Host: Well, let’s get back to the book a little bit. In one chapter, there’s a subsection called How Music Fans came to Rule the Internet. So, Nancy, how did music fans come to rule the internet?

Nancy Baym: So, the argument that I make in that chapter is that from the earliest, earliest days of the internet, music fans, and fans in general, were not just using the internet for their fandom, but were people who were also actively involved in creating the internet and creating social computing. So, I don’t want to say that music fans are the only people who were doing this, because they weren’t, but, from the very beginnings of online interaction, in like 1970, you already had the very people who are inventing the concept of a mailing list, at the same time saying, “Hey, we could use one of these to exchange Grateful Dead tickets, ‘cause I have some extra ones and I know there’s some other people in this building who might want them.” So, you have people at Stanford’s Artificial Intelligence laboratory in the very beginning of the 1970s saying, “Hey, we could use this enormous amount of computing power that we’ve got to digitize The Grateful Dead lyrics.” You have community computing projects like Community Memory being launched in the Bay Area putting their first terminal in a record store as a means of bringing together community. And then, from those early, early moments throughout, you see over and over and over again, music fans creating different forms of online community that then end up driving the way that the internet develops, peer-to-peer file sharing being one really clear example of a case where music fans helped to develop a technology to serve their needs, and by virtue of the success of that technology, ended up changing not just the internet, but industries that were organized around distributing cultural materials.

Host: One of the reviewers of Playing to the Crowd, and these reviews tend to be glowing, right? But he said, “It’ll change the way we think about music, technology and people.” So, even if it didn’t change everything about the way we think about music technology and people, what kinds of sort of “ah-ha findings” might people expect to find in the book?

Nancy Baym: I think one of the big ah-has is the extent to which music is a form of communication which has become co-opted, in so many ways, by commercial markets, and alongside that, the ways in which personal relationships and personal communication, have also become co-opted by commercial markets. Think about the ways that communication platforms monetize our everyday, friendly interaction through advertising. And the way that these parallel movements of music and relational communication from purely social activities to social activities that are permeated by commercial markets raises dialectic tensions that people then have to deal with as they’re continually navigating moving between people and events and circumstances and moments in a world that is so infused by technology and where our relationships are infused by technology.

Host: So, you’ve used the word “dialectic” in the context of computer interface design, and talked about the importance of designing for dialectic. Talk about what you mean by that and what kinds of questions arise for a developer or a designer with that mind set?

Nancy Baym: So, “dialectic” is one of the most important theoretical concepts to me when I think about people’s communication and people’s relationships in this project, but, in general, it’s a concept that I come back to over and over and over, and the idea is that we always have competing impulses that are both valid, and which we have to find balance between. So, a very common dialectic in interpersonal relationships is the desire to, on the one hand, be connected to others, and on the other, to be autonomous from others. So, we have that push and pull between “I want us to be part of each other’s lives all the time, and also leave me alone to make my own decisions.” (laughter) So that dialectic tension is not that one is right and one is wrong. It’s that both are valid, and, as some of the theorists I cite on this argue, there are probably infinite dialectic tensions between “I want this, but I also want that,” where that is the opposite, right? And so, if we think about social interaction, instead of it being some sort of linear model where we start at point A with somebody and we move onto B and then C and then D, if we think of it instead as, even as we’re moving from A to B to C, that’s a tightrope. But at any given moment we can be toppling into one side or the other if we’re not balancing them carefully. So, if we think about a lot of the communication technologies that are available to us right now, they are founded, often quite explicitly, on a model of openness and connection and sharing. So, those are really, really valuable positions. But they’re also ends of dialectics that have opposite ends that are also very valid. So, all of these ways in which we’re pushed to be more open, more connected, to share more things, they are actually always in conflict within us with desires to be protective of other people or protective of ourselves, to have some distance from other people, to have autonomy.
And to be able to have boundaries that separate us from others, as well as boundaries that connect us to one another. So, my question for designers is, how could we design in ways that make it easier for people to adjust those balances? In a way, you could sort of think about it as, what if we made the tightrope, you know, thicker so that it were easier for people to balance on, and you didn’t need to be so good at it, to make it work moment-to-moment?

Host: You know, everything you’ve just said makes me think of, you know, say, someone who wants to get involved in entertainment, in some way, and one of the plums of that is being famous, right? And then you find…

Nancy Baym: Until they are.

Host: …Until you are… that you don’t have control over all the attention you get, and so that dialectic of “I want people to notice me/I want people to leave me alone” becomes wildly exacerbated there. But I think, you know, we all see “over-sharers,” as my daughter calls them, on social media. It’s like, keep looking at me all the time. It’s like, too much information. Have some privacy in your life…

Nancy Baym: Well you know, but that’s a great case, because I would say too much information is not actually a property of information, or of the person sending that information, it’s a property of the person receiving that information. Because, in fact, for some, it’s not going to be too much information. For some, it’s going to be exactly the right amount of information. So, I think of the example of, from my point of view, a number of people who are parents of young children post much too much information on social networks. In particular, I’m really, really turned off by hearing about the details of the trivial illnesses that they’re going through at any given moment. You know, I mean if they’ve got a real illness, of course I want to hear about it, but if, you know, they got a fever this week and they’re just feeling a little sick, I don’t really need daily updates on their temperature, for instance. Um… on the other hand, I look at that, and I say, “Oh, too much information.” But then I say, “I’m not the audience for that.” They’ve got 500-600 friends. They probably put that there for grandma and the cousins who actually really do care. And I’m just not the audience. So, it’s not that that’s too much information. It’s that that information wasn’t meant for me. And instead of blaming them for having posted it, maybe I should just look away and move on to the next item in my feed. That’s ok, too. I’m sure that some of the things that I share strike some people as too much information, but then, I’ll tell you what, some of the things that I post that I think of as too much information, those are often the ones that people will later, in other contexts, say, “Oh my gosh, it meant so much to me that you posted about… whatever.” So, you know, we can’t just make these judgements about the content of what other people are producing without understanding the contexts in which it’s being received, and by whom.

Host: That is such a great reminder to us to have grace.

Nancy Baym: Grace for other people, that too, yeah.

Host: You’ve been watching, studying and writing about cyberculture for a long time. Going back a ways, what did you see, or even foresee, when you started doing this research and what if anything has surprised you along the way?

Nancy Baym: Well, it’s a funny thing. I mean, when I started doing this research, it was 1991. And the landscape has changed so much since then, so that the kinds of things that I could get away with being an insightful scholar for saying in 1991 are practically laughable now, because people just didn’t understand, at that time, that these technologies were actually going to be really socially useful. That people were going to use these technologies to present themselves to others, to form relationships, to build communities, that they were going to change the way audiences engaged, that they were going to change politics, that they were going to change so many practices of everyday life. And I think that those of us who were involved in cyberculture early, whether it was as researchers or just participants, could see that what was happening there was going to become something bigger than it was in those early days.

(music plays)

Host: I ask all of the researchers that come on the podcast some version of the question, “Is there anything that keeps you up at night?” To some degree, I think your work addresses that. You know, what ought we to be kept up at night about, and how, how ought we to address it? Is there anything that keeps you up at night, or anything that should keep us up at night that we should be thinking about critically as we’re in this landscape now?

Nancy Baym: Oh gosh, do any of us sleep anymore at all? (laughter) I mean I think what keeps me up nights is thinking, is it still ok to study the personal and the ordinary when it feels like we’re in such extraordinary, tumultuous and frightening times, uh, nationally and globally? And I guess what I keep coming back to, when I’m lying awake at 4 in the morning saying, “Oh, maybe I just need to start studying social movements and give up on this whole interpersonal stuff,” is that then I say to myself, “Wait a minute. The reason that we’re having so much trouble right now, at its heart, is that people are not having grace in their relations with one another,” to go back to your phrase. That what we really, really need right now more than anything is to be reconnected to our capacity for human connection with others. And so, in that sense, then, I kind of put myself to sleep by saying, “OK, there’s nothing more important than actual human connection and respect for one another.” And so that’s what I’m trying to foster in my work. So, I’m just going to call that my part and write a check for some of those other causes I can’t contribute to directly.

Host: I, I love that answer. And that actually leads beautifully into another question which is that your social science work at MSR is unique at industrial research labs. And I would call Microsoft, still, an industrial, you know, situation.

Nancy Baym: Definitely.

Host: So, you get to study unique and challenging research problems.

Nancy Baym: I have the best job in the world.

Host: No, I do, but you got a good one. Because I get to talk to people like you. But what do you think compels a company like Microsoft, perhaps somewhat uniquely, to encourage researchers like you to study and publish the things you do? What’s in it for them?

Nancy Baym: My lab director, Jennifer Chayes, talks about it as being like a portfolio which I think is, is a great way to think about it. So, you have this cast of researchers in your portfolio and each of them is following their own path to satisfying their curiosity and by having some of those people in that portfolio who really understand people, who really understand the way that technologies play out in ordinary people’s everyday lives and lived experiences, there may be moments where that’s exactly the stock you need at that moment. That’s the one that’s inflating and that’s the expertise that you need. So, given that we’re such a huge company, and that we have so many researchers studying so many topics, and that computing is completely infused with the social world now… I mean, if we think about the fact that we’ve shifted to so much cloud and that clouds are inherently social in the sense that it’s not on your private device, you have to trust others to store your data, and so many things are now shared that used to be individualized in computing. So, if computing is infused with the social, then it just doesn’t even really make sense for a tech company to not have researchers who understand the social, and who are studying the social, and who are on hand with that kind of expertise.

Host: As we close, Nancy, what advice would you give to aspiring researchers, maybe talking to your 25-year-old self, who might be interested in entering this field now, which is radically different from where it was when you started looking at it. What, what would you say to people that might be interested in this?

Nancy Baym: I would say, remember that there is well over a hundred years of social theory out there right now, and the fact that we have new communication technologies does not mean that people have started from scratch in their communication, or that we need to start from scratch in making sense of it. I think it’s more important than ever, when we’re thinking about new communication technologies, to understand communication behavior and the way that communication works, because that has not fundamentally transformed. The media through which we communicate have, but the way communication works to build identity, community, relationships, that has not fundamentally, magically, become something different. The same kind of interpersonal dynamics are still at play in many of these things. I think of the internet and communication technologies as being like funhouse mirrors, where some phenomena get made huge and others get made small, so there’s a lot of distortion that goes on. But nothing entirely new is reflected that never existed before. So, it’s really important to understand the precedents for what you’re seeing, both in terms of theory and similar phenomena that might have occurred in earlier incarnations, in order to be able to really understand what you’re seeing in terms of both what is new, but also what’s not new. Because otherwise, what I see a lot in young scholarship is, “Look at this amazing thing people are doing in this platform with this thingy.” And it is really interesting, but it also actually looks a whole lot like what people were doing on this other platform in 1992, which also kind of looks a lot like what people were doing with ‘zines in the 1920s. And if we want to make arguments about what’s new and what’s changing because of these things, it’s so important that we understand what’s not new and what these things are not changing.

(music plays)

Host: Nancy Baym, it’s been an absolute delight talking to you today. I’m so glad you took time to talk to us.

Nancy Baym: Alrighty, bye.

To learn more about Dr. Nancy Baym, and how social science scholars are helping real people understand and navigate the digital world, visit Microsoft.com/research.

How does AD DS differ from Microsoft Azure Active Directory?

While Active Directory Domain Services and Microsoft Azure Active Directory appear similar, they are not interchangeable.

Administrators exploring whether to move to Azure Active Directory for enterprise authentication and authorization should understand how the cloud-based platform differs from the traditional on-premises Active Directory.

Distinguish on-premises AD from Azure AD

Active Directory (AD) is a combination of services to help manage users and systems, including Active Directory Domain Services (AD DS) and Active Directory Federation Services (AD FS). AD DS is the database that provides the directory service, which is essentially the foundation of AD.

AD uses an X.500-based hierarchical framework and traditional tools such as the Domain Name System (DNS) to locate assets, the Lightweight Directory Access Protocol (LDAP) to work with directories both on premises and on the internet, and Kerberos and NT LAN Manager (NTLM) for secure authentication. AD also supports the use of organizational units (OUs) and group policy objects (GPOs) to organize and present assets.
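To make the hierarchical, LDAP-oriented model concrete, the sketch below shows how a client might address objects in an on-premises directory: it converts a DNS domain into an LDAP base distinguished name and builds a search filter for one user account. The domain and account names are hypothetical examples, not values from this article.

```python
# Sketch: how LDAP addressing maps onto AD's hierarchical namespace.
# "corp.example.com" and "jsmith" are hypothetical placeholders.

def domain_to_base_dn(domain: str) -> str:
    """Turn a DNS domain name into an LDAP base distinguished name (DN)."""
    return ",".join(f"DC={label}" for label in domain.split("."))

def user_filter(sam_account_name: str) -> str:
    """Build an LDAP search filter that matches a single user account."""
    return f"(&(objectClass=user)(sAMAccountName={sam_account_name}))"

print(domain_to_base_dn("corp.example.com"))
# DC=corp,DC=example,DC=com
print(user_filter("jsmith"))
# (&(objectClass=user)(sAMAccountName=jsmith))
```

An LDAP client would issue a search against that base DN with that filter; the point is that the protocol exposes the same tree of domains and containers that admins see in AD tools.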

Microsoft Azure Active Directory is a directory service in Microsoft’s cloud that handles identity management across the internet using the HTTP and HTTPS protocols. Azure AD’s flat structure does not use OUs and GPOs, so it cannot reproduce the organizational structure of on-premises AD.

Instead of Kerberos, Azure AD uses authentication and security protocols such as Security Assertion Markup Language (SAML) and Open Authorization (OAuth). In addition, applications query Azure AD through the AD Graph API rather than LDAP.
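By contrast with an LDAP search, a query against Azure AD is an authenticated HTTPS call carrying an OAuth bearer token. The sketch below only builds such a request; it uses the Microsoft Graph endpoint as a stand-in for the Graph API pattern the article mentions, and the token value is a placeholder, so no network call is made.

```python
import urllib.request

# Microsoft Graph endpoint, used here as an illustrative stand-in.
GRAPH_BASE = "https://graph.microsoft.com/v1.0"

def build_graph_request(resource: str, bearer_token: str) -> urllib.request.Request:
    """Prepare an HTTPS request against a Graph-style API, authenticated
    with an OAuth 2.0 bearer token issued by Azure AD (placeholder here)."""
    return urllib.request.Request(
        f"{GRAPH_BASE}/{resource}",
        headers={"Authorization": f"Bearer {bearer_token}"},
    )

req = build_graph_request("users", "<access-token>")
print(req.full_url)  # https://graph.microsoft.com/v1.0/users
```

The structural difference from the previous sketch is the point: identity data is reached through a REST resource path and a token, not a distinguished name and a Kerberos ticket.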

Structural differences between Azure AD and AD DS

Microsoft Azure Active Directory cannot create domains, trees and forests as AD DS can. Instead, Azure AD treats each organization as a tenant that accesses Azure AD through the Azure portal to manage the organization’s users, passwords and permissions.


Organizations that subscribe to a Microsoft cloud service, such as Office 365 or Exchange Online, are Azure AD tenants. Azure AD supports single sign-on to give users access to multiple services after logging in.

Microsoft Azure Active Directory is also distinct from Azure Active Directory Domain Services. Where Azure AD provides fewer features than on-premises AD, Azure AD DS serves as a more full-featured domain controller that supports LDAP, domain join, and Kerberos and NTLM authentication. Azure AD DS is effectively a complete version of AD DS running in the Azure cloud.

When to consider a combination of AD DS and Azure AD

Administrators can use AD DS and Microsoft Azure Active Directory separately or use both for a single AD entity. For example, an application hosted in the cloud could use on-premises AD, but it might suffer from the latency of authentication requests that bounce from Azure to the on-premises AD DS.

Organizations have several options to implement AD in Azure. For example, an organization can build an AD domain in Azure that integrates with the local AD domain via Azure AD Connect, which synchronizes identities between the two environments.

Alternatively, an organization can extend its on-premises AD DS to Azure by running AD DS as a domain controller in an Azure VM. This is a common method for enterprises that have local and Azure resources connected via a virtual private network or dedicated connectivity, such as an ExpressRoute connection.

There are several other ways to combine cloud and on-premises directory services. Admins can create a domain in Azure and join it to the local AD forest. A company can build a separate forest in Azure that the on-premises AD forest trusts. Admins can also deploy AD FS in Azure to extend federated authentication from a local AD DS deployment.

Empowering Diverse Startups: Microsoft joins forces with Backstage Capital and Black & Brown Founders | Blog

I wanted to share our excitement that Microsoft for Startups will be joining forces with Backstage Capital and Black & Brown Founders to help accelerate opportunities for diverse startups. Over the next 18 months, we will be committing over $6M in sponsorship dollars, cloud technology, and support to empower underrepresented founders identified by these two organizations.

Today, less than 10% of all venture capital deals go to women, people of color, and LGBT founders. As Arlan Hamilton, Founder & Managing Partner of Backstage Capital, said to me when we first met, “Many VCs see this as a pipeline problem. We see it as the biggest opportunity in investment.”

At Microsoft we’re also focused on creating much greater diversity within our startup ecosystem. We fundamentally believe great ideas come from anywhere and have repeatedly found that diversity fuels innovation. We share Arlan’s view of the opportunity in front of us, which is also backed by research showing how diverse founding teams outperform the market average.

I was equally inspired after meeting Aniyia Williams, the founder of Black & Brown Founders. When she described their philosophy of supporting Black and Latinx founders, I knew we wanted to get involved and support their efforts. Their approach is to enable entrepreneurs through workshops, community, and regional conferences focused on the foundations of entrepreneurship: profitability, people development, and both business and technical innovation. I also really appreciated how they distinguish among the various communities they serve and designed their program to adapt as their members’ needs evolve.

When we launched Microsoft for Startups in February, we shifted our program to a relentless focus on how we can best be of service to the startup community and assist them on their terms and timelines. We paired Microsoft cloud technology access with the technical and business support our startups were requesting (e.g., go-to-market partnership with access to our enterprise customer base through Microsoft’s worldwide channel and sales force).

As part of these new partnerships with Backstage Capital and Black & Brown Founders, we’ll be offering the following:

  • Serving as the premier technology and business partner of Backstage Capital’s new accelerator program.
  • Sponsoring Black & Brown Founders’ Project NorthStar, a three-day tech conference in Philadelphia that provides connections, education, and opportunities for current and aspiring entrepreneurs and professionals from the Black and Latinx community.
  • Delivering the benefits of the Microsoft for Startups offer to eligible startup members of these organizations. The program provides startups with up to $120,000 in free Azure credits, enterprise-grade technical support and development tools, as well as dedicated resources to prepare startup marketing and sales teams to effectively sell their cloud solutions to enterprise organizations in partnership with Microsoft’s global sales organization and partner ecosystem.
  • Providing continuous training and mentorship to help underestimated entrepreneurs tackle issues such as selling to large enterprises, building a learning organization, designing a partner channel, and architecting durable technical solutions.
  • Providing 1:1 office hours for entrepreneurs to meet with Microsoft experts and tailor discussions to their needs across strategy, technology and business topics.

These new partnerships are core to our company’s mission to empower every person and organization on the planet to achieve more, and they build on several recent investments we’ve made to promote the success of diversity in startups, including our recently announced partnership with The Riveter and the M12 Female Founders Competition.

There is so much potential in these communities, and we are honored to work with Backstage Capital and Black & Brown Founders. Together, we will work to change the makeup of startup communities around the world, fueling the growth of new and diverse innovation.

Putting the cloud under the sea with Ben Cutler – Microsoft Research


Ben Cutler from Microsoft Research. Photo by Maryatt Photography.

Episode 40, September 5, 2018

Data centers have a hard time keeping their cool. Literally. And with more and more data centers coming online all over the world, calls for innovative solutions to “cool the cloud” are getting loud. So, Ben Cutler and the Special Projects team at Microsoft Research decided to try to beat the heat by using one of the best natural venues for cooling off on the planet: the ocean. That led to Project Natick, Microsoft’s prototype plan to deploy a new class of eco-friendly data centers, under water, at scale, anywhere in the world, from decision to power-on, in 90 days. Because, presumably for Special Projects, go big or go home.

In today’s podcast we find out a bit about what else the Special Projects team is up to, and then we hear all about Project Natick and how Ben and his team conceived of, and delivered on, a novel idea to deal with the increasing challenges of keeping data centers cool, safe, green, and, now, dry as well!


Episode Transcript

Ben Cutler: In some sense we’re not really solving new problems. What we really have here is a marriage of these two mature industries. One is the IT industry, which Microsoft understands very well. And then the other is a marine technologies industry. So, we’re really trying to figure out how do we blend these things together in a way that creates something new and beneficial?

(music plays)

Host: You’re listening to the Microsoft Research Podcast, a show that brings you closer to the cutting-edge of technology research and the scientists behind it. I’m your host, Gretchen Huizinga.

Host: Data centers have a hard time keeping their cool. Literally. And with more and more data centers coming online all over the world, calls for innovative solutions to “cool the cloud” are getting loud. So, Ben Cutler and the Special Projects team at Microsoft Research decided to try to beat the heat by using one of the best natural venues for cooling off on the planet: the ocean. That led to Project Natick, Microsoft’s prototype plan to deploy a new class of eco-friendly data centers, under water, at scale, anywhere in the world, from decision to power-on, in 90 days. Because, presumably for Special Projects, go big or go home.

In today’s podcast we find out a bit about what else the Special Projects team is up to, and then we hear all about Project Natick, and how Ben and his team conceived of, and delivered on, a novel idea to deal with the increasing challenges of keeping data centers cool, safe, green, and, now, dry as well! That and much more on this episode of the Microsoft Research Podcast.

Host: Ben Cutler. Welcome to the podcast.

Ben Cutler: Thanks for having me.

Host: You’re a researcher in Special Projects at MSR. Give us a brief description of the work you do. In broad strokes, what gets you up in the morning?

Ben Cutler: Well, so I think Special Projects is a little unusual. Rather than have a group that always does the same thing persistently, it’s more based on this idea of projects. We find some new idea, something, in our case, that we think is materially important to the company, and go off and pursue it. And it’s a little different in that we aren’t limited by the capabilities of the current staff. We’ll actually go out and find partners, whether they be in academia or very often in industry, who can kind of help us grow and stretch in some new direction.

Host: How did Special Projects come about? Has it always been “a thing” within Microsoft Research, or is it a fairly new idea?

Ben Cutler: Special Projects is a relatively new idea. In early 2014, my manager, Norm Whitaker, who’s a managing scientist inside Microsoft Research, was recruited to come here. Norm had spent the last few years of his career at DARPA, the Defense Advanced Research Projects Agency, which has a very long history in the United States; a lot of seminal technology achievements, not just on the defense side, where we see things like stealth, but also on the commercial or consumer side, had their origins in DARPA. And so, we’re trying to bring some of that culture here into Microsoft Research: a willingness to go out and pursue crazy things, and a willingness not just to pursue new types of things, but things in areas that historically we would never have touched as a company, and just be willing to crash into some new thing and see if it has value for us.

Host: So, that seems like a bit of a shift from Microsoft, in general, to go in this direction. What do you think prompted it, within Microsoft Research to say, “Hey let’s do something similar to DARPA here?”

Ben Cutler: I think if you look more broadly at the company, with Satya, we have this very different perspective, right? Which is, not everything is based on what we’ve done before. And a willingness to really go out there and draw in things from outside Microsoft and new ideas and new concepts in ways that we’ve never done, I think, historically as a company. And this is in some sense a manifestation of this idea of, you know, what can we do to enable every person in every organization on the planet to achieve more? And a part of that is to go out there and look at the broader context of things and what kind of things can we do that might be new that might help solve problems for our customers?

Host: You’re working on at least two really cool projects right now, one of which was recently in the news and we’ll talk about that in a minute. But I’m intrigued by the work you’re doing in holoportation. Can you tell us more about that?

Ben Cutler: If you think about what we typically do with a camera, we’re capturing this two-dimensional information. One stage beyond that is what’s called a depth camera, which is, in addition to capturing color information, it captured the distance to each pixel. So now I’m getting a perspective and I can actually see the distance and see, for example, the shape of someone’s face. Holoportation takes that a step further where we’ll have a room that we outfit with, say, several cameras. And from that, now, I can reconstruct the full, 3-D content of the room. So, you can kind of think of this as, I’m building a holodeck. And so now you can imagine I’m doing a video conference, or, you know, something as simple as like Facetime, but rather than just sort of getting that 2-D, planar information, I can actually now wear a headset and be in some immersive space that might be two identical conferences rooms in two different locations and I see my local content, but I also see the remote content as holograms. And then of course we can think of other contexts like virtual environments, where we kind of share across different spaces, people in different locations. Or even, if you will, a broadcast version of this. So, you can imagine someone’s giving a concert. And now I can actually go be at that concert even if I’m not there. Or think about fashion. Imagine going to a fashion show and actually being able to sit in the front row even though I’m not there. Or, everybody gets the front row seats at the World Cup soccer.

Host: Wow. It’s democratizing event attendance.

Ben Cutler: It really is. And you can imagine I’m visiting the Colosseum and a virtual tour guide appears with me as I go through it and can tell me all about that. Or some, you know, awesome event happens at the World Cup again, and I want to actually be on the soccer field where that’s happening right now and be able to sort of review what happened to the action as though I was actually there rather than whatever I’m getting on television.

Host: So, you’re wearing a headset for this though, right?

Ben Cutler: You’d be wearing an AR headset. For some of the broadcast things you can imagine not wearing a headset. It might be I’ve got it on my phone and just by moving my phone around I can kind of change my perspective. So, there’s a bunch of different ways that this might be used. So, it’s this interesting new capture technology. Much as HoloLens is a display, or a viewing technology, this is the other end, capture, and there’s different ways we can kind of consume that content. One might be with a headset, the other might just be on a PC using a mouse to move around much as I would on a video game to change my perspective or just on a cell phone, because today, there’s a relatively small number of these AR/VR headsets but there are billions of cell phones.

Host: Right. Tell me what you’re specifically doing in this project?

Ben Cutler: In the holoportation?

Host: Yeah.

Ben Cutler: So, really what’s going on right now is, when this project first started, to outfit a room to do this sort of a thing might’ve been a couple hundred thousand dollars of cost, and it might be 1 to 3 gigabits of data between sites. So, it’s just not really practical, even at an enterprise level. And so, what we’re working on, with the HoloLens team and other groups inside the company, is to really sort of dramatically bring down that cost. So now you can imagine you’re a grandparent and you want to kind of play with your grandkids who are in some other location in the world. So, this is something that we think, in the next couple years, actually might be at the level where consumers can have access to this technology and use it every day.

Host: This is very much in the research stage, though, right?

Ben Cutler: We have an email address and we hear from people every day, “How do I buy this? How can I get this?” And you know, it’s like, “Hey, here’s our website. It’s just research right now. It’s not available outside the company. But keep an eye on this because maybe that will change in the future.”

Host: Yeah. Yeah, and that is kind of your raison d’être: to bring these impossibles into inevitables in the market. That should be a movie. The Inevitables.

Ben Cutler: I think there’s something similar to that, but anyway…

Host: I think a little, yeah. So just drilling a little bit on the holoportation, what’s really cool I noticed on the website, which is still research, is moving from a room-based hologram, or holoported individual, into mobile holoportation. And you’ve recently done this, at least in prototype, in a car, yes?

Ben Cutler: We have. So, we actually took an SUV. We took out the middle seat. And then we mounted cameras in various locations. Including, actually, the headrests of the first-row passengers. So that if you’re sitting in that back row we could holoport you somewhere. Now this is a little different than, say, that room-to-room scenario. You can imagine, for example, the CEO of our company can’t make a meeting in person, so he’ll take it from the car. And so, the people who are sitting in that conference room will wear an AR headset like a HoloLens. And then Satya would appear in that room as though he’s actually there. And then from Satya’s perspective, he’d wear a VR headset, right? So, he would not be sitting in his car anymore. He would be holoported into that conference room.

(music plays)

Host: Let’s talk about the other big project you’re doing: Project Natick. You basically gave yourself a crazy list of demands and then said, “Hey, let’s see if we can do it!” Tell us about Project Natick. Give us an overview. What it is, how did it come about, where it is now, what does it want to be when it grows up?

Ben Cutler: So, Project Natick is an exploration of manufactured data centers that we place underwater in the ocean. And so, the genesis of this is kind of interesting, because it also shows not just research trying to influence the rest of the company, but that if you’re working elsewhere inside Microsoft, you can influence Microsoft Research. So, in this case, go back to 2013, and a couple employees, Sean James and Todd Rawlings, wrote this paper that said we should put data centers in the ocean, and the core idea was, the ocean is a place where you can get good cooling, and so maybe we should look at that for data centers. Historically, when you look at data centers, the dominant cost, besides the actual computers doing the work, is the air conditioning. And so, we have this ratio in the industry called PUE, or Power Usage Effectiveness. And if you go back a long time ago to data centers, PUEs might be as high as 4 or 5. A PUE of 5 says that, for every watt of power for computers, there’s an additional 4 watts for the air conditioning, which is just kind of this crazy, crazy thing. And so, industry went through this phase where we said, “OK, now we’re going to do this thing called hot aisle/cold aisle. We line up all the computers in a row, and cold air comes in one side and hot air goes out the other.” Now, modern data centers that Microsoft builds have a PUE of about 1.125. And the PUE we see of what we have right now in the water is about 1.07. So, we have cut the cooling cost. But more importantly, we’ve done it in a way that makes the data center much colder. So, we’re about 10 degrees Celsius cooler than land data centers. And we’ve known, going back to the middle of the 20th century, that higher temperatures are a problem for components; in fact, a 10-degree Celsius difference can mean a factor of 2 difference in the life expectancy of equipment. So, we think that this is one way to bring reliability up a lot.
So, this idea of reliability is really a proxy for server longevity and how do we make things last longer? In addition to cooling, there’s other things that we have here. One of which is the atmosphere inside this data center is a dry nitrogen atmosphere. So, there’s no oxygen. And the humidity is low. And we think that helps get rid of corrosion. And then the other thing is, in land data centers, stuff comes in from outside. So, by having this sealed container, safe under the ocean, we hopefully have an environment that will allow servers to last much longer.
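The PUE arithmetic quoted above is easy to sanity-check. A minimal sketch, using only the figures mentioned in the conversation:

```python
def cooling_overhead_watts(pue, it_watts=1.0):
    """Facility overhead (cooling, power distribution) per `it_watts` of IT load.

    PUE = total facility power / IT power, so overhead = (PUE - 1) * IT power.
    """
    return (pue - 1.0) * it_watts

# Figures quoted above: legacy facilities, modern Microsoft land data centers,
# and the Natick vessel currently in the water.
for label, pue in [("legacy", 5.0), ("modern land", 1.125), ("Natick", 1.07)]:
    print(f"{label:>11}: {cooling_overhead_watts(pue):.3f} W overhead per W of IT")
```

A PUE of 5 really does mean 4 extra watts per watt of compute, as described, while the in-water figure of 1.07 implies only 0.07.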

Host: How did data center technology and submarine technology come together so that you could put the cloud under water?

Ben Cutler: Natick is a little bit unusual as a research project because in some sense we’re not really solving new problems. What we really have here is a marriage of these two mature industries. One is the IT industry, which Microsoft understands very well. And then the other is a marine technologies industry. So, we’re really trying to figure out, how do we blend these things together in a way that creates something new and beneficial?

Host: And so, the submarine technology, making something watertight and drawing on the decades that people have done underwater things, how did you bring that together? Did you have a team of naval experts…?

Ben Cutler: So, the first time we did this, we just, sort of, crashed into it, and we, literally, just built this can and we just kind of dropped it in the water, and ok, we can do this, it kind of works. And so, then the second time around, we put out what we call a Request for Information. We’re thinking of doing this thing, and we did this to government and to academia and to industry, just to see who’s interested in playing in this space? What do they think about it? What kind of approaches would they take? And you know, we’re Microsoft. We don’t really know anything about the ocean. We’ve identified a bunch of folks we think do know about it. And on the industry side we really looked at three different groups. We looked to ship builders, we looked to people who were doing renewable energy in the ocean, which we should come back to, and then we looked to the oil and gas services industry. And so, we got their responses and on the basis of that, we then crafted a Request for Proposal to actually go off and do something with us. And that identified what kind of equipment we put inside it, what our requirements were in terms of how we thought that this would work, how cool it had to be, the operating environment that needed to be provided for the servers, and also some more mundane stuff like, when you’re shipping it, what’s the maximum temperature things can get to when it’s like, sitting in the sun on a dock somewhere? And, on the basis of that, we got a couple dozen proposals from four different continents. And so, we chose a partner and then set forward. And so, in part, we were working with the University of Washington Applied Physics Lab, which is one of three centers of excellence for ocean sciences in the United States, along with Woods Hole and Scripps. And so, we leveraged that capability to help us go through the selection process.
And then the company we chose to work with is a company called Naval Group, which is a French company, and among other things, they do naval nuclear submarines, surface ships, but they also do renewable energies. And, in particular, renewable energies in the ocean, so offshore wind, they do tidal energy which is to say, gaining energy from the motion of the tides, as well as something called OTEC which is Ocean Thermal Energy Conversion. So, they have a lot of expertise in renewable energy. Which is very interesting to us. Because another aspect of this that we like is this idea of co-location with offshore renewable energies. So, the idea is, rather than connecting to the grid, I might connect to renewable energies that get placed in the same location where we put this. That’s actually not a new idea for Microsoft. We have data centers that are built near hydroelectric dams or built near windfarms in Texas. So, we like this idea of renewable energy. And so, as we think about this idea of data centers in the ocean, it’s kind of a normal thing, in some sense, that this idea of the renewables would go with us.

Host: You mentioned the groups that you reached out to. Did you have any conversation with environmental groups or how this might impact sea life or the ocean itself?

Ben Cutler: So, we care a lot about that. We like the idea of co-location with the offshore renewables, not just for the sustainability aspects of this, but also for the fact that a lot of those things are going up near large population centers. So, it’s a way to get close to customers. We’re also interested in other aspects of sustainability. And those include things like artificial reefs. We’ve actually filed an application for a patent having to do with this idea of undersea data centers, potentially, as artificial reefs.

Host: So, as you look to maybe, scaling up… Say this thing, in your 5-year experiment, does really well. And you say, “Hey, we’re going to deploy more of these.” Are you looking, then, with the sustainability goggles on, so to speak, for Natick staying green both for customers but also for the environment itself?

Ben Cutler: We are. And I think one thing people should understand too, is you look out at the ocean and it looks like this big, vast open space, but in reality, it’s actually very carefully regulated. So anywhere we go, there are always authorities and rules as to what you can do and how you do them, so there’s that oversight. And there’s also things that we look at directly, ourselves. One of the things that we like about these, is from a recyclability standpoint, it’s a pretty simple structure. Every five years, we bring that thing back to shore, we put a new set of servers in, refresh it, send it back down, and then when we’re all done we bring it back up, we recycle it, and the idea is you leave the seabed as you found it. On the government side, there’s a lot of oversight, and so, the first thing to understand is, typically, like, as I look at the data center that’s there now, the seawater that we eject back into the ocean is about 8/10 of a degree warmer, Celsius, than the water that came in. It’s a very rapid jet, so, it very quickly mixes with the other seawater. And in our case, the first time we did this, a few meters downstream it was a few thousandths of a degree warmer by the time we were that far downstream.

Host: So, it dissipates very quickly.

Ben Cutler: Water… it takes an immense amount of energy to heat it. If you took all of the energy generated by all the data centers in the world and put it into the ocean, you’d raise the temperature a few millionths of a degree per year. So, in net, we don’t really worry about it. The place that we worry about it is this idea of local warming. And so, one of the things that’s nice about the ocean is because there are these persistent currents, we don’t have buildup of temperature anywhere. So, this question of the local heating, it’s really just, sort of, make sure your density is modest and then the impact is really negligible. An efficient data center in the water actually has less impact on the oceans than an inefficient data center on land does.

Host: Let’s talk about latency for a second. One of your big drivers in putting these in the water, but near population centers, is so that data moves fairly quickly. Talk about the general problems of latency with data centers and how Natick is different.

Ben Cutler: So, there are some things that you do where latency really doesn’t matter. But I think latency gets you in all sorts of ways, and in sometimes surprising ways. The thing to remember is, even if you’re just browsing the web, when a webpage gets painted, there’s all of this back-and-forth traffic. And so, ok, so I’ve got now a data center that’s, say, 1,000 kilometers away, so it’s going to be 10 milliseconds, roundtrip, per each communication. But I might have a couple hundred of those just to paint one webpage. And now all of a sudden it takes me like 2 seconds to paint that webpage. Whereas it would be almost instantaneous if that data center is nearby. And think about, also, I’ve got factories and automation and I’ve got to control things. I need really tight controls there in terms of the latency in order to do that effectively. Or imagine a future where autonomous vehicles become real and they’re interacting with data centers for some aspect of their navigation or other critical functions. So, this notion of latency really matters in a lot of ways that will become, I think, more present as this idea of intelligent edge grows over time.

Host: Right. And so, what’s Natick’s position there?

Ben Cutler: So, Natick’s benefit here, is more than half the world’s population lives within a couple hundred kilometers of the ocean. And so, in some sense, you’re finding a way to put data centers very close to a good percentage of the population. And you’re doing it in a way that’s very low impact. We’re not taking land because think about if I want to put a data center in San Francisco or New York City. Well turns out, land’s expensive around big cities. Imagine that. So, this is a way to go somewhere where we don’t have some of those high costs. And, potentially, with this offshore renewable energy, and not, as we talked about before, having any impact on the water supply.

Host: So, it could solve a lot of problems all at once.

Ben Cutler: It could solve a lot of problems in this very, sort of, environmentally sustainable way, as well as, in some sense, adding these socially sustainable factors as well.

Host: Yeah. Talk a little bit about the phases of this project. I know there’s been more than one. You alluded to that a little bit earlier. But what have you done stage wise, phase wise? What have you learned?

Ben Cutler: So, Phase 1 was a Proof of Concept, which is literally, we built a can, and that can had a single computer rack in it, and that rack only had 24 servers. And that was about one-third of the space of the rack. It was a standard, what we call, 42U rack, which reflects the size of the rack. Fairly standard for data centers. And then the other two-thirds were filled with what we call load trays. Think of them as, all they do is, they’ve got big resistors that generate heat. So, it’s like hairdryers. And so, they’re used, actually, today in data centers to just, sort of, commission new data centers. Test the cooling system, actually. In our case, we just wanted to generate heat. Could we put these things in the water? Could we cool it? What would that look like? What would be the thermal properties? So, that was a Proof of Concept just to see, could we do this? Could we just, sort of, understand the basics? Were our intuitions right about this? What sort of problems might we encounter? And just, you know, I hate to use… but, you know, get our feet wet. Learning how to interact…

Host: You had to go there.

Ben Cutler: It is astonishing the number of expressions that relate to water that we use.

Host: Oh gosh, the puns are…

Ben Cutler: It’s tough to avoid. So, we just really wanted to get some sense of what it was like to work with the marine industry. Every company and, to some degree, industry, has ways in which they work. And so, this was really an opportunity for us to learn some of those and become informed, before we go to this next stage that we’re at now. Which is more of a prototype stage. So, this vessel that we built this time, is about the size of a shipping container. And that’s by intent. Because then we’ve got something that’s of a size that we can use standard logistics to ship things around. Whether on the back of a truck or on a container ship. Again, keeping with this idea of, if something like this is successful, we have to think about what are the economics of this? So, it’s got 12 racks this time. It’s got 864 servers. It’s got FPGAs, which is something that we use for certain types of acceleration. And then, each of those 864 servers has 32 terabytes of disks. So, this is a substantial amount of capability. It’s actually located in the open ocean in realistic operating conditions. And in fact, where we are, in the winter, the waves will be up to 10 meters. We’re at 36 meters depth. So that means the water above us will vary between 26 and 46 meters deep. And so, it’s a really robust test area. So, we want to understand, can this really work? And what sort of challenges there might be in this realistic operating environment.
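A quick back-of-the-envelope on the Phase 2 vessel, using only the numbers given above (terabytes and petabytes taken as decimal units):

```python
racks = 12
servers = 864
tb_per_server = 32

servers_per_rack = servers // racks      # 72 servers in each of the 12 racks
total_raw_tb = servers * tb_per_server   # 27,648 TB of raw disk
total_raw_pb = total_raw_tb / 1_000      # roughly 27.6 PB in one vessel
```

That is a substantial amount of storage for a single shipping-container-sized unit, which is what makes the "standard logistics" framing above economically interesting.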

Host: So, this is Phase 2 right now.

Ben Cutler: This is Phase 2. And so now we’re in the process of learning and collecting data from this. And just going through the process of designing and building this, we learned all sorts of interesting things. And so, turns out, when you’re building these things to go under the ocean, one of the cycling that you get is just from the waves going by. And so, as you design these things, you have to think about how many waves go by this thing over the lifetime? What’s the frequency of those waves? What’s the amplitude of those waves? And this all impacts your design, and what you need to do, based on where you’re going to put it and how long it will be. So, we learned a whole bunch of stuff from this. And we expect everything will all be great and grand over the next few years here. But we’ll obviously be watching, and we’ll be learning. If there is a next phase, it would be a pilot. And now we’re talking to build something that’s larger scale. So, it might be multiple vessels. There might be a different deployment technology than what we used this time, to get greater efficiency. So, I think those are things that, you know, we’re starting to think about, but mostly, right now, we’ve got this great thing in the water and we’re starting to learn.

Host: Yeah. And you’re going to leave it alone for 5 years, right?

Ben Cutler: This thing will just be down there. Nothing will happen to it. There will be no maintenance until it’s time to retire the servers, which, in a commercial setting, might be every 5 years or longer. And then we’ll bring it back. So, it really is the idea of a lights-out thing. You put it there. It just does its thing and then we go and pull it back later. In an actual commercial deployment, we’d probably be deeper than 36 meters. The reason we’re at 36 meters, is, it turns out, 40 meters is a safe distance for human divers to go without a whole lot of special equipment. And we just wanted that flexibility in case we did need some sort of maintenance or some sort of help during this time. But in a real commercial deployment, we’d go deeper, and one of the reasons for that, also, is just, it will be harder for people to get to it. So, people worry about physical security. We, in some sense, have a simpler challenge than a submarine because a submarine is typically trying to hide from its adversaries. We’re not trying to hide. If we deploy these things, we’d always be within the coastal waters of a country and governed by the laws of that country. But we do also think about, let’s make this thing safe. And so, one of the safety aspects is not just the ability to detect when things are going around you, but also to put it in a place where it’s not easy for people to go and mess with it.

Host: Who’s using this right now? I mean this is an actual test case, so, it’s a data center that somebody’s accessing. Is it an internal data center or what’s the deal on that?

Ben Cutler: So, this data center is actually on our global network. Right now, it’s being used by people internally. We have a number of different teams that are using it for their own production projects. One group that’s working with it, is we have an organization inside Microsoft called AI for Earth. We have video cameras, and so, one of the things that they do is, they’re watching the different fish going by, and other types of much more bizarre creatures that we see. And characterizing and counting those, and so we can kind of see how things evolve over time. And one of the things we’re looking to do, potentially, is to work with other parties that do these more general assessments and then provide some of those AI technologies to them for their general research of marine environment and how, when you put different things in the water, how that affects things, either positively or negatively. Not just, sort of, what we’re doing, but other types of things that go in the water which might be things as simple as cables or marine energy devices or other types of infrastructure.

Host: I would imagine, when you deploy something in a brand-new environment, that you have unintended consequences or unexpected results. Is there anything interesting that’s come out of this deployment that you’d like to share?

Ben Cutler: So, I think when people think of the ocean, they think this is like a really hostile and dangerous place to put things. Because we’re all used to seeing big storms, hurricanes and everything that happens. And to be sure, right at that interface between land and water is a really dangerous place to be. But what you find is that, deep under the waves on the seabed, is a pretty quiet and calm place. And so, one of the benefits that we see out of this, is that even for things like 100-year hurricanes, you will hear, acoustically, what’s going on, on the surface, or near the land… waves crashing and all this stuff going on. But it’s pretty calm down there. The idea that we have this thing deep under the water that would be immune to these types of things is appealing. So, you can imagine this data center down there. This thing hits. The only connectivity back to land is going to be fiber. And that fiber is largely glass, with some insulating shell, so it might break off. But the data center will keep operating. Your data center will still be safe, even though there might be problems on land. So, this diversity of risk is another thing that’s interesting to people when we talk about Natick.

Host: What about deployment sites? How have you gone about selecting where you put Project Natick and what do you think about other possibilities in the future?

Ben Cutler: So, for this Phase 2, we’re in Europe. And Europe, today, is the leader in offshore renewable energies. Twenty-nine of the thirty largest offshore windfarms are located in Europe. We’re deployed at the European Marine Energy Center in the Orkney Islands of Scotland. The grid up there is 100% renewable energy. It’s a mix of solar and wind as well as these offshore energies that people are testing at the European Marine Energy Center or EMEC. So, tidal energy and wave energy. One of the things that’s nice about EMEC is people are testing these devices. So, in the future, we have the option to go completely off this grid. It’s 100% renewable grid, but we can go off and directly connect to one of those devices and test out this idea of a co-location with renewable energies.

Host: Did you look at other sites and say, hey, this one’s the best?

Ben Cutler: We looked at a number of sites. Both test sites for these offshore renewables as well as commercial sites. For example, go into a commercial windfarm right off the bat. And we just decided, at this research phase, we had better support and better capabilities in a site that was actually designed for that. One of the things is, as I might have mentioned, the waves there get very, very large in the winter. So, we wanted some place that had very aggressive waters so that we know that if we survive in this space that we’ll be good pretty much anywhere we might choose to deploy.

Host: Like New York. If you can make it there…

Ben Cutler: Like New York, exactly.

Host: You can make it anywhere.

Ben Cutler: That’s right.

(music plays)

Host: What was your path to Microsoft Research?

Ben Cutler: So, my career… I would say that there’s been very little commonality in what I’ve done. But the one thing that has been common is this idea of taking things from early innovation to market introduction. So, a lot of my early career was in startup companies, either as a founder or as a principal. I was in supercomputers, computer storage, video conferencing, different types of semiconductors, and then I was actually here at Microsoft earlier, and I was working in a group exploring new operating system technologies. And then, after that, I went to DARPA, where I was there for a few years working on different types of information technology. And then I came back here. And, truthfully, when I first heard about this idea that they were thinking about doing these underwater data centers, it just sounded like the dumbest idea to me, and… But you know, I was willing to go and then, sort of, try and think through, ok, on the surface it sounds ridiculous. But a lot of things start that way. And you have to be willing to go in, understand the economics, understand the science and the technology involved, and then draw some conclusion of whether you think that can actually go somewhere reasonable.

Host: As we close, Ben, I’m really interested in what kinds of people you have on your team, what kinds of people might be interested in working on Special Projects here. Who’s a good fit for a Special Projects research career?

Ben Cutler: I think we’re looking for people who are excited about the idea of doing something new and don’t have fear of doing something new. In some sense, it’s a lot like people who’d go into a startup. And what I mean by that is, you’re taking a lot more risk, because I’m not in a large organization, I have to figure a lot of things out myself, I don’t have a team that will know all these things, and a lot of things may fall on the floor just because we don’t have enough people to get everything done. It’s kind of like driving down the highway and you’re, you know, lashed to the front bumper of the car. You’re fully exposed to all the risk and all the challenges of what you’re doing. And you’re, you know, wide open. There’s no end of things to do and you have to figure out what’s important, what to prioritize, because not everything can get done. But have the flexibility to really, then, understand that even though I can’t get everything done, I’m going to pick and choose the things that are most important and really drive in new directions without a whole lot of constraints on what you’re doing. So, I think that’s kind of what we look to. I have only two people who actually directly report to me on this project. That’s the team. But then I have other people who are core members, who worked on it, who report to other people, and then across the whole company, more than two hundred people touched this Phase 2, in ways large and small. Everything from helping us design the data center, to people who refurbished servers that went into this. So, it’s really a “One Microsoft” effort. And so, I think that there’s always opportunities to engage, not just by being on a team, but interacting and providing your expertise and your knowledge base to help us be successful. Because it’s only in that way that we can take these big leaps.
And so, in some sense, we’re trying to make sure that Microsoft Research is really staying true to this idea of pursuing new things but not just five years out, in known fields, but look at these new fields. Because the world is changing. And so, we’re always looking for people who are open to these new ideas and frankly are willing to bring new ideas with them as to where they think we should go and why. And that’s how we as a company I think grow and see new markets and are successful.

(music plays)

Host: Ben Cutler, it’s been a pleasure. Thanks for coming on the podcast today.

Ben Cutler: My pleasure as well.

To learn more about Ben Cutler, Project Natick, and the future of submersible data centers, visit natick.research.microsoft.com.

Microsoft study: Teens looking to parents for help with online issues

Preliminary results of a new Microsoft study show teenagers around the world are increasingly turning to their parents and other trusted adults for help with online problems — an encouraging development as the new school year begins.

More than four in 10 teens (42 percent) from 22 countries who encountered online issues said they asked their parents for help, while 28 percent said they sought advice from another adult such as a teacher, coach or counselor. Those figures are up an impressive 32 and 19 percentage points, respectively, compared to last year’s findings which showed only 10 percent of young people turned to their parents for advice and just 9 percent asked for help from other adults. In addition, adults and teens across the globe say parents are by far the best placed of any group to keep young people and families safe online. Results show parents have both the greatest potential — and were deemed the most effective — at promoting online safety among young people, teens and families.

The findings are from the latest research associated with Microsoft’s work in digital civility — encouraging safer and healthier online interactions among all individuals and communities. The study, “Civility, Safety and Interaction Online — 2018,” polled teens ages 13-17 and adults ages 18-74 in 22 countries[i] about more than 20 online risks. This latest research builds on similar studies conducted over the previous two years, which polled the same age groups in 23 and 14 countries, respectively. A total of 11,157 individuals participated in the latest research.

Online risk exposure, consequences and pain higher for teen girls 

Teenage girls were more likely to ask for help from their parents (44 percent of girls vs. 37 percent of boys) and from other trusted adults (29 percent of girls vs. 26 percent of boys), the study shows, likely because life online in general is harder on girls than boys. Indeed, the data demonstrate that girls have a higher level of online risk exposure than boys; they suffer more consequences and “pain” from online ills, and the online risks and abuse that they experience are more emotionally charged. Moreover, as online risks have grown in severity — think “sextortion” and “swatting”[ii] — young people are perhaps more inclined to seek advice from the older generation.

“Civility in cyberspace has become a ‘must’ as we understand so much more about how harmful simple type and images can be,” said Dr. Sharon Cooper, a U.S.-based pediatrician, who works with survivors of cybervictimization. “The immediate, seemingly universal, distribution of unwanted materials can wound both youth and adults.”

Based on her experience, Dr. Cooper spoke of chronic anxiety, depression and a rational paranoia as just some of the resulting harms from negative online experiences. “Sadly, research has shown that some link to cybervictimization has become the issue in nearly 50 percent of cases of suicidal thoughts resulting in seeking care in emergency rooms,” she added.

And, some of these consequences were borne out in our research. Two-thirds (66 percent) of female teenage respondents reported being exposed to online risks vs. 60 percent of male teenage respondents. Nearly three-quarters (73 percent) of girls reported negative consequences following an online issue compared to 67 percent of boys, and the level of pain associated with online risks and the intensity of the attendant emotions — namely fear, anger and sadness — were higher for girls.

New mix of countries in latest study

In 2018, Microsoft added Canada and Singapore to the survey, while three previously polled countries (Australia, China and Japan) were removed. Complete and final results will be made available on Feb. 5, 2019, to mark international Safer Internet Day along with a year-over-year comparison of the Microsoft Digital Civility Index. The Digital Civility Index measures the perceived level of online civility in a given country based on the reported level of risk exposure of individuals in that country. Between 2016 and 2017, the Digital Civility Index did not change—both years read 65 percent, despite the addition in the second year of nine countries and three risks. In the latest survey, the 21 polled-about risks break down as follows:

  • Reputational – “Doxing” and damage to personal or professional reputations
  • Behavioral – Being treated meanly; experiencing trolling, online harassment or bullying; encountering hate speech and microaggressions
  • Sexual – Sending or receiving unwanted sext messages; making sexual solicitations; receiving unwanted sexual attention (a new risk added in this latest research); and being a victim of sextortion or non-consensual pornography (aka “revenge porn”), and
  • Personal / Intrusive – Being the target of unwanted contact, experiencing discrimination, swatting, misogyny, exposure to extremist content/recruiting, or falling victim to hoaxes, scams or fraud.

Back to school with Microsoft’s Digital Civility Challenge

We’re making this preliminary research available in the back-to-school timeframe to encourage parents, teachers, teens and young people to commit to Microsoft’s Digital Civility Challenge – four basic tenets for life online, namely:

  • Live the “Golden Rule” and treat others as you would like to be treated by leading with empathy, compassion and kindness, and affording everyone respect and dignity both online and off.
  • Respect differences by honoring diverse perspectives and, when disagreements surface, engage thoughtfully and avoid name-calling and abusive language.
  • Pause before replying to comments or posts you disagree with and refrain from posting or sending anything that could hurt someone, damage a reputation or threaten someone’s safety.
  • Stand up for yourself and others if it’s safe and prudent to do so; also, report illegal and abusive content and behavior and preserve evidence.

We will post at least one other early look at some other key findings in the weeks ahead. In the meantime, to learn more about digital civility and how you can become a champion for these common-sense online practices, visit www.microsoft.com/digitalcivility. For more on online safety generally, visit our website and check out and share our resources; “like” us on Facebook and follow us on Twitter.

[i] Countries surveyed:  Argentina, Belgium, Brazil, Canada*, Chile, Colombia, France, Germany, Hungary, India, Ireland, Italy, Malaysia, Mexico, Peru, Russia, Singapore*, South Africa, Turkey, the United Kingdom, the United States and Vietnam. (* Indicates the first time this country has been included in this research.)

[ii] In the study, “swatting” is defined as deceiving emergency services like police, fire or medical into sending an emergency response team, typically to a person’s home, based on a false report of an ongoing critical incident or crime.


Learn the tricks for using Microsoft Teams with Exchange

Using Microsoft Teams means Exchange administrators need to understand how this emerging collaboration service connects to the Exchange Online and Exchange on-premises systems.

At its 2017 Ignite conference, Microsoft unveiled its intelligent communications plan, which mapped out the movement of features from Skype for Business to Microsoft Teams, the Office 365 team collaboration service launched in March 2017. Since that September 2017 conference, Microsoft has added meetings and calling features to Teams, while also enhancing the product’s overall functionality.

Organizations that run Exchange need to understand how Microsoft Teams relies on Office 365 Groups, as well as the setup considerations Exchange administrators need to know.

How Microsoft Teams depends on Office 365 Groups

Each team in Microsoft Teams depends on the functionality provided by Office 365 Groups, such as shared mailboxes or SharePoint Online team sites. An organization can permit all users to create a team and Office 365 Group, or it can limit this ability by group membership. 
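Limiting creation by group membership is typically done through an Azure AD directory setting. The following is a sketch of the commonly documented approach, assuming the AzureADPreview module and a placeholder security group named "Teams Creators":

```powershell
# Connect-AzureAD first; requires the AzureADPreview module.
# "Group.Unified" is the directory setting template that governs
# Office 365 Group (and therefore team) creation.
$Template = Get-AzureADDirectorySettingTemplate |
    Where-Object {$_.DisplayName -eq "Group.Unified"}
$Setting = $Template.CreateDirectorySetting()

# Block creation for everyone except members of the allowed group.
$Setting["EnableGroupCreation"] = "False"
# "Teams Creators" is a hypothetical group name; substitute your own.
$Setting["GroupCreationAllowedGroupId"] =
    (Get-AzureADGroup -SearchString "Teams Creators").ObjectId

New-AzureADDirectorySetting -DirectorySetting $Setting
```

Users outside the allowed group will then see the option to create a team, but the creation attempt will fail.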

When a user creates a new team, it can be linked to an existing Office 365 Group; otherwise, a new group is created.

[Image: Microsoft Teams layout]
Microsoft Teams is Microsoft's foray into the team collaboration space. Using Microsoft Teams with Exchange requires administrators to stay abreast of roadmap plans to configure and use the collaboration offering properly.

Microsoft recently adjusted settings so that new Office 365 Groups created by Microsoft Teams do not appear in Outlook by default. Administrators who want new groups to show in Outlook can use the Set-UnifiedGroup PowerShell cmdlet.
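A minimal sketch of that change, assuming a connected Exchange Online PowerShell session and a placeholder group named "Project Falcon":

```powershell
# Connect to Exchange Online PowerShell first.
# Make a single Teams-created group visible in Outlook clients:
Set-UnifiedGroup -Identity "Project Falcon" -HiddenFromExchangeClientsEnabled:$false

# Or surface every group that is currently hidden from Outlook:
Get-UnifiedGroup -ResultSize Unlimited |
    Where-Object {$_.HiddenFromExchangeClientsEnabled} |
    Set-UnifiedGroup -HiddenFromExchangeClientsEnabled:$false
```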

Microsoft Teams' reliance on Office 365 Groups affects organizations that run an Exchange hybrid configuration. In this scenario, the Azure AD Connect group writeback feature can be enabled to synchronize Office 365 Groups to Exchange on premises as distribution groups. But with this setting, the many Office 365 Groups created via Microsoft Teams will also appear in Exchange on premises, so administrators should monitor the results and adjust the configuration if needed.

Using Microsoft Teams with Exchange Online vs. Exchange on premises

Exchange Online subscribers get access to all Microsoft Teams features. However, if the organization uses Exchange on premises, then certain functionality, such as the ability to modify user profile pictures and add connectors, is not available.


Without connectors, users cannot plug third-party systems into Microsoft Teams; certain add-ins, like the Twitter connector that delivers tweets into a Microsoft Teams channel, cannot be used. Additionally, organizations that use Microsoft Teams with Exchange on-premises mailboxes must run on Exchange 2016 cumulative update 3 or higher to create and view meetings in Microsoft Teams.
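Administrators can check whether on-premises servers meet that version requirement from the Exchange Management Shell; this is a sketch assuming an on-premises session (Exchange 2016 CU3 corresponds to build 15.1.544.27):

```powershell
# Run in the Exchange Management Shell on premises.
# AdminDisplayVersion should report build 15.1.544.27 or later
# (Exchange 2016 cumulative update 3) on every mailbox server.
Get-ExchangeServer | Format-Table Name, AdminDisplayVersion
```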

Message hygiene services and Microsoft Teams

Antispam technology might need to be adjusted due to some Microsoft Teams and Exchange integration issues.

When a new member joins a team, Teams sends that member a notification email from the email.teams.microsoft.com domain. Microsoft owns this domain name, and tenant administrators cannot change it.

Because the domain is considered an external email domain to the organization’s Exchange Online deployment, the organization’s antispam configuration in Exchange Online Protection may mark the notification email as spam. Consequently, the new member might not receive the email or may not see it if it goes into the junk email folder.

To prevent this situation, Microsoft recommends adding email.teams.microsoft.com to the allowed domains list in Exchange Online Protection.
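Assuming a connected Exchange Online PowerShell session and the default content filter policy (named "Default"), the change can be made in one line:

```powershell
# Connect to Exchange Online PowerShell first.
# Add the Teams notification domain to the allowed sender domains of the
# default Exchange Online Protection content filter policy, so new-member
# notification emails are not marked as spam.
Set-HostedContentFilterPolicy -Identity Default `
    -AllowedSenderDomains @{Add="email.teams.microsoft.com"}
```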

Complications with security and compliance tools

Administrators need to understand the security and compliance functionality when using Microsoft Teams with Exchange Online or Exchange on premises. Office 365 copies team channel conversations in the Office 365 Groups shared mailbox in Exchange Online so its security and compliance tools, such as eDiscovery, can examine the content. However, Office 365 stores copies of chat conversations in the users’ Exchange Online mailboxes, not the shared mailbox in Office 365 Groups.

Historically, Office 365 security and compliance tools could not access conversation content in an Exchange on-premises mailbox in a hybrid environment. Microsoft made changes to support this scenario, but customers must request this feature via Microsoft support.

Configure Exchange to send email to Microsoft Teams

An organization might want its users to have the ability to send email messages from Exchange Online or Exchange on premises to channels in Microsoft Teams. Every channel has a unique email address; to send a message to a channel, users need that address and permission from the administrator. A right-click on a channel reveals the Get email address option.
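Once a user has the channel address, mail can be sent to it like any other recipient. This sketch uses made-up contoso.com placeholders; the real channel address must be copied from the Get email address option in the Teams client:

```powershell
# The -To address below is a made-up example of a Teams channel address;
# copy the real one from the channel's "Get email address" option.
Send-MailMessage -SmtpServer "smtp.office365.com" -Port 587 -UseSsl `
    -Credential (Get-Credential) `
    -From "admin@contoso.com" `
    -To "a1b2c3d4.contoso.com@amer.teams.ms" `
    -Subject "Build status" `
    -Body "The nightly build completed successfully."
```

The message then appears as a new conversation in the channel, with attachments saved to the team's SharePoint site.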

Administrators can restrict the domains permitted to send email to a channel in the Teams administrator settings in the new Microsoft Teams and Skype for Business admin center.