Today’s post was written by Stephen Cutchins, accessibility lead in Accenture’s CIO organization.
Diversity at Accenture is a source of strength; the wealth of different perspectives and skill sets that our employees bring to the table keeps us leading in our field. Achieving more as a company starts with addressing the needs of every single employee in our workforce.
I am passionate about accessibility. I grew up with two cousins with disabilities, and that experience shaped my outlook on inclusion in the workplace. Accessible technology is about one thing: fitting the tools to the humans who use them. I’m fortunate to work at a company that shares my vision.
I wanted to create an accessibility practice at Accenture, and to that end, I started as the first employee in the CIO’s Center for Excellence, where we focus on finding the right tools for an inclusive workplace. When it comes to business tools, we see Microsoft as a leader in inclusive technology and a great partner: a perfect match for our goal of putting technology to work empowering every one of our employees. In fact, we now take it for granted that the experiences within Microsoft 365 will work well for our employees.
As a human-centric company, we design our workplace initiatives to bring the conversation about accessibility to the forefront, encouraging an open dialogue about how we can support employees’ needs in the workplace. Accenture runs on Office 365 productivity services, which include a wealth of built-in accessibility features. The Microsoft approach of “accessibility by design” matches our philosophy that accessibility is not an add-on or an afterthought, but an inherent part of the technology we use to communicate and collaborate as an organization.
The ability to collaborate effectively with your colleagues to get work done is the baseline of any productive organization. A lot of credit goes to accessibility features in Office 365 ProPlus applications, such as Skype for Business, Word, and Outlook, for helping us tap into the incredible resources in our company. Daily Skype voice and video calls become transformative when people who are blind or have motor disabilities can participate by using the JAWS screen reader for Windows or voice dictation software. Even minor changes can have an enormous impact: I was excited to see that when Microsoft moved the Accessibility Checker front and center in Word, near the spell checker, it raised awareness of both the feature itself and the need to use it. We all have different abilities, and learning to consider the full range of situations across the disability spectrum means employees will use the tools at hand for better communication and collaboration with everyone.
It gives me enormous satisfaction that our inclusive workplace, built on Microsoft technologies, engages our employees to do their best work and helps them realize their true potential and grow as human beings. Everyone benefits.
Read the case study to learn more about how Accenture is empowering its workforce with the intuitive accessibility tools built into Windows 10 and Office 365.
Administrators should set Office 365 group limits to rein in unchecked group creation, which can lead to unintended consequences.
An Office 365 group not only contains the membership list for a collection of people, but also manages provisioning of, and access to, multiple services, such as Exchange and SharePoint. At a fundamental level, this means each time a user creates a group for something, such as a project or a team, they also provision a SharePoint site, a group inbox, a calendar, a Planner plan, a OneNote notebook and more.
Office 365 Groups is also the foundation behind newer services such as Microsoft Teams, Office 365’s chat-based collaboration app. In addition to messaging via channels, Teams enables users to chat with colleagues over voice and video calls, collaborate on documents and use tabs to display other relevant team information. Teams creates an Office 365 group behind each team, not only for the membership list, but also to connect the underlying group-enabled services for data storage.
Why Office 365 group limits are crucial
By default, Office 365 users can create groups without any restrictions. While this may seem like a good way to spur viral adoption, it is likely to backfire.
The strength of Office 365 Groups is that only one group is needed to manage a team’s calendar, share files among colleagues, and hold group video calls and chats. However, this is not immediately obvious to workers as they explore the available services.
For example, a user starts work on a project and, being new to Microsoft Planner, decides to add a plan with the name Project Z Plan. The user also sees he can create a group calendar in Outlook, which he names Project Z Calendar. He feels he could also use a SharePoint site for the project, so he makes one called Project Z. Later, the user discovers Microsoft Teams and feels it can help with the project collaboration efforts, so he generates a new team named Project Z Team.
Each of those actions creates a new group in Office 365. A combined lack of guidance and structure means the worker’s actions, meant to connect multiple Office 365 services into a seamless fabric, instead added multiple silos and redundant resources.
This scenario illustrates the need for administrators to develop Office 365 group limits to avoid similar issues. Users need instruction on what tool to use and when, but also some understanding of what a group is in the context of the organization.
Checklist for a proper Office 365 Groups configuration
Before enabling Office 365 Groups for widespread adoption, the administrator should adjust the basic settings to provide limits and help users adhere to corporate standards.
At a minimum, the IT department should consider the following Office 365 Groups configuration:
the email address policy for group Simple Mail Transfer Protocol (SMTP) addresses;
group creation restrictions;
the usage guideline URL; and
group classifications.
Apart from the email address policy, all of these configurations require an Azure Active Directory Premium license, as documented by Microsoft.
Next, define the settings to adjust:
Email address policy: use the company’s main domain name for group SMTP addresses, because all the mailboxes were moved to Office 365.
Usage guideline URL: point users to an internal page that shows best practices for creating Office 365 Groups.
Group creation restrictions: allow only the line managers group to create new Office 365 Groups.
Group classifications: define “Low risk,” “Medium risk” and “High risk” labels so users can classify groups and be aware of the sensitivity of the information within each group.
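The creation restriction, usage guideline URL and classification settings all live in the Azure AD “Group.Unified” directory settings object. A sketch using the AzureAD PowerShell module follows; the group name and URL are placeholders, and if a Group.Unified setting object already exists in the tenant, Set-AzureADDirectorySetting updates it instead:

```powershell
# Requires the AzureAD (or AzureADPreview) module; run Connect-AzureAD first.
$template = Get-AzureADDirectorySettingTemplate | Where-Object { $_.DisplayName -eq "Group.Unified" }
$setting = $template.CreateDirectorySetting()

# Only members of the named group may create Office 365 Groups.
$setting["EnableGroupCreation"] = "False"
$setting["GroupCreationAllowedGroupId"] = (Get-AzureADGroup -SearchString "Line Managers").ObjectId

# Placeholder URL for the internal best-practices page.
$setting["UsageGuidelinesUrl"] = "https://intranet.contoso.com/groups-guidelines"

# Classification labels users pick from when creating a group.
$setting["ClassificationList"] = "Low risk,Medium risk,High risk"

New-AzureADDirectorySetting -DirectorySetting $setting
```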
To make these changes, we use PowerShell to change the configuration in multiple places.
For the email address policy configuration, add a new policy that applies to all groups with the New-EmailAddressPolicy cmdlet:
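A minimal version of that policy might look like the following, run from an Exchange Online PowerShell session; the policy name and domain are placeholders:

```powershell
# -IncludeUnifiedGroupRecipients scopes the policy to Office 365 Groups;
# replace contoso.com with the tenant's main domain.
New-EmailAddressPolicy -Name "Groups main domain" -IncludeUnifiedGroupRecipients -EnabledEmailAddressTemplates "SMTP:@contoso.com"
```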
With those adjustments in place, the Office 365 Groups creation process changes: new groups are assigned the appropriate email addresses automatically, while existing groups remain unchanged.
Add boundaries and reduce complications
It’s important for administrators to employ Office 365 group limits. This practice prevents unchecked creation of resources in the collaboration platform, maintaining order and avoiding problems with redundancy and wasted resources.
Change key settings to put basic governance in place to steer users toward usage guidelines for Office 365 Groups. This helps the administrator ensure the groups are created correctly and can be managed properly as adoption grows.
We are excited to extend our lead in standards-based Industrie 4.0 cloud solutions built on the industrial interoperability standard OPC UA, with several new product announcements at SPS IPC Drives 2017 in Nuremberg, Europe’s leading industrial automation exhibition, which takes place next week.
We continue to be the only cloud provider that offers both OPC UA client/server and the upcoming (in OPC UA version 1.04) Publish/Subscribe communication patterns to the cloud and back, with open-source modules for easy connection to existing machines, without requiring changes to those machines and without opening the on-premises firewall. We achieve this through the two Azure IoT Edge modules OPC Proxy and OPC Publisher, which are available as open source on GitHub and as Docker containers on Docker Hub.
As previously announced, we have now contributed an open-source OPC UA Global Discovery Server (GDS) and client to the OPC Foundation GitHub. This contribution brings us close to the landmark of 4.5 million contributed source lines of code, keeping us the largest contributor of open-source software to the OPC Foundation. The server can be run in a container and used for self-hosting in the OT network.
Additionally at SPS, Microsoft will demo its commercial, fully Azure IoT integrated version of the GDS and accompanying client at the OPC Foundation booth. This version runs as part of the Azure IoT Edge offering made available to the public last week.
We have continued to release monthly updates to our popular open-source Azure IoT Suite Connected Factory preconfigured solution which we launched at Hannover Messe this year. Most recently, we have updated the look and feel to be in line with the new set of preconfigured solutions currently being released. We will continue to release new versions of the Connected Factory on a monthly basis with improvements and new features, so check them out on a regular basis.
Our OPC UA Gateway program also keeps growing rapidly. Since launching the program just six months ago at Hannover Messe, we have signed up 13 partners, including Softing, Unified Automation, Hewlett Packard Enterprise, Kepware, Cisco, Beckhoff, Moxa, Advantech, Nexcom, Prosys OPC, MatrikonOPC, Kontron, and Hilscher.
Furthermore, we are excited to announce that the Industrial IoT Starter Kit, previously announced at OPC Day Europe, is now available to order online from our partner Softing. The kit empowers companies to securely link their production line from the physical factory floor to the Microsoft Azure IoT Suite Connected Factory in less than one day. This enables the management, collection, and analysis of OPC UA telemetry data in a centralized way to gain valuable business insights immediately and improve operational efficiency. As with all our Industrial IoT products, this kit uses OPC UA, but it also comes with rich PLC protocol translation software from Softing called dataFEED OPC Suite. It ships with industry-grade hardware from Hewlett Packard Enterprise in the form of the HPE GL20 IoT Gateway. Swing by the Microsoft booth to check it out and get the chance to try it for six months in your own industrial installation.
Stop by our booth in Hall 6, as well as the OPC Foundation booth in Hall 7 to see all of this for yourself!
“Accessibility by design” is an important concept for Microsoft, and one that underpins many of its artificial intelligence-powered products, including Seeing AI.
Announced on Wednesday among a series of other AI tools, Seeing AI is a free mobile application designed to support people with visual impairments by narrating the world around them. The app — which is an ongoing research project bringing together deep learning and Microsoft Cognitive Services — can read documents, making sense of structural elements such as headings, paragraphs, and lists, as well as identify a product using its barcode.
It can additionally recognise and describe images in other apps, and even pinpoint people’s faces and provide a description of their appearance, though camera quality and lighting might influence its description.
At the Microsoft Future of Artificial Intelligence event in Sydney, Kenny Johar Singh, a Melbourne-based cloud solutions architect at Microsoft, demonstrated Seeing AI, which he uses to help navigate the physical world.
Singh lost 75 percent of his vision to a degenerative retinal condition, but technology has been an “empowering force” that compelled him to pursue a career in the industry, he said.
Where Singh was once reliant on his wife to bridge the information gap between him and the physical world, Seeing AI now lets him be more independent.
In front of guests, Singh used the app to scan a product, which the app correctly identified as Bounce’s coconut lemon-flavoured natural energy ball.
“What it’s actually done for me is that, now that it has detected the ball, I have the calorie calibration — proteins, carbohydrates, fats, and so forth. So I know what I’m picking and using which basically means that I can be totally independent now and my wife absolutely loves it because she doesn’t need to be dragged into stuff like this,” he told guests at the Microsoft Summit.
Singh also pointed his phone’s camera at Jenny Lay-Flurrie, Microsoft’s chief accessibility officer, and the Seeing AI app described Lay-Flurrie as a “54-year-old woman with brown hair wearing glasses looking happy”. The last three descriptive pieces of information were accurate, though the app was off the mark about her age.
“People who are blind are not the best at taking pictures, and we often get a lot of edge cases coming through with those pictures. Pictures that aren’t in the middle bracket of the high quality pictures that you can use to … adapt your algorithms and get some machine learning. They are in a lot of ways sometimes dirty pictures, but the overall quality of cognitive services will increase exponentially because you’re including this data sample,” Lay-Flurrie told guests at the Microsoft Summit.
Microsoft’s accessibility features are being used by the Australian National University’s law lecturer Cameron Roles, who said in a statement, “Now is definitely, in my view, the most exciting time in history to be blind.”
One of triplets, Roles, who is also a director with Vision Australia, was born three months early and the oxygen that saved his life left him blind. Of particular use to him are the latest accessibility features in platforms such as Office 365 and Windows 10.
For example, Microsoft has recently integrated its alternative text engine into the core of Windows 10, which Lay-Flurrie said was a “serious step”. This means visually impaired users who use third-party screen readers or Microsoft’s own Narrator, which reads text aloud and describes events, will be able to get a description of the contents of an image, rather than simply knowing there’s an image on screen.
The company also recently introduced Eye Control, a built-in eye-tracking feature for Windows 10, enabling people living with motor neurone disease (MND) and other mobility impairments to navigate their computers. The feature currently only works with Swedish eye-tracking vendor Tobii’s Eye Tracker 4C, though Microsoft is working to add support for other similar devices.
Within Eye Control is a capability called “shape writing”, aimed at speeding up typing by allowing the user to look at the first and last letters of a word and “simply glancing at letters in between”. Microsoft said in August that a “hint of the word predicted will appear on the last key of the word”, and if the prediction is incorrect, the user can select other predicted alternatives.
The company has additionally introduced filters in Windows 10 for colour blindness, a condition that Lay-Flurrie said is more common than people think, affecting one in nine people.
Lay-Flurrie, who has a hearing impairment, said that while we need to consider the potential implications of AI in areas such as privacy and security, we should also look at the positives. She said there are 1 billion people living with disabilities globally, and AI can empower these people both in their day-to-day lives and in workplace environments.
“I also look at my daughter, who has autism, she’s 10. With autism, you don’t always understand social cues and social language, and facial expressions are not obvious to her what they mean. She often misinterprets what we’re saying … So I love the potential and the power and beginning to see a wave of innovation in the area of cognitive and mental health where you’re understanding those social cues, you’re using that visual stimulus to give examples, you’re helping to prompt what would be the next step to your learning through real life as opposed to sitting there … some of the therapeutic applications [require you] to sit there and watch YouTube videos,” she said.
“I think there’s real-time applications that can change the lives and include in the same way you do in the workplace with PowerPoint Designer and position people as the geniuses that they are … And you need to be able to perform at the same level as anyone else. Cognitive services and some of these beautiful engines could give us that capability.”
Lay-Flurrie also said people with disabilities can lead or contribute significantly to innovation.
“People with disabilities have a unique lens on the world that could really give a massive input of innovation here and accelerate our path with AI,” she said at the Microsoft Summit.
Microsoft’s chief storyteller Steve Clayton communicated a similar sentiment at the event, saying innovations designed by and for people living with disabilities can prove to be useful more broadly.
“When I started to learn about inclusivity in design, the dropped kerb was originally invented for people who are in wheelchairs. It turns out that the dropped kerb on a sidewalk is also incredibly useful if you’re carrying groceries or if you’re on a skateboard. There are these serendipitous moments I think we’ve found where we said, ‘Hey, we’re going to create a piece of technology that is for people with visual impairment or other disabilities’ that actually turned out to be incredibly useful for the rest of the world,” he said.
Microsoft has been able to integrate AI into its products, while offering new AI-powered products, because of advances in computer vision, speech recognition, and natural language understanding.
The company has developed technologies that can recognise speech with an error rate of 5.1 percent and identify images with an error rate of 3.5 percent.
Microsoft is also currently leading a competition run by Stanford University that uses information from Wikipedia to test how well AI systems can answer questions about text passages. The competition is expected to generate results that can be applied in areas such as Bing search and chatbot responses.
“This means that using AI’s deep learning, computers can recognise words in a conversation on par with a person, deliver relevant answers to very specific questions, and provide real-time translation,” the company said in an announcement on Thursday.
“It also means that computers on a factory floor can distinguish between a fabricated part and a human arm, or that an autonomous vehicle can tell the difference between a bouncing ball and a toddler skipping across a street.”
In Australia, the University of Canberra has developed the Lucy and Bruce chatbots to streamline support services for students and employees using Microsoft Bot Framework and Microsoft Cognitive Services Language Understanding Intelligent Service.
Once launched, Lucy will connect to the university’s Dynamics 365 platform, allowing students to raise tickets when Lucy can’t find the answer. The university is also exploring possibilities to use Bruce to allow IT service tickets to be logged by staff.
Australian Securities Exchange-listed packaging manufacturer Pact Group has also worked with Microsoft, using its Cognitive Services Computer Vision for facial and object recognition to boost workplace safety.
Pact’s Workroom Kiosk Demo can recognise individual employees in a workshop environment, detecting if they are wearing appropriate safety gear and monitoring their behaviour based on an understanding of the tasks individual employees are authorised to perform. Team leaders are automatically alerted if there are potential issues, and an on-site trial of the system will be launched soon.
Microsoft has also announced advancements to Translator, with expanded use of neural networks to improve both text and speech translations in all of Translator’s supported products.
For people learning Chinese, the company will “soon” release a new mobile application from Microsoft Research Asia that can act as an always available, AI-based language learning assistant.
The company has additionally announced Visual Studio Tools for AI, aimed at AI developers and data scientists, which it said combines Visual Studio capabilities such as debugging and rich editing with support for deep learning frameworks such as Microsoft Cognitive Toolkit, Google TensorFlow, and Caffe. Visual Studio Tools for AI leverages existing code support for Python, C/C++/C#, and adds support for Cognitive Toolkit BrainScript, Microsoft said.
AI capabilities for Azure IoT Edge — which enable developers to build and test container-based modules using C, Java, .NET, Node.js and Python, and simplify the deployment and management of workloads and machine learning models at the edge — are also now generally available, the company said.
“AI is about amplifying human ingenuity through intelligent technology that will reason with, understand, and interact with people and, together with people, help us solve some of society’s most fundamental challenges,” Clayton said in a statement.
Microsoft researchers have developed a new method for discovering software security vulnerabilities that uses machine learning and deep neural networks to help the system root out bugs better by learning from past experience. This new research project, called neural fuzzing, is designed to augment traditional fuzzing techniques, and early experiments have demonstrated promising results.
Software security testing is a hard task that is traditionally done by security experts through costly and targeted code audits, or by using very specialized and complex security tools to detect and assess vulnerabilities in code. We recently released a tool, called Microsoft Security Risk Detection, that significantly simplifies security testing and does not require you to be an expert in security in order to root out software bugs. The Azure-based tool is available to Windows users and in preview for Linux users.
Fuzz testing
The key technology underpinning Microsoft Security Risk Detection is fuzz testing, or fuzzing. It’s a program analysis technique that looks for inputs causing error conditions that have a high chance of being exploitable, such as buffer overflows, memory access violations and null pointer dereferences.
Fuzzers come in different categories:
Blackbox fuzzers, also called “dumb fuzzers,” rely solely on the sample input files to generate new inputs.
Whitebox fuzzers analyze the target program either statically or dynamically to guide the search for new inputs aimed at exploring as many code paths as possible.
Greybox fuzzers, just like blackbox fuzzers, don’t have any knowledge of the structure of the target program, but make use of a feedback loop to guide their search based on observed behavior from previous executions of the program.
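The greybox feedback loop described above can be sketched in a few lines of Python. This is an illustrative toy, not AFL’s actual algorithm: `toy_target` stands in for an instrumented program that reports which code paths an input exercised, and `mutate` stands in for AFL’s much richer mutation stack.

```python
import random

def mutate(data: bytes) -> bytes:
    """Flip one random bit -- a stand-in for a real fuzzer's mutation strategies."""
    if not data:
        return bytes([random.randrange(256)])
    i = random.randrange(len(data))
    return data[:i] + bytes([data[i] ^ (1 << random.randrange(8))]) + data[i + 1:]

def greybox_fuzz(target, seeds, iterations=2000):
    """Minimal coverage-guided (greybox) loop: inputs that exercise new
    behavior are kept in the corpus and mutated further; the rest are
    discarded. `target` returns the set of code paths an input hits."""
    corpus = list(seeds)
    seen = set()
    for _ in range(iterations):
        child = mutate(random.choice(corpus))
        coverage = target(child)
        if not coverage <= seen:        # the feedback loop: anything new?
            seen |= coverage
            corpus.append(child)
    return corpus, seen

def toy_target(data: bytes):
    """Pretend instrumented program with one nested branch."""
    cov = {"entry"}
    if data and data[0] & 0x80:
        cov.add("high_bit")
        if len(data) > 1 and data[1] & 0x01:
            cov.add("nested")
    return cov
```

Starting from an all-zero seed, the loop reaches the nested branch only because the intermediate discovery is retained in the corpus and built upon, which is exactly what distinguishes greybox from blackbox fuzzing.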
Figure 1 – Crashes reported by AFL (experimental support in MSRD).
Neural fuzzing
Earlier this year, Microsoft researchers, including myself, Rishabh Singh, and Mohit Rajpal, began a research project looking at ways to improve fuzzing techniques using machine learning and deep neural networks. Specifically, we wanted to see what a machine learning model could learn if we were to insert a deep neural network into the feedback loop of a greybox fuzzer.
For our initial experiment, we looked at whether we could learn over time by observing past fuzzing iterations of an existing fuzzer.
We applied our methods to a type of greybox fuzzer called American fuzzy lop, or AFL.
We tried four different types of neural networks and ran the experiment on four target programs, using parsers for four different file formats: ELF, PDF, PNG, XML.
The results were very encouraging: we saw significant improvements over traditional AFL in terms of code coverage, unique code paths and crashes for all four input formats.
The AFL variant using a deep neural network based on the long short-term memory (LSTM) model gives around a 10 percent improvement in code coverage over traditional AFL for two file parsers: ELF and PNG.
When looking at unique code paths, neural AFL discovered more unique paths than traditional AFL for all parsers except PDF. For the PNG parser, after 24 hours of fuzzing it found twice as many unique code paths as traditional AFL.
Figure 2 – Input gain over time (in hours) for the libpng file parser.
A good way to evaluate fuzzers is to compare the number of crashes reported. For the ELF file parser, neural AFL reported more than 20 crashes, whereas traditional AFL did not report any. This is astonishing given that neural AFL was trained on AFL itself. We also observed more crashes being reported for text-based file formats like XML, where neural AFL found 38 percent more crashes than traditional AFL. For PDF, traditional AFL did better overall than neural AFL in terms of new code paths found. However, neither system reported any crashes.
Figure 3 – Reported crashes over time (in hours) for readelf (left) and libxml (right).
Overall, using neural fuzzing outperformed traditional AFL in every instance except the PDF case, where we suspect the large size of the PDF files incurs noticeable overhead when querying the neural model.
In general, we believe our neural fuzzing approach yields a novel way to perform greybox fuzzing that is simple, efficient and generic.
Simple: The search is not based on sophisticated hand-crafted heuristics — the system learns a strategy from an existing fuzzer. We just give it sequences of bytes and let it figure out all sorts of features and automatically generalize from them to predict which types of inputs are more important than others and where the fuzzer’s attention should be focused.
Efficient: In our AFL experiment, in the first 24 hours we explored significantly more unique code paths than traditional AFL. For some parsers we even report crashes not already reported by AFL.
Generic: Although we’ve tested it only on AFL, our approach could be applied to any fuzzer, including blackbox and random fuzzers.
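The “attention” idea from the Simple point above can be sketched as follows: a model assigns each byte offset a score for how promising it is to mutate, and the mutator samples offsets in proportion to those scores instead of uniformly. Everything here is illustrative; `header_heatmap` is a hand-written stub standing in for the trained LSTM described in the post.

```python
import random

def mutate_guided(data: bytes, heatmap, k: int = 4) -> bytes:
    """Flip one bit at each of k positions sampled from the model's
    per-offset usefulness scores (the 'heat map')."""
    weights = heatmap(data)              # one non-negative score per offset
    out = bytearray(data)
    for i in random.choices(range(len(data)), weights=weights, k=k):
        out[i] ^= 1 << random.randrange(8)
    return bytes(out)

def uniform_heatmap(data: bytes):
    """Baseline: every offset equally interesting (plain greybox behavior)."""
    return [1.0] * len(data)

def header_heatmap(data: bytes):
    """Stub for a model that has learned the first 8 header bytes matter most."""
    return [10.0 if i < 8 else 0.1 for i in range(len(data))]
```

Swapping `uniform_heatmap` for `header_heatmap` concentrates mutations on the bytes the model considers important, which is the whole effect the learned model contributes to the loop.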
We believe our neural fuzzing research project is just scratching the surface of what can be achieved using deep neural networks for fuzzing. Right now, our model only learns fuzzing locations, but we could also use it to learn other fuzzing parameters such as the type of mutation or strategy to apply. We are also considering online versions of our machine learning model, in which the fuzzer constantly learns from ongoing fuzzing iterations.
William Blum leads the engineering team for Microsoft Security Risk Detection.
AUSTIN — Don Freese said infosec professionals too often lead with fear and emotion when discussing security issues, when they should be speaking a language that C-level executives and board members understand: risk.
Freese, deputy assistant director of the FBI and former head of the bureau’s National Cyber Investigative Joint Task Force (NCIJTF), spoke Monday morning at the (ISC)2 Security Congress about the importance of cyber-risk management and how the lack of proper practices is hurting enterprise security postures.
“When we start to use emotion and fear to drive the conversation – and often times it’s said in the security game that our worst problem is people – we’re failing in that fundamental message,” Freese said during a keynote discussion with Brandon Dunlap, senior manager of security, risk and compliance at Amazon.
Trying to spur executives into action through fear isn’t effective, Freese said. Instead, security professionals need to identify and measure the various risks to an organization and determine which ones are most pressing and need a portion of the organization’s limited resources in order to be mitigated. “That’s the way we connect with the business world,” he said. “We want to talk about increasing the rigor in how we manage risk.”
Good cyber-risk management starts, Freese said, with enterprise security teams distinguishing between a risk and a threat. However, Freese said that “regrettably, …often times we conflate the two [risks and threats],” which leads to every conceivable risk being viewed as an impending threat.
“That’s simply not a good way to communicate what we’re trying to do. It’s not giving us traction in the world about how we prioritize our resources against those particular threats,” Freese said, adding that it confuses the message. “We’re crying wolf.”
Instead, security teams must delineate between what cyber threats are possible (pretty much everything, he said) and what’s probable (a much smaller and more manageable pool) while analyzing the intent and capability of the potential threat actor, the frequency of the threat and the potential impact of a successful attack.
“If we can start the conversation with not only probability but describe the frequency and the magnitude of the impacts based on the intent and capability, then we start to set up a much more understandable paradigm,” Freese said. “And let me pause and say it’s difficult to do, and that’s why we’re not doing it yet.”
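Freese’s framing of probability, frequency and magnitude maps onto the standard annualized loss expectancy calculation used in quantitative risk analysis. The threat names and figures below are invented purely to show how the comparison works:

```python
def annualized_loss_expectancy(frequency_per_year: float, impact_per_event: float) -> float:
    """Expected annual loss: how often a threat occurs times what each occurrence costs."""
    return frequency_per_year * impact_per_event

# Hypothetical figures for two threats, for illustration only:
risks = {
    "commodity phishing":  annualized_loss_expectancy(12, 5_000),       # frequent, low impact
    "targeted ransomware": annualized_loss_expectancy(0.2, 2_000_000),  # rare, high impact
}

# The threat worth the largest share of limited resources, in expectation.
most_pressing = max(risks, key=risks.get)
```

Even this crude model makes the point of the keynote: a rare event can dominate the risk picture once frequency and magnitude are both quantified, which a fear-driven conversation never surfaces.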
Cybersecurity insurance: not the answer, yet
Dunlap asked Freese about the growth of the cybersecurity insurance market and if it could help organizations with cyber-risk management. “Cyber insurance hasn’t really settled in as a real robust mechanism yet. It’s still mostly business insurance, but that’s because we don’t measure the risks very well,” Freese said.
However, Freese said insurance actuaries are working on the issue, and there is potential for collaboration between the two fields. “There are several different actuarial groups that are looking at cyber risk to measure that in a way that’s quantifiable for pure insurance purposes,” he said.
Still, he said organizations must at least start moving toward a defined cyber-risk management plan. Freese said that in his role at the FBI, he has worked with companies across the globe on addressing cybersecurity issues and threats, and he stressed that the companies that are successful and don’t find themselves in data breach headlines all have one thing in common.
“They’re managing risk in a very measurable, very incremental and consistent type of way,” he said. “They know what’s going on in their networks and they know what type of data they have.”
New Windows 10 syncing features should be popular among users but could lead to IT security risks.
Microsoft’s upcoming Windows 10 Fall Creators Update will include the Continue on PC feature, which allows users to start web browsing on their Apple iPhones or Google Android smartphones and then continue where they left off on their PCs. A similar feature called Timeline, which will allow users to access some apps and documents across their smartphones and PCs, is also in the works. IT will have to pay close attention to both of these features, because linking PCs to other devices can threaten security.
“It does have the potential to be a real mess,” said Willem Bagchus, messaging and collaboration specialist at United Bank in Parkersburg, W.Va. “To pick up data on another device, you have to do it securely. This has to be properly protected.”
How Continue on PC works
Continue on PC syncs browser sessions through an app for iPhones and Android smartphones. Users must be logged into the same Microsoft account in the app and on their Windows 10 PC.
When on a webpage, smartphone users can select the Share option in the browser and choose Continue on PC, which syncs the browsing session through the app. The feature is currently available as part of a preview build leading up to the Windows 10 Fall Creators Update, and the iOS app is already available in the Apple App Store.
Microsoft did not say if the feature will allow users to continue a browsing session on their smartphone that started on their PC. Apple’s Continuity feature offers this capability, and the Google Chrome browser lets users share tabs and browsing history across multiple devices as well.
Continue on PC could expose sensitive data when sharing web applications through synced devices, said Jack Gold, founder and principal analyst of J. Gold Associates, a mobile analyst firm in Northborough, Mass.
For example, if a thief steals a user’s personal laptop that is synced to a corporate phone, the thief could access business web apps through a synced browsing session, exposing company data. If the feature is expanded to share browsing sessions from a PC to a smartphone, stealing a user’s smartphone would be enough to reach the web apps the employee used on their PC.
“It could be something to worry about if a user loses their phone,” Gold said. “I can’t lose that device because it can sync to my PC.”
To avoid this problem, IT could use enterprise mobility management (EMM) software to blacklist the Continue on PC app altogether, or simply prevent users from sharing the browser session through the app.
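At its core, an EMM blacklist is a lookup of an app’s identifier against a deny list maintained by IT. A minimal sketch of that check (the bundle ID is hypothetical; real EMM products configure this through vendor policy consoles, not code):

```python
# Sketch of an EMM-style app blacklist check. The bundle ID below is
# hypothetical -- real EMM suites express deny lists as managed policy,
# but the enforcement logic reduces to a membership test like this.

BLACKLIST = {"com.microsoft.continueonpc"}  # hypothetical bundle ID

def launch_allowed(bundle_id: str) -> bool:
    """Return False for apps the organization has blacklisted."""
    return bundle_id not in BLACKLIST

print(launch_allowed("com.microsoft.continueonpc"))  # False
print(launch_allowed("com.example.approved"))        # True
```

The alternative mentioned above — allowing the app but blocking its session-sharing behavior — would instead be an app-level configuration restriction rather than an outright deny-list entry.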
Timeline shares security issues
Originally, Timeline was supposed to be part of the Windows 10 Fall Creators Update, but now it will come out in a preview build shortly afterward, Microsoft said.
Timeline suggests recent documents and apps a user accessed on a synced smartphone and lets them open some of those items on their PC, and vice versa. Microsoft hasn’t disclosed which apps the feature will support.
This feature could also cause a security problem if a user loses their PC or smartphone and it falls into the wrong hands. Timeline is essentially a dashboard displaying every app, document and webpage the user was working in across multiple devices, so someone could use the stolen device to access documents, apps and web apps that contain work data.
“Security is needed across the board,” said Bagchus, whose company plans to move to Windows 10 next year. “It absolutely has to be managed.”
EMM software should also come into play when managing this feature, he said.
IT needs to force users to have passwords on all PCs and mobile devices to protect from these instances, said Jim Davies, IT director at Ongweoweh Corp., a pallet and packing management company in Ithaca, N.Y.
“This is something that will be used by a lot of people in a lot of companies,” Davies said. “People won’t need to email themselves a link because this makes it simpler. That being said, your password is that much more important now.”
Ongweoweh Corp. plans to migrate to Windows 10 in the first quarter of 2018.
It is likely that these Windows 10 syncing features won’t be limited to smartphones, and iPads and Android tablets could gain this ability in the future, Bagchus said.
“This feature … makes productivity easier,” Bagchus said. “This will be huge.”