Deepfake technology has advanced at a rapid pace, but the infosec community is still undecided about how much of a threat deepfakes represent.
Many are familiar with deepfakes in their video and image form, where machine learning technology generates a celebrity saying something they didn’t say or puts a different celebrity in their place. However, deepfakes can also appear in audio and even text-based forms. Several sessions at RSA Conference 2020 examined how convincing these fakes can be, as well as technical approaches to detect them. But so far, threat researchers are unsure if deepfakes have been used for cyberattacks in the wild.
In order to explore the potential risk of deepfakes, SearchSecurity asked a number of experts about the threat deepfakes pose to society. In other words, should we be worried about deepfakes?
There was a clear divide in the responses between those who see deepfakes as a real threat and those who were more lukewarm on the idea.
Concern about deepfakes
Some security experts at RSA Conference 2020 feared that deepfakes would be used as part of disinformation campaigns in U.S. elections. McAfee senior principal engineer and chief data scientist Celeste Fralick said that with the political climate being the way it is around the world, deepfakes are “absolutely something that we should be worried about.”
Fralick cited a demonstration of deepfake technology during an RSAC session presented by Sherin Mathews, senior data scientist at McAfee, and Amanda House, data scientist at McAfee.
“We have a number of examples, like Bill Hader morphing into Tom Cruise and morphing back. I never realized they looked alike, but when you see the video you can see them morph. So certainly in this political climate I think that it’s something to be worried about. Are we looking at the real thing?”
Jake Olcott, BitSight’s vice president of communications and government affairs, agreed, saying that deepfakes are “a huge threat to democracy.” He noted that the platforms that own the distribution of content, like social media sites, are doing very little to stop the spread of misinformation.
“I’m concerned that because the fakes are so good, people are either not interested in distinguishing between what’s true and what’s not, but also that the malicious actors, they recognize that there’s sort of just like a weak spot and they want to just continue to pump this stuff out.”
CrowdStrike CTO Mike Sentonas made the point that deepfakes are getting harder to spot and easier to create.
“I think it’s something we’ll more and more have to deal with as a community.”
Deepfake threats aren’t pressing
Other security experts such as Patrick Sullivan, Akamai CTO of security strategy, weren’t as concerned about the potential use of deepfakes in cyberattacks.
“I don’t know if we should be worrying. I think people should be educated. We live in a democracy, and part of that is you have to educate yourself on things that can influence you as someone who lives in a democracy,” Sullivan said. “I think people are much smarter about the ways someone may try to divide online, how bots are able to amplify a message, and I think the next thing people need to get their arms around is video, which has always been an unquestionable point of data, which you may have to be more skeptical about.”
Malwarebytes Labs director Adam Kujawa said that while he’s not so worried about the much-publicized deepfake videos, he is concerned about deepfake text and systems that automatically predict or create text based on a user’s input.
“[That] I see as being pretty dangerous, because if you utilize that with limited input derived from social media accounts, you can create a pretty convincing spear phishing email, almost on the fly.”
That said, he echoed Sullivan’s point that people are generally able to spot when something is obviously not real.
“They are getting better [however], and we need to develop technology that can identify these things you and I won’t be able to, because eventually that’s going to happen,” Kujawa said.
Greg Young, Trend Micro’s vice president of cybersecurity, went as far as to call deepfakes “not a big deal.”
However, he added, “I think where it’s going to be used is business email compromise, where you try to get a CEO or CFO to send you a Western Union payment. So if I can imitate that person’s voice, deepfake for voice alone would be very useful because I can tell the CFO to do this thing if I’m the person pretending to be the CEO, and they’re going to do it. We don’t leave video messages today, so the video side I’m less concerned about. I think deepfakes will be used more in disinformation campaigns. We’ve already seen some of that today.”
It seems that every year there’s a new record for the pace of change in IT, from the move from mainframe to client/server computing, to embracing the web and interorganizational data movements. The current moves that affect organizations are fundamental, and IT operations had better pay attention.
Cloud providers are taking over ownership of the IT platform from organizations. Organizations are moving to a multi-cloud hybrid platform to gain flexibility and the ability to quickly respond to market needs. Applications have started to transition from monolithic entities to composite architectures built on the fly in real time from collections of functional services. DevOps has affected how IT organizations write, test and deliver code, with continuous development and delivery relatively mainstream approaches.
These fundamental changes mean that IT operations managers have to approach the application environment in a new way. Infrastructure health dashboards don’t meet their needs. Without deep contextual knowledge of how the platform looks at an instant, and what that means for performance, administrators will struggle to address issues raised.
Enter AIOps platforms
AIOps means IT teams use artificial intelligence to monitor the operational environment and rapidly and automatically remediate any problems that arise — and, more to the point, prevent any issues in the first place.
True AIOps-based management is not easy to accomplish. It’s nearly impossible to model an environment that continuously changes and then also plot all the dependencies between hardware, virtual systems, functional services and composite apps.
However, AIOps does meet a need. It is, as yet, a nascent approach. Many AIOps systems do not really use that much artificial intelligence; many instead rely on advanced rules and policy engines to automatically remediate commonly known and expected issues. AIOps vendors collect information on operations issues from across their respective customer bases to make the tools more useful.
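The rules-and-policy approach described above can be sketched in a few lines. This is an illustrative toy, not any vendor's product: known alert patterns map to canned remediation actions, and anything unrecognized escalates to a human.

```python
# Minimal sketch of a rules/policy remediation engine of the kind many
# "AIOps" tools actually ship. All names here are hypothetical.

RUNBOOK = {
    "disk_full":     "expand_volume",
    "service_down":  "restart_service",
    "cert_expiring": "renew_certificate",
}

def remediate(alert: str) -> str:
    """Return the automated action for a known alert, or escalate."""
    return RUNBOOK.get(alert, "page_on_call_engineer")

print(remediate("service_down"))   # a commonly known issue is auto-remediated
print(remediate("kernel_panic"))   # unexpected issues still need a person
```

The gap between this and "true" AI is exactly the point above: the engine only handles issues someone has already anticipated and encoded.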
Today’s prospective AIOps buyers must beware of portfolio repackaging — “AIOps” in the product branding doesn’t mean a tool uses true artificial intelligence. Question the vendor carefully about how its system learns on the go, deals with unexpected changes and manages idempotency. 2020 might be the year of AIOps’ rise, but it might also be littered with the corpses of AIOps vendors that get things wrong.
AIOps’ path for the future
As we move through 2020 and beyond, AIOps’ meaning will evolve. Tools will better adopt learning systems to model the whole environment and will start to use advanced methods to bring idempotency — the capability to define an end result and then ensure that it is achieved — to the fore. AIOps tools must be able to either take input from the operations team or from the platform itself and create the scripts, VMs, containers, provisioning templates and other details to meet the applications’ requirements. The system must monitor the end result from these hosting decisions and ensure that not only is it as-expected, but that it remains so, no matter how the underlying platform changes. Over time, AIOps tools should extend so that business stakeholders also have insights into the operations environment.
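The idempotency idea — declare a desired end state, then converge the actual state to it so that repeating the operation changes nothing — can be sketched as follows. The service names and replica counts are invented for illustration:

```python
# Illustrative sketch of idempotent convergence: diff actual state against a
# declared desired state, apply corrective actions, and verify that a second
# pass finds nothing left to do.

desired = {"web": 3, "worker": 2}             # declared end state
actual  = {"web": 1, "worker": 2, "old": 1}   # what is currently running

def plan(actual: dict, desired: dict) -> list:
    """Diff actual vs. desired into a list of (service, have, want) actions."""
    actions = []
    for svc, want in desired.items():
        have = actual.get(svc, 0)
        if have != want:
            actions.append((svc, have, want))
    for svc in actual:
        if svc not in desired:
            actions.append((svc, actual[svc], 0))  # undeclared: remove
    return actions

def apply_plan(actual: dict, actions: list) -> dict:
    """Apply corrective actions, returning the new state."""
    state = dict(actual)
    for svc, _, want in actions:
        if want == 0:
            state.pop(svc, None)
        else:
            state[svc] = want
    return state

state = apply_plan(actual, plan(actual, desired))
# Idempotency: running the reconciliation again produces no further actions.
assert plan(state, desired) == []
print(state)
```

This is the monitoring loop described above in miniature: the tool keeps re-planning against the declared end state, so drift in the underlying platform is corrected no matter when or why it occurs.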
Such capabilities will mean that AIOps platforms move from just operations environment tool kits to part and parcel of the overall BizDevOps workflows. AIOps will mean an overarching orchestration system for the application hosting environment, a platform that manages all updates and patches, and provides feedback loops through the upstream environment.
The new generation of AIOps tools and platforms will focus on how to avoid manual intervention in the operations environment. Indeed, manual interventions are likely to be where AIOps could fail. For example, an administrator who puts wrong information into the flow or works outside of the AIOps system to make any configuration changes could start a firestorm of problems. When the AIOps system tries to fix them, it will find that it does not have the required data available to effectively model the change the administrator has made.
2020 will see AIOps’ first baby steps to becoming a major tool for the systems administrator. Those who embrace the idea of AIOps must ensure that they have the right mindset: AIOps has to be the center of everything. Only in extreme circumstances should any action be taken outside of the AIOps environment.
The operations team must reach out to the development teams to see how their feeds can integrate into an AIOps platform. If DevOps tools vendors realize AIOps’ benefits, they might provide direct integrations for downstream workflows or include AIOps capabilities into their own platform. This trend could expand the meaning of AIOps to include business capabilities and security as well.
As organizations move to highly complex, highly dynamic platforms, any dependency on a person’s manual oversight dooms the deployment to failure. Simple automation will not be a workable way forward — artificial intelligence is a must.
Amazon is a powerhouse when it comes to recruiting. It hires at an incredible pace and may be shaping how other firms hire, pay and find workers. But it also offers a cautionary tale, especially in the use of AI.
Amazon HR faces a daunting task. The firm is adding thousands of employees each quarter through direct hiring and acquisitions. In the first quarter of 2019, it reported having 630,000 full and part-time employees. By the third quarter, that number rose 19% to 750,000 employees.
Amazon’s hiring strategy includes heavy use of remote workers or flex jobs, including a program called CamperForce. The program was designed for nomadic people who live full or part-time in recreational vehicles. They help staff warehouses during peak retail seasons.
Amazon’s leadership in remote jobs can be measured by its showing on FlexJobs, a site that specializes in connecting professionals to remote work. Amazon ranked sixth this year on FlexJobs’ list of the 100 top companies with remote jobs. The rankings, based on data from some 51,000 firms, are determined by the volume of remote job ads.
The influence of large employers
Amazon’s use of remote work is influential, said Brie Reynolds, career development manager and coach at FlexJobs. There is “a lot of value in seeing a large, well-known company — a successful company — employing remote workers,” she said.
In April, Amazon CEO Jeff Bezos challenged other retailers to raise their minimum wage to $15, which is what Amazon did in 2018. “Better yet, go to $16 and throw the gauntlet back at us,” said Bezos, in his annual letter to shareholders.
But the impact of Amazon’s wage increase also raises questions.
“Amazon is such a large employer that increases for Amazon’s warehouse employees could easily have a large spillover effect raising wage norms among employers in similar industries and the same local area,” said Michael Reich, a labor market expert and a professor of economics at the University of California at Berkeley. But without more data from Amazon and other companies in the warehouse sector, he said it’s difficult to tell where the evidence falls.
Amazon HR’s experience with AI in recruiting may also be influential, but as a warning.
The warning from Amazon
In late 2018, Reuters reported that Amazon HR had developed an algorithm for hiring technical workers. But because the algorithm was trained on data from a technical workforce with a large gender gap, it recommended men over women.
The Amazon experience “shows that all historical data contains an observable bias,” said John Sumser, principal analyst at HRExaminer. “In the Amazon case, utilizing historical data perpetuated the historical norm — a largely male technical workforce.”
Any AI built on anything other than historical data runs the distinct risk of corrupting the culture of the client, Sumser said.
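Sumser's point — that a model trained on historical hiring data perpetuates the historical norm — can be illustrated with a deliberately naive toy model. This is purely illustrative and bears no relation to Amazon's actual system:

```python
from collections import Counter

# Toy illustration of "bias in, bias out": a "recommender" that learns only
# the majority pattern in historical hires will reproduce that pattern.
# The data is invented: 80% of past hires are labeled "M".
historical_hires = ["M"] * 80 + ["F"] * 20

def naive_recommender(history: list) -> str:
    """Recommend whatever the historical majority was."""
    return Counter(history).most_common(1)[0][0]

print(naive_recommender(historical_hires))  # the historical norm is perpetuated
```

Real hiring models are far more complex, but the failure mode is the same: without correction, the optimization target is the past.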
In July, Amazon said it would spend $700 million to upskill 100,000 U.S. workers through 2025. The training program amounts to about $1,000 a year per employee, which may well be less than Amazon HR’s cost of hiring new employees.
In late 2018, Amazon HR’s talent acquisition team had more than 3,500 people. The company is interested in new HR tech and takes time to meet with vendors, said an Amazon recruiting official at the HR Technology Conference and Expo.
But Amazon, overall, doesn’t say much about its HR practices and that may be tempering the company’s influence, said Josh Bersin, an independent HR analyst.
Bersin doesn’t believe the industry is following Amazon. And part of his belief is due to the company’s Apple-like secrecy on internal operations, he said.
“I think people are interested in what they’re doing, and they probably are doing some really good things,” Bersin said. “But they’re not taking advantage of the opportunity to be a role model.”
Enterprise use of public cloud storage has been growing at a steady pace for years, yet plenty of IT shops remain in the early stages of the journey.
IT managers are well aware of the potential benefits — including total cost of ownership, agility and unlimited capacity on demand — and many face cloud directives. But companies with significant investments in on-premises infrastructure are still exploring the applications where public cloud storage makes the most sense beyond backup, archive and disaster recovery (DR).
Ken Lamb, who oversees resiliency for cloud at JP Morgan, sees the cloud as a good fit, especially when the financial services company needs to get an application to market quickly. Lamb said JP Morgan uses public cloud storage from multiple providers for development and testing, production applications, and DR, and runs the workloads internally in “production parallel mode.”
JP Morgan’s cloud data footprint is small compared to its overall storage capacity, but Lamb said the company has a large migration plan for Amazon Web Services (AWS).
“The biggest problem is the way applications interact,” Lamb said. “When you put something in the cloud, you have to think: Is it going to reach back to anything that you have internally? Does it have high communication with other applications? Is it tightly coupled? Is it latency sensitive? Do you have compliance requirements? Those kind of things are key decision areas to say this makes sense or it doesn’t.”
Public cloud storage trends
Enterprise Strategy Group research shows an increase in the number of organizations running production applications in the public cloud, whereas most used it only for backups or archives a few years ago, according to ESG senior analyst Scott Sinclair. Sinclair said he’s also seeing more companies identify themselves as “cloud-first” in terms of their overall IT strategy, although many are “still beginning their journeys.”
“When you’re an established company that’s been around for decades, you have a data center. You’ve probably got a multi-petabyte environment. Even if you didn’t have to worry about the pain of moving data, you probably wouldn’t ship petabytes to the cloud overnight,” Sinclair said. “They’re reticent unless there is some compelling need. Analytics would be one.”
The Hartford has a small percentage of its data in the public cloud. But the Connecticut-based insurance and financial services company plans to use Amazon’s Simple Storage Service (S3) for hundreds of terabytes, if not petabytes, of data from its Hadoop analytics environment, said Stephen Whitlock, who works in cloud operations for compute and storage at The Hartford.
One challenge The Hartford faces in shifting from on-premises Hortonworks Hadoop to Amazon Elastic MapReduce (EMR) is mapping permissions to its large data set, Whitlock said. The company migrated compute instances to the cloud, but the Hadoop Distributed File System (HDFS)-based data remains on premises while the team sorts out the migration to the EMR File System (EMRFS), Amazon’s implementation of HDFS, Whitlock said.
Finishing the Hadoop project is the first priority before The Hartford looks to public cloud storage for other use cases, including “spiky” and “edge” workloads, Whitlock said. He knows costs for network connectivity, bandwidth and data transfers can add up, so the team plans to focus on applications where the cloud can provide the greatest advantage. The Hartford’s on-premises private cloud generally works well for small applications, and the public cloud makes sense for data-driven workloads, such as the analytics engines that “we can’t keep up with,” Whitlock said.
“It was never a use case to say we’re going to take everything and dump it into the cloud,” Whitlock said. “We did the metrics. It just was not cheaper. It’s like a convenience store. You go there when you’re out of something and you don’t want to drive 10 miles to the Costco.”
Moving cloud data back
Capital District Physicians’ Health Plan (CDPHP), a not-for-profit organization based in Albany, NY, learned from experience that the cloud may not be the optimal place for every application. CDPHP launched its cloud initiative in 2014, using AWS for disaster recovery, and soon adopted a cloud-first strategy. However, Howard Fingeroth, director of infrastructure architecture and data engineering at CDPHP, said the organization plans to bring two or three administration and financial applications back to its on-premises data center for cost reasons.
“We did a lot of lift and shift initially, and that didn’t prove to be a real wise choice in some cases,” Fingeroth said. “We’ve now modified our cloud strategy to be what we’re calling ‘smart cloud,’ which is really doing heavy-duty analysis around when it makes sense to move things to the cloud.”
Fingeroth said the cloud helps with what he calls the “ilities”: agility, affordability, flexibility and recoverability. CDPHP primarily uses Amazon’s Elastic Block Storage for production applications that run in the cloud and also has less expensive S3 object storage for backup and DR in conjunction with commercial backup products, he said.
“As time goes on, people get more sophisticated about the use of the cloud,” said John Webster, a senior partner and analyst at Evaluator Group. “They start with disaster recovery or some easy use case, and once they understand how it works, they start progressing forward.”
Evaluator Group’s most recent hybrid cloud storage survey, conducted in 2018, showed that disaster recovery was the primary use case, followed by data sharing/content repository, test and development, archival storage and data protection, according to Webster. He said about a quarter used the public cloud for analytics and tier 1 applications.
Public cloud expansion
The vice president of strategic technology for a New York-based content creation company said he is considering expanding his use of the public cloud as an alternative to storing data in SAN or NAS systems in photo studios in the U.S. and Canada. The VP, who asked that neither he nor his company be named, said his company generates up to a terabyte of data a day. It uses storage from Winchester Systems for primary data and has about 30 TB of “final files” on AWS. He said he is looking into storage gateway options from vendors such as Nasuni and Morro Data to move data more efficiently into a public cloud.
“It’s just a constant headache from an IT perspective,” he said of on-premises storage. “There’s replication. There’s redundancy. There is a lot of cost involved. You need IT people in each location. There is no centralized control over that data. Considering all the labor, ongoing support contracts and the ability to scale without doing capex [with on-premises storage], it’s more cost effective and efficient to be in the cloud.”
NEW YORK — AWS pledges to maintain its torrid pace of product and services innovations and continue to expand the breadth of both to meet customer needs.
“You decide how to build software, not us,” said Werner Vogels, Amazon vice president and CTO, in a keynote at the AWS Summit NYC event. “So, we need to give you a really big toolbox so you can get the tools you need.”
But AWS, which holds a healthy lead over Microsoft and Google in the cloud market, also wants to serve as an automation engine for customers, Vogels added.
“I strongly believe that in the future … you will only write business logic,” he said. “Focus on building your application, drop it somewhere and we will make it secure and highly available for you.”
Parade of new AWS services continues
Vogels sprinkled a series of news announcements throughout his keynote, two of which centered on containers. First, Amazon CloudWatch Container Insights, a service that provides container-level monitoring, is now in preview for monitoring clusters in Amazon Elastic Container Service and Amazon Fargate, in addition to Amazon EKS and Kubernetes. In addition, AWS for Fluent Bit, which serves as a centralized environment for container logging, is now generally available, he said.
Serverless compute also got some attention with the release of Amazon EventBridge, a serverless event bus to take in and process data across AWS’ own services and SaaS applications. AWS customers currently do this with a lot of custom code, so “the goal for us was to provide a much simpler programming model,” Vogels said. Initial SaaS partners for EventBridge include Zendesk, OneLogin and Symantec.
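The simpler programming model EventBridge offers centers on publishing structured events to a bus and letting rules route them, instead of writing custom glue code. A minimal sketch using boto3's documented `put_events` entry shape — the bus name, source and detail values here are invented:

```python
import json

# Sketch of publishing an application event to an EventBridge bus.
# The entry fields (Source, DetailType, Detail, EventBusName) follow the
# boto3 events.put_events format; everything else is hypothetical.

def make_entry(source: str, detail_type: str, detail: dict) -> dict:
    """Build a single EventBridge event entry."""
    return {
        "Source": source,
        "DetailType": detail_type,
        "Detail": json.dumps(detail),   # Detail must be a JSON string
        "EventBusName": "default",
    }

entry = make_entry("my.app.orders", "OrderPlaced", {"order_id": 42})

# With AWS credentials configured, this would publish the event (not run here):
# import boto3
# boto3.client("events").put_events(Entries=[entry])

print(entry["DetailType"])
```

Rules attached to the bus would then match on `Source` or `Detail` fields and fan the event out to Lambda functions, queues or the SaaS integrations named above.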
AWS minds the past, with eye on the future
Most customers are moving away from the concept of a monolithic application, “but there are still lots of monoliths out there,” such as SAP ERP implementations that won’t go away anytime soon, Vogels said.
But IT shops with a cloud-first mindset focus on newer architectural patterns, such as microservices. AWS wants to serve both types of applications with a full range of instance types, containers and serverless functionality, Vogels said.
He cited customers such as McDonald’s, which has built a home-delivery system with Amazon Elastic Container Service. It can take up to 20,000 orders per second and is integrated with partners such as Uber Eats, Vogels said.
Vogels ceded the stage for a time to Steve Randich, executive vice president and CIO of the Financial Industry Regulatory Authority (FINRA), a nonprofit group that seeks to keep brokerage firms fair and honest.
FINRA moved wholesale to AWS and its systems now ingest up to 155 billion market events in a single day — double what it was three years ago. “When we hit these peaks, we don’t even know them operationally because the infrastructure is so elastic,” Randich said.
FINRA has designed the AWS-hosted apps to run across multiple availability zones. “Essentially, our disaster recovery is tested daily in this regard,” he said.
AWS’ ode to developers
Developers have long been a crucial component of AWS’ customer base, and the company has built out a string of tool sets aimed at a broad set of languages and integrated development environments (IDEs). These include AWS Cloud9, IntelliJ, PyCharm, Visual Studio and Visual Studio Code.
VS Code is Microsoft’s lightweight, open source code editor, which has seen strong initial uptake. The AWS Toolkit for VS Code is now generally available, Vogels said to audience applause.
Additionally, the AWS Cloud Development Kit (CDK) is now generally available with support for TypeScript and Python. AWS CDK makes it easier for developers to use high-level constructs to define cloud infrastructure in code, said Martin Beeby, AWS principal developer evangelist, in a demo.
AWS seeks to keep the cloud secure
Vogels also used part of his AWS Summit talk to reiterate AWS’ views on security, as he did at the recent AWS re:Inforce conference dedicated to cloud security.
“There is no line in the sand that says, ‘This is good-enough security,'” he said, citing newer techniques such as automated reasoning as key advancements.
Classic security precautions have become practically obsolete, he added. “If firewalls were the way to protect our systems, then we’d still have moats [around buildings],” Vogels said. Most attack patterns AWS sees are not brute-force front-door efforts, but rather spear-phishing and other techniques: “There’s always an idiot that clicks that link,” he said.
The full spectrum of IT, from operations to engineering to compliance, must be mindful of security, Vogels said. This is true within DevOps practices such as CI/CD from both an external and internal level, he said. The first involves matters such as identity access management and hardened servers, while the latter brings in techniques including artifact validation and static code analysis.
AWS Summit draws veteran customers and newcomers
The event at the Jacob K. Javits Convention Center drew thousands of attendees with a wide range of cloud experience, from FINRA to fledgling startups.
“The analytics are very interesting to me, and how I can translate that into a set of services for the clients I’m starting to work with,” said Donald O’Toole, owner of CeltTools LLC, a two-person startup based in Brooklyn. He retired from IBM in 2018 after 35 years.
AWS customer Timehop offers a mobile application oriented around “digital nostalgia,” which pulls together users’ photographs from various sources such as Facebook and Google Photos, said CTO Dmitry Traytel.
A few years ago, Timehop found itself in a place familiar to many startups: low on venture capital and with no viable monetization strategy. The company created its own advertising server on top of AWS, dubbed Nimbus, rather than rely on third-party products. Once a user session starts, the system conducts an auction among multiple prominent mobile ad networks, which results in the best possible price for its ad inventory.
“Nimbus let us pivot to a different category,” Traytel said.
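The auction idea at the heart of a server like Nimbus — solicit bids from several networks and serve the highest one — reduces to a simple selection. This sketch is hypothetical and not Timehop's actual code; the networks and prices are invented:

```python
# Toy version of an in-app ad auction: ask each network for a bid and
# serve the inventory to the highest bidder.

bids = {
    "network_a": 1.25,   # CPM bids in dollars (illustrative values)
    "network_b": 2.10,
    "network_c": 1.80,
}

def run_auction(bids: dict) -> tuple:
    """Return (winner, price) for the highest bid, or (None, 0.0) if empty."""
    if not bids:
        return (None, 0.0)
    winner = max(bids, key=bids.get)
    return (winner, bids[winner])

print(run_auction(bids))  # the inventory goes to the best available price
```

The production concerns a real ad server adds — bid timeouts, floors, fill-rate fallbacks — all sit around this core comparison.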
For those who want to explore quantum computing and learn the Q# programming language at their own pace, we have created the Quantum Katas – an open source project containing a series of programming exercises that provide immediate feedback as you progress.
Coding katas are great tools for learning a programming language. They rely on several simple learning principles: active learning, incremental complexity growth, and feedback.
The Microsoft Quantum Katas are a series of self-paced tutorials aimed at teaching elements of quantum computing and Q# programming at the same time. Each kata offers a sequence of tasks on a certain quantum computing topic, progressing from simple to challenging. Each task requires you to fill in some code; the first task might require just one line, and the last one might require a sizable fragment of code. A testing framework validates your solutions, providing real-time feedback.
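The katas themselves are written in Q#, but the task-plus-checker pattern they use translates to any language. Here is the shape of a kata sketched in Python — the task, the checker and the feedback message are all illustrative, not the actual Q# test framework:

```python
# The kata pattern in miniature: each task is a function the learner fills
# in, and a checker validates it against reference cases for immediate
# pass/fail feedback.

def task_sum_of_squares(n: int) -> int:
    """Task: return 1^2 + 2^2 + ... + n^2. (Shown here already solved.)"""
    return sum(i * i for i in range(1, n + 1))

def check(task, cases) -> str:
    """Validate a solution against reference cases, like the kata harness."""
    for args, expected in cases:
        if task(*args) != expected:
            return f"Failed on input {args}"
    return "Task passed!"

print(check(task_sum_of_squares, [((3,), 14), ((1,), 1)]))
```

In the real katas the checker is a quantum simulation that verifies your Q# operation produces the correct state, but the feedback loop is the same.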
Programming competitions are another great way to test your quantum computing skills. Earlier this month, we ran the first Q# coding contest and the response was tremendous. More than 650 participants from all over the world joined the contest or the warmup round held the week prior. More than 350 contest participants solved at least one problem, while 100 participants solved all fifteen problems! The contest winner solved all problems in less than 2.5 hours. You can find problem sets for the warmup round and main contest by following the links below. The Quantum Katas include the problems offered in the contest, so you can try solving them at your own pace.
We hope you find the Quantum Katas project useful in learning Q# and quantum computing. As we work on expanding the set of topics covered in the katas, we look forward to your feedback and contributions!
Technology is changing the way people get things done. We’ve picked up the pace. Our work is more collaborative. And we’re blurring the boundaries of time and place. When we ask customers why they continue to choose Office for their most important work, they tell us that they love the power the Office apps offer. The breadth and depth of features is unmatched in the industry and allows them to do things that they just can’t do with other products. But they also tell us that they need Office to adapt to the changing environment, and they’d love us to simplify the user experience and make that power more accessible.

Today, we’re pleased to announce user experience updates for Word, Excel, PowerPoint, and Outlook rolling out gradually over the next few months. These changes are inspired by the new culture of work and designed to deliver a balance of power and simplicity.
Office is used by more than a billion people every month, so while we’re excited about these changes, we also recognize how important it is to get things right. To guide our work, we came up with “The Three Cs”—a set of guiding principles that we use as a north star. Because these principles will make this process feel different than any previous user experience update, we thought it would be useful to share them with you.
Customers—We’re using a customer-driven innovation process to co-create the design of the Office apps. That process consists of three phases: initial customer research and analysis; concepting and co-creation; and validation and refinement.
Context—Customers love the power of Office, but they don’t need every feature at the same time. We want our new designs to understand the context that you’re working in, so you can focus on the job at hand. That means surfacing the most relevant commands based on the work you’re doing and making it easy to connect and collaborate with others.
Control—We recognize that established skills and routines are powerful—and that the way someone uses the apps often depends on specific parts of the user interface. So we want to give users control, allowing them to toggle significant changes on and off.
These updates are exclusive to Office.com and Office 365—the always up-to-date versions of our apps and services. But they won’t happen all at once. Instead, over the next several months we will deploy new designs to select customers in stages and carefully test and learn. We’ll move them into production only after they’ve made it through rigorous rounds of validation and refinement.
The initial set of updates includes three changes:
Simplified ribbon—A new, updated version of the ribbon is designed to help users focus on their work and collaborate naturally with others. People who prefer to dedicate more screen space to the commands will still be able to expand the ribbon to the classic three-line view.
The first app to get this new experience will be the web version of Word and will start to roll out to select consumer users today on Office.com. Select Insiders will then see the simplified ribbon in Outlook for Windows in July.
Word, Excel, and PowerPoint for Windows offer our deepest, richest feature set—and they’re the preferred experience for users who want to get the most from our apps. Users have a lot of “muscle memory” built around these versions, so we plan on being especially careful with changes that could disrupt their work. We aren’t ready to bring the simplified ribbon to these versions yet because we feel like we need more feedback from a broader set of users first. But when we do, users will always be able to revert back to the classic ribbon with one click.
New colors and icons—Across the apps you’ll start to see new colors and new icons built as scalable graphics—so they render with crisp, clean lines on screens of any size. These changes are designed to both modernize the user experience and make it more inclusive and accessible.
The new colors and icons will first appear in the web version of Word for Office.com. Then, later this month, select Insiders will see them in Word, Excel, and PowerPoint for Windows. In July, they will go to Outlook for Windows, and in August they will begin rolling out to Outlook for Mac.
Search—Search will become a much more important element of the user experience, providing access to commands, content, and people. With “zero query search,” simply placing your cursor in the search box will bring up recommendations powered by AI and the Microsoft Graph.
Commercial users can already see this experience in action in Office.com, SharePoint Online, and the Outlook mobile app, and it will start rolling out to commercial users of Outlook on the web in August.
For an overview of these changes, check out the video below by Jon Friedman, our chief designer for Office.
To develop these initial designs, Jon’s team worked closely with customers. They collected data on how people use the apps and built prototypes to test new concepts. While we have plenty of work left to do, we’ve definitely heard encouraging things from customers using early builds:
“It’s simpler and I feel like I can open it and immediately get my bearings and move forward. Not a lot of extra information. The tasks are obvious on this screen.”
“The toolbar provides the most frequently used features…maximizing the screen real estate for the actual content.”
“I like the extra space. What I do find is that the feature to toggle it off/on is helpful because occasionally I can’t figure out (quickly) where something went.”
We plan on carefully monitoring usage and feedback as the changes roll out, and we’ll update our designs as we learn more.
Technology is changing the way people get things done at work, at school, and at home, resetting expectations for productivity. Inspired by these changes, these updates are designed to deliver a balance of power and simplicity. But what’s most exciting for us is that over the next few months we’ll be co-creating and refining these new experiences with our customers—and making the power of Office more accessible for everyone.
The mobile device management space is growing at a rapid pace, and MDM is widely used across the enterprise to manage and secure smartphones and tablets. Investing in this technology enables organizations to not just secure mobile devices themselves, but the data on them and the corporate networks they connect to, as well.
The market for MDM software is saturated now, and there are new vendors arriving in this vertical on a consistent basis. Many of the larger names in mobile security, meanwhile, have been buying up smaller vendors and integrating their technology into their mobile management offerings, while others have remained pure mobile device management companies from the beginning. So what are the best mobile device management products available today?
Since the mobile security market has become so crowded, it is harder than ever to determine what the best mobile device management products are for an organization’s environment.
To make choosing easier for readers, this article evaluates six leading EMM companies offering MDM as a part of their bundles and their products against the most important criteria to consider when procuring and deploying mobile security in the enterprise. These criteria include MDM implementation, app integration, containerization vs. non-containerization, licensing models and policy management. The mobile management vendors covered are Good Technology Inc., VMware AirWatch, MobileIron Inc., IBM MaaS360, Sophos and Citrix.
That being said, there are also niche players — such as BlackBerry — that are attempting to move into the broader MDM market outside of just securing and managing their own hardware, in addition to free offerings from the likes of Google that have attempted to compete with the above list of MDM vendors by providing tools to assist in Android device management. Even Microsoft has a small amount of MDM built into its operating systems to manage mobile devices.
Today, the vast majority of mobile devices in use — both smartphones and tablets — run on either Apple’s iOS or Google’s Android OS. So while many of today’s MDM products are also capable of managing Windows Phones, BlackBerry devices and so on, this article focuses mostly on their Apple and Android management and security capabilities.
Selecting the best mobile device management product for your organization isn’t easy. By using the criteria presented in this feature and asking six crucial questions before buying MDM software, an organization will find it easier to procure the right mobile management and security products to satisfy its enterprise needs.
Criteria #1: Implementation of MDM
Organizations should understand and plan out their mobile device deployment and MDM requirements before looking at vendors. The installation criteria for MDM normally come down to a few things: resources, money and hardware. Broadly, there are two distinct installation options when deploying an MDM product.
The first is an on-premises implementation that needs dedicated resources, both from a hardware and technical perspective, to assist with installing the system or application on a network. Vendors like Good Technology, with its Good for Enterprise suite, require the installation of servers within an organization’s DMZ. This will necessitate firewall changes and operating system resources to implement.
These systems will then need to be managed appropriately to verify that they’re consistently patched and scanned for vulnerabilities, among other issues. In essence, this type of MDM deployment is treated as an additional server on an organization’s network.
It’s possible that a smaller business might shy away from an install of this nature due to the requirements and technical know-how it would take to get off the ground. On the other hand, if businesses are able to manage this type of mobile management and security product, it gives them complete ownership of these systems and the data that’s on them.
The second installation type is a cloud-based service that enables an off-premises installation of MDM, removing any concerns about management, technical resources and hardware. Vendors like VMware AirWatch and Sophos let customers provision their entire MDM product in the cloud and manage the system from any internet connection. This is both a pro and a con: It allows companies with resource constraints — such as limited experience or headcount — to get an MDM product set up quickly, but at the risk of having data reside in the cloud, outside the organization’s complete control.
Depending on an organization’s resource availability, technical experience and risk appetite, these are the two options — on-premises and cloud — currently available for installing MDM.
Criteria #2: App integration
Apps are a major reason mobile device popularity and demand have increased exponentially over the years. Without apps that work both properly and securely, the power of mobile devices and the ability of users to take full advantage of these tools become severely limited.
MDM companies have realized this need for functionality and security, so they’ve created business-grade apps that enable productivity without compromising the integrity of mobile devices, the data on them and the networks to which they connect.
Citrix has created XenMobile Apps that are tied together and save data in a secure sandbox on mobile devices, so users don’t need to send business data to unapproved, potentially insecure apps outside the enterprise’s control. The sandboxing technology works by securing, and at times even partitioning, the MDM app separately from the rest of the mobile OS — essentially isolating it from the rest of the device, while still allowing the user to work securely and efficiently.
There are also third-party app vendors that MDM vendors have partnered with to create branded apps. Good Technology, for example, has partnered with many large vendors to accommodate the need to use their apps within a specific MDM environment. This integration is extremely helpful and creates synergy between the vendors, resulting in better security and more productive users. Sophos offers something similar with its Secure Workspace feature, which lets users access files within a container while securing access to those documents.
Whether you’re using apps created by an MDM vendor for additional security, or apps that have been developed through the collaboration of an MDM vendor and a third-party vendor, it’s important to know that most of the work on a mobile device is done via these apps, and securing the data that flows through them and is created on them is important.
Criteria #3: Container vs. non-container
There are two major operational options available when researching MDM products: MDM that uses the container approach and MDM that uses the non-container approach. This is a major decision that needs to be made before selecting a mobile management product, as most vendors only subscribe to one of these methods.
This decision, whether to go with the container or non-container method of mobile management, will guide the device policy, app installation policy, BYOD plans and data security for the mobile devices that an organization is looking to manage.
A containerized approach is one that keeps all the data and access to corporate resources contained within an app on a mobile device. This app normally won’t allow data to pass from the container to the rest of the device, or vice versa.
Both the Good for Enterprise suite and MaaS360 offer MDM products that enable customers to use a containerized approach. Large companies tend to benefit from this approach — as do government agencies and financial institutions — as it tends to offer the highest degree of protection for sensitive data.
Once a container is removed from a mobile device, all organizational data is gone, and the organization can be sure there was no data leakage onto the mobile device.
In contrast to the restricted tactic used by containerization, the non-container approach creates a more fluid and seamless user experience on mobile devices. Companies like VMware AirWatch, Sophos and MobileIron are the leaders in this approach, which enables security on mobile devices via policies and integrated apps. This means these systems rely on pushing policies to the native OS to control their mobile devices. They also support multiple integrated apps — supplied by trusted vendors the MDM companies have partnered with — that add an additional layer of security to their data. These companies also allow the use of containers when needed, helping bridge the gap between the two approaches to meet varied customer needs.
Many organizations, including startups and those in retail, lean toward the non-container approach for mobile management and security due to the speed and native familiarity that end users already have with their mobile devices — with OS-bundled calendaring and mail apps, for example. Keep in mind, however, that to completely secure all the data on mobile devices, the non-container approach requires the aforementioned tight MDM policies and integrated apps to enforce the protection of data.
Criteria #4: License models
The licensing model for MDM has changed slightly in recent years. In the past, there was only a per-device license model, which pushed some organizations into licensing arrangements that weren’t financially effective for them. With the emergence of tablets and users carrying multiple smartphones, a need arose for a license model based on the user — and not the individual device.
All the MDM products covered in this article offer similar, if not identical, pricing models. MDM vendors have listened to their customers and realized that end users today often carry more than one device. Which licensing model an organization chooses — per-device or user-based — depends on the company’s mobile device inventory.
The per-device model normally works well for small companies. In this model, every enrolled device counts against the organization’s total license count; if a user has three devices, all three go against the total. These licenses are normally cheaper per seat, but can quickly become expensive if there are multiple devices per user requiring coverage.
The user-based pricing model, by contrast, takes into account the need for users to have multiple devices that all require MDM coverage. With this model, the user name is the basis of the license, and the user can have multiple devices attached to a single license. This is why many larger organizations lean toward this model, or at least a hybrid of the two licensing models — to account for users who use multiple mobile devices.
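To see where the two models diverge, here is a minimal back-of-the-envelope comparison. The per-seat prices are hypothetical and purely illustrative; real MDM pricing varies by vendor, volume and contract terms.

```shell
# Back-of-the-envelope comparison of the two MDM licensing models.
# All prices are hypothetical -- real pricing varies by vendor and volume.
PER_DEVICE_PRICE=4      # monthly cost per enrolled device
PER_USER_PRICE=7        # monthly cost per named user, any number of devices

USERS=500
DEVICES_PER_USER=2      # e.g., a smartphone plus a tablet per user

device_model_cost=$((PER_DEVICE_PRICE * USERS * DEVICES_PER_USER))
user_model_cost=$((PER_USER_PRICE * USERS))

echo "Per-device model: \$${device_model_cost}/month"   # $4000/month
echo "Per-user model:   \$${user_model_cost}/month"     # $3500/month
```

With two devices per user, the nominally cheaper per-device seat ends up costing more overall — which is exactly the situation that pushed larger organizations toward user-based or hybrid licensing.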
Criteria #5: Policy management
This is an important feature of mobile device management, and one that organizations need to review with either a request for proposal (RFP) or something that outlines the details of what mobile device policies they require. Mobile policies enable organizations to make granular changes to a mobile device to limit certain features — the camera and apps, among others — push wireless networks, create VPN tunnels and whitelist apps. This is the nuts and bolts of MDM, and a criterion that should be reviewed heavily during the proof-of-concept stage with specific vendors.
This ability to push certain features of a policy to mobile devices is certainly required, as is the ability to wipe devices remotely should they be lost or stolen. While all the MDM products covered in this article provide the ability to remotely wipe mobile devices, in the case of Good for Enterprise and IBM MaaS360, organizations have the option to wipe mobile devices completely or to just remove the container.
Also important for MDM products is the ability to perform actions such as VPN connections, wireless network configurations and certificate installs, which AirWatch can accomplish. Sophos also offers the ability to manage policies from a security perspective by enforcing antiphishing, antimalware and web protection.
Specify these options in an RFP beforehand to determine what parts of the mobile device policy you’re looking to secure. Evaluating what policy changes can be pushed to a mobile device, and what functions an organization wants to see within a policy, will provide the insight needed for an educated decision on the best mobile device management products.
In most cases, multiple policies will be created so that certain users receive one policy while users with other needs receive a completely different MDM policy. This is a standard function within all MDM products, but it should be understood that a single policy for all users is not always practical.
Finding the best mobile device management product for you
There are many vendors in this saturated market, but following these five criteria should assist organizations in narrowing the field down to find the best mobile device management products available today. There is much overlap between vendors, but finding the right one that can secure an organization’s data completely and offer full coverage, with the ability to manage all the aspects needed in a policy, is what businesses should be aiming for in MDM products.
Many large companies, especially those in the financial or government sector, run Good for Enterprise due to the extra layer of security it provides by leveraging a container and integrated apps developed by partner vendors.
IBM MaaS360, on the other hand, offers both a container and a non-container approach to mobile security and management, which makes it suitable for larger enterprises that require some flexibility in how they deploy. This ability to play to both sides gives IBM MaaS360 some leverage over competitors by attracting customers from both mindsets.
Many midsize companies don’t have to meet the level of security imposed on large financial institutions, and thus aren’t rushing to boost their mobile device security. Compliance requirements, however, will often add an extra layer of required security, making these organizations more conscientious about securing data on mobile devices.
Midsize to large companies — those outside of the financial sector — tend to run AirWatch, Sophos or MobileIron MDM due to their abilities to keep the native feel of mobile devices intact, while being able to push custom policies that secure mobile devices to the clients.
As for app integration, Citrix has performed very well in this area with XenMobile, having shown that it’s pushing the boundaries of this area. These apps are selling points to many customers who want to integrate their data onto a mobile device, but want the flexibility to manage the data these mobile apps are consuming. By dispensing these approved apps to managed mobile devices and writing policy for their data to be used on these apps, MDM products, such as Citrix’s, assist with adding an extra layer of data control for the company and ease of use for the user.
As mobile devices become more indispensable for business users, the MDM market will keep expanding in response to the growing need for mobile security.
Evolution doesn’t occur at a steady pace. It’s marked by moments of consequential and relatively sudden change, which significantly alter survival dynamics and give rise to entirely new paradigms.
This happened with the Cambrian explosion. Approximately 541 million years ago, and over the next 70 million to 80 million years, organisms rapidly evolved from mostly single-cell to complex and diverse creatures that better resemble life on planet Earth as we know it.
As CloudBees CTO and Jenkins founder Kohsuke Kawaguchi explained in his Jenkins World keynote, the Cambrian explosion serves as an apt metaphor for both Jenkins and the DevOps digital transformation.
Otherwise mundane elements sparked and fueled the Cambrian explosion, like the gradual evolution of eyesight. While crude at first, many experts believe eyesight reached a tipping point that upheaved the predator-prey dynamic by enabling predators to hunt more effectively. This increased pressure to evolve and kicked off an arms race, as prey developed better defense features, like armor, speed and camouflage.
Automation, cloud and mobile: Fueling the DevOps digital transformation
For Jenkins, which started as a single app for a single use case, the automation features in early builds stand in for eyesight, while mobility and cloud play the same role for DevOps as a whole. Modern software as we know it has been around for about 70 years, but it’s easy to see mobility, cloud and automation igniting software’s Cambrian explosion.
All were limited and seemingly innocuous at first, but eventually developed to enable an online broker like Amazon to compete with Walmart, the world’s largest physical retailer. The pressure to evolve is why Walmart dropped $3 billion on e-commerce startup Jet.com earlier this year. The pressure to evolve is why all businesses are now in the software business — a refrain repeated at Jenkins World.
Evolution equals transformation, and the latter was a steady theme at Jenkins World, although both could just as easily double as warnings. CloudBees CEO Sacha Labourey drove that point home in his keynote on “Digital Darwinism,” quoting Eric Shinseki, retired Army general and former U.S. secretary of Veterans Affairs: “If you dislike change, you’re going to dislike irrelevance even more.”
Instant insights, the next big thing
CloudBees used Jenkins World to launch DevOptics, which Labourey claimed provides a “single source of truth” for a “holistic view” of the deployment pipeline, aggregating data from disparate tools and teams. From his description, it’s a DevOps system of record — one that ultimately helps the business side “identify ROI from DevOps initiatives,” according to CloudBees.
CloudBees wasn’t alone in trying to make metric sense of the deployment pipeline. Electric Cloud recently unveiled ElectricFlow 8.0 with DevOps Insight Analytics, using Jenkins World to show it off to prospective developers. According to Electric Cloud, Insight Analytics provides “teams with automated data collection and powerful reporting to connect DevOps toolchain metrics and performance back to the milestones and business value (features, user stories) being delivered in every release.”
Anders Wallgren, CEO at Electric Cloud, based in San Jose, Calif., said it offers instant insights into relevant pipeline analytics, helping both IT and business leaders troubleshoot bottlenecks and spot trends.
So, what’s the big deal about dashboards and insights? Plenty, according to Kawaguchi, who pointed in particular to CloudBees’ Blue Ocean. He sees it as another element fueling the DevOps digital transformation.
A friendly UI that both business and IT can understand improves the continuous delivery user experience. Think of it as extending the pipeline beyond IT to business and marketing. With relevant insights, organizations can better meet customer needs and react to customer demands.
It’s both an evolutionary and revolutionary software explosion, fueled by cloud, mobile, automation and easy access to actionable data. Take another look at Walmart as it scrambles to stave off Amazon, or at Marriott and Hilton doing the same with Airbnb. Look at Tesla and its software fix to its hardware problem. It’s already here, altering survival dynamics and giving rise to entirely new paradigms.
Applications are changing the pace of business today – from delivering amazing customer experiences, to transforming internal operations. To keep pace, developers need solutions that help them quickly build, deploy and scale applications without having to maintain the underlying web servers or operating systems. Azure App Service delivers this experience and currently hosts more than 1 million cloud applications. Using its powerful capabilities such as integrated CI/CD, deployment slots and auto scaling, developers can get applications to the end users much faster; and today we’re making it even better.
I am pleased to announce that Azure App Service is now generally available on Linux, including its Web App for Containers capability. With this, we now offer built-in image support for ASP.NET Core, Node.js, PHP and Ruby on Linux, as well as provide developers an option to bring their own Docker-formatted container images supporting Java, Python, Go and more.
In Azure, we continue to invest in providing more choices that help you maximize your existing investments. Supporting Azure App Service on Linux is an important step in that direction.
High productivity development
To accelerate cloud application development, you can take advantage of the built-in images for ASP.NET Core, Node.js, PHP and Ruby, all running on Linux, letting you focus on your applications instead of infrastructure. Just select the stack your web app needs, and we will set up the application environment and handle the maintenance for you. If you want more control of your environment, simply SSH into your application and get full remote access to administrative commands.
Pre-built packages including WordPress, Joomla and Drupal solutions are also available in Azure Marketplace and can be deployed with just a few clicks to App Service.
Ease of deployment
With the new App Service capability, Web App for Containers, you can get your containerized applications to production in seconds. Simply push your container image to Docker Hub, Azure Container Registry, or your private registry, and Web App for Containers will deploy your containerized application and provision the required infrastructure. Furthermore, whenever required, it will automatically perform Linux OS patching and load balancing for you.
Apart from the portal, you also have the option to deploy to App Service using the CLI or Azure Resource Manager templates.
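The push-and-deploy flow described above can be sketched with the Azure CLI. Everything below is illustrative: the resource group, plan and app names are placeholders, the image `myrepo/myapp:latest` is hypothetical, and flag names can change between CLI releases, so verify against `az webapp create --help` for your installed version.

```shell
# Sketch: deploying a container image to Web App for Containers with the
# Azure CLI. Names and the image reference are placeholders.
az group create --name myResourceGroup --location westus

# A Linux App Service plan (the --is-linux flag selects Linux workers)
az appservice plan create --name myPlan --resource-group myResourceGroup \
    --sku S1 --is-linux

# Create the web app from an image on Docker Hub; point this at an
# Azure Container Registry or private registry image instead if needed.
az webapp create --name my-container-app --resource-group myResourceGroup \
    --plan myPlan --deployment-container-image-name myrepo/myapp:latest
```

Running this against a real subscription would leave App Service to pull the image, provision the workers, and handle OS patching and load balancing as described above.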
Built-in CI/CD, scale on demand
Azure App Service on Linux offers built-in CI/CD capabilities and an intuitive scaling experience. With a few simple clicks, you can integrate with GitHub, Docker Hub or Azure Container Registry, and realize continuous deployment through Jenkins, VSTS or Maven.
Deployment Slots let you easily deploy to target environments, swap staging to production, schedule performance and quality tests, and roll-back to previous versions with zero downtime.
After you promote the updates to production, scaling is as simple as dragging a slider, calling a REST API, or configuring automatic scaling rules. You can scale your applications up or down on demand or automatically, and get high availability within and across different geographical regions.
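As a rough sketch, both manual scaling and an automatic CPU-based rule can be driven from the Azure CLI as well as the portal slider. The names below are placeholders (it assumes a Linux App Service plan called `myPlan` already exists in `myResourceGroup`), and flags should be checked against `az monitor autoscale --help` for your CLI version.

```shell
# Manual scale: set an existing App Service plan to three worker instances.
az appservice plan update --name myPlan --resource-group myResourceGroup \
    --number-of-workers 3

# Automatic scaling: create an autoscale setting on the plan...
az monitor autoscale create --resource-group myResourceGroup \
    --resource myPlan --resource-type Microsoft.Web/serverfarms \
    --name myAutoscale --min-count 1 --max-count 5 --count 1

# ...then add a rule that scales out by one instance when average CPU
# across the plan exceeds 70% over a five-minute window.
az monitor autoscale rule create --resource-group myResourceGroup \
    --autoscale-name myAutoscale \
    --condition "CpuPercentage > 70 avg 5m" --scale out 1
```

A matching scale-in rule (for example, `--scale in 1` when CPU drops below a lower threshold) is usually paired with this so the plan contracts again when load subsides.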
To get started with Azure App Service on Linux, check out the use cases and try App Service for free. Want to learn more? Sign up for our upcoming webinar focused on containerized applications. You can also join us and thousands of other developers at Open Source Summit North America. For more information and updates, follow @OpenAtMicrosoft.