
Kubernetes tools vendors vie for developer mindshare

SAN DIEGO — The notion that Kubernetes solves many problems as a container orchestration technology belies the complexity it adds in other areas, namely for developers who need Kubernetes tools.

Developers at the KubeCon + CloudNativeCon North America 2019 event here this week noted that although native tooling for development on Kubernetes continues to improve, there’s still room for more.

“I think the tooling thus far is impressive, but there is a long way to go,” said a software engineer and Kubernetes committer who works for a major electronics manufacturer and requested anonymity.

Moreover, “Kubernetes is extremely elegant, but there are multiple concepts for developers to consider,” he said. “For instance, I think the burden of the onboarding process for new developers and even users sometimes can be too high. I think we need to build more tooling, as we flesh out the different use cases that communities bring out.”

Developer-oriented approach

Enter Red Hat, which introduced an update of its Kubernetes-native CodeReady Workspaces tool at the event.

Red Hat CodeReady Workspaces 2 enables developers to build applications and services on their laptops that mirror the environment they will run in production. And onboarding is but one of the target use cases for the technology, said Brad Micklea, vice president of developer tools, developer programs and advocacy at Red Hat.

The technology is especially useful in situations where security is an issue, such as bringing in new contracting teams or using offshore development teams where developers need to get up and running with the right tools quickly.

I think the tooling thus far is impressive, but there is a long way to go.
Anonymous Kubernetes committer

CodeReady Workspaces runs on the Red Hat OpenShift Kubernetes platform.

Initially, new enterprise-focused developer technologies are generally used in experimental, proof-of-concept projects, said Charles King, an analyst at Pund-IT in Hayward, Calif. Yet over time those that succeed, like Kubernetes, evolve from the proof-of-concept phase to being deployed in production environments.

“With CodeReady Workspaces 2, Red Hat has created a tool that mirrors production environments, thus enabling developers to create and build applications and services more effectively,” King said. “Overall, Red Hat’s CodeReady Workspaces 2 should make life easier for developers.”

In addition to popular features from the first version, such as an in-browser IDE, Lightweight Directory Access Protocol support, Active Directory and OpenAuth support as well as one-click developer workspaces, CodeReady Workspaces 2 adds support for Visual Studio Code extensions, a new user interface, air-gapped installs and a shareable workspace configuration known as Devfile.

“Workspaces is just generally kind of a way to package up a developer’s working workspace,” Red Hat’s Micklea said.
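That packaging takes the form of a devfile, a single YAML description of a workspace's projects, tooling and commands. The sketch below is illustrative only: it follows the devfile 1.0 schema used by CodeReady Workspaces 2, and the repository URL, plugin ID and image are hypothetical placeholders.

```yaml
apiVersion: 1.0.0
metadata:
  name: spring-demo-workspace
projects:
  - name: demo-app
    source:
      type: git
      location: https://github.com/example/demo-app.git   # placeholder repo
components:
  - type: chePlugin            # in-browser IDE language support
    id: redhat/java/latest
  - type: dockerimage          # build container for the project
    alias: maven
    image: maven:3.6-jdk-11
    memoryLimit: 512Mi
    mountSources: true
commands:
  - name: build
    actions:
      - type: exec
        component: maven
        command: mvn package
        workdir: /projects/demo-app
```

Because the file lives in version control alongside the source, a new developer can open a ready-to-code workspace from it rather than assembling a toolchain by hand.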

Overall, the Kubernetes community is primarily “ops-focused,” he said. However, tools like CodeReady Workspaces help to empower both developers and operations.

For instance, at KubeCon, Amr Abdelhalem, head of the cloud platform at Fidelity Investments, said the way he gets teams initiated with Kubernetes is to have them deliver on small projects and move on from there. CodeReady Workspaces is ideal for situations like that because it simplifies developer adoption of Kubernetes, Micklea said.

Such a tool could be important for enterprises that are banking on Kubernetes to move them into a DevOps model to achieve business transformation, said Charlotte Dunlap, an analyst with GlobalData.

“Vendors like Red Hat are enhancing Kubernetes tools and CLI [Command Line Interface] UIs to bring developers more access and visibility into the ALM [Application Lifecycle Management] of their applications,” Dunlap said. “Red Hat CodeReady Workspaces is ultimately about providing enterprises with unified management across endpoints and environments.”

Competition for Kubernetes developer mindshare

Other companies focused on application development platforms, such as IBM and Pivotal, have also joined the Kubernetes developer enablement game.

Earlier this week, IBM introduced a set of new open-source tools to help ease developers’ Kubernetes woes. Meanwhile, at KubeCon this week, Pivotal made its Pivotal Application Service (PAS) on Kubernetes generally available and also delivered a new release of the alpha version of its Pivotal Build Service. The PAS on Kubernetes tool enables developers to focus on coding while the platform automatically handles software deployment, networking, monitoring, and logging.

The Pivotal Build Service enables developers to build containers from source code for Kubernetes, said James Watters, senior vice president of strategy at Pivotal. The service automates container creation, management and governance at enterprise scale, he said.

The build service brings technologies such as Pivotal’s kpack and Cloud Native Buildpacks to the enterprise. Cloud Native Buildpacks address dependencies in the middleware layer, such as language-specific frameworks. Kpack is a set of resource controllers for Kubernetes. The Build Service defines the container image, its contents and where it should be kept, Watters said.
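kpack works declaratively: a controller watches an `Image` resource and rebuilds the container whenever the source or its buildpacks change. A hypothetical sketch of such a resource follows; the API version and exact field names vary by kpack release, and the registry, builder and repository here are placeholders.

```yaml
apiVersion: kpack.io/v1alpha2
kind: Image
metadata:
  name: demo-app
spec:
  tag: registry.example.com/team/demo-app   # where built images are pushed
  builder:
    kind: ClusterBuilder                    # buildpacks supplied by the platform
    name: default
  source:
    git:
      url: https://github.com/example/demo-app.git   # placeholder repo
      revision: main
```

The developer never writes a Dockerfile; the buildpacks detect the language and framework and assemble the image, which is the automation Watters describes.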

Indeed, Watters said he believes it just might be game over in the Kubernetes tools space: Pivotal owns the Spring Framework and Spring Boot, which appeal to a wide swath of Java developers and represent “one of the most popular ways enterprises build applications today,” he said.

“There is something to be said for the appeal of Java in that my team would not need to make wholesale changes to our build processes,” said a Java software developer for a financial services institution who requested anonymity because he was not cleared to speak for the organization.

Yet in today’s polyglot programming world, the choice of programming language is less of an issue, because teams can switch languages at will. For instance, Fidelity’s Abdelhalem said his teams find it easier to focus less strictly on tools and more on overall technology and strategy to determine what fits in their environment.


Kubernetes Helm Tiller is dead, and IT pros rejoice

SAN DIEGO — The death of Kubernetes Helm Tiller in version 3 was the talk of the cloud-native world here at KubeCon + CloudNativeCon North America 2019 this week, as the change promises better security and stability for a utility that underpins several other popular microservices management and GitOps tools.

Kubernetes Helm is a package manager used to deploy apps to the container orchestration platform. It’s widely used to deploy enterprise apps to containers through CI/CD pipelines, including GitOps and progressive delivery tools. It’s also a key component for installing and updating the custom resource definitions (CRDs) that underpin the Istio service mesh in upstream environments.

Helm Tiller was a core component of the software in its initial releases, which used a client-server architecture for which Tiller was the server. Helm Tiller acted as an intermediary between users and the Kubernetes API server, and handled role-based access control (RBAC) and the rendering of Helm charts for deployment to the cluster. With the first stable release of Helm version 3 on Nov. 13, however, Tiller was removed entirely, and Helm version 3 now communicates directly with the Kubernetes API Server.

Such was the antipathy for Helm Tiller among users that when maintainers proclaimed the component’s death from the KubeCon keynote stage here this week, it drew enthusiastic cheers.

“At the first Helm Summit in 2018, there was quite a lot of input from the community, especially around, ‘Can we get rid of Tiller?'” said Martin Hickey, a senior software engineer at IBM and a core maintainer of Helm, in a presentation on Helm version 3 here. “[Now there’s] no more Tiller, and the universe is safe again.”

KubeCon Helm keynote
News of Helm Tiller’s demise from the KubeCon keynote stage this week drew cheers from the audience.

Helm Tiller had security and stability issues

IT pros who used previous versions of Helm charts said the client-server setup between Helm clients and Tiller was buggy and unstable, which made it even more difficult to install already complex tools such as Istio service mesh for upstream users.

“Version 3 offers new consistency in the way it handles CRDs, which had weird dependency issues that we ran into with Istio charts,” said Aaron Christensen, principal software engineer at SPS Commerce, a communications network for supply chain and logistics businesses in Minneapolis. “It doesn’t automatically solve the problem, but if the Istio team makes use of version 3, it could really simplify deployments.”

[Now there’s] no more Tiller, and the universe is safe again.
Martin Hickey, senior software engineer at IBM and a core maintainer of Helm

Helm Tiller was designed before Kubernetes had its own RBAC features, but once these were added to the core project, Tiller also became a cause for security concerns among enterprises. From a security perspective, Tiller had cluster-wide access and could potentially be used for privilege escalation attacks if not properly secured.

It was possible to lock down Helm Tiller in version 2 — heavily regulated firms such as Fidelity Investments were able to use it in production with a combination of homegrown tools and GitOps utilities from Weaveworks. But the complexity of that task and Helm Tiller stability problems meant some Kubernetes shops stayed away from Helm altogether until now, which led to other problems with rolling out apps on container clusters.

“Helm would issue false errors to our CI/CD pipelines, and say a deployment failed when it didn’t, or it would time out connecting to the Kubernetes API server, which made the deployment pipeline fail,” said Carlos Traitel, senior DevOps engineer at Primerica, a financial services firm in Duluth, Ga.

Primerica tried kube-deploy, a different open source utility, as a substitute for Helm, but ran into management complexity with it as well. Primerica engineers plan to re-evaluate Helm version 3 as soon as possible. The new version uses a three-way merge process for updates, which compares the desired state in the chart with the actual state of the cluster and the changes users want to apply, and could potentially eliminate many common errors during the Helm chart update process.
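The idea behind a three-way merge can be sketched in a few lines of Python. This is a toy model, not Helm's actual code: it treats manifests as flat dictionaries, lets a key changed by the new chart win, and preserves out-of-band changes to the live cluster wherever the chart left that key alone.

```python
def three_way_merge(old_chart, live_state, new_chart):
    """Toy three-way merge over flat dicts (not Helm's real algorithm)."""
    merged = {}
    for key in set(old_chart) | set(live_state) | set(new_chart):
        if new_chart.get(key) != old_chart.get(key):
            if key in new_chart:       # chart changed or added the key: it wins
                merged[key] = new_chart[key]
            # else: chart deleted the key, so it stays deleted
        elif key in live_state:        # chart unchanged: keep the live value
            merged[key] = live_state[key]
        elif key in new_chart:         # unchanged but missing live: restore it
            merged[key] = new_chart[key]
    return merged

old  = {"replicas": 2, "image": "app:1.0"}
live = {"replicas": 5, "image": "app:1.0"}  # ops scaled up by hand
new  = {"replicas": 2, "image": "app:1.1"}  # chart only bumps the image
merged = three_way_merge(old, live, new)    # replicas stays 5, image becomes app:1.1
```

Helm 2's two-way comparison of old and new charts would have reset `replicas` back to 2, clobbering the manual scale-up; the third input, the live state, is what prevents that class of error.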

Despite its difficulties, Helm version 2 was a crucial element of Kubernetes management, SPS’s Christensen said.

“It worked way more [often] than it didn’t — we wouldn’t go back and use something else,” he said. “It helps keep 20-plus resources consistent across our clusters … and we were also able to implement our own automated rollbacks based on Helm.”


Kubernetes security opens a new frontier: multi-tenancy

SAN DIEGO — As enterprises expand production container deployments, a new Kubernetes security challenge has emerged: multi-tenancy.

Among the many challenges with multi-tenancy in general is that it is not easy to define, and few IT pros agree on a single definition or architectural approach. Broadly speaking, however, multi-tenancy occurs when multiple projects, teams or other tenants share a centralized IT infrastructure but remain logically isolated from one another.

Kubernetes multi-tenancy also adds multilayered complexity to an already complex Kubernetes security picture, and demands that IT pros wire together a stack of third-party and, at times, homegrown tools on top of the core Kubernetes framework.

This is because core upstream Kubernetes security features are limited to service accounts for operations such as role-based access control — the platform expects authentication and authorization data to come from an external source. Kubernetes namespaces also don’t offer especially granular or layered isolation by default. Typically, each namespace corresponds to one tenant, whether that tenant is defined as an application, a project or a service.
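In the namespace-per-tenant model, that baseline isolation is typically wired up with a namespaced Role and RoleBinding, with the subject names supplied by the external identity source. A minimal sketch, with hypothetical tenant and group names:

```yaml
# One namespace per tenant; an edit role scoped to that namespace only.
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  namespace: team-a          # the tenant's namespace
  name: team-a-edit
rules:
  - apiGroups: ["", "apps"]
    resources: ["pods", "services", "deployments"]
    verbs: ["get", "list", "watch", "create", "update", "delete"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  namespace: team-a
  name: team-a-edit-binding
subjects:
  - kind: Group
    name: team-a-developers  # group name comes from the external identity provider
    apiGroup: rbac.authorization.k8s.io
roleRef:
  kind: Role
  name: team-a-edit
  apiGroup: rbac.authorization.k8s.io
```

Nothing here isolates network traffic or resource consumption between tenants, which is why the layered third-party tooling discussed below is still needed on top.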

“To build logical isolation, you have to add a bunch of components on top of Kubernetes,” said Karl Isenberg, tech lead manager at Cruise Automation, a self-driving car service in San Francisco, in a presentation about Kubernetes multi-tenancy here at KubeCon + CloudNativeCon North America 2019 this week. “Once you have Kubernetes, Kubernetes alone is not enough.”

Karl Isenberg, Cruise Automation
Karl Isenberg, tech lead manager at Cruise Automation, presents at KubeCon about multi-tenant Kubernetes security.

However, Isenberg and other presenters here said Kubernetes multi-tenancy can have significant advantages if done right. Cruise, for example, runs very large Kubernetes clusters, with up to 1,000 nodes, shared by thousands of employees, teams, projects and some customers. Kubernetes multi-tenancy means more highly efficient clusters and cost savings on data center hardware and cloud infrastructure.

“Lower operational costs is another [advantage] — if you’re starting up a platform operations team with five people, you may not be able to manage five [separate] clusters,” Isenberg added. “We [also] wanted to make our investments in focused areas, so that they applied to as many tenants as possible.”

Multi-tenant Kubernetes security an ad hoc practice for now

The good news for enterprises that want to achieve Kubernetes multi-tenancy securely is that there are a plethora of third-party tools they can use to do it, some of which are sold by vendors, and others open sourced by firms with Kubernetes development experience, including Cruise and Yahoo Media.

Duke Energy Corporation, for example, has a 60-node Kubernetes cluster in production that’s stretched across three on-premises data centers and shared by 100 web applications so far. The platform comprises several vendors’ products, from Diamanti hyper-converged infrastructure to Aqua Security Software’s container firewall, which logically isolates tenants from one another at a granular level that accounts for the ephemeral nature of container infrastructure.

“We don’t want production to talk to anyone [outside of it],” said Ritu Sharma, senior IT architect at the energy holding company in Charlotte, N.C., in a presentation at KubeSec Enterprise Summit, an event co-located with KubeCon this week. “That was the first question that came to mind — how to manage cybersecurity when containers can connect service-to-service within a cluster.”

Some Kubernetes multi-tenancy early adopters also lean on managed cloud services such as Google Kubernetes Engine (GKE) to take on parts of the Kubernetes security burden. GKE can encrypt secrets in the etcd data store, a capability that became available in Kubernetes 1.13 but isn’t enabled by default, according to a KubeSec presentation by Mike Ruth, one of Cruise’s staff security engineers.

Google also offers Workload Identity, which matches up GCP identity and access management with Kubernetes service accounts so that users don’t have to manage Kubernetes secrets or Google Cloud IAM service account keys themselves. Kubernetes SIG-Auth looks to modernize how Kubernetes security tokens are handled by default upstream to smooth Kubernetes secrets management across all clouds, but has run into snags with the migration process.

In the meantime, Verizon’s Yahoo Media has donated to open source a project called Athenz, which handles multiple aspects of authentication and authorization in its on-premises Kubernetes environments, including automatic secrets rotation, expiration and limited-audience policies for intracluster communication similar to those offered by GKE’s Workload Identity. Cruise has released similar open source tools of its own: RBACSync; Daytona, which fetches secrets from HashiCorp Vault, where Cruise stores secrets instead of in etcd, and injects them into running applications; and k-rail, for workload policy enforcement.

Kubernetes Multi-Tenancy Working Group explores standards

While early adopters have plowed ahead with an amalgamation of third-party and homegrown tools, some users in highly regulated environments look to upstream Kubernetes projects to flesh out more standardized Kubernetes multi-tenancy options.

For example, investment banking company HSBC can use Google’s Anthos Config Management (ACM) to create hierarchical, or nested, namespaces, which allow more granular access control in a multi-tenant environment and simplify management by automatically propagating shared policies between namespaces. However, the company is following the work of a Kubernetes Multi-Tenancy Working Group established in early 2018 in the hopes it will introduce free open source utilities compatible with multiple public clouds.

Sanjeev Rampal, Kubernetes Multi-Tenancy Working Group
Sanjeev Rampal, co-chair of the Kubernetes Multi-Tenancy Working Group, presents at KubeCon.

“If I want to use ACM in AWS, the Anthos license isn’t cheap,” said Scott Surovich, global container engineering lead at HSBC, in an interview after a presentation here. Anthos also requires VMware server virtualization, and hierarchical namespaces available at the Kubernetes layer could offer Kubernetes multi-tenancy on bare metal, reducing the layers of abstraction and potentially improving performance for HSBC.

Homegrown tools for multi-tenant Kubernetes security won’t fly in HSBC’s highly regulated environment, either, Surovich said.

“I need to prove I have escalation options for support,” he said. “Saying, ‘I wrote that’ isn’t acceptable.”

So far, the working group has two incubation projects that create custom resource definitions — essentially, plugins — that support hierarchical namespaces and virtual clusters that create self-service Kubernetes API Servers for each tenant. The working group has also created working definitions of the types of multi-tenancy and begun to define a set of reference architectures.

The working group is also considering certification of multi-tenant Kubernetes security and management tools, as well as benchmark testing and evaluation of such tools, said Sanjeev Rampal, a Cisco principal engineer and co-chair of the group.


Approaches for embedding human ethics in AI systems

SAN DIEGO — It may be early days for AI, but it’s not too early to start thinking about the ethical considerations around the technology. In fact, today is a defining moment for AI ethics, said Gartner research vice president Darin Stewart at this week’s Gartner Catalyst event in San Diego.

“The decisions we make now are going to sit at the core of our models for years and continue to evolve, continue to grow and continue to learn,” said Stewart during his session. “So we need to set them on a firm ethical foundation so that as they grow through the years they’ll continue to reflect our values.”

The crux is that, right now, embedding ethics in AI systems is a low priority for most. The focus is mostly on AI control — making sure systems do what they’re supposed to do and having a few corrective measures if things go wrong. But, as Stewart said, putting in a “big red button” to hit when things go very wrong isn’t enough. By that time it’s already too late.

What we need in our AI solutions is something Stewart calls “value alignment.”

“We need some assurance that when these systems make predictions, come to conclusions, give us recommendations and make decisions that they’re going to reflect our values,” he said. “And not just our personal values as its creator, but the values of the organization we serve and the community or society that it exists within.”

Stewart admitted that value alignment might seem above IT practitioners’ paygrade, but he certainly doesn’t see it that way.

“The developers, the engineers and the designers are in the vanguard,” he said. “You all are best positioned to take those steps to move us closer to that value alignment.”

What steps can IT practitioners take to embed human values and ethics in AI?

Darin Stewart, research vice president, Gartner
Darin Stewart talks about the urgent need to develop an AI ethics strategy at the Gartner Catalyst event this week.

A measure of fairness

For starters, they can make sure they aren’t directly inserting bias into algorithms. Bias is more or less ubiquitous in machine learning, Stewart said. It arises when the AI model makes predictions and decisions based on sensitive or prohibited attributes — things like race, gender, sexual orientation and religion.

We need some assurance that when these systems make predictions, come to conclusions, give us recommendations and make decisions that they’re going to reflect our values.
Darin Stewart, research vice president, Gartner

Fortunately, there are effective techniques for removing bias in data sets, Stewart said. But IT practitioners need to make sure it never gets in the model in the first place. At the very beginning, they need to articulate — and document — measures of fairness and the algorithm’s priorities. A measure of fairness is what constitutes equitable and consistent treatment for all groups participating in a system or solution. Stewart said there are a lot of guidelines available for use — many for free.

“At the very beginning, when you’re building the product, decide explicitly and intentionally what that measure of fairness is going to be and optimize your algorithms to reflect both the values statement you create and that measure of fairness,” he said. “Then use the boundaries as constraints on the training process itself.”
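One concrete, documentable measure of fairness is demographic parity: the gap between groups' positive-outcome rates. The sketch below computes that gap for a batch of decisions; the data is invented for illustration.

```python
def demographic_parity_gap(predictions, groups):
    """Largest difference in positive-outcome rates between any two
    groups; 0.0 means every group is approved at the same rate."""
    rates = {}
    for pred, group in zip(predictions, groups):
        hits, total = rates.get(group, (0, 0))
        rates[group] = (hits + (1 if pred else 0), total + 1)
    ratios = [hits / total for hits, total in rates.values()]
    return max(ratios) - min(ratios)

# Invented decisions for two groups of applicants: group "a" is
# approved 75% of the time, group "b" only 25% of the time.
preds  = [1, 0, 1, 1, 0, 0, 1, 0]
groups = ["a", "a", "a", "a", "b", "b", "b", "b"]
print(demographic_parity_gap(preds, groups))  # 0.5
```

Tracking a number like this from the first training run onward is what turns a values statement into a constraint the team can actually optimize against and audit later.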

The United States has had a legal doctrine governing discrimination since the Civil Rights Act of 1964, which defines types of discriminatory intent and the permissible use of race. Start there, Stewart said.

“Ideally, we will have higher standards, but the locally set, legal definition of acceptable discrimination should be the bare minimum you work with,” he said.

Pay attention to your AI solution in the real world

AI and machine learning systems rarely behave the same in the real world as they did in testing, Stewart said. That can pose a problem for ensuring ethics in AI.

“The problem is that once we release our solutions into the real world, we stop paying attention to the inputs it’s feeding off of in the real world,” he said. “We don’t pay attention to the inputs that are still continuing to train and evolve the model, and that’s where things start to go wrong.”

He pointed to the now infamous case of the Microsoft Tay chatbot, which was the target of a malicious attack by Twitter trolls that biased the system beyond repair. That’s why Stewart emphasized the importance of consistently and continually inspecting training data.

“Kind of like stalking your kids on social media, you need to keep an eye on your solutions once they are released into the real world,” Stewart said. That includes regularly testing and auditing the AI systems.
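That monitoring habit can start very small. The sketch below flags a numeric input feature whose live mean has drifted far from the training mean; production systems would use richer tests (population stability index, Kolmogorov-Smirnov), but the point is to keep watching the inputs at all.

```python
from statistics import mean, stdev

def input_drift_alert(training_values, live_values, threshold=3.0):
    """Flag when the live mean of a numeric input feature sits more
    than `threshold` standard errors from the training mean."""
    mu, sigma = mean(training_values), stdev(training_values)
    standard_error = sigma / (len(live_values) ** 0.5)
    return abs(mean(live_values) - mu) > threshold * standard_error

baseline = [10, 11, 9, 10, 12, 10, 9, 11]           # inputs seen in training
print(input_drift_alert(baseline, [10, 11, 10, 9]))   # False: looks healthy
print(input_drift_alert(baseline, [25, 30, 27, 26]))  # True: inputs have shifted
```

Wiring a check like this into the serving path would have raised an alarm as Tay's incoming data diverged from anything seen in training, long before the model was "beyond repair."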

Instilling ethics in AI: The ‘centaur’ approach

At this point in time, Stewart said we usually don’t want to turn decisions entirely over to the AI system. If you are just trying to automate data entry into a spreadsheet or speed up the throughput of an assembly line, then it’s fine to do so, he said. But if you are creating a system that is going to “materially impact someone’s life,” you should have a human being in the loop who understands why decisions are being made.

“You don’t want a doctor performing surgery just because the AI said so,” he said. Decisions ideally are made by combining machine intelligence, which can deal with exponentially more data than humans can, with human intelligence, which can account for factors that may not have been in the data set. “We’re looking for centaurs rather than robots.”

Gartner Catalyst 2018: A future without data centers?

SAN DIEGO — Can other organizations do what Netflix has done — run a business without a data center? That’s the question that was posed by Gartner Inc. research vice president Douglas Toombs at the Gartner Catalyst 2018 conference.

While most organizations won’t run 100% of their IT in the cloud, the reality is that many workloads can be moved, Toombs told the audience.

“Your future IT is actually going to be spread across a number of different execution venues, and at each one of these venues you’re trading off control and choice, but you get the benefits of not having to deal with the lower layers,” he said.

Figure out the why, how much and when

When deciding why they are moving to the cloud, the “CEO drive-by strategy” — where the CEO swings in and says, “We need to move a bunch of stuff in the cloud, go make it happen,” — shouldn’t be the starting point, Toombs said.

“In terms of setting out your overall organizational priorities, what we want to do is get away from having just that as the basis and we want to try to think of … the real reasons why,” Toombs said.

Increasing business agility and accessing new technologies should be some of the top reasons why businesses would want to move their applications to the cloud, Toombs said. Once they have a sense of “why,” the next thing is figuring out “how much” of their applications will make the move. For most mainstream enterprises, the sweet spot seems to be somewhere between 40% and 80% of their overall applications, he said.

Businesses then need to figure out the timeframe to make this happen. Those trying to move 50% or 60% of their apps usually give themselves about three years to try and accomplish that goal, he said. If they’re more aggressive — with a target of 80% — they will need a five-year horizon, he said.

Whatever metric you pick, you want to track this very publicly over time within your organization.
Douglas Toombs, research vice president, Gartner

“We need to get everyone in the organization with a really important job title — could be the top-level titles like CIO, CFO, COO — also in agreement and nodding along with us, and what we suggest for this is actually codifying this into a cloud strategy document,” Toombs told the audience at Gartner Catalyst 2018.

Dissecting application risk

Once organizations have outlined their general strategy, Toombs suggested they incorporate the CIA triad of confidentiality, integrity and availability for risk analysis purposes.

These three core pillars are essential to consider when moving an app to the cloud so the organization can determine potential risk factors.

“You can take these principles and start to think of them in terms of impact levels for an application,” he said. “As we look at an app and consider a potential new execution venue for it, how do we feel about the risk for confidentiality, integrity and availability — is this kind of low, or no risk, or is it really severe?”
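Those impact levels lend themselves to a simple rubric. The sketch below is a hypothetical scoring scheme (not a Gartner tool): it rates each CIA pillar for a candidate venue and gates on the worst pillar, since one severe exposure is enough to disqualify a venue regardless of how the other pillars look.

```python
IMPACT = {"none": 0, "low": 1, "moderate": 2, "severe": 3}

def venue_risk(confidentiality, integrity, availability):
    """Rate each CIA pillar for a candidate execution venue and gate
    on the worst pillar: one severe exposure disqualifies the venue."""
    worst = max(IMPACT[confidentiality], IMPACT[integrity], IMPACT[availability])
    return {"worst": worst, "cleared": worst < IMPACT["severe"]}

print(venue_risk("low", "moderate", "low"))   # {'worst': 2, 'cleared': True}
print(venue_risk("severe", "low", "none"))    # {'worst': 3, 'cleared': False}
```

Running every application through the same rubric also produces the documented no-go pile Toombs describes later: apps whose worst pillar is severe for every candidate venue.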

Assessing probable execution venues

Organizations need to think very carefully about where their applications go if they exit their data centers, Toombs said. He suggested they assess their applications one by one, moving them off into other execution venues when they’re capable and are not going to increase overall risk.

“We actually recommend starting with the app tier where you would have to give up the most control and look in the SaaS market,” he said. They can then look at PaaS, and if they have exhausted the PaaS options in the market, they can start to look at IaaS, he said.

However, if they have found an app that probably shouldn’t go to a cloud service but they still want to get to zero data centers, organizations can talk to hosting providers, which are happy to sell them hardware on a three-year contract and charge monthly for it, or go to a colocation provider. Even if they have put 30% of their apps in a colocation environment, they are no longer running data center space, he said.

But if for some reason they have found an app that can’t be moved to any one of these execution venues, then they have absolutely justified and documented an app that now needs to stay on premises, he said. “It’s actually very freeing to have a no-go pile and say, ‘You know what, we just don’t think this can go or we just don’t think this is the right time for it, we will come back in three years and look at it again.'”

Kilowatts as a progress metric

While some organizations say they are going to move a certain percentage of their apps to the cloud, others measure in terms of number of racks or number of data centers or square feet of data center, he said.

Toombs suggested using kilowatts of data center processing power as a progress metric. “It is a really interesting metric because it abstracts away the complexities in the technology,” he said.

It also:

  • accounts for other overhead factors such as cooling;
  • easily shows progress with first migration;
  • should be auditable against a utility bill; and
  • works well with kilowatt-denominated colocation contracts.
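As a worked example of the metric, progress is just the share of the baseline electrical load retired so far (the figures below are invented):

```python
def migration_progress(baseline_kw, current_kw):
    """Share of the baseline data center electrical load retired so far,
    as a percentage."""
    return round(100 * (baseline_kw - current_kw) / baseline_kw, 1)

# A shop that drew 400 kW at the start of its migration and draws
# 340 kW today has retired 15% of its footprint.
print(migration_progress(400, 340))  # 15.0
```

Because both numbers come straight off utility or colocation bills, the figure is auditable in a way that counting "apps moved" is not.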

“But whatever metric you pick, you want to track this very publicly over time within your organization,” he reminded the audience at the Gartner Catalyst 2018 conference. “It is going to give you a bit of a morale boost to go through your 5%, 10%, 15%, and say ‘Hey, we’re getting down the road here.'”

Visit Xbox at San Diego Comic-Con 2018 – Xbox Wire

San Diego Comic-Con, the world’s largest comic and pop-culture festival, is coming soon and Xbox is bringing the fun with exclusive gear, panels, celebrity guests, and more! See below for details on everything Xbox at SDCC and join the fun from California or from the comfort of your own couch.

Xbox Booth (San Diego Comic-Con badge required)
Hall A, Booth #100

For the first time ever, Xbox will have exclusive gear available at SDCC! Stop by to pick up exclusive clothing and items from Xbox and your favorite games and then get them customized on the spot with your Gamertag. See some of the items available here.

Visit us on Thursday, July 19 and Saturday, July 21 for signing sessions with some of your favorite developers and designers, but get there early: only the first 100 people to receive passes will be eligible!

Signings (San Diego Comic-Con badge required):

  • Brendan Greene (“PlayerUnknown”), Creative Director, PUBG Corp – Thursday, July 19 from 3 p.m. to 4 p.m. PDT
  • Joe Neate, Executive Producer, and Mike Chapman, Design Director, Sea of Thieves – Saturday, July 21 from 3 p.m. to 4 p.m. PDT

Xbox Gear Comic-Con Sweepstakes

Can’t make it to the booth to pick up the exclusive gear? Retweet @Xbox to potentially win an Xbox Gear Comic-Con prize pack! Four winners will receive a collection of exclusive Xbox Gear, and one grand prize winner will receive the gear and an Xbox One X.

Follow @Xbox or @XboxCanada on Twitter and retweet the following tweet when it goes live at the start of San Diego Comic-Con: “RT and follow for a chance to win exclusive #XboxSDCC #XboxGear! NoPurchNec. Ends 7/22/18. #Sweepstakes Rules: bit.ly/2KV2DQ1.” You have until July 22 to enter. Click through for the Official Rules.

Sea of Thieves Panel (San Diego Comic-Con badge required)
Room 5AB, Saturday, July 21 from 1:30 p.m. to 2:30 p.m. PDT

Special guest and Sea of Thieves fan Freddie Prinze Jr. (“Star Wars Rebels,” “24,” “Scooby Doo,” “I Know What You Did Last Summer”) joins the Rare crew, Joe Neate, Mike Chapman, and Peter Hentze as they discuss the lore and expanded universe of Sea of Thieves. Attendees will also receive a limited-edition Sea of Thieves comic and time-limited exclusive in-game DLC!

Xbox Live Sessions

If you’re not in San Diego but still want to follow along with the fun, we’re hosting two action-packed Xbox Live Sessions that you won’t want to miss.

  • PUBG featuring Brendan “PlayerUnknown” Greene: On Thursday, July 19 at 5:00 p.m. PDT, PUBG Creative Director Brendan Greene (@PLAYERUNKNOWN) and Microsoft Executive Producer Nico Bihary (@nico_bihary) will join Rukari Austin (@Rukizzel) to get their loot on in PUBG’s Miramar map live from inside of a PUBG Bus created by West Coast Customs. That’s right – Xbox, PUBG Corp., and West Coast Customs have teamed up to create a one-of-a-kind, tricked out PUBG Bus which will be home to the livestream and available to see in-person at The Experience at Comic-Con.
  • Sea of Thieves with Freddie Prinze Jr.: On Saturday, July 21 at 5:00 p.m. PDT, Sea of Thieves fan Freddie Prinze Jr. (@RealFPJr) will sail the high seas with members of the Rare team and Major Nelson in an episode of Xbox Live Sessions that’s sure to test the sea legs of the seasoned actor. Fans at home can tune in and watch on http://mixer.com/Xbox and http://twitch.tv/Xbox.

Xbox at “The Experience at Comic-Con”


Head over to Petco Park where you can play Xbox One games, earn free swag, and win awesome prizes! No Comic-Con badge required.

  • Visit the Samsung truck at The Experience at Comic-Con, located in the Lexus Lot at Petco Park. Climb aboard the truck to compete in Forza Motorsport 7 on Xbox One X via Samsung’s 2018 QLED TVs. More information can be found here.
  • Come visit the first stop of the Xbox One Summer of PUBG tour. Win prizes, check out the PUBG Bus, and stick around for the Xbox Live Sessions! More information can be found here.

For all the SDCC details, visit the Xbox SDCC website. For more Xbox news, follow @Xbox on Twitter and stay tuned to Xbox Wire. See you at San Diego Comic-Con!

Cross-platform app support settles on web development

SAN DIEGO — Cross-platform apps are the future of enterprise software, but it’s not that easy for many organizations to adopt them.

To create an application that works across different operating systems and form factors, developers must focus on making the app’s internal architecture compatible with multiple platforms, not just its front-end interface. But the options for deploying these types of apps can be expensive, so a compelling alternative for many organizations is to develop web apps.

“Web technologies are more than capable of delivering really high-end user experiences,” said Kirk Knoernschild, research vice president at Gartner. “Web has maximum portability to different form factors.”

Knoernschild and IT professionals discussed the challenges of cross-platform app development and deployment here at this week’s Gartner Catalyst Conference.

Cross-platform apps a hard sell

Whether an organization builds a cross-platform app in-house, hires third-party developers or purchases the app from a software provider, it can be a costly proposition. And it’s difficult to convince the business to spend money on technology that does not directly provide a financial return on investment.

“The savings are hard to quantify,” said Chris Haaker, director of end user computing innovation at Relx Group, a business information and analytics provider in Miamisburg, Ohio. “If this made you 20% more productive, can you show that?”

The last thing you want to do is deliver a compromised user experience.
Kirk Knoernschild, research vice president, Gartner

Haaker’s branch of the global company has no in-house or third-party developers and instead buys any software it needs directly from vendors. Eighty percent of employees there use smartphones for work, mostly for corporate email access, but the office can’t afford to hire mobile developers, Haaker said. So a few tech-savvy interns are building web apps that can work across different operating systems instead.

“If we could have an app for all endpoints, that’s a place I would love to get to,” Haaker said. “That’s wonderful.”

But for now, unified app development is too new of a concept for the company to invest in, he said.

“There’s got to be somebody at the top that’s going to buy into that,” he added.

Low-code cross-platform app dev

One way organizations can develop cross-platform apps with less cost and effort is through low-code development tools. Rollins Inc., a global pest control services company based in Atlanta, used OutSystems to create a web app that helps employees track service information and communicate with customers.

The responsive web app adjusts the interface to suit the endpoint, whether it’s a desktop in Rollins’ offices or on salespeople’s iPads out in the field. OutSystems, which allows companies to build web, mobile or cloud apps, lets Rollins build dashboards that show customer site maps, the pests prevalent at those sites and other information.
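The breakpoint logic behind that kind of responsive adjustment can be sketched in a few lines. This is a hypothetical illustration of the general technique, not Rollins’ or OutSystems’ actual code; the width thresholds and layout names are assumptions:

```typescript
// Hypothetical sketch: pick a dashboard layout based on viewport width,
// the kind of decision a responsive web app makes to suit each endpoint.
type Layout = "phone" | "tablet" | "desktop";

function layoutFor(widthPx: number): Layout {
  if (widthPx < 600) return "phone";    // e.g. a salesperson's phone
  if (widthPx < 1024) return "tablet";  // e.g. an iPad in the field
  return "desktop";                     // e.g. an office workstation
}

// In a browser, the same thresholds would typically live in CSS media
// queries, with this function (fed by window.innerWidth) handling any
// layout switching that markup alone can't express.
console.log(layoutFor(768)); // → "tablet"
```

Because the branching lives in one function, the single code base serves every endpoint, which is the cost-saving property the web approach promises.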

“You can see, does this customer’s contract cover bees?” said David Christian, manager and senior architect at Rollins. “If it does, we can send out a technician to deal with that.”

The web approach is common today because it means developers can use a single code base to write an app that works across multiple endpoints. When organizations don’t have to write multiple versions of the same app, it often results in cost savings.

“It’s something we’re seeing more and more of in development teams, but it has to be for the right use case,” Knoernschild said. “The last thing you want to do is deliver a compromised user experience.”

Native mobile apps often provide more device-specific capabilities, however, so responsive web apps aren’t always the best choice.

“You’ve got more things available when you code for native mobile,” Christian said. “[A web app] won’t be quite as responsive. The phone format is not the best format for some of the larger dashboard views.”

Cross-platform app support

To make it easier to deploy cross-platform apps and ensure their security, IT must limit users’ device and operating system choices, said Andrew Garver, research director at Gartner, in a Catalyst session.

“This is not giving users what they want all the time,” he said. “It’s an art to maximize productivity through the benefits of end user choice while balancing your risk requirements.”

To prepare for a future where apps are independent of operating systems and devices, organizations must also ensure that they don’t rely on a single OS or OS version, plug-in, browser or browser version, Garver said. They should also plan for emerging device types, such as wearables, he said.

For successful cross-platform app support, IT departments should follow these steps, Garver said:

  • Identify gaps in IT skills and start to fill them.
  • Make it clear to business leaders that cross-platform computing is not a single project, but rather a long-term approach that will evolve.
  • Merge disparate IT teams that need to work together, such as desktop and mobile groups.

“It’s just a matter of getting all of us moving in the same direction,” Haaker said.

Mobile threat defense helps fill EMM’s gaps

SAN DIEGO — As more IT pros realize that EMM doesn’t completely protect mobile data, they’re taking a closer look at mobile threat defense tools.

Enterprise mobility management (EMM) allows IT to enforce security policies and control what users do on their devices. But attacks on mobile operating systems and devices are becoming more common as hackers identify vulnerabilities, and organizations need clear insight into these threats and their potential effects. Mobile threat defense tools can help with that piece of the security puzzle, said analysts and attendees here at the Gartner Catalyst conference.

“EMM is more of just the management; it’s just pushing a policy to the phone,” said Seth Wiese, an IT security administrator at Kuraray America, a chemicals manufacturer in Houston.

Mobile threat defense tools supplement EMM by continuously monitoring devices for malicious apps and other risks, and by providing analytics around app and network usage to prevent cyberattacks. Kuraray uses VMware AirWatch for EMM and wants to adopt this technology to get more monitoring capabilities and predictive analytics about its devices, Wiese said.

But for organizations just starting out with mobility, it can be a challenge to convince higher-ups that IT requires more than just EMM for security.


“That comes down to dollars and sense,” said the director of enterprise solutions at a banking and investment firm, who requested anonymity because he is not authorized to speak publicly. “And how do you assign a cost value to data being lost?”

The bank uses Microsoft Intune to manage around 750 corporate-owned mobile devices, but there is definitely a need to supplement that software with mobile threat defense, the director said.

Mobile threat defense market heats up

Traditional security vendors are acquiring mobile threat defense startups to integrate this technology into their larger product offerings; see Symantec’s acquisition of Skycure last month.

Other vendors in the market include Appthority, Check Point and Zimperium. All of these offerings have different capabilities for analyzing devices, apps and operating systems to identify risks, and many use machine learning to detect patterns in user and app behavior and predict future threats.

“There’s not one tool,” said Patrick Hevesi, research director at Gartner, in a session. “Some tools detect. Some tools prevent. Some tools remediate. Some tools pop up an alert. So as you’re building this strategy, you need to start thinking about what attacks you’re most worried about.”

This approach can help IT decide what tool to buy. One organization could be prone to malware, while another may have users downloading unwanted applications, for instance. At Kuraray, data leakage is the biggest concern, Wiese said.

Every code written by someone can be exploited by someone else.
Patrick Hevesi, research director, Gartner

The most common mobile attack vectors are websites, app stores, text messages and network vectors such as rogue access points on Wi-Fi networks, Hevesi said. Traditional antivirus software might not catch threats to mobile devices, and hackers have wised up and figured out where the vulnerabilities in mobile operating systems are, he said.

“Vulnerabilities exist on all mobile platforms,” he added. “It’s software. Every code written by someone can be exploited by someone else.”

Mobile threat defense best practices

As part of a strong mobile security strategy, IT should set up data classification levels that determine how much risk each user’s information presents and how much security it requires, because not all data will need the same protections, Hevesi said.

“Maybe your CEO just wants email, calendar, contacts,” he said. “So maybe you don’t need EMM for that and just use [Microsoft] Exchange ActiveSync and throw threat defense on there.”
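The tiered approach Hevesi describes can be sketched as a mapping from data classification level to required controls. The level names and control sets below are illustrative assumptions, not Gartner’s taxonomy or any vendor’s actual policy model:

```typescript
// Illustrative sketch: map data classification levels to mobile security
// controls. Level names and control choices are assumptions for the example.
type Classification = "public" | "internal" | "confidential";

interface Controls {
  emm: boolean;           // full enterprise mobility management enrollment
  threatDefense: boolean; // mobile threat defense agent on the device
  sync: string;           // how mail/calendar/contacts reach the device
}

const policy: Record<Classification, Controls> = {
  public:       { emm: false, threatDefense: false, sync: "ActiveSync" },
  internal:     { emm: false, threatDefense: true,  sync: "ActiveSync" },
  confidential: { emm: true,  threatDefense: true,  sync: "managed app" },
};

function controlsFor(level: Classification): Controls {
  return policy[level];
}
```

A user who only reads email might land in a lighter tier (ActiveSync plus threat defense), as in Hevesi’s CEO example, while users handling confidential data get the full EMM stack.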

Classifying data levels is the first step the banking and investment firm’s director wants to take as he evaluates mobile threat defense software.

“I’m trying to understand the users to figure out the risk profile,” he said.

IT should also limit the devices and operating systems that employees can use, to ensure they have the most secure and up-to-date versions available, and continuously educate users on how to avoid mobile threats. For instance, there’s a flashlight app on Google Play that requests permissions to access information in many other apps, Hevesi said.

“Train your users to say no,” he said.