
Government IT pros: Hiring data scientists isn’t an exact science

WASHINGTON, D.C. — Government agencies face the same problems as enterprises when it comes to turning their vast data stores into useful information. In the case of government, that information is used to provide services such as healthcare, scientific research and legal protections, and even to fight wars.

Public sector IT pros at the Veritas Public Sector Vision Day this week talked about their challenges in making data useful and keeping it secure. A major part of their work currently involves finding the right people to fill data analytics roles, including hiring data scientists. They described data science as a combination of roles that require technical as well as subject matter expertise, which often means a diverse team is needed to succeed.

Tiffany Julian, data scientist at the National Science Foundation, said she recently sat in on a focus group involved with the Office of Personnel Management’s initiative to define data scientist.

“One of the big messages from that was, there’s no such thing as a unicorn. You don’t hire a data scientist. You create a team of people who do data science together,” Julian said.

Julian said data science includes more than programmers and technical experts. Subject experts who know their company or agency mission also play a role.

“You want your software engineers, you want your programmers, you want your database engineers,” she said. “But you also want your common sense social scientists involved. You can’t just prioritize one of those fields. Let’s say you’re really good at Python, you’re really good at R. You’re still going to have to come up with data and processes, test it out, draw a conclusion. No one person you hire is going to have all of those skills that you really need to make data-driven decisions.”

Wanted: People who know they don’t know it all

Because she is a data scientist, Julian said others in her agency ask what skills they should seek when hiring data scientists.

You don’t hire a data scientist. You create a team of people who do data science together.
Tiffany Julian, data scientist, National Science Foundation

“I’m looking for that wisdom that comes from knowing that I don’t know everything,” she said. “You’re not a data scientist, you’re a programmer, you’re an analyst, you’re one of these roles.”

Tom Beach, chief data strategist and portfolio manager for the U.S. Patent and Trademark Office (USPTO), said he takes a similar approach when looking for data scientists.

“These are folks that know enough to know that they don’t know everything, but are very creative,” he said.

Beach added that when hiring data scientists, he looks for people “who have the desire to solve a really challenging problem. There is a big disconnect between an abstract problem and a piece of code. In our organization, a regulatory agency dealing with patents and trademarks, there’s a lot of legalese and legal frameworks. Those don’t code well. Court decisions are not readily codable into a framework.”

‘Cloud not enough’

Like enterprises, government agencies also need to get the right tools to help facilitate data science. Peter Ranks, deputy CIO for information enterprise at the Department of Defense, said data is key to his department, even if DoD IT people often talk more about technologies such as cloud, AI, cybersecurity and the three Cs (command, control and communications) when they discuss digital modernization.

“What’s not on the list is anything about data,” he said. “And that’s unfortunate because data is really woven into every one of those. None of those activities are going to succeed without a focused effort to get more utility out of the data that we’ve got.”

Ranks said future battles will depend on the ability of forces on land, air, sea, space and cyber to interoperate in a coordinated fashion.

“That’s a data problem,” he said. “We need to be able to communicate and share intelligence with our partners. We need to be able to share situational awareness data with coalitions that may be created on demand and respond to a particular crisis.”

Ranks cautioned against putting too much emphasis on leaning on the cloud for data science. He described cloud as the foundation on the bottom of a pyramid, with software in the middle and data on top.

“Cloud is not enough,” he said. “Cloud is not a strategy. Cloud is not a destination. Cloud is not an objective. Cloud is a tool, and it’s one tool among many to achieve the outcomes that your agency is trying to get after. We find that if all we do is adopt cloud, if we don’t modernize software, all we get is the same old software in somebody else’s data center. If we modernize software processes but don’t tackle the data … we find that bad data becomes a huge boat anchor or that all those modernized software applications have to drive around. It’s hard to do good analytics with bad data. It’s hard to do good AI.”

Beach agreed. He said cloud is “100%” part of USPTO’s data strategy, but so is recognition of people’s roles and responsibilities.

“We’re looking at not just governance behavior as a compliance exercise, but talking about people, process and technology,” he said. “We’re not just going to tech our way out of a situation. Cloud is just a foundational step. It’s also important to understand the recognition of roles and responsibilities around data stewards, data custodians.”

This includes helping ensure that people can find the data they need, as well as denying access to people who do not need that data.

Nick Marinos, director of cybersecurity and data protection at the Government Accountability Office, said understanding your data is a key step in ensuring data protection and security.

“Thinking upfront about what data do we actually have, and what do we use the data for are really the most important questions to ask from a security or privacy perspective,” he said. “Ultimately, having an awareness of the full inventory within the federal agencies is really the only way that you can even start to approach protecting the enterprise as a whole.”

Marinos said data protection audits at government agencies often start with looking at the agency’s mission and its flow of data.

“Only from there can we as auditors — and the agency itself — have a strong awareness of how many touch points there are on these data pieces,” he said. “From a best practice perspective, that’s one of the first steps.”


Windows Server 2008 end of life: Is Azure the right path?

As the Windows Server 2008 end of life inches closer, enterprises should consider which retirement plan to pursue before security updates run out.

As of Jan. 14, Microsoft will end security updates for Windows Server 2008 and 2008 R2 machines that run in the data center. Organizations that continue to use these server operating systems will be vulnerable because hackers will inevitably continue to look for weaknesses in them, but Microsoft will not — except in rare circumstances — provide fixes for those vulnerabilities. Additionally, Microsoft will not update online technical content related to these operating systems or give any free technical support.
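
Before deciding on a retirement path, it helps to know exactly which machines are affected. The following is a minimal sketch that inventories domain-joined Windows Server 2008 and 2008 R2 computers with the ActiveDirectory PowerShell module; it assumes the RSAT ActiveDirectory module is installed, and the export path is only an example.

# Requires the RSAT ActiveDirectory module on the machine running the query
Import-Module ActiveDirectory

# Find every domain-joined computer whose OS string indicates Windows Server 2008 or 2008 R2
Get-ADComputer -Filter 'OperatingSystem -like "*Windows Server 2008*"' -Properties OperatingSystem, LastLogonDate |
    Select-Object Name, OperatingSystem, LastLogonDate |
    Sort-Object Name |
    Export-Csv -Path 'C:\Reports\Server2008Inventory.csv' -NoTypeInformation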

Although there are benefits to upgrading to a newer version of Windows Server, there may be some instances in which this is not an option. For example, your organization might need an application that is not compatible with or supported on newer Windows Server versions. Similarly, there are situations in which it is possible to migrate the server to a new operating system, but not quickly enough to complete the process before the impending end-of-support deadline.

Microsoft has a few options for those organizations that need to continue running Windows Server 2008 or 2008 R2. Although the company will no longer give updates for the aging operating system through the usual channels, customers can purchase extended security updates.

You can delay Windows Server 2008 end of life — if you can afford it

Those who wish to continue using Windows Server 2008 or 2008 R2 on premises will need Software Assurance or a subscription license to purchase extended updates. The extended updates are relatively expensive, at about 75% of the cost of a current Windows Server license annually. This is likely Microsoft’s way of trying to get customers to migrate to a newer Windows Server version, because the extended security updates cost almost as much as a Windows Server license.

The other option for those organizations that need to continue running Windows Server 2008 or 2008 R2 is to migrate those servers to the Azure cloud. Organizations that decide to switch those workloads to Azure will receive free extended security updates for three years.


Know what a move to Azure entails

Before migrating a Windows Server workload to the cloud, it is important to consider the pros and cons of making the switch to Azure. The most obvious benefit is financial: the move gives you a few years to run this OS without the hassle of paying for extended security updates.

Another benefit to the migration to Azure is a reduction in hardware-related costs. Windows Server 2008 was the first Windows Server version to include Hyper-V, but many organizations opted to install Windows Server 2008 onto physical hardware rather than virtualizing it. If your organization runs Windows Server 2008/2008 R2 on a physical server, then this is a perfect opportunity to retire the aging server hardware.

If your Windows Server 2008/2008 R2 workloads are virtualized, then moving those VMs to Azure can free up some capacity on the virtualization hosts for other workloads.

Learn about the financial and technical impact

One disadvantage to operating your servers in Azure is the cost. You will pay a monthly fee to run Windows Server 2008 workloads in the cloud. However, it is worth noting that Microsoft offers a program called the Azure Hybrid Benefit, which gives organizations with Windows Server licenses 40% off the cost of running eligible VMs in the cloud. To get an idea of how much your workloads might cost, you can use Microsoft’s Azure pricing calculator.

Another disadvantage of moving a server workload to Azure is the increased complexity of your network infrastructure. This added complication isn’t limited to the migrating servers. Typically, you will have to create a hybrid Active Directory environment and a VPN that allows secure communication between your on-premises network and the Azure cloud.

Factor in these Azure migration considerations

For organizations that decide to migrate their Windows Server 2008 workloads to Azure, there are a number of potential migration issues to consider.

Servers often have multiple dependencies, and you will need to address these as part of the migration planning. For instance, an application may need to connect to a database that is hosted on another server. In this situation, you will have to decide whether to migrate the database to Azure or whether it is acceptable for the application to perform database queries across a WAN connection.
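
One simple way to validate such a dependency during a pilot migration is to confirm that the application server in Azure can still reach the on-premises database over the VPN. Here is a minimal sketch using the built-in Test-NetConnection cmdlet; the server name and the SQL Server port are placeholders for your own environment.

# Test TCP reachability from the migrated application server to an on-premises database
# (1433 is the default SQL Server port; substitute your own server name and port)
Test-NetConnection -ComputerName sqldb01.corp.example.com -Port 1433 |
    Select-Object ComputerName, RemotePort, TcpTestSucceeded, PingSucceeded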

Similarly, you will have to consider the migration’s impact on your internet bandwidth. Some of your bandwidth will be consumed by management traffic, directory synchronizations and various cloud processes. It’s important to make sure your organization has enough bandwidth available to handle this increase in traffic.

Finally, there are differences between managing cloud workloads and ones in your data center. The Azure cloud has its own management interface that you will need to learn. Additionally, you may find your current management tools either cannot manage cloud-based resources or may require a significant amount of reconfiguring. For example, a patch management product might not automatically detect your VM in Azure; you may need to either create a separate patch management infrastructure for the cloud or provide the vendor with a path to your cloud-based resources.


Kubernetes security opens a new frontier: multi-tenancy

SAN DIEGO — As enterprises expand production container deployments, a new Kubernetes security challenge has emerged: multi-tenancy.

Among the many challenges with multi-tenancy in general is that it is not easy to define, and few IT pros agree on a single definition or architectural approach. Broadly speaking, however, multi-tenancy occurs when multiple projects, teams or tenants share a centralized IT infrastructure but remain logically isolated from one another.

Kubernetes multi-tenancy also adds multilayered complexity to an already complex Kubernetes security picture, and demands that IT pros wire together a stack of third-party and, at times, homegrown tools on top of the core Kubernetes framework.

This is because core upstream Kubernetes security features are limited to service accounts for operations such as role-based access control — the platform expects authentication and authorization data to come from an external source. Kubernetes namespaces also don’t offer especially granular or layered isolation by default. Typically, each namespace corresponds to one tenant, whether that tenant is defined as an application, a project or a service.

“To build logical isolation, you have to add a bunch of components on top of Kubernetes,” said Karl Isenberg, tech lead manager at Cruise Automation, a self-driving car service in San Francisco, in a presentation about Kubernetes multi-tenancy here at KubeCon + CloudNativeCon North America 2019 this week. “Once you have Kubernetes, Kubernetes alone is not enough.”

Karl Isenberg, tech lead manager at Cruise Automation, presents at KubeCon about multi-tenant Kubernetes security.

However, Isenberg and other presenters here said Kubernetes multi-tenancy can have significant advantages if done right. Cruise, for example, runs very large Kubernetes clusters, with up to 1,000 nodes, shared by thousands of employees, teams, projects and some customers. Kubernetes multi-tenancy means more highly efficient clusters and cost savings on data center hardware and cloud infrastructure.

“Lower operational costs is another [advantage] — if you’re starting up a platform operations team with five people, you may not be able to manage five [separate] clusters,” Isenberg added. “We [also] wanted to make our investments in focused areas, so that they applied to as many tenants as possible.”

Multi-tenant Kubernetes security an ad hoc practice for now

The good news for enterprises that want to achieve Kubernetes multi-tenancy securely is that there are a plethora of third-party tools they can use to do it, some of which are sold by vendors, and others open sourced by firms with Kubernetes development experience, including Cruise and Yahoo Media.

Duke Energy Corporation, for example, has a 60-node Kubernetes cluster in production that’s stretched across three on-premises data centers and shared by 100 web applications so far. The platform is composed of several vendors’ products, from Diamanti hyper-converged infrastructure to Aqua Security Software’s container firewall, which logically isolates tenants from one another at a granular level that accounts for the ephemeral nature of container infrastructure.

“We don’t want production to talk to anyone [outside of it],” said Ritu Sharma, senior IT architect at the energy holding company in Charlotte, N.C., in a presentation at KubeSec Enterprise Summit, an event co-located with KubeCon this week. “That was the first question that came to mind — how to manage cybersecurity when containers can connect service-to-service within a cluster.”

Some Kubernetes multi-tenancy early adopters also lean on managed services such as Google Kubernetes Engine (GKE) to take on parts of the Kubernetes security burden. GKE can encrypt secrets in the etcd data store, a capability that became available in Kubernetes 1.13 but isn’t enabled by default, according to a KubeSec presentation by Mike Ruth, one of Cruise’s staff security engineers.

Google also offers Workload Identity, which matches up GCP identity and access management with Kubernetes service accounts so that users don’t have to manage Kubernetes secrets or Google Cloud IAM service account keys themselves. Kubernetes SIG-Auth looks to modernize how Kubernetes security tokens are handled by default upstream to smooth Kubernetes secrets management across all clouds, but has run into snags with the migration process.

In the meantime, Verizon’s Yahoo Media has donated a project called Athenz to open source, which handles multiple aspects of authentication and authorization in its on-premises Kubernetes environments, including automatic secrets rotation, expiration and limited-audience policies for intracluster communication similar to those offered by GKE’s Workload Identity. Cruise also created a similar open source tool called RBACSync, along with Daytona, a tool that fetches secrets from HashiCorp Vault, which Cruise uses to store secrets instead of in etcd, and injects them into running applications, and k-rail for workload policy enforcement.

Kubernetes Multi-Tenancy Working Group explores standards

While early adopters have plowed ahead with an amalgamation of third-party and homegrown tools, some users in highly regulated environments look to upstream Kubernetes projects to flesh out more standardized Kubernetes multi-tenancy options.

For example, investment banking company HSBC can use Google’s Anthos Config Management (ACM) to create hierarchical, or nested, namespaces, which make for more granular access control mechanisms in a multi-tenant environment and simplify their management by automatically propagating shared policies between them. However, the company is following the work of the Kubernetes Multi-Tenancy Working Group, established in early 2018, in the hope it will introduce free open source utilities compatible with multiple public clouds.

Sanjeev Rampal, co-chair of the Kubernetes Multi-Tenancy Working Group, presents at KubeCon.

“If I want to use ACM in AWS, the Anthos license isn’t cheap,” said Scott Surovich, global container engineering lead at HSBC, in an interview after a presentation here. Anthos also requires VMware server virtualization, and hierarchical namespaces available at the Kubernetes layer could offer Kubernetes multi-tenancy on bare metal, reducing the layers of abstraction and potentially improving performance for HSBC.

Homegrown tools for multi-tenant Kubernetes security won’t fly in HSBC’s highly regulated environment, either, Surovich said.

“I need to prove I have escalation options for support,” he said. “Saying, ‘I wrote that’ isn’t acceptable.”

So far, the working group has two incubation projects that create custom resource definitions — essentially, plugins — that support hierarchical namespaces and virtual clusters that create self-service Kubernetes API Servers for each tenant. The working group has also created working definitions of the types of multi-tenancy and begun to define a set of reference architectures.

The working group is also considering certification of multi-tenant Kubernetes security and management tools, as well as benchmark testing and evaluation of such tools, said Sanjeev Rampal, a Cisco principal engineer and co-chair of the group.


Microsoft’s new approach to hybrid: Azure services when and where customers need them | Innovation Stories

As business computing needs have grown more complex and sophisticated, many enterprises have discovered they need multiple systems to meet various requirements – a mix of technology environments in multiple locations, known as hybrid IT or hybrid cloud.

Technology vendors have responded with an array of services and platforms – public clouds, private clouds and the growing edge computing model – but there hasn’t necessarily been a cohesive strategy to get them to work together.

“We got here in an ad hoc fashion,” said Erik Vogel, global vice president for customer experience for HPE GreenLake at Hewlett Packard Enterprise. Customers didn’t have a strategic model to work from.

Instead, he said, various business owners in the same company may have bought different software as a service (SaaS) applications, or developers may have independently started leveraging Amazon Web Services, Azure or Google Cloud Platform to develop a set of applications.

At its Ignite conference this week in Orlando, Florida, Microsoft announced its solution to such cloud sprawl. The company has launched a preview of Azure Arc, which offers Azure services and management to customers on other clouds or infrastructure, including those offered by Amazon and Google.

John JG Chirapurath, general manager for Azure data, blockchain and artificial intelligence at Microsoft, said the new service is both an acknowledgement of, and a response to, the reality that many companies face today. They are running various parts of their businesses on different cloud platforms, and they also have a lot of data stored on their own new or legacy systems.

In all those cases, he said, these customers are telling Microsoft they could use the benefits of Azure cloud innovation whether or not their data is stored in the cloud, and they could benefit from having the same Azure capabilities – including security safeguards – available to them across their entire portfolio.

“We are offering our customers the ability to take their services, untethered from Azure, and run them inside their own datacenter or in another cloud,” Chirapurath said.

Microsoft says Azure Arc builds on years of work the company has done to serve hybrid cloud needs. For example, Azure Resource Manager, released in 2014, was created with the vision that it would manage resources outside of Azure, including in companies’ internal servers and on other clouds.

That flexibility can help customers operate their services on a mix of clouds more efficiently, without purchasing new hardware or switching among cloud providers. Companies can use a public cloud to obtain computing power and data storage from an outside vendor, but they can also house critical applications and sensitive data on their own premises in a private cloud or server.

Then there’s edge computing, which stores data where the user is, in between the company and the public cloud: for example, on customers’ mobile devices or on sensors in smart buildings like hospitals and factories.


That’s compelling for companies that need to run AI models on systems that aren’t reliably connected to the cloud, or to make computations more quickly than if they had to send large amounts of data to and from the cloud. But it also must work with companies’ cloud-based, internet-connected systems.

“A customer at the edge doesn’t want to use different app models for different environments,” said Mark Russinovich, Azure chief technology officer. “They need apps that span cloud and edge, leveraging the same code and same management constructs.”

Streamlining and standardizing a customer’s IT structure gives developers more time to build applications that produce value for the business instead of managing multiple operating models. And enabling Azure to integrate administrative and compliance needs across the enterprise, automating system updates and security enhancements, brings additional savings in time and money.

“You begin to free up people to go work on other projects, which means faster development time, faster time to market,” said HPE’s Vogel. HPE is working with Microsoft on offerings that will complement Azure Arc.

Arpan Shah, general manager of Azure infrastructure, said Azure Arc allows companies to use Azure’s governance tools for their virtual machines, Kubernetes clusters and data across different locations, helping ensure companywide compliance on things like regulations, security, spending policies and auditing tools.

Azure Arc is underpinned in part by Microsoft’s commitment to technologies that customers are using today, including virtual machines, containers and Kubernetes, an open source system for organizing and managing containers. That makes clusters of applications easily portable across a hybrid IT environment – to the cloud, the edge or an internal server.

“It’s easy for a customer to put that container anywhere,” Chirapurath said. “Today, you can keep it here. Tomorrow, you can move it somewhere else.”

Microsoft says these latest Azure updates reflect an ongoing effort to better understand the complex needs of customers trying to manage their Linux and Windows servers, Kubernetes clusters and data across environments.

“This is just the latest wave of this sort of innovation,” Chirapurath said. “We’re really thinking much more expansively about customer needs and meeting them according to how they’d like to run their applications and services.”

Top image: Erik Vogel, global vice president for customer experience for HPE GreenLake at Hewlett Packard Enterprise, with a prototype of memory-driven computing. HPE is working with Microsoft on offerings that will complement Azure Arc. Photo by John Brecher for Microsoft.


DevOps security shifts left, but miles to go to pass hackers

DevOps security processes have matured within enterprises over the last year, but IT shops still have far to go to stem the tide of data breaches.

DevOps teams have built good security habits almost by default as they have increased the frequency of application releases and adopted infrastructure and security automation to improve software development. More frequent, smaller, automated app deployments are less risky and less prone to manual error than large and infrequent ones.

Microservices management and release automation demand tools such as infrastructure as code and configuration management software to manage infrastructure, which similarly cut down on human error. Wrapped up into a streamlined GitOps process, Agile and DevOps techniques automate the path to production while locking down access to it — a win for both security and IT efficiency.

However, the first six months of 2019 saw such a flood of high-profile data breaches that at least one security research firm called it the worst year on record. And while cybersecurity experts aren’t certain how trustworthy that measurement is — there could just be more awareness of breaches than there used to be, or more digital services to attack than in past years — they feel strongly that DevOps security teams still aren’t staying ahead of attackers, who have also learned to automate and optimize what they do.


“The attackers have innovated, and that’s one of the problems with our industry — we’re at least five years behind the attackers,” said Adrian Sanabria, advocate at Thinkst Applied Research, a cybersecurity research and software firm based in South Africa. “We’re in a mode where we’re convinced, with all this VC money and money spent on marketing, that we have to wait for a product to be available to solve these problems … and they’re never going to be ready in time.”

DevOps security tools aren’t enough

A cybersecurity tool is only as good as how it’s used, Sanabria said, citing the example of the Target breach in 2013, in which security software detected potentially malicious activity but IT staff didn’t act on its warnings. In part, this was attributed to alert fatigue, as IT teams increasingly deal with a fire hose of alerts from various monitoring systems. But it also has to do with IT training, Sanabria said.

“In the breach research I’ve done, generally everyone owned [the tools] they needed to own,” he said. “They either didn’t know how to use it, hadn’t set it up correctly, or they had some kind of process issue where the [tools] did try to stop the attacks or warn them of it, [but] they either didn’t see the alert or didn’t act on the alert.”

The attackers have innovated, and that’s one of the problems with our industry — we’re at least five years behind the attackers.
Adrian Sanabria, advocate, Thinkst Applied Research

DevOps security, or DevSecOps, teams have locked down many of the technical weak points within infrastructure and app deployment processes, but all too often, the initial attack takes a very human form, such as a spoofed email that seems to come from a company executive, directing the recipient to transfer funds to what turns out to be an attacker’s account.

“Often, breaches don’t even require hacking,” Sanabria said. “It requires understanding of financial processes, who’s who in the company and the timing of certain transactions.”

Preventing such attacks requires that employees be equally familiar with that information, Sanabria said. That lack of awareness is driving a surge in ransomware attacks, which rely almost entirely on social engineering to hold vital company data hostage.

Collaboration and strategy vital for DevOps security

Thus, in a world of sophisticated technology, the biggest problems remain human, according to experts — and their solutions are also rooted in organizational dynamics and human collaboration, starting with a more strategic, holistic organizational approach to IT security.


“Technology people don’t think of leadership skills and collaboration as primary job functions,” said Jeremy Pullen, CEO of Polodis, a digital transformation consulting firm in Atlanta. “They think the job is day-to-day technical threat remediation, but you can’t scale your organization when you have people trying to do it all themselves.”

An overreliance on individual security experts within enterprises leads to a ‘lamppost effect,’ where those individuals overcompensate for risks they’re familiar with, but undercompensate in areas they don’t understand as well, Pullen said. That kind of team structure also results in the time-honored DevOps bugaboo of siloed responsibilities, which increases security fragility in the same way it dampens application performance and infrastructure resilience.

“Developers and operations may be blind to application security issues, while security tends to focus on physical and infrastructure security, which is most clearly defined in their threat models,” Pullen said. “Then it becomes a bit of a game of Whac-a-Mole … where you’re trying to fix one thing and then another thing pops up, and it gets really noisy.”

Instead, DevSecOps teams must begin to think of themselves and their individual job functions as nodes in a network rather than layers of a stack, Pullen said, and work to understand how the entire organization fits together.

“Everyone’s unclear about what enterprise architecture is,” he said. “They stick Jenkins in the middle of a process but might not understand that they need to separate that environment into different domains and understand governance boundaries.”

Effective DevOps security requires more team practice

Strategically hardening applications and IT management processes to prevent attacks is important, but organizations must also strategically plan — and practice — their response to ongoing security incidents that can and will still happen.

“Cybersecurity so far has been focused on solitary study and being the best technical practitioner you can be, and building stand-alone applications and infrastructure to the best technical standard, which reminds me of golf,” said Nick Drage, principal consultant at Path Dependence Ltd., a cybersecurity consulting firm based in the U.K., in a presentation at DevSecCon in Seattle last month. “But in reality, cybersecurity is a fight with an opponent over territory — much more like American football.”

As long as security is practiced by isolated individuals, it will be as effective as taking the football field armed with golf clubs, Drage said. Instead, the approach should be more team-oriented, cooperative, and, especially, emphasize team practice to prepare for ‘game time.’

This is the future of governance — controlling risk on the human side of our systems.
Charles Betz, analyst, Forrester Research

American football defenses are particularly instructive for DevOps security strategy ideas about defense in depth, Drage said in his presentation. Among other things, they demonstrate that an initial incursion into a team’s territory — yards gained — does not amount to a breach — points scored. IT teams should also apply that thinking as they try to anticipate and respond to threats — how to protect the ‘end zone,’ so to speak, and not just their half of the field.

Thinkst’s Sanabria uses a different analogy — the DevOps security team as firefighters.

“We’re not going to get good at this if we don’t practice it,” he said. “We buy all the tools, but imagine firefighters if they’d never donned the suits, never driven the truck, never used the hose and they’re not expecting the amount of force and it knocks them down. Going out to their first fire would look like a comedy.”

And yet that’s exactly what happens with many enterprise IT security teams when they must respond to incidents, Sanabria said, in part because companies don’t prioritize experiential learning over informational training.

The good news is that IT analysts expect the next wave of DevOps security to look very much like the chaos engineering used in many organizations to improve system resiliency, but with a human twist. Organizations such as OpenSOC have begun to emerge that set up training workshops, including simulated ransomware attacks, for companies to practice security incident response. Companies can also do this internally by treating penetration tests as real attacks, a practice known as red teaming. Free and open source tools such as Infection Monkey from Guardicore Labs also simulate attack scenarios.


Tech companies such as Google already practice their own form of human-based chaos testing, where employees are selected at random for a ‘staycation,’ directed to take a minimum of one hour to answer work emails, or to intentionally give wrong answers to questions, to test the resiliency of the rest of the organization.

“Despite the implications of the word ‘chaos,’ some companies are already presenting chaos engineering to their risk management leaders and auditors,” said Charles Betz, analyst at Forrester Research. “This is the future of governance — controlling risk on the human side of our systems.”


How to work with the WSUS PowerShell module

In many enterprises, you use Windows Server Update Services to centralize and distribute Windows patches to end-user devices and servers.

WSUS is a free service that installs on Windows Server and syncs Windows updates locally. Clients connect to and download patches from the server. Historically, you managed WSUS with a GUI, but with PowerShell and the PoshWSUS community module, you can automate your work with WSUS for more efficiency. This article covers how to use some of the common cmdlets in the module.
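
If the module is not already installed, it is published to the PowerShell Gallery under the name PoshWSUS. A minimal sketch to install and load it, run from the WSUS server or a management workstation:

# Install the community module from the PowerShell Gallery and load it
Install-Module -Name PoshWSUS -Scope CurrentUser
Import-Module -Name PoshWSUS

# List the cmdlets the module exposes
Get-Command -Module PoshWSUS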

Connecting to a WSUS server

The first task to do with PoshWSUS is to connect to an existing WSUS server so you can run cmdlets against it. This is done with the Connect-PSWSUSServer cmdlet. The cmdlet provides the option to make a secure connection, which is normally on port 8531 for SSL.

Connect-PSWSUSServer -WsusServer wsus -Port 8531 -SecureConnection
Name Version PortNumber ServerProtocolVersion
---- ------- ---------- ---------------------
wsus 10.0.14393.2969 8530 1.20

View the WSUS clients

There are various cmdlets used to view WSUS client information. The most apparent is Get-PSWSUSClient, which shows client information such as hostname, group membership, hardware model and operating system type. The example below gets information on a specific machine named Test-1.

Get-PSWSUSClient Test-1 | Select-Object *
ComputerGroup : {Windows 10, All Computers}
UpdateServer : Microsoft.UpdateServices.Internal.BaseApi.UpdateServer
Id : 94a2fc62-ea2e-45b4-97d5-10f5a04d3010
FullDomainName : Test-1
IPAddress : 172.16.48.153
Make : HP
Model : HP EliteDesk 800 G2 SFF
BiosInfo : Microsoft.UpdateServices.Administration.BiosInfo
OSInfo : Microsoft.UpdateServices.Administration.OSInfo
OSArchitecture : AMD64
ClientVersion : 10.0.18362.267
OSFamily : Windows
OSDescription : Windows 10 Enterprise
ComputerRole : Workstation
LastSyncTime : 9/9/2019 12:06:59 PM
LastSyncResult : Succeeded
LastReportedStatusTime : 9/9/2019 12:18:50 PM
LastReportedInventoryTime : 1/1/0001 12:00:00 AM
RequestedTargetGroupName : Windows 10
RequestedTargetGroupNames : {Windows 10}
ComputerTargetGroupIds : {59277231-1773-401f-bf44-2fe09ac02b30, a0a08746-4dbe-4a37-9adf-9e7652c0b421}
ParentServerId : 00000000-0000-0000-0000-000000000000
SyncsFromDownstreamServer : False

WSUS usually organizes machines into groups, such as all Windows 10 machines, to apply update policies. The command below counts the number of machines in a particular group called Windows 10 with the Get-PSWSUSClientsInGroup cmdlet:

Get-PSWSUSClientsInGroup -Name 'Windows 10' | Measure-Object | Select-Object -Property Count
Count
-----
86
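
Because these cmdlets return rich objects, you can also combine them with standard PowerShell cmdlets for quick reporting. For example, the following sketch counts the clients in the built-in All Computers group by the operating system they report, using only properties shown in the client output above:

# Count clients in the built-in All Computers group by reported operating system
Get-PSWSUSClientsInGroup -Name 'All Computers' |
    Group-Object -Property OSDescription |
    Sort-Object -Property Count -Descending |
    Select-Object -Property Count, Name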

How to manage Windows updates

With the WSUS PowerShell module, you can view, approve and decline updates on the WSUS server, a very valuable and powerful feature. The command below finds all the Windows 10 feature updates with the title “Feature update to Windows 10 (business editions).” The output shows various updates on my server for version 1903 in different languages:

Get-PSWSUSUpdate -Update "Feature update to Windows 10 (business editions)"  | Select Title
Title
-----
Feature update to Windows 10 (business editions), version 1903, en-gb x86
Feature update to Windows 10 (business editions), version 1903, en-us arm64
Feature update to Windows 10 (business editions), version 1903, en-gb arm64
Feature update to Windows 10 (business editions), version 1903, en-us x86
Feature update to Windows 10 (business editions), version 1903, en-gb x64
Feature update to Windows 10 (business editions), version 1903, en-us x64

Another great feature of this cmdlet is that it shows updates that arrived after a particular date. The following command lists the first five updates that arrived in the last day:

Get-PSWSUSUpdate -FromArrivalDate (Get-Date).AddDays(-1) | Select-Object -First 5
Title KnowledgebaseArticles UpdateType CreationDate UpdateID
----- --------------------- ---------- ------------ --------
Security Update for Microso... {4475607} Software 9/10/2019 10:00:00 AM 4fa99b46-765c-4224-a037-7ab...
Security Update for Microso... {4475574} Software 9/10/2019 10:00:00 AM 1e489891-3372-43d8-b262-8c8...
Security Update for Microso... {4475599} Software 9/10/2019 10:00:00 AM 76187d58-e8a6-441f-9275-702...
Security Update for Microso... {4461631} Software 9/10/2019 10:00:00 AM 86bdbd3b-7461-4214-a2ba-244...
Security Update for Microso... {4475574} Software 9/10/2019 10:00:00 AM a56d629d-8f09-498f-91e9-572...

The approval and rejection of updates is an important part of managing Windows updates in the enterprise. The WSUS PowerShell module makes this easy to do. A few years ago, Microsoft began releasing preview updates for testing purposes. I typically want to decline these updates to avoid their installation on production machines. The following command finds every update with the string “Preview of” in the title and declines them with the Deny-PSWSUSUpdate cmdlet.

Get-PSWSUSUpdate -Update "Preview of" | Where-Object {$_.IsDeclined -eq 'False' } | Deny-PSWSUSUpdate
Patch IsDeclined
----- ----------
2019-08 Preview of Quality Rollup for .NET Framework 3.5.1 on Windows Server 2008 R2 for Itanium-based Systems (KB4512193) True
2019-08 Preview of Quality Rollup for .NET Framework 3.5.1, 4.5.2, 4.6, 4.6.1, 4.6.2, 4.7, 4.7.1, 4.7.2, 4.8 for Windows 7 (KB4512193) True
2019-08 Preview of Quality Rollup for .NET Framework 3.5.1, 4.5.2, 4.6, 4.6.1, 4.6.2, 4.7, 4.7.1, 4.7.2, 4.8 for Windows 7 and Server 2008 R2 for x64 (KB4512193) True
2019-07 Preview of Quality Rollup for .NET Framework 2.0 on Windows Server 2008 SP2 for Itanium-based Systems (KB4512196) True
2019-08 Preview of Quality Rollup for .NET Framework 3.5, 4.5.2, 4.6, 4.6.1, 4.6.2, 4.7, 4.7.1, 4.7.2, 4.8 for Windows Server 2012 for x64 (KB4512194) True
2019-07 Preview of Quality Rollup for .NET Framework 2.0, 3.0, 4.5.2, 4.6 on Windows Server 2008 SP2 (KB4512196) True
2019-08 Preview of Quality Rollup for .NET Framework 3.5, 4.5.2, 4.6, 4.6.1, 4.6.2, 4.7, 4.7.1, 4.7.2, 4.8 for Windows 8.1 and Server 2012 R2 for x64 (KB4512195) True
2019-07 Preview of Quality Rollup for .NET Framework 2.0, 3.0, 4.5.2, 4.6 on Windows Server 2008 SP2 for x64 (KB4512196) True
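
The counterpart to declining is approving updates for a target group, which PoshWSUS handles with the Approve-PSWSUSUpdate cmdlet. The sketch below approves a set of updates for the Windows 10 group; the update title is only an example, and the -Group and -Action parameter names are taken from the module’s documentation, so confirm them against the version you have installed.

# Approve matching updates for installation on the Windows 10 target group
# (update title is an example; -Group and -Action names assumed from the module's documentation)
Get-PSWSUSUpdate -Update "2019-09 Cumulative Update for Windows 10" |
    Approve-PSWSUSUpdate -Group 'Windows 10' -Action Install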

Syncing WSUS with Microsoft’s servers

In the WSUS GUI, users can set up a daily synchronization between their WSUS server and the Microsoft update servers to download new updates. I like to synchronize more than once a day, especially on Patch Tuesday when you may get several updates in one day. For this reason, you can create a scheduled task that runs a WSUS sync hourly for a few hours per day. The script can be as simple as this command below:

Start-PSWSUSSync
Synchronization has been started on wsus.
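
To run that sync on a schedule, wrap the connect and sync commands in a script and register it with the built-in ScheduledTasks cmdlets. A minimal sketch follows; the script path, start time and SYSTEM account are placeholders, and the trigger shown repeats hourly for eight hours on the day it fires, so adjust it for the recurrence you want.

# Register a task that runs the WSUS sync script hourly for eight hours, starting at 8 a.m.
# (C:\Scripts\Start-WsusSync.ps1 is a placeholder containing Connect-PSWSUSServer and Start-PSWSUSSync)
$action = New-ScheduledTaskAction -Execute 'powershell.exe' -Argument '-NoProfile -File C:\Scripts\Start-WsusSync.ps1'
$trigger = New-ScheduledTaskTrigger -Once -At 8am -RepetitionInterval (New-TimeSpan -Hours 1) -RepetitionDuration (New-TimeSpan -Hours 8)
Register-ScheduledTask -TaskName 'Hourly WSUS Sync' -Action $action -Trigger $trigger -User 'SYSTEM' -RunLevel Highest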

Performing cleanups

A WSUS server can be fickle. I have had to rebuild WSUS servers several times, and it is a pretty lengthy process because you have to download all the updates to the new server. You can avoid this process by running a cleanup on the WSUS server. The Start-PSWSUSCleanup cmdlet performs many of these important actions, such as declining superseded updates, cleaning up obsolete updates and removing obsolete computers:

Start-PSWSUSCleanup -DeclineSupersededUpdates -DeclineExpiredUpdates -CleanupObsoleteUpdates -CompressUpdates -CleanupObsoleteComputers -CleanupUnneededContentFiles
Beginning cleanup, this may take some time...
SupersededUpdatesDeclined : 223
ExpiredUpdatesDeclined : 0
ObsoleteUpdatesDeleted : 0
UpdatesCompressed : 4
ObsoleteComputersDeleted : 6
DiskSpaceFreed : 57848478722


SAP HANA database pivotal to SAP’s past — and future

Twenty years ago, enterprises may have turned to SAP for back-office business enterprise software. But these days, SAP wants to be much more than that.

A big part of SAP’s strategy has to do with SAP HANA, an in-memory database the company initially released in 2010. It is now the gateway to what SAP calls the intelligent enterprise, where data is used to improve business processes and develop new business models.

The first part of this two-part series looks at how SAP, which has been around for 47 years, has transitioned from a company that focused primarily on back-office business enterprise software to one that endeavors to transform organizations into intelligent enterprises.

Broadening the scope

SAP’s story in the last 20 years has been one of continually broadening scope, according to Lloyd Adams, managing director of the East Region at SAP America Inc. He joined the company in 1998.

In the late 1990s and early 2000s, “we were known more as an ERP company — perhaps back office only,” Adams said. “But through the years, both through organic development and a combination of development and acquisition, we’ve positioned ourselves to bring the back office to the front office to help provide the intelligent enterprise.”

Anchored by SAP R/3, its pioneering client-server ERP platform, SAP entered a period of dramatic growth in the late 1990s. It rode the wave of Y2K fears, as businesses scrambled to consolidate IT on back-office ERP systems.


“The upgrade fever that Y2K created was really enormous and a lot of folks were pushing to use Y2K as a way to rationalize IT spending,” said Joshua Greenbaum, principal at Enterprise Applications Consulting. “Also the Euro changeover was coming, and there was a lot of interest in looking at SAP because of how it could help manage European currency changes. So those two phenomena were really operative in the late 1990s, and SAP was right at the forefront of it.”

At the same time that SAP’s ERP business was growing, however, it faced threats from the rise of internet-based business systems and on-premises best-of-breed applications like Siebel Systems, which created a popular CRM product that Oracle acquired in 2005, and Ariba, which sold a procurement product that SAP eventually acquired in 2012, according to Jon Reed, co-founder of the ERP news and analysis firm Diginomica.com.

“SAP was able to weather those storms while expanding their ERP footprint by building out a serviceable CRM module, as well as an HR module with a globalized payroll function that has stood the test of time,” Reed said. “Their core manufacturing base remained loyal and … preferred SAP’s ‘one throat to choke’ approach and extensive consulting partners.”

Not all of SAP’s efforts succeeded. Its SAP NetWeaver integration platform fell short, and the company failed to see Salesforce — or anything SaaS — coming, Reed said.

One of the main keys to SAP’s success was to encourage its customers to undergo IT and business process reengineering in the 1990s, even if it was extremely complex, according to analyst Dana Gardner, president of Interarbor Solutions LLC in Gilford, N.H.

“Such IT-instigated business change was — and is — not easy, and the stumbles by many companies to catch up to the client-server world and implement ERP were legendary,” he said. “But imagine if those companies had not made the shift to being digital businesses in the 1990s? When the web and internet hit, manual processes and nonintegrated business functions had to adapt to a connected world, so IT went from being for the business to being the whole business.”

The idea that applications and the supporting IT infrastructure work collectively using distributed yet common data and pervasive networking to provide the best information and processes is a given these days, but SAP made this possible first, Gardner said.

Milestones in SAP's 20-year journey from R/3 to the intelligent enterprise.

The SAP HANA big bang

But by the end of the 2000s, the radical new in-memory database SAP HANA was about to change SAP’s direction again.

The release of the SAP HANA database in 2010 was the critical development that allowed SAP to conceive and begin to sell the concept of the intelligent enterprise, according to Adams. Without HANA, he said, there would be no intelligent enterprise.


“It truly revolutionized the company, the industry and our ability to transcend conversations from a back-office perspective, but then be able to sit down with our customers and try and understand what were the main opportunities that they were looking to exploit or problems they were looking to solve,” he said.

The development of SAP HANA was driven in large part by the rivalry between SAP and Oracle, according to Greenbaum. The SAP ERP applications ran mostly on Oracle databases, and in the 2000s Oracle began to aggressively encroach on SAP’s territory in the enterprise software space with moves like the bitter acquisition of ERP vendor PeopleSoft.

“For SAP this was a real wake up call, because of the dependency that they had on the Oracle database,” Greenbaum said. “That realization that they needed to get out from under Oracle, along with some research that had already been going on with in-memory databases inside SAP, began the hunt for an alternative, and that’s where the HANA project started to bear fruit.”

It has been a long, slow process for SAP to move its customers off of Oracle, which is still something of a problem today, Greenbaum said. But he believes HANA is now firmly established as the database of choice for customers.

Missteps with the SAP HANA database?

However, the emphasis on the SAP HANA database might have also been a distraction that took the company away from innovating on the applications that form SAP’s core user base, according to analyst Vinnie Mirchandani, founder of Deal Architect.


“Every few years, SAP gets enamored with platforms and tools,” Mirchandani said. “NetWeaver and HANA, in particular, distracted the company from an application focus, without generating much revenue or market share in those segments.”

SAP was fundamentally correct that in-memory technology and real-time ERP were the ways of the future, but its push into databases with HANA is still a questionable strategy, according to Reed.

“Whether SAP should have entered the database business themselves is still open to second-guessing,” he said. “You can argue this move has distracted SAP from focusing on their homegrown and acquired cloud applications. For example, would SAP be much further ahead on SuccessFactors functionality if they hadn’t spent so much time putting SuccessFactors onto HANA?”

Buying into the cloud

SAP was slow to react to the rise of enterprise cloud computing and SaaS applications like Salesforce, but it course-corrected by going on a cloud application buying spree, acquiring SuccessFactors in 2011, Ariba in 2012, Hybris in 2013, and Fieldglass and Concur in 2014.

Combining these cloud applications with SAP HANA “completely changed the game” for the company, Adams said.

“We eventually began to put those cloud line of business solutions on the HANA platform,” he said. “That’s given us the ability to tell a full intelligent enterprise story in ways that we weren’t fully poised to do [before HANA].”

SAP’s strategy of buying its way into the cloud has been largely successful, although efforts to move core legacy applications to the cloud have been mixed, Greenbaum said.

“SAP can claim to be one of the absolute leaders in the cloud enterprise software space,” he said. “It’s a legacy that is tempered by the fact that they’re still pulling the core legacy R/3 and ECC customers into the cloud, which has not worked out as well as SAP would like, but in terms of overall revenue and influence in the area, they’ve made their mark.”

Although SAP has proved to be adaptable to changing technologies and business trends, the future is in question. Part two of this series will look at the release of SAP S/4HANA (the rewriting of SAP’s signature Business Suite on HANA), the emergence of the SAP intelligent enterprise, and SAP’s focus on customer experience.


FBI says $26B lost to business email compromise over last 3 years

Business email compromise has cost a staggering amount of money for enterprises, according to the FBI.

The bureau posted a public service announcement Tuesday that showed business email compromise (BEC) attacks have cost organizations worldwide more than $26 billion between June 2016 and July of this year. The three-year total is based on actual victim complaints reported to the FBI’s Internet Crime Complaint Center (IC3). Earlier this year, the IC3’s 2018 Internet Crime Report highlighted business email compromise as an evolving threat that accounted for a growing number of cybercrime-related losses for enterprises.

“The scam is frequently carried out when a subject compromises legitimate business or personal email accounts through social engineering or computer intrusion to conduct unauthorized transfers of funds,” the FBI wrote in its alert.

The FBI also said it tracked a 100% increase in global losses from business email compromise attacks between May 2018 and July of this year. The bureau said the increase was partially due to a greater awareness of the threat, which the FBI said “encourages reporting to the IC3 and international and financial partners.”

Losses from business email compromise attacks have alarmed some in the cyber insurance market. Jeffrey Smith, managing partner at Cyber Risk Underwriters, said during a Black Hat 2019 session that the two most common cyber insurance claims his company saw were for ransomware and wire transfer fraud related to email attacks.

“Ransomware isn’t too surprising, but the wire transfer fraud claims we’re seeing are trending in a bad direction,” Smith said. “If you’re sending a wire [transfer], just pick up the phone and call the person who’s getting it.”

In July, insurance giant American International Group (AIG) Inc. reported that business email compromise attacks had become the leading cause of cyber insurance claims, surpassing ransomware. According to AIG’s report, business email compromise accounted for nearly a quarter of all reported cyber incidents in 2018 for the EMEA region.

The FBI alert recommended that employees enable two-factor authentication to protect against threat actors looking to assume control of email accounts. The alert also recommended employees “ensure the URL in emails is associated with the business it claims to be from,” though this step wouldn’t necessarily prevent business email compromise attacks where attackers have gained control of legitimate email accounts within an organization.

Law enforcement takedowns

Shortly after the FBI alert was issued, the U.S. Department of Justice (DOJ) announced that 281 individuals had been arrested in “Operation reWired,” a global law enforcement effort to take down business email compromise campaigns.

Operation reWired was conducted over a four-month period and resulted in seizures of nearly $3.7 million in assets. Arrests were made in the U.S., Nigeria, France, Italy, Japan, Turkey, the U.K. and other countries, with 74 arrests made in the U.S. and 167 arrests in Nigeria; the Justice Department said foreign individuals who conduct business email compromise scams “are often members of transnational criminal organizations, which originated in Nigeria but have spread throughout the world.”

The DOJ didn’t say what the total losses were for the business email compromise scams disrupted by Operation reWired, but it did note that suspects were involved in a range of attacks, including “lottery scams” — where threat actors convince victims to pay phony fees or taxes in order to receive lottery payouts — and “romance scams” — where fake online personas trick victims into making fraudulent transfers or transactions.

“Through Operation reWired, we’re sending a clear message to the criminals who orchestrate these BEC schemes: We’ll keep coming after you, no matter where you are,” said FBI Director Christopher Wray in a statement. “And to the public, we’ll keep doing whatever we can to protect you. Reporting incidents of BEC and other internet-enabled crimes to the IC3 brings us one step closer to the perpetrators.”


Low-code goes mainstream to ease app dev woes

Low-code/no-code application development has gone mainstream as enterprises face growing demand to turn out more and more applications without enough skilled developers.

A recent Forrester Research study showed that 23% of the 3,200 developers surveyed said their firms have adopted low-code development platforms, and another 22% said their organizations plan to adopt low-code platforms in the next year. That data was gathered in late 2018, so by the end of this year, those numbers should combine to be close to 50% of developers whose organizations have adopted low-code platforms, said John Rymer, an analyst at Forrester.

“That seems like mainstream to me,” he said, adding that low-code/no-code comes up routinely with his clients nowadays. In fact, low-code development could possibly be as impactful on the computing industry as the creation of the internet or IBM’s invention of the PC, he said.

The industry is on the cusp of a huge change to incorporate business people into the way software is built and delivered, Rymer said.

If you look ahead five years or so, we can see maybe 100 million people — business people — engaged in producing software.
John Rymer, analyst, Forrester Research

“If you believe that there are six million developers in the world and we believe there are probably a billion business people in the world, if you look ahead five years or so, we can see maybe 100 million people — business people — engaged in producing software,” he said. “And I think that is the change we’re all starting to witness.”

Meanwhile, Forrester said there are eight key reasons for enterprises to adopt low-code platforms:

  • Support product or service innovation.
  • Empower departmental IT to deliver apps.
  • Empower employees outside of IT to deliver apps.
  • Make the app development processes more efficient.
  • Develop apps more quickly.
  • Reduce costs of app development.
  • Increase the number of people who develop applications.
  • Develop unique apps for specific business needs.

The top three types of apps built with low-code tools are complete customer-facing apps (web or mobile), business process and workflow apps, and web or mobile front ends, Rymer said. Meanwhile, the top three departments using low-code are IT, customer service or call center, and digital business or e-commerce, he added.

Low-code landscape shaped by business users

Interest in low-code/no-code adoption is surging not just to increase developers’ productivity, but also to empower enterprise business users.

A Gartner report on the low-code space, released in August 2019, predicted that by 2024, 75% of large enterprises will use at least four low-code development tools for both IT application development and citizen development, and over 65% of applications will be developed with low-code technology. Upwork, the web platform for matching freelance workers with jobs, recently identified low-code development skills as rapidly gaining in popularity, particularly for developers familiar with Salesforce’s Lightning low-code tools to build web apps.

Low-code analyses from Gartner and Forrester in 2018 did not rank Microsoft as a leader, but the software giant shot up in the rankings with the latest release of its Power Platform and PowerApps low-code environment that broadly supports both citizen developers and professional developers. This helps bring the vast community of Visual Studio and Visual Studio Code developers into the fold, said Charles Lamanna, general manager of application platform at Microsoft.

Other low-code platform vendors have shifted focus to business users. A study commissioned by low-code platform vendor OutSystems showed results quite similar to the Gartner and Forrester analyses. Out of 3,300 developers surveyed, 41% of respondents said their organization already uses a low-code platform, and another 10% said they were about to start using one, according to the study.

Low-code vendor Mendix now offers a portion of its platform aimed at business people as well. With the Mendix platform, eXp Realty, a Bellingham, Wash., cloud-based real estate brokerage, cut its onboarding process for new agents from 18 steps down to nine, said Steve Ledwith, the company’s vice president of engineering.

Gartner’s latest low-code report lists OutSystems, Salesforce, Mendix, Microsoft and Appian among the leaders. The most recent Forrester Wave report on the low-code space, published in March 2019, named the same four core leaders but swapped out Appian for Kony.

The rising popularity of low-code/no-code platforms also means the marketplace itself is active. “Low-code platform leaders are growing fast and the smaller companies are finding a niche,” said Mike Hughes, principal platform evangelist at OutSystems.

Last year, Siemens acquired Mendix for $730 million. And just this week, Temenos, a Geneva, Switzerland-based banking software company, acquired Kony for $559 million, plus up to another $21 million if unspecified goals are met. Both Temenos and Siemens said they acquired the low-code platforms to speed up their own internal application development, as well as to advance and sell the platforms to customers.

“We wanted to shore up our banking software with Kony’s low-code platform and particularly their own banking application built with their product,” said Mark Gunning, global business solutions director at Temenos. Kony also will help advance Temenos’ presence in the U.S., he added.

As enterprises rely more on these platforms to develop their applications, look for consolidation ahead in the low-code/no-code space. Gartner now tracks more than 200 companies that claim to serve the low-code market, and acquisitions such as these are another strong indicator that the market is maturing.

Salvation Army recruits low-code

Like many not-for-profits, the Salvation Army was slow to move off its old Lotus Notes platform. When the organization decided to move to Office 365 in 2016, there was no Power Platform or PowerApps, so it turned to low-code platform maker AgilePoint, based in Mountain View, Calif., said David Brown, director of applications at the Salvation Army USA West, in Rancho Palos Verdes, Calif. (Gartner ranks AgilePoint as a high-level niche player; the company doesn’t appear in Forrester’s rankings.)

The AgilePoint platform enabled the charitable organization to build more apps and respond more quickly to demands for new applications. The Salvation Army started building apps with AgilePoint in 2017 and put 10 new apps into production that year. In 2018, it delivered 20 apps, and the goal for 2019 is 30 new apps, Brown said. The organization also is considering a training program for citizen developers, he said.

“We built an app that replaced a paper process that cost us $10,000 a month,” he said. “When I can invest in a new technology and in the first year save $120,000 using something that I am not spending anywhere that much for, that’s a huge return on investment.”

Low-code, no-code lines begin to blur

No-code typically means the platform is basic and requires no coding, while low-code platforms let professional developers go under the hood and hand-code portions if they choose. However, the distinction between low-code and no-code is not absolute.

“I don’t think you are either low-code or you are no code,” said Jeffrey Hammond, another Forrester analyst. “I think you might be less code or more code. I think the no-code vendors aspire to have you do less raw text entry.”

There are times when a developer can only visually model so much before it becomes more efficient to drop into text and write something; it is the quickest, easiest way to express what you want to do.

“And if you’re typing text, to me you’re coding,” Hammond said.

Michael Beckley, CTO of Appian, based in Tysons, Va., said many of Appian’s developers would agree.

“A lot of our developers believe low-code exists to help developers write less code upfront,” he said. “And when the platform stops [when it has finished executing its instructions], they should just start writing code all over the place.”

Next low-code hurdles are AI, serverless

Adding artificial intelligence to the platforms to help developers build smart apps is one of the next hurdles for the low-code space. Another is to provide DevOps capabilities natively on the platforms, which is already happening with platforms from OutSystems and Mendix, among others.

However, there is a potential future connection point between the serverless and low-code spaces, Hammond said.

“They are looking to solve a similar problem, which is extracting developers from a lot of the lower-level grunt work so they can focus on building business logic,” he said.

The serverless side attacks that problem with cloud infrastructure and managed services rather than tools. The low-code space does it with tools and frameworks, but not necessarily as part of an open, standards-based approach.

With some standardization in the serverless space around Kubernetes and CloudEvents, low-code tools could begin to intersect with the high-scale infrastructure of the cloud-native space.

“If you have a common event model, you can start to build events and you can string them together and you can start to build business rules around them,” Forrester’s Hammond said. “You can go into the editors to write the business logic for them. To me, that’s an extension of low-code — and I think it can open up the floodgates to an intersection of these two different technologies.”
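As a rough illustration of what a common event model makes possible, the sketch below builds a CloudEvents-style envelope as a plain Python dictionary and strings a couple of business rules off the event type. The envelope attributes follow the CloudEvents spec, but the event types, sources and rules are hypothetical examples, not part of any vendor’s product.

    # Rough sketch of stringing business rules off a common event model.
    # Envelope attributes follow the CloudEvents spec (specversion, id, source, type, time, data);
    # the event types and rules are made-up examples.
    import uuid
    from datetime import datetime, timezone

    def make_event(event_type: str, source: str, data: dict) -> dict:
        """Build a minimal CloudEvents-style envelope."""
        return {
            "specversion": "1.0",
            "id": str(uuid.uuid4()),
            "source": source,
            "type": event_type,
            "time": datetime.now(timezone.utc).isoformat(),
            "data": data,
        }

    # Business rules keyed by event type -- the "string them together" part.
    RULES = {
        "com.example.order.created": lambda e: f"reserve inventory for order {e['data']['order_id']}",
        "com.example.order.paid": lambda e: f"schedule shipment for order {e['data']['order_id']}",
    }

    def handle(event: dict) -> str:
        rule = RULES.get(event["type"])
        return rule(event) if rule else "no rule registered for this event type"

    event = make_event("com.example.order.paid", "/webshop/checkout", {"order_id": 42})
    print(handle(event))  # schedule shipment for order 42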

Enterprises that use data will thrive; those that don’t, won’t

There’s a growing chasm between enterprises that use data and those that don’t.

Wayne Eckerson, founder and principal consultant of Eckerson Group, calls it the data divide. The companies that will thrive, he said, are the ones already embracing business intelligence, no matter the industry. They’re taking human bias out of the equation and replacing it with automated decision-making based on data and analytics.

Those that are data laggards, meanwhile, are already in a troublesome spot, and those that have not embraced analytics as part of their business model at all are simply outdated.

Eckerson has more than 25 years of experience in the BI industry and is the author of two books — Secrets of Analytical Leaders: Insights from Information Insiders and Performance Dashboards: Measuring, Monitoring, and Managing Your Business.  

In the first part of a two-part Q&A, Eckerson discusses the divide between enterprises that use data and those that don’t, as well as the importance of DataOps and data strategies and how they play into the data divide. In the second part, he talks about self-service analytics, the driving force behind the recent merger and acquisition deals, and what intrigues him about the future of BI.

How stark is the data divide, the gap between enterprises that use data and those that don’t?

Wayne Eckerson: It’s pretty stark. You’ve got data laggards on one side of that divide, and that’s most of the companies out there today, and then you have the data elite, the companies [that] were born on data, they live on data, they test everything they do, they automate decisions using data and analytics — those are the companies [that] are going to take the future. Those are the companies like Google and Amazon, but also companies like Netflix and its spinoffs like Stitch Fix. They’re heavily using algorithms in their business. Humans are littered with cognitive biases that distort our perception of what’s going on out there and make it hard for us to make objective, rational, smart decisions. This data divide is a really interesting thing I’m starting to see happening that’s separating out the companies [that] are going to be competitive in the future. I think companies are really racing, spending money on data technologies, data management, data analytics, AI.

How does a DataOps strategy play into the data divide?

Eckerson: That’s really going to be the key to the future for a lot of these data laggards who are continually spending huge amounts of resources putting out data fires — trying to fix data defects, broken jobs, these bottlenecks in development that often come from issues like uncoordinated infrastructure for data, for security. There are so many things that prevent BI teams from moving quickly and building things effectively for the business, and a lot of it is because we’re still handcrafting applications rather than industrializing them with very disciplined routines and practices. DataOps is what these companies need — first and foremost it’s looking at all the areas that are holding the flow of data back, prioritizing those and attacking those points.

What can a sound DataOps strategy do to help laggards catch up?

Eckerson: It’s improving data quality, not just at the first go-around when you build something but continuous testing to make sure that nothing is broken and users are using clean, validated data. And after that, once you’ve fixed the quality of data and the business becomes more confident that you can deliver things that make sense to them, then you can use DataOps to accelerate cycle times and build more things faster. This whole DataOps thing is a set of development practices and testing practices and deployment and operational practices all rolled into a mindset of continuous improvement that the team as a whole has to buy into and work on. There’s not a lot of companies doing it yet, but it has a lot of promise.
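The continuous testing Eckerson describes typically takes the form of small automated checks that run every time a pipeline loads data. The sketch below shows that flavor of check in Python with pandas; the table, column names and thresholds are hypothetical, and a real DataOps team would wire checks like these into its pipeline or a dedicated data-testing framework.

    # Illustrative data quality checks of the kind a DataOps pipeline might run on every load.
    # The columns and thresholds are hypothetical examples.
    import pandas as pd

    def check_orders(df: pd.DataFrame) -> list:
        """Return a list of data quality failures; an empty list means the load is clean."""
        failures = []
        if df["order_id"].duplicated().any():
            failures.append("duplicate order_id values")
        if df["amount"].lt(0).any():
            failures.append("negative order amounts")
        null_rate = df["customer_id"].isna().mean()
        if null_rate > 0.01:  # tolerate at most 1% missing customer IDs
            failures.append(f"customer_id null rate {null_rate:.1%} exceeds 1%")
        return failures

    orders = pd.DataFrame({
        "order_id": [1, 2, 2],
        "customer_id": ["a", None, "c"],
        "amount": [10.0, -5.0, 20.0],
    })
    print(check_orders(orders))  # all three checks fail for this sample load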

Data strategy differs for each company given its individual needs, but as BI evolves and becomes more widespread, more intuitive, more necessary no matter the size of the organization and no matter the industry, what will be some of the chief tenets of data strategy going forward?

Eckerson: Today, companies are racing to implement data strategies because they realize they’re … data laggard[s]. In order to not be disrupted in this whole data transformation era, they need a strategy. They need a roadmap and a blueprint for how to build a more robust infrastructure for leveraging data, for internal use, for use with customers and suppliers, and also to embed data and analytics into the products that they build and deliver. The data strategy is a desire to catch up and avoid being disrupted, and also as a way to modernize because there’s been a big leap in the technologies that have been deployed in this area — the web, the cloud, big data, big data in the cloud, and now AI and the ability to move from reactive reporting to proactive predictions and to be able to make recommendations to users and customers on the spot. This is a huge transformation that companies have to go through, and so many of them are starting at zero.

So it’s all about the architecture?

Eckerson: A fundamental part of the data strategy is the data architecture, and that’s what a lot of companies focus on. In fact, for some companies the data strategy is synonymous with the data architecture, but that’s a little shortsighted because there are lots of other elements to a data strategy that are equally important. Those include the organization — the people and how they work together to deliver data capabilities and analytic capabilities — and the culture, because you can build an elegant architecture, you can buy and deploy the most sophisticated tools. But if you don’t have a culture of analytics, if people don’t have a mindset of using data to make decisions, to weigh options to optimize processes, then it’s all for naught. It’s the people, it’s the processes, it’s the organization, it’s the culture, and then, yes, it’s the technology and the architecture too.

Editors’ note: This interview has been edited for clarity and conciseness.
