Verizon’s long-term strategy is to make mobile 5G a Wi-Fi killer. While analysts don’t see that happening this decade, it is technically possible for the next-generation wireless technology to drive Wi-Fi into obsolescence.
Ronan Dunne, CEO of Verizon Consumer Group, recently entered the ongoing 5G vs. Wi-Fi tech debate when he predicted the latter’s demise. Dunne said his company’s upcoming 5G service would eventually make high-speed internet connectivity ubiquitous for its customers.
“In the world of 5G millimeter wave deployment, we don’t see the need for Wi-Fi in the future,” Dunne told attendees at a Citigroup global technology conference in Las Vegas.
Today, the millimeter wave (mmWave) spectrum used to transmit 5G signals is often blocked by physical objects such as buildings and trees, making service unreliable. Verizon believes its engineers can circumvent those limitations within five to seven years, bringing 5G wireless broadband to its 150 million customers.
Most analysts agree that Wi-Fi will remain the preferred technology for indoor wireless networking through the current decade. Beyond that, it’s technically possible for 5G services to start eroding Wi-Fi’s market dominance, particularly as the number of 5G mobile and IoT devices rises over the next several years.
“If the CEO of a major cellular carrier says something, I will take that seriously,” said Craig Mathias, principal analyst at Farpoint Group. “He could be dead wrong over the long run, but, technically, it could work.”
As an alternative to Wi-Fi, Verizon could offer small mobile base stations, such as specially designed picocells and femtocells, to carry 5G signals from the office and home to the carrier’s small cell base stations placed on buildings, lampposts or poles. The small cells would send traffic to the carriers’ core network.
Early uses for 5G
Initially, 5G could become a better option for specific uses. Examples include sports stadiums that have an atypically high number of mobile devices accessing the internet at the same time. That type of situation requires a massive expenditure in Wi-Fi gear and software that could prove more expensive than 5G technology, said Brandon Butler, an analyst at IDC.
Another better-than-Wi-Fi use for 5G would be in a manufacturing facility. Those locations often have machinery that needs an ultra-low latency connection in an area where a radio signal is up against considerable interference, Butler said.
Nevertheless, Butler stops short of predicting a 5G-only world, advising enterprises to plan for a hybrid world instead. They should look to Wi-Fi and 5G as the best indoor and outdoor technology, respectively.
“The real takeaway point here is that enterprises should plan for a hybrid world into the future,” Butler said.
Ultimately, how far 5G goes in replacing Wi-Fi will depend on whether the expense of switching is justified by reducing overall costs and receiving unique services. To displace Wi-Fi, 5G will have to do much more than match its speed.
“It’ll come down to cost and economics, and the cost and economics do not work when the performance is similar,” said Rajesh Ghai, an analyst at IDC.
Today, Wi-Fi provides a relatively easy upgrade path. That’s because, collectively, businesses have already spent billions of dollars over the years on Wi-Fi access points, routers, security and management tools. They have also hired the IT staff to operate the system.
Verizon 5G Home
While stressing the importance of mobile 5G vs. Wi-Fi, Dunne lowered expectations for the fixed wireless 5G service for the home that the carrier launched in 2018. Verizon expected its 5G Home service to eventually compete with the TV and internet services provided by cable companies.
Today, 5G Home, which is available in parts of five metropolitan markets, has taken a backseat to Verizon’s mobile 5G buildout. “It’s very much a mobility strategy with a secondary product of home,” Dunne said.
Ghai of IDC was not surprised that Verizon would lower expectations for 5G Home. Delivering the service nationwide would have required spending vast amounts of money to blanket neighborhoods with small cells.
Verizon likely didn’t see enough interest for 5G Home among consumers to justify the cost, Ghai said. “It probably hasn’t lived up to the promise.”
IBM this week launched Cloud Pak for Security, which experts say represents a major strategy shift for Big Blue’s security business.
The aim of IBM’s Cloud Pak for Security is to create a platform built on open-source technology that can connect security tools from multiple vendors and cloud platforms, helping to reduce vendor lock-in. IBM Cloud Paks are pre-integrated, containerized software packages that run on Red Hat OpenShift. IBM previously offered five Cloud Paks — Applications, Data, Integration, Automation and Multicloud Management — which could be mixed and matched to meet enterprise needs.
Chris Meenan, director of offering management and strategy at IBM Security, told SearchSecurity that Cloud Pak for Security was designed to tackle two “big rock problems” for infosec teams. The first aim was to help customers get data insights through federated search of their existing data without having to move it to one place. Second was to help “orchestrate and take action across all of those systems” via built-in case management and automation.
Meenan said IT staff will be able to take actions across a multi-cloud environment, including “quarantining users, blocking IP addresses, reimaging machines, restarting containers and forcing password resets.”
“Cloud Pak for Security is the first platform to take advantage of STIX-Shifter, an open-source technology pioneered by IBM that allows for unified search for threat data within and across various types of security tools, datasets and environments,” Meenan said. “Rather than running separate, manual searches for the same security data within each tool and environment you’re using, you can run a single query with Cloud Pak for Security to search across all security tools and data sources that are connected to the platform.”
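The approach Meenan describes can be sketched in miniature: one common query fans out to per-tool translators instead of being rewritten by hand for each product. The translator functions, tool names and query dialects below are invented for illustration; they are not the actual STIX-Shifter API.

```python
# Illustrative sketch of federated search in the style STIX-Shifter enables:
# a single common query is translated into each connected tool's native
# syntax. Everything here (field names, dialects) is hypothetical.

COMMON_QUERY = {"field": "ipv4-addr:value", "op": "=", "value": "203.0.113.7"}

def to_elastic(q):
    # Hypothetical Elasticsearch-style term query
    return {"query": {"term": {q["field"].split(":")[-1]: q["value"]}}}

def to_splunk_spl(q):
    # Hypothetical Splunk SPL-style search string
    return f'search src_ip{q["op"]}"{q["value"]}"'

TRANSLATORS = {"elastic": to_elastic, "splunk": to_splunk_spl}

def federated_search(query, tools):
    """Fan one common query out to every connected tool's dialect."""
    return {name: TRANSLATORS[name](query) for name in tools}

results = federated_search(COMMON_QUERY, ["elastic", "splunk"])
```

The payoff is the one Meenan names: the analyst writes the query once, and each connector handles the translation.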
Meenan added that Cloud Pak for Security represented a shift in IBM Security strategy because of its focus on delivering “security solutions and outcomes without needing to own the data.”
“That’s probably the biggest shift — being able to deliver that to any cloud or on-premise the customer needs,” Meenan said. “Being able to deliver that without owning the data means organizations can deploy any different technology and it’s not a headwind. Now they don’t need to duplicate the data. That’s just additional overhead and introduces friction.”
One platform to connect them all
Meenan said IBM was “very deliberate” to keep data transfers minimal, so at first Cloud Pak for Security will only take in alerts from connected vendor tools and search results.
“As our Cloud Pak develops, we plan to introduce some capability to create alerts and potentially store data as well, but as with other Cloud Paks, the features will be optional,” Meenan said. “What’s really fundamental is we’ve designed a Cloud Pak to deliver applications and outcomes but you don’t have to bring the data and you don’t have to generate the alerts. Organizations have a SIEM in place, they’ve got an EDR in place, they’ve got all the right alerts and insights, what they’re really struggling with is connecting all that in a way that’s easily consumable.”
In order to create the connections to popular tools and platforms, IBM worked with clients and service providers. Meenan said some connectors were built by IBM and some vendors built their own connectors. At launch, Cloud Pak for Security will include integration for security tools from IBM, Carbon Black, Tenable, Elastic, McAfee, BigFix and Splunk, with integration for Amazon Web Services and Microsoft Azure clouds coming later in Q4 2019, according to IBM’s press release.
Ray Komar, vice president of technical alliances at Tenable, said that from an integration standpoint, Cloud Pak for Security “eliminates the need to build a unique connector to various tools, which means we can build a connector once and reuse it everywhere.”
“Organizations everywhere are reaping the benefits of cloud-first strategies but often struggle to ensure their dynamic environments are secure,” Komar told SearchSecurity. “With our IBM Cloud Pak integration, joint customers can now leverage vulnerability data from Tenable.io for holistic visibility into their cloud security posture.”
Jon Oltsik, senior principal analyst and fellow at Enterprise Strategy Group, based in Milford, Mass., told SearchSecurity that he likes this new strategy for IBM and called it “the right move.”
“IBM has a few strong products but other vendors have much greater market share in many areas. Just about every large security vendor offers something similar, but IBM can pivot off QRadar and Resilient and extend its footprint in its base. IBM gets this and wants to establish Cloud Pak for Security as the ‘brains’ behind security. To do so, it has to be able to fit nicely in a heterogeneous security architecture,” Oltsik said. “IBM can also access on-premises data, which is a bit of unique implementation. I think IBM had to do this as the industry is going this way.”
Martin Kuppinger, founder and principal analyst at KuppingerCole Analysts AG, based in Wiesbaden, Germany, said Cloud Pak for Security should be valuable for customers, specifically “larger organizations and MSSPs that have a variety of different security tools from different vendors in place.”
“This allows for better incident response processes and better analytics. Complex attacks today might span many systems, and analysis requires access to various types of security information. This is simplified, without adding yet another big data lake,” Kuppinger told SearchSecurity. “Obviously, Security Cloud Pak might be perceived competitive by incident response management vendors, but it is open to them and provides opportunities by building on the federated data. Furthermore, a challenge with federation is that the data sources must be up and running for accessing the data — but that can be handled well, specifically when it is only about analysis; it is not about real-time transactions here.”
The current and future IBM Security products
Meenan told SearchSecurity that Cloud Pak for Security would not have any special integration with IBM Security products, which would “have to stand on their own merits” in order to be chosen by customers. However, Meenan said new products in the future will leverage the connections enabled by the Cloud Pak.
“Now what this platform allows us to do is to deliver new security solutions that are naturally cross-cutting, that require solutions that can sit across an EDR, a SIEM, multiple clouds, and enable those,” Meenan said. “When we think about solutions for insider threat, business risk, fraud, they’re very cross-cutting use cases so anything that we create that cuts across and provides that end-to-end security, absolutely the Cloud Pak is laying the foundation for us — and our partners and our customers — to deliver that.”
Oltsik said IBM’s Security Cloud Pak has a “somewhat unique hybrid cloud architecture” but noted that it is “a bit late to market and early versions won’t have full functionality.”
“I believe that IBM delayed its release to align it with what it’s doing with Red Hat,” Oltsik said. “All that said, IBM has not missed the market, but it does need to be more aggressive to compete with the likes of Cisco, Check Point, FireEye, Fortinet, McAfee, Palo Alto, Symantec, Trend Micro and others with similar offerings.”
Kuppinger said that from an overall IBM Security perspective, this platform “is rather consequent.”
“IBM, with its combination of software, software services, and implementation/consultancy services, is targeted on such a strategy of integration,” Kuppinger wrote via email. “Not owning data definitely is a smart move. Good architecture should segregate data, identity, and applications/apps/services. This allows for reuse in modern, service-oriented architectures. Locking-in data always limits that reusability.”
SEATTLE — DevSecOps strategy is as much an art as a science, but experienced practitioners have a few pointers about how to approach it, including what not to do.
The first task DevSecOps newcomers should undertake, according to Julien Vehent, security engineer at web software firm Mozilla, is to design an effective IT security team structure. In his view, the ideal is an org chart that embeds security engineers with DevOps teams but has them report to a centralized security department. This structure helps to balance their impact on day-to-day operations with maintaining a cohesive set of broad goals for the organization.
This embedding can and should go both ways, Vehent added — security champions from DevOps teams should also have access to the central security organization to inform their work.
“Sys admins are great at security,” he said in a presentation here at DevSecCon this week. “Talk to people who do red teaming in their organization, and half the time they get caught by the sys admins.”
Once DevOps and IT security teams are aligned, the most important groundwork for improved DevOps security is to gather accurate data on IT assets and the IT environment, and give IT teams access to relevant data in context, practitioners said.
“What you really want from [DevSecOps] models is to avoid making assumptions and to test those assumptions, because assumptions lead to vulnerability,” Vehent said, recalling an incident in which an assumption about SSL certificate expiration dates brought down Mozilla’s add-ons service at launch.
Since then, Vehent’s mantra has been, “Avoid assumptions, trust the data.”
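That mantra translates naturally into code: replace the assumption with an explicit, monitored check. A minimal sketch, with invented hostnames and dates; a real deployment would pull the notAfter timestamps from the live endpoints rather than a hard-coded dict:

```python
from datetime import datetime

def days_until_expiry(not_after: datetime, now: datetime) -> int:
    """Days remaining before a certificate's notAfter timestamp."""
    return (not_after - now).days

def expiry_alerts(certs: dict, now: datetime, threshold_days: int = 30):
    """Return the hosts whose certs expire within the alert window.

    `certs` maps a hostname to its parsed notAfter datetime.
    """
    return sorted(
        host for host, not_after in certs.items()
        if days_until_expiry(not_after, now) < threshold_days
    )

now = datetime(2019, 11, 1)
certs = {
    "addons.example.org": datetime(2019, 11, 15),  # expires in 14 days
    "www.example.org": datetime(2020, 6, 1),       # comfortably far out
}
alerts = expiry_alerts(certs, now)  # → ["addons.example.org"]
```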
Effective DevSecOps tools help make data-driven decisions
Once a strategy is in place, it’s time to evaluate tools for security automation and visibility. Context is key in security monitoring, said Erkang Zheng, chief information security officer at LifeOmic Security, a healthcare software company, which also markets its internally developed security visibility tools as JupiterOne.
“Attackers think in graphs, defenders think in lists, and that’s how attackers win,” Zheng said during a presentation here. “Stop thinking in lists and tables, and start thinking in entities and relationships.”
For example, it’s not enough to know how many AWS Elastic Compute Cloud (EC2) instances an organization has; teams need to understand each instance’s context by analyzing multiple factors, such as which ones are exposed to the internet, whether directly or through cross-account access methods.
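Zheng’s graphs-over-lists point can be illustrated in a few lines: a breadth-first search over an entity-relationship graph surfaces internet exposure, including an indirect cross-account path, that an inventory list would miss. All entity names below are invented for illustration.

```python
from collections import deque

# Toy entity-relationship graph; an edge A -> B means "A can reach B"
# (a load balancer rule, a security group, or a cross-account role).
EDGES = {
    "internet": ["lb-public", "acct-partner"],
    "lb-public": ["i-frontend"],
    "i-frontend": ["i-app"],
    "acct-partner": ["i-reporting"],   # indirect, cross-account path
    "i-app": [],
    "i-reporting": [],
    "i-batch": [],                     # no path from the internet
}

def exposed_instances(edges, source="internet"):
    """Breadth-first search from `source`; returns reachable instances."""
    seen, queue = set(), deque([source])
    while queue:
        node = queue.popleft()
        for nxt in edges.get(node, []):
            if nxt not in seen:
                seen.add(nxt)
                queue.append(nxt)
    return sorted(n for n in seen if n.startswith("i-"))

exposed = exposed_instances(EDGES)
```

A flat instance list would count four instances; the graph shows three are reachable from the internet, one of them only through the partner account, while `i-batch` is not exposed at all.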
IT pros can configure such security visibility graphs with APIs and graph databases, or use prepackaged tools. There are also open source tools available to help developers assess the security of their own applications, such as Mozilla’s Observatory.
LifeOmic also takes a code-driven, systematized approach to DevOps security documentation, Zheng said. Team members create “microdocuments,” similar to microservices, and check them into GitHub as version-controlled JSON and YAML files.
Another speaker urged IT pros new to DevSecOps to take a templatized approach to cybersecurity training documentation: explain, in jargon-free language, the reasons for best practices, and give specific examples of how developers often want to do things versus how they should do things to ensure application security.
“The important thing is to make the secure way the easy way to do things for developers,” said Morgan Roman, application penetration tester at electronic signature software maker DocuSign. “Find bad patterns, and make the safer way to do things the default.”
DevSecOps how tos — and how NOT to dos
Strategic planning and assessments are important, but certain lessons about DevOps security can only be learned through experience. A panel of cybersecurity practitioners from blue-chip software companies shared their lessons learned, along with tips to help attendees avoid learning the hard way.
Multiple panelists said they struggled to get effective results from code analysis tools and spent time setting up software that returned very little value or, worse, created false-positive security alerts.
“We tried to do things like, ‘Hey, let’s make sure that we aren’t just checking in secrets to the code repository,'” said Hongyi Hu, security engineer at SaaS file-sharing firm Dropbox, based in San Francisco. “It turns out that there’s not really a standardized way of doing these things. … You can find things that look like secrets, but secrets don’t always look like secrets — a really weak secret might not be captured by a tool.”
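Hu’s observation is easy to demonstrate. A common detection technique flags high-entropy, key-like strings, so a genuinely weak secret sails straight through. A rough sketch of such a check:

```python
import math

def shannon_entropy(s: str) -> float:
    """Bits per character; high values suggest random, key-like strings."""
    probs = [s.count(c) / len(s) for c in set(s)]
    return -sum(p * math.log2(p) for p in probs)

def looks_like_secret(token: str, threshold: float = 3.5) -> bool:
    """Crude secret detector of the kind many repo scanners use."""
    return shannon_entropy(token) > threshold

flagged = looks_like_secret("tok_9fQ2xLzV8mKpR4wYs")  # random-looking token
missed = looks_like_secret("password")                # weak secret slips through
```

The random-looking token is flagged, but the weak secret scores low entropy and is missed, which is exactly the gap Hu describes.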
Ultimately, no tool can replace effective communication within DevOps teams to improve IT security, panelists said. It sounds like a truism, but it represents a major shift in the way cybersecurity teams work, from being naysayers to acting as a consulting resource to apps teams. Often, a light-handed approach is best.
“The most effective strategy we got to with threat modeling was throwing away any heavyweight process we had,” said Zane Lackey, co-founder and CSO at WAF vendor Signal Sciences in Los Angeles. “As product teams were developing new features, someone from the security team would just sit in the back of their meeting and ask them, ‘How would you attack this?’ and then shut up.”
It takes time to gain DevOps teams’ trust after years of adversarial relationships with security teams, panelists said, but when all else fails, security pros can catch more flies with honey — or candy.
“We put out bowls of candy in the security team’s area and it encouraged people to come and ask them questions,” Lackey said. “It was actually wildly successful.”
Oracle’s strategy going into 2020 is to support users wherever they are, while not-so-subtly urging them to move onto Oracle cloud services – particularly databases.
In fact, some say it’s Oracle’s legacy as a database vendor that may be the key to the company’s long-term success as a major cloud player.
To reconcile the Oracle cloud persona of today with the identity of database giant that the company still holds, it helps to look back at key milestones in Oracle’s history over the past 20 years, beginning with Oracle database releases at the turn of the century.
Oracle releases Database 8i, 9i
Two major versions of Oracle’s database arrived in 1998 and 2001. Oracle Database 8i was the first written with a heavy emphasis on web applications — the “i” stood for Internet.
Then Oracle 9i introduced Real Application Clusters (RAC) for high-availability scenarios. RAC remains a popular and lucrative database option for Oracle, and one the company has held close: it is currently supported and certified only on Oracle’s own cloud service.
With the 9i update, Oracle made a concerted effort to improve the database’s administrative tooling, said Curt Monash, founder of Monash Research in Acton, Mass.
“This was largely in reaction to growing competition from Microsoft, which used its consumer software UI expertise to have true ease-of-administration advantages versus Oracle,” Monash said. “Oracle narrowed the gap impressively quickly.”
Oracle acquires PeopleSoft and Siebel
Silicon Valley is littered with the bones of once-prominent application software vendors that either shut down or got swallowed up by larger competitors. To that end, Oracle’s acquisitions of PeopleSoft and Siebel still resonate today.
The company launched what many considered a hostile takeover of PeopleSoft, then the second-largest enterprise applications vendor after SAP, in 2003. It ultimately succeeded with a $10.3 billion bid the following year. Soon after the deal closed, Oracle laid off more than half of PeopleSoft’s employees in a widely decried act.
Oracle also gained J.D. Edwards, known for its manufacturing ERP software, through the PeopleSoft purchase.
The PeopleSoft deal, along with Oracle’s $5.8 billion acquisition of Siebel in 2005, reinvented the company as a big player in enterprise applications and set up the path toward Fusion.
Oracle realized that to catch up to SAP in applications, it needed acquisitions, said Holger Mueller, an analyst with Constellation Research in Cupertino, Calif., who worked in business and product development roles at Oracle during much of the 2000s.
“To cement ownership within complex CRM, they needed Siebel,” Mueller said. Those Siebel customers largely remain in the fold today, he added. While rival HCM software vendor Workday has managed to poach some of Oracle’s PeopleSoft customers, Salesforce hasn’t had the same luck converting Siebel users over to its CRM, according to Mueller.
Oracle’s application deals were as much or more about acquiring customers as they were about technology, said Frank Scavo, president of IT consulting firm Strativa in Irvine, Calif.
“Oracle had a knack for buying vendors when they were at or just past their peak,” he said. “PeopleSoft was an example of that.”
The PeopleSoft and Siebel deals also gave Oracle the foundation, along with its homegrown E-Business Suite, for a new generation of applications in the cloud era.
Oracle’s Fusion Applications saga
Oracle first invoked the word “Fusion” in 2005, under the promise it would deliver an integrated applications suite that comprised a superset of functionality from its E-Business Suite, PeopleSoft and Siebel software, with both cloud and on-premises deployment options.
The company also pledged that Fusion apps would deliver a consumer-grade user experience and business intelligence embedded throughout processes.
Fusion Applications were supposed to become generally available in 2008, but Oracle didn’t deliver them to all customers until 2011.
It’s been suggested that Oracle wanted to take its time and had the luxury of doing so, since its installed base was still weathering a recession and had little appetite for a major application migration, no matter how useful the new software was.
Fusion Applications’ sheer scope was another factor. “It takes a long time to build software from scratch, especially if you have to replace things that were strong category leaders,” Mueller said.
Oracle’s main shortcoming with Fusion Applications was its inability to sell very much of them early on, Mueller added.
Oracle acquires Hyperion and BEA
After its applications shopping spree, Oracle eyed other areas of software. First, it bought enterprise performance management vendor Hyperion in 2007 for $3.3 billion to bolster its financials and BI business.
“Hyperion was a smart acquisition to get customers,” Mueller said. “It helped Oracle sell financials. But it didn’t help them in the move to cloud.”
In contrast, BEA and its well-respected application server did. The $8.5 billion deal also gave Oracle access to a large customer base and many developers, Mueller added.
BEA’s products also gave a boost to Oracle’s existing Fusion Middleware portfolio, said John Rymer, an analyst at Forrester. “At the time, Oracle’s big competitor in middleware was IBM,” he said. “[Oracle] didn’t have credibility.”
Oracle launches Exadata
Exadata packs servers, networking and storage, along with the Oracle database and other software, into preconfigured racks. Oracle also created storage processing software for the machines, which its marketing arm initially dubbed “engineered systems.”
With the move, Oracle sought to gain a bigger foothold in the data warehousing market against the likes of Teradata and Netezza, the latter of which was subsequently acquired by IBM.
Exadata was a huge move for Oracle, Monash said.
“They really did architect hardware around software requirements,” he said. “And they attempted to change their business relationship with customers accordingly. … For context, recall that one of Oracle’s top features in its hypergrowth years in the 1980s was hardware portability.”
In fact, it would have been disastrous if Oracle didn’t come up with Exadata, according to Monash.
“Oracle was being pummeled by independent analytics DBMS vendors, appliance-based or others,” he said. “The competition was more cost-effective, naturally, but Exadata was good enough to stem much of the bleeding.”
Exadata and its relatives are foundational to Oracle’s IaaS, and the company also offers the systems on-premises through its Cloud at Customer program.
“We offer customers choice,” said Steve Daheb, senior vice president of Oracle Cloud. “If customers want to deploy [Oracle software] on IBM or HP [gear], you could do that. But we also continue to see this constant theme in tech, where things get complicated and then they get aggregated.”
Oracle buys Sun Microsystems
Few Oracle acquisitions were as controversial as its $7.4 billion move to buy Sun Microsystems. Critics of the deal bemoaned the potential fate of open-source technologies such as the MySQL database and the Java programming language under Oracle’s ownership, and the deal faced serious scrutiny from European regulators.
Oracle ultimately made a series of commitments about MySQL, which it promised to uphold for five years, and the deal won approval in early 2010.
Sun’s hardware became a platform for Exadata and other Oracle appliances. MySQL has chugged along with regular updates, contrary to some expectations that it would be killed off.
But many other Sun technologies faded into obscurity, such as Solaris and Sun’s early version of an AWS-style IaaS. Oracle also moved Java EE to the Eclipse Foundation, although it maintains a tight hold over Java SE.
The Sun deal remains relevant today, given how it ties into Ellison’s long-term vision of making Oracle the IBM for the 21st century, Mueller said.
If realized, that aspiration would see Oracle become a “chip-to-click” technology provider, spanning silicon to end-user applications, he added. “The verdict is kind of still out over whether that is going to work.”
Oracle Database 12c
The company made a slight but telling change to its database naming convention with the 2013 release of 12c, replacing the “g” that had denoted grid computing with a “c” for cloud.
Oracle’s first iteration of 12c had multitenancy as a marquee feature. SaaS vendors at the time predominantly used multitenancy at the application level, with many customers sharing the same instance of an app. This approach makes it easier to apply updates across many customers’ apps, but is inherently weaker for security, Ellison contended.
Oracle 12c’s multi-tenant option provided an architecture where one container database held many “pluggable” databases.
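The container/pluggable relationship can be modeled in a few lines, purely as an illustration of the architecture; this is a hypothetical sketch, not anything resembling Oracle’s implementation.

```python
# Toy model of 12c-style multitenancy: one container database (CDB)
# hosts many pluggable databases (PDBs) that share its instance, while
# each tenant's data stays isolated in its own PDB.
class ContainerDatabase:
    def __init__(self, name):
        self.name = name
        self.pdbs = {}

    def plug(self, pdb_name):
        self.pdbs[pdb_name] = {"open": True}

    def unplug(self, pdb_name):
        # In this model, an unplugged PDB can later be plugged into a
        # different container, which is how consolidation and migration
        # are meant to work.
        return self.pdbs.pop(pdb_name)

cdb = ContainerDatabase("CDB1")
cdb.plug("sales_pdb")
cdb.plug("hr_pdb")
cdb.unplug("hr_pdb")   # hand this tenant off to another container
```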
Oracle later rolled out an in-memory option to compete with SAP’s HANA in-memory database. SAP hoped its customers, many of which used Oracle’s database as an underlying store, would migrate onto HANA.
2016: Oracle acquires NetSuite
Oracle’s $9.3 billion purchase of cloud ERP vendor NetSuite came with controversy, given Ellison’s large personal financial stake in the vendor. But on a strategic level, the move made plenty of sense.
NetSuite at the time had more than 10,000 customers, predominantly in the small and medium-sized business range. Oracle, in contrast, had 1,000 or so customers for its cloud ERP aimed at large enterprises, and not much presence in SMB.
Thus, the move plugged a major gap for Oracle. It also came as Oracle and NetSuite began to compete with each other at the margins for customers of a certain size.
Oracle’s move also gave it a coherent two-tier ERP strategy, wherein a customer that opens new offices would use NetSuite in those locations while tying it back to a central Oracle ERP system. This is a practice rival SAP has used with Business ByDesign, its cloud ERP product for SMBs, as well as Business One.
The NetSuite acquisition was practically destined from the start, said Scavo of Strativa.
“I always thought Larry was smart not to do the NetSuite experiment internally. NetSuite was able to develop its product as a cloud ERP system long before anyone dreamed of doing that,” Scavo said.
NetSuite customers stand to gain as the software moves onto Oracle’s IaaS, provided they receive the promised improvements in performance and elasticity, areas NetSuite has grappled with at times, Scavo added. “I’m looking forward to seeing some evidence of that.”
Oracle launches its second-generation IaaS cloud
The hyperscale IaaS market has largely coalesced around three players: AWS, Microsoft and Google. Other large companies, such as Cisco and HPE, tried something similar but conceded defeat and now position themselves as neutral middle players keen to help customers navigate and manage multi-cloud deployments.
Oracle, meanwhile, came to market with an initial public IaaS offering based in part on OpenStack, but it failed to gain much traction. It subsequently made major investments in a second-generation IaaS, called Oracle Cloud Infrastructure, which offers many advancements at the compute, network and storage layers over the original.
Oracle has again shifted gears, evidenced by its partnership with Microsoft to boost interoperability between Oracle Cloud Infrastructure and Azure. One expected use case is for IT pros to run their enterprise application logic and presentation tiers on Azure, while tying back to Oracle’s Autonomous Database on the Oracle cloud.
“We started this a while back and it’s something customers asked for,” Oracle’s Daheb said. There was significant development work involved and given the companies’ shared interests, the deal was natural, according to Daheb.
“If you think about this world we came from, with [on-premises software], we had to make it work with everybody,” Daheb said. “Part of it is working together to bring that to the cloud.”
Oracle Autonomous Database marks the path forward
Ellison will unveil updates to Oracle Database 19c, which runs both on-premises and in the cloud, in a talk at OpenWorld. While details remain under wraps, it is safe to assume the news will involve the autonomous management and maintenance capabilities Oracle first discussed in 2017.
Oracle database customers typically wait a couple of years before upgrading to a new version, preferring to let early adopters work through any remaining bugs and stability issues. Version 19c arrived in January, but is more mature than the name suggests. Oracle moved to a yearly naming convention and update path in 2018, and thus 19c is considered the final iteration of the 12c release cycle, which dates to 2013.
Oracle users should be mindful that autonomous database features have been a staple of database systems for decades, according to Monash.
But Oracle has indeed accomplished something special with its cloud-based Autonomous Database, according to Daheb. He referred to an Oracle marketing intern who was able to set up databases in just a couple of minutes on the Oracle Cloud version. “For us, cloud is the great democratizer,” Daheb said.
Storage virtualization pioneer DataCore Software revamped its strategy with a new hyper-converged infrastructure appliance, cloud-based predictive analytics service and subscription-based licensing option.
DataCore launched the new offerings this week as part of an expansive DataCore One software-defined storage (SDS) vision that spans primary, secondary, backup and archival storage across data center, cloud and edge sites.
For the last two decades, customers have largely relied on authorized partners and OEMs, such as Lenovo and Western Digital, to buy the hardware to run their DataCore storage software. But next Monday, they’ll find new 1U and 2U DataCore-branded HCI-Flex appliance options that bundle DataCore software and VMware vSphere or Microsoft Hyper-V virtualization technology on Dell EMC hardware. Pricing starts at $21,494 for a 1U box, with 3 TB of usable SSD capacity.
The HCI-Flex appliance reflects “the new thinking of the new DataCore,” said Gerardo Dada, who joined the company last year as chief marketing officer.
DataCore software can pool and manage internal storage, as well as external storage systems from other manufacturers. Standard features include parallel I/O to accelerate performance, automated data tiering, synchronous and asynchronous replication, and thin provisioning.
New DataCore SDS brand
In April 2018, DataCore unified and rebranded its flagship SANsymphony software-defined storage and Hyperconverged Virtual SAN software as DataCore SDS. Although the company’s website continues to feature the original product names, DataCore will gradually transition to the new name, said Augie Gonzalez, director of product marketing at DataCore, based in Fort Lauderdale, Fla.
With the product rebranding, DataCore also switched to simpler per-terabyte pricing instead of charging customers based on a-la-carte features, nodes with capacity limits and separate expansion capacity. With this week’s strategic relaunch, DataCore is adding the option of subscription-based pricing.
Just as DataCore faced competitive pressure to add predictive analytics, the company also needed to provide a subscription option, because many other vendors offer it, said Randy Kerns, a senior strategist at Evaluator Group, based in Boulder, Colo. Kerns said consumption-based pricing has become a requirement for storage vendors competing against the public cloud.
“And it’s good for customers. It certainly is a rescue, if you will, for an IT operation where capital is difficult to come by,” Kerns said, noting that capital expense approvals are becoming a bigger issue at many organizations. He added that human nature also comes into play. “If it’s easier for them to get the approvals with an operational expense than having to go through a large justification process, they’ll go with the path of least resistance,” he said.
DataCore Insight Services
DataCore SDS subscribers will gain access to the new Microsoft Azure-hosted DataCore Insight Services (DIS). DIS uses telemetry data the vendor has collected from thousands of SANsymphony installations to detect problems, make best-practice recommendations and plan capacity. The vendor claims more than 10,000 customers.
Like many storage vendors, DataCore will use machine learning and artificial intelligence to analyze the data and help customers to proactively correct issues before they happen. Subscribers will be able to access the information through a cloud-based user interface that is paired with a local web-based DataCore SDS management console to provide resolution steps, according to Steven Hunt, a director of product management at the company.
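DataCore has not detailed the models behind DIS, but the general technique — flagging a telemetry reading that deviates sharply from its recent baseline — can be sketched in a few lines. The function name, window size and threshold below are illustrative, not DataCore's:

```python
from statistics import mean, stdev

def find_anomalies(readings, window=10, threshold=3.0):
    """Flag indices whose reading deviates more than `threshold`
    standard deviations from the trailing window's baseline."""
    anomalies = []
    for i in range(window, len(readings)):
        baseline = readings[i - window:i]
        mu, sigma = mean(baseline), stdev(baseline)
        if sigma and abs(readings[i] - mu) / sigma > threshold:
            anomalies.append(i)
    return anomalies

# Latency samples (ms) with a sudden spike at index 12
latency = [5, 6, 5, 7, 6, 5, 6, 7, 5, 6, 6, 5, 60, 6, 5]
print(find_anomalies(latency))  # index 12 is flagged
```

Production systems layer far more on top of this (seasonality, fleet-wide comparisons, learned thresholds), but the proactive-correction idea is the same: surface the outlier before it becomes an outage.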
DataCore customers with perpetual licenses will not have access to DIS. But, for a limited time, the vendor plans to offer a program for them to activate new subscription licenses. Gonzalez said DataCore would apply the annual maintenance and support fees on their perpetual licenses to the corresponding DataCore SDS subscription, so there would be no additional cost. He said the program will run at least through the end of 2019.
Shifting to subscription-based pricing to gain access to DIS could cost a customer more money than perpetual licenses in the long run.
“But this is a service that is cloud-hosted, so it’s difficult from a business perspective to offer it to someone who has a perpetual license,” Dada said.
Johnathan Kendrick, director of business development at DataCore channel partner Universal Systems, said his customers who were briefed on DIS have asked what they need to do to access the services. He said he expects even current customers will want to move to a subscription model to get DIS.
“If you’re an enterprise organization and your data is important, going down for any amount of time will cost your company a lot of money. To be able to see [potential issues] before they happen and have a chance to fix that is a big deal,” he said.
Customers have the option of three DataCore SDS editions: enterprise (EN) for the highest performance and richest feature set, standard (ST) for midrange deployments, and large-scale (LS) for secondary “cheap and deep” storage, Gonzalez said.
Pricing is $416 per terabyte for a one-year subscription of the ST option, with support and software updates. The cost for a perpetual ST license is $833 per terabyte, inclusive of one year of support and software updates. The subsequent annual support and maintenance fees are 20%, or $166 per year, Gonzalez said. He added that loyalty discounts are available.
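Taken at face value, those ST figures imply a crossover point where the subscription becomes the pricier option. A back-of-the-envelope sketch per terabyte, assuming the 20% support fee applies from year two onward and ignoring loyalty discounts:

```python
def subscription_cost(years, rate=416):
    """Cumulative per-TB cost of the ST subscription."""
    return rate * years

def perpetual_cost(years, list_price=833, support=166):
    """Perpetual ST license: $833 up front (first year of support
    included), plus $166 (20%) support for each subsequent year."""
    return list_price + support * max(0, years - 1)

for y in range(1, 6):
    print(y, subscription_cost(y), perpetual_cost(y))
# By year 3 the subscription ($1,248) overtakes the
# perpetual license ($1,165).
```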
The new PSP 9 DataCore SDS update, which will become generally available in mid-July, adds features such as AES 256-bit data-at-rest encryption that can be applied across pools of storage arrays, support for VMware's Virtual Volumes 2.0 technology and UI improvements.
This week’s DataCore One strategic launch comes 15 months after Dave Zabrowski replaced founder George Teixeira as CEO. Teixeira remains with DataCore as chairman.
“They’re serious about pushing toward the future, with the new CEO, new brand, new pricing model and this push to fulfill more of the software-defined stack down the road, adding more long-term archive type storage,” Jeff Kato, a senior analyst at Taneja Group in West Dennis, Mass., said of DataCore. “They could have just hunkered down and stayed where they were at and rested on their installed base. But the fact that they’ve modernized and gone for the future vision means that they want to take a shot at it.
“This was necessary for them,” Kato said. “All the major vendors now have their own software-defined storage stacks, and they have a lot of competition.”
An effective strategy to manage APIs calls for more than just building and publishing APIs. It can enable API-led connectivity, DevOps agility and easier implementation of new technologies, like AI and function as a service, or FaaS.
Real-time data access and delivery are critical to create excellent consumer experiences. The industry’s persistent appetite for API management and integration to connect apps and data is exemplified by Salesforce’s MuleSoft acquisition in March 2018.
In this Q&A, MuleSoft CTO Ross Mason discusses the importance of a holistic strategy to manage APIs that connect data to applications and that speed digital transformation projects, as well as development innovation.
Why do enterprises have so much trouble with data access and delivery?
Ross Mason: Historically, enterprises have considered IT a cost center — one that typically gets a budget cut every year and must do more with less. It doesn’t make sense to treat as a cost center the part of the organization that has a treasure-trove of data and functionality to build new consumer experiences.
In traditional IT, every project is built from the ground up, and required customer data resides separately in each project. There really is no reuse. They have used application integration architectures, like ESBs [enterprise service buses], to suck the data out from apps. That’s why enterprise IT environments have a lot of point-to-point connectivity inside and enterprises have problems with accessing their data.
Today, if enterprises want easy access to their data, they can use API-led connectivity to tap into it in real time. The web shows us that building software from blocks connected by APIs leads to better connected experiences.
How does API-led connectivity increase developers’ productivity?
Mason: Developers deliver reusable API and reusable templates with each project. The next time someone needs access to the API, that data or a function, it’s already there, ready to use. The developer doesn’t need to re-create anything.
Reuse allows IT to keep costs down. It also allows people in other ecosystems within the organization to discover and get access to those APIs and data, so they can build their own applications.
In what ways can DevOps extend an API strategy beyond breaking down application and data silos?
Mason: Once DevOps teams deliver microservices and APIs, they see the value of breaking down other IT problems into smaller, bite-size chunks. For example, they get a lot of help with change management, because one code change does not impact a massive, monolithic application. The code change just impacts, say, a few services that rely on a piece of data or a capability in a system.
APIs make applications more composable. If I have an application that's broken down into 20 APIs, for example, I can use any one of those APIs to fill a feature or a need in any other application without the applications impacting each other. You remove the dependencies between the applications that talk to these APIs.
Overall, a strong API strategy allows software development to move faster, because you don’t build from the ground up each time. Also, when developers publish APIs, they create an interesting culture dynamic of self-service. This is something that most businesses haven’t had in the past, and it enables developers to build more on their own without going through traditional project cycles.
Which new technologies come next in an API strategy?
Mason: Look at FaaS and AI. Developers now comfortably manage APIs and microservices together to break up monolithic applications. A next step is to add function as a service. This type of service typically calls out to other APIs to get anything done. FaaS gives you a way to stitch these things together for specific purposes.
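Mason's "stitching" point can be made concrete with a minimal sketch: a FaaS-style handler that composes two APIs into a single response. The function names and payloads below are invented for illustration; on a real platform the two fetches would be HTTP calls to published API endpoints:

```python
import json

# Stand-ins for two hypothetical internal APIs; in a real FaaS
# deployment these would be HTTP calls to published endpoints.
def fetch_customer(customer_id):
    return {"id": customer_id, "name": "Acme Corp"}

def fetch_orders(customer_id):
    return [{"order": 1001, "total": 250.0}]

def handler(event, context=None):
    """FaaS-style entry point that stitches the two APIs together."""
    customer_id = event["customer_id"]
    profile = fetch_customer(customer_id)
    profile["orders"] = fetch_orders(customer_id)
    return {"statusCode": 200, "body": json.dumps(profile)}

print(handler({"customer_id": 42})["statusCode"])  # 200
```

The function owns no data of its own — it exists purely to orchestrate existing APIs, which is why reusable, discoverable APIs are a prerequisite for getting value out of FaaS.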
It’s not too early to get into AI for some use cases. One use of machine learning is to increase developer productivity. Via AI, we learn what the developer is doing and can suggest better approaches. On our runtime management plane, we use machine learning to understand traffic patterns and spot anomalies, to get proactive about issues that might occur.
An API strategy can be extended easily to new technologies, such as IoT, AI and whatever comes next. These systems rely on APIs to interact with the world around them.
Datrium has a new CEO, and a new strategy for pushing hyper-convergence into the enterprise.
Tim Page replaced Brian Biles, one of Datrium’s founders, as CEO in June. Biles moved into the chief product officer role, one he said he is better suited for, to allow Page to build out an enterprise sales force.
The startup is also changing its market focus. Its executives previously avoided calling Datrium DVX primary storage systems hyper-converged, despite a disaggregated architecture that combined storage with Datrium Compute Nodes and Data Nodes. They pitched the Datrium DVX architecture as "open convergence" instead, because customers could also use separate x86 or commodity servers. As a software-defined storage vendor, Datrium played down its infrastructure architecture.
Now Datrium positions itself as hyper-converged infrastructure (HCI) on both the primary and secondary storage sides. The use cases and reasons for implementation are the same as hyper-converged — customers can collapse storage and servers into a single system.
“You can think of us as a big-a– HCI,” Biles said. “We’re breaking all the HCI rules.”
Datrium DVX is nontraditional HCI with stateless servers, large caches and shared storage but is managed as a single entity.
“We mean HCI in a general way,” Biles said. “We’re VM- or container-centric; we don’t have LUNs. DVX includes compute and storage, and it can support third-party servers. But when you look at our architecture, it is different. To build this, we had to break all the rules.”
Datrium’s changed focus is opportunistic. The HCI market is growing at a far faster rate than traditional storage arrays, and that trend is expected to continue. Vendors who have billed themselves as software-defined storage without selling underlying hardware have failed to make it.
Secondary storage is also taking on a converged focus with the rise of newcomers Rubrik and Cohesity. Datrium also wants to compete there with a cloud-native version of DVX for backup and recovery.
However, Datrium will find a highly competitive landscape in enterprise storage and HCI. It will go against giants Dell EMC, Hewlett Packard Enterprise and NetApp on both fronts, and Cisco and Nutanix in HCI. Besides high-flying Cohesity and Rubrik, its backup competition includes Veritas, Dell EMC, Veeam and Commvault.
A new Datrium DVX customer, the NFL’s San Francisco 49ers, buys into the vendor’s HCI story. Jim Bartholomew, the 49ers IT director, said the football team collapsed eight storage platforms into one when it installed eight DVX Compute Nodes and eight DVX Data Nodes. It will also replace its servers and perhaps traditional backup with DVX, 49ers VP of corporate partnerships Brent Schoeb said.
“The problem was, we had three storage vendors and always had to go to a different one for support,” Bartholomew said.
Schoeb said the team stores its coaching and scouting video on Datrium DVX, as well as all of the video created for its website and historical archives.
“We were fragmented before,” Schoeb said of the team’s IT setup. “Datrium made it easy to consolidate our legacy storage partners. We rolled it all up into one.”
Roadmap: Multi-cloud support for backup, DR
Datrium parrots the mantra from HCI pioneer Nutanix and others that its goal is to manage data from any application wherever it resides, on premises or across clouds.
Datrium is building out its scale-out backup features for secondary storage. Datrium DVX includes read on write snapshots, deduplication, inline erasure coding and a built-in backup catalog called Snapstore.
Another Datrium founder, CTO Sazzala Reddy, said the roadmap calls for integrating cloud support for data protection and disaster recovery. Datrium added support for AWS backup with Cloud DVX last fall, and is working on support for VMware Cloud on AWS and Microsoft Azure.
“We want to go where the data is,” Reddy said. “We want to move to a place where you can move any application to any cloud you want, protect it any way you want, and manage it all in the data center.”
New CEO: Datrium’s ready to pivot
Page helped build out the sales organization as COO at VCE, the EMC-Cisco-VMware joint venture that sold Vblock converged infrastructure systems. He will rebuild the sales structure at Datrium, shifting the focus from SMB and midmarket customers to the enterprise.
Datrium executives claim they have hundreds of customers and hope to hit 1,000 by the end of 2018, although that goal is likely overambitious. The startup is far from profitable and will require more than the $110 million in funding it has raised. Industry sources say Datrium already has about $40 million in venture funding lined up for a D round, and is seeking strategic partners before disclosing the round. Datrium has around 200 employees.
“Datrium’s at an interesting point,” Page said of his new company. “They’re getting ready to pivot in a hyper-growth space now into the enterprise. What we didn’t have was an enterprise sales motion — it’s different selling into the Nimble, Tintri, Nutanix midmarket world. It’s hard to port anyone from that motion into the enterprise motion. We’re going to get into that growth phase, and make sure we do it right.”
Biles said he is following the same model as in his previous company, Data Domain. The backup deduplication pioneer took off after bringing Frank Slootman in as CEO during its early days of shipping products in 2003. Data Domain became a public company in 2007, and EMC acquired it for $2.1 billion two years later.
“I knew a lot less then than I know now, but I know there are many better CEOs than me,” Biles said. “Customer opportunities are much bigger than they used to be, and the sales cycle is much bigger than our team was equipped for. We needed to do a spinal transplant. There’s a bunch of things to deal with as you get to hundreds of employees and a lot of demanding customers. My training is on the product side.”
BOSTON — When IT professionals develop a strategy for user password and authentication management, they must consider the two key metrics of security and usability.
IT professionals are looking for ways to minimize reliance on passwords as the lone authentication factor, especially because 81% of hacking breaches occur due to stolen or weak passwords, according to Verizon’s 2017 Data Breach Investigations Report. Adding other types of authentication to supplement — or even replace — user passwords can improve security without hurting usability.
“Simply put, the world has a password problem,” said Brett McDowell, executive director of the FIDO Alliance, based in Wakefield, Mass., here in a session at Identiverse.
A future without passwords?
Types of authentication that only require a single verification factor could be much more secure if users adopted complex, harder-to-predict passwords, but this pushes up against the idea of usability. The need for complex passwords, along with the 90- to 180-day password refreshes that are an industry standard in the enterprise, means that reliance on passwords alone can’t meet security and usability standards at the same time.
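The trade-off is easy to quantify: for a uniformly random password, entropy grows linearly with length and logarithmically with alphabet size, which is why "complex" policies lean on both. A rough comparison (real user-chosen passwords carry far less entropy than this idealized calculation suggests):

```python
import math

def entropy_bits(alphabet_size, length):
    """Bits of entropy for a uniformly random password."""
    return length * math.log2(alphabet_size)

# 8 lowercase letters vs. 12 characters drawn from
# upper + lower + digits (26 + 26 + 10 = 62 symbols)
print(round(entropy_bits(26, 8), 1))   # ~37.6 bits
print(round(entropy_bits(62, 12), 1))  # ~71.5 bits
```

Every extra bit doubles an attacker's brute-force work, but also adds to what the user must create and remember — the usability cost the panelists describe.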
“If users are being asked to create and remember incredibly complex passwords, IT isn’t doing its job,” said Don D’Souza, a cybersecurity manager at Fannie Mae, based in Washington, D.C.
IT professionals today are turning to two-factor authentication, relying on biometric and cryptographic methods to supplement passwords. The FIDO Alliance, a user authentication trade association, pushes for two-factor authentication that entirely excludes passwords in their current form.
McDowell broke down authentication methods into three categories:
something you know, such as a traditional password or a PIN;
something you possess, such as a mobile device or a token card; and
something you are, which includes biometric authentication methods, such as voice, fingerprint or gesture recognition.
The FIDO Alliance advocates for organizations to shift toward the latter two of these options.
“We want to take user vulnerability out of the picture,” McDowell said.
Taking password autonomy away from the user could improve security in many areas, but none more directly than phishing. Even if a user falls for a phishing email, their authentication is not compromised if two-factor authentication is in place, because the hacker lacks the cryptographic or biometric authentication factor.
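Why a phished credential is useless can be sketched in a few lines. FIDO-style authenticators answer a fresh server challenge with a public-key signature; the stdlib-only analogy below substitutes an HMAC over the challenge (all names are hypothetical), but the property is the same: the secret never crosses the wire, and a captured response fails against the next challenge.

```python
import hashlib
import hmac
import secrets

device_key = secrets.token_bytes(32)  # never leaves the device

def sign_challenge(key, challenge):
    """Device proves possession by MACing the server's challenge."""
    return hmac.new(key, challenge, hashlib.sha256).digest()

def verify(key, challenge, response):
    return hmac.compare_digest(sign_challenge(key, challenge), response)

challenge = secrets.token_bytes(16)  # fresh per login attempt
response = sign_challenge(device_key, challenge)
assert verify(device_key, challenge, response)

# A phisher who captures `response` can't replay it: the next
# login uses a new challenge, and the key itself was never sent.
assert not verify(device_key, secrets.token_bytes(16), response)
```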
“With user passwords as a single-factor authentication, the only real protection against phishing is testing and training,” D’Souza said.
Trickle-down benefits of new types of authentication
Added types of authentication increase the burden on IT when it comes to privileged access management (PAM) and staying up-to-date on user information. But as organizations move away from passwords entirely, IT doesn’t need to worry as much about hackers gaining access to authentication information, because that is only one piece of the puzzle. This also leads to the benefit of cutting down on account access privileges, said Ken Robertson, a principal technologist at GE, based in Boston.
With stronger types of authentication in place, for example, IT can feel more comfortable handing over some simple administrative tasks to users — thereby limiting its own access to user desktops. IT professionals won’t love giving up access privilege, however.
“People typically start a PAM program for password management,” Robertson said. “But limiting IT logon use cases minimizes vulnerabilities.”
Organizations are taking steps toward multifactor authentication that doesn’t include passwords, but the changes can’t happen immediately.
“We will have a lot of two-factor authentication across multiple systems in the next few years, and we’re looking into ways to limit user passwords,” D’Souza said.