SAP has announced the end of SAP ECC support in 2025, and that means big changes for most SAP users.
Companies using SAP ERP Central Component are faced with some major decisions. The most obvious is whether to stay on ECC or migrate their systems to S/4HANA. This is not an easy decision to make, as each option has its own set of pros and cons. No matter which choice a company makes, it will face business consequences and must prepare accordingly.
From the vendor perspective, support staff and developers should focus on a new product. As part of this, most software vendors push their clients to adopt the latest platform, partly by imposing an end-of-support deadline. And this strategy has some success. Most clients don’t want to be left with an unsupported system that might cause work delays. But moving to a new product can also be problematic.
For an SAP ECC customer, moving to S/4HANA comes with its own set of challenges and risks. Implementing the latest SAP platform does not always equate to better and faster systems, as seen in Revlon’s disastrous SAP S/4HANA implementation. Revlon experienced shipping delays and revenue losses as a result of system, operational and implementation challenges. It was also sued by shareholders.
Such failures can’t always be blamed only on the new software. Other factors that can contribute to ERP implementation failure — whether a new SAP system or another vendor’s system — include lack of operational maturity, poor leadership, lack of experienced resources and cultural challenges. These can turn a potentially successful ERP implementation into a complete disaster.
One of the biggest changes for administrators in recent years is the cloud. Its presence requires administrators to migrate from their on-premises way of thinking.
The problem isn’t the cloud. After all, there should be less work if someone else looks after the server for you. The arrival of the cloud has brought to light some of the industry’s outdated methodologies, which is prompting this IT modernization movement. Practices in many IT shops were not as rigid or regimented before the cloud came along because external access was limited.
Changing times and new technologies spur IT modernization efforts
When organizations were exclusively on premises, it was easy enough to add finely controlled firewall rules to allow only certain connections in and out. Internal web-based applications did not need HTTPS — plain HTTP worked fine. You did not have to muck around with certificates, which always seem difficult to comprehend. Anyone on your network was authorized to be there, so it didn’t matter if data was unencrypted. The effort wasn’t worth the risk — or so a lot of us told ourselves — and the users would have no idea anyway.
You would find different ways to limit the threats to the organization. You could implement 802.1X, which only allowed authorized devices on the network. This reduced the chances of a breach because the attacker would need both physical access to the network and an approved device. Active Directory could be messy; IT had a relaxed attitude about account management and cleanup, which was fine as long as everyone could do their job.
The pre-cloud era allowed for a lot of untidiness and shortcuts, because the risk of these things affecting the business in a drastic way was smaller. Administrators who stepped into a new job would routinely inherit a mess from the last IT team. There was little incentive to clean things up; just keep those existing workloads running. Now that there is increased risk with exposing the company’s systems to the world via cloud, it’s no longer an option to keep doing things the same way just to get by.
One example of how the cloud forces IT practices to change is the default configuration when you sync to Microsoft’s Azure Active Directory. The sync tool copies every Active Directory object to the cloud unless you apply filtering, and the official documentation states that this is the recommended configuration. Think about that: every overlooked account with a basic password that leaked years ago in a breach such as LinkedIn’s is now in the cloud, available for use by anyone in the world. Those accounts went from a forgotten mess swept under the rug years ago to a ticking time bomb, waiting for attackers to hit a successful login as they spin through their lists of millions of username and password combinations.
Back on the HTTP/HTTPS side, users now want to work from home or anywhere they have an internet connection. They also want to do it from any device, such as a personal laptop, mobile phone or tablet. Exposing internal websites was once — and still is in many scenarios — a case of poking a hole in the firewall and hoping for the best. With an unencrypted HTTP site, all data passed to and from that endpoint, from everything the user sees to everything they enter, such as a username and password, is at risk. Your users could be working from a free McDonald’s Wi-Fi connection or at any airport in the world. It’s not hard for attackers to set up fake relay access points, listen to all the traffic and read anything that is not encrypted. Look up WiFi Pineapple for more information about the potential risks.
How to accommodate your users and tighten security
As you can see, it’s easy to end up in a high-risk situation if IT focuses on making users happy instead of company security. How do you make the transition to a safer environment? At a high level, there are several immediate actions to take:
Clean up Active Directory. Audit accounts, disable those not in use, and organize your organizational units so they are clear and logical. Implement an account management process that covers the full account lifecycle, from creation to deprovisioning.
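The stale-account part of that audit is easy to script once account data has been exported from the directory. Below is a minimal Python sketch of the idea; the field names, sample data and 90-day cutoff are all illustrative, not tied to any particular export tool.

```python
from datetime import datetime, timedelta

STALE_AFTER = timedelta(days=90)  # illustrative cutoff for "not in use"

def find_stale_accounts(accounts, now):
    """Return names of accounts whose last logon is older than the cutoff.

    `accounts` is a list of dicts with 'name' and 'last_logon' keys,
    e.g. built from a directory export.
    """
    return [a["name"] for a in accounts if now - a["last_logon"] > STALE_AFTER]

# Hypothetical export data
accounts = [
    {"name": "jsmith", "last_logon": datetime(2019, 1, 15)},
    {"name": "kchen", "last_logon": datetime(2019, 8, 30)},
]
print(find_stale_accounts(accounts, now=datetime(2019, 9, 15)))  # ['jsmith']
```

Accounts flagged this way should be reviewed and disabled rather than deleted outright, in case some turn out to be service accounts.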
Review your password policy. If you have no other protection, cycle your passwords regularly and enforce some level of complexity. Then look at methods for added protection, such as multifactor authentication (MFA), which Azure Active Directory provides and which can do away with password cycling. For more flexibility, combine MFA with conditional access, so a user on your trusted network or a trusted device doesn’t even need MFA. The choice is yours.
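That conditional-access idea boils down to a simple rule. The toy Python sketch below shows only the logic; the trust signals and the policy itself are illustrative, not how Azure Active Directory actually evaluates sign-ins.

```python
def mfa_required(on_trusted_network: bool, on_trusted_device: bool) -> bool:
    """Illustrative conditional-access rule: prompt for MFA only when the
    sign-in comes from neither a trusted network nor a trusted device."""
    return not (on_trusted_network or on_trusted_device)

# Office desktop on the corporate LAN: no prompt.
print(mfa_required(on_trusted_network=True, on_trusted_device=True))   # False
# Personal laptop on airport Wi-Fi: MFA kicks in.
print(mfa_required(on_trusted_network=False, on_trusted_device=False)) # True
```

Real policies layer in more signals (location risk, device compliance, application sensitivity), but the shape is the same: trusted context relaxes the prompt, untrusted context tightens it.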
Review and report on account usage. When something is amiss with account usage, you should know as soon as possible so you can take corrective action. Technologies such as Azure Active Directory’s identity protection feature issue alerts and remediate suspicious activity, such as a login from a location that is not typical for that account.
Implement HTTPS on all sites. You don’t have to buy a certificate for each individual site to enable HTTPS. Save money and generate them yourself if the site is only for trusted computers on which you can deploy the certificate chain. Another option is to buy a wildcard certificate to use everywhere. Once the certificate is deployed, you can expose the sites you want with Azure Active Directory Application Proxy rather than open ports in your firewall. This gives the added benefit of forcing an Azure Active Directory login to apply MFA and identity protection before the user gets to the internal site, regardless of the device and where they are physically located.
These are a few of the critical areas to think about when changing your mindset from on-premises to cloud, and only a basic overview of where to take a closer look. There’s a lot more to consider, depending on the cloud services you plan to use.
“When your technology changes the world,” he writes, “you bear a responsibility to help address the world that you have helped create.” And governments, he writes, “need to move faster and start to catch up with the pace of technology.”
In a lengthy interview, Mr. Smith talked about the lessons he had learned from Microsoft’s past battles and what he saw as the future of tech policymaking – arguing for closer cooperation between the tech sector and the government. It’s a theme echoed in the book, “Tools and Weapons: The Promise and the Peril of the Digital Age,” which he wrote with Carol Ann Browne, a member of Microsoft’s communications staff.
In 2019, a book about tech’s present and future impact on humankind that was relentlessly upbeat would feel out of whack with reality. But Smith’s Microsoft experience allowed him to take a measured look at major issues and possible solutions, a task he says he relished.
“There are some people that are steeped in technology, but they may not be steeped in the world of politics or policy,” Smith told me in a recent conversation. “There are some people who are steeped in the world of politics and policy, but they may not be steeped in technology. And most people are not actually steeped in either. But these issues impact them. And increasingly they matter to them.”
In ‘Tools and Weapons: The Promise and the Peril of the Digital Age,’ the longtime Microsoft executive and his co-author Carol Ann Browne tell the inside story of some of the biggest developments in tech and the world over the past decade – including Microsoft’s reaction to the Snowden revelations, its battle with Russian hackers in the lead up to the 2016 elections and its role in the ongoing debate over privacy and facial recognition technology.
The book goes behind-the-scenes at the Obama and Trump White Houses; explores the implications of the coming wave of artificial intelligence; and calls on tech giants and governments to step up and prepare for the ethical, legal and societal challenges of powerful new forms of technology yet to come.
Tensions between the U.S. and China feature prominently in Smith’s new book, ‘Tools and Weapons: The Promise and the Peril of the Digital Age.’ While Huawei is its own case, Smith worries that broader and tighter strictures could soon follow. The Commerce Department is considering new restrictions on the export of emerging technologies on which Microsoft has placed big bets, including artificial intelligence and quantum computing. “You can’t be a global technology leader if you can’t bring your technology to the globe,” he says.
BOSTON — Microsoft’s proposed licensing changes for PowerApps, the cloud-based development tools for Office 365 and Dynamics 365, have confused users and made them fearful the software will become prohibitively expensive.
Last week, at Microsoft’s SPTechCon user conference, some organizations said the pricing changes, scheduled to take effect Oct. 1, were convoluted. Others said the new pricing — if it remains as previewed by Microsoft earlier this summer — would force them to limit the use of the mobile app development tools.
“We were at the point where we were going to be expanding our usage, instead of using it for small things, using it for larger things,” Katherine Prouty, a developer at the nonprofit Greater Lynn Senior Services, based in Lynn, Mass., said. “This is what our IT folks are always petrified of; [the proposed pricing change] is confirmation of their worst nightmares.”
Planned apps the nonprofit group might have to scrap if the pricing changes take effect include those for managing health and safety risks for its employees and clients in a regulatory-compliant way, and protecting the privacy of employees as they post to social media on behalf of the organization, Prouty said.
Developers weigh in
The latest pricing proposal primarily affects organizations building PowerApps that tap data sources outside of Office 365 and Dynamics 365. People connecting to Salesforce, for example, would pay $10 per user, per month, unless they opt to pay $40 per user, per month for unlimited use of data connectors to third-party apps.
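Those two prices imply a simple break-even point, sketched in Python below using the proposed list prices above (real agreements may differ): once a user needs four or more apps with third-party connectors, the unlimited plan is the cheaper option.

```python
PER_APP = 10    # $ per user/month for a single app using third-party connectors
UNLIMITED = 40  # $ per user/month for unlimited connector use

def cheapest_monthly_cost(num_apps: int) -> int:
    """Per-user monthly cost if the customer picks the cheaper option."""
    return min(num_apps * PER_APP, UNLIMITED)

print(cheapest_monthly_cost(3))  # 30 -- per-app still wins
print(cheapest_monthly_cost(4))  # 40 -- break-even, unlimited from here on
print(cheapest_monthly_cost(6))  # 40
```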
The new pricing would take effect even if customers were only connecting Office 365 to Dynamics 365 or vice versa. That additional cost for using apps they’re already paying for does not sit well with some customers, while others find the pricing scheme perplexing.
“It’s all very convoluted right now,” said David Drever, senior manager at IT consultancy Protiviti, based in Menlo Park, Calif.
Manufacturing and service companies that create apps using multiple data sources are among the businesses likely to pay a lot more in PowerApps licensing fees, said IT consultant Daniel Christian of PowerApps911, based in Maineville, Ohio.
Annual PowerApps pricing changes
However, pricing isn’t the only problem, Christian said. Microsoft’s yearly overhaul of PowerApps fees also contributes to customer handwringing over costs.
“Select [a pricing model] and stick with it,” he said. “I’m OK with change; we’ll manage it and figure it out. It’s the repetitive changes that bug me.”
Microsoft began restricting PowerApps access to outside data sources earlier this year, putting into effect changes announced last fall. The new policy required users to purchase a special PowerApps plan to connect to popular business applications such as Salesforce Chatter, GotoMeeting and Oracle Database. The coming changes as presented earlier this summer would take that one step further by introducing per-app fees and closing loopholes that were available on a plan that previously cost $7 per user per month.
Matt Wade, VP of client services at H3 Solutions Inc., based in Manassas, Va., said customers should watch Microsoft’s official PowerApps blog for future information that might clarify costs and influence possible tweaks to the final pricing model. H3 Solutions is the maker of AtBot, a platform for developing bots for Microsoft’s cloud-based applications.
“People who are in charge of administering Office 365 and the Power Platform need to be hyper-aware of what’s going on,” Wade said. “Follow the blog, comment, provide feedback — and do it respectfully.”
As the networking industry rapidly changes, so could your networking career. Maybe you’re just starting out, or you want to take your career to the next level. Or maybe you want to hit the reset button and start over in your career. Regardless of experience, knowledge and career trajectory, everybody can use advice along the way.
Network engineer role requirements vary depending on a candidate’s experience, education and certifications, but one requirement is constant: Network engineers should have the skills to build, implement and maintain a computer network that supports an organization’s required services.
This compilation of expert advice brings together helpful insights for network engineers at any point in their networking careers in any area of networking. It includes information about telecommunications and Wi-Fi careers and discusses how 5G may affect job responsibilities.
The following expert advice can help budding, transforming and still-learning network engineers in their networking career paths.
What roles are included in a network engineer job description?
Network engineers have a variety of responsibilities that fall within multiple categories and require varying skills. All potential network engineers, however, should have a general understanding of the multiple layers of network communication protocols, such as IP and TCP. Engineers who know how these protocols work can better develop fundamental networking wisdom, according to Terry Slattery, principal architect at NetCraftsmen.
The role of a network engineer is complex, which is why it’s often divided into subcategories. Potential responsibilities include the following:
Each of these paths has different responsibilities, requirements and training. For most networking careers, certifications and job experience are comparable to advanced degrees, Slattery said. Engineers should renew their certifications every few years to ensure they maintain updated industry knowledge, he added. As of mid-2019, network engineer salaries ranged from $60,000 to $180,000 a year. However, these salaries vary by location, market, experience and certifications of the candidate.
What steps should I take to improve my networking career path?
As the networking industry transforms, network engineers eager to advance their networking careers have to keep up. One way to ensure engineers maintain relevant networking skills is for those engineers to get and retain essential certifications, said Amy Larsen DeCarlo, principal analyst at Current Analysis. The Cisco Certified Network Associate (CCNA) certification, in particular, provides foundational knowledge about how to build and maintain network infrastructures.
Network engineers should renew their certifications every few years, which requires a test to complete the renewal. Certifications don’t replace experience, DeCarlo said, but they assure employers that candidates have the essential, basic networking knowledge. Continuing education or specializing in a certain expertise area can also help engineers advance their networking careers, as can a maintained awareness of emerging technologies, such as cloud services.
Different types of certifications can benefit different aspects of networking. For a telecom networking career, the three main certification categories are vendor-based, technology-based or role-based, said Tom Nolle, president of CIMI Corp. Vendor-based certifications are valuable for candidates that mostly use equipment from a single vendor. However, these certifications can be time-consuming and typically require prior training or experience.
Technology-based certifications usually encompass different categories of devices, such as wireless or security services. These include certifications from the International Association for Radio, Telecommunications and Electromagnetics and the Telecommunications Certification Organization. These certifications are best for entry-level engineers or those who want to specialize in a specific area of networking. They are also equivalent to an advanced degree, Nolle said.
Role-based certifications are more general and ideal for candidates without degrees or those who want a field technician job. Certifications can make candidates more attractive to employers, as these credentials prove the candidate has the skills and experience the employer requires. One example of this type of certification is the NCTI Master Technician, which specializes in field and craft work for the cable industry.
One of the most complicated areas of networking is wireless LAN (WLAN) — Wi-Fi, in particular. Yet Wi-Fi is essential in today’s networking environment. As in other networking career paths, WLAN engineers should refresh their Wi-Fi training every so often to remain credible, according to network engineer Lee Badman.
The history of Wi-Fi has been complicated, and the future can be daunting. But Wi-Fi training is a helpful way to understand common issues. In the past, many issues stemmed from the lack of an identical, holistic understanding of Wi-Fi among organizations and network teams, Badman said. Without a consistent Wi-Fi education plan, Wi-Fi training was a point of both success and failure.
While some training inconsistencies still linger now, Badman recommended the Certified Wireless Specialist course from Certified Wireless Network Professionals as a starting point for those interested in WLANs. A variety of vendor-agnostic courses are also available for other wireless roles, he said.
Will 5G networks require new network engineer skills?
Mobile network generations seem to change as rapidly as Wi-Fi does, causing many professionals to wonder what 5G will mean for networking careers in the future. In data centers, job requirements won’t change much, according to John Fruehe, an independent analyst. But 5G could launch a new era for cloud-based and mobile applications and drive security changes as well.
Network engineers should watch out for gaps in network security due to this new combination of enterprise networks, cloud services and 5G, Fruehe said. However, employees working in carrier networks may already see changes in how their organizations construct and provision communication services as a result of current 5G deployments. For example, 5G may require engineers to adopt new, fine-grained programmability to manage the increased volume of services organizations plan to run on 5G.
Network engineer skills will be crucial in areas such as software-defined networking, software-defined radio access networks, network functions virtualization, automation and orchestration. The reason: manual command-line configuration will no longer suffice when engineers program devices at scale, and virtualization and automation are better suited to the job.
Upcoming changes to Microsoft Dynamics 365 pricing will lead to lower licensing fees for some users while possibly raising the cost of the cloud-based business applications platform for organizations with 100-plus seats.
Microsoft described the changes, scheduled to take effect Oct. 1, in a blog post this week. The company gave partners advance notice of the new pricing scheme at last week’s Inspire conference in Las Vegas.
Midsize and larger organizations that use more than one application could face significant increases, because of Microsoft’s decision to unbundle Dynamics 365 apps and sell them a la carte. Currently, the cloud apps come priced as bundles, with many companies on a Customer Engagement Plan at $115 per user/month. The plan includes five core applications — Sales, Customer Service, Field Service, Project Service Automation and Marketing.
The new individual pricing would cost $95 per user/month for one app, with $20 “attach licenses” for additional apps. Alysa Taylor, corporate vice president for business applications at Microsoft, said in the blog post that customers preferred having the option of adding or removing applications as their companies grew and changed over time.
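The arithmetic behind the concern is straightforward. A rough Python comparison using the list prices above (promotional or negotiated pricing aside):

```python
OLD_BUNDLE = 115  # Customer Engagement Plan, per user/month, five core apps
NEW_BASE = 95     # first app under the unbundled scheme
ATTACH = 20       # each additional "attach" license

def new_monthly_cost(num_apps: int) -> int:
    """Per-user monthly cost under the unbundled pricing."""
    return NEW_BASE + ATTACH * (num_apps - 1)

print(new_monthly_cost(1))  # 95  -- cheaper than the old bundle
print(new_monthly_cost(2))  # 115 -- exactly matches the old bundle
print(new_monthly_cost(4))  # 155 -- roughly 35% more than the old bundle
```

In other words, single-app customers save $20 per user each month, two-app customers break even, and every app beyond that adds $20 per user on top of the old bundle price.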
But users of Dynamics 365 CRM software would likely pay more for that convenience, Dolores Ianni, an analyst at Gartner, said. Those customers typically employ multiple applications in the Customer Engagement Plan.
“If you’re using four applications and you had a thousand users, well, your price went up 158% — it varies wildly,” Ianni said. “I feel that a majority of renewing customers are going to be substantially impacted by this change.”
Microsoft claims 80% of its customers are using only one application, but anecdotal evidence indicates otherwise. Conversations with Microsoft customers — and a review of their contracts — show that the largest enterprises with 1,000 or more users could pay substantially more in some cases, Ianni said. Organizations with 100 or more users could also pay more, even if they’re using only one application. Companies with more than one application would get hit harder.
Microsoft could offer its largest customers promotional deals that would mitigate the price hike, Ianni said. The company often provides such breaks when changing pricing.
Readiness tips for Dynamics 365 pricing changes
Organizations should prepare for the new pricing structures by analyzing which employees use which applications today. Businesses can sometimes find ways to cut costs after getting a complete understanding of how workers are using the software.
“They’re going to have to do their homework to a much greater degree than they did in the past,” Ianni said.
Pending Microsoft Partner Network policy changes affecting product licensing have alarmed some partners, with more than 5,000 people signing a petition to register their disapproval.
A key area of contention is Microsoft’s plan to eliminate the internal use rights (IUR) associated with product licenses included in Microsoft Action Pack and those included with a competency. Action Pack gives partners access to product licenses and technical enablement services, through which they can create applications and develop service offerings. Microsoft positions Action Pack, which ranges from OSes to business applications, as a way for new MPN members to get started. Competencies are business specializations in areas such as cloud business applications and data analytics.
The revised IUR policy will compel Microsoft partners to pay for licenses they have been using in-house under the current Microsoft Partner Network membership terms. The new policy goes into effect July 1, 2020.
Paul Katz, president and chief software architect at EfficiencyNext, a software developer in Washington, D.C., said the policy change will cause the company to purchase five Office 365 Enterprise E3 seats. In addition, EfficiencyNext stands to lose the Microsoft Azure credits the company uses to run its website, although Katz said the policy change’s effect on the Azure benefit is somewhat ambiguous at this point. The licensing fees, coupled with the potential loss of Azure credits, would result in a net cost of about $2,400 a year, he added.
“That’s a thorn in the side, but it doesn’t change our world,” Katz said.
The stakes are much higher, he said, for larger partners with more licenses they will need to pay for. A partner with 100 Office 365 E3 licenses, for example, would need to shell out $24,000 annually, based on the $20 per user, per month seat fee.
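That figure follows directly from the seat fee. A quick Python sketch of the calculation (list price only, before any partner discounts):

```python
E3_SEAT = 20  # $ per user/month for an Office 365 E3 seat

def annual_license_cost(seats: int) -> int:
    """Annual cost of paying list price for formerly free IUR seats."""
    return seats * E3_SEAT * 12

print(annual_license_cost(100))  # 24000 -- the 100-license example above
print(annual_license_cost(5))    # 1200  -- EfficiencyNext's five seats
```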
Charles Weaver, CEO of MSPAlliance, an association representing managed service providers (MSPs), said he found out about the Microsoft policy change when a board member sent him the online petition. “It’s going to sting most of them,” he said of the licensing shift’s effect on service providers. “It is probably not going to be received well by the rank-and-file MSPs.”
The partner petition, posted on Change.org, stated Microsoft’s policies represent unfair treatment, noting partners “have been so loyal to the Microsoft business.” Microsoft couldn’t be reached for comment.
Microsoft Partner Network: Policy consequences
Katz advised partners to “get licensed up” in light of the IUR change, noting that Microsoft has been aggressive in the past with software asset management engagements.
Weaver, however, said he hopes that won’t be the case.
“I can’t think of anything more destructive to the relationship between Microsoft and the channel than that,” he said, noting the audits software vendors pursue tend to target large customers, where millions of dollars are at stake.
In addition to causing some partners to incur higher licensing costs, the Microsoft IUR policy shift could also hinder partners’ use-what-you-sell strategies. Resellers and service providers that use a vendor’s products to help run their business gain technology experience, which they can transfer to end customers when deploying those products.
Katz said “dogfooding” — as in, eating one’s own dog food — is the best way to test products, especially for companies that can’t afford to set up a separate test environment.
But the restriction on IUR would discourage this approach and could cause Microsoft to miss out on opportunities down the road.
Weaver pointed to a potential unintended consequence of Microsoft’s action: “They stop the freeloading of MSPs from using their software, as they look at it, and they lose potentially thousands of MSPs who no longer try that stuff out and no longer have access to it and may go to different vendors and different solutions.”
A part of doing business
Stanley Louissaint, president of Fluid Designs Inc., an IT services provider in Union, N.J., said the MPN policy changes don’t affect his company but noted the unease among partners. Louissaint suggested changes in vendor policies are simply part of doing business as a channel partner.
“People don’t want to come to terms with the fact that we are resellers and we don’t, in any way, shape or form, control the products,” he said. “If [Microsoft] changes how they want to deal with us, it is what it is.”
Louissaint said the bottom line is Microsoft wants partners to become paying customers when using the vendor’s products to run their businesses. As for creating test beds to assess products, channel partners still can download software on a trial basis — for up to 180 days, in some cases, he added.
Jeff Aden, executive vice president of marketing and business development at 2nd Watch, a Seattle MSP, said the new policy “is not going to change what we do” unless there is an unforeseen effect. 2nd Watch is a Microsoft Gold partner and an AWS Premier Consulting Partner.
EfficiencyNext’s Katz said the licensing changes don’t mean Microsoft is greedy. He noted Windows Insider members can download preview versions of Windows for free, and there is a community version of Visual Studio that is free for up to five users in nonenterprise organizations.
“They are still a great company, and we are still happy to be working with them,” he said.
The U.S. Dept. of Homeland Security wants dramatic changes in the hiring and management of cybersecurity professionals. It seeks 21st Century HR practices and technologies, with the goal of making the federal HR program as competitive as the private sector’s.
This effort will streamline hiring and improve cybersecurity recruiting. DHS wants a pay system for cybersecurity professionals based on an “individual’s skills and capabilities.” New HR technologies are sought as well.
The proposed federal HR improvements are laid out in a request for information to vendors. In this knowledge-gathering effort, vendors are asked to estimate the cost and outline the expertise and technologies needed to achieve this reform. It doesn’t obligate the government but sets the stage for contract proposals. Its goals are sweeping.
DHS, for instance, said it wanted to end 20th Century federal HR practices, such as annual reviews. Instead, it wants 21st Century methods, such as continuous performance management.
The goal is modernizing federal HR technologies and processes, but with a focus on improving cybersecurity recruiting and retention.
Analysts see DHS moving in the right direction
HR analysts contacted about the planned federal cybersecurity recruiting reform seemed impressed.
“The scope of this is really big and it’s very ambitious,” said Kyle Lagunas, research manager in IDC’s talent acquisition and staffing research practice. “I’m really encouraged to see this. It really captures, I think, where the industry is going.”
“This sounds like good stuff to me,” said Josh Bersin, founder and principal of Bersin by Deloitte Consulting. “It’s all in the right direction,” he said.
Both analysts said that if DHS achieves its goals it will rank with leading businesses in HR best practices.
DHS employs some 11,000 cybersecurity professionals and leads government efforts to secure public and private critical infrastructure systems.
The U.S. said in 2016 that there weren’t enough cybersecurity professionals to meet federal HR needs. President Barack Obama’s administration called for a “government-wide” federal HR cybersecurity recruitment strategy. President Donald Trump’s administration is reaching out to vendors for specifics.
DHS published its request for information for reforming federal HR in early May, asking for cost estimates and ideas for modernizing cybersecurity hiring and management. It sought specific capabilities, such as the ability to process as many as 75,000 applicants per year, as well as applicant assessment technologies. These can include virtual environments for testing “real-world application of technical cybersecurity competencies.”
Feds boldly make a case for reform of cybersecurity recruiting
But what distinguished this particular federal HR request from so many other government requests for information was its dramatic framing of the goal.
The 20th Century way of recruiting involves posting a job and “hoping the right candidates apply,” DHS said in its request to vendors. The new 21st Century method is to “strategically recruit from a variety of sources on an ongoing basis, and use up-to-date, cybersecurity-focused standards and validated tools to screen, assess and select talent.”
DHS also wants to adopt “market-sensitive pay” to more readily compete for people, a smart move, according to Lagunas. “If they want to bring in top cybersecurity talent they are going to have to make sure they are very competitive in their pay and practices.”
In what may be a nod to the growing contingent workforce, DHS wants a federal HR plan for “dynamic careers.” This involves “streamlined movement” from the private sector to government and back again.
The deadline for vendor responses to the government’s request for information is May 25.
Sometimes big changes sneak up on you, especially when you’re talking about the future of data storage technology. For example, when exactly did full-on cloud adoption become fully accepted by all those risk-averse organizations, understaffed IT shops and disbelieving business executives? I’m not complaining, but the needle of cloud acceptance tilted over sometime in the recent past without much ado. It seems everyone has let go of their fear of cloud and hybrid operations as risky propositions. Instead, we’ve all come to accept the cloud as something that’s just done.
Sure, cloud was inevitable, but I’d still like to know why it finally happened now. Maybe it’s because IT consumers expect information technology will provide whatever they want on demand. Or maybe it’s because everything IT implements on premises now comes labeled as private cloud. Influential companies, such as IBM, Microsoft and Oracle, are happy to help ease folks formerly committed to private infrastructure toward hybrid architectures that happen to use their respective cloud services.
In any case, I’m disappointed I didn’t get my invitation to the “cloud finally happened” party. But having missed cloud’s big moment, I’m not going to let other obvious yet possibly transformative trends sneak past as they go mainstream with enterprises in 2018. So when it comes to the future of data storage technology, I’ll be watching the following:
Containers arose out of a long-standing desire to find a better way to package applications. This year we should see enterprise-class container management reach maturity parity with virtual machine management — while not holding back any advantages containers have over VMs. Expect modern software-defined resources, such as storage, to be delivered mostly in containerized form. When combined with dynamic operational APIs, these resources will deliver highly flexible programmable infrastructures. This approach should enable vendors to package applications and their required infrastructure as units that can be redeployed — that is, blueprinted or specified in editable and versionable manifest files — enabling full environment and even data center-level cloud provisioning. Being able to deploy a data center on demand could completely transform disaster recovery, to name one use case.
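To make the manifest idea concrete, here is a toy sketch in Python. The schema, field names and resource types are invented for illustration, not taken from any real tool's format; the point is that a declarative, versionable description of an application and its infrastructure could be replayed to rebuild an environment, for example at a disaster recovery site.

```python
# Hypothetical manifest describing an app plus its infrastructure.
# All names and fields here are made up for illustration.
import json

manifest = {
    "app": "orders-service",
    "containers": [
        {"name": "api", "image": "orders/api:1.4", "replicas": 3},
        {"name": "worker", "image": "orders/worker:1.4", "replicas": 2},
    ],
    "storage": [
        {"name": "orders-db", "class": "ssd", "size_gb": 200},
    ],
}

def provisioning_plan(m):
    """Flatten a manifest into an ordered list of provisioning steps
    (storage first, then containers) that a deployment tool could run."""
    steps = [f"provision volume {s['name']} ({s['size_gb']} GB, {s['class']})"
             for s in m["storage"]]
    steps += [f"run {c['replicas']}x {c['image']} as {c['name']}"
              for c in m["containers"]]
    return steps

print(json.dumps(provisioning_plan(manifest), indent=2))
```

Because the manifest is plain data, it can live in version control and be diffed and reviewed like any other code, which is what makes the data-center-on-demand idea plausible.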
Everyone is talking about AI, but it is machine learning that’s slowly permeating just about every facet of IT management. Although there’s a lot of hype, it’s worth figuring out how and where carefully applied machine learning could add significant value. Most machine learning is conceptually made up of advanced forms of pattern recognition. So think about where using the technology to automatically identify complex patterns would reduce time and effort. We expect the increasing availability of machine learning algorithms to give rise to storage management processes that learn and adjust operations and settings to optimize workload services, quickly identify and fix the root causes of abnormalities, and broker storage infrastructure and manage large-scale data to minimize cost.
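As a simplified illustration of that kind of pattern recognition, here is a minimal Python sketch that flags abnormal storage latency samples with a basic z-score test. The sample values and threshold are made-up assumptions, not any vendor's telemetry; real storage analytics would use far richer models.

```python
# Minimal sketch: flag latency samples that deviate sharply from the
# norm, the kind of anomaly detection a storage management service
# might automate. Threshold and data are illustrative assumptions.
from statistics import mean, stdev

def flag_anomalies(latencies_ms, threshold=2.5):
    """Return indices of samples more than `threshold` standard
    deviations away from the mean latency."""
    mu = mean(latencies_ms)
    sigma = stdev(latencies_ms)
    if sigma == 0:
        return []
    return [i for i, x in enumerate(latencies_ms)
            if abs(x - mu) / sigma > threshold]

# A steady workload with one latency spike at index 6.
samples = [1.1, 1.0, 1.2, 0.9, 1.1, 1.0, 25.0, 1.2, 1.0, 1.1]
print(flag_anomalies(samples))  # -> [6]
```

The interesting part in practice isn't the statistics but the feedback loop: once a pattern like this is detected automatically, the management layer can act on it without a human in the path.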
Management as a service (MaaS) is gaining traction in the future of data storage technology. Already, every storage array seemingly comes with built-in call-home support, replete with management analytics and performance optimization. I predict the interval for most remote vendor management services will quickly drop from today’s daily batch uploads to five-minute streaming. I also expect cloud-hosted MaaS offerings to become the way most shops manage their increasingly hybrid architectures, as many start to shift away from the burdens of on-premises management software. All the big, and even small, management vendors seem to be quickly ramping up MaaS versions of their offerings. For example, this fall, VMware rolled out several cloud management services that are essentially online versions of familiar on-premises capabilities.
More storage arrays now have in-cloud equivalents that can be easily replicated and failed over to if needed. Hewlett Packard Enterprise Cloud Volumes (Nimble); IBM Spectrum Virtualize; and Oracle cloud storage, which uses Oracle ZFS Storage Appliance internally, are a few notable examples. It seems counterproductive to require in-cloud storage to run the same or a similar storage OS as on-premises storage to achieve reliable hybrid operations. After all, a main point of a public cloud is that the end user shouldn’t have to care, and in most cases can’t even know, if the underlying infrastructure service is a physical machine, virtual image, temporary container service or something else.
However, there can be a lot of proprietary technology involved in optimizing complex, distributed storage activities, such as remote replication, delta snapshot syncing, metadata management, global policy enforcement and metadata indexing. When it comes to hybrid storage operations, there simply are no standards. Even the widely supported Amazon Web Services Simple Storage Service API for object storage isn’t actually a standard. I predict cloud-side storage wars will heat up, and we’ll see storage cloud sticker shock when organizations realize they have to pay both the storage vendor for an in-cloud instance and the cloud service provider for the platform.
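The double-billing arithmetic is simple but easy to overlook when budgeting. Here is a back-of-the-envelope sketch; the per-GB rates are invented for illustration and are not real vendor or cloud pricing.

```python
# Illustrative only: hybrid cloud storage can bill twice for the same
# gigabyte -- once to the storage vendor for its in-cloud instance,
# once to the cloud provider for the underlying platform.
def monthly_storage_cost(gb, vendor_rate_per_gb, platform_rate_per_gb):
    """Total monthly cost when both the storage vendor and the cloud
    platform charge for the same capacity."""
    return gb * (vendor_rate_per_gb + platform_rate_per_gb)

# 10 TB at hypothetical rates: $0.10/GB (vendor) + $0.05/GB (platform).
print(round(monthly_storage_cost(10_000, 0.10, 0.05), 2))  # -> 1500.0
```

In this made-up example, a third of the monthly bill is the platform charge layered under the vendor's in-cloud instance, which is exactly the sticker shock the prediction anticipates.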
Despite the hype, nonvolatile memory express (NVMe) isn’t going to rock the storage world, given what I heard at VMworld and other fall shows. Yes, it could provide an incremental performance boost for those critical workloads that can never get enough, but it’s not going to be anywhere near as disruptive to the future of data storage technology as NAND flash was to HDDs. Meanwhile, NVMe support will likely show up in most array lineups in 2018, eliminating any particular storage vendor advantage.
On the other hand, a bit farther out than 2018, expect new computing architectures purpose-built around storage-class memory (SCM). Intel’s initial releases of its “storage” type of SCM — 3D XPoint deployed on PCIe cards and accessed using NVMe — could deliver a big performance boost. But I expect an even faster “memory” type of SCM, deployed adjacent to dynamic RAM, to be far more disruptive.
How did last year go by so fast? I don’t really know, but I’ve got my seatbelt fastened for what looks to be an even faster year ahead, speeding into the future of data storage technology.