Remote monitoring and management: Netgear Insight Pro debuts

Netgear has launched Insight Pro, a cloud-based remote monitoring and management platform that the company said will bring managed service providers more network management capabilities, as well as attractive revenue opportunities when they resell the service.

Netgear executives said Insight Pro is a multi-tenancy platform designed for MSPs that want to manage numerous customers remotely. This is a change from the previous version of the product, called Insight, which was designed to manage the network ecosystem of only one small or medium-sized business.

The networking company, based in San Jose, Calif., introduced Netgear Insight Pro in August in North America and Asia, and it featured the product earlier this month at the CEDIA Expo conference in San Diego.

John McHugh, general manager and senior vice president for Netgear’s commercial business unit, said Insight Pro can help MSPs and their customers build a better business relationship. The aim is to help those parties gain transparency, greater efficiency and control over network operations.

Remote monitoring and management reporting

Once an MSP buys a Netgear Insight Pro subscription at $15 per device, per year, and resells the subscription service, customers that sign on can see a read-only view of their network. The remote monitoring and management offering generates reports that give users details on power usage, data consumption and storage utilization, among other usage statistics that show the health and vulnerabilities that exist across the network. 

“Insight will detect a hardware failure, bandwidth or loading issues and configuration problems,” McHugh said. “It will also help the MSP determine what the ‘peak’ loading is, which is critical to provide customers with guidance on where they might need additional capacity either now or in the future.”
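
Netgear hasn’t published the mechanics of those reports, but the peak-loading guidance McHugh describes boils down to tracking utilization samples per device and flagging devices that approach capacity. A minimal sketch in Python, with invented field names:

# Illustrative sketch only; Insight Pro's real reporting pipeline is not public.
# Given periodic throughput samples for one device, report average and peak
# loading so an MSP can flag devices that need added capacity.

from statistics import mean

def loading_report(samples, capacity_mbps, threshold=0.8):
    """samples: observed throughput readings in Mbps for one device."""
    peak = max(samples)
    report = {
        "avg_mbps": round(mean(samples), 1),
        "peak_mbps": peak,
        "peak_utilization": round(peak / capacity_mbps, 2),
    }
    # Flag the device when peak loading crosses the capacity-planning threshold.
    report["needs_capacity_review"] = peak / capacity_mbps >= threshold
    return report

print(loading_report([120, 340, 910, 455], capacity_mbps=1000))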

To guard against network slowdowns, mitigate the impact of outages and protect the network against security breaches, Netgear Insight Pro is supported by a suite of Netgear products that include apps, firmware, wireless LANs, storage devices, network security tools and switches that run on Amazon Web Services’ cloud computing platforms.

As the cloud subscription model continues to reduce the need for value-added resellers to install hardware at customer sites, the Insight Pro product will help VARs transition to a service provider business, according to McHugh. He said many VARs are intimidated by the idea of managing a customer’s network on a 24/7 basis under a subscription model.

“In the case of a VAR who is becoming an IT service provider, they don’t have to purchase any equipment, and they don’t have to stand up a 24-by-7 data center or call centers to manage their customer’s network. All the notifications and alerts go straight to their phone,” McHugh said.

Netgear Insight Pro: Toggling the cloud

Another feature of the remote monitoring and management product: MSPs using Insight Pro can switch access to the cloud on or off. Once an MSP has authenticated itself and started a subscription, McHugh said, the MSP will have the option to choose whether it wants to manage a customer’s network locally or manage it through the cloud.

“Customers don’t want to commit to a cloud model and then get stuck in an arrangement that’s unaffordable,” McHugh said. “Partners and their customers demand that they have this flexibility as they work through their concerns over user experience and the cost of operations. Customers of Insight Pro only pay for what they use.”

Cisco acquires July Systems for its location, analytics services

Cisco announced this week the acquisition of a company that provides cloud-based location services through retailers’ Wi-Fi networks, while Extreme Networks and Ruckus Networks launched improvements to their wired and wireless LANs.

Cisco plans to use July Systems technology to improve its enterprise Wi-Fi platform for indoor location services. July, a privately held company headquartered in Burlingame, Calif., sells its product by subscription.

July Systems’ platform integrates with a company’s customer management system to identify people walking into a retail store or mall. The July software can then interact with the people through text messages, email or push notifications.

The system also continuously maps the physical location of retail customers and uses the information to derive their behavior patterns. July Systems software can also send collected data to business intelligence applications for further analysis.

Before the acquisition, July Systems was a Cisco partner. The company made its location services and analytics available through Cisco Connected Mobile Experiences (CMX), a set of location-based products that use Cisco’s wireless infrastructure.

Cisco plans to complete the acquisition by the end of October. The company did not release financial details.

Extreme, Ruckus releases

Extreme Networks has introduced wired and wireless LAN infrastructure called Smart OmniEdge that incorporates technology Extreme acquired when it bought Avaya’s enterprise networking business last year.

The latest release includes an on-premises version of Extreme’s cloud-based management application, called ExtremeCloud. Both versions provide a single console for overseeing the vendor’s wired and wireless infrastructure, including access points and edge switches. They are also engineered for zero-touch provisioning, enabling customers to configure and activate devices without manual intervention.

Other infrastructure additions include hosted software for radio frequency management on the wireless network, which in today’s workplace has to serve a variety of devices, including PCs, mobile phones, printers and projectors. Automated features in the technology include access point tuning and optimization, load balancing and troubleshooting.

Smart OmniEdge utilizes Avaya’s software-defined networking product for simpler provisioning, management and troubleshooting of switches and access points. Extreme has also added APIs to integrate third-party network products and hardware adapters that companies can plug into medical devices to download and enforce policies.

Extreme has designed Smart OmniEdge for networking campuses, hotels, healthcare facilities and large entertainment venues. The company’s wired and wireless networking portfolio incorporates technology from acquisitions over several years, including Zebra Technologies’ wireless LAN business, Avaya’s software-based networking technology and Brocade’s data center network products.

Extreme’s acquisition strategy helped boost sales in its latest quarter ended in May by 76% to $262 million. However, results for the quarter, coupled with modest guidance for the current quarter, disappointed analysts, driving its stock down by 19.5%, according to the financial site Motley Fool.

Meanwhile, Ruckus Networks, an Arris company, released a new version of the operating system for its SmartZone controllers for the wired and wireless LAN. SmartZoneOS 5 provides a central console for controlling, managing and securing Ruckus access points and switches.

SmartZoneOS customers can build a single network control cluster to serve up to 450,000 clients. The controller also contains RESTful APIs, so managed service providers can invoke SmartZoneOS features and configurations.
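
Ruckus documents the SmartZoneOS API separately, so the endpoint paths below are illustrative placeholders rather than confirmed routes. The general pattern an MSP’s tooling would follow, authenticating over HTTPS and then invoking features, looks roughly like this:

# Rough sketch of driving a controller's RESTful API from an MSP's tooling.
# The host and endpoint paths are illustrative placeholders, not documented
# SmartZoneOS routes; consult the vendor's API reference for the real ones.

import requests

BASE = "https://smartzone.example.com:8443/api/public"  # hypothetical base URL

session = requests.Session()
session.verify = False  # lab only; a production controller should present a trusted cert

# 1. Authenticate to obtain an API session.
session.post(f"{BASE}/session", json={"username": "admin", "password": "secret"})

# 2. Invoke a feature: list access points in a zone (placeholder route).
aps = session.get(f"{BASE}/zones/default/aps").json()
for ap in aps.get("list", []):
    print(ap.get("name"), ap.get("status"))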

In February, Ruckus launched SmartZoneOS software that provides essential management and security features for IoT devices. The software works in conjunction with a Ruckus IoT module plugged into the USB port on each of the company’s access points.

Silver Peak SD-WAN adds service chaining, partners for cloud security

Silver Peak boosted its software-defined WAN security for cloud-based workloads with the introduction of three security partners.

Silver Peak Unity EdgeConnect customers can now add security capabilities from Forcepoint, McAfee and Symantec for layered security in their Silver Peak SD-WAN infrastructure, the vendor said in a statement. The three security newcomers join existing Silver Peak partners Check Point, Fortinet, OPAQ Networks, Palo Alto Networks and Zscaler.

Silver Peak SD-WAN allows customers to filter application traffic that travels to and from cloud-based workloads through security processes from third-party security partners. Customers can insert virtual network functions (VNFs) through service chaining wherever they need the capabilities, which can include traffic inspection and verification, distributed denial-of-service protection and next-generation firewalls.

These partnership additions build on Silver Peak’s recent update to incorporate a drag-and-drop interface for service chaining and enhanced segmentation capabilities. For example, Silver Peak said a typical process starts with customers defining templates for security policies that specify segments for users and applications. This segmentation can be created based on users, applications or WAN services — all within Silver Peak SD-WAN’s Unity Orchestrator.

Once the template is complete, Silver Peak SD-WAN launches and applies the security policies for those segments. These policies can include configurations for traffic steering, so specific traffic automatically travels through certain security VNFs, for example. Additionally, Silver Peak said customers can create failover procedures and policies for user access.
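
Silver Peak exposes this workflow through the Unity Orchestrator GUI, so the schema below is invented for illustration, but the template-then-steer logic the vendor describes reduces to something like this:

# Illustrative model of template-based segmentation and traffic steering;
# field names are invented for the example, not Silver Peak's actual schema.

SEGMENT_TEMPLATES = {
    "guest_wifi": {"users": "guests",  "apps": ["web"],        "chain": ["zscaler"]},
    "finance":    {"users": "finance", "apps": ["erp", "web"], "chain": ["ngfw", "dlp"]},
}

def steer(segment, flow):
    """Return the ordered list of security VNFs a flow must traverse."""
    template = SEGMENT_TEMPLATES[segment]
    if flow["app"] not in template["apps"]:
        return ["drop"]           # outside the segment's policy: block
    return template["chain"]      # service-chain through the listed VNFs

print(steer("finance", {"app": "erp", "src": "10.1.2.3"}))   # ['ngfw', 'dlp']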

Enterprises are increasingly moving their workloads to public cloud and SaaS environments, such as Salesforce or Microsoft Office 365. Securing that traffic — especially traffic that travels directly over broadband internet connections — remains top of mind for IT teams, however. By service chaining security functions from third-party security companies, Silver Peak SD-WAN customers can access those applications more securely, the company said.

Silver Peak SD-WAN holds 12% of the $162 million SD-WAN market, according to a recent IHS Markit report, which ranks the vendor third after VMware-VeloCloud and Aryaka.

ONF pinpoints four technology areas to develop

The Open Networking Foundation unveiled four new supply chain partners that are working to develop technology reference designs based on ONF’s strategic plan. Along with the four partners — Adtran, Dell EMC, Edgecore Networks and Juniper Networks — ONF finalized the focus areas for the initial reference designs.

ONF’s reference designs provide blueprints to follow while building open source platforms that use multiple components, the foundation said in a statement. While the broad focus for these blueprints looks at edge cloud, ONF targeted four specific technology areas:

  • SDN-enabled broadband access. This reference design is based on a variant of the Residential Central Office Re-architected as a Datacenter project, which is designed to virtualize residential access networks. ONF’s project likewise supports virtualized access technologies.
  • Network functions virtualization fabric. This blueprint develops work on leaf-spine data center fabric for edge applications.
  • Unified programmable and automated network. ONF touts this as a next-generation SDN reference design that uses the P4 language for data plane programmability.
  • Open disaggregated transport network. This reference design focuses on open multivendor optical networks.

Adtran, Dell EMC, Edgecore and Juniper will each apply their own technology expertise to these reference design projects, ONF said. Additionally, as supply chain partners, they’ll aid operators in assembling deployment environments based on the reference designs.

Hybrid cloud security architecture requires rethinking

Cloud security isn’t for the squeamish. Protecting cloud-based workloads and designing a hybrid cloud security architecture has become a more difficult challenge than first envisioned, said Jon Oltsik, an analyst at Enterprise Strategy Group in Milford, Mass.

“The goal was simple,” he said. Enterprises wanted the same security they had for their internal workloads to be extended to the cloud.

But using existing security apps didn’t work out so well. In response, enterprises tried to concoct their own, but that meant the majority of companies had separate security foundations for their on-premises and cloud workloads, Oltsik said.

The answer to creating a robust hybrid cloud security architecture is central policy management, in which all workloads are tracked, policies and rules are applied, and networking components are displayed in a centralized console. Firewall and security vendors are beginning to roll out products supporting this strategy, Oltsik said, but it’s still incumbent upon CISOs to proceed carefully.
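
No specific product is named here, but the shape of the strategy (one policy store consulted by every enforcement point, on premises or in the cloud) can be sketched in a few lines:

# Minimal sketch of central policy management: one rule set evaluated the same
# way for on-premises and cloud workloads, instead of two parallel stacks.

POLICIES = [
    {"workload": "payroll",  "location": "*",     "allow_from": ["hr-subnet"]},
    {"workload": "web-tier", "location": "cloud", "allow_from": ["any"]},
]

def is_allowed(workload, location, source):
    for rule in POLICIES:
        if rule["workload"] == workload and rule["location"] in ("*", location):
            return source in rule["allow_from"] or "any" in rule["allow_from"]
    return False  # default deny when no rule matches

print(is_allowed("payroll", "cloud", "hr-subnet"))   # True: same rule everywhere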

“The move to central network security policy management is a virtual certainty, but which vendors win or lose in this transition remains to be seen.”

Read the rest of what Oltsik had to say about centralized cloud security.

User experience management undergoing a shift

User experience management, or UEM, is a more complex concept than you may realize.

Dennis Drogseth, an analyst at Enterprise Management Associates in Boulder, Colo., described the metamorphosis of UEM, debunking the notion that the methodology is merely a subset of application performance management.

Instead, Drogseth said, UEM is multifaceted, encompassing application performance, business impact, change management, design, user productivity and service usage.

According to EMA research, over the last three years the two most important areas for UEM have been application performance and portfolio planning and optimization. UEM can provide valuable insights to assist both IT and the business.

One question surrounding UEM is whether it falls into the realm of IT or business. In years past, EMA data suggested 20% of networking staffers considered UEM a business concern, 21% saw it as an IT concern and 59% said UEM should be equally an IT and business concern. Drogseth agreed wholeheartedly with the latter group.

Drogseth expanded on the usefulness of UEM in his blog, including how UEM is important to DevOps and creating an integrated business strategy.

Mixed LPWAN results, but future could be bright

GlobalData analyst Kitty Weldon examined the evolving low-power WAN market in the wake of the 2018 annual conference in London.

Mobile operators built out their networks for LPWAN in 2017, Weldon said, and are now starting to look for action. Essentially every internet of things (IoT) service hopped on the LPWAN bandwagon; now they await the results.

So far, there have been 48 launches by 26 operators.

The expectation remains that lower costs and improved battery life will eventually usher in thousands of new low-bandwidth IoT devices connecting to LPWANs. However, Weldon noted that it’s still the beginning of the LPWAN era, and right now feelings are mixed.

“Clearly, there is some concern in the industry that the anticipated massive uptake of LPWANs will not be realized as easily as they had hoped, but the rollouts continue and optimism remains, tempered with realistic concerns about how best to monetize the investments.”

Read more of what Weldon had to say here.

Spoken acquisition the highlight of Avaya Engage 2018

Avaya took a big step toward building a competitive cloud-based unified communications portfolio with the acquisition of contact-center-as-a-service provider Spoken Communications.

Avaya announced the all-cash deal this week at the Avaya Engage 2018 user conference — the first since Avaya exited bankruptcy late last year. At the show, the company also launched a desktop phone series, an all-in-one huddle-room video conferencing system and cloud-based customer support software, called Ava.

Avaya plans to offer Spoken services as an option for customers who want to move to the cloud slowly. Companies using Avaya on-premises software can swap out call-center features one at a time and replace them with the Spoken cloud version.

“With the acquisition of Spoken, it’s clear that Avaya is putting more of an emphasis on building out its own hosted offerings that it can either sell direct or through channels,” said Irwin Lazar, an analyst at Nemertes Research, based in Mokena, Ill.

Avaya’s cloud strategy

Only a small percentage of Avaya’s customers use its cloud-based services, which lag behind those of rivals Cisco and Microsoft. Nevertheless, the market for contact center and UC as a service is growing much faster than on-premises software, analysts said.

“The current executive team is determined to shift Avaya’s focus to the cloud in terms of both technology development and business model,” said Elka Popova, an analyst at consulting firm Frost & Sullivan, based in San Antonio. “The team acknowledges they are a bit late to the game, and most of their cloud portfolio is a work in progress, but the determination is there.”

Since last year, Avaya has worked with Spoken on bringing contact center as a service (CCaaS) to Avaya customers through product integrations. The joint effort has led to integration between Spoken’s cloud-based services and Avaya’s on-premises Call Center Elite and Aura Communication Manager. The latter is Avaya’s UC platform.

Spoken uses speech recognition in its CCaaS offering to automate call-center processes and make customer service agents more efficient. For example, Spoken can transcribe conversations agents have with each customer, which frees customer reps from having to type notes into the system manually.

Spoken technology can also listen for keywords. If it hears the word invoice, for example, it can retrieve the customer’s bill automatically for the agent.
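
Spoken’s pipeline is proprietary, so the sketch below stubs out the speech recognition and CRM lookup, but the keyword-trigger pattern itself is simple:

# Toy sketch of keyword-triggered agent assistance; Spoken's actual system
# (speech recognition, CRM integration) is proprietary, so actions are stubbed.

TRIGGERS = {
    "invoice": lambda cid: f"fetch bill for customer {cid}",
    "refund":  lambda cid: f"open refund workflow for customer {cid}",
}

def on_transcript(text, customer_id):
    """Scan a live transcript fragment and fire any matching action."""
    actions = []
    for keyword, action in TRIGGERS.items():
        if keyword in text.lower():
            actions.append(action(customer_id))
    return actions

print(on_transcript("I never received my invoice last month", customer_id=4821))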

Spoken has more than 170 patents and patent applications that will go to Avaya, which expects to close the transaction by the end of March. The company did not release financial details.

Other Avaya Engage 2018 announcements

In other Avaya Engage 2018 news, the vendor introduced a cloud-based messaging platform for reaching customers on social media, such as Facebook, Twitter, WeChat and Line. Avaya’s Ava can provide immediate self-service support or send customers to an agent. If the latter occurs, then all information gathered during the automated service is handed to the service rep.

Ava supports 34 languages and has APIs Avaya partners can use for product integration. Last year, Avaya launched an initiative called A.I.Connect to encourage other vendors to integrate products that have artificial intelligence or machine learning capabilities with Avaya communication software.

Despite its cloud focus, Avaya is still paying attention to hardware. The company announced at Engage the J series line of desktop phones. The three phones come with Bluetooth and Wi-Fi connectivity. Avaya plans to release the hardware in the second quarter.

Also, the company introduced a second Vantage touchscreen phone. Unlike the first one unveiled last year, the latest hardware comes with the option of a traditional keyboard. It also supports Avaya IP Office, which provides a combination of cloud-based and on-premises UC services.

Finally, Avaya launched the CU-360 all-in-one video conferencing system for huddle rooms, which small teams of corporate workers use for meetings. The hardware can connect to mobile devices for content sharing.

Overall, the Avaya Engage 2018 conference reflected positively on the executive team chosen by Avaya CEO Jim Chirico, analysts said. Formerly Avaya’s COO, Chirico replaced former CEO Kevin Kennedy, who retired Oct. 1.

“Overall, the event did not produce a wow effect,” Popova said. “There was nothing spectacular, but the spirits were high, and the partner and customer sentiments were mostly positive.”

ExtremeLocation latest addition to Extreme wireless portfolio

Extreme Networks is offering retail customers cloud-based tools that provide actionable intelligence from customer-activity data gathered through a store’s beacons and guest Wi-Fi.

Extreme debuted its ExtremeLocation service this week at the National Retail Federation conference in New York. The service is designed to work best with ExtremeWireless WiNG, a combined access point and Bluetooth Low Energy beacon. Extreme received the WiNG technology in the 2016 acquisition of Zebra Technologies’ wireless LAN business.

For ExtremeLocation to gather the maximum amount of customer data, shoppers would have to launch the retailer’s mobile app and log into the guest network of an Extreme-based Wi-Fi deployment. At that point, the system records where customers move in the store and where they linger.

ExtremeLocation tracks people within 5 to 7 meters of their actual location — a distance acceptable to many retailers. However, higher accuracy is possible by adding access points.

“The more access points you have, the more triangulation we can use and the more accurate you can get,” said Bob Nilsson, the director of vertical solutions at Extreme, based in San Jose, Calif.

Depending on the desired level of accuracy, a large department store could deploy from hundreds to thousands of access points. ExtremeLocation supports up to 100,000 access points across multiple locations.
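
Extreme hasn’t published its positioning math, but the triangulation idea Nilsson describes can be illustrated with a least-squares fix from distance estimates to known access point coordinates. Real deployments estimate distance from noisy signal strength, which is why accuracy is measured in meters:

# Sketch of the triangulation idea: estimate a shopper's (x, y) position from
# distance estimates to access points at known coordinates. More APs add more
# equations, which tightens the least-squares estimate.

import numpy as np

def locate(aps, dists):
    """aps: list of (x, y) AP positions; dists: estimated distances to each."""
    (x1, y1), d1 = aps[0], dists[0]
    A, b = [], []
    for (xi, yi), di in zip(aps[1:], dists[1:]):
        A.append([2 * (xi - x1), 2 * (yi - y1)])
        b.append(xi**2 - x1**2 + yi**2 - y1**2 + d1**2 - di**2)
    pos, *_ = np.linalg.lstsq(np.array(A), np.array(b), rcond=None)
    return pos

# Four APs at the corners of a 20 m x 20 m floor; shopper near the center.
print(locate([(0, 0), (20, 0), (0, 20), (20, 20)], [14.1, 14.1, 14.1, 14.1]))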

Insight from customer activity on Extreme wireless

The collected information provides retailers with a view of where shoppers go, which products or displays they stop at and the amount of time spent in the store or at a specific location. Retailers can also track salespeople to ensure they are in highly trafficked areas.

Customers who turn on the mobile app can become targets for in-store promotions and coupons that the system sends through the beacons. Retailers can create policies for push notifications through a third-party system, such as customer relationship management or point-of-sale software. Extreme provides the APIs for integrating with those systems.

The ExtremeWireless WiNG access points send customer activity data to Extreme’s cloud-based software, which aggregates the information and displays the results on graphs, charts and other visuals, including a heat map of the store that shows where most shoppers are gathering. “It’s designed more for the store manager, the sales manager and the marketing side, rather than the IT side,” Nilsson said of the software.

Retailers are using location-based services for more than customer tracking. Cisco, for example, is demonstrating at the NRF conference the use of radio frequency identification tags to automatically notify a store employee that it’s time to restock a shelf.

Cisco is also demonstrating ad signage that’s attached to products in a store. When customers handle an item, the sign will change to a message enticing them to purchase the product.

Colleges to share Oracle ERP system in effort to cut costs

Three Vermont private colleges plan to share a cloud-based Oracle ERP system. It’s taking cooperation and agreement to change their business practices. But if it’s successful, the colleges expect to save millions in costs.

What Champlain College, Middlebury College and Saint Michael’s College are doing is rare for private institutions. But as more schools seek ways to control back-office costs, this idea may spread.

The three independent, nonprofit colleges did not have a history of working together. But in 2013, they formed the Green Mountain Higher Education Consortium to examine sharing an ERP system.

“Private liberal arts schools and the education that they offer is becoming more and more unaffordable for many students,” said Corinna Noelke, the consortium’s executive director and a doctorate-holding economist who was director of special projects at Middlebury.

The consortium went through a request-for-proposal process and picked Oracle ERP systems. They will use Oracle’s Human Capital Management Cloud, its ERP Cloud and the Enterprise Performance Management Cloud.

“The schools discovered that they could really work together on one software platform, as long as it allowed them to separate the schools efficiently,” Noelke said.

The implementation begins this year. The three colleges are now using Ellucian systems: One is using Banner, and two are using Colleague. Two of the colleges ran the ERP systems on premises, and the third outsourced.

A goal for the three schools was to implement best practices in a SaaS environment and to take advantage of using shared services.

Using best practices “has nothing to do with your culture and nothing to do with your special niche as a school,” Noelke said.

Public colleges have long shared IT platforms

Public universities have long shared systems across their various campuses, but it’s rare for nonprofit, private colleges to share services, said Kenneth Green, the founding director of The Campus Computing Project, which runs a continuing study of the role of IT in higher education.

The Vermont colleges’ effort “is interesting and it is innovative, and it will be carefully watched,” Green said.

Back-office systems collaboration may be a growing trend in higher education. In a separate effort, over 100 smaller colleges recently banded together to collectively negotiate ERP pricing with major vendors.

“Why can’t we leverage our collective voices with you, the vendors, to get better pricing?” said Carol Smith, CIO of DePauw University and president of the board of directors of the Higher Education Systems & Services Consortium (HESS). The effort to negotiate as a group with ERP systems vendors began in 2016.

Most of the HESS Consortium schools have student full-time-equivalent populations of less than 8,000, and some have only a few thousand students. But a goal of HESS is to give its members the contract negotiation clout of a large university system.

Transparency on ERP pricing is one goal

HESS is also working to normalize ERP pricing and services between vendors to make it easier for colleges to conduct apples-to-apples comparisons. The schools, some of which previously have gotten little vendor attention, hope that now changes. They are meeting collectively with their respective vendors to discuss their needs.

Instead of negotiating the ERP contracts independently, and then wondering whether they got a good price, Smith said participating HESS colleges will “feel confident” that they got the best consortium price for the ERP system.

The Green Mountain effort takes the idea of ERP collaboration a step further.

The consortium had four ERP candidates: Oracle, Workday, Unit4 and Campus Management.

In its selection, the Oracle ERP system gained the edge with its pricing, functionality and ability to set up a shared environment, Noelke said.

The Oracle ERP system architecture allows the colleges to run in a single instance while maintaining three separate and distinct operations, one for each campus. Employees at each school don’t see information from the other colleges unless they opt into a shared service. Each college will have an independent user interface, and data will be separated, but otherwise the three schools operate on one platform.

“The architecture is very elegant and is really letting you be separate where you want to be separate, but also to come together where you want to come together,” Noelke said.

Custom configurations are being discouraged

The introduction of the SaaS platform is requiring the schools to make substantive changes to their business practices. As they shift to a “best practices environment,” the schools may handle some processes differently. They are holding “process reimagine and redesign” workshops involving finance and HR, facilitated by CampusWorks Inc., and their implementation contractor is Hitachi Consulting for Oracle.

The basic premise is that the schools will customize configurations only where needed, and the departments will have to make a business case for each customization. As soon as configurations diverge among the three schools, the system becomes harder to update, Noelke said.

The implementations will continue through much of the year. The schools will have to make “a million little decisions every day” with the implementers, Noelke said.

The three colleges expect to pay less with the Oracle ERP system. They have cut licensing costs by about 20% by acting together, and implementation costs are much lower because the work is shared, Noelke said. This doesn’t account for long-term productivity gains helped by the elimination or reduction of manual, paper-based processes.

Over an eight-year period — fiscal year 2018 through fiscal year 2025 — buying software together and implementing it together at the same time is saving $20 million versus each school buying the software themselves and implementing it themselves, Noelke said.

Education systems require specialized software related to student needs, such as registration, class schedules and financial assistance. Oracle is developing a new student system using the knowledge of needs and requirements it gained from its PeopleSoft product. That work is still in development. The consortium is likely to use Oracle’s approach, but will make a final determination once the development work is completed.

The motivation for these joint efforts is clear. At DePauw, Smith said she personally believes these types of collaborations among private schools will expand.

“We’re here to provide an educational experience for our students, so that they can be the best they can be,” Smith said. “I think we have to try to preserve every ounce of resources that we possibly can.”

Master the seven key DevOps engineer skills for 2018

This year will be an exciting one in DevOps. Cloud-based technologies will continue to grow in 2018, as will the use of AI in day-to-day operations. We’re going to see a renewed focus on the role of hardware in both cloud and on-premises installations. Also, quantum computing will become a regular part of commercial computing.

All of these trends will require developers and operations professionals to acquire new DevOps engineer skills to adapt to this evolving landscape. Below, you’ll find 2018’s technological trends and the skills DevOps pros will have to develop to be viable in the coming year.

Serverless computing is here, so get used to it

The big three of web service providers — Amazon Web Services (AWS), Microsoft Azure and Google Cloud Platform — now provide serverless computing environments: AWS has Lambda, Azure has Azure Functions and Google has Cloud Functions. These technologies represent significant investments; they are not going away. In fact, the big three are promoting serverless computing as a first-order way to develop for the web, particularly around the internet of things (IoT).

And so moving into 2018, key DevOps engineer skills will include understanding the basic concepts of serverless computing in terms of architecture, version control, deployment and testing. There are still outstanding problems to be solved, particularly around real-world unit testing of serverless functions in a continuous integration and continuous delivery pipeline.
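
As one concrete example of the testing skill: a Lambda-style handler is a plain function, so part of the CI problem reduces to invoking it directly with synthetic events. A minimal sketch, using a simplified event shape rather than a full AWS payload:

# A Lambda-style handler is just a function, so a CI pipeline can unit-test it
# directly with synthetic events; no cloud round-trip is needed for this layer.
# Event fields are simplified for the sketch, not a full API Gateway payload.

import json
import unittest

def handler(event, context):
    name = event.get("queryStringParameters", {}).get("name", "world")
    return {"statusCode": 200, "body": json.dumps({"greeting": f"hello {name}"})}

class HandlerTest(unittest.TestCase):
    def test_greets_named_caller(self):
        resp = handler({"queryStringParameters": {"name": "iot-device-7"}}, None)
        self.assertEqual(resp["statusCode"], 200)
        self.assertIn("iot-device-7", resp["body"])

if __name__ == "__main__":
    unittest.main()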

Get with IoT, or IoT will get you

IoT is on pace to eat the internet. Back in 2014, Business Insider predicted IoT would become the internet’s predominant technology.
 
This year, we’ll see even more activity. IoT will have a growing impact in two significant areas: processing and security. In terms of processing, IoT devices emit a lot of data, all of which must be processed somewhere. The increased demand will put a burden on infrastructure. Understanding how to accommodate the increase in volume due to IoT devices is going to be an important DevOps engineer skill in 2018.
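
One widely used tactic is to aggregate at the edge so back-end systems ingest windowed summaries instead of every raw reading. A minimal sketch, with an arbitrary example window size:

# Edge-side aggregation sketch: collapse raw IoT readings into fixed-size
# window summaries so the backend ingests far fewer records.

from statistics import mean

def aggregate(readings, window=60):
    """readings: ordered sensor values; yields one summary per window."""
    for i in range(0, len(readings), window):
        chunk = readings[i:i + window]
        yield {"n": len(chunk), "min": min(chunk), "max": max(chunk),
               "avg": round(mean(chunk), 2)}

raw = [20.0 + (i % 7) * 0.1 for i in range(180)]   # 180 raw samples...
print(list(aggregate(raw)))                        # ...become 3 summaries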

In terms of security, new practices still need to be adopted. One type of consumer hazard is home invasion, in which a nefarious agent takes over household appliances. Imagine some bad tech turning off the heating system in a house during a Boston winter. After a few hours, all the pipes in the house burst. The damage would be significant. In the commercial enterprise, things can get much worse — think a nuclear reactor.

Given the risks at hand, DevOps personnel need to get a firm understanding of the intricacies of IoT. The technologies go beyond the standard practices of controlling a standard data center. The risks are real, and the consequences of not being well-informed are significant.

Security in your smart home — and all your IoT devices — will become even more essential as code-crunching quantum computers become more readily available.

Get ready for the resurrection of hardware

The days of using any old type of hardware to run a cloud-based VM are coming to a close, particularly as more applications are used in life-and-death situations — driverless vehicles, for example. The most telling change is the growing attraction of the GPU as the processor of choice for AI and machine learning computation. Hardware is indeed making a comeback.

Cloud providers are listening. Amazon allows you to attach GPUs to cloud instances, and so do Azure and Google Compute Engine. Along with this GPU rise, you are also going to see companies going back to “down to the metal” installations. There are providers out there, such as Packet.net, BareMetalCloud and Storm, that offer hourly rates on actual hardware.

As specialized big data processing becomes more a part of the everyday computing workload, alternatives to multicore commodity hardware will become essential. This hardware resurrection will have a definite impact on DevOps engineer skills and practices. DevOps personnel will need to know the basics of chip architecture — for example, how is a GPU different from a CPU? We’re going to have to refresh our understanding of network hardware and architecture.

Put learning about RPA firmly on your roadmap

Robotic process automation (RPA) is the practice of applying robotic technology to do physical work within a given workflow. In other words, RPA is about teaching robots to do work with, or instead of, humans.

Over the last few years, RPA has become a standard discipline on the factory floor, and it’s getting more prominent in general IT. Short of a Luddite revolution, RPA is not going away. A quote in the Institute for Robotic Process Automation primer is quite telling: “Though it is expected that automation software will replace up to 140 million full-time employees worldwide by the year 2025, many high-quality jobs will be created for those who are able to maintain and improve RPA software.”

As hard as it is to imagine today, teaching robots is going to become an essential DevOps skill. It makes sense in a way. We’ve been automating since Day One. Applying robotic technology to physical work in physical locations such as a data center is a natural extension of DevOps activity.

Prepare for the impact of quantum computing on your security infrastructure

Quantum computing is no longer a science-fiction fantasy. It’s here. IBM has a quantum computer available for public use via the cloud. D-Wave Systems is selling quantum computers commercially. They go for around $10 million each. Google and Lockheed Martin are already customers.

The key benefit of quantum computing is speed. There are still problems out there that take classical computers — computers that use standard binary processors — billions of years to solve. Decoding encrypted data is one such problem. Such complex code breaking can be done by a quantum computer in a few hundred seconds.

The impact of quantum computing on security practices is going to be profound. At the least, quantum computing is going to allow any text-based password to be deciphered in seconds. Also, secure access techniques, such as fingerprinting and retinal scans, will be subject to hacking. Quantum computing will allow malicious actors to perform highly developed digital impersonation in cyberspace.
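
The headline numbers vary, but the standard back-of-the-envelope reasoning comes from Grover’s algorithm, which searches an unstructured space of N possibilities in roughly the square root of N steps, effectively halving a symmetric key’s bit strength. A quick sketch of that arithmetic, as an illustration of the scaling rather than a runtime prediction:

# Back-of-the-envelope only: Grover's algorithm needs on the order of sqrt(N)
# quantum operations to search N possibilities, halving effective key bits.
# (Shor's algorithm, which breaks RSA-style encryption, is a separate result.)

import math

for bits in (56, 128, 256):
    classical = 2 ** bits            # brute-force trials, worst case
    grover = math.isqrt(classical)   # ~2**(bits/2) quantum iterations
    print(f"{bits}-bit key: 2^{bits} classical trials -> ~2^{bits // 2} quantum")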

To paraphrase Alan Turing in The Imitation Game: “The only way to beat a machine is with another machine.”

It’ll be the same with quantum computing. DevOps security pros — whose primary concern is providing a state-of-the-art security infrastructure — will do well to start learning how to use quantum computing. Quantum computing will provide the defensive techniques required to ensure the safety of the digital enterprise as we move into the era of infinite computing.

Get good at extreme edge cases

In the old days, we’d need a human to look over reports from automated system agents to figure out how to address anomalies. Now, with the growth of AI and machine learning, technology can identify more anomalies. The more anomalies AI experiences, the smarter it gets. Thus, the number of anomalies — aka edge cases that require human attention — is going to diminish. AI will have it covered.
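
As a minimal illustration of what that automation looks like, here is a simple z-score filter standing in for the machine learning a real AIOps product applies:

# Minimal anomaly-flagging sketch: a z-score filter stands in for the ML that
# real monitoring tools use. Only flagged points would reach a human.

from statistics import mean, stdev

def anomalies(series, z_threshold=3.0):
    mu, sigma = mean(series), stdev(series)
    return [(i, x) for i, x in enumerate(series)
            if sigma and abs(x - mu) / sigma > z_threshold]

latencies = [12, 11, 13, 12, 14, 11, 12, 95, 13, 12, 11, 13, 12, 14, 12, 11]
print(anomalies(latencies))   # only the spike at index 7 needs human eyes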

But the cases that do warrant human attention are going to be harder to resolve. And the type of human needed to address an edge case will have to be very smart and very specialized, to the point that only a few people on the planet will have the qualifications necessary to do the work.

In short, AI is going to continue to grow. But there will be situations in which human intelligence is required to address issues AI can’t. Resolving these edge cases is going to require a very deep understanding of a very precise knowledge set, coupled with highly developed analytical skills. If part of your job is troubleshooting, start developing expertise in a well-defined specialty to a level of understanding that only a few will have. For now, keep your day job. But understand that super-specialization and extreme analysis are going to be a DevOps skill trend in the future.

Relearn the 12 Principles of Agile Software

The Agile Manifesto, released in 2001, describes a way of making software that’s focused on getting useful, working code into the hands of users as fast as possible. Since the Manifesto’s release, the market has filled with tools that support the philosophy. There have been arguments at the process level, and there are a number of permutations of Agile among project managers. Still, the 12 principles listed in the Manifesto are as relevant today as when they first appeared.

Sometimes, we in DevOps get so bogged down in the details of our work that we lose sight of the essential thinking that gave rise to our vocation. Though not a DevOps skill exactly, reviewing the 12 Principles of Agile Software is a good investment of time, not only to refresh one’s sense of how DevOps came about, but also to provide an opportunity to recommit oneself to the essential thinking that makes DevOps an important part of the IT infrastructure.

Windows Server version 1709 hits turbulence upon release

Enterprises that use DevOps methodologies for advanced cloud-based applications will likely gain a new appreciation for Microsoft and Windows Server. That’s because the release of Windows Server version 1709, which came out in October, improves container support and has added functions that fortify its software-defined networking capabilities.

Every six months, Microsoft plans to introduce a new edition of Windows Server for businesses that want the newest features and updates. Admins need to know what’s in Windows Server version 1709 and how it differs from the original Windows Server 2016 release, which was introduced in October 2016. Here is a roundup of those changes and several others that are worthy of further scrutiny.

Microsoft makes containers the focus

Microsoft changed the mission of Nano Server in Windows Server version 1709. No longer considered a lighter version of Server Core to host various infrastructure workloads, Nano Server is now only available as a base image for containers. This role change allowed Microsoft to shrink Nano Server to about 80 MB, a drop from about 400 MB. This reduction means Nano Server no longer includes Windows PowerShell, .NET Core and Windows Management Instrumentation by default. Microsoft also removed the servicing stack from Nano Server, so admins have to redeploy the image for every update or patch. And all troubleshooting? That’s done in Docker, too.

There are other container improvements in Windows Server version 1709:

  • The Server Core container image is much smaller. According to Microsoft, it is just under 3 GB, down from nearly 6 GB in the Windows Server 2016 release-to-manufacturing (RTM) version.
  • Windows Server version 1709 supports Linux containers on Hyper-V. These containers act like Docker containers but have kernel isolation provided by Hyper-V so that they are completely independent. By comparison, traditional containers share a kernel but virtualize the rest of the OS. (A sketch of selecting the isolation mode follows this list.)
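
A lab sketch of that isolation choice: the --isolation=hyperv flag is Docker’s documented switch on Windows hosts, the image tag reflects Docker Hub naming of the 1709 era (it may have since moved), and Python here is just a convenient wrapper around the CLI:

# Lab sketch: choosing container isolation on a Windows Server 1709 Docker
# host. Hyper-V isolation gives the container its own kernel; process
# isolation (the traditional mode) shares the host's kernel.

import subprocess

def run_isolated(image, command, isolation="hyperv"):
    """Start a container under the requested isolation mode."""
    return subprocess.run(
        ["docker", "run", "--rm", f"--isolation={isolation}", image, *command],
        capture_output=True, text=True)

result = run_isolated("microsoft/nanoserver:1709", ["cmd", "/c", "ver"])
print(result.stdout)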

For admins with significant investments in containers, these are great changes. For businesses that don’t need application virtualization, Microsoft says the updated Server Core in the Semi-Annual Channel release is where admins should put their infrastructure workloads.

Say aloha to Project Honolulu

Around the time Microsoft released Windows Server version 1709, the company also provided a technical preview of Project Honolulu — a free, GUI-based remote server management tool. Project Honolulu makes it easier to manage Server Core for admins who aren’t fluent in PowerShell.

Project Honolulu is a responsive web interface that enables admins to manage multiple remote servers, both on premises and in the cloud. It runs on a client machine or on a Windows Server instance and has similar functionality to local Microsoft Management Console-based GUI tools and Server Manager. Admins can use Project Honolulu to manage machines that run Windows Server 2012, including Server Core and the free Hyper-V Server.

Project Honolulu wraps up a number of administrative tools into a unified interface. It makes Server Core management less onerous and improves things to the point where I can recommend Server Core as the preferred installation option for any infrastructure servers you plan to deploy.

Microsoft improves SDN features

Windows Server version 1709 also added enhancements to its networking features, such as these two that were designed specifically for software-defined networking (SDN).

  • This Semi-Annual Channel release extends support for shielded VMs to Linux workloads. Microsoft introduced shielded VMs in Windows Server 2016 RTM. The feature enables these VMs to only run on authentic, verified hypervisor hosts. They remain encrypted and unbootable if an admin tries to access them from another host.
  • Microsoft added Virtual Network Encryption, which enables admins to mark subnets that connect different VMs as “Encryption Enabled,” so that traffic over those links cannot be transmitted in clear text.

There were also several improvements in IPv6 support as that technology moves closer to widespread use in production. Those changes include support for domain name system configuration using router advertisements, flow labels for more efficient load balancing and the deprecation of Intra-Site Automatic Tunnel Addressing Protocol and 6to4 support.

Storage Spaces Direct drops from version 1709

In a curious move, Microsoft pulled support for Storage Spaces Direct (S2D) clusters, one of the better aspects of the original Windows Server 2016 release, in Windows Server version 1709.

S2D creates clusters of file servers with directly attached storage. This provides an easier and more cost-effective storage option for companies that would normally take a cluster of servers and attach them to a storage area network or a just-a-bunch-of-disks (JBOD) enclosure. S2D presents all of the directly attached disks as one big storage space, which the admin divvies into volumes.

Admins cannot create new S2D clusters on version 1709, and a machine running version 1709 cannot participate in an existing S2D cluster. If you use S2D clusters — or plan to — version 1709 is not for you. Microsoft says S2D is alive and well as a technology, but the company just couldn’t get it right in time for the 1709 release.

Growing pains for Windows Server

As Microsoft will offer a new version of Windows Server every six months, the removal of S2D should make admins wonder whether the company will continue to play feature roulette in the Semi-Annual Channel. If an organization adopts a new feature, what happens if it’s pulled in the next release?

This raises another question: If Microsoft can’t hit the six-month targets, then why promise them at all? It’s too early to make a final judgment, but businesses that aren’t all-in on containers might want to wait until version 1803 to make sure other features aren’t removed before they commit to the Semi-Annual Channel.

Next Steps

Server Core can help and hinder IT

Pets vs. cattle and the future of management

Windows Server 2016 innovations challenge admins