Adobe has extended its digital experience product portfolio to small and midmarket businesses in an effort to provide enterprise-grade capabilities such as agility, scalability and flexibility to businesses with fewer resources.
The product portfolio for SMBs includes:
Magento Commerce: According to Adobe, this product provides agility and scalability through a portfolio of cloud-based omnichannel platforms. It is designed to enable users to integrate digital and physical shopping experiences. Through the integration of Adobe Stock with Magento Commerce, SMBs with an Adobe Stock subscription will be able to access more than 130 million assets such as images, templates, 3-D assets and stock videos.
Marketo Engage: As part of Adobe Marketing Cloud, Marketo Engage enables users to target individual leads or accounts at scale, as well as measure business impact across customer touchpoints. Additionally, according to Adobe, Marketo Engage offers access to more than 65,000 marketers globally, enabling users to share best practices to build and formalize marketing strategies.
Adobe Analytics Foundation: Adobe Analytics Foundation was designed to bring the enterprise-grade features of Adobe Analytics to SMBs through the Adobe digital experience platform. Customers can implement the tool at the appropriate level for their organization, and then scale up as needed.
Adobe Sign for Small Business: According to Adobe, the new Adobe Sign for Small Business offers enterprise-grade e-signature capabilities tailored to small businesses in an effort to help digitize signing documents for customer onboarding, contracts, approvals, payments and invoices.
Creative Cloud for Teams: This product enables companies to deploy Adobe digital experience applications. The Creative Cloud Libraries let teams share assets and folders securely, while collaborating and managing changes.
While digitalization was once more of an enterprise-centric theme, SMBs have increasingly taken on the challenge. Historically, it has been more difficult for smaller businesses to digitize their operations due to cost and scale, but SMB digitalization has risen in recent years. According to Gartner Research, SMBs’ IT spending is predicted to grow at a 4.2% compound annual rate over the next five years.
Vonage plans to add a homegrown video conferencing app to its cloud-based business communications portfolio in December. The move is the latest example of a UC vendor combining calling, messaging and meetings.
Vonage Meetings, currently in beta, is scheduled to launch in December for businesses subscribed to Vonage’s cloud UC product. The vendor said it would not make the meetings platform available as a stand-alone offering.
Vonage currently provides video conferencing capabilities to customers through a partnership with Amazon Web Services, maker of the Amazon Chime meetings app. Vonage built the new platform using technology it inherited through its acquisition of TokBox in 2018.
The release of Vonage Meetings follows moves by competitors, including 8×8, which launched a revamped meetings product in September. Over the last couple of years, market leaders Microsoft and Cisco have also built out all-in-one communications suites that include video.
Vonage has a strategy of building a technology stack that doesn’t rely on third parties, said Raúl Castañón-Martinez, analyst at 451 Research. “This is a bold move but will allow them more flexibility in terms of defining their roadmap.”
Vonage Meetings will be fully integrated with the vendor’s voice platform to let users quickly move between voice and video calls. Guests will be able to join meetings using a web browser without installing a client or plug-in.
Vonage said it would provide customers with a log of past meetings, including a record of in-meeting chats.
Vonage now has a single cloud platform from which it can deliver voice and video services, said Zeus Kerravala, principal analyst at ZK Research. “I think that will work as a very good competitive advantage for them moving forward.”
In the future, Vonage will need to integrate Vonage Meetings with conference room equipment and software, Kerravala said. Also, the vendor should focus on improving its relatively basic messaging app.
Vonage announced the meetings platform this week at Vonage Campus 2019, a user conference in San Francisco. The company also released a new logo as it continues to pivot away from the consumer market.
Founded in 2001, Vonage was among the first vendors to offer internet-based phone service to consumers, but, more recently, has transformed into a business-to-business company.
“I think the Vonage that we knew as the consumer-first company is quickly winding down,” Kerravala said.
Unified communications vendor Intermedia has added contact center software to its cloud portfolio. The move is the latest example of how the markets for UC and contact center technologies are converging.
Intermedia follows the lead of other cloud UC vendors, including RingCentral, Vonage and 8×8, in building or acquiring a contact center as a service (CCaaS) platform. Intermedia’s CCaaS software stems from the acquisition of Toronto-based Telax in August.
The Intermedia Contact Center will be available as a stand-alone offering or bundled with Intermedia Unite, a cloud-based suite of calling, messaging and video conferencing applications. Intermedia will sell the offering in three tiers: Express, Pro and Elite.
Express — sold only as an add-on to Intermedia Unite — is a basic call routing platform for small businesses. Pro includes more advanced call routing, analytics, and support for additional contact channels, such as chat.
Elite, the most expensive tier, integrates with CRM platforms and includes support for self-service voice bots, outbound notification campaigns and quality assurance monitoring.
Intermedia has already integrated Express with its UC platform. It’s planning to do the same for Pro and Elite early next year.
Integrating UC and contact center platforms can save money by letting customer service agents transfer calls outside of the contact center without going through the public telephone network. Plus, communication between agents and others in the organization is more effective when everyone uses the same chat and video apps.
Based in Sunnyvale, Calif., Intermedia sells its technology to small and midsize businesses through 6,600 channel partners. Most of them are managed service providers that brand Intermedia’s service as their own.
In addition to UC and contact center, Intermedia offers email archiving and encryption, file backup and sharing systems, and hosted Microsoft email services.
Roughly 1.4 million people across 125,000 businesses use Intermedia’s technology. The company, founded in 1995 and now owned by private equity firm Madison Dearborn Partners, said its acquisition of Telax brought annual revenue to around $250 million.
Founded in 1997, Telax sold its CCaaS platform exclusively through service providers, which rebranded it mostly for small and midsize businesses.
The Pivot3 storage portfolio has expanded into different directions over the years. Pivot3 started out selling — and still sells — storage for media and surveillance before moving into hyper-converged infrastructure after HCI gained popularity.
CEO Ron Nash said the acquisition of NexGen Storage in 2017 was a watershed moment for the vendor, based in Austin, Texas. NexGen gave Pivot3 storage quality of service and nonvolatile memory express (NVMe) PCIe flash — capabilities that underpinned the 2017 launch of the Pivot3 Acuity HCI system.
In the last month, Pivot3 added HCI products to expand its use cases. The vendor launched a ruggedized Intelligent Edge Command and Control system that’s optimized for analytics and virtual desktops and built for defense and intelligence operations in the field. It also forged a partnership with Lenovo to sell a set of edge computing systems for smart city security, based on Pivot3 HCI software and Lenovo ThinkSystem servers.
Nash added that Pivot3’s average selling price per unit has increased 74% since the Acuity launch. With the NexGen integration complete, composable infrastructure is part of Pivot3’s strategy to sell more to enterprises with sprawling application workflows that need both automation and scalability, particularly in a multi-cloud environment.
“When you first start out, you put your blinders on and try to go as fast as possible,” Nash said. “It’s almost like a demonstration market. But you have to keep innovating. You can’t keep still. Things that were seen as innovative a few years ago aren’t so innovative anymore. That’s both the beauty and the challenge of technology.”
We recently spoke with Nash about the Pivot3 storage strategy and product roadmap.
How do you evaluate the advent of composable storage? Some observers claim it is more flexible than hyper-converged infrastructure, which packages compute, networking, storage and virtualization on a single hardware appliance.
Ron Nash: Well, you could run composable units on a hyper-converged platform now, if you wanted to. In fact, that’s the direction we’re heading.
The composable stuff is really just a packaging exercise: You deliver processing, storage, networking and such as definable units that you can mix and match in different measures to meet different service levels. But once you start running composable units, you can run them on a hyper-converged infrastructure platform just fine.
It is additive. It gives you one higher level of abstraction, above the hyper-converged infrastructure platform you have now. That’s why we’re interested in it. We’ll have a policy-driven engine that allows you to specify those pieces and, in the parlance, compose those units such that we can provide modules of performance that people will need.
Would this be a different licensing model for Pivot3?
Nash: Yes, it would be another software layer on top of the hardware modules. We’re abstracting just one step farther from the hardware into the software zone.
The direction we’re taking is to make the composable stuff into a software mechanism. That software layer will manipulate the platform underneath to [provision] the right type of units to provide the right service level at the right cost level and right security level. They don’t have to think about the hardware underneath. They’ll think in terms of the load they have.
How soon do you expect this feature to go live on Pivot3 storage?
Nash: We are working on it right now. We have parts of it operating in our lab. We won’t have one grand step where we announce composable as a part of a big picture. We’ll have a series of releases over quarters that add different capabilities as we move down that road.
How rapidly are Pivot3 storage customers adopting a multi-cloud strategy? And how is it reflected in your growth?
Nash: I read a research survey that 80% of CIOs [are using] multiple clouds. To me, that’s [the equivalent] of crossing the chasm. Everybody put their toe in the water, figured out what the cloud was and then had to bring stuff back on premises. You need the ability to go into and out of the public cloud and private clouds.
We’re making our platform with high performance, such that it runs far more of a company’s applications than our competitors. That means we’re getting more load and bigger orders.
We have had people buying a bigger platform from the get-go, but our average selling price increased 74% in the last year. We’ve gone from a baseline of hundreds of thousands of dollars [per sale] to 74% higher on average. That’s massive. Usually, you’re happy if you get a 5% increase year over year.
Which vertical sectors account for the most growth in Pivot3 storage?
Nash: We recently announced our revenue growth at 70% year over year. Growth is coming in the hybrid cloud and software-defined data center. People in that market are becoming more educated and discriminating. If you’ve got a little bitty job, there are a whole bunch of [vendors] for that.
The strengths of Pivot3 storage are ultrahigh performance, large application environments that need to scale, automated quality of service and policy management. We have broadcast that to the market, and enterprise customers are getting a bead on us. They seem to know when to call us.
Pivot3 and NexGen Storage are integrated in a single code base. Trace the impact the NexGen acquisition has had on your cloud business.
Nash: We are benefiting from the amalgamation of the NexGen and Pivot3 teams. You’ve got to have automation if you’re [managing] data that’s scattered across private, hybrid and public clouds. The fact we had the forethought to start working on the automation layer is really paying off now.
People are starting to get away from the early romance of the cloud — the idea that I could pick one cloud and put everything on it. That was never going to happen. People now have a more sophisticated view. They say, ‘I need some private cloud capabilities inside and probably need two or three different public clouds, based on the job and service level.’ And they need the ability to connect these clouds and move data back and forth. That plays to our strength as a platform, with automation layers that let you keep up with it.
What’s been the most surprising feature that customers are requesting?
Nash: The one people are asking about most is encryption. We have customers in government and security that need to encrypt data, so we’ve always done that. But we’re starting to see the early edge of a wave for encryption in many more companies. They have growing concern about [cloud] data breaches and being able to protect people’s personal identities. They think, ‘Why not just encrypt everything?’
As NVMe over Fabric gets going, we’ll move from a world of electromechanical devices and storage controllers to a world that looks more like memory. If you’re working in memory, the cost of encryption is in single digits, in terms of the impact on performance. If you’ve got the right architectural platform, the cost of encryption is well worth it.
Our erasure coding is not content-aware, but it uses a linear algebra algorithm that works just fine. That’s going to make a huge difference for us on encrypted data. We can get storage efficiency that is as good on encrypted as on unencrypted data.
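Nash’s point that a non-content-aware erasure code treats ciphertext and plaintext identically can be shown with the simplest linear code: a single XOR parity shard. This is a toy sketch for illustration, not Pivot3’s actual algorithm:

```python
import os

def make_parity(shards):
    """XOR all data shards together into one parity shard. XOR is the
    simplest linear erasure code; it operates on raw bytes, so it is
    indifferent to whether those bytes are encrypted."""
    parity = bytes(len(shards[0]))
    for shard in shards:
        parity = bytes(a ^ b for a, b in zip(parity, shard))
    return parity

def reconstruct(surviving_shards, parity):
    """Recover a single lost shard by XOR-ing the parity with the survivors."""
    missing = parity
    for shard in surviving_shards:
        missing = bytes(a ^ b for a, b in zip(missing, shard))
    return missing

# Encrypted shards are just opaque bytes to the coder; random data stands in.
shards = [os.urandom(8) for _ in range(4)]
parity = make_parity(shards)
recovered = reconstruct(shards[:2] + shards[3:], parity)  # shard 2 "lost"
assert recovered == shards[2]
```

Because the coder never inspects content, the parity overhead is the same whether or not the shards are encrypted, which is the storage-efficiency point Nash is making.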
Polycom has expanded its VoIP endpoint portfolio with the release of four new open SIP phones. The vendor also launched a new cloud-based device management service to help partners provision and troubleshoot Polycom devices.
The release builds upon the Polycom VVX series of IP desk phones. The more advanced models include color LCD displays and gigabit Ethernet ports, unlike any of the previous phones in the Polycom VVX series.
The VVX 150 is the most basic of the new devices. Designed for home offices or common areas, the VVX 150 supports two lines and does not have a USB port or a color display.
The VVX 250 is targeted at small and midsize businesses, with a 2.8-inch color LCD display, HD audio, one USB port and support for up to four lines.
The VVX 350 is for cubicle workers, call centers and small businesses. It has a 3.5-inch color LCD display, two USB ports and support for six lines.
The most advanced of the four new models, the VVX 450, can host 12 lines and comes with a 4.3-inch color LCD display. Polycom said the phone is meant for front-line staff in small and midsize businesses.
The new phones rely on the same unified communications software as the rest of the Polycom VVX series, which should simplify the certification process for service providers, Polycom said. 8×8, Nextiva and The Voice Factory were the first voice providers to certify the devices.
Unlike traditional proprietary phones, open SIP phones can connect to the IP telephony services of a wide range of vendors. This simplifies interoperability for businesses that get UC services from multiple vendors.
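That interoperability comes from SIP being a standardized, plain-text protocol: any open SIP phone registers with any compliant service using the same kind of message. The sketch below builds a simplified REGISTER request (it omits required fields such as Max-Forwards and the Via branch parameter, so it is illustrative rather than wire-ready):

```python
def sip_register(user, domain, contact_ip, port=5060):
    """Assemble a simplified SIP REGISTER request as plain text."""
    return (
        f"REGISTER sip:{domain} SIP/2.0\r\n"
        f"Via: SIP/2.0/UDP {contact_ip}:{port}\r\n"
        f"From: <sip:{user}@{domain}>\r\n"
        f"To: <sip:{user}@{domain}>\r\n"
        f"Call-ID: example-call-id\r\n"
        f"CSeq: 1 REGISTER\r\n"
        f"Contact: <sip:{user}@{contact_ip}:{port}>\r\n"
        f"Content-Length: 0\r\n"
        f"\r\n"
    )

msg = sip_register("alice", "provider.example", "192.0.2.10")
```

A proprietary phone speaks the same transport but depends on vendor-specific provisioning and signaling extensions, which is what ties it to a single platform.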
Polycom embraces cloud to help sell hardware
Polycom has launched two new cloud services in an attempt to make its hardware more attractive to enterprises and service providers.
Polycom Device Management Service for Service Providers, released this week, gives partners a web-based application for managing Polycom devices. This should help service providers improve uptimes and enhance end-user control panels. Polycom launched a similar service for enterprises earlier this year.
Eventually, Plantronics may look to combine its cloud management platform with Polycom’s, allowing partners to control phones and headsets from the same application, said Irwin Lazar, analyst at Nemertes Research, based in Mokena, Ill. This would give Plantronics and Polycom an advantage over competitors such as Yealink and AudioCodes.
“The endpoint market is fairly competitive, so wrapping management capabilities around the devices is an attractive means to provide a differentiated offering,” Lazar said.
The latest addition to HPE’s HCI portfolio aims to give smaller IT shops a little less bang for a lot less buck.
The HPE SimpliVity 2600 configures up to four compute modules in a 2U space, and features “always-on” deduplication and compression. Those capabilities often appeal to businesses with space-constrained IT environments or with no dedicated data center at all, particularly ones that deploy VDI applications on remote desktops for complex workloads and require only moderate storage.
Examples include branch offices, such as supermarkets or retailers with no dedicated data center room, that might keep a server in a manager’s office, said Thomas Goepel, director of HPE’s product management for hyper-converged systems.
Higher-end HPE HCI products, such as the SimpliVity 380, emphasize operational efficiencies, but their compute power may exceed the needs of many remote branch offices, and at a higher cost, so the 2600’s price-performance ratio may be more attractive, said Dana Gardner, principal analyst at Interarbor Solutions LLC in Gilford, N.H.
“Remote branch offices tend to look at lower-cost approaches over efficiencies,” he said. “Higher-end [HPE HCI systems] and in some cases the lower-end boxes, may not be the right fit for what we think of as a ROBO server.”
On the other hand, many smaller IT shops lack internal technical talent and may struggle to implement more complex VDI workloads.
“[VDI] requires a lot of operational oversight to get it up and rolling and tuned in with the rest of the environment,” Gardner said.
The market for higher compute density HCI to run complex workloads that involve VDI applications represents a rich opportunity, concurred Steve McDowell, a senior analyst at Moor Insights & Strategy. “It’s a smart play for HPE, and should compete well against Nutanix,” he said.
The HPE SimpliVity 2600, based on the company’s Apollo 2000 platform, also overlaps with HPE’s Edgeline systems unveiled last month, although there are distinct differences in the software stack and target applications, McDowell said. The 2600 is more of an appliance with a fixed feature set contained in a consolidated management framework.
The Edgeline offering, meanwhile, targets infrastructure consolidation out on the edge with a more even balance of compute, storage and networking capabilities.
Higher-end HPE HCI offerings have gained traction among corporate users. Revenues for these systems surged 280% in this year’s first quarter compared with a year ago, versus 76% growth for the overall HCI market, according to IDC, the market research firm based in Framingham, Mass.
“There has been a tremendous appetite for HCI products in general because they come packaged and ready to install,” Gardner said. “HPE is hoping to take advantage of this with iterations that allow them to expand their addressable market, in this case downward.”
The 2600 will be available sometime by mid-July, according to HPE.
Data visualization specialist Reflect enlivens the growing Puppet DevOps tool portfolio, but it’s unclear if Puppet’s wares will catch enterprise customers’ attention in a busy marketplace.
The purchase of Reflect, a startup company based in Portland, Ore., shows that Puppet has little choice but to reinvent itself as containers pull users’ attention away from traditional configuration management, analysts said. Data visualization, a way to portray data so that it’s easily understood by people, will also be increasingly important as microservices architectures expand and IT management complexity skyrockets.
“The ability to paint pretty pictures [of data] is not just a ‘nice to have’ feature,” said Charles Betz, analyst at Forrester Research. “It’s important as microservices become more difficult to visualize and manage.”
“Puppet has non-trivial data stores already, a lot of it systems configuration data that’s very close to the metal in Puppet Enterprise’s core data repository,” he said.
Puppet lacks a data warehouse or data analytics offering to feed into Reflect’s visual tools, but company CEO Sanjay Mirchandani declined to say whether another acquisition or internal IP will fill in that layer of the architecture.
Containers, infrastructure as code invade configuration management’s turf
Enterprise IT shops are overwhelmed by a wall of marketing noise from vendors that want to be their one-stop shop for DevOps. But one vendor or one tool won’t necessarily solve technical problems in infrastructure automation, said Ernest Mueller, director of engineering operations at AlienVault, an IT security firm based in San Mateo, Calif., which plans to reduce its use of Puppet’s configuration management tools.
“As we move to Docker and immutable infrastructure deployments, our goal is to cut the lines of Puppet code we use in half,” Mueller said. “We’re trying to shift configuration management left — adding it at the end just creates problems, because if you try to do the same configuration operation on a thousand different servers, it’s bound to fail on one of them.”
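Mueller’s worry about running one configuration operation across a large fleet is easy to quantify. Assuming, purely for illustration, a 99.9% per-server success rate, a run across 1,000 servers is more likely than not to fail somewhere:

```python
# Per-server success rate is an assumed figure for illustration only.
per_server_success = 0.999
fleet_size = 1000

# Probability that every server converges in a single run.
all_succeed = per_server_success ** fleet_size

print(f"P(all {fleet_size} servers converge): {all_succeed:.2f}")  # ~0.37
print(f"P(at least one server fails): {1 - all_succeed:.2f}")      # ~0.63
```

Baking configuration into an immutable image shifts that per-run risk to build time, where a failure blocks one pipeline instead of leaving one server in a thousand subtly different.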
Mueller monitors upgraded capabilities from vendors such as Chef and Puppet, and is interested in a CI/CD process for infrastructure as code. Puppet’s reusable manifests appeal to Mueller more than Chef’s community-maintained cookbooks, but competitor Chef InSpec’s continuous integration-style security and compliance testing intrigues him for infrastructure code.
Overall, though, infrastructure as code testing and deployment still needs a lot of development, and tools are still emerging to help, Mueller said.
“You can’t just apply an application CI/CD tool to infrastructure code,” he said. “In our application unit tests, for example, the best practice is never to call a public API, but what if the code is creating an Amazon Machine Image? The nature of infrastructure as code means there’s no one answer for CI/CD today, and figuring out how to stitch together multiple tools takes a lot of work, without a good reference architecture.”
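One common answer to the unit-test problem Mueller describes is to keep the request-building logic pure and inject the cloud client, so tests never touch a public API. A minimal sketch; every name here is hypothetical rather than any particular tool’s API:

```python
def build_image_request(base_image, app_version):
    """Pure function: assemble the parameters for an image build.
    Easy to unit test because it has no side effects."""
    return {
        "SourceImage": base_image,
        "Name": f"app-{app_version}",
        "Tags": {"version": app_version},
    }

def bake_image(client, base_image, app_version):
    """The only side-effecting step; the client is injected so tests
    can substitute a fake instead of calling a real cloud API."""
    return client.create_image(**build_image_request(base_image, app_version))

class FakeClient:
    """Test double that records the request instead of creating an image."""
    def create_image(self, **kwargs):
        self.last_request = kwargs
        return {"ImageId": "ami-fake"}

fake = FakeClient()
result = bake_image(fake, "base-1234", "2.1.0")
assert result["ImageId"] == "ami-fake"
assert fake.last_request["Name"] == "app-2.1.0"
```

The pattern only covers the unit-test layer; integration testing against real infrastructure still requires the multi-tool stitching Mueller describes.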
Presumably, the expanded Puppet DevOps portfolio means the company will extend its CI/CD tools’ integrations and coverage beyond Puppet Enterprise code, but right now Continuous Delivery for Puppet Enterprise doesn’t cover other infrastructure-as-code tools, such as HashiCorp’s Terraform, which Mueller’s shop also uses.
A former Puppet user that switched to Red Hat’s Ansible infrastructure automation tool said despite Puppet’s acquisitions he likely won’t re-evaluate its CI/CD tools.
“We’re more interested in things like Netflix’s Spinnaker, which plugs in well to Kubernetes [for container orchestration],” said Andy Domeier, director of technology operations at SPS Commerce, a communications network for supply chain and logistics businesses based in Minneapolis. Spinnaker is a multi-cloud continuous delivery platform open sourced by the same company that made Chaos Monkey.
“Distelli is good for heavy Puppet users, but I wish it had been around earlier. Now there’s just a proliferation of tools to consider.”
Puppet and Chef face game of DevOps musical chairs
As containers and container orchestration tools begin to replace the need for server-level automation in enterprise data centers, configuration management tool vendors such as Puppet and Chef have refocused on higher-ordered IT infrastructure and application automation. Chef has attacked the space with its homegrown Chef Automate, Chef Habitat and Chef InSpec tools, which add application-focused IT automation to complement the company’s configuration management products. Puppet has expanded its product portfolio through acquisition under Mirchandani, who took over as CEO in 2016. Puppet bought CI/CD and container orchestration vendor Distelli in 2017 and rereleased some of Distelli’s software as Continuous Delivery for Puppet Enterprise, which performs continuous integration testing and continuous deployment tasks for Puppet’s infrastructure as code, in early 2018.
“Puppet hasn’t had much choice but to develop a strategy that moves into some adjacencies — otherwise Kubernetes is an existential threat,” Betz said.
In addition to Chef, Electric Cloud and XebiaLabs, Puppet’s DevOps bid must fend off a horde of competitors, from Red Hat to Docker to AWS and Microsoft Azure, all seeking revenue in a relatively small market, Betz said. Forrester estimates the total DevOps tools market at $1 billion, compared with $2 billion to $3 billion for application performance monitoring, another relatively niche space. Both markets are dwarfed by the market for IT service management tools, which Forrester estimates to be an order of magnitude bigger.
“It’s a game of musical chairs, and many of those chairs will be suddenly pulled out, especially if the economy even hiccups,” Betz said. “There’s no question this market will further consolidate.”
Avaya took a big step toward building a competitive cloud-based unified communications portfolio with the acquisition of contact-center-as-a-service provider Spoken Communications.
Avaya announced the all-cash deal this week at the Avaya Engage 2018 user conference — the first since Avaya exited bankruptcy late last year. At the show, the company also launched a desktop phone series, an all-in-one huddle-room video conferencing system and cloud-based customer support software, called Ava.
Avaya plans to offer Spoken services as an option for customers who want to move to the cloud slowly. Companies using Avaya on-premises software can swap out call-center features one at a time and replace them with the Spoken cloud version.
“With the acquisition of Spoken, it’s clear that Avaya is putting more of an emphasis on building out its own hosted offerings that it can either sell direct or through channels,” said Irwin Lazar, an analyst at Nemertes Research, based in Mokena, Ill.
Avaya’s cloud strategy
Only a small percentage of Avaya’s customers use its cloud-based services, which lag behind those of rivals Cisco and Microsoft. Nevertheless, the market for contact center and UC as a service is growing much faster than on-premises software, analysts said.
“The current executive team is determined to shift Avaya’s focus to the cloud in terms of both technology development and business model,” said Elka Popova, an analyst at consulting firm Frost & Sullivan, based in San Antonio. “The team acknowledges they are a bit late to the game, and most of their cloud portfolio is a work in progress, but the determination is there.”
Since last year, Avaya has worked with Spoken on bringing contact center as a service (CCaaS) to Avaya customers through product integrations. The joint effort has led to integration between Spoken’s cloud-based services and Avaya’s on-premises Call Center Elite and Aura Communication Manager. The latter is Avaya’s UC platform.
Spoken uses speech recognition in its CCaaS offering to automate call-center processes and make customer service agents more efficient. For example, Spoken can transcribe conversations agents have with each customer, which frees customer reps from having to type notes into the system manually.
Spoken technology can also listen for keywords. If it hears the word invoice, for example, it can retrieve the customer’s bill automatically for the agent.
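In spirit, that kind of keyword triggering is a lookup from transcribed words to agent-assist actions. The keywords and action names below are invented for illustration; Spoken’s production pipeline is, of course, far more sophisticated:

```python
# Illustrative mapping; real deployments would be configured per contact center.
KEYWORD_ACTIONS = {
    "invoice": "fetch_latest_bill",
    "refund": "open_refund_form",
}

def actions_for(utterance):
    """Return the agent-assist actions triggered by keywords in one
    transcribed utterance."""
    words = utterance.lower().split()
    return [action for keyword, action in KEYWORD_ACTIONS.items()
            if keyword in words]

assert actions_for("Can you check my invoice from March") == ["fetch_latest_bill"]
assert actions_for("Hello there") == []
```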
Spoken has more than 170 patents and patent applications that will go to Avaya, which expects to close the transaction by the end of March. The company did not release financial details.
Other Avaya Engage 2018 announcements
In other Avaya Engage 2018 news, the vendor introduced a cloud-based messaging platform for reaching customers on social media, such as Facebook, Twitter, WeChat and Line. Avaya’s Ava can provide immediate self-service support or send customers to an agent. If the latter occurs, then all information gathered during the automated service is handed to the service rep.
Ava supports 34 languages and has APIs Avaya partners can use for product integration. Last year, Avaya launched an initiative called A.I.Connect to encourage other vendors to integrate products with artificial intelligence or machine learning capabilities into Avaya communication software.
Despite its cloud focus, Avaya is still paying attention to hardware. The company announced at Engage the J series line of desktop phones. The three phones come with Bluetooth and Wi-Fi connectivity. Avaya plans to release the hardware in the second quarter.
Also, the company introduced a second Vantage touchscreen phone. Unlike the first one unveiled last year, the latest hardware comes with the option of a traditional keyboard. It also supports Avaya IP Office, which provides a combination of cloud-based and on-premises UC services.
Finally, Avaya launched the CU-360 all-in-one video conferencing system for huddle rooms, which small teams of corporate workers use for meetings. The hardware can connect to mobile devices for content sharing.
Overall, the Avaya Engage 2018 conference reflected positively on the executive team chosen by Avaya CEO Jim Chirico, analysts said. Formerly Avaya’s COO, Chirico replaced former CEO Kevin Kennedy, who retired Oct. 1.
“Overall, the event did not produce a wow effect,” Popova said. “There was nothing spectacular, but the spirits were high, and the partner and customer sentiments were mostly positive.”
Talari added another SD-WAN appliance to its portfolio this week. The Talari E50 appliance is tailored to customers that require easy SD-WAN deployments to connect small branch locations, like retail, mobile and remote home-office sites, according to Talari’s website.
Talari’s SD-WAN appliance supports 20, 50 and 100 Mbps performance across multiple WAN links, with a “pay-as-you-grow” purchase model. It also consolidates routing, firewall and WAN optimization features within the E50 platform, the company said in a statement. Additionally, the SD-WAN appliance integrates Zscaler security, with traffic passing through the Zscaler cloud over IPsec tunnels.
With the Talari E50 SD-WAN appliance, a managed service provider or IT team can preconfigure the appliance, which can then be shipped to the site and set up. According to Talari, this capability benefits customers with limited IT staff and resources.
Talari offers a range of SD-WAN appliances for large data centers and offices to call centers and SMBs.
Cradlepoint expands to subscription-based networking packages
Cradlepoint introduced subscription-based packages for its branch, mobile and internet-of-things networking services.
Cradlepoint’s NetCloud Solution Packages, available now, are tailored to specific markets in an attempt to simplify deployment and management, according to a Cradlepoint statement. Customers can purchase a Cradlepoint package as a subscription service on a one-, three- or five-year basis.
One of the first available subscription-based packages is Cradlepoint’s wireless branch networking offering. Cradlepoint said its new AER2200 edge router can replace multiple boxes by converging multiple functions and support into the single router. The router supports 10 switched Ethernet ports, and it supports 802.11ac Wi-Fi, with a guest portal and advanced Long Term Evolution integration. The package also comes with the new AP22 access point to expand wireless LAN coverage. Additionally, Cradlepoint’s branch subscription service offers 4G LTE and wireless LAN support with its SD-WAN functionality, fine-tuned for customers with smaller networks.
“Instead of navigating a myriad of separately priced software, hardware and support options, just two or three SKUs deliver a complete wireless branch solution with the cloud management and support customers need to be deployed and operational quickly and easily,” said Ian Pennell, chief product officer at Cradlepoint, based in Boise, Idaho, in a statement.
Verizon joins ONAP
Verizon joined the Open Network Automation Platform, a project hosted by The Linux Foundation.
Verizon said its work with ONAP will concentrate on network function onboarding, network management, service creation, provisioning and standards for consistent deployment.
In November, ONAP introduced its first code, dubbed Amsterdam, which offers a modular automation platform for service providers and carriers focused on service delivery. ONAP architecture contains a good portion of code from AT&T’s Enhanced Control, Orchestration, Management and Policy architecture and the Open-Orchestrator project.
ONAP expects to release its second code, Beijing, in 2018. While the Amsterdam code aimed to improve service lifecycle automation and management for service providers, Beijing will target enterprise workloads, wireless 5G and the internet of things.
Today we announced the Public Preview availability of Azure Location Based Services (LBS). LBS is a portfolio of geospatial service APIs natively integrated into Azure that enable developers, enterprises and ISVs to create location-aware apps and IoT, mobility, logistics and asset tracking solutions. The portfolio currently comprises services for Map Rendering, Routing, Search, Time Zones and Traffic. In partnership with TomTom and in support of our enterprise customers, Microsoft has added native location capabilities to the Azure public cloud.
Azure LBS uses key-based authentication. To get a key, go to the Azure portal and create an Azure LBS account. Creating an Azure LBS account automatically generates two Azure LBS keys; either key will authenticate requests to the various Azure LBS services. Once you have your account and your keys, you're ready to start accessing Azure Location Based Services. And the API model is simple to use: just parameterize your URL request to get rich responses from the service:
Sample Address Search Request: atlas.microsoft.com/search/address/json?api-version=1&query=1 Microsoft Way, Redmond, WA
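The sample request above can be wrapped in a small helper. In this sketch, the /search/address/json path and api-version value come straight from the sample; passing the account key as a `subscription-key` query parameter is an assumption based on the preview documentation:

```python
from urllib.parse import urlencode

BASE = "https://atlas.microsoft.com"

def address_search_url(query: str, key: str) -> str:
    """Build an Azure LBS address-search request URL.

    The /search/address/json path and api-version=1 match the sample
    request above; the `subscription-key` parameter name is an assumption
    based on the preview documentation.
    """
    params = urlencode({
        "api-version": "1",
        "subscription-key": key,
        "query": query,
    })
    return f"{BASE}/search/address/json?{params}"

url = address_search_url("1 Microsoft Way, Redmond, WA", "[AZURE_LBS_KEY]")
```

Issuing an HTTP GET against the resulting URL returns the geocoding response as JSON.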
Azure Map Control
The Azure Map Control makes it simple for developers to jumpstart their development. By adding a few lines of code to any HTML document, you get a fully functional map.
Hello Azure LBS
In the above code sample, be sure to replace [AZURE_LBS_KEY] with your actual Azure LBS Key created with your Azure LBS Account in the Azure portal.
The Azure LBS Render Service is used for fetching maps. The Render Service is the basis for maps in Azure LBS and powers the visualizations in the Azure Map Control. Users can request vector-based map tiles to render data and apply styling on the client. The Render Service also provides raster maps if you want to embed a map image into a web page or application. Azure LBS maps have high-fidelity geographic information for more than 200 regions around the world and are available in 35 languages and two versions of neutral ground truth.
The Azure LBS cartography was designed from the ground up with the enterprise customer in mind. Maps show less information at lower zoom levels (zoomed out) and higher-fidelity information as you zoom in. The design is meant to let enterprise customers render their own data atop Azure LBS maps without extraneous detail bleeding through and diminishing the value of that data.
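As a sketch of fetching a raster tile, a request might be assembled as below. Only the atlas.microsoft.com host and key-based authentication come from this post; the /map/tile/png path and the layer/style/zoom/x/y parameters are assumptions based on the preview-era Render API reference:

```python
from urllib.parse import urlencode

def map_tile_url(zoom: int, x: int, y: int, key: str,
                 layer: str = "basic", style: str = "main") -> str:
    """Build a raster map-tile request URL.

    The /map/tile/png path and the layer/style/zoom/x/y parameters are
    assumptions from the preview Render API reference; only the host and
    `subscription-key` auth pattern come from this post.
    """
    params = urlencode({
        "api-version": "1",
        "subscription-key": key,
        "layer": layer,
        "style": style,
        "zoom": zoom,   # tile zoom level
        "x": x,         # tile column
        "y": y,         # tile row
    })
    return f"https://atlas.microsoft.com/map/tile/png?{params}"

tile = map_tile_url(6, 10, 22, "[AZURE_LBS_KEY]")
```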
The Azure LBS Routing Service is used for getting directions, and not just point A to point B directions. The routing engine has a wealth of map data available to it, allowing it to adjust the calculated directions for a variety of scenarios. First, the Routing Service provides the standard routing capabilities customers would expect, with a step-by-step itinerary. Routes can be calculated as the fastest or shortest, or to avoid highly congested roads or traffic incidents. Traffic-based routing comes in two flavors: "historic," which is great for future route-planning scenarios when users would like a general idea of what traffic tends to look like on a given route; and "live," which is ideal for active routing scenarios when a user is leaving now and wants to know where traffic exists and the best ways to avoid it.
Azure LBS Routing will also allow for commercial vehicle routing, providing alternate routes made just for trucks. Commercial vehicle routing supports parameters such as vehicle height, weight, number of axles and hazardous material contents, all used to choose the best, safest and recommended roads for transporting a haul. The Routing Service provides a variety of travel modes, including walking, biking, motorcycling, taxi and van routing.
Customers can also specify up to 50 waypoints along their route if they have pre-determined stops to make. If customers are looking for the best order in which to stop along their route, they can have Azure LBS determine the best order in which to route to multiple stops by passing up to 20 waypoints into the Routing Service where an itinerary will be generated for them.
Using the Azure LBS Route Service, customers can also specify arrival times when they need to be at a specific location by a certain time. Drawing on a massive amount of traffic data, nearly a decade of probe data captured per road geometry at high-frequency intervals, Azure LBS can tell customers the best time of departure for a given day of the week. Additionally, Azure LBS can use current traffic conditions to notify customers of a road change that may impact their route and provide updated times and/or alternate routes.
Azure LBS can also take the engine type into consideration. By default, Azure LBS assumes a combustion engine is being used; however, if an electric engine is in use, Azure LBS will accept input parameters for power settings and generate the most energy-efficient route.
The Routing Service also allows multiple alternate routes to be generated in a single query, saving on over-the-wire transfer. Customers can also specify that they would like to avoid specific route types, such as toll roads, freeways, ferries or carpool roads.
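The routing options described above can be sketched as a single request builder. The colon-separated lat,lon waypoint format and the /route/directions path are based on the preview Route API reference, and the routeType, traffic and avoid parameter names are assumptions from that reference rather than from this post:

```python
from urllib.parse import urlencode

def route_directions_url(waypoints, key, route_type="fastest",
                         traffic=True, avoid=None):
    """Build a route-directions request URL.

    Waypoints are (lat, lon) pairs joined with ':' in the query, per the
    preview Route API; the /route/directions path and the routeType,
    traffic and avoid parameter names are assumptions from that reference.
    """
    query = ":".join(f"{lat},{lon}" for lat, lon in waypoints)
    params = {
        "api-version": "1",
        "subscription-key": key,
        "query": query,
        "routeType": route_type,          # fastest | shortest
        "traffic": str(traffic).lower(),  # use live traffic data
    }
    if avoid:
        params["avoid"] = avoid           # e.g. tollRoads, ferries
    return f"https://atlas.microsoft.com/route/directions/json?{urlencode(params)}"

route = route_directions_url(
    [(47.6422, -122.14), (47.6101, -122.2015)],
    "[AZURE_LBS_KEY]", avoid="tollRoads")
```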
The Azure LBS Search Service provides the ability for customers to find real world objects and their respective location. The Search Service provides for three major functions:
Geocoding: Finding addresses, places and landmarks
POI Search: Finding businesses based on a location
Reverse Geocoding: Finding addresses or cross streets based on a location
With the Search Service, customers can find addresses and places from around the world. Azure LBS supports address-level geocoding in 38 regions, cascading to house-number, street-level and city-level geocoding for other regions of the world. Customers can pass addresses into the service in a structured address form; or they can use an unstructured form when they want to let their users search for addresses, places or businesses in a single query. Searches can be restricted by region or bounding box, and a coordinate can be supplied to bias the results and improve their quality. Reversing the query, customers can provide a coordinate, say from a GPS receiver, and get the nearest address or cross street returned from the service.
The Azure LBS Search Service also allows customers to query for business listings. The Search Service contains hundreds of categories and hundreds of sub-categories for finding businesses or points of interest around a specific point or within a bounding area. Customers can query for businesses based on brand name or general category and filter those results based on location, bounding box or region.
Sample POI Search Request (Key Required): atlas.microsoft.com/search/poi/category/json?api-version=1&query=electric%20vehicle%20station&countrySet=FRA
Time Zone Service
The Azure LBS Time Zone Service is a first of its kind, providing the ability to query time zones and times for locations around the world. Customers can submit a location to Azure LBS and receive the respective time zone, the current time in that time zone and the offset from Coordinated Universal Time (UTC). The Time Zone Service provides access to historical and future time zone information, including changes for daylight saving time. Additionally, customers can query for a list of all time zones and the current version of the data, allowing them to optimize their queries and downloads. For IoT customers, the Time Zone Service allows for POSIX output, so information can be downloaded to devices that only infrequently access the internet. Additionally, for Microsoft Windows users, Azure LBS can transform Windows time zone IDs to IANA time zone IDs.
Sample Time Zone Request (Key Required): atlas.microsoft.com/timezone/byCoordinates/json?api-version=1&query=32.533333333333331,-117.01666666666667
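The sample time zone request above can likewise be parameterized. The /timezone/byCoordinates/json path and the lat,lon query format come from the sample; the `subscription-key` parameter is an assumption based on the preview documentation:

```python
from urllib.parse import urlencode

def timezone_url(lat: float, lon: float, key: str) -> str:
    """Build a time-zone-by-coordinates request URL.

    The /timezone/byCoordinates/json path and lat,lon query format come
    from the sample request above; `subscription-key` is an assumption.
    """
    params = urlencode({
        "api-version": "1",
        "subscription-key": key,
        "query": f"{lat},{lon}",
    })
    return f"https://atlas.microsoft.com/timezone/byCoordinates/json?{params}"

tz = timezone_url(32.533333, -117.016667, "[AZURE_LBS_KEY]")
```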
The Azure LBS Traffic Service provides our customers with the ability to overlay and query traffic flow and incident information. In partnership with TomTom, Azure LBS has access to a best-in-class traffic product with coverage in 55 regions around the world. The Traffic Service can natively overlay traffic information atop the Azure Map Control for a quick and easy way to view traffic issues. Additionally, customers have access to traffic incident information: real-time issues happening on the road, collected through probe data. Incident information includes additional detail such as the type of incident and its exact location. The Traffic Service will also provide details of incidents and flow, such as the distance and time from one's current position to the "back of the line" and, once a user is in the congestion, the distance and time until they're out of it.
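A traffic flow query for a given point might be built as follows. Only the host and key-based authentication come from this post; the /traffic/flow/segment/json path and the style/zoom parameters are assumptions based on the preview Traffic API reference:

```python
from urllib.parse import urlencode

def traffic_flow_url(lat, lon, key, zoom=10, style="absolute"):
    """Build a traffic-flow-segment request URL.

    The /traffic/flow/segment/json path and the style/zoom parameters
    are assumptions based on the preview Traffic API reference.
    """
    params = urlencode({
        "api-version": "1",
        "subscription-key": key,
        "style": style,        # absolute or relative speeds
        "zoom": zoom,          # zoom level for segment resolution
        "query": f"{lat},{lon}",
    })
    return f"https://atlas.microsoft.com/traffic/flow/segment/json?{params}"

flow = traffic_flow_url(47.6062, -122.3321, "[AZURE_LBS_KEY]")
```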