Video conference tools are becoming more versatile. As a result, video conferencing vendors are expected to support more features and high definition across multiple endpoints, including mobile devices, desktops and conference rooms. Buyers, too, need to assess more options, as use cases have expanded.
New video conferencing use cases are making visual collaboration a strategic imperative for enterprises, according to a recent report from Aragon Research Inc., an advisory firm based in Morgan Hill, Calif. Aragon views video conference use cases as part of the larger unified communications and collaboration landscape.
More and more, enterprises are trying to align collaboration investments with business outcomes, the report found. As a result, Aragon said it expects to see further growth in video conference tools. For enterprises, video conference use cases — along with quality and reliability — will be key buying criteria. As is the case with most technology purchases, video conference use cases will dictate vendor selection.
“Our advice to enterprise buyers is to first consider what your core requirements are with regard to web and video conferencing,” the report reads. “We encourage buyers to consider which capabilities and products best fit the required use cases that pertain to your enterprise.”
For example, if an enterprise has internal and external-facing use cases for a large number of people, then webcasting tools may be the best fit. Aragon added that it expects new video conference use cases to arise in 2018 as endpoints become fully mobile, with cars and drones playing a larger role.
Video conference tools across modalities
For video conference vendors, mobility is the new competitive frontier, Aragon said, as vendors look to enable video meetings from anywhere on any device. But buyers beware, as video experiences on mobile devices and the ease of accessing meetings can vary.
The Aragon report advised buyers to carefully evaluate providers for ease of connectivity. For instance, the mobile application should automatically connect to the meeting via voice over IP or dial-in. Additionally, the mobile app should not require a passcode and should provide one-click connectivity.
“Ease of use is not optional, no matter the device or the environment,” the report stated. “Albeit there are still providers today who are not optimized for mobile.”
In addition to mobile, video conference vendors are expected to support desktop and room-based meetings to allow seamless switching among devices.
Conference rooms get video upgrades
Because of cloud technology and lower hardware prices, Aragon predicted fivefold growth in video-enabled conference rooms from 2017 to 2022. In five years, 65% of conference rooms will be video-enabled, Aragon said.
Enterprises need to digitize conference rooms, open workspaces and smaller huddle areas, Aragon advised, as employees face new work demands, including increased collaboration across groups, distances and affiliations.
Aragon also cited the importance of “intelligent” video rooms, where meetings can be automated and employ artificial intelligence to gain a deeper understanding of meeting activity and the devices and users involved. Many providers have also looked to enhance meetings by using HD video with auto-zoom and HD audio with auto-muting of background noise.
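Vendors don’t publish how their auto-muting works, but the underlying idea can be sketched as a simple energy gate: frames whose signal level falls below a speech threshold are silenced. The minimal Python illustration below is an assumption for clarity, not any vendor’s implementation; the frame format and threshold value are invented.

```python
import math

def frame_rms(samples):
    """Root-mean-square energy of one audio frame (floats in [-1, 1])."""
    return math.sqrt(sum(s * s for s in samples) / len(samples))

def auto_mute(frames, threshold=0.05):
    """Silence low-energy frames, which likely contain only background noise."""
    gated = []
    for frame in frames:
        if frame_rms(frame) < threshold:
            gated.append([0.0] * len(frame))  # mute: no speech detected
        else:
            gated.append(frame)               # pass speech through unchanged
    return gated
```

Production systems use far more sophisticated voice-activity detection, but the gate captures the principle: decide per frame whether speech is present, and transmit silence otherwise.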
As video conference tools evolve, businesses are demanding high-quality, high-resolution meetings. To date, Avaya, Cisco, Google and Vidyo offer 4K video support, Aragon said. The research firm said it expects to see more support for 4K video in 2018 and beyond.
In addition to HD quality, video conference vendors are now expected to integrate with other business applications. Collaboration providers that fail to support business process and application integration will miss this wave and be left behind, Aragon said.
Chart vendor roadmaps
Overall, as video conference tools evolve thanks to cloud technology, vendors and buyers alike will need to evolve to stay competitive and enhance collaboration workflows. The cloud is one of the key reasons video conference tools are so widespread today.
The wide-ranging Aragon report, written by Aragon CEO and lead analyst Jim Lundy, highlights the convergence of web and video conferencing services and examines 22 providers in the market. The report, released this month, stated real-time video conference tools bring together geographically dispersed teams visually, which can improve collaboration.
“Enterprises need to realize visual collaboration can help speed up internal employee and external customer journeys to get to faster business outcomes,” Lundy said in a statement. “The enterprises who are leveraging visual collaboration have a competitive advantage because they’re supporting how their people and customers need and want to work.”
The report advised enterprises to ask for detailed roadmaps from providers to ensure they mesh with enterprise technology and business direction.
Whenever I attend a software conference, I always like to survey the exhibition floor to discover what latest trend, fashion or fad is taking over the industry. At the 2017 Gartner Application Strategies & Solutions Summit, held earlier this month in Las Vegas, the dominant theme was once again DevOps — continuing a craze that has been pervasive at pretty much every software conference I’ve attended this year.
But the big shift happening in this space is that the top DevOps vendors are no longer just talking about delivering software faster; they’re setting their sights on loftier goals that affect not only development teams, but the enterprise as a whole.
“Doing DevOps means getting more things to market faster,” said Lance Knight, COO of Go2Group, a DevOps vendor that specializes in software and services that help enterprises deal with DevOps transitions.
But DevOps isn’t just about delivering software. Knight is quick to point out that the software a company deploys, and the features it offers, are key elements that can set it apart from the rest of the competition.
“Don’t just think about it as delivering software faster, but think of it as delivering market-differentiating factors to the customer faster.” Knight used the ability to deposit a check into your bank account simply by taking a photo of it as an example.
Nowadays, it’s a fairly common feature of mobile banking apps to use a cellphone’s onboard camera to take a picture of a check and then make an immediate deposit. The functionality is no longer novel, as most banking applications offer it, but it is not yet universal. In the industry, there were leaders that delivered this capability to their clients first, and there are banks that have still yet to offer it. The ones that use development processes to speed up the delivery of features and get applications to market faster are the ones that will differentiate themselves as market leaders. And that’s exactly what DevOps does: It helps deliver features, fixes and enhancements faster and more efficiently.
DevOps process unlocks velocity with quality
IBM’s Eric Minick was another software professional on the Gartner summit’s exhibition floor advocating the DevOps approach to software development.
“As organizations undergo digital transformations, they need better business and technical agility, and DevOps provides that,” Minick said. “From recognizing trends to reacting to new technical innovations, organizations that want to work faster, deliver high-quality software and take advantage of changes in the industry are adopting DevOps techniques.”
Differentiating oneself in the market or empowering an organization to undergo a digital transformation are fairly high-level benefits of adopting a DevOps process. There are certainly a variety of reasons for adopting a DevOps process that might resonate more with the application developers who are actually delivering the software, with predictability and transparency being two key features of note.
“Automating tasks makes software development more predictable,” Knight said. “And when the human element is removed from tasks, the processes become more transparent.”
And of course, there is the fundamental fact that organizations that implement a DevOps process tend to release software faster and have fewer bugs in their code, and that’s a pretty compelling attribute in itself. Of course, change is never easy, and organizations are still struggling as they try to incorporate DevOps processes into their application lifecycle. But as they struggle, it’s good to know that there are plenty of DevOps vendors out there providing solutions that will make their DevOps transitions easier.
Cloud computing leader Amazon Web Services’ re:Invent conference this week in Las Vegas saw a deluge of cloud and database announcements. Among those on the data side was Neptune, the company’s formal entry into the growing field of graph databases.
While this AWS graph database may have less immediate impact than Redshift, the influential cloud data warehouse it rolled out at re:Invent five years ago, it does fill a gap that competitors like IBM, Microsoft and others have included in their cloud data portfolios as they play catch-up with Amazon in the cloud.
AWS CEO Andy Jassy told the re:Invent audience that the Neptune graph database is intended to uncover connections in data points in a way that eludes traditional relational databases. With graphs, data is stored in sets of interconnected nodes, unlike relational databases, which store data in rows and columns.
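The node-and-edge model can be made concrete with a toy example: a "people you may know" recommendation becomes a two-hop traversal of the graph rather than a multi-table join. The sketch below is purely illustrative, using plain Python dictionaries rather than Neptune’s actual query languages; the data and function names are invented.

```python
# A tiny property-graph sketch: nodes carry attributes, edges are adjacency lists.
people = {
    "alice": {"follows": ["bob", "carol"]},
    "bob":   {"follows": ["dave"]},
    "carol": {"follows": ["dave", "erin"]},
    "dave":  {"follows": []},
    "erin":  {"follows": []},
}

def suggestions(graph, start):
    """People reachable in exactly two hops who aren't already followed."""
    direct = set(graph[start]["follows"])
    two_hop = set()
    for friend in direct:
        two_hop.update(graph[friend]["follows"])
    # Exclude direct connections and the starting node itself.
    return sorted(two_hop - direct - {start})
```

In a real graph database, the same question is expressed as a traversal query (Gremlin for property graphs, SPARQL for semantic ones), and the engine is optimized to follow billions of such edges.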
Graph databases have found increasing use in online recommendation engines, as well as tasks including uncovering fraud and managing social media connections. Facebook’s Friends and Search graphs may be among the most vivid examples of use of the technology.
Jassy said graph databases, along with NoSQL key-value and document data stores, are part of a trend toward multimodel databases that support a variety of data processing methods, particularly in the cloud.
He said Amazon Neptune, which for now is available only as a limited preview, supports graphs based on property and semantic models — these being the two main schools of graph database construction. AWS will offer Neptune as a managed cloud service, with automatic backup to S3 over three cloud availability zones.
“People have used relational databases for everything,” he said. But such single-minded reliance on relational databases is breaking down, he contended.
This AWS graph database isn’t the company’s first foray into the technology: AWS already offers the ability to store graphs from the open source Titan graph database and its JanusGraph fork in DynamoDB tables via a back-end storage plug-in. DynamoDB is an Amazon NoSQL database for which the company claims more than 100,000 users.
Graph-adept and less graph-adept
The graph data technology that has emerged in recent years comes primarily from smaller players such as Cambridge Semantics, DataStax, Franz and Neo Technologies Inc. By and large, these companies have welcomed the AWS graph database into their market, as it could signify validation of their technology niche.
Established relational leaders have come to include some graph support within their flagship SQL databases, and some even have rolled out stand-alone NoSQL graph databases.
AWS’ target with Neptune is the relational leaders’ flagships, which may struggle when processing ever-larger amounts of graph data, according to Doug Henschen, an analyst at Constellation Research.
“Oracle, Microsoft SQL Server and IBM DB2 have all added features for graph analysis, but SQL and extended SQL functions are not as adept as graph databases and graph query languages at exploring billions of relationships,” he said.
The AWS graph database correctly identifies an opportunity to replace graph analysis use cases currently running on less-graph-adept commercial relational databases, Henschen said.
To Neptune, and beyond
Neptune was just one of many updates Amazon added to its fast-moving cloud operation. At re:Invent, Jassy described a serverless version of the Amazon Aurora database, now in controlled preview. It can be spun up and down quickly, and customers can pay by the second for database capacity while the database is in use, he said.
Meanwhile, Amazon’s DynamoDB is adding global table replication that ensures dependable low latency for data access across many cloud regions. Interest in such capabilities has grown along with the expansion of e-commerce across the globe.
Global replication for cloud databases was among the traits heralded by Microsoft in its recent debut of Cosmos DB, as well as by Oracle in the fanfare for its upcoming Oracle 18 cloud database services.
The recent Global Conference on Cyber Space (GCCS) in Delhi brought together governments, businesses and civil society groups to address the future of cyberspace. Amongst the plethora of discussions and sessions, a significant paper on Norms for Cybersecurity in ASEAN (Policy Options for Collaborative Security in the ASEAN Region) was published by the Cyber Security Agency of Singapore (CSA), supported by Amazon Web Services (AWS), Dell Technologies, Intel and Microsoft.
Why is this paper significant? For a number of reasons. Having a range of major tech sector companies align with a leading global cybersecurity regulator is more than window-dressing. Cooperation between public and private sectors underpins the multi-stakeholder approach to cybersecurity and cyberspace that Microsoft has long argued for. Harnessing these different groups’ perspectives, skills and (even) objectives seems to be the only way to reliably make progress towards a cyberspace that is secure for all participants, be they big or small, governmental, commercial or otherwise.
More than this, the report is significant because it makes clear that norms (defined by the report as “principles or standards of behavior expected of a member of a group”) can be used to build a safer and freer global cyberspace than the one we have today. The potential of norms was previously underlined by the UNGGE 2015 Report, which referenced both norms that encouraged positive duties for states and norms that limited negative actions. This was an exceptionally important attempt at the global level, via the United Nations, to shape responsible behavior by states in cyberspace.
The subsequent UNGGE 2017 process encountered some political setbacks that prevented the U.N. from building on the 2015 consensus and the norms it set out. While disappointing in and of itself, this development has not invalidated those norms. Indeed, the political and diplomatic roadblocks encountered within the UNGGE only serve to highlight the flexibility and applicability of norms themselves, which are easier to define and agree amongst governments than detailed obligations or treaties. These characteristics make norms ideal, in fact, for use in regional cybersecurity initiatives. The report’s authors make a strong case for regional activity being a way to make progress even if the global process has stalled. And this is where the role of the CSA and of ASEAN comes into focus.
ASEAN’s member states are rapidly coming online, with expanding numbers of internet users and burgeoning connectivity. This brings positives (more innovation and economic growth) and negatives (greater exposure to cybercrime and translation of long-standing geopolitical disagreements into cyberspace) to the states involved. As a result, ASEAN governments are having to pay attention to a policy area that barely existed a decade ago. Being able to turn to norms, ideally norms based on the UNGGE 2015’s work, gives those governments a reliable starting point.
Indeed, the use of established norms will give individual states both the flexibility to address domestic cybersecurity needs and the consistency to ensure their approaches will mesh with other jurisdictions (which is essential when you consider that cyberattackers rarely respect national borders). As the report makes clear, where the cybersecurity differences between states are pronounced (and there are some big differences in ASEAN), those with more advanced capabilities can help with capacity building (as the CSA has). And where there are concerns about potential cyberconflict between “real world” rivals, confidence-building measures (CBMs) can be used, as indeed ASEAN has been doing in recent years.
Norms in cybersecurity are not, however, simply a matter for governments. As the participation in this report by Microsoft and others shows, norms also matter to the companies that build and run critical cyber-infrastructure. Norms also matter to those who rely on cyberspace, including communities and individual citizens. Involving all three of these groups in developing norms is essential, and one of the problems of the UNGGE process was that non-governmental groups were simply not as involved as they should have been. The report’s concluding recommendation (after establishing a common glossary of cybersecurity terms, coordinating vulnerability disclosure, cooperating in information exchange and promoting data protection and privacy) is that “a multi-stakeholder process for norm development should be of paramount importance.”
If ASEAN and other regions build on the norms-led, multi-stakeholder options set out in the report, they could make a major contribution to rebooting the global cybersecurity process. Microsoft has made no secret of our belief, shared by others, that the world needs a Digital Geneva Convention to ensure cyberspace is a safer, freer place for everyone. Given that the UNGGE is in abeyance, work with norms at the regional level may be the best short-term way to advance the protection of civilians in cyberspace. The alternative scenario, where global progress remains frozen as cyberattacks relentlessly escalate until a catastrophic cyber-event forces governments back to the table, benefits nobody except those behind cyberattacks.
Tags: cybersecurity, Digital Geneva Convention
NEW YORK — The technology showcase floor at this fall’s ONUG conference teemed with the typical vendors displaying various products and technologies. But a significant number of these suppliers could be lumped into a single, albeit continually shifting, category: software-defined WAN.
By now, SD-WAN’s benefits are well-understood. Among other advantages, the technology can simplify the WAN and streamline traffic management. It also gives enterprises the ability to transition away from more expensive connectivity links, like MPLS — or to add broadband internet in conjunction with existing MPLS services. As a result, SD-WAN deployments continue to blossom, with IDC predicting the market will eclipse $8 billion in sales by 2021.
Yet, as SD-WAN continues to mature, customers are looking for new ways to exploit the technology. One concept gaining traction: network infrastructure consolidation. This shift pushes SD-WAN from a standalone product to a framework supporting additional functions in a broader network package. Versa Networks, for example, offers FlexVNF, a platform that combines multiple functions, like SD-WAN, routing and security. Earlier this month, FatPipe Networks likewise launched a virtual network functions platform that integrates SD-WAN functionality.
The ONUG conference also featured a session dedicated to its SD-WAN working group, the Open SD-WAN Exchange (OSE). Currently, the group is developing an interworking architecture framework that will contain reference points, or open APIs, to enable standardization across domain orchestrators. The group settled on RESTful architecture with JSON support, according to OSE group member Steve Wood, principal engineer for enterprise architecture and SD-WAN at Cisco.
Wood said the goal, for now, is not to get individual SD-WAN devices to talk with each other, but to automate and standardize the gateway between different domain orchestrators. The working group also hopes to standardize APIs that will enable application identification.
The OSE group has yet to release any APIs, however, and has only a handful of enterprises working with it to develop the standard interfaces.
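Since OSE has not yet released its APIs, the following is only a hypothetical sketch of what a JSON payload exchanged between two domain orchestrators might look like; every field name here is an assumption, chosen to illustrate the RESTful-with-JSON design the group settled on.

```python
import json

# Hypothetical advertisement one domain orchestrator might POST to a peer's
# gateway endpoint. Field names are illustrative, not from any OSE draft.
gateway_advert = {
    "orchestrator_id": "domain-a",
    "gateway": {"address": "198.51.100.7", "encap": "ipsec"},
    "applications": [
        {"name": "voice", "dscp": 46},       # application identification info
        {"name": "bulk-backup", "dscp": 0},
    ],
}

def encode(msg):
    """Serialize an advertisement into a REST request body (JSON, as OSE chose)."""
    return json.dumps(msg, sort_keys=True)

def decode(body):
    """Parse a peer orchestrator's advertisement back into a structure."""
    return json.loads(body)
```

The point of standardizing this shape is that two vendors’ orchestrators could interoperate at the gateway without either one understanding the other’s internal SD-WAN devices.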
Move away from just networking
Earlier this year, ONUG announced its rebranding to move the group away from being solely focused on computer networking. This transition was evident from the range of sessions throughout the ONUG conference — from discussions dedicated to containers and container orchestration to hybrid cloud and machine learning.
One session focused on automated container orchestration, especially related to hybrid cloud environments. Panelists from GE, Bank of America, Intuit and Citigroup discussed their companies’ current use of containers — if any — and hesitations they had with the still-emerging technology.
Bruce Pinsky, a distinguished engineer at Intuit, based in Mountain View, Calif., agreed containers are the next level of virtualization, but said more progress is necessary. Harmen Van der Linde, global head of Citi Management Tools at New York-based Citigroup, had similar concerns, among which was the question of how vendors will deal with patching containers on a large scale.
Process trumps the product
“The cloud isn’t always easy.” As Maria Azúa, senior vice president of distributed hosting at Boston-based Fidelity, spoke those words during her ONUG presentation, a few audience members laughed in agreement.
“Everybody thinks the cloud is the best thing since sliced bread; everybody thinks it’s really easy. But it’s not that easy,” she said.
For Azúa, an important consideration when moving to the cloud is knowing where the data goes. This entails creating a digital signature for the data and ensuring that, if the signature is compromised, you possess the key and can kill the data.
“Don’t patch anything in the cloud,” she said. “Everything is immutable.”
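Azúa did not detail her scheme, but the idea of signing data with a key the owner controls, so that tampering is detectable and the data can be invalidated, can be sketched with a standard HMAC. The key handling below is an assumption for illustration, not Fidelity’s actual design.

```python
import hashlib
import hmac
import os

def sign(key, data):
    """Produce an HMAC-SHA256 signature over a data blob."""
    return hmac.new(key, data, hashlib.sha256).hexdigest()

def verify(key, data, signature):
    """Constant-time check that the data still matches its signature."""
    return hmac.compare_digest(sign(key, data), signature)

# The key stays with the data owner, never with the cloud provider; losing or
# destroying the key effectively "kills" the ability to validate the data.
owner_key = os.urandom(32)
blob = b"customer ledger v1"
tag = sign(owner_key, blob)
```

Any modification to the blob, in transit or at rest in the cloud, causes `verify` to fail, which is the property Azúa’s "know where the data goes" advice depends on.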
Despite data management and security challenges, Azúa said companies will continue to shift their workloads to the cloud. By 2020, she predicted 85% of workloads will be in hybrid cloud environments, with the top five cloud providers controlling about 75% of those services.
By 2025, she estimated cloud usage will be at 53%, surpassing enterprise reliance on legacy infrastructure.
During the conference, on Oct. 18, 2017, Azúa received ONUG’s Thought Leadership Award.
Automation is also making inroads, Azúa said, especially as standardization becomes more important.
“Automation is the only way to standardization,” she said. “You can buy any tools you want, but you’ll never get there [standardization] if you don’t automate, because the human being is very bad at repeating tasks.”
Companies can potentially bypass these errors with automation. And to Azúa, this automation means more than getting a computer to automate a single task. She defined automation as a standardized declarative process for multiple workflows. These processes then become more important than the products themselves.
“More importantly, you need to understand that your processes trump any product that you have,” she said. “Process trumps product.”
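Azúa’s definition of automation as a standardized declarative process can be sketched as workflows described as data and executed by one generic runner, so the process outlives any particular tool. The step names and registry below are invented for illustration, not taken from any product.

```python
# Steps are plain data; the registry maps each declared step name to an action.
# Each action takes the workflow context and returns an updated copy of it.
REGISTRY = {
    "provision": lambda ctx: {**ctx, "host": "vm-01"},
    "deploy":    lambda ctx: {**ctx, "app": "inventory-api"},
    "verify":    lambda ctx: {**ctx, "healthy": True},
}

# The workflow itself is declarative: an ordered list of step names.
RELEASE_WORKFLOW = ["provision", "deploy", "verify"]

def run(workflow, registry, ctx=None):
    """Execute each declared step in order, threading a context dict through."""
    ctx = dict(ctx or {})
    for step in workflow:
        ctx = registry[step](ctx)
    return ctx
```

Because the workflow is data rather than ad hoc scripting, it runs identically every time, which is exactly the repeatability Azúa argues humans are bad at.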
This year at the Microsoft Ignite conference in Orlando, Microsoft presented several new product capabilities that extend the Microsoft 365 solution set—including a greater emphasis on cloud-first technologies. Visio Online is one of Microsoft’s newest solutions that further unlocks employee creativity in the modern workplace. Visio Online is available today to commercial customers for $5 per user per month with an annual commitment.
Extending Visio to new audiences
Information workers today want a simple yet powerful way to work visually. As a web-based, lightweight diagramming tool, Visio Online is the perfect solution: with it you can create, edit, and share diagrams online, helping you visualize information in new ways from anywhere. Plus, diagrams are available for anyone in your organization to view—even those without a Visio Online license—so you can get feedback on critical diagrams from all important stakeholders.
Visio Online comes with a host of templates for a variety of audiences, including starter diagrams for basic flowcharts, process diagrams, timelines, business matrixes, SDL diagrams, and many more. Visualizing information is easy—just drag and drop shapes onto the canvas, change a shape’s color or the overall diagram theme, and quickly link one shape to another with connectors. Plus, you can securely share web-based diagrams through OneDrive for Business. (Visio Online Plan 1 and Plan 2 include 2 GB of OneDrive storage. See below for more details about these plans.)
Visio Online comes with diagramming templates to help you get started fast.
“All in all, Visio Online is fantastic, especially for me. Normally, I just make quick drawings to use in documentation, etc., and I think that will be a lot easier with this product.” —Graves Kilsgaard, systems developer at KMC Foods
Since the beginning of last year, Visio has been committed to releasing innovative, cloud-first capabilities that unlock creativity for professional diagram creators. This continuous, accelerated innovation has resulted in a host of major releases that, in addition to Visio Online, extend Visio’s diagramming tool set.
Here’s a look at each of the capabilities that are only available in the cloud:
- Power BI is Microsoft’s cloud-based data visualization tool that helps companies gain actionable insights from complex datasets. Today, Power BI includes a new visual—Visio diagrams—which can be linked to live data and embedded within a Power BI dashboard. The Visio and Power BI integration is currently in public preview and planned for release next year. Learn more about this new Power BI feature.
- Data Visualizer converts process map data in Excel into data-driven diagrams in Visio. Using highly visual diagrams instead of table-based numbers, you can surface new process insights that lead to creative solutions for complex problems.
- Visio Viewer for iOS allows you to view and interact with Visio diagrams on both iPad and iPhone. You can easily share diagrams through OneDrive or SharePoint and then access them on your favorite iOS device.
- PowerPoint Slide Snippets enables you to select specific diagram parts, give them a title, and export them as slides in PowerPoint. In this way, you can create an entire PowerPoint presentation that breaks down complex diagrams into individual pieces for easier comprehension.
New innovations, new subscription models
Visio Online is available in Visio Online Plan 1. It is also included in our most comprehensive Visio cloud offering, Visio Online Plan 2 (previously Visio Pro for Office 365). Viewing diagrams is free for most Office 365 customers. The innovations described above are only available in Visio Online Plan 2. You can compare Visio versions to learn more about each Visio offer.
Please visit the Visio website for more details on each plan, as well as options for trying the Visio Online experience and our cloud-first innovations for free. We also invite you to submit ideas for more cloud innovations on our UserVoice site. For questions about our latest releases, please email us at firstname.lastname@example.org. To stay informed of the latest Visio releases, follow us on Facebook and Twitter, as well as check in with the latest Visio news.
—The Visio team
Frequently asked questions
Q. Do I need to install anything to start using Visio Online?
A. No. Visio Online is a web-based application. Customers can also sign in directly at microsoft.com/visio.
Q. Do I need a Visio Online subscription to view Visio Online diagrams?
A. No. Anyone with an Office 365 subscription can view diagrams created and shared through Visio Online. This way the entire organization can be involved in the diagramming process.
Q. Where can I read more about starting Visio Online?
A. Please read this support article to learn more.
Q. Are there more differences between Visio Online Plans 1 and 2 than mentioned here?
A. Yes. Please visit the Visio website for more details on each plan.
Q. Does Visio Online Plan 2 include more than 2 GB of OneDrive storage?
A. Both Visio Online Plans 1 and 2 come with 2 GB of OneDrive for Business storage. Customers can buy additional OneDrive storage if needed.
Q. How do the new capabilities and plans affect Visio Services and Visio Pro for Office 365?
A. Visio Online is replacing Visio Services for SharePoint Online customers, and Visio Online Plan 2 is the new name for Visio Pro for Office 365.
LAS VEGAS — The HR Technology Conference 2017 was the stage for Armen Berjikly, senior director of strategy at cloud HCM vendor Ultimate Software, to declare that AI in HR tech is quite real for his company.
Others at the bustling HR Technology Conference & Exposition — the 20th annual edition of the biggest U.S. HR tech show — dismissed artificial intelligence (AI) as a lot of hype.
Ultimate intros AI tool
But Ultimate made news at the HR Technology Conference 2017 with a much-anticipated unveiling of its new AI-based employee feedback platform, Xander — named after American communications pioneer Alexander Graham Bell.
Other vendors, ranging from professional networking mega-vendor LinkedIn to dozens of startups, said they are making big investments in AI even as many in HR tech remain critical or skeptical.
Meanwhile, the HR Technology Conference 2017 appeared to mark the definitive arrival in HR tech of internet-generation tech giants like Google and Uber, as well as LinkedIn.
HCM suite vendors exploring AI
At the same time, dominant human capital management (HCM) suite vendors, such as Oracle, SAP SuccessFactors, ADP, Workday and Ceridian — all competitors of Ultimate — marched forward with system upgrades, some bearing hallmarks of AI.
“AI has come back,” Berjikly proclaimed to a packed demo room at the ornate Sands Expo and Convention Center. “This time, it’s back with a vengeance.”
The Ultimate executive was referring to an AI boom that hit the technology business two decades ago and then all but faded out. AI, along with close cousins machine learning and natural language processing, has re-emerged across the digital tech spectrum, even as few can agree on a common definition for it.
“Everyone’s talking about AI. Lots of people seem to be doing something with AI, but what in the world is AI?” Berjikly said.
His conclusion: The best real-world incarnation of AI — as opposed to the movie version — is software that complements and supplements humans and their emotions instead of seeking to replace them.
Technology and human emotion
In HR tech, Berjikly said early promise has been realized by Xander, a natural-language-processing-assisted tool that analyzes emotions behind employee responses to open-ended queries about working conditions and corporate well-being. Ultimate said the software would help employers keep workers from leaving their jobs due to dissatisfaction.
Xander also provides AI-backed predictive analytics to aid managerial decision-making, even about such seemingly trivial matters as taking a subordinate out to lunch on his birthday.
After his public remarks, Berjikly told SearchHRSoftware that the AI tool is available now, and Ultimate plans to extend AI capabilities throughout the rest of its HCM suite.
Analysts cautious but open about AI
The Ultimate move even somewhat impressed the occasionally caustic HR tech analyst John Sumser, who recently completed a research project on AI initiatives by major HCM vendors, as well as smaller startups. That report is expected to be out soon.
“There’s no fully realized implementation of AI available here,” Sumser said in an interview at the HR Technology Conference 2017. “But if you were to give an award for the closest thing to it that’s available in the market, you’d have to give it to them. That doesn’t mean it’s AI; it just means they’re closer than anyone else.”
Similarly, veteran analyst Ron Hanscome, research vice president of HCM technologies at Gartner, said AI is more or less in its infancy in HR tech, but Ultimate and SAP, with its diversity software tools, have developed notable applications.
“I don’t think anyone has gotten there yet. We’re still in the early days,” Hanscome said in an interview. “Just stay tuned. Everyone is working it — no clear leaders.”
Oracle’s take on AI
For its part, Oracle downplayed AI, although it has emphasized what it calls “adaptive intelligence” in some of its HCM modules, along with a chatbot that lets employers converse with job candidates. But Oracle also said it sees promise for AI in its HCM cloud systems and is working on the technology.
Among Oracle’s customers, there is “significant interest in anything that increases productivity,” Gretchen Alarcon, group vice president of HCM strategy at Oracle, told SearchHRSoftware. “There’s a lot of buzz right now about AI, a lot of attention, a lot of people talking. I think the question we need to ask is, ‘How does this make your organization more productive?'”
LinkedIn makes AI play
As for LinkedIn, leaders of its Talent Insights and LinkedIn Talent Solutions product teams said the self-service product — announced last week and expected to be in beta development until 2018 — is at its core a big data analytics system built on AI.
LinkedIn claims more than 500 million users worldwide, as well as a huge storehouse of data from those users that it is now mining and starting to benchmark against.
The social media company, which has a fast-growing talent acquisition business, made its biggest showing ever at an HR tech event at the HR Technology Conference 2017, including separate booths for Talent Insights and the company’s learning software division.
“AI is making this possible,” said Kate Hastings, senior director for Insights at LinkedIn, as a colleague gave a whirlwind demo of the soon-to-be software. “We wouldn’t have been able to build this product five years ago.”
SAN JOSE, Calif. — The API World conference provided a good deal of advice and direction for those interested in taking their monolithic software architectures toward a more distributed, microservices-based architecture, particularly in regard to microservices messaging protocols.
Let’s look at some of the revelations and instructions provided to conference attendees related to microservices, including how to move beyond REST and some of the latest and greatest messaging protocols and architectures worth watching.
Moving beyond REST
There was buzz at API World 2017 about the use of REST as the standard messaging protocol for microservices. While REST has been popular, members of a panel session focused on the evolution of microservices messaging protocols suggested organizations are — and should be — ready to explore other options for messaging.
“People were not happy with RESTful APIs, because there is a pattern mismatch,” said panel member Fran Mendez, lead engineer at the London-based API support platform provider Hitch. “Even [with] web sockets, you need to have a great connection to make them match. That’s why people are looking at other options for event-driven.”
How Reactive can help
Mark Makary, CTO and president of Logic Keepers, a technology adviser company based in Frisco, Texas, spoke to conference attendees about the potential drawbacks of depending on a traditional blocking REST architecture for microservices and how moving to a Reactive nonblocking, event-based architecture may help.
“After people are going through the [microservices] journey, they run into problems where the system is not very responsive,” Makary explained. “We are getting used to apps getting very responsive, and this is part of the user experience.”
Makary explained there are three potential drags on performance: I/O and database blocking, monoliths and performance management, and poor internal and external endpoint management. By moving toward a Reactive architecture, he said, organizations can make their applications more responsive, elastic, event-driven, asynchronous and nonblocking.
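The blocking-versus-nonblocking contrast Makary described can be sketched in plain Python with asyncio. The service names and latencies below are hypothetical, chosen only to illustrate the pattern:

```python
import asyncio

async def fetch(service: str, delay: float) -> str:
    # Stand-in for a nonblocking I/O call (e.g., an async HTTP or DB request).
    await asyncio.sleep(delay)
    return f"{service}: ok"

async def gather_profile() -> list:
    # In a blocking REST chain these three calls would run one after another
    # (~0.3s total); asyncio.gather issues them concurrently (~0.1s total).
    return await asyncio.gather(
        fetch("user-service", 0.1),
        fetch("orders-service", 0.1),
        fetch("billing-service", 0.1),
    )

results = asyncio.run(gather_profile())
```

The caller never blocks a thread while waiting on I/O, which is the core of the responsiveness Makary attributes to Reactive designs.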
Consider gRPC, Kafka and GraphQL
One suggestion was for people to start exploring gRPC, an open source remote procedure call system initially developed at Google. Varun Talwar, product management lead at Google, explained that since this protocol uses HTTP/2, it allows for what he called “a very polyglot way for people to communicate.”
“GRPC can help with streaming, client-side streaming, server-side streaming and getting messages back,” Talwar explained. “A lot of people in the REST world found that hard to do.”
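The server-side streaming pattern Talwar describes, one request followed by many incremental responses, can be illustrated without gRPC itself using a plain Python generator. The message shapes here are hypothetical; a real gRPC service would define them in a .proto file:

```python
from typing import Iterator

def list_features(region: str, count: int) -> Iterator[dict]:
    # Sketch of a server-streaming RPC: rather than buffering one large
    # REST response, the server yields each message as soon as it is ready.
    for i in range(count):
        yield {"region": region, "feature_id": i}

# The client consumes messages incrementally, much as a gRPC stub would.
received = [msg for msg in list_features("us-west", 3)]
```

Client-side and bidirectional streaming invert or combine the same idea, which HTTP/2 multiplexing makes practical on a single connection.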
The discussion also shifted toward the use of Kafka, an open source stream-processing platform, largely because of its fault-tolerant approach to message delivery.
“You can ensure that [Kafka] is reliable on two or three brokers,” said Mike Sample, director of technology and principal developer at Hootsuite, a social media management platform provider in Vancouver, B.C. “They can store two or three partitions … it’s very robust.”
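The brokers and partitions Sample mentions are the heart of Kafka's durability story: each partition is replicated to several brokers, so messages survive the loss of a broker. A toy sketch of key-to-partition assignment with a replication factor follows; the broker names and the hash function are hypothetical stand-ins (real Kafka producers handle this internally, using murmur2 for key hashing):

```python
import hashlib

NUM_PARTITIONS = 3
REPLICATION_FACTOR = 2
BROKERS = ["broker-0", "broker-1", "broker-2"]

def partition_for(key: str) -> int:
    # Deterministic key -> partition mapping, as a Kafka producer does.
    digest = hashlib.md5(key.encode()).hexdigest()
    return int(digest, 16) % NUM_PARTITIONS

def replicas_for(partition: int) -> list:
    # Each partition lives on REPLICATION_FACTOR brokers, so losing one
    # broker does not lose that partition's messages.
    return [BROKERS[(partition + i) % len(BROKERS)]
            for i in range(REPLICATION_FACTOR)]

p = partition_for("order-42")
brokers_holding_p = replicas_for(p)
```

With two or three replicas per partition, as Sample notes, the cluster keeps delivering even while a broker is down.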
Panel members also touted the advantages of using GraphQL, a data query language developed internally by Facebook, as an alternative to REST architecture, particularly for distributed development teams.
“Organizations can have spread-out development teams … [GraphQL] can really help with that,” explained panel member Ryan Blain, CTO at Atlanta-based Arvata.io. “It has to be thought of as an API gateway, and that gateway can reach out to different services.” Blain warned, however, that tooling for GraphQL may be relatively immature.
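Blain's gateway framing can be sketched with a toy resolver: the client names exactly the fields it wants in one request, and the gateway reaches out only to the backing services that own those fields. The schema, services and data below are hypothetical:

```python
# Stand-ins for the backing services a GraphQL gateway would call.
USER_SERVICE = {"1": {"name": "Ada", "email": "ada@example.com"}}
ORDER_SERVICE = {"1": [{"id": "o-1", "total": 9.99}]}

def resolve(user_id: str, fields: list) -> dict:
    # A query like `{ user(id: "1") { name orders } }` is resolved field
    # by field; services owning unrequested fields are never contacted.
    result = {}
    for field in fields:
        if field in ("name", "email"):
            result[field] = USER_SERVICE[user_id][field]
        elif field == "orders":
            result[field] = ORDER_SERVICE[user_id]
    return result

response = resolve("1", ["name", "orders"])
```

Because the client, not the server, picks the field set, spread-out teams can evolve their services behind the gateway without coordinating new REST endpoints for every view.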
Taking it all home
Attendees of API World reacted positively to the microservices session, particularly to the panel discussion about microservices messaging protocols.
“My favorite one was actually the panel, the discussion about what the different communication protocols between the microservices are and why we would use one over the other,” said Hema Rajashekhara, a senior application developer at the financial services company Capital Group.
Rajashekhara said she is actively looking for more information about microservices messaging protocols to help mitigate performance issues as her company transitions toward a microservices implementation.
“One thing that I’m concerned about is performance and communicating between the different microservices and how it’s going to affect performance,” she said. “So, to hear the differences between gRPC and Kafka is something that I’m going to take back and see what’s best to apply.”
Other attendees, such as Barb Honken, a systems integration analyst at Blackfoot, particularly gravitated toward the discussions about the use of the GraphQL language for microservices.
“I didn’t come here thinking I wanted to learn more about GraphQL,” Honken said. “But now I do.”
Microsoft to host earnings conference call webcast
REDMOND, Wash. — Oct. 10, 2017 — Microsoft Corp. will publish fiscal year 2018 first-quarter financial results after the close of the market on Thursday, Oct. 26, 2017, on the Microsoft Investor Relations website at https://www.microsoft.com/en-us/Investor/. A live webcast of the earnings conference call will be made available at 2:30 p.m. Pacific Time.
Microsoft (Nasdaq “MSFT” @microsoft) is the leading platform and productivity company for the mobile-first, cloud-first world, and its mission is to empower every person and every organization on the planet to achieve more.
For more information, financial analysts and investors only:
Investor Relations, Microsoft, (425) 706-4400
For more information, press only:
Microsoft Media Relations, WE Communications for Microsoft, (425) 638-7777, email@example.com
Note to editors: For more information, news and perspectives from Microsoft, please visit the Microsoft News Center at http://www.microsoft.com/news. Web links, telephone numbers and titles were correct at time of publication, but may since have changed. Shareholder and financial information is available at https://www.microsoft.com/en-us/Investor/.