How to install the Windows Server 2019 VPN

Many organizations rely on a virtual private network, particularly those with a large number of remote workers who need access to resources.

While there are numerous vendors selling their VPN products in the IT market, Windows administrators also have the option to use the built-in VPN that comes with Windows Server. One of the benefits of using Windows Server 2019 VPN technology is that there is no additional cost to your organization once you purchase the Windows Server license.

Another perk of using a Windows Server 2019 VPN is that integrating the VPN with the server operating system reduces the number of infrastructure components that can break. An organization that uses a third-party VPN product has one more component the IT staff must troubleshoot when remote users can't connect to the VPN and lose access to the network resources they need to do their jobs.

One relatively new feature in Windows Server 2019 VPN functionality is the Always On VPN, which some users in various message boards and blogs have speculated will eventually replace DirectAccess, which remains supported in Windows Server 2019. Microsoft cites several advantages of Always On VPN, including granular app- and traffic-based rules to restrict network access, support for both RSA and elliptic curve cryptography algorithms, and native Extensible Authentication Protocol support to enable the use of a wider variety of advanced authentication methods.

Microsoft documentation recommends that organizations currently using DirectAccess evaluate Always On VPN functionality before migrating their remote access processes.

The following transcript of the video tutorial by contributor Brien Posey explains how to install the Windows Server 2019 VPN role.

In this video, I want to show you how to configure Windows Server 2019 to act as a VPN server.

Right now, I'm logged into a domain-joined Windows Server 2019 machine and I have Server Manager open, so let's go ahead and get started.

The first thing that I’m going to do is click on Manage and then I’ll click on Add Roles and Features.

This is going to launch the Add Roles and Features wizard.

I’ll go ahead and click Next on the Before you begin screen.

For the installation type, I’m going to choose Role-based or feature-based installation and click Next. From there I’m going to make sure that my local server is selected. I’ll click Next.

Now I'm prompted to choose the server role that I want to deploy. You'll notice that right here we have Remote Access. I'll go ahead and select that now. Incidentally, in the past, this was listed as Routing and Remote Access, but now it's just listed as Remote Access. I'll go ahead and click Next.

I don't need to install any additional features, so I'll click Next again, and I'll click Next [again].

Now I'm prompted to choose the Role Services that I want to install. In this case, my goal is to turn the server into a VPN server, so I'm going to choose DirectAccess and VPN (RAS).

There are some additional features that are going to need to be installed to meet the various dependencies, so I’ll click Add Features and then I’ll click Next. I’ll click Next again, and I’ll click Next [again].

I’m taken to a confirmation screen where I can make sure that all of the necessary components are listed. Everything seems to be fine here, so I’ll click Install and the installation process begins.

So, after a few minutes the installation process completes. I’ll go ahead and close this out and then I’ll click on the Notifications icon. We can see that some post-deployment configuration is required. I’m going to click on the Open the Getting Started Wizard link.

I’m taken into the Configure Remote Access wizard and you’ll notice that we have three choices here: Deploy both DirectAccess and VPN, Deploy DirectAccess Only and Deploy VPN Only. I’m going to opt to Deploy VPN Only, so I’ll click on that option.

I’m taken into the Routing and Remote Access console. Here you can see our VPN server. The red icon indicates that it hasn’t yet been configured. I’m going to right-click on the VPN server and choose the Configure and Enable Routing and Remote Access option. This is going to open up the Routing and Remote Access Server Setup Wizard. I’ll go ahead and click Next.

I’m asked how I want to configure the server. You’ll notice that the very first option on the list is Remote access dial-up or VPN. That’s the option that I want to use, so I’m just going to click Next since it’s already selected.

I’m prompted to choose my connections that I want to use. Rather than using dial-up, I’m just going to use VPN, so I’ll select the VPN checkbox and click Next.

The next thing that I have to do is tell Windows which interface connects to the internet. In my case it’s this first interface, so I’m going to select that and click Next.

I have to choose how I want IP addresses to be assigned to remote clients. I want those addresses to be assigned automatically, so I’m going to make sure Automatically is selected and click Next.

The next prompt asks me if I want to use a RADIUS server for authentication. I don’t have a RADIUS server in my own organization, so I’m going to choose the option No, use Routing and Remote Access to authenticate connection requests instead. That’s selected by default, so I can simply click Next.

I’m taken to a summary screen where I have the chance to review all of the settings that I’ve enabled. If I scroll through this, everything appears to be correct. I’ll go ahead and click Finish.

You can see that the Routing and Remote Access service is starting and so now my VPN server has been enabled.
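
For admins who prefer scripting, the same role installation and a basic VPN-only configuration can also be done from an elevated PowerShell session on the target server. The commands below are a minimal sketch using the built-in ServerManager and RemoteAccess modules; they apply default settings rather than walking through each wizard choice shown above.

```powershell
# Install the Remote Access role with the "DirectAccess and VPN (RAS)" role service
Install-WindowsFeature DirectAccess-VPN -IncludeManagementTools

# Post-deployment configuration, roughly equivalent to choosing "Deploy VPN only" in the wizard
Install-RemoteAccess -VpnType Vpn
```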

Decision-makers may prefer Wi-Fi over 5G in retail networks

While fifth-generation wireless has taken the technology world by storm, many retailers don’t see a need to heed the hype.

Several use cases may see immediate 5G benefits, yet 5G in retail is superfluous for now. Although 5G can support retail networks that require advanced capabilities, such as virtual reality, the retail world won't depend on 5G because other wireless technologies are still efficient, according to a recent Forrester Research report. The report, "The CIO's Guide To 5G In The Retail Sector," explored particular retail use cases, and report author and principal analyst Dan Bieler discussed key differences between retail and other 5G use cases.

“Retailers are quite sophisticated in their existing technology understanding,” Bieler said. “They have achieved some great solutions with existing technologies, and they will not risk upsetting everything in the short term where they don’t see a clear [ROI] for making additional network infrastructure investments in 5G.”

Retailers are interested in 5G for their networks, Bieler said, yet few have implemented or deployed 5G so far. Some retailers may seek out 5G as a replacement for existing MPLS connectivity, but this choice depends on pricing models and business requirements. Overall, IT decision-makers may prefer Wi-Fi over 5G in retail networks because not all retailers require the advanced capabilities 5G networks offer, he added.

5G in retail lacks transformative qualities largely because cellular technologies weren’t developed for indoor network coverage, and physical objects indoors can impede 5G’s millimeter wave frequencies and its line-of-sight travel capabilities.

The advent of Wi-Fi 6, or 802.11ax, may interest retailers more than 5G, as Wi-Fi historically supports more indoor use cases and networks than cellular technologies. Both Wi-Fi 6 and 5G offer similar capabilities, which makes them competitors in some use cases and complementary in others. For exclusively indoor retail environments, IT decision-makers may not see a need for 5G networks, Bieler said.

“[Retailers] can do a lot with the technologies that we have today,” he said. “5G will be a continuum rather than a completely revolutionary new technology for them.”

Another issue retailers could face regarding 5G is customer apprehension. Despite 5G’s various new capabilities, customers don’t necessarily care about technological innovations and won’t alter their shopping habits because of 5G. However, customers in younger age groups may be more willing to adapt to the capabilities 5G enables, so organizations should focus more on how to win over older age groups, the report said.

Benefits of 5G in retail use cases, networks

Despite the efficiency of other wireless technologies, the report noted three main areas where 5G in retail can benefit business operations, including the following:

  1. Back-end operations, where organizations can handle work the customers don’t see, such as tracking and monitoring inventory within warehouses.
  2. Front-end operations, which are customer-facing operations and deal with tracking and monitoring products and people within stores.
  3. Finance operations, where the store can remotely track and monitor a customer’s product or service usage and charge them accordingly.

As 5G rolls out throughout the 2020s, more features and potential benefits for organizations will arise, such as network slicing and mobile edge computing. These capabilities can help organizations create experiences tailored specifically to individual customers.

“5G allows the retailer to track many more items and many more sensors in a store than previous cellular technologies, so they can have a much more granular picture of what retail customers are looking at, where they are going and what they are doing with products in the store,” Bieler said.

Other benefits the report cited include cost-efficient store connectivity, enhanced customer insights and improved transparency within supply chains. Organizations won’t glean these benefits for several years, Bieler said, as carriers will deploy new 5G features in stages.

However, decision-makers can prepare to deploy 5G in retail use cases by focusing closely on network design and determining whether 5G is the right choice for their operations. To evaluate this, organizations can assess their indoor connectivity environments and gauge how a 5G deployment could affect the business sectors in which the store or organization requires 5G access.

Overall, 5G has various benefits for retail use cases, the report said, but these benefits are not universal. Businesses need to look closely at their network infrastructures and business requirements to evaluate 5G’s potential effect on their operations. Regardless, Bieler said he was sure deployments of 5G in retail will eventually become common.

“[Retailers] will still adopt it over time because 5G will provide super-fast broadband connectivity,” Bieler said. “It opens up your business model opportunities in an easier way. So, over time, retailers will definitely embrace it, but not tomorrow.”

AI vendors to watch in 2020 and beyond

There are thousands of AI startups around the world. Many aim to do similar things — create chatbots, develop hardware to better power AI models or sell platforms to automatically transcribe business meetings and phone calls.

These AI vendors, or AI-powered product vendors, have raised billions over the last decade, and will likely raise even more in the coming years. Among the thousands of startups, a few shine a little brighter than others.

To help enterprises keep an eye on some of the most promising AI startups, here is a list of those founded within the past five years. The startups listed are all independent companies, not subsidiaries of a larger technology vendor. The chosen startups also cater to enterprises rather than consumers, and focus on explainable AI, hardware, transcription and text extraction, or virtual agents.

Explainable AI vendors and AI ethics

As the need for more explainable AI models has skyrocketed over the last couple of years and the debate over ethical AI has reached government levels, the number of vendors developing and selling products to help developers and business users understand AI models has increased dramatically. Two to keep an eye on are DarwinAI and Diveplane.

DarwinAI uses traditional machine learning to probe and understand deep learning neural networks to optimize them to run faster.

Founded in 2017 and based in Waterloo, Ontario, the startup creates mathematical models of the networks, and then uses AI to create a model that infers faster, while claiming to maintain the same general levels of accuracy. While the goal is to optimize the deep learning models, a 2018 update introduced an “explainability toolkit” that offers optimization recommendations for specific tasks. The platform then provides detailed breakdowns on how each task works, and how exactly the optimization will improve them.

Founded in 2017, Diveplane claims to create explainable AI models based on historical data observations. The startup, headquartered in Raleigh, N.C., puts its outputs through a conviction metric that ranks how likely new or changed data fits into the model. A low ranking indicates a potential anomaly. A ranking that’s too low indicates that the system is highly surprised, and that the data likely doesn’t belong in a model’s data set.

In addition to the explainability product, Diveplane also sells a product that creates an anonymized digital twin of a data set. It doesn’t necessarily help with explainability, but it does help with issues around data privacy.

According to Diveplane CEO Mike Capps, Diveplane Geminai takes in data, understands it and then generates new data from it without carrying over personal data. In healthcare, for example, the product can input patient data and scrub personal information like names and locations, while keeping the patterns in the data. The outputs can then be fed into machine learning algorithms.

“It keeps the data anonymous,” Capps said.

AI hardware

To help power increasingly complex AI models, more advanced hardware — or at least hardware designed specifically for AI workloads — is needed. Major companies, including Intel and Nvidia, have quickly stepped up to the challenge, but so, too, have numerous startups. Many are doing great work, but one stands out.

Cerebras Systems, a startup founded in 2016 and based in Los Altos, Calif., made headlines around the world in 2019 when it created what it dubbed the world's largest computer chip designed for AI workloads. The chip, about the size of a dinner plate, has some 400,000 cores and 1.2 trillion transistors. By comparison, the largest GPU has around 21.1 billion transistors.

The company has shipped a limited number of chips so far, but with a valuation expected to be well over $1 billion, Cerebras looks to be going places.

Automatic transcription companies

It’s predicted that more businesses will use natural language processing (NLP) technology in 2020 and that more BI and AI vendors will integrate natural language search functions into their platforms in the coming years.

Numerous startups, as well as many established companies, sell transcription and text-capturing platforms. It's hard to judge them, as their platforms and services are generally comparable; however, two companies stand out.

Fireflies.ai sells a transcription platform that syncs with users’ calendars to automatically join and transcribe phone meetings. According to CEO and co-founder Krish Ramineni, the platform can transcribe calls with over 90% accuracy levels after weeks of training.

The startup, founded in 2016, presents transcripts within a searchable and editable platform. The transcription is automatically broken into paragraphs and includes punctuation. Fireflies.ai also automatically extracts and bullets information it deems essential. This feature does “a fairly good job,” one client said earlier this year.

The startup plans to expand that function to automatically label more types of information, including tasks and questions.

Meanwhile, Trint, founded in late 2014 by former broadcast journalist Jeff Kofman, is an automatic transcription platform designed specifically for newsrooms, although it has clients across several verticals.

The platform can connect directly with live video feeds, such as the streaming of important events or live press releases, and automatically transcribe them in real time. Transcriptions are collaborative, as well as searchable and editable, and include embedded time codes that make it easy to go back to the video.

“It’s a software with an emotional response, because people who transcribe generally hate it,” Kofman said.

Bots and virtual agents

As companies look to cut costs and process client requests faster, the use of chatbots and virtual agents has greatly increased across numerous verticals over the last few years. While there are many startups in this field, a couple stand out.

Boost.ai, a Scandinavian startup founded in 2016, sells an advanced conversational agent that it claims is powered by a neural network. Automatic semantic understanding technology sits on top of the network, enabling the agent to read textual input word by word, and then as a whole sentence, to understand user intent.

Agents are pre-trained on one of several verticals before they are trained on the data of a new client, and the Boost.ai platform is quick to set up and has a low count of false positives, according to co-founder Henry Vaage Iversen. It can generally understand the intent of most questions within a few weeks of training, and will find a close alternative if it can’t understand it completely, he said.

The platform supports 25 languages and offers pre-trained modules for a number of verticals, including the banking, insurance and transportation industries.

Formed in 2018, EyeLevel.ai doesn’t create virtual agents or bots; instead, it has a platform for conversational AI marketing agents. The San Francisco-based startup has more than 1,500 chatbot publishers on its platform, including independent developers and major companies.

Eyelevel.ai is essentially a marketing platform: it advertises for numerous clients through the bots in its marketplace. Earlier this year, Eyelevel.ai co-founder Ryan Begley offered an example.

An independent developer on its platform created a bot that quizzes users on their Game of Thrones knowledge. The bot operates on social media platforms, and, besides providing a fun game for users, it also collects marketing data on them and advertises products to them. The data it collects is fed back into the Eyelevel platform, which then uses it to promote through its other bots.

By opening the platform to independent developers, it gives individuals a chance to get their bot to a broader audience while making some extra cash. Eyelevel.ai offers tools to help new bot developers get started, too.

“Really, the key goal of the business is help them make money,” Begley said of the developers.

Startup launches continuing to surge

This list of AI-related startups represents only a small percentage of the startups out there. Many offer unique products and services to their clients, and investors have widely picked up on that.

According to the comprehensive AI Index 2019 report, a nearly 300-page report on AI trends compiled by the Human-Centered Artificial Intelligence initiative at Stanford University, global private AI investment in startups reached $37 billion in 2019 as of November.

The report notes that since 2010, which saw $1.3 billion raised, investments in AI startups have increased at an average annual growth rate of over 48%.

The report, which considered only AI startups with more than $400,000 in funding, also found that more than 3,000 AI startups received funding in 2018. That number is on the rise, the report notes.

How should organizations approach API-based SIP services?

Many Session Initiation Protocol (SIP) features are now available through open APIs for a variety of platforms. While voice over IP refers only to voice calls, SIP encompasses the setup and release of all calls, whether they are voice, video or a combination of the two.

Because SIP establishes and tears down call sessions, it brings multiple tools into play. SIP services enable the use of multimedia, VoIP and messaging, and can be incorporated into a website, program or mobile application in many ways.

The available APIs range from application-specific interfaces to libraries for native programming languages, such as Java or Python, for web-based applications. Some newer interfaces are operating system-specific for Android and iOS. SIP is an open protocol, which makes most features available natively regardless of the SIP vendor. However, the features and implementations of SIP service APIs are specific to the API vendor.

Some of the more promising features include the ability to create a call during the shopping experience or from the shopping cart at checkout. This enables customer service representatives and customers to view the same product and discuss and highlight features within a browser, creating an enhanced customer shopping experience.
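
Many SIP service APIs expose this kind of call setup over REST, though the details vary by vendor. As a rough sketch only, a click-to-call request triggered from a checkout page might look like the following; the endpoint URL, field names and token are hypothetical placeholders, not any particular vendor's API.

```powershell
# Hypothetical click-to-call request from a checkout page. The endpoint, fields
# and token are placeholders; a real SIP service API defines its own schema.
$body = @{
    from    = "sip:agent-queue@example.com"        # customer service queue
    to      = "sip:web-visitor-1234@example.com"   # shopper who clicked "Call us" at checkout
    context = "cart-checkout"                      # lets the agent open the same cart view
} | ConvertTo-Json

Invoke-RestMethod -Method Post `
    -Uri "https://sip-provider.example.com/v1/calls" `
    -Headers @{ Authorization = "Bearer $env:SIP_API_TOKEN" } `
    -ContentType "application/json" `
    -Body $body
```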

The type of API will vary based on which offerings you use. Before issuing a request for a quote, issue a request for information (RFI) to learn what kinds of SIP service APIs a vendor has to offer. While this step takes time, it will allow you to determine what is available and what you want to use. You will want to determine the platform or platforms you wish to support. Some APIs may be more compatible with specific platforms, which will require some programming to work with other platforms.

Make sure to address security in your RFI.  Some companies will program your APIs for you. If you don’t have the expertise, or aren’t sure what you’re looking for, then it’s advantageous to meet with some of those companies to learn what security features you need. 

Clumio eyes security, BaaS expansion with VC funding

Merging storage and security together effectively has been an elusive goal for many technology vendors over the years, but Clumio believes it has a winning formula — and one that can effectively mitigate ransomware threats.

Clumio, a backup-as-a-service provider based in Santa Clara, Calif., recently celebrated $135 million in Series C funding. The startup was founded in 2017 with the goal of leveraging cloud-native services to build a scalable and agile BaaS offering that could also meet enterprises' data protection and analytics needs.

In this Q&A, Clumio CTO Chad Kinney and CSO Glenn Mulvaney discuss the origin story of the company, how they plan to utilize their recent funding round, and how Clumio addresses ransomware threats.

Editor’s note: This interview has been edited for length and clarity.

Tell me how the company was founded.

Chad Kinney: The company was founded about two years ago. And the core concept behind it was to fundamentally remove the complexity of traditional data protection to start with, and do so by delivering a service offering that was delivered via the public cloud.

A few things we realized early on were that, as customers were journeying to the public cloud, SaaS-based offerings and PaaS-based offerings, they needed a way to protect their data along the way. And we realized that people were running into roadblocks in moving data to the public cloud because data protection was not able to deliver the same type of functions and features that it delivered on premises, and there was a big barrier there that we were breaking through to help customers be able to journey along the public cloud.

The second part was, as we got to the public cloud, security became a big key focus. Our ability to secure this information through both encryption at rest and encryption in flight, as well as the various other measures Glenn will go through on the core platform itself, was something that customers were very much hyper-focused on as they moved data more and more into the public cloud.

So far we’ve raised about $186 million in a series of A, B and C. Most recently we just closed a series C of $135 million.

How do you plan to use that $135 million to grow the company?

Kinney: A lot of the key focus right now is expediting the introduction of new data sources for the platform itself. Today we back up VMware on premises, VMware running in AWS, as well as elastic block storage for AWS. And so, continuing to expand the data sources is a key thing we’re moving forward with as part of this investment — to get customers access to new data sources faster.

Give me a rundown of what the platform is all about.

Kinney: Fundamentally, we've built this platform for the public cloud, on top of AWS. We've built in a bunch of great efficiencies in the way the data is ingested. With anything that runs on the public cloud, if you compare that with something that runs on premises, typically you do deduplication, and security is retrofitted to the data center itself. And the world has shifted dramatically, where people are looking to utilize the public cloud heavily and remove things completely from the data center. We were able to provide what we call a cloud connector that gets deployed in a customer's environment; it's a virtual appliance, so there's no hardware or anything like that. We do deduplication, compression and encryption before the data is sent over the wire. We leverage the capabilities of S3 within Amazon, and we use their scale as data gets ingested into the platform itself. Then we use various stateless functions within the platform to churn through the data, as well as DynamoDB for a lot of the metadata functions and various other structures in AWS, and the agility and scale of that core platform allow us to ingest data incredibly quickly and provide services on top of that platform.

Glenn Mulvaney: From the security side, leveraging a lot of those public cloud controls we have in Amazon, we’ve implemented a model where data encryption is always on in the platform. It’s not an option to turn it off and data is always encrypted and compressed. And the way it starts, which I think is a critical feature of the platform, is that the data is encrypted before it leaves the customer environment; it’s encrypted in the customer environment, it’s transmitted over a secure channel and then it’s stored securely in S3. And there’s different encryption keys used in each of those steps.

In terms of security in a more general fashion, we think of it in a couple of different ways. Fundamentally, we think of it as technology, people and processes, so we’ve talked about the technology a little bit in terms of how we handle encryption, but for the people and the processes, what we have implemented is the ISO 27001 framework, and we just completed our stage 2 audit last week. The ISO 27001 framework gives us a solid foundation for principles and controls for internal processes, and it also guided how we trained our employees about security awareness. We really used that as a guideline to integrate a lot of security into our software development lifecycle and into our QA lifecycle and broadly across all of the employees at the company, including sales and marketing and customer success.

Do you see yourself as more of a security vendor or a backup vendor or both?

Kinney: I’d say a little bit of both. I’d say we’re a security-first company where we really spent a lot of time thinking about what we’re doing as a core platform setting ourselves up for success. If you had to put a name on it, I’d say we’re more of a data platform company than anything.

What effects have ransomware attacks had on the backup and data protection market in general?

Mulvaney: I think with the prevalence of ransomware attacks happening at all levels of organizations of all sizes, people are thinking a lot more seriously about their data protection and about their ability to recover from some sort of ransomware attack. I think there’s certainly a lot of opportunity for Clumio to help a lot of organizations like that and to be able to give them a truly secure ability to recover from something like a ransomware attack. Certainly the prevalence of these [attacks] is increasing at a rate we hadn’t anticipated, and I think that’s helping in the market for data protection to actually drive people to think much more seriously about what their backup compliance policies look like.

How does Clumio address ransomware threats in a way that’s different from other backup providers?

Kinney: Let me give you the most recent example, which is an interesting one. We recently announced the capability to be able to back up elastic block storage from AWS and when you look at the solutions that are out there today, most people protect data with snapshots and the snapshots live in the same account as the production data. Most people rely on these snapshots for quick recovery but they’re also relying on them for the backup. And when malware hits or a bad actor hits on that particular account, they functionally get access to both the production data as well as the backup of that data in the same account and so it’s opened up possibilities for people to run into data loss issues.

With our solution, what we're fundamentally doing is copying the data and creating an air-gap solution between the customer's environment and Clumio, which enables people to protect their data outside of their account and protects them from malware and ransomware attacks. We store all data in S3, which is immutable, so no data, once backed up, can be changed in any way. That gives customers the ability, with our recovery mechanism, to restore data into another AWS account, alleviating any sort of malware issues that may occur within one of their other AWS accounts.

What do the next 12 months look like for the company?

Kinney: The motivation for us is to continue to expand more and more into the public cloud. Today we solve for the key focus around private cloud, which is VMware. As people are moving to the public cloud, some are choosing to use VMware running in AWS, which is like pressing a button to quickly move assets into the public cloud. They're also re-architecting applications for the public cloud, like using elastic block storage and other platform- and service-based offerings. We are going to continue to expand into both SaaS-based offerings, the usual suspects in that space, as well as more and more cloud-native capabilities, so we can follow customers along that journey.

Beyond the additional data sources, we’re adding additional functions on top of those datasets; we’re investing in things like anomaly detection and reporting over the next 12 months and we are slowly bringing those into the platform as they come to bear.

Mulvaney: From the compliance side in 2020, obviously we're thinking about looking closely at CCPA [the California Consumer Privacy Act], and I think with that going into effect on January 1, we're going to see more new standards emerge for certifications around the protection and handling of personal information. ISO 27001 was already revised in 2019, and it had previously only been revised in 2014. So I think protection of personal data is going to be a paramount part of our roadmap, and in 2020 we're looking very closely at pursuing HITRUST certification and beginning implementation for FedRAMP.

Epicor ERP system focuses on distribution

Many ERP systems try to be all things to all use cases, but that often comes at the cost of heavy customizations.

Some companies are discovering that a purpose-built ERP is a better and more cost-effective bet, particularly for small and midsize companies. One such product is the Epicor ERP system Prophet 21, which is primarily aimed at wholesale distributors.

The functionality in the Epicor ERP system is designed to help distributors run processes more efficiently and make better use of data flowing through the system.

In addition to distribution-focused functions, the Prophet 21 Epicor ERP system includes the ability to integrate value-added services, which could be valuable for distributors, said Mark Jensen, Epicor senior director of product management.

“A distributor can do manufacturing processes for their customers, or rentals, or field service and maintenance work. Those are three areas that we focused on with Prophet 21,” Jensen said.

Prophet 21’s functionality is particularly strong in managing inventory, including picking, packing and shipping goods, as well as receiving and put-away processes.

Specialized functions for distributors

Distribution companies that specialize in certain industries or products have different processes that Prophet 21 includes in its functions, Jensen said. For example, Prophet 21 has functionality designed specifically for tile and slab distributors.

“The ability to be able to work with the slab of granite or a slab of marble — what size it is, how much is left after it’s been cut, transporting that slab of granite or tile — is a very specific functionality, because you’re dealing with various sizes, colors, dimensions,” he said. “Being purpose-built gives [the Epicor ERP system] an advantage over competitors like Oracle, SAP, NetSuite, [which] either have to customize or rely on a third-party vendor to attach that kind of functionality.”

Jergens Industrial Supply, a wholesale supplies distributor based in Cleveland, has improved efficiency and is more responsive to shifting customer demands using Prophet 21, said Tony Filipovic, Jergens Industrial Supply (JIS) operations manager.

"We like Prophet 21 because it's geared toward distribution and was the leading product for distribution," Filipovic said. "We looked at other systems that say they do manufacturing and distribution, but I just don't feel that that's the case. Prophet 21 is something that's been top of the line for years for distribution needs."

One of the key differentiators for JIS was Prophet 21’s inventory management functionality, which was useful because distributors manage inventory differently than manufacturers, Filipovic said.

“All that functionality within that was key, and everything is under one package,” he said. “So from the moment you are quoting or entering an order to purchasing the product, receiving it, billing it, shipping it and paying for it was all streamlined under one system.”

Another key new feature is an IoT-enabled button similar to Amazon Dash buttons that enables customers to resupply stocks remotely. This allows JIS to “stay ahead of the click” and offer customers lower cost and more efficient delivery, Filipovic said.

“Online platforms are becoming more and more prevalent in our industry,” he said. “The Dash button allows customers to find out where we can get into their process and make things easier. We’ve got the ordering at the point where customers realize that when they need to stock, all they do is press the button and it saves multiple hours and days.”

Epicor Prophet 21 a strong contender in purpose-built ERP

Epicor Prophet 21 is on solid ground with its purpose-built ERP focus, but companies have other options they can look at, said Cindy Jutras, president of Mint Jutras, an ERP research and advisory firm in Windham, NH.

"Epicor Prophet 21 is a strong contender from a feature and function standpoint. I'm a fan of solutions that go that last mile for industry-specific functionality, and there aren't all that many for wholesale distribution," Jutras said. "Infor is pretty strong, NetSuite plays here, and then there are a ton of little guys that aren't as well-known."

Prophet 21 may take advantage of new cloud capabilities to compete better in some global markets, said Predrag Jakovljevic, principal analyst at Technology Evaluation Centers, an enterprise computing analysis firm in Montreal.

“Of course a vertically-focused ERP is always advantageous, and Prophet 21 and Infor SX.e go head-to-head all the time in North America,” Jakovljevic said. “Prophet 21 is now getting cloud enabled and will be in Australia and the UK, where it might compete with NetSuite or Infor M3, which are global products.”

Microsoft’s history and future strategy beyond 2020

In 1975, a 20-year-old Bill Gates stated a bold ambition that many at the time thought naive: his fledgling startup would put a computer on every desk and in every home.

Gates and his co-founder, Paul Allen, never quite realized that ambition. They did, however, grow "Micro-Soft" from a tiny, underfunded company selling a BASIC interpreter for the Altair 8800 into a $22.9 billion company selling operating systems, applications and tools, with a 90% share of the microcomputer software market by the year 2000, 25 years into Microsoft's history. Close enough.

That year, Microsoft was among the top five most valuable companies with a market cap of $258 billion. Fast forward to fiscal year 2020: The company’s market cap is slightly over $1 trillion and it now tops the list of the world’s most valuable companies.

One could surmise that Microsoft’s trip to the top of the heap was predictable given its position in the fast-growing computer industry over the past two decades. But the journey between the two mountain peaks saw dramatic changes to the company’s senior management and bold technology changes that strayed far from those products that made it rich and famous. It also helped that many of its archrivals made strategic missteps opening doors of opportunity.

Microsoft’s history marked by leadership changes

There are the more obvious reasons Microsoft remained near or at the top of the most influential companies in high tech: the arrival of Satya Nadella taking over from Steve Ballmer; the subsequent refocusing from proprietary products to open source, as well as making the cloud its first priority.

But as important to the company's success as the right people rising to the top and the right set of priorities is an often-overlooked factor: Microsoft's slow and steady progress in convincing its mammoth user base to buy its core products through long-term, cloud-based subscriptions.

“If you want to point to one thing that’s kept the company’s revenues growing over the past 20 years it is subscription selling,” said Rob Helm, managing vice president of research at Directions On Microsoft, an independent analysis firm in Kirkland, Wash. “It’s not very exciting but the company has used all kinds of clever tactics to shift users to this model. And they aren’t done yet.”

The move to subscription selling, an initiative that originated before Steve Ballmer took over the day-to-day operations of the company, started with its Licensing 6.0 and Software Assurance program in 2002. The program got off to a slow start, mainly because users were unaccustomed to buying their products through long-term licensing contracts, said Al Gillen, group vice president in IDC’s software development and open source practice.

“That program wasn’t used much at all. Hardly anyone back then used subscriptions to buy software,” Gillen said.

But with the arrival of the cloud, Office 365 and Microsoft 365 in particular, longer-term cloud licensing skyrocketed.

“[Microsoft] became very enthusiastic about the cloud because for them, it was yet another way of moving its customers to long-term subscriptions,” Helm said.

The biggest obstacle to moving many of its customers to cloud-based subscriptions was Microsoft’s own success. For decades of Microsoft’s history, the company was wedded to its business model of selling a stack of products that included the operating system, applications and utilities, and signing up major hardware suppliers to sell the stack bundled with their systems. The company grew rich with this model, but by the mid-2000s, trouble was brewing.

Under Ballmer’s reign, the software stack model became outdated with the encroaching age of the cloud and open source software led by a new raft of competitors like AWS and Google. Nadella saw this and knew it was time to accelerate the company’s cloud-based subscription business.

So, although Ballmer famously monkey-danced his way across a stage shouting “developers, developers, developers” at a Microsoft 25th anniversary event in 2000, it was Nadella who knew what Windows and non-Windows developers wanted in the age of the cloud — and how to go about enlisting their cooperation.

“Satya decisively moved away from the full stack model and went to an IaaS model more resembling that of AWS,” Helm said. “This pivot allowed them to deliver cloud services much faster than it could have otherwise. But they could not have done this without cutting Windows adrift from the rest of the stack,” he said.

Open source fork in the road

Microsoft’s pivot to support Linux in 2018 and open source happened “just in time,” according to Gillen. While the company had open source projects underway as far back as the 2004 to 2005 timeframe, including a project to move SQL Server to Linux, it took a long while before open source was accepted across Microsoft’s development groups, according to Gillen.

"A developer [inside Microsoft] once told me he would show up for meetings and the Windows guys would look at him and say, 'We don't want you at this meeting, you're the open source guy,'" Gillen said. "Frankly, looking back, they were lucky to make the transition at all."

What made the transition to long-term cloud subscriptions easier for Microsoft and its users was the combination of cloud and Linux vendors who already had such licensing in play, according to Gillen.

“It became more palatable to users because they were getting more exposure to it from a variety of sources,” he said.

Microsoft has so fully embraced Linux, not just by delivering Linux-compatible products and cloud-based services but also through acquisitions such as GitHub, the world's largest open source code repository, that it is becoming hard to remember a time in Microsoft's history when the company was despised by the open source community.

“If you are a 28-year-old programmer today, you aren’t aware of a time Microsoft was hated by the Linux world,” Gillen said. “But I’d argue now that Microsoft is as invested in open source software as any large cloud vendor.”

Microsoft cloud subscriptions start with desktops

One advantage Microsoft had over its more traditional competitors in moving to a SaaS-based subscription model was the fact that it started by moving desktop applications to the cloud instead of server-based applications that companies like IBM, Oracle and SAP were faced with.

“Microsoft’s desktop software is an easier conversion to the cloud and to move into a subscription model,” said Geoff Woollacott, senior strategy consultant and principal analyst at Technology Business Research Inc. “The degree of difficulty with desktops compared to an on-prem, server-based database that has to be changed to a microservices, subscription-based model is much harder.”

While Microsoft has downplayed the strategic importance of Windows in favor of Azure and cloud-based open source offerings, analysts believe the venerable operating system’s life will extend well into the future. IDC’s Gillen estimated that the Windows desktop and server franchise is likely still in excess of $20 billion a year, which is more than 15% of the company’s overall revenues of $125.8 billion for fiscal 2019 ended June 30.

"Large installed bases have longevity that goes far beyond what most people want them to have. Just look at mainframes as an example," Gillen said. "I'd argue that 60% to 70% of Windows apps are going to be around 10 to 15 years from now, which means Windows doesn't go away."

And those Windows apps are all being infused with AI capabilities, in the Azure cloud, on servers and desktops. Microsoft is also making it easier for developers to create AI-infused applications, with tools like AI Builder for PowerApps.

Microsoft’s AI, quantum computing future in focus

While Gillen and other analysts are hesitant to predict the future of Windows and its applications in 20 years from now, most believe the company will spend a generous amount of time focused on quantum computing.

At its recent Ignite conference, Nadella and other Microsoft executives made it clear they’ll have a deep commitment to quantum computing over the next couple of decades. The company delivered its first quantum software a couple of years ago and, surprisingly, is developing a processor capable of running quantum software with plans to deliver other quantum hardware components.

Over the coming years, Microsoft plans to build quantum systems to solve a wide range of complex problems, including those faced by advanced scientific enterprises, said Julie Love, senior director of Quantum Computing at Microsoft.

“To realize that promise we will need a full system that scales with hundreds and thousands of qubits. It’s why we have this long-term deep development effort in our labs across the globe,” she said.

Microsoft's first steps toward introducing quantum technology for enterprise use will be to combine quantum algorithms with classical algorithms to improve speed and performance and introduce new capabilities into existing systems. The company is already beta testing this approach with Case Western Reserve University, where quantum algorithms have tripled the performance of MRI machines.

“Quantum is going to be a hybrid architecture working alongside classical [architectures],” Love said. “If you look at the larger opportunities down the road, the most promise [for quantum computing] is in chemistry, science, optimization and machine learning.”

Unlike Bill Gates, Satya Nadella isn’t promising a quantum computer on every desktop and in every home by 2040. But you can assume that as long as Gates has an association with the company, it will make an earnest effort to put Microsoft software on those desktops that do.

Kubernetes tools vendors vie for developer mindshare

SAN DIEGO — The notion that Kubernetes solves many problems as a container orchestration technology belies the complexity it adds in other areas, namely for developers who need Kubernetes tools.

Developers at the KubeCon + CloudNativeCon North America 2019 event here this week noted that although native tooling for development on Kubernetes continues to improve, there’s still room for more.

“I think the tooling thus far is impressive, but there is a long way to go,” said a software engineer and Kubernetes committer who works for a major electronics manufacturer and requested anonymity.

Moreover, “Kubernetes is extremely elegant, but there are multiple concepts for developers to consider,” he said. “For instance, I think the burden of the onboarding process for new developers and even users sometimes can be too high. I think we need to build more tooling, as we flush out the different use cases that communities bring out.”

Developer-oriented approach

Enter Red Hat, which introduced an update of its Kubernetes-native CodeReady Workspaces tool at the event.

Red Hat CodeReady Workspaces 2 enables developers to build applications and services on their laptops that mirror the environment they will run in production. And onboarding is but one of the target use cases for the technology, said Brad Micklea, vice president of developer tools, developer programs and advocacy at Red Hat.

The technology is especially useful in situations where security is an issue, such as bringing in new contracting teams or using offshore development teams where developers need to get up and running with the right tools quickly.

CodeReady Workspaces runs on the Red Hat OpenShift Kubernetes platform.

Initially, new enterprise-focused developer technologies are generally used in experimental, proof-of-concept projects, said Charles King, an analyst at Pund-IT in Hayward, Calif. Yet over time those that succeed, like Kubernetes, evolve from the proof-of-concept phase to being deployed in production environments.

“With CodeReady Workspaces 2, Red Hat has created a tool that mirrors production environments, thus enabling developers to create and build applications and services more effectively,” King said. “Overall, Red Hat’s CodeReady Workspaces 2 should make life easier for developers.”

In addition to popular features from the first version, such as an in-browser IDE, Lightweight Directory Access Protocol support, Active Directory and OAuth support, and one-click developer workspaces, CodeReady Workspaces 2 adds support for Visual Studio Code extensions, a new user interface, air-gapped installs and a shareable workspace configuration known as Devfile.

“Workspaces is just generally kind of a way to package up a developer’s working workspace,” Red Hat’s Micklea said.

Overall, the Kubernetes community is primarily “ops-focused,” he said. However, tools like CodeReady Workspaces help to empower both developers and operations.

For instance, at KubeCon, Amr Abdelhalem, head of the cloud platform at Fidelity Investments, said the way he gets teams initiated with Kubernetes is to have them deliver on small projects and move on from there. CodeReady Workspaces is ideal for situations like that because it simplifies developer adoption of Kubernetes, Micklea said.

Such a tool could be important for enterprises that are banking on Kubernetes to move them into a DevOps model to achieve business transformation, said Charlotte Dunlap, an analyst with GlobalData.

“Vendors like Red Hat are enhancing Kubernetes tools and CLI [Command Line Interface] UIs to bring developers with more access and visibility into the ALM [Application Lifecycle Management] of their applications,” Dunlap said. “Red Hat CodeReady Workspaces is ultimately about providing enterprises with unified management across endpoints and environments.”

Competition for Kubernetes developer mindshare

Other companies that focus on the application development platform, such as IBM and Pivotal, have also joined the Kubernetes developer enablement game.

Earlier this week, IBM introduced a set of new open-source tools to help ease developers’ Kubernetes woes. Meanwhile, at KubeCon this week, Pivotal made its Pivotal Application Service (PAS) on Kubernetes generally available and also delivered a new release of the alpha version of its Pivotal Build Service. The PAS on Kubernetes tool enables developers to focus on coding while the platform automatically handles software deployment, networking, monitoring, and logging.

The Pivotal Build Service enables developers to build containers from source code for Kubernetes, said James Watters, senior vice president of strategy at Pivotal. The service automates container creation, management and governance at enterprise scale, he said.

The build service brings technologies such as Pivotal’s kpack and Cloud Native Buildpacks to the enterprise. Cloud Native Buildpacks address dependencies in the middleware layer, such as language-specific frameworks. Kpack is a set of resource controllers for Kubernetes. The Build Service defines the container image, its contents and where it should be kept, Watters said.

Indeed, Watters said he believes it just might be game over in the Kubernetes tools space because Pivotal owns the Spring Framework and Spring Boot, which appeal to a wide swath of Java developers and represent "one of the most popular ways enterprises build applications today."

“There is something to be said for the appeal of Java in that my team would not need to make wholesale changes to our build processes,” said a Java software developer for a financial services institution who requested anonymity because he was not cleared to speak for the organization.

Yet, in today's polyglot programming world, programming language is less of an issue, as teams can switch languages at will. For instance, Fidelity's Abdelhalem said his teams find it easier to move beyond a strict focus on tools and concentrate more on overall technology and strategy to determine what fits in their environment.

How PowerCLI automation brings PowerShell capabilities to VMware

VMware admins can use PowerCLI to automate many common tasks and operations in their data centers and perform them at scale. Windows PowerShell executes PowerCLI commands via cmdlets, which are lightweight, single-purpose commands.

Automation can help admins keep a large, virtualized environment running smoothly. It helps with resource and workload provisioning. It also adds speed and consistency to most operations, since an automated task should behave the same way every time. And because automation can guide daily repetitions of testing, configuration and deployment without introducing the same errors that a tired admin might, it aids in the development of modern software as well.

PowerShell provides easy automation for Windows environments. VMware admins can also use the capabilities of PowerShell, however, with the help of VMware’s PowerCLI, which uses PowerShell as a framework to execute automated tasks on VMware environments.

PowerShell and PowerCLI

In a VMware environment, PowerCLI provides automation and management at scale, and it is quicker than working through a GUI because it operates within the PowerShell framework. PowerCLI functions as a command-line interface (CLI) tool that "snaps into" PowerShell, which executes its commands through cmdlets. PowerCLI cmdlets can manage infrastructure components, such as High Availability, Distributed Resource Scheduler and vMotion, and can perform tasks such as gathering information, powering VMs on and off, and altering workloads and files.

PowerShell commands follow a verb-noun pattern: the verb defines an action to take, and the noun defines the object on which to perform that action. Parameters provide additional detail and specificity to PowerShell commands. In a single line of code, admins can enact mass changes to an entire VMware environment.
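
For example, assuming a vCenter connection has already been opened with Connect-VIServer, a one-line sketch like the following (the VM name pattern is illustrative) combines a verb-noun cmdlet, the pipeline and a parameter to act on many VMs at once:

```powershell
# Start every powered-off VM whose name begins with "web", without per-VM confirmation prompts
Get-VM -Name "web*" | Where-Object { $_.PowerState -eq "PoweredOff" } | Start-VM -Confirm:$false
```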

Common PowerCLI cmdlets

You can automate vCenter and vSphere using a handful of simple cmdlets.

With just five cmdlets, you can execute most major vCenter tasks. To obtain information about a VM, such as its name, power state, guest OS and ESXi host, use the Get-VM cmdlet. To modify an existing vCenter VM, use Set-VM. An admin can use Start-VM to start a single VM or many VMs at once. To stop a VM, use Stop-VM, which simply powers off a VM immediately, or Stop-VMGuest, which performs a more graceful guest OS shutdown. You can use these cmdlets to perform any of these tasks at scale across an entire data center, as the sketch below illustrates.
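
The following is a brief, hedged sketch of those cmdlets in use; the vCenter address, folder and VM names are placeholders, and the Set-VM change is just one example of what that cmdlet can modify.

```powershell
# Connect to vCenter (prompts for credentials)
Connect-VIServer -Server vcenter.example.com

# Get-VM: report the name, power state, guest OS and host of every VM
Get-VM | Select-Object Name, PowerState, Guest, VMHost

# Set-VM: modify an existing VM, here by increasing its memory
Set-VM -VM "app-vm-01" -MemoryGB 16 -Confirm:$false

# Start-VM: power on every VM in the "Dev" folder at once
Get-VM -Location "Dev" | Start-VM

# Stop-VMGuest: gracefully shut down the guest OS of every VM whose name ends in "-test";
# Stop-VM would power them off immediately instead
Get-VM -Name "*-test" | Stop-VMGuest -Confirm:$false
```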

You can also automate vSphere with PowerCLI. One of the most useful cmdlets for vSphere management is Copy-VMGuestFile, which enables an admin to copy files and folders from a local machine to a vSphere VM. Admins can add a number of parameters to this cmdlet to fine-tune vSphere VM behavior. For example, there is -GuestCredential, which authenticates a VM, and -GuestToLocal, which reverses the flow of information.
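
As a hedged example (the VM name, file paths and credentials are placeholders), copying a file in each direction might look like this:

```powershell
# Prompt once for guest OS credentials
$guestCred = Get-Credential

# Push a config file from the local machine into the guest file system of a vSphere VM
Copy-VMGuestFile -Source "C:\configs\app.conf" -Destination "C:\app\" `
    -VM "app-vm-01" -LocalToGuest -GuestCredential $guestCred -Force

# -GuestToLocal reverses the flow: pull a log file from the guest back to the local machine
Copy-VMGuestFile -Source "C:\app\logs\app.log" -Destination "C:\collected-logs\" `
    -VM "app-vm-01" -GuestToLocal -GuestCredential $guestCred
```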

Recent updates to PowerCLI and PowerShell

PowerCLI features over 500 separate commands, and the list is only growing. In June 2019, VMware released PowerCLI 11.3, which added 22 new cmdlets for HCX management and support for opaque networks, additional network adapter types and high-level promotion of instant clones.

PowerShell is more than simply PowerCLI, of course. In May 2019, Microsoft released the first preview of PowerShell 7, the most recent version of PowerShell, which includes several new APIs from the .NET Core 3.0 runtime. At the PowerShell summit in September 2019, Microsoft announced several other developments to PowerShell programming.

PowerShell now works with AWS serverless computing, which enables you to manage a Windows deployment without managing a Windows Server machine. So, you can run PowerShell in response to API and serverless events, such as an image being placed in an AWS Simple Storage Service (S3) bucket, and use it to convert that image to multiple resolutions.
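
A minimal sketch of what such a handler might look like, assuming the AWS Lambda PowerShell runtime (which exposes the triggering event as $LambdaInput) and the AWS Tools for PowerShell S3 cmdlets; the bucket layout and the resize step itself are placeholders:

```powershell
# Sketch of a PowerShell Lambda handler fired by an S3 "object created" notification.
# $LambdaInput is provided by the Lambda PowerShell runtime; the resize step is a placeholder.
foreach ($record in $LambdaInput.Records) {
    $bucket = $record.s3.bucket.name
    $key    = $record.s3.object.key

    # Download the uploaded image to the function's temp storage
    $localPath = Join-Path ([IO.Path]::GetTempPath()) ([IO.Path]::GetFileName($key))
    Read-S3Object -BucketName $bucket -Key $key -File $localPath | Out-Null

    # ... resize $localPath into multiple resolutions here (placeholder) ...

    # Upload the rendition under a separate prefix so this handler isn't retriggered
    Write-S3Object -BucketName $bucket -Key "resized/$([IO.Path]::GetFileName($key))" -File $localPath
}
```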

PowerShell also offers a module called Simple Hierarchy in PowerShell (SHiPS). An admin can use SHiPS to build a hierarchical file system provider from scratch and bypass the normal complexity of such a task. SHiPS reduces the amount of code it takes to write a provider module from thousands of lines to around 20.
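
A rough sketch of the SHiPS pattern follows, assuming the SHiPS module is installed and these classes are saved in their own module (say, MyDrive.psm1); the node names are purely illustrative.

```powershell
using namespace Microsoft.PowerShell.SHiPS

# Root node of the drive: returns its children when the user runs Get-ChildItem
[SHiPSProvider(UseCache = $true)]
class MyRoot : SHiPSDirectory
{
    MyRoot([string]$name) : base($name) { }

    [object[]] GetChildItem()
    {
        return @(
            [MyLeaf]::new('Settings'),
            [MyLeaf]::new('Logs')
        )
    }
}

# Leaf node: an item with no children
class MyLeaf : SHiPSLeaf
{
    MyLeaf([string]$name) : base($name) { }
}

# Usage from a PowerShell session:
#   Import-Module SHiPS, .\MyDrive.psm1
#   New-PSDrive -Name MyDrive -PSProvider SHiPS -Root 'MyDrive#MyRoot'
#   Get-ChildItem MyDrive:
```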

Forus Health uses AI to help eradicate preventable blindness

Big problems, shared solutions

Tackling global challenges has been the focus of many health data consortiums that Microsoft is enabling. The Microsoft Intelligent Network for Eyecare (MINE) – the initiative that Chandrasekhar read about – is now part of the Microsoft AI Network for Healthcare, which also includes consortiums focused on cardiology and pathology.

For all three, Microsoft’s aim is to play a supporting role to help doctors and researchers find ways to improve health care using AI and machine learning.

“The health care providers are the experts,” said Prashant Gupta, Program Director in Azure Global Engineering. “We are the enabler. We are empowering these health care consortiums to build new things that will help with the last mile.”

In the Forus Health project, that "last mile" started with ensuring image quality. When members of the consortium began doing research on what was needed in the eyecare space, Forus Health was already taking the 3nethra classic to villages to scan hundreds of villagers in a day. But because the images were being captured by minimally trained technicians in areas open to sunlight, close to 20% of the images were not of high enough quality to be used for diagnostic purposes.

“If you have bad images, the whole process is crude and wasteful,” Gupta said. “So we realized that before we start to understand disease markers, we have to solve the image quality problem.”

Now, an image quality algorithm immediately alerts the technician when an image needs to be retaken.

The same thought process applies to the cardiology and pathology consortiums. The goal is to see what problems exist, then find ways to use technology to help solve them.

“Once you have that larger shared goal, when you have partners coming together, it’s not just about your own efficiency and goals; it’s more about social impact,” Gupta said.

And the highest level of social impact comes through collaboration, both within the consortiums themselves and when working with organizations such as Forus Health who take that technology out into the world.

Chandrasekhar said he is eager to see what comes next.

“Even though it’s early, the impact in the next five to 10 years can be phenomenal,” he said. “I appreciated that we were seen as an equal partner by Microsoft, not just a small company. It gave us a lot of satisfaction that we are respected for what we are doing.”

Top image: Forus Health’s 3nethra classic is an eye-scanning device that can be attached to the back of a moped and transported to remote locations. Photo by Microsoft. 

Leah Culler edits Microsoft’s AI for Business and Technology blog.
