Learning from our customers in the Greater China Region

As so many organizations have shifted to remote work during COVID-19, we are hearing inspiring stories from customers discovering new ways to connect, collaborate, and keep business moving. From Sydney, Australia, to Seattle, Washington, schools, hospitals, small businesses, and large companies alike have found inventive ways to enable remote work across their organizations. We want to share what they are learning. Each week we will be spotlighting customers in one impacted region around the globe. First up: the Greater China Region. My colleague Lily Zheng in Shanghai is sharing stories of customers who, faced with extraordinary and difficult circumstances, have found innovative new ways to work.

Since we last heard from Lily and team, the region has begun to move into recovery mode. “Many businesses reopened, and more and more people have started going back to work,” Lily reports. “In the past two months, Teams has certainly played an important role in helping our customers pass through the most difficult time.” Looking ahead, she says: “Teams can play an even bigger role in helping our customers boost their productivity and increase their business resilience.” Here are some examples of how organizations in the Greater China Region kept things moving over the past few months.

Education

With travel bans and health concerns keeping students, faculty, and staff at home over the past months, schools and universities have experienced a crash course in moving to remote learning. In February, the Peking University Guanghua School of Management used Teams to hold a digital school-opening ceremony with thousands of students. Meanwhile, Tamkang University, a private university headquartered in New Taipei City, Taiwan, quickly enabled distance learning for students in China, Macau, and Hong Kong by leveraging Microsoft Teams and cloud resources on their iClass Mobile Learning Platform. A total of 637 students and 1,041 teachers were set up to use the platform in 2,366 classes. Hong Kong Polytechnic University is conducting 120 to 160 concurrent teaching sessions daily through Microsoft Teams, with 10,000 to 11,000 students connecting simultaneously during peak times. And Wellington College International Tianjin quickly established a solid e-learning program where students have been able to continue their learning journey with lessons conducted over Microsoft Teams.

Healthcare

The healthcare industry has faced extraordinary pressure during COVID-19. We’ve all seen news stories about medical supply challenges, but these organizations have experienced challenges in the IT space, too, including a lack of video conferencing solutions and heavy dependency on manual patient data entry. Staff at the largest hospital in Wenzhou, China, the 2nd Affiliated Hospital of WMU, for instance, were unable to communicate with personnel inside the quarantined area. They had never used Teams before, but quickly deployed it and were able to communicate with quarantined-area colleagues. The team at Zhongshan Hospital in Shanghai hadn’t used Teams before the outbreak either, but they put it to use to hold their first remote leadership meeting. “It only took a few days to get reports,” said Mr. Li, Chief of Information Management Center at Zhongshan Hospital, “and we were able to successfully hold our first leader’s meeting, which was well-received by the whole leadership team.”

Commercial

SF-Express is one of the best-known logistics companies in China. CIO Sheng Wang said, “Fortunately, we deployed Teams after we revamped our network branches [in] December of 2019. It solves our needs for remote working, meeting, and training, and allows our staff to collaborate with high productivity.” DHL Supply Chain China also deployed Teams to handle its increasing remote collaboration needs.

The manufacturing industry has been hit hard by the impact of the outbreak, but also used it to discover new ways to digitally transform. Headquartered in Ningbo, China, Joyson Electronic has more than 100 bases in 30 countries and over 50,000 employees globally. “Microsoft Teams really helps Joyson improve our cross-regional and boundary collaboration productivity during the COVID-19 outbreak,” reported CIO Zong Jia. “We hold daily internal meetings, co-edit documents, and interview candidates on Teams.”

Over 50 percent of China International Marine Containers (CIMC) Group Ltd.’s business comes from export, which brings an urgent need for project-based management and real-time communications. CIMC has been using Teams to easily enable multiple collaborative team channels and remove restrictions imposed by different work locations. They’re finding it facilitates employee collaboration and has helped them complete their first successful step towards a modern workplace transformation.

We hope you’ve found it helpful to read about some of the innovative ways our customers have transformed their organizations during this difficult time. We have seen how schools have moved quickly to remote learning in virtual classrooms, and are continuing to hold important meetings, with Teams. We’ve seen how healthcare workers, faced with communication barriers brought on by COVID-19, have used Teams to connect. And we’ve seen how commercial enterprises are bringing distributed teams together and are bringing formerly in-person-only meetings—including job interviews—online. As the Greater China Region enters a new phase of its COVID-19 experience, we look forward to learning about how they apply what they’ve discovered in the days to come. We’ll be sharing more inspiring customer stories here soon, so check back often.

Go to Original Article
Author: Microsoft News Center

Cisco security GM discusses plan for infosec domination

Cisco believes CISOs are overwhelmed by too many security products and vendors, and the company introduced a new platform, ominously code-named Thanos, to help enterprises.

But despite being named after the Marvel Comics megavillain, Cisco’s SecureX platform isn’t necessarily designed to wipe out half of all existing security products within enterprise environments. Instead, Cisco is taking a different approach by opening up the platform, which was unveiled last month, and integrating with third parties.

Gee Rittenhouse, senior vice president and general manager of Cisco’s Security Business Group (SBG), said the aim of SecureX is to tie not only Cisco products together, but other vendor offerings as well. “We’ve been working really hard on taking the security problem and reducing it to its simplest form,” he told SearchSecurity at RSA Conference 2020 last month.

That isn’t to say that all security products are effective; many “are supposed to have a bigger impact than they actually do,” Rittenhouse said. Nevertheless, the SBG strategy for SecureX is to establish partnerships with third parties and invite them to integrate with the platform, he said, rather than Cisco trying to be everything to everyone. In this interview, Rittenhouse discusses the evolution of SecureX, how Cisco’s security strategy has shifted over the last decade and the company’s plan to change the infosec industry.

Editor’s note: This interview was edited for clarity and length.

How did the idea for SecureX come about?


Gee Rittenhouse: We thought initially if we had a solution for every one of the major threat vectors — email, endpoint, firewalls, cloud, etc. — for one vendor, Cisco, then that would be enough. You buy Cisco networking and you buy Cisco security and that transactional model will simplify the industry. And we realized very quickly that didn’t do anything except put a name on a box. Then the second thing we thought was this: What happens if we take all these different things and integrate the back end together so that when I see a threat on email, I can block on my endpoint? We stitch all this together [via the SecureX framework] on behalf of the customer, and not only does the blocking happen automatically but you also get better protection and higher efficacy. We’d tell people we had an integrated architecture. And the customers would look at us and say ‘Really? I don’t feel that. You’ve got a portal over here, and a portal over there’ and so on. And we’d say, ‘Look, we’ve worked for three years integrating this together and we have the highest efficacy.’ And they’d say, ‘Well, everybody has their numbers …’

About a couple of years ago, we said we’ve simplified the buying model and simplified the back end. Let’s try to simplify the user experience. But you have to be very careful with that. The classic approach is to build a platform, and everyone jumps on the platform and if you only have Cisco stuff, life is great. But, of course, there are other platforms and other products. We wanted to be precise about how we do this, so we picked a particular use case around investigations. It’s an important use case. We built this very simple investigation tool [Cisco Threat Response] that you can think about as the Google search of security. Within five seconds, you can find out that you don’t have [a specific threat] in your environment, or yes, you do and here’s how to block it and respond. The tool had the fastest rate of adoption of any of our products in Cisco’s history. It’s massively successful. More than 8,000 customers use it every day as their investigation tool.

Were you expecting that kind of adoption for Cisco Threat Response?

Rittenhouse: No. We were not. There were two things we weren’t expecting. We weren’t expecting the response in terms of usage. We thought there’d be a few customers using it. The other thing that we didn’t expect was that a whole user community came together to, for example, integrate vendor X into the tool and publish the connectors on GitHub. A whole user community has evolved around that platform and extended the capability of it. In both cases, we were quite surprised.

When we saw how that worked, saw the business model, and we understood how people consumed it, we attached it to everything and then said ‘Let’s take the next step’ with analytics and security postures. We asked what a day in the life for a security professional was. They’re flooded with noise and threats and alerts. They have to be able to decipher all of that — can the platform do that automatically on their behalf? That’s what we’re doing with SecureX, and the feedback has been super positive.

What kind of feedback did you get from customers prior to Cisco Threat Response and SecureX? Did they have an idea of what they wanted?


Rittenhouse: There was a lot of feedback from customers who asked us to make the front end of our portfolio simpler. But what does that actually mean? It was very generic feedback. And in fact, we struggled with the ‘single pane of glass’ approach. What typically happens with that approach is you try to do everything through it, and all of the sudden that portal becomes the slowest part of the portfolio. This actually took a lot of time and a lot of conversations with customers on how they actually work. We engaged a lot of them with design thinking, and Cisco Threat Response was the first thing to come out of those discussions, and then SecureX.

And I want to make the distinction between a platform and a single pane of glass or a portal. And we very much think of SecureX as a platform. And when you think about a platform, it’s usually something that other people can build stuff on top of, so the value to the community is other people’s contributions to it, and you get a multiplier effect. There is only a handful of true, successful platform businesses in the world; it’s very hard to attract that community and achieve that scale.

Like other recent studies, Cisco’s [2020] CISO Benchmark Report showed that many CISOs feel they have too many security products and are actively trying to reduce the number of vendors they have. Other vendors have talked about this trend and are trying to capitalize on it by becoming a one-stop security shop and pushing out other products. But with SecureX, it sounds like you’re taking a different approach by welcoming third-party vendors to the platform and being more open.

Rittenhouse: We would encourage the industry as a whole to be more open. In fact, the industry is not very open at all. One of the benefits to being open is the ability to integrate. In today’s industry, for example, let’s say you’re a security vendor and your technology says a piece of malware is a threat level 5, and I say it’s a level 2. And you’re integrated into our platform, and you’re freaking out because it’s a level 5. I ask you, ‘Rob, why do you think this? What’s the context around this? Share more.’ And until you have that open interface and integration, I just sit there and say, ‘For some reason, this vendor over here claims it’s big, but we don’t see it.’

So yes, we’re open. And I would anticipate the user experience with Cisco security products integrated together will be very different than what you would get with third parties integrated until they start to share more. And this is one of the issues you see in the SIEM and SOAR markets; they become data repositories for investigations after you get attacked. What actually happened? Let’s go back into the records and figure it out. Because of the data fidelity and the real-time nature [of SecureX] this is something you interact with immediately. It can automatically trace threats and set up workflows and bring in other team members to collaborate because you have that integrated back end.

Cisco has said it’s the biggest security vendor in the world by revenue, but most businesses probably still associate the company with networking. Now that SecureX has been introduced, what’s the strategy moving forward?

Rittenhouse: We’ve spent a lot of time on the messaging. I think more and more people recognize we’re the biggest enterprise security company. In many ways, our mission is to democratize security like [Duo Security’s] Wendy Nather said, so we want to make it invisible. We don’t want to be sending the message that you have to get this other stuff to be secure. We want it to be built into everything we do.

There have been a lot of mergers and acquisitions, especially by companies looking to increase their infosec presence. But Wendy talked during her keynote about simplifying security instead of adding product upon product. It doesn’t sound like you’re feeling the pressure to do that.

Rittenhouse: No. We are not a private equity firm. We buy things for a purpose. And when we buy something, we’ll be happy to tell you why.


Workspot VDI key to engineering firm’s pandemic planning

Like many companies, Southland Industries is working to accelerate its virtualization plans in the face of the coronavirus pandemic.

The mechanical engineering firm, which is based in Garden Grove, Calif., and has seven main offices across the U.S., has been using the Workspot Workstation Cloud virtual desktop service. Combined with Microsoft Azure, Workspot’s service enables engineers to do design-intensive work at home and enables Southland to keep pace as technology advances. When COVID-19 emerged, the company was transitioning users in the mid-Atlantic states to virtual desktops.

Israel Sumano, senior director of infrastructure at Southland Industries, recently spoke about making the move to virtual desktops and the challenges posed by the current public health crisis.

How did your relationship with Workspot first begin?


Israel Sumano: We were replicating about 50 terabytes across 17 different locations in the U.S. real-time, with real-time file launches. It became unsustainable. So over the last five years, I’ve tested VDI solutions — Citrix, [VMware] Horizon, other hosted solutions, different types of hardware. We never felt the performance was there for our users.

When Workspot came to us, I liked it because we were able to deploy within a week. We tested it on on-prem hardware, we tested it on different cloud providers, but it wasn’t until we had Workspot on [Microsoft] Azure that we were comfortable with the solution.

For us to build our own GPU-enabled VDI systems [needed for computing-intensive design work], we probably would have spent about $4 million, and they would have been obsolete in about six years. By doing it with Microsoft, we were able to deploy the machines and ensure they will be there and upgradeable. If a new GPU comes out, we can upgrade to the new GPU and it won’t be much cost to us to migrate.

How has your experience in deploying Workspot been so far? What challenges have you met?

Sumano: It was a battle trying to rip the PCs from engineers’ hands. They had a lot of workstations [and] they really did not want to give them up. We did the first 125 between October 2017 and February 2018. … That pushed back the rest of the company by about a year and a half. We didn’t get started again until about October of 2019. By that time, everyone had settled in, and they all agreed it was the best thing we’ve ever done and we should push forward. That’s coming from the bottom up, so management is very comfortable now doing the rest of the company.

How did you convince workers that the virtualization service was worthwhile?

Sumano: They were convinced when they went home and were able to work, or when they were in a hotel room and they were able to work. When they were at a soccer match for their kids, and something came up that needed attention right away, they pulled out their iPads and were able … to manipulate [designs] or check something out. That’s when it kicked in.

In the past, when they went to a job site, [working] was a really bad experience. We invested a lot of money into job sites to do replication [there].

[With Workspot,] they were able to pick up their laptops, go to the job site and work just like they were at the office.

The novel coronavirus has forced companies to adopt work-at-home policies. What is Southland’s situation?

Sumano: We have offices in Union City [California], which is Marin County, and they were ordered to stay in place, so everyone was sent home there. We just got notice that Orange County will be sent home. Our Las Vegas offices have also been sent home.

Our job sites are still running, but having this solution has really changed the ability for these engineers to go home and work. Obviously, there’s nothing we can do about the shops — we need to have people on-hand at the shop, [as] we’re not fully automated at that level.

On the construction site, we need guys to install [what Southland has designed]. Those are considered critical by the county. They’re allowed to continue work at the job sites, but everybody from the offices has been sent home, and they’re working from home.

We hadn’t done the transition for the mid-Atlantic division to Workspot. We were planning on finishing that in the next 10 weeks. We are now in a rush and plan on finishing it by next Friday. We’re planning on moving 100 engineers to Workspot, so they’re able to go home.

How has it been, trying to bring many workers online quickly?

Sumano: I’ve been doing this a long time. I’ve implemented large virtual-desktop and large Citrix environments in the past. It’s always been a year to a year-and-a-half endeavor.

We are rushing it for the mid-Atlantic. We’d like to take about 10 weeks to do it — to consolidate servers and reduce footprint. We’re skipping all those processes right now and just enacting [virtualization] on Azure, bringing up all the systems as-is and then putting everyone onto those desktops.

Has the new remote-work situation been a strain on your company’s infrastructure?

Sumano: The number of people using it is exactly the same. We haven’t heard any issues about internet congestion — that’s always a possibility with more and more people working from home. It’s such a small footprint, the back-and-forth chatter between Workspot and your desktop, that it shouldn’t be affected much.

What’s your level of confidence going forward, given that this may be a protracted situation?

Sumano: We’re very confident. We planned on being 100% Azure-based by December 2020. We’re well on track for doing that, except for, with what’s happening right now, there was a bit of a scramble to get people who didn’t have laptops [some] laptops. There’s a lot of boots on the ground to get people able to work from home.

Most of our data is already on Azure, so it’s a very sustainable model going forward, unless there’s a hiccup on the internet.

Editor’s note: This interview has been edited for clarity and length.


Remote work shift may boost SaaS management platforms

In Milan, Ivan Fioravanti, CTO at CoreView, is working from home, like so many others, because of the coronavirus. In Italy, the outbreak has caused a rapid shift to remote work. His firm makes a SaaS management platform for cloud-based SaaS systems, including Office 365, something that may gain HR’s interest, especially as remote work increases.

Features include workflow management and administration, but a SaaS management platform like the one from CoreView, which has dual headquarters in Alpharetta, Ga., and Italy, can also help HR departments and business managers get a better understanding of the productivity of employees.

SaaS applications connect through APIs into the cloud-based management platform to provide application usage data. This can include anything from companywide usage to employee-specific data on applications such as Outlook, Skype or Teams, whether the employee is in the office or remote.
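As a rough sketch of what that rollup looks like (the record fields and app names below are illustrative, not CoreView’s actual API schema), per-user usage records pulled from such an API can be aggregated into company-wide totals per application:

```python
from collections import defaultdict

# Hypothetical usage records, shaped the way a SaaS management API might
# return them; field names here are assumptions for illustration only.
usage_records = [
    {"user": "alice", "app": "Teams", "minutes_active": 310},
    {"user": "bob", "app": "Teams", "minutes_active": 45},
    {"user": "alice", "app": "Outlook", "minutes_active": 120},
    {"user": "carol", "app": "Teams", "minutes_active": 0},
]

def usage_by_app(records):
    """Roll per-user records up into company-wide totals per application."""
    totals = defaultdict(int)
    for rec in records:
        totals[rec["app"]] += rec["minutes_active"]
    return dict(totals)

print(usage_by_app(usage_records))  # {'Teams': 355, 'Outlook': 120}
```

The same records could just as easily be grouped by user or by department, which is the difference between companywide and employee-specific reporting described above.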

This monitoring capability may appeal to firms new to remote working, said David Lewis, president and CEO of OperationsInc, an HR consulting firm in Norwalk, Conn.

Firms are now adopting remote work that “were not interested in it before,” Lewis said. “And that breeds a certain lack of sophistication about remote work.”

Indeed, he said, the coronavirus will have a major impact on how work gets done. “The number of companies that will have people working remotely will outnumber the number of companies that don’t,” he said.

Employees may see this type of monitoring as big brotherish, Lewis said. But it may “calm the concerns and paranoia that tends to creep in to most managers” about remote workers, he said. Managers may be concerned that employees working from home are distracted and not giving the job the time it needs, he said.


This shift to remote work is happening quickly in Milan, Fioravanti said. The coronavirus problem is “really becoming worse day after day,” he said. Italy this week closed schools until March 15. Universities are closing as well, and the government is urging seniors to remain at home.

“Very few people are going to the office,” Fioravanti said. “If you go outside in the city, you see very few people around.”

SaaS management platform functions

Firms that are shifting to remote work and have invested in a SaaS management platform can decide on the level of monitoring they want, whether it’s department, team or individual usage, Fioravanti said. Individual-level monitoring can tell whether an employee is responding to such things as emails and chats and is engaged with co-workers and third parties, he said.

Along with providing insights into how an application is used, usage data can tell whether a firm needs all the seat licenses it is paying for. Workflow features can be used to speed up onboarding, and SaaS management platforms often provide embedded learning tools, such as short videos for ongoing training, on specific applications. The platforms also include licensing management and IT security functions, such as role-based access controls.
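The seat-license check mentioned above reduces to a simple set difference between licensed users and recently active users. A minimal sketch, where the user names and the idea of a recent-activity window are assumptions for illustration:

```python
def unused_seats(licensed_users, active_users):
    """Licensed users with no recorded activity are candidates for
    reclaiming, so the firm stops paying for seats nobody uses."""
    return sorted(set(licensed_users) - set(active_users))

licensed = ["alice", "bob", "carol", "dave"]
active = ["alice", "carol"]  # e.g. users active within the reporting window

print(unused_seats(licensed, active))  # ['bob', 'dave']
```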

If there is fear of an employee backlash, or legal restrictions in some countries about employee monitoring, the system can be configured to anonymize users, Fioravanti said.
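One common way to anonymize users while still keeping per-user counts consistent is keyed hashing: the same user always maps to the same pseudonym, but reports never contain the real identity. This sketch shows the general technique only, not how CoreView or any specific product implements it:

```python
import hashlib
import hmac

# A per-deployment secret; in practice this would be stored securely and
# never shipped alongside the reports. Placeholder value for illustration.
SECRET_KEY = b"rotate-me-regularly"

def pseudonymize(user_id: str) -> str:
    """Replace a user ID with a stable, non-reversible pseudonym so usage
    can still be aggregated per (anonymous) user."""
    digest = hmac.new(SECRET_KEY, user_id.encode("utf-8"), hashlib.sha256)
    return digest.hexdigest()[:12]

# The mapping is deterministic, so counts per pseudonym stay accurate:
assert pseudonymize("alice@example.com") == pseudonymize("alice@example.com")
```

Because the hash is keyed rather than a plain SHA-256 of the user ID, someone holding only the reports cannot confirm a guessed identity without the secret.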


Gartner analyst Manjunath Bhat said SaaS management platforms “are increasingly becoming important to manage, govern and secure SaaS applications.”

“It’s less about measuring individual productivity, and more about ensuring that employees are making use of the productivity tools at their disposal — and doing so in secure and compliant ways,” he said.

Bhat advised against using SaaS management platforms to monitor individual employees.

“Organizations will see employee backlash if the tools are used to target and penalize individuals for not using productivity apps,” Bhat said. What’s important to measure is the application’s “contribution toward business outcomes and not individual output,” he said.

“It’s the team’s productivity that matters more than individual metrics,” Bhat said.


SMBs struggle with data utilization, analytics

While analytics have become a staple of large enterprises, many small and medium-sized businesses struggle to utilize data for growth.

Large corporations can afford to hire teams of data scientists and provide business intelligence software to employees throughout their organizations. While many SMBs collect data that could lead to better decision-making and growth, data utilization is a challenge when there isn’t enough cash in the IT budget to invest in the right people and tools.

Sensing that SMBs struggle to use data, Onepath, an IT services vendor based in Kennesaw, Ga., conducted a survey of more than 100 businesses with 100 to 500 employees to gauge their analytics capabilities for the “Onepath 2020 Trends in SMB Data Analytics Report.”

Among the most glaring discoveries, the survey revealed that 86% of surveyed companies that had invested in analytics personnel and tools felt they weren’t able to fully exploit their data.

Phil Moore, Onepath’s director of applications management services, recently discussed both the findings of the survey and the challenges SMBs face when trying to incorporate analytics into their decision-making process.

In Part II of this Q&A, he talks about what failure to utilize data could ultimately mean for SMBs.

What was Onepath’s motivation for conducting the survey about SMBs and their data utilization efforts?

Phil MoorePhil Moore

Phil Moore: For me, the key finding was that we had a premise, a hypothesis, and this survey helped us validate our thesis. Our thesis is that analytics has always been a deep pockets game — people want it, but it’s out of reach financially. That’s talking about the proverbial $50,000 to $200,000 analytics project… Our goal and our mission is to bring that analytics down to the SMB market. We just had to prove our thesis, and this survey proves that thesis.

It tells us that clients want it — they know about analytics and they want it.

What were some of the key findings of the survey?

Moore: Fifty-nine percent said that if they don’t have analytics, it’s going to take them longer to go to market. Fifty-six percent said it will take them longer to service their clients without analytics capabilities. Fifty-four percent, a little over half, said if they didn’t have analytics, or when they don’t have analytics, they run the risk of making a harmful business decision.


That tells us people want it… We have people trying analytics — 67% are spending $10,000 a year or more, and 75% spent at least 132 hours of labor maintaining their systems — but they’re not getting what they need. A full 86% said they’re underachieving when they’re taking a swing with their analytics solution.

What are the key resources these businesses lack in order to fully utilize data? Is it strictly financial or are there other things as well?

Moore: We weren’t surprised, but what we hadn’t thought about is that the SMB market just doesn’t have the in-house skills. One in five said they just don’t have the people in the company to create the systems.

Might new technologies help SMBs eventually exploit data to its full extent?

Moore: The technologies have emerged and have matured, and one of the biggest things in the technology arena that helps bring the price down, or make it more available, is simply moving to the cloud. An on-premises analytics solution requires hardware, and it’s just an expensive footprint to get off the ground. But with Microsoft and their Azure Cloud and their Office 365, or their Azure Synapse Analytics offering, people can actually get to the technology at a far cheaper price point.

That one technology right there makes it far more affordable for the SMB market.

What about things like low-code/no-code platforms, natural language query, embedded analytics — will those play a role in helping SMBs improve data utilization for growth?

Moore: In the SMB market, they’re aware of things like machine learning, but they’re closer to the core blocking and tackling of looking at [key performance indicators], looking at cash dashboards so they know how much cash they have in the bank, looking at their service dashboard and finding the clients they’re ignoring.

The first and easiest one that’s going to apply to SMBs is low-code/no-code, particularly in grabbing their source data, transforming it and making it available for analytics. Prior to low-code/no-code, it’s really a high-code alternative, and that’s where it takes an army of programmers and all they’re doing is moving data — the data pipeline.

But there will be a set of the SMB market that goes after some of the other technologies like machine learning — we’ve seen some people be really excited about it. One example was looking at [IT help] tickets that are being worked in the service industry and comparing it with customer satisfaction. What they were measuring was ticket staleness, how many tickets their service team were ignoring, and as they were getting stale, their clients would be getting angry for lack of service. With machine learning, they were able to find that if they ignored a printer ticket for two weeks, that is far different than ignoring an email problem for two weeks. Ignoring an email problem for two days leads to a horrible customer satisfaction score. Machine learning goes in and relates that stuff, and that’s very powerful. The small and medium-sized business market will get there, but they’re starting at earlier and more basic steps.

Editor’s note: This Q&A has been edited for brevity and clarity.


How to install the Windows Server 2019 VPN

Many organizations rely on a virtual private network, particularly those with a large number of remote workers who need access to resources.

While there are numerous vendors selling their VPN products in the IT market, Windows administrators also have the option to use the built-in VPN that comes with Windows Server. One of the benefits of using Windows Server 2019 VPN technology is there is no additional cost to your organization once you purchase the license.

Another perk of using a Windows Server 2019 VPN is that the integration of the VPN with the server operating system reduces the number of infrastructure components that can break. An organization that uses a third-party VPN product will have an additional hoop the IT staff must jump through if remote users can’t connect to the VPN and lose access to network resources they need to do their jobs.

One relatively new feature in Windows Server 2019 VPN functionality is the Always On VPN, which some users in various message boards and blogs have speculated will eventually replace DirectAccess, which remains supported in Windows Server 2019. Microsoft cites several advantages of Always On VPN, including granular app- and traffic-based rules to restrict network access, support for both RSA and elliptic curve cryptography algorithms, and native Extensible Authentication Protocol support to enable the use of a wider variety of advanced authentication methods.

Microsoft documentation recommends that organizations currently using DirectAccess evaluate Always On VPN functionality before migrating their remote access processes.

The following transcript of the video tutorial by contributor Brien Posey explains how to install the Windows Server 2019 VPN role.

In this video, I want to show you how to configure Windows Server 2019 to act as a VPN server.

Right now, I’m logged into a domain-joined Windows Server 2019 machine, and I have Server Manager open, so let’s go ahead and get started.

The first thing that I’m going to do is click on Manage and then I’ll click on Add Roles and Features.

This is going to launch the Add Roles and Features wizard.

I’ll go ahead and click Next on the Before you begin screen.

For the installation type, I’m going to choose Role-based or feature-based installation and click Next. From there I’m going to make sure that my local server is selected. I’ll click Next.

Now I’m prompted to choose the server role that I want to deploy. You’ll notice that right here we have Remote Access. I’ll go ahead and select that now. Incidentally, in the past, this was listed as Routing and Remote Access, but now it’s just listed as Remote Access. I’ll go ahead and click Next.

I don’t need to install any additional features, so I’ll click Next again, and I’ll click Next [again].

Now I’m prompted to choose the Role Services that I want to install. In this case, my goal is to turn the server into a VPN, so I’m going to choose DirectAccess and VPN (RAS).

There are some additional features that are going to need to be installed to meet the various dependencies, so I’ll click Add Features and then I’ll click Next. I’ll click Next again, and I’ll click Next [again].

I’m taken to a confirmation screen where I can make sure that all of the necessary components are listed. Everything seems to be fine here, so I’ll click Install and the installation process begins.

So, after a few minutes the installation process completes. I’ll go ahead and close this out and then I’ll click on the Notifications icon. We can see that some post-deployment configuration is required. I’m going to click on the Open the Getting Started Wizard link.

I’m taken into the Configure Remote Access wizard and you’ll notice that we have three choices here: Deploy both DirectAccess and VPN, Deploy DirectAccess Only and Deploy VPN Only. I’m going to opt to Deploy VPN Only, so I’ll click on that option.

I’m taken into the Routing and Remote Access console. Here you can see our VPN server. The red icon indicates that it hasn’t yet been configured. I’m going to right-click on the VPN server and choose the Configure and Enable Routing and Remote Access option. This is going to open up the Routing and Remote Access Server Setup Wizard. I’ll go ahead and click Next.

I’m asked how I want to configure the server. You’ll notice that the very first option on the list is Remote access dial-up or VPN. That’s the option that I want to use, so I’m just going to click Next since it’s already selected.

I’m prompted to choose my connections that I want to use. Rather than using dial-up, I’m just going to use VPN, so I’ll select the VPN checkbox and click Next.

The next thing that I have to do is tell Windows which interface connects to the internet. In my case it’s this first interface, so I’m going to select that and click Next.

I have to choose how I want IP addresses to be assigned to remote clients. I want those addresses to be assigned automatically, so I’m going to make sure Automatically is selected and click Next.

The next prompt asks me if I want to use a RADIUS server for authentication. I don’t have a RADIUS server in my own organization, so I’m going to choose the option No, use Routing and Remote Access to authenticate connection requests instead. That’s selected by default, so I can simply click Next.

I’m taken to a summary screen where I have the chance to review all of the settings that I’ve enabled. If I scroll through this, everything appears to be correct. I’ll go ahead and click Finish.

You can see that the Routing and Remote Access service is starting and so now my VPN server has been enabled.
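For administrators who prefer scripting, the same role installation can also be done from PowerShell. This is a minimal sketch, not part of Posey's walkthrough; run it in an elevated session, and note that the feature and cmdlet names below follow the Windows Server role catalog and the RemoteAccess module:

```powershell
# Install the Remote Access role with the DirectAccess and VPN (RAS) role service
Install-WindowsFeature DirectAccess-VPN -IncludeManagementTools

# Configure the server for VPN-only remote access
# (the equivalent of choosing "Deploy VPN Only" in the wizard)
Install-RemoteAccess -VpnType Vpn
```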



Decision-makers may prefer Wi-Fi over 5G in retail networks

While fifth-generation wireless has taken the technology world by storm, many retailers don’t see a need to heed the hype.

Several use cases may see immediate 5G benefits, yet 5G in retail is superfluous for now. Although 5G can support retail networks that require advanced capabilities, such as virtual reality, the retail world won’t depend on 5G because other wireless technologies are still efficient, according to a recent Forrester Research report. The report, “The CIO’s Guide To 5G In The Retail Sector,” explored particular retail use cases, and report author and principal analyst Dan Bieler discussed key differences between retail and other 5G use cases.

“Retailers are quite sophisticated in their existing technology understanding,” Bieler said. “They have achieved some great solutions with existing technologies, and they will not risk upsetting everything in the short term where they don’t see a clear [ROI] for making additional network infrastructure investments in 5G.”


Retailers are interested in 5G for their networks, Bieler said, yet few have implemented or deployed 5G so far. Some retailers may seek out 5G as a replacement for existing MPLS connectivity, but this choice depends on pricing models and business requirements. Overall, IT decision-makers may prefer Wi-Fi over 5G in retail networks because not all retailers require the advanced capabilities 5G networks offer, he added.

5G in retail lacks transformative qualities largely because cellular technologies weren’t developed for indoor network coverage; physical objects indoors can impede 5G’s millimeter wave frequencies, which largely depend on line-of-sight travel.

The advent of Wi-Fi 6, or 802.11ax, may interest retailers more than 5G, as Wi-Fi historically supports more indoor use cases and networks than cellular technologies. Both Wi-Fi 6 and 5G offer similar capabilities, which makes them competitors in some use cases and complementary in others. For exclusively indoor retail environments, IT decision-makers may not see a need for 5G networks, Bieler said.

“[Retailers] can do a lot with the technologies that we have today,” he said. “5G will be a continuum rather than a completely revolutionary new technology for them.”


Another issue retailers could face regarding 5G is customer apprehension. Despite 5G’s various new capabilities, customers don’t necessarily care about technological innovations and won’t alter their shopping habits because of 5G. However, customers in younger age groups may be more willing to adapt to the capabilities 5G enables, so organizations should focus more on how to win over older age groups, the report said.

Benefits of 5G in retail use cases, networks

Despite the efficiency of other wireless technologies, the report noted three main areas where 5G in retail can benefit business operations:

  1. Back-end operations, where organizations can handle work the customers don’t see, such as tracking and monitoring inventory within warehouses.
  2. Front-end operations, which are customer-facing operations and deal with tracking and monitoring products and people within stores.
  3. Finance operations, where the store can remotely track and monitor a customer’s product or service usage and charge them accordingly.

As 5G rolls out throughout the 2020s, more features and potential benefits for organizations will arise, such as network slicing and mobile edge computing. These capabilities can help organizations create experiences tailored specifically to individual customers.

“5G allows the retailer to track many more items and many more sensors in a store than previous cellular technologies, so they can have a much more granular picture of what retail customers are looking at, where they are going and what they are doing with products in the store,” Bieler said.

Other benefits the report cited include cost-efficient store connectivity, enhanced customer insights and improved transparency within supply chains. Organizations won’t glean these benefits for several years, Bieler said, as carriers will deploy new 5G features in stages.

However, decision-makers can prepare to deploy 5G in retail use cases by focusing closely on network design and determining whether 5G is the right choice for their operations. To evaluate this, organizations can assess their indoor connectivity environments and gauge how a 5G deployment could affect the business sectors in which the store or organization requires 5G access.

Overall, 5G has various benefits for retail use cases, the report said, but these benefits are not universal. Businesses need to look closely at their network infrastructures and business requirements to evaluate 5G’s potential effect on their operations. Regardless, Bieler said he was sure deployments of 5G in retail will eventually become common.

“[Retailers] will still adopt it over time because 5G will provide super-fast broadband connectivity,” Bieler said. “It opens up your business model opportunities in an easier way. So, over time, retailers will definitely embrace it, but not tomorrow.”

Go to Original Article
Author:

AI vendors to watch in 2020 and beyond

There are thousands of AI startups around the world. Many aim to do similar things — create chatbots, develop hardware to better power AI models or sell platforms to automatically transcribe business meetings and phone calls.

These AI vendors, or AI-powered product vendors, have raised billions over the last decade, and will likely raise even more in the coming years. Among the thousands of startups, a few shine a little brighter than others.

To help enterprises keep an eye on some of the most promising AI startups, here is a list of those founded within the past five years. The startups listed are all independent companies, not subsidiaries of a larger technology vendor. The chosen startups also cater to enterprises rather than consumers, and focus on explainable AI, hardware, transcription and text extraction, or virtual agents.

Explainable AI vendors and AI ethics

As the need for more explainable AI models has skyrocketed over the last couple of years and the debate over ethical AI has reached government levels, the number of vendors developing and selling products to help developers and business users understand AI models has increased dramatically. Two to keep an eye on are DarwinAI and Diveplane.

DarwinAI uses traditional machine learning to probe and understand deep learning neural networks to optimize them to run faster.

Founded in 2017 and based in Waterloo, Ontario, the startup creates mathematical models of the networks, and then uses AI to create a model that infers faster, while claiming to maintain the same general levels of accuracy. While the goal is to optimize the deep learning models, a 2018 update introduced an “explainability toolkit” that offers optimization recommendations for specific tasks. The platform then provides detailed breakdowns on how each task works, and how exactly the optimization will improve them.

Founded in 2017, Diveplane claims to create explainable AI models based on historical data observations. The startup, headquartered in Raleigh, N.C., puts its outputs through a conviction metric that ranks how well new or changed data fits into the model. A low ranking indicates a potential anomaly; a ranking that’s too low indicates that the system is highly surprised and that the data likely doesn’t belong in the model’s data set.
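The idea of ranking how "surprised" a model is by new data can be illustrated with a toy example. This is a hedged sketch, not Diveplane's actual algorithm: it scores a new value by how far it sits from the historical distribution, flagging poorly fitting values as potential anomalies.

```python
import statistics

def surprise_score(history, value):
    """Z-score-style measure of how far a new value sits from history;
    higher means the value fits the historical data less well."""
    mean = statistics.mean(history)
    stdev = statistics.stdev(history)
    return abs(value - mean) / stdev

# Hypothetical historical observations of some metric
history = [10.1, 9.8, 10.0, 10.3, 9.9, 10.2]

print(surprise_score(history, 10.0) < 1.0)   # typical value: low surprise
print(surprise_score(history, 25.0) > 3.0)   # outlier: highly surprising
```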


In addition to the explainability product, Diveplane also sells a product that creates an anonymized digital twin of a data set. It doesn’t necessarily help with explainability, but it does help with issues around data privacy.

According to Diveplane CEO Mike Capps, Diveplane Geminai takes in data, understands it and then generates new data from it without carrying over personal data. In healthcare, for example, the product can input patient data and scrub personal information like names and locations, while keeping the patterns in the data. The outputs can then be fed into machine learning algorithms.

“It keeps the data anonymous,” Capps said.

AI hardware

To help power increasingly complex AI models, more advanced hardware — or at least hardware designed specifically for AI workloads — is needed. Major companies, including Intel and Nvidia, have quickly stepped up to the challenge, but so, too, have numerous startups. Many are doing great work, but one stands out.

Cerebras Systems, a startup founded in 2016 and based in Los Altos, Calif., made headlines around the world in 2019 when it created what it dubbed the world’s largest computer chip designed for AI workloads. The chip, about the size of a dinner plate, has some 400,000 cores and 1.2 trillion transistors. By comparison, the largest GPU has around 21.1 billion transistors.

The company has shipped a limited number of chips so far, but with a valuation expected to be well over $1 billion, Cerebras looks to be going places.

Automatic transcription companies

It’s predicted that more businesses will use natural language processing (NLP) technology in 2020 and that more BI and AI vendors will integrate natural language search functions into their platforms in the coming years.

Numerous startups, as well as many established companies, sell transcription and text-capturing platforms. It’s hard to judge them, as their platforms and services are generally comparable; however, two companies stand out.

Fireflies.ai sells a transcription platform that syncs with users’ calendars to automatically join and transcribe phone meetings. According to CEO and co-founder Krish Ramineni, the platform can transcribe calls with over 90% accuracy after weeks of training.

The startup, founded in 2016, presents transcripts within a searchable and editable platform. The transcription is automatically broken into paragraphs and includes punctuation. Fireflies.ai also automatically extracts and bullets information it deems essential. This feature does “a fairly good job,” one client said earlier this year.

The startup plans to expand that function to automatically label more types of information, including tasks and questions.

Meanwhile, Trint, founded in late 2014 by former broadcast journalist Jeff Kofman, is an automatic transcription platform designed specifically for newsrooms, although it has clients across several verticals.

The platform can connect directly with live video feeds, such as the streaming of important events or live press releases, and automatically transcribe them in real time. Transcriptions are collaborative, as well as searchable and editable, and include embedded time codes to make it easy to jump back to the video.

“It’s a software with an emotional response, because people who transcribe generally hate it,” Kofman said.

Bots and virtual agents

As companies look to cut costs and process client requests faster, the use of chatbots and virtual agents has greatly increased across numerous verticals over the last few years. While there are many startups in this field, a couple stand out.

Boost.ai, a Scandinavian startup founded in 2016, sells an advanced conversational agent that it claims is powered by a neural network. Automatic semantic understanding technology sits on top of the network, enabling the agent to read textual input word by word, and then as a whole sentence, to understand user intent.

Agents are pre-trained on one of several verticals before they are trained on the data of a new client, and the Boost.ai platform is quick to set up and has a low count of false positives, according to co-founder Henry Vaage Iversen. It can generally understand the intent of most questions within a few weeks of training, and will find a close alternative if it can’t understand it completely, he said.

The platform supports 25 languages and offers pre-trained modules for a number of verticals, including the banking, insurance and transportation industries.
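The intent-matching behavior described above can be sketched in miniature. This is a hypothetical illustration, not Boost.ai's technology: a production system like the one described uses a neural network, whereas this toy matcher just scores word overlap against per-intent keyword sets and falls back to the closest alternative.

```python
# Hypothetical intent keyword sets for a banking vertical
INTENTS = {
    "check_balance": {"balance", "account", "much", "money"},
    "report_card_lost": {"lost", "card", "stolen", "block"},
}

def intent_score(utterance, keywords):
    """Fraction of an intent's keywords present in the utterance."""
    words = set(utterance.lower().split())
    return len(words & keywords) / len(keywords)

def classify(utterance):
    scores = {name: intent_score(utterance, kw) for name, kw in INTENTS.items()}
    best = max(scores, key=scores.get)
    # Return the closest match, or None when nothing fits at all.
    return best if scores[best] > 0 else None

print(classify("I lost my card yesterday"))  # report_card_lost
```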

Formed in 2018, EyeLevel.ai doesn’t create virtual agents or bots; instead, it has a platform for conversational AI marketing agents. The San Francisco-based startup has more than 1,500 chatbot publishers on its platform, including independent developers and major companies.

Eyelevel.ai is essentially a marketing platform — it advertises for numerous clients through the bots in its marketplace. Earlier this year, Eyelevel.ai co-founder Ryan Begley offered an example.

An independent developer on its platform created a bot that quizzes users on their Game of Thrones knowledge. The bot operates on social media platforms, and, besides providing a fun game for users, it also collects marketing data on them and advertises products to them. The data it collects is fed back into the Eyelevel platform, which then uses it to promote through its other bots.

By opening the platform to independent developers, it gives individuals a chance to get their bot to a broader audience while making some extra cash. Eyelevel.ai offers tools to help new bot developers get started, too.

“Really, the key goal of the business is help them make money,” Begley said of the developers.

Startup launches continuing to surge

This list of AI-related startups represents only a small percentage of the startups out there. Many offer unique products and services to their clients, and investors have widely picked up on that.

According to the comprehensive AI Index 2019 report, a nearly 300-page report on AI trends compiled by the Human-Centered Artificial Intelligence initiative at Stanford University, global private AI investment in startups reached $37 billion in 2019 as of November.

The report notes that since 2010, which saw $1.3 billion raised, investments in AI startups have increased at an average annual growth rate of over 48%.

The report, which considered only AI startups with more than $400,000 in funding, also found that more than 3,000 AI startups received funding in 2018. That number is on the rise, the report notes.


How should organizations approach API-based SIP services?

Many Session Initiation Protocol features are now available through open APIs for a variety of platforms. While voice over IP refers only to voice calls, SIP encompasses the setup and release of all calls, whether they are voice, video or a combination of the two.

Because SIP establishes and tears down call sessions, it brings multiple tools into play. SIP services enable the use of multimedia, VoIP and messaging, and can be incorporated into a website, program or mobile application in many ways.

The APIs available range from application-specific APIs to native programming languages, such as Java or Python, for web-based applications. Some newer interfaces are operating system-specific for Android and iOS. SIP is an open protocol, which makes most features available natively regardless of the SIP vendor. However, the features and implementations for SIP service APIs are specific to the API vendor. 

Some of the more promising features include the ability to create a call during the shopping experience or from the shopping cart at checkout. This enables customer service representatives and customers to view the same product and discuss and highlight features within a browser, creating an enhanced customer shopping experience.
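Under the hood, an API that creates such a call ultimately issues a SIP INVITE to set up the session. The sketch below shows the shape of that request per RFC 3261; it is illustrative only (the addresses, tags and Call-ID are made up), and real SDKs hide these details behind a single "create call" method.

```python
def build_invite(caller, callee, call_id):
    """Assemble a minimal SIP INVITE request (RFC 3261 framing)."""
    lines = [
        f"INVITE sip:{callee} SIP/2.0",
        "Via: SIP/2.0/UDP client.example.com;branch=z9hG4bK776asdhds",
        "Max-Forwards: 70",
        f"From: <sip:{caller}>;tag=1928301774",
        f"To: <sip:{callee}>",
        f"Call-ID: {call_id}",
        "CSeq: 314159 INVITE",
        "Content-Length: 0",
    ]
    # SIP lines end in CRLF; a blank line separates headers from the body.
    return "\r\n".join(lines) + "\r\n\r\n"

msg = build_invite("agent@shop.example.com", "customer@example.net", "a84b4c76")
print(msg.splitlines()[0])  # INVITE sip:customer@example.net SIP/2.0
```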

The type of API will vary based on which offerings you use. Before issuing a request for a quote, issue a request for information (RFI) to learn what kinds of SIP service APIs a vendor has to offer. While this step takes time, it will allow you to determine what is available and what you want to use. You will want to determine the platform or platforms you wish to support. Some APIs may be more compatible with specific platforms, which will require some programming to work with other platforms.

Make sure to address security in your RFI.  Some companies will program your APIs for you. If you don’t have the expertise, or aren’t sure what you’re looking for, then it’s advantageous to meet with some of those companies to learn what security features you need. 


Clumio eyes security, BaaS expansion with VC funding

Merging storage and security together effectively has been an elusive goal for many technology vendors over the years, but Clumio believes it has a winning formula — and one that can effectively mitigate ransomware threats.

Clumio, a backup-as-a-service provider based in Santa Clara, Calif., recently celebrated $135 million in Series C funding. The startup was founded in 2017 with the goal of leveraging cloud-native services to build a scalable and agile BaaS offering that could also meet enterprises’ data protection and analytics needs.

In this Q&A, Clumio CTO Chad Kinney and CSO Glenn Mulvaney discuss the origin story of the company, how they plan to utilize their recent funding round, and how Clumio addresses ransomware threats.

Editor’s note: This interview has been edited for length and clarity.

Tell me how the company was founded.

Chad Kinney: The company was founded about two years ago. And the core concept behind it was to fundamentally remove the complexity of traditional data protection to start with, and do so by delivering a service offering that was delivered via the public cloud.

A few things we realized early on were, as customers were journeying to the public cloud, SaaS-based offerings and PaaS-based offerings, they needed a way to be able to protect their data set along the way. And we realized that people were running into roadblocks in moving data to the public cloud because data protection was not able to deliver the same type of functions and features that they had on premises, and there was a big barrier there that we were breaking through to help customers be able to journey along to the public cloud.

The second part was, as we got to the public cloud, security became a big key focus. Our ability to be able to secure this information through both encryption at rest and encryption in flight, as well as the various other measures Glenn will go through on the core platform itself, was something that customers were very much hyper-focused on as they moved data more and more into the public cloud.

So far we’ve raised about $186 million in a series of A, B and C. Most recently we just closed a series C of $135 million.

How do you plan to use that $135 million to grow the company?

Kinney: A lot of the key focus right now is expediting the introduction of new data sources for the platform itself. Today we back up VMware on premises, VMware running in AWS, as well as elastic block storage for AWS. And so, continuing to expand the data sources is a key thing we’re moving forward with as part of this investment — to get customers access to new data sources faster.

Give me a rundown of what the platform is all about.

Kinney: Fundamentally, we’ve built this platform for the public cloud, on top of AWS. We’ve built in a bunch of great efficiencies in the way the data is ingested. With anything that runs on the public cloud, if you compare that with something that runs on premises, typically you do deduplication and security is retrofitted to the data center itself. And the world has shifted dramatically where people are looking to utilize the public cloud heavily and remove things completely out of the data center. We were able to provide what we call a cloud connector that gets deployed in a customer’s environment — it’s a virtual appliance so there’s no hardware or anything like that. We do deduplication and compression and encryption before the data is sent over the wire. We leverage the capabilities of S3 within Amazon, and we use their scale as data gets ingested over the platform itself. Then we use various stateless functions within the platform to churn through the data, as well as DynamoDB for a lot of the metadata functions and various other structures in AWS, and the agility and scale of that core platform allow us to ingest data incredibly quickly and provide services on top of that platform.
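The client-side pipeline Kinney describes — deduplicate, compress, then send — can be sketched in a few lines. This is a hedged illustration of the general technique, not Clumio's implementation: chunks are fingerprinted with a hash for deduplication and compressed before anything leaves the environment (the encryption step is omitted here, since it would require a third-party library).

```python
import hashlib
import zlib

def prepare_backup(data, chunk_size, seen_hashes):
    """Split data into fixed-size chunks; return compressed copies of
    only the chunks not already backed up (tracked via seen_hashes)."""
    to_send = []
    for i in range(0, len(data), chunk_size):
        chunk = data[i:i + chunk_size]
        digest = hashlib.sha256(chunk).hexdigest()
        if digest not in seen_hashes:             # deduplication
            seen_hashes.add(digest)
            to_send.append(zlib.compress(chunk))  # compression before the wire
    return to_send

seen = set()
first = prepare_backup(b"A" * 4096 + b"B" * 4096, 4096, seen)
second = prepare_backup(b"A" * 4096 + b"C" * 4096, 4096, seen)
print(len(first), len(second))  # 2 1 -- the repeated "A" chunk is deduped
```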

Glenn Mulvaney: From the security side, leveraging a lot of those public cloud controls we have in Amazon, we’ve implemented a model where data encryption is always on in the platform. It’s not an option to turn it off and data is always encrypted and compressed. And the way it starts, which I think is a critical feature of the platform, is that the data is encrypted before it leaves the customer environment; it’s encrypted in the customer environment, it’s transmitted over a secure channel and then it’s stored securely in S3. And there’s different encryption keys used in each of those steps.

In terms of security in a more general fashion, we think of it in a couple of different ways. Fundamentally, we think of it as technology, people and processes, so we’ve talked about the technology a little bit in terms of how we handle encryption, but for the people and the processes, what we have implemented is the ISO 27001 framework, and we just completed our stage 2 audit last week. The ISO 27001 framework gives us a solid foundation for principles and controls for internal processes, and it also guided how we trained our employees about security awareness. We really used that as a guideline to integrate a lot of security into our software development lifecycle and into our QA lifecycle and broadly across all of the employees at the company, including sales and marketing and customer success.

Do you see yourself as more of a security vendor or a backup vendor or both?

Kinney: I’d say a little bit of both. I’d say we’re a security-first company where we really spent a lot of time thinking about what we’re doing as a core platform setting ourselves up for success. If you had to put a name on it, I’d say we’re more of a data platform company than anything.

What effects have ransomware attacks had on the backup and data protection market in general?

Mulvaney: I think with the prevalence of ransomware attacks happening at all levels of organizations of all sizes, people are thinking a lot more seriously about their data protection and about their ability to recover from some sort of ransomware attack. I think there’s certainly a lot of opportunity for Clumio to help a lot of organizations like that and to be able to give them a truly secure ability to recover from something like a ransomware attack. Certainly the prevalence of these [attacks] is increasing at a rate we hadn’t anticipated, and I think that’s helping in the market for data protection to actually drive people to think much more seriously about what their backup compliance policies look like.

How does Clumio address ransomware threats in a way that’s different from other backup providers?

Kinney: Let me give you the most recent example, which is an interesting one. We recently announced the capability to back up elastic block storage from AWS, and when you look at the solutions that are out there today, most people protect data with snapshots, and the snapshots live in the same account as the production data. Most people rely on these snapshots for quick recovery, but they’re also relying on them for the backup. And when malware or a bad actor hits that particular account, they functionally get access to both the production data and the backup of that data in the same account, and so it’s opened up possibilities for people to run into data loss issues.

With our solution, what we’re fundamentally doing is copying the data and creating an air gap between the customer’s environment and Clumio, which enables people to protect their data outside of their account and protect it from malware and ransomware attacks. We store all data in S3, which is immutable, so no data, once backed up, can be changed in any fashion. It gives customers the ability, with our recovery mechanism, to restore data into another AWS account, alleviating any sort of malware issues that may occur within one of their other AWS accounts.

What do the next 12 months look like for the company?

Kinney: The motivation for us is to continue to expand more and more into the public cloud. Today we solve the key focus around private cloud, which is VMware. As people are moving to the public cloud, some are choosing to use VMware running in AWS, which is a push-button way to quickly move assets into the public cloud. They’re also re-architecting applications for the public cloud, using elastic block storage and other platform- and service-based offerings. We are going to continue to expand into SaaS-based offerings — the usual suspects there — as well as more and more cloud-native capabilities, so we can follow customers along that journey.

Beyond the additional data sources, we’re adding additional functions on top of those datasets; we’re investing in things like anomaly detection and reporting over the next 12 months and we are slowly bringing those into the platform as they come to bear.

Mulvaney: From the compliance side in 2020, obviously we’re looking closely at CCPA [the California Consumer Privacy Act], and with that going into effect on January 1, we’re going to see more emerging standards and certifications for the protection and handling of personal information. ISO 27001 was revised in 2019, and previously it had only been revised in 2014, so I think protection of personal data is going to be a paramount part of our roadmap. In 2020 we’re looking very closely at obtaining HITRUST certification and beginning implementation for FedRAMP.
