Citrix patches vulnerability as ransomware attacks emerge

A new round of Citrix patches arrived Thursday for the vendor’s Application Delivery Controller and Gateway products as reports of ransomware attacks targeting vulnerable systems emerged.

The directory traversal flaw allows an unauthenticated party to perform arbitrary code execution. Originally, the Citrix patches were scheduled for release later this month, but last week the vendor accelerated the delivery and issued the first round of patches. Thursday’s patches are for Citrix ADC and Citrix Gateway versions 12.1 and 13.0. A fix for version 10.5 of the products is scheduled for release Friday.

The vulnerability, CVE-2019-19781, was disclosed in December before Citrix had an opportunity to develop fixes. Fermin Serna, CISO at Citrix, previously told SearchSecurity that the company decided to disclose the vulnerability at that time because it had received three separate reports of the flaw within two days, which indicated the risk of exploitation was higher than normal.

In a blog post, Serna urged customers to immediately apply the Citrix patches and also advised customers to take advantage of a free scanning tool, co-developed with FireEye Mandiant, designed to detect indicators of compromise in customer environments running ADC, Gateway and SD-WAN WANOP products.

It’s unclear how many unpatched systems are currently online. Security researcher Victor Gevers, who is also chair of the Dutch Institute for Vulnerability Disclosure, said via Twitter that his public scans showed the number of vulnerable Citrix systems on the internet fell to 11,372 Thursday from a high of 128,777 on Dec. 31. Gevers’ research showed that many of the vulnerable systems during that stretch either “powered down” or applied temporary mitigations in lieu of patches.

Ransomware attacks reported

As Citrix rolled out the latest patches, two separate reports of ransomware detections on vulnerable systems emerged. On Thursday, FireEye threat analyst Andrew Thompson noted on Twitter that he had observed a threat actor using the Citrix vulnerability to gain initial access to a network and then pivoting to the Windows environment to attempt a ransomware infection. “If you haven’t already begun mitigating, you really need to consider the ramifications,” Thompson wrote.

On Friday, an anonymous security researcher known as “Under the Breach” also reported a potential exploit of CVE-2019-19781 in a Sodinokibi ransomware attack on German automotive manufacturer Gedia. Under the Breach said via Twitter that an analysis of data released by the Sodinokibi threat actors, in retaliation for Gedia’s refusal to pay the ransom, showed the company had unpatched versions of Citrix ADC.

While Under the Breach said he believed CVE-2019-19781 was used in the attack, it’s unclear if the data released by Sodinokibi is authentic, or if the Citrix vulnerability was used to infect Gedia with ransomware.

Threat actors scanning for vulnerable Citrix ADC servers

An unpatched vulnerability in Citrix Application Delivery Controller and Citrix Gateway products has become the target of scans by potential threat actors.

Kevin Beaumont, a security researcher based in the U.K., and Johannes Ullrich, fellow at the SANS Internet Storm Center, independently discovered evidence of people scanning for Citrix ADC and Gateways vulnerable to CVE-2019-19781 over the past week.

Citrix disclosed the vulnerability, which affects all supported versions of Citrix ADC and Citrix Gateway (formerly NetScaler and NetScaler Gateway, respectively), on Dec. 17. Citrix warned that successful exploitation could allow an unauthenticated attacker to run arbitrary code and urged customers to apply mitigation techniques because a patch is not yet available.

Beaumont warned this could “become a serious issue” because of the ease of exploitation and how widespread the issue could be.

“In my Citrix ADC honeypot, CVE-2019-19781 is being probed with attackers reading sensitive credential config files remotely using ../ directory traversal (a variant of this issue). So this is in the wild, active exploitation starting up,” Beaumont wrote on Twitter. “There are way more boxes exposed than Pulse Secure, and you can exploit to RCE pre-auth with one POST and one GET request. Almost every box is also still vulnerable.”

Researchers at Positive Technologies have estimated as many as 80,000 businesses in 158 countries could have vulnerable Citrix products.

Neither Beaumont nor Ullrich saw any public exploits of the Citrix ADC vulnerability, and Ullrich wrote in a blog post that he would not describe the scans as “sophisticated.”

However, Craig Young, computer security researcher for Tripwire’s Vulnerability and Exposure Research Team (VERT), wrote on Twitter that he had reproduced a remote code execution exploit for the vulnerability and would “be surprised if someone hasn’t already used this in the wild.”

Florian Roth, CTO of Nextron Systems, detailed a Sigma rule to detect exploitation of the Citrix ADC vulnerability, but Young noted that his functional exploit could “absolutely exploit NetScaler CVE-2019-19781 without leaving this in the logs.”
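
Roth’s rule keys on the telltale traversal string that exploit attempts leave in web server logs. As a rough illustration of that detection idea (a hypothetical sketch, not the Sigma rule itself), a few lines of Python can flag log entries carrying the widely published “/../vpns/” indicator; as Young’s tweet makes clear, an empty result proves nothing.

    import re
    import sys

    # Hypothetical triage sketch: flag requests containing the published
    # CVE-2019-19781 traversal indicator. Capable attackers can avoid
    # leaving this string behind, so absence of hits is not a clean bill.
    PATTERN = re.compile(r"/\.\./vpns/", re.IGNORECASE)

    def suspicious_lines(log_path):
        with open(log_path, errors="replace") as handle:
            return [line.rstrip() for line in handle if PATTERN.search(line)]

    if __name__ == "__main__":
        for hit in suspicious_lines(sys.argv[1]):  # path to an access log
            print(hit)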

Young described how he developed the exploit but did not release any proof-of-concept code.

“VERT’s research has identified three vulnerable behaviors which combine to enable code execution attacks on the NetScaler/ADC appliance,” Young wrote in a blog post. “These flaws ultimately allow the attacker to bypass an authorization constraint to create a file with user-controlled content which can then be processed through a server-side scripting language. Other paths towards code execution may also exist.”

All researchers involved urged customers to implement configuration changes detailed in Citrix’s mitigation suggestions while waiting for a proper fix.

Citrix did not respond to requests for comment at the time of this writing and it is unclear when a firmware update will be available to fix the issue.

Hyper-V Virtual CPUs Explained

Did your software vendor indicate that you can virtualize their application, but only if you dedicate one or more CPU cores to it? Not clear on what happens when you assign CPUs to a virtual machine? You are far from alone.

Note: This article was originally published in February 2014. It has been fully updated to be relevant as of November 2019.

Introduction to Virtual CPUs

Like all other virtual machine “hardware”, virtual CPUs do not exist. The hypervisor uses the physical host’s real CPUs to create virtual constructs to present to virtual machines. The hypervisor controls access to the real CPUs just as it controls access to all other hardware.

Hyper-V Never Assigns Physical Processors to Specific Virtual Machines

Make sure that you understand this section before moving on. Assigning 2 vCPUs to a system does not mean that Hyper-V plucks two cores out of the physical pool and permanently marries them to your virtual machine. You cannot assign a physical core to a VM at all. So, does this mean that you just can’t meet that vendor request to dedicate a core or two? Well, not exactly. More on that toward the end.

Understanding Operating System Processor Scheduling

Let’s kick this off by looking at how CPUs are used in regular Windows. Here’s a shot of my Task Manager screen:

Task Manager

Nothing fancy, right? Looks familiar, doesn’t it?

Now, back when computers never, or almost never, shipped as multi-CPU, multi-core boxes, we all knew that computers couldn’t really multitask. They had one CPU with one core, so there was only one possible active thread of execution. But aside from the fancy graphical updates, Task Manager then looked pretty much like Task Manager now. You had a long list of running processes, each with a metric indicating what percentage of the CPU’s time it was using.

Then, as now, each line item you see represents a process (or, new in recent Task Manager versions, a process group). A process consists of one or more threads. A thread is nothing more than a sequence of CPU instructions (keyword: sequence).

What happens is that the operating system (in Windows, this started with Windows 95 and NT) stops a running thread, preserves its state, and then starts another thread. After a bit of time, it repeats those operations for the next thread. We call this pre-emptive multitasking, meaning that the operating system decides when to suspend the current thread and switch to another. You can set priorities that affect how a process’s threads rate against others, but the OS is in charge of thread scheduling.
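
You can watch pre-emption happen with nothing more than a short script. This minimal Python sketch starts more compute threads than most machines have cores; the scheduler decides when each one runs, so their output typically interleaves:

    import threading

    def work(name, steps=3):
        # A thread is just a sequence of instructions; the scheduler
        # decides when to pause this sequence and run another one.
        for step in range(steps):
            print(f"{name}: step {step}")

    threads = [threading.Thread(target=work, args=(f"thread-{i}",)) for i in range(4)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()  # output from the four threads typically interleaves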

Today, almost all computers have multiple cores, so Windows can truly multi-task.

Taking These Concepts to the Hypervisor

Because of its role as a thread manager, Windows can be called a “supervisor” (very old terminology that you really never see anymore): a system that manages processes that are made up of threads. Hyper-V is a hypervisor: a system that manages supervisors that manage processes that are made up of threads.

Task Manager doesn’t work the same way for Hyper-V, but the same thing is going on. There is a list of partitions, and inside those partitions are processes and threads. The thread scheduler works pretty much the same way, something like this:

Hypervisor Thread Scheduling

Of course, a real system will always have more than nine threads running. The thread scheduler will place them all into a queue.
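
To make the queue concrete, here is a toy round-robin model of that diagram: nine runnable threads, two logical processors, and pre-empted threads rejoining the back of the line. (Deliberately oversimplified; as discussed below, the real scheduler also weighs NUMA, priority, and more.)

    from collections import deque

    threads = deque(f"thread-{i}" for i in range(1, 10))  # nine runnable threads
    CORES = 2

    for time_slice in range(5):
        running = [threads.popleft() for _ in range(CORES)]  # one thread per core
        print(f"slice {time_slice}: {running}")
        threads.extend(running)  # pre-empted threads go to the back of the queue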

What About Processor Affinity?

You probably know that you can affinitize threads in Windows so that they always run on a particular core or set of cores. You cannot do that in Hyper-V. Doing so would have questionable value anyway; dedicating a thread to a core is not the same thing as dedicating a core to a thread, which is what many people really want to try to do. You can’t prevent a core from running other threads in the Windows world or the Hyper-V world.

How Does Thread Scheduling Work?

The simplest answer is that Hyper-V makes the decision at the hypervisor level. It doesn’t really let the guests have any input. Guest operating systems schedule the threads from the processes that they own. When they choose a thread to run, they send it to a virtual CPU. Hyper-V takes it from there.

The image that I presented above is necessarily an oversimplification, as it’s not simple first-in-first-out. NUMA plays a role, for instance. Really understanding this topic requires a fairly deep dive into some complex ideas. Few administrators require that level of depth, and exploring it here would take this article far afield.

The first thing that matters: affinity aside, you never know where any given thread will execute. A thread that was paused to yield CPU time to another thread may very well be assigned to another core when it is resumed. Did you ever wonder why an application consumes right at 50% of a dual-core system and each core looks like it’s running at 50% usage? That behavior indicates a single-threaded application. Each time the scheduler executes it, it consumes 100% of the core that it lands on. The next time it runs, it stays on the same core or goes to the other core. Whichever core the scheduler assigns it to, it consumes 100%. When Task Manager aggregates its performance for display, that’s an even 50% utilization — the app uses 100% of 50% of the system’s capacity. Since the core not running the app remains mostly idle while the other core tops out, they cumulatively amount to 50% utilization for the measured time period. With the capabilities of newer versions of Task Manager, you can now instruct it to show the separate cores individually, which makes this behavior far more apparent.
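
You can reproduce the effect yourself. This single-threaded busy loop will saturate whichever core it lands on; on a dual-core machine, Task Manager’s aggregate graph settles right around 50%:

    # Single-threaded CPU hog. One thread, 100% of one core at a time,
    # roughly 50% aggregate on a dual-core machine. Stop it with Ctrl+C.
    while True:
        pass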

Now we can move on to a look at the number of vCPUs assigned to a system and priority.

What Does the Number of vCPUs Assigned to a VM Really Mean?

You should first notice that you can’t assign more vCPUs to a virtual machine than you have logical processors in your host.

Invalid CPU Count

So, a virtual machine’s vCPU count means this: the maximum number of threads that the VM can run at any given moment. I can’t set the virtual machine from the screenshot to have more than two vCPUs because the host only has two logical processors. Therefore, there is nowhere for a third thread to be scheduled. But, if I had a 24-core system and left this VM at 2 vCPUs, then it would only ever send a maximum of two threads to Hyper-V for scheduling. The virtual machine’s thread scheduler (the supervisor) will keep its other threads in a queue, waiting for their turn.

But Can’t I Assign More Total vCPUs to all VMs than Physical Cores?

Yes, the total number of vCPUs across all virtual machines can exceed the number of physical cores in the host. It’s no different than the fact that I’ve got 40+ processes “running” on my dual-core laptop right now. I can only run two threads at a time, but I will always have far more than two threads scheduled. Windows has been doing this for a very long time now, and Windows is so good at it (usually) that most people never see a need to think through what’s going on. Your VMs (supervisors) will bubble up threads to run and Hyper-V (hypervisor) will schedule them (mostly) the way that Windows has been scheduling them ever since it outgrew cooperative scheduling in Windows 3.x.

What’s The Proper Ratio of vCPU to pCPU/Cores?

This is the question that’s on everyone’s mind. I’ll tell you straight: in the generic sense, this question has no answer.

Sure, way back when, people said 1:1. Some people still say that today. And you know, you can do it. It’s wasteful, but you can do it. I could run my current desktop configuration on a quad 16-core server and I’d never get any contention. But I probably wouldn’t see much performance difference. Why? Because almost all my threads sit idle almost all the time. If something needs 0% CPU time, what does giving it its own core do? Nothing, that’s what.

Later, the answer was upgraded to 8 vCPUs per 1 physical core. OK, sure, good.

Then it became 12.

And then the recommendations went away.

They went away because no one really has any idea. The scheduler will evenly distribute threads across the available cores, so the number of physical CPUs needed doesn’t depend on how many virtual CPUs there are. It depends entirely on what the operating threads need. And even if you’ve got a bunch of heavy threads going, that doesn’t mean their systems will die as they get pre-empted by other heavy threads. The necessary vCPU/pCPU ratio depends entirely on the CPU load profile and your tolerance for latency. Multiple heavy loads require a low ratio. A few heavy loads work well with a medium ratio. Light loads can run on a high-ratio system.

I’m going to let you in on a dirty little secret about CPUs: every single time a thread runs, no matter what it is, it drives the CPU at 100% (power throttling changes the clock speed, not workload saturation). The CPU is a binary device; it’s either processing or it isn’t. When your performance metric tools show you 100% or 20% or 50% or whatever number, they calculate it from a time measurement. If you see 100%, the CPU was processing during the entire measured span of time. 20% means it was running a process 1/5th of the time and sat idle the other 4/5ths. That doesn’t mean a single thread monopolizes the CPU for the whole span; Windows/Hyper-V will pre-empt it whenever they want to run another thread, which is why you can have multiple “100%” CPU threads running on the same system. Even so, a system can only act responsively when it has some idle time, meaning that most threads will simply let their time slice go by. That allows other threads to access cores more quickly. When you have multiple threads always queuing for active CPU time, the overall system becomes less responsive because threads must wait. Adding cores addresses this concern by spreading the workload out.
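
The arithmetic behind those percentages is as plain as it sounds:

    # Utilization is a time ratio, not a throttle level. A core that was
    # executing instructions for 20 ms of a 100 ms sample reads as 20%,
    # even though every busy instant ran the core flat out.
    busy_ms, sample_ms = 20, 100
    print(f"{100 * busy_ms / sample_ms:.0f}%")  # -> 20%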

The upshot: if you want to know how many physical cores you need, then you need to know the performance profile of your actual workload. If you don’t know, then start from the earlier 8:1 or 12:1 recommendations.

What About Reserve and Weighting (Priority)?

I don’t recommend that you tinker with CPU settings unless you have a CPU contention problem to solve. Let the thread scheduler do its job. Just as setting CPU priorities on threads in Windows can cause more problems than it solves, fiddling with hypervisor vCPU settings can make everything worse.

Let’s look at the config screen:

vCPU Settings

The first group of boxes is the reserve. The first box represents the percentage of the VM’s allowed number of vCPUs to set aside, so its actual meaning depends on the number of vCPUs assigned to the VM. The second, grayed-out box shows the total percentage of host resources that Hyper-V will set aside for this VM. In this case, I have a 2 vCPU system on a dual-core host, so the two boxes will be the same. If I set a 10 percent reserve, that’s 10 percent of the total physical resources. If I drop the allocation down to 1 vCPU, then a 10 percent reserve becomes 5 percent of the physical total. The second box is auto-calculated as you adjust the first.
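
The relationship between the two boxes is simple proportion. A quick sketch (the function name is mine, not a Hyper-V API):

    def host_reserve_percent(vcpus, vm_reserve_percent, host_logical_processors):
        # The reserve you type is a percentage of the VM's own vCPUs;
        # the grayed-out box scales that to the whole host.
        return vm_reserve_percent * vcpus / host_logical_processors

    print(host_reserve_percent(2, 10, 2))   # 2 vCPUs, dual-core host -> 10.0
    print(host_reserve_percent(1, 10, 2))   # drop to 1 vCPU -> 5.0
    print(host_reserve_percent(1, 100, 4))  # 1 vCPU fully reserved, quad-core host -> 25.0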

The reserve is a hard minimum… sort of. If the total of all reserve settings of all virtual machines on a given host exceeds 100%, then at least one virtual machine won’t start. But, if a VM’s reserve is 0%, then it doesn’t count toward the 100% at all (seems pretty obvious, but you never know). But, if a VM with a 20% reserve is sitting idle, then other processes are allowed to use up to 100% of the available processor power… until such time as the VM with the reserve starts up. Then, once the CPU capacity is available, the reserved VM will be able to dominate up to 20% of the total computing power. Because time slices are so short, it’s effectively like it always has 20% available, but it does have to wait like everyone else.

So, that vendor that wants a dedicated CPU? If you really want to honor their wishes, this is how you do it. You enter whatever number in the top box that makes the second box show the equivalent processor power of however many pCPUs/cores the vendor thinks they need. If they want one whole CPU and you have a quad-core host, then make the second box show 25%. Do you really have to? Well, I don’t know. Their software probably doesn’t need that kind of power, but if they can kick you off support for not listening to them, well… don’t get me in the middle of that. The real reason virtualization densities never hit what the hypervisor manufacturers say they can do is because of software vendors’ arbitrary rules, but that’s a rant for another day.

The next two boxes are the limit. Now that you understand the reserve, you can understand the limit. It’s a resource cap. It keeps a greedy VM’s hands out of the cookie jar. The two boxes work together in the same way as the reserve boxes.

The final box is the priority weight. As indicated, this is relative. Every VM set to 100 (the default) has the same pull with the scheduler, but they’re all beneath all the VMs that have 200 and above all the VMs that have 50, and so on. If you’re going to tinker, weight is safer than fiddling with reserves because you can’t ever prevent a VM from starting by changing relative weights. What the weight means is that when a bunch of VMs present threads to the hypervisor thread scheduler at once, the higher-weighted VMs go first.

But What About Hyper-Threading?

Hyper-Threading allows a single core to operate two threads at once — sort of. The core can only actively run one of the threads at a time, but if that thread stalls while waiting for an external resource, then the core operates the other thread. AMD has recently added a similar technology.

To kill one major misconception: Hyper-Threading does not double the core’s performance ability. Synthetic benchmarks show a high-water mark of a 25% improvement. More realistic measurements show closer to a 10% boost. An 8-core Hyper-Threaded system does not perform as well as a 16-core non-Hyper-Threaded system. It might perform almost as well as a 9-core system.

With the so-called “classic” scheduler, Hyper-V places threads on the next available core as described above. With the core scheduler, introduced in Hyper-V 2016, Hyper-V now prevents threads owned by different virtual machines from running side-by-side on the same core. It will, however, continue to pre-empt one virtual machine’s threads in favor of another’s. We have an article that deals with the core scheduler.

Making Sense of Everything

I know this is a lot of information. Most people come here wanting to know how many vCPUs to assign to a VM or how many total vCPUs to run on a single system.

Personally, I assign 2 vCPUs to every VM to start. That gives it at least two places to run threads, which gives it responsiveness. On a dual-processor system, it also ensures that the VM automatically has a presence on both NUMA nodes. I do not assign more vCPU to a VM until I know that it needs it (or an application vendor demands it).

As for the ratio of vCPU to pCPU, that works mostly the same way. There is no formula or magic knowledge that you can simply apply. If you plan to virtualize existing workloads, then measure their current CPU utilization and tally it up; that will tell you what you need to know. Microsoft’s Assessment and Planning Toolkit might help you. Otherwise, you simply add resources and monitor usage. If your hardware cannot handle your workload, then you need to scale out.


Author: Eric Siron

Threat Stack Application Security Monitoring adds Python support

Threat Stack has announced Python support for its Threat Stack Application Security Monitoring product. The update comes with no additional cost as part of the Threat Stack Cloud Security Platform.

With Python support for Application Security Monitoring, Threat Stack customers who use Python with Django and Flask frameworks can ensure security in the software development lifecycle with risk identification of both third-party and native code, according to Tim Buntel, vice president of application security products at Threat Stack.

In addition, the platform provides built-in capabilities to help developers learn secure coding practices, as well as real-time attack blocking, according to the company.

“Today’s cloud-native applications are comprised of disparate components, including containers, virtual machines and scripts, including those written in Python, that serve as the connective tissue between these elements,” said Doug Cahill, senior analyst and group practice director for cybersecurity at Enterprise Strategy Group. Hence, the lack of support for any one layer of a stack means a lack of visibility and a vulnerability an attacker could exploit.

Application Security Monitoring is a recent addition to Threat Stack Cloud Security Platform. Introduced last June, the platform is aimed at bringing visibility and protection to cloud-based architecture and applications. Threat Stack Cloud Security Platform touts the ability to identify and block attacks such as cross-site scripting (XSS) and SQL injection by putting the application in context with the rest of the stack. When an attack happens, it also allows users to move with one click from the application to the container or the host where it is deployed, according to the company.
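
Threat Stack has not detailed its agent internals here, but the general shape of runtime application security on a Python web stack is familiar: hook the request path and inspect inputs before application code runs. A deliberately simplistic Flask sketch of that pattern follows (the single regex stands in for far richer detection logic; this is not Threat Stack’s actual mechanism):

    import re
    from flask import Flask, abort, request

    app = Flask(__name__)

    # Toy signature for a classic SQL injection probe. A real agent uses
    # instrumentation and full-stack context, not one regular expression.
    SQLI_PROBE = re.compile(r"('|\")\s*(or|and)\s+\d+\s*=\s*\d+", re.IGNORECASE)

    @app.before_request
    def inspect_inputs():
        for value in request.args.values():
            if SQLI_PROBE.search(value):
                abort(403)  # block the request before application code runs

    @app.route("/")
    def index():
        return "ok"

    if __name__ == "__main__":
        app.run()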

“[Application Security Monitoring] … provides customers with full stack security observability by correlating security telemetry from the cloud management console, host, containers and applications in a single, unified platform,” Buntel said.

To achieve full stack security and insights from the cloud management console, host, containers, orchestration and applications, customers can combine Threat Stack Application Security Monitoring with the rest of the Threat Stack Cloud Security Platform, according to the company.

Cahill said customers should look for coverage of the technology stack as well as the lifecycle when looking to secure cloud-native applications, because such full stack and lifecycle support allows for threat detection and prevention capabilities “from the code level down to the virtual machine or container to be implemented in both pre-deployment stages and runtime.”

“Cloud security platforms, which integrate runtime application self-protection functionality with cloud workload protection platforms to provide full-stack and full lifecycle visibility and control, are just now being offered by a handful of cybersecurity vendors, including Threat Stack,” he added.

Threat Stack Application Security Monitoring for Python is available as of Wednesday.

Threat Stack competitors include CloudPassage, Dome9 and Sophos. CloudPassage Halo is a security automation platform delivering visibility, protection and compliance monitoring for cybersecurity risks; the platform also covers risks in Amazon Web Services and Azure deployments, according to the company. CloudGuard Dome9 is a software platform for public cloud security and compliance orchestration; the platform helps customers assess their security posture, detect misconfigurations and enforce security best practices to prevent data loss, according to the company. Sophos Intercept X enables organizations to detect blended threats that merge automation and human hacking skills, according to the company.

ServiceNow adds mobile app to ‘New York’ Now Platform

ServiceNow rolled out the latest version of its flagship Now Platform highlighted by a mobile application that allows remote users to access core capabilities of the enterprise workflow product.

The Now Platform New York release, which works with Apple and Android devices, was motivated by the company’s own users, who increasingly demand mobile-optimized tools to make a wide range of tasks easier for remote users, from ordering computers to approving purchase orders to making travel requests.

While ServiceNow users wanted these remote capabilities, they didn’t want a slew of new applications to accomplish these tasks.

“We see enterprises bogged down by app overload, meaning there are just too many applications, each helping with separate workloads,” said CJ Desai, ServiceNow’s chief product officer. “We are trying to remove the friction associated with that as it relates to everyday work-related tasks.”

One analyst believes the timing of the New York Now Platform is fortuitous given the needs of not just the corporate world, but consumers’ growing need to use a number of mobile technologies.

“We are a mobile society in general, but with corporate customers’ increased focus on IT operations and LOBs, there are a lot of intersection points where it makes sense to drive more automated workloads from a mobile environment,” said Stephen Elliot, analyst at IDC.

Desai added that one of the goals of the new release is to create “consumer-like” mobile experiences for corporate users to make them more productive inside the office.

The New York release also includes a built-in onboarding application that works in concert with the mobile application, and that also taps into all the core capabilities of the Now Platform. The new offering combines all the necessary tasks that span multiple departments including IT, human resources, facilities, finance and legal as part of the process for bringing on new employees.

Business processes like onboarding employees are a big hassle for every company out there. The easier you make it to implement processes like that, the more you increase the value of the platform.
Stephen Elliot, vice president of management software and DevOps, IDC

Elliot said the addition of mobile technology to the Now Platform is an essential step in the maturity of the offering for both users and ServiceNow as a company, especially as the company continues to expand into new areas such as IT operations, finance and human resources — markets that demand a stronger mobile component.

“Different business processes like onboarding employees is a big hassle to almost every company out there,” Elliot said. “The easier you make it to implement processes like that increases the value of the platform.”

ServiceNow focusing on HR, finance workflows

Earlier this year, ServiceNow redoubled its efforts around customer workflows, focusing more on specific vertical markets as a way to expand the opportunities for its Now Platform.

“Most people know us for our IT workflows,” said Farrell Hough, senior vice president of customer workflow products at ServiceNow. “But we are segmenting that business out and leveraging the strengths we have in the human resources, financial and telco markets.”

ServiceNow has made improvements to the platform’s natural language understanding (NLU) by integrating it tightly with its Virtual Agent. Through NLU, workers can interact with the Virtual Agent by using simple terms to find the answers to problems themselves, rather than using the IT help desk.

Company officials said this capability works in tandem with ServiceNow’s existing Predictive Intelligence technology to improve the delivery of products and services to employees.

PayPal plans to use the new offering as a technology backbone to connect its engineers to all of its internal operations and create a centralized hub for resources including a number of self-service tools.

PayPal engineers use a number of applications to manage infrastructure, but the new version of the Now Platform provides an opportunity to have just one platform that connects all of its digital workflows across all of its systems of record and applications, according to a statement attributed to Dan Torunian, PayPal’s VP of employee technology.

The New York release is available now, with the ServiceNow Mobile and Onboarding applications also available for download from the Apple App Store and Google Play.

Satellite connectivity expands reach of Azure ExpressRoute across the globe

Staying connected to access and ingest data in today’s highly distributed application environments is paramount for any enterprise. Many businesses need to operate in and across highly unpredictable and challenging conditions. For example, energy, farming, mining, and shipping often need to operate in remote, rural, or other isolated locations with poor network connectivity.

With the cloud now the de facto and primary target for the bulk of application and infrastructure migrations, access from remote and rural locations becomes even more important. The path to realizing the value of the cloud starts with a hybrid environment that accesses resources over dedicated, private connectivity.

Network performance for these hybrid scenarios from rural and remote sites becomes increasingly critical. Globally connected organizations, the explosive number of connected devices and data in the cloud, emerging areas such as autonomous driving, and traditional remote locations such as cruise ships are all directly affected by connectivity performance. Other examples requiring highly available, fast and predictable network service include managing supply chain systems from remote farms or transferring data to optimize equipment maintenance in aerospace.

Today, I want to share the progress we have made to help customers address and solve these issues. Satellite connectivity addresses challenges of operating in remote locations.

Microsoft cloud services can be accessed with Azure ExpressRoute using satellite connectivity. With commercial satellite constellations becoming widely available, new solution architectures offer improved and affordable performance for accessing Microsoft cloud services.

High-level architecture of ExpressRoute and satellite integration

Microsoft Azure ExpressRoute, with one of the largest networking ecosystems in the public cloud, now includes satellite connectivity partners, bringing new options and coverage.

SES will provide dedicated, private network connectivity from any vessel, airplane, enterprise, energy or government site in the world to the Microsoft Azure cloud platform via its unique multi-orbit satellite systems. As an ExpressRoute partner, SES will provide global reach and fibre-like high-performance to Azure customers via its complete portfolio of Geostationary Earth Orbit (GEO) satellites, Medium Earth Orbit (MEO) O3b constellation, global gateway network, and core terrestrial network infrastructure around the world.

Intelsat’s customers are the global telecommunications service providers and multinational enterprises that rely on our services to power businesses and communities wherever their needs take them. Now they have a powerful new tool in their solutions toolkit. With the ability to rapidly expand the reach of cloud-based enterprises, accelerate customer adoption of cloud services, and deliver additional resiliency to existing cloud-connected networks, the benefits of cloud services are no longer limited to only a subset of users and geographies. Intelsat is excited to bring our global reach and reliability to this partnership with Microsoft, providing the connectivity that is essential to delivering on the expectations and promises of the cloud.

Viasat, a provider of high-speed, high-quality satellite broadband solutions to businesses and commercial entities around the world, is introducing Direct Cloud Connect service to give customers expanded options for accessing enterprise-grade cloud services. Azure ExpressRoute will be the first cloud service offered to enable customers to optimize their network infrastructure and cloud investments through a secure, dedicated network connection to Azure’s intelligent cloud services.

Microsoft wants to help accelerate these scenarios by optimizing connectivity through Microsoft’s global network, one of the largest and most innovative in the world.

ExpressRoute for satellites directly connects our partners’ ground stations to our global network using a dedicated private link. But what, more specifically, does this mean for our customers?

  • Using satellite connectivity with ExpressRoute provides dedicated and highly available, private access directly to Azure and Azure Government clouds.
  • ExpressRoute provides predictable latency through well-connected ground stations, and, as always, maintains all traffic privately on our network – no traversing of the Internet.
  • Customers and partners can harness Microsoft’s global network to rapidly deliver data to where it’s needed or augment routing to best optimize for their specific need.
  • Satellite and a wide selection of service providers will enable rich solution portfolios for cloud and hybrid networking solutions centered around Azure networking services.
  • With some of the world’s leading broadband satellite providers as partners, customers can select the best solution based on their needs. Each of the partners brings different strengths, for example, choices between Geostationary (GEO), Medium Earth Orbit (MEO) and, in the future, Low Earth Orbit (LEO) satellites, geographical presence, pricing, technology differentiation, bandwidth, and others.
  • ExpressRoute over satellite creates new channels and reach for satellite broadband providers, through a growing base of enterprises, organizations and public sector customers.

With this addition to the ExpressRoute partner ecosystem, Azure customers in industries like aviation, oil and gas, government, peacekeeping, and remote manufacturing can deploy new use cases and projects that increase the value of their cloud investments and strategy.

As always, we are very interested in your feedback and suggestions as we continue to enhance our networking services, so I encourage you to share your experiences and suggestions with us.

You can follow these links to learn more about our partners Intelsat, SES, and Viasat, and learn more about Azure ExpressRoute from our website and our detailed documentation.

Author: Microsoft News Center

Low-code goes mainstream to ease app dev woes

Low-code/no-code application development has gone mainstream as enterprises face pressure to turn out ever more applications without enough skilled developers.

A recent Forrester Research study showed that 23% of the 3,200 developers surveyed said their firms have adopted low-code development platforms, and another 22% said their organizations plan to adopt low-code platforms in the next year. That data was gathered in late 2018, so by the end of this year, those numbers should combine to be close to 50% of developers whose organizations have adopted low-code platforms, said John Rymer, an analyst at Forrester.

“That seems like mainstream to me,” he said, adding that low-code/no-code comes up routinely with his clients nowadays. In fact, low-code development could possibly be as impactful on the computing industry as the creation of the internet or IBM’s invention of the PC, he said.

The industry is on the cusp of a huge change to incorporate business people into the way software is built and delivered, Rymer said.

If you look ahead five years or so, we can see maybe 100 million people — business people — engaged in producing software.
John Rymer, analyst, Forrester Research

“If you believe that there are six million developers in the world and we believe there are probably a billion business people in the world, if you look ahead five years or so, we can see maybe 100 million people — business people — engaged in producing software,” he said. “And I think that is the change we’re all starting to witness.”

Meanwhile, Forrester said there are eight key reasons for enterprises to adopt low-code platforms:

  • Support product or service innovation.
  • Empower departmental IT to deliver apps.
  • Empower employees outside of IT to deliver apps.
  • Make the app development processes more efficient.
  • Develop apps more quickly.
  • Reduce costs of app development.
  • Increase the number of people who develop applications.
  • Develop unique apps for specific business needs.

The top three types of apps built with low-code tools are complete customer-facing apps (web or mobile), business process and workflow apps, and web or mobile front ends, Rymer said. Meanwhile, the top three departments using low-code are IT, customer service or call center, and digital business or e-commerce, he added.

Low-code landscape shaped by business users

Surging interest in low-code/no-code adoption comes not just to help increase developers’ productivity, but also to empower enterprise business users.

A Gartner report on the low-code space, released in August 2019, predicted that by 2024, 75% of large enterprises will use at least four low-code development tools for both IT application development and citizen development, and over 65% of applications will be developed with low-code technology. Upwork, the web platform for matching freelance workers with jobs, recently identified low-code development skills as rapidly gaining in popularity, particularly for developers familiar with Salesforce’s Lightning low-code tools to build web apps.

Low-code analyses from Gartner and Forrester in 2018 did not rank Microsoft as a leader, but the software giant shot up in the rankings with the latest release of its Power Platform and PowerApps low-code environment that broadly supports both citizen developers and professional developers. This helps bring the vast community of Visual Studio and Visual Studio Code developers into the fold, said Charles Lamanna, general manager of application platform at Microsoft.

Other low-code platform vendors have shifted focus to business users. A study commissioned by low-code platform vendor OutSystems showed results quite similar to the Gartner and Forrester analyses. Out of 3,300 developers surveyed, 41% of respondents said their organization already uses a low-code platform, and another 10% said they were about to start using one, according to the study.

Mendix now offers part of its product aimed at business people as well. With the Mendix platform, eXp Realty, a Bellingham, Wash., cloud-based real estate brokerage, cut its onboarding process for new agents from 18 steps down to nine, said Steve Ledwith, the company’s vice president of engineering.

Gartner’s latest low-code report includes OutSystems, Salesforce, Mendix, Microsoft and Appian among its leaders. The most recent Forrester Wave report on the low-code space, from March 2019, named the same core leaders but swapped out Appian for Kony.

The rising popularity of low-code/no-code platforms also means the marketplace itself is active. “Low-code platform leaders are growing fast and the smaller companies are finding a niche,” said Mike Hughes, principal platform evangelist at OutSystems.

Last year, Siemens acquired Mendix for $730 million. And just this week, Temenos, a Geneva-based banking software company, acquired Kony for $559 million, plus another $21 million if it meets unspecified goals. Both Temenos and Siemens said they acquired the low-code platforms to speed up their own internal application development, as well as to advance and sell the platforms to customers.

“We wanted to shore up our banking software with Kony’s low-code platform and particularly their own banking application built with their product,” said Mark Gunning, global business solutions director at Temenos. Kony also will help advance Temenos’ presence in the U.S., he added.

As enterprises rely more on these platforms to develop their applications, look for consolidation ahead in the low-code/no-code space. Gartner now tracks more than 200 companies that claim to serve the low-code market. Acquisitions such as these are another strong indicator that the market is maturing.

Salvation Army recruits low-code

Like many not-for-profits, the Salvation Army was slow to move off its old Lotus Notes platform. When it decided to move to Office 365 in 2016, there was no Power Platform or PowerApps, so the organization turned to low-code platform maker AgilePoint, based in Mountain View, Calif., said David Brown, director of applications at the Salvation Army USA West, in Rancho Palos Verdes, Calif. (Gartner ranks AgilePoint as a high-level niche player; the company doesn’t appear in Forrester’s rankings.)

The AgilePoint platform enabled the charitable organization to build more apps and be more responsive to demands for new applications. The Salvation Army started to build apps with AgilePoint in 2017 and put 10 new apps into production that year. In 2018, it delivered 20 apps, and the goal for 2019 is 30 new apps, Brown said. The organization also is considering a training program for citizen developers, he added.

“We built an app that replaced a paper process that cost us $10,000 a month,” he said. “When I can invest in a new technology and in the first year save $120,000 using something that I am not spending anywhere that much for, that’s a huge return on investment.”

Low-code, no-code lines begin to blur

No-code typically means the platform is basic and requires no coding, while low-code platforms enable pro developers to go under the hood and hard-code portions if they choose to. However, the distinction between low-code and no-code is not absolute.

Jeffrey Hammond, Forrester

“I don’t think you are either low-code or you are no code,” said Jeffrey Hammond, another Forrester analyst. “I think you might be less code or more code. I think the no-code vendors aspire to have you do less raw text entry.”

For a developer, there are times when you can only visually model so much before it’s just more efficient to drop into text and write something; it is the quickest, easiest way to express what you want to do.

“And if you’re typing text, to me you’re coding,” Hammond said.

Michael Beckley, CTO of Appian, based in Tysons, Va., said many of Appian’s developers would agree.

“A lot of our developers believe low-code exists to help developers write less code upfront,” he said. “And when the platform stops [when it has finished executing its instructions], they should just start writing code all over the place.”

Next low-code hurdles are AI, serverless

The addition of artificial intelligence to the platforms to help developers build smart apps is one next hurdle for the low-code space. Another is to provide DevOps natively on the platforms, and that is already happening now with platforms from OutSystems and Mendix, among others.

However, there is a potential future connection point between the serverless and low-code spaces, Hammond said.

“They are looking to solve a similar problem, which is extracting developers from a lot of the lower-level grunt work so they can focus on building business logic,” he said.

The serverless side relies on network infrastructure and managed services, not so much on tools. The low-code space does it with tools and frameworks, but not necessarily as part of an open, standards-based approach.

With some standardization in the serverless space around Kubernetes and CloudEvents, there could be some intersections between the tools and the low-code space and the high-scale infrastructure in the cloud-native space.

“If you have a common event model, you can start to build events and you can string them together and you can start to build business rules around them,” Forrester’s Hammond said. “You can go into the editors to write the business logic for them. To me, that’s an extension of low-code — and I think it can open up the floodgates to an intersection of these two different technologies.”

Cohesity CyberScan scans backup copies for security risks

The latest application added to Cohesity Marketplace is designed to trawl through backups to look for security vulnerabilities.

Cohesity CyberScan is a free application available on the Cohesity Marketplace. Using Tenable.io, it compares applications in a backup environment against the public Common Vulnerabilities and Exposures (CVE) database to detect possible security flaws. The admin can then address these flaws, whether it’s an out-of-date patch, software bugs or vulnerabilities around open ports.

Cohesity CyberScan doesn’t affect the production environment when it performs its scan. The application boots up a backup snapshot and performs an API call to Tenable.io in order to find vulnerabilities. This process has the added benefit of confirming whether a particular snapshot is recoverable in the first place.
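
Cohesity has not published CyberScan’s internals beyond the Tenable.io call, but the underlying idea, checking software discovered in a booted snapshot against the public CVE database, can be sketched against NVD’s REST endpoint (URL and parameter names are taken from NVD’s public API documentation and should be treated as assumptions here):

    import requests

    NVD_API = "https://services.nvd.nist.gov/rest/json/cves/2.0"

    def known_cves(keyword, limit=5):
        # Ask the public CVE database for entries matching a product keyword,
        # such as an application inventoried from a booted backup snapshot.
        resp = requests.get(NVD_API,
                            params={"keywordSearch": keyword, "resultsPerPage": limit},
                            timeout=30)
        resp.raise_for_status()
        return [item["cve"]["id"] for item in resp.json().get("vulnerabilities", [])]

    print(known_cves("Citrix Application Delivery Controller"))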

Raj Dutt, director of product marketing at Cohesity, said because the vulnerability scan happens in Cohesity’s runtime environment and not the live environment, many customers may be prompted to perform these scans. Citing a recent study performed by independent IT research firm Ponemon Institute, Dutt said 37% of respondents who suffered a security breach did not scan their environments for vulnerabilities.

“They work in an environment where the organization is expected to run 24/7/365, so essentially, there is no downtime to run these scans or make the patches,” Dutt said.

Dutt said even organizations that do perform vulnerability scans don’t do them often enough. Vulnerabilities and exposures published on the CVE database are known to bad actors, so weekly or monthly scans still leave organizations with a wide window in which they can be attacked. Dutt said since Cohesity CyberScan doesn’t interfere with the production environment, customers are free to run scans more frequently.

The Cohesity CyberScan Security Dashboard houses the vulnerability scan and anti-ransomware capabilities.

Phil Goodwin, a research director at IT analyst firm IDC, said there are applications that scan backup copies or secondary data, but they scan mostly to detect out-of-date drivers or other roadblocks to a successful restore or failover. Running backup copies against a CVE database to look for potential security problems is unique.

End users are talking about data protection and security in the same sentence. It really is two sides of the same coin.
Phil Goodwin, research director, IDC

Goodwin said Cohesity CyberScan is the latest example of backup vendors adding security capabilities. Security and data protection are different IT disciplines that call for different technology, but Goodwin said he has encountered customers conflating the two.

“End users are talking about data protection and security in the same sentence,” Goodwin said. “It really is two sides of the same coin.”

Security is the proactive approach of preventing data loss, while data protection and backup are reactive. Goodwin said organizations should ideally have both, and backup vendors are starting to provide proactive features such as data masking, air gapping and immutability. But Goodwin said he has noticed many vendors stop short of malware detection.

Indeed, Cohesity CyberScan does not have malware detection. Dutt stressed that the application’s use cases are focused on detecting cyberattack exposures and ensuring recoverability of snapshots. He pointed out that Cohesity DataPlatform does have anti-ransomware capabilities, and they can be accessed from the same dashboard as CyberScan’s vulnerability scan.

Cohesity CyberScan is generally available now to customers who have upgraded to the latest Cohesity Pegasus 6.4 software. The application itself is free, but customers are required to have their own Tenable license.

No-code and low-code tools seek ways to stand out in a crowd

As market demand for enterprise application developers continues to surge, no-code and low-code vendors seek ways to stand out from one another in an effort to lure professional and citizen developers.

For instance, last week’s Spark release of Skuid’s eponymous drag-and-drop application creation system adds on-premises, private data integration, a new Design System Studio, and new core components for tasks such as creation of buttons, forms, charts and tables.

A suite of prebuilt application templates aims to help users build and customize bespoke applications, such as salesforce automation, recruitment and applicant tracking, HR management and online learning.

And a native mobile capability enables developers to take the apps they’ve built with Skuid and deploy them on mobile devices with native functionality for iOS and Android.

Ray Wang, Constellation Research

“We’re seeing a lot of folks who started in other low-code/no-code platforms move toward Skuid because of the flexibility and the ability to use it in more than one type of platform,” said Ray Wang, an analyst at Constellation Research in San Francisco.

Skuid CTO Mike Duensing

“People want to be able to get to templates, reuse templates and modify templates to enable them to move very quickly.”

Skuid — named for an acronym, Scalable Kit for User Interface Design — was originally an education software provider, but users’ requests to customize the software for individual workflows led to a drag-and-drop interface to configure applications. That became the Skuid platform and the company pivoted to no-code, said Mike Duensing, CTO of Skuid in Chattanooga, Tenn.

Quick Base adds Kanban reports

Quick Base Inc., in Cambridge, Mass., recently added support for Kanban reports to its no-code platform. Kanban is a scheduling system for lean and just-in-time manufacturing. The system also provides a framework for Agile development practices, so software teams can visually track and balance project demands with available capacity and ease system-level bottlenecks.

The Quick Base Kanban reports enable development teams to see where work is in process. They also let end users interact with their work and update its status, said Mark Field, Quick Base director of products.

Users drag and drop progress cards between columns to indicate how much work has been completed on software delivery tasks to date. This lets them track project tasks through stages or priority, opportunities through sales stages, application features through development stages, team members and their task assignments and more, Field said.

Datatrend Technologies, an IT services provider in Minnetonka, Minn., uses Quick Base to build the apps that manage technology rollouts for its customers, and finds the Kanban reports handy.

A lot of low-code/no-code platforms allow you to get on and build an app, but then if you want to take it further, you’ll see users wanting to move to something else.
Ray Wang, analyst, Constellation Research

“Quick Base manages that whole process from intake to invoicing, where we interface with our ERP system,” said Darla Nutter, senior solutions architect at Datatrend.

Previously, Datatrend tracked work in progress through four stages (plan, execute, complete and invoice) in a table report with no visual representation; with the Kanban reports, users can see what they have to do at any given stage and prioritize work accordingly, she said.

“You can drag and drop tasks to different columns and it automatically updates the stage for you,” she said.

Like the Quick Base no-code platform, the Kanban reports require no coding or programming experience. Datatrend’s typical Quick Base users are project managers and business analysts, Nutter said.

For most companies, however, the issue with no-code and low-code systems is how fast users can learn and then expand upon it, Constellation Research’s Wang said.

“A lot of low-code/no-code platforms allow you to get on and build an app but then if you want to take it further, you’ll see users wanting to move to something else,” Wang said.

OutSystems sees AI as the future

OutSystems said it plans to add advanced artificial intelligence features into its products to increase developer productivity, said Mike Hughes, director of product marketing at OutSystems in Boston.

“We think AI can help us by suggesting next steps and anticipating what developers will be doing next as they build applications,” Hughes said.

OutSystems uses AI in its own tool set, as well as links to publicly available AI services, to help organizations build AI-based products. To facilitate this, the company launched Project Turing, named after Alan Turing, the father of AI, and opened an AI Center of Excellence in Lisbon, Portugal.

The company also will commit 20% of its R&D budget to AI research and partner with industry leaders and universities for research in AI and machine learning.