Oracle’s GraalVM finds its place in Java app ecosystem

One year after its initial release for production use, Oracle’s GraalVM universal virtual machine has found validation in the market, evidenced by industry-driven integrations with cloud-native development projects such as Quarkus, Micronaut, Helidon and Spring Boot.

GraalVM supports applications written in Java, JavaScript and other programming languages and execution modes. But it means different things to different people, said Bradley Shimmin, an analyst with Omdia in Longmeadow, Mass.

First, it’s a runtime that can support a wide array of non-Java languages such as JavaScript, Ruby, Python, R, WebAssembly and C/C++, he said. And it can do the same for Java Virtual Machine (JVM) languages as well, namely Java, Scala and Kotlin.

Second, GraalVM is a native code generator capable of ahead-of-time compilation — compiling a higher-level programming language such as C or C++ into native machine code so that the resulting binary file executes natively.
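For Java code specifically, the ahead-of-time path is typically driven by GraalVM's native-image tool. As an illustrative sketch (the class name and message are made up for this example), a minimal program can be compiled into a standalone binary like so:

```java
// HelloWorld.java — a minimal candidate for ahead-of-time compilation.
// With a GraalVM distribution installed, an AOT build is roughly:
//   javac HelloWorld.java
//   native-image HelloWorld
//   ./helloworld          (starts in milliseconds, with no JVM warmup)
public class HelloWorld {
    static String greeting() {
        return "Hello from a native image";
    }

    public static void main(String[] args) {
        System.out.println(greeting());
    }
}
```

The same class runs unchanged on a standard JVM; the native binary trades dynamic features such as runtime class loading for fast startup and a smaller footprint.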

“GraalVM is really quite a flexible ecosystem of capabilities,” Shimmin said. “For example, it can run on its own or be embedded as a part of the OpenJDK. In short, it allows Java developers to tackle some specific problems such as the need for fast app startup times, and it allows non-Java developers to enjoy some of the benefits of a JVM such as portability.”

GraalVM came out of Oracle Labs, which used to be Sun Labs. “Basically, it is the answer to the question, ‘What would it look like if we could write the Java native compiler in Java itself?'” said Cameron Purdy, former senior vice president of development at Oracle and current CEO of Xqiz.it, a stealth startup in Lexington, Mass., that is working to deliver a platform for building cloud-native applications.

“The hypothesis behind the Graal implementation is that a compiler built in Java would be more easily maintained over time, and eventually would be compiling itself or ‘bootstrapped’ in compiler parlance,” Purdy added.

The GraalVM project’s overall mission was to build a universal virtual machine that can run any programming language.

The big idea was that a compiler didn’t have to have built-in knowledge of the semantics of any of the supported languages. The common belief of VM architects had been that a language VM needed to understand those semantics in order to achieve optimal performance.

“GraalVM has disproved this notion by demonstrating that a multilingual VM with competitive performance is possible and that the best way to do it isn’t through a language-specific bytecode like Java or Microsoft CLR [Common Language Runtime],” said Eric Sedlar, vice president and technical director of Oracle Labs.

To achieve this, the team developed a new high-performance optimizing compiler and a language implementation framework that makes it possible to add new languages to the platform quickly, Sedlar said. The GraalVM compiler provides significant performance improvements for Java applications without any code changes, according to Sedlar. Embeddability is another goal. For example, GraalVM can be plugged into system components such as a database.

GraalVM joins broader ecosystem

One of the higher-profile integrations for GraalVM is with Red Hat’s Quarkus, a web application framework with related extensions for Java applications. In essence, Quarkus tailors applications for Oracle’s GraalVM and HotSpot compiler, which means that applications written in it can benefit from using GraalVM native image technology to achieve near instantaneous startup and significantly lower memory consumption compared to what one can expect from a typical Java application at runtime.

“GraalVM is interesting to me as it potentially speeds up Java execution and reduces the footprint – both of which are useful for modern Java applications running on the cloud or at the edge,” said Jeffrey Hammond, an analyst at Forrester Research. “In particular, I’m watching the combination of Graal and Quarkus as together they look really fast and really small — just the kind of thing needed for microservices on Java running in a FaaS environment.”

Quarkus uses the open source, upstream GraalVM project and not the commercial products — Oracle GraalVM or Oracle GraalVM Enterprise Edition.

“Quarkus applications can either be run efficiently in JVM mode or compiled and optimized further to run in Native mode, ensuring developers have the best runtime environment for their particular application,” said Rich Sharples, senior director of product management at Red Hat.

Red Hat officials believe Quarkus will be an important technology for two of its most important constituents — developers who are choosing Kubernetes and OpenShift as their strategic application development and production platform and enterprise developers with deep roots in Java.

“That intersection is pretty huge and growing and represents a key target market for Red Hat and IBM,” Sharples said. “It represents organizations across all industries who are building out the next generation of business-critical applications that will provide those organizations with a competitive advantage.”

Go to Original Article
Author:

HashiCorp Nomad vs. Kubernetes matchup intensifies with 0.11

A HashiCorp Nomad beta release this week could help it encroach on Kubernetes’ territory with advanced IT automation for legacy applications and a simpler approach to container orchestration.

HashiCorp first released the open source workload orchestrator in 2015, a year after Kubernetes arrived in the market. But since then, Kubernetes has become the industry-standard container orchestrator, while Nomad Enterprise is HashiCorp’s least-used commercial product in a portfolio that also includes Terraform infrastructure as code, Vault secrets management and Consul service discovery.

These products are also commonly used in Kubernetes environments, and HashiCorp officials typically prefer to frame Nomad as complementary to Kubernetes rather than a competitor. HashiCorp’s documentation has pointed out that past versions of Nomad orchestrated only compute resources, scheduling workloads on separately managed underlying infrastructure. This made for a simpler but less complete approach to workload automation, as those versions did not handle networking and storage for application clusters, as Kubernetes does.

However, with version 0.11, released in beta this week, HashiCorp Nomad’s storage features draw closer to those offered by Kubernetes. The new capabilities include support for shared storage volumes through the open source Container Storage Interface (CSI), a set of APIs supported by most major storage vendors. CSI is most commonly used with Kubernetes, but any CSI plugins written to work with Kubernetes will also work with HashiCorp Nomad as of version 0.11.

HashiCorp Nomad version 0.11 also introduces horizontal application autoscaling capabilities, as well as support for task dependencies in cases where application components must be deployed in a certain order on a container cluster.

“[Nomad] can still coexist with Kubernetes, especially for legacy applications when customers prefer to use Kubernetes for containers,” said Amith Nair, VP of product marketing at HashiCorp. “But the [new] features make it a more direct comparison, and we’re starting to see increased usage on the open source side, where some customers are downloading it to replace Kubernetes.”

In the last six months, open source downloads of HashiCorp Nomad have doubled each month to reach 20,000 per month, Nair said. A hosted Nomad cloud service also remains on the company’s long-term roadmap, which would likely compete with the many hosted Kubernetes services available.

HashiCorp Nomad seeks app modernization niche

Most of HashiCorp Nomad’s workload orchestration features can be used to modernize legacy applications that run on VMs. Nomad’s scheduler, when used with Consul service discovery, can optimize how applications on VMs and containers use underlying resources. With version 0.11’s CSI support, HashiCorp Nomad can perform non-disruptive rolling updates of both container-based and VM-based applications.

Such features may put HashiCorp Nomad in closer competition with IT vendors such as VMware, which offers Kubernetes container orchestration alongside VM management. HashiCorp has an uphill battle in that market as well, given VMware’s ubiquity in enterprise shops. But as with Kubernetes, HashiCorp Nomad could capture some attention from IT pros because of its simplicity, analysts said.

“Nomad can infiltrate the same market as VMware’s Project Pacific and Tanzu with a low-cost alternative for users that want to manage traditional workloads and cloud-native workloads with one entity,” said Roy Illsley, analyst at Omdia, a technology market research firm in London. “The challenge is that HashiCorp hasn’t been great at marketing — tech people know it, but tech people don’t necessarily sign the checks.”

With a recent $175 million funding infusion for HashiCorp, however, that could change, and HashiCorp could play a role similar to that of Linkerd, a service mesh rival to Google and IBM’s Istio that has held its own in the enterprise because many consider it easier to set up and use.

HashiCorp Nomad vs. Kubernetes pros and cons

Two HashiCorp users published blog posts last year detailing their decision to deploy Nomad over Kubernetes. The on-premises IT team at hotel search site Trivago moved its IT monitoring workloads to the public cloud using Nomad in early 2019. Trivago’s IT staff already had experience with HashiCorp’s tool and found Kubernetes more complex than was necessary for its purposes.

“The additional functionality that Kubernetes had to offer was not worth the extra efforts and human resources required to keep it running,” wrote Inga Feick, a DevOps engineer at Trivago, based in Dusseldorf, Germany. “Remote cloud solutions like a managed Kubernetes cluster or [Amazon ECS] are not an option for our I/O-intense jobs either.”

Another freelance developer cited Nomad’s simplicity in a November 2019 post about porting a project to Nomad from Kubernetes.

“Kubernetes is getting all the visibility for good reasons, but it’s probably not suitable for small to medium companies,” wrote Fabrice Aneche, a software engineering consultant based in Quebec. “You don’t need to deploy Google infrastructure when you are not Google.”

Both blog posts noted significant downsides to HashiCorp Nomad vs Kubernetes at the time, however.

“Nomad is one binary, but the truth is Nomad is almost useless without Consul,” Aneche noted in his post. This adds some complexity to HashiCorp Nomad for production use, since users are required to use Consul’s template language to track changes to the Nomad environment. Version 0.11 adds more detailed insights and alerts to a Nomad remote execution UI to make service management easier. Aneche did not respond to requests for comment about the version 0.11 release this week.

Meanwhile, Trivago’s Feick noted the lack of support for autoscaling in January 2019 made HashiCorp Nomad cumbersome to manage at times.

“You need to specify the resource requirements per job,” she wrote. “Give a job too much CPU and memory and Nomad cannot allocate any, or at least not many, other jobs on the same host. Give it not enough memory and you might find it dying… It would be neat if Nomad had a way of calculating those resource needs on its own. One can dream.” Feick didn’t respond to requests for additional comment this week.

HashiCorp Nomad version 0.11 takes the first step toward full autoscaling support with horizontal application autoscaling, or the ability to provide applications with cluster resources dynamically without manual intervention, a company spokesperson said.

Subsequent releases will support horizontal cluster autoscaling that adds resources to the cluster infrastructure as necessary, along with vertical application autoscaling, which will add and remove instances of applications in response to demand. Autoscaling features will work with VM workloads but are primarily intended for use with containers.

Oracle ships Java 14 with new preview, productivity features

Oracle’s latest release of the Java language and platform, Java 14 — also known as Oracle JDK14 — brings a series of features focused on helping developers code faster and more efficiently.

The latest Java Development Kit (JDK) provides new developer-focused features including Java language support for switch expressions, new APIs for continuous monitoring of JDK Flight Recorder data, and extended availability of the low-latency Z Garbage Collector to macOS and Windows.

In addition, Java 14 includes three preview features that come out of the JDK Enhancement Proposals (JEP) process. These are Pattern Matching, or JEP 305; Records, or JEP 359; and Text Blocks, also known as JEP 368.

Java 12 introduced switch expressions in preview, and the feature is now standard in Java 14. It extends the Java switch statement so it can be used as either a statement or an expression. “Basically, we converted the switch statement into an expression and made it much simpler and more concise,” said Aurelio Garcia-Ribeyro, senior director of product management for the Java platform at Oracle.
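The difference is easiest to see in code. The sketch below (a made-up day-name example in the spirit of the feature's official samples) uses the new arrow form, in which each arm yields a value directly, with no fall-through and no break statements:

```java
public class SwitchDemo {
    // A switch *expression*: the whole switch evaluates to a value,
    // multiple labels share one arm, and there is no fall-through.
    static int lettersInDayName(String day) {
        return switch (day) {
            case "MONDAY", "FRIDAY", "SUNDAY" -> 6;
            case "TUESDAY" -> 7;
            case "THURSDAY", "SATURDAY" -> 8;
            case "WEDNESDAY" -> 9;
            default -> throw new IllegalArgumentException("Unknown day: " + day);
        };
    }
}
```

The compiler also checks that every case yields a value, which catches the missing-branch bugs that classic switch statements allow.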

Oracle will also give developers a way to spot errors by continuously monitoring data from the JDK Flight Recorder, a tool integrated into the Java Virtual Machine that collects diagnostic and profiling data about a running Java application.

Finally, the Z Garbage Collector (ZGC) is a scalable, low-latency garbage collector. Garbage collection is a form of automatic memory management that frees up memory no longer in use or needed by the application. Prior to the Windows and macOS support introduced with Java 14, ZGC was available only on Linux/x64 platforms.

As for the preview features, Oracle has developed pattern matching for the Java “instanceof” operator. The instanceof operator is used to test if an object is of a given type. In turn, the introduction of Java Records cuts down on the verbosity of Java and provides a compact syntax for declaring classes.
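A short sketch of the instanceof change (a preview in Java 14 behind the --enable-preview flag; it became standard in later releases): the type test and the cast collapse into one step, with a binding variable that is in scope only where the test has succeeded. The describe method here is hypothetical.

```java
public class PatternDemo {
    // With pattern matching, "instanceof" both tests the type and binds
    // a variable, replacing the traditional test-then-cast idiom.
    static String describe(Object obj) {
        if (obj instanceof Integer i) {        // i usable only in this branch
            return "int:" + (i * 2);
        } else if (obj instanceof String s) {  // no explicit cast needed
            return "str:" + s.length();
        }
        return "other";
    }
}
```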

“Records will eliminate a lot of the boilerplate that has historically been needed to create a class,” Garcia-Ribeyro said.
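For illustration (a hypothetical Point type; records require --enable-preview on Java 14 and became standard in Java 16), a record packs what would otherwise be dozens of lines of constructor, accessor, equals, hashCode and toString boilerplate into a single declaration:

```java
// The compiler generates the canonical constructor, the x() and y()
// accessors, equals, hashCode, and a "Point[x=…, y=…]" toString.
public record Point(int x, int y) {}
```

For example, `new Point(3, 4)` prints as `Point[x=3, y=4]` and compares equal to any other `Point` with the same components.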

Text Blocks, initially introduced in Java 13 as a preview, returns as an enhanced preview in Java 14. Text Blocks make it easy to express strings that span several lines of source code. It enhances the readability of strings in Java programs that denote code written in non-Java languages, Garcia-Ribeyro said.
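A small sketch of the feature (enabled with --enable-preview on Java 14; standard since Java 15): the triple-quote delimiters let an embedded snippet such as HTML keep its natural line breaks, with incidental leading indentation stripped automatically.

```java
public class TextBlockDemo {
    // Without a text block this string would need "\n" escapes and
    // concatenation on every line.
    static String html() {
        return """
            <html>
                <body>Hello</body>
            </html>
            """;
    }
}
```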

Oracle needs to give Java developers the types of tools they need to evolve with the marketplace, said Bradley Shimmin, an analyst at Omdia in Longmeadow, Mass.

“When I look at what they’re doing with Java 14, they’re adding features that make the language more resilient, more performant and that make developers more productive in using the language,” he said.

Oracle takes iterative approach to Java updates

Java 14 also includes a new packaging tool that provides a way for developers to package Java applications for distribution in platform-specific formats. It is introduced as an incubator module to gather developer feedback as the tool nears finalization.

Among the more obscure features in this release are Non-Volatile Mapped Byte Buffers, which add a file mapping mode for the JDK when using non-volatile memory. Also, Helpful NullPointerExceptions improves the usability of NullPointerExceptions by describing precisely which variable was null. NullPointerExceptions are exceptions that occur when you try to use a reference that points to no location in memory as though it were referencing an object. And the Foreign-Memory Access API allows Java programs to safely access foreign memory outside of the Java heap. The Java heap is the amount of memory allocated to applications running in the JVM.
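As a sketch of what Helpful NullPointerExceptions report (the User and Address classes here are hypothetical), consider a chained field access where the middle link is null. Run on Java 14 with -XX:+ShowCodeDetailsInExceptionMessages, the JVM's message identifies which part of the chain was null rather than just the line number:

```java
public class NpeDemo {
    static class Address { String city; }
    static class User { Address address; }

    // For new User() the chain breaks at "address", and Java 14's helpful
    // message reads along the lines of:
    //   Cannot read field "city" because "u.address" is null
    static String cityOf(User u) {
        try {
            return u.address.city.toUpperCase();
        } catch (NullPointerException e) {
            return "unknown";
        }
    }
}
```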

Java 14 is another new release of the language under the six-month cadence Oracle instituted more than two years ago. The purpose of the quicker cadence of releases is to get “more bite-size pieces that are easier to deploy and manage and that get the features to app developers in the enterprise to benefit from these new capabilities,” said Manish Gupta, Oracle’s Vice President of Marketing for Java and GraalVM.

Overall, Oracle wants to advance the Java language and platform to make it work well for new cloud computing applications as well as platforms such as mobile and IoT. In 2017, Oracle spun out enterprise Java, known as Java Enterprise Edition or JavaEE, to the Eclipse Foundation. Eclipse has since created a new enterprise Java specification called Jakarta EE.

“When I think about Java 14, what I’m seeing is that Oracle is not only staying true to what they promised back when they acquired Sun Microsystems, which was to do no harm to Java, but that they are trying to now evolve Java in such a way that it can remain relevant into the future,” Shimmin said.

Announcing PowerShell 7.0

Joey Aiello

Today, we’re happy to announce the Generally Available (GA) release of PowerShell 7.0! Before anything else, we’d like to thank our many, many open-source contributors for making this release possible by submitting code, tests, documentation, and issue feedback. PowerShell 7 would not have been possible without your help.

Along with a slew of new cmdlets/APIs and bug fixes, we’re introducing a number of new features, including:

  • Pipeline parallelization with ForEach-Object -Parallel
  • New operators:
    • Ternary operator: a ? b : c
    • Pipeline chain operators: || and &&
    • Null coalescing operators: ?? and ??=
  • A simplified and dynamic error view and Get-Error cmdlet for easier investigation of errors
  • A compatibility layer that enables users to import modules in an implicit Windows PowerShell session
  • Automatic new version notifications
  • The ability to invoke DSC resources directly from PowerShell 7 (experimental)

For a more complete list of features and fixes, check out the PowerShell 7.0 release notes.

The shift from PowerShell Core 6.x to 7.0 also marks our move from .NET Core 2.x to 3.1. .NET Core 3.1 brings back a host of .NET Framework APIs (especially on Windows), enabling significantly more backwards compatibility with existing Windows PowerShell modules. This includes many modules on Windows that require GUI functionality like Out-GridView and Show-Command, as well as many role management modules that ship as part of Windows. For more info, check out our module compatibility table showing off how you can use the latest, up-to-date modules that work with PowerShell 7.

If you weren’t able to use PowerShell Core 6.x in the past because of module compatibility issues, this might be the first time you get to take advantage of some of the awesome features we already delivered since we started the Core project!

PowerShell 7 can be installed on Windows, macOS, or Linux. Depending on the version of your OS and preferred package format, there may be multiple installation methods.

If you already know what you’re doing, and you’re just looking for a binary package (whether it’s an MSI, ZIP, RPM, or something else), hop on over to our latest release tag on GitHub.

Additionally, you may want to use one of our many Docker container images. For more information on using those, check out our PowerShell-Docker repo.

We officially support PowerShell 7 on the following operating systems on x64:

  • Windows 7, 8.1, and 10
  • Windows Server 2008 R2, 2012, 2012 R2, 2016, and 2019
  • macOS 10.13+
  • Red Hat Enterprise Linux (RHEL) / CentOS 7+
  • Fedora 29+
  • Debian 9+
  • Ubuntu 16.04+
  • openSUSE 15+
  • Alpine Linux 3.8+

Additionally, we support ARM32 and ARM64 flavors of Debian and Ubuntu, as well as ARM64 Alpine Linux.

While not officially supported, the community has also provided packages for Arch and Kali Linux.

If you need support for a platform that wasn’t listed here, please file a distribution request on GitHub (though it should be noted that we’re ultimately limited by what’s supported by .NET Core 3.1).

Much like what .NET decided to do with .NET 5, we feel that PowerShell 7 marks the completion of our journey to maximize backwards compatibility with Windows PowerShell. To that end, we consider PowerShell 7 and beyond to be the one, true PowerShell going forward.

PowerShell 7 will still be noted with the edition “Core” in order to differentiate 6.x/7.x from Windows PowerShell, but in general, you will see it denoted as “PowerShell 7” going forward.

For more information on module compatibility, check out the Import-Module documentation.

For those modules still incompatible, we’re working with a number of teams to add native PowerShell 7 support, including Microsoft Graph, Office 365, and more.

Azure Cloud Shell has already been updated to use PowerShell 7, and others like the .NET Core SDK Docker container images and Azure Functions will be updated soon.

PowerShell 7.0 will be supported for approximately three years from December 3, 2019 (the release date of .NET Core 3.1).

You can find more info about PowerShell’s support lifecycle at https://aka.ms/pslifecycle

If you encounter any problems, let us know by filing an issue on the main PowerShell repository. For issues related to specific modules (e.g. PSReadLine or PowerShellGet), make sure to file them in the appropriate repository.

Author: Microsoft News Center

EG Enterprise v7 focuses on usability, user experience monitoring

Software vendor EG Innovations will release version 7 of its EG Enterprise software, its end-user experience monitoring tool, on Jan. 31.

New features and updates have been added to the IT monitoring software with the goal of making it more user-friendly. The software focuses primarily on monitoring end-user activities and responses.

“Many times, vendor tools monitor their own software stack but do not go end to end,” said Srinivas Ramanathan, CEO of EG Innovations. “Cross-tier, multi-vendor visibility is critical when it comes to monitoring and diagnosing user experience issues. After all, users care about the entire service, which cuts across vendor stacks.”

Ramanathan said IT issues are not as simple as they used to be.

“What you will see in 2020 is now that there is an ability to provide more intelligence to user experience, how do you put that into use?” said Mark Bowker, senior analyst at Enterprise Strategy Group. “EG has a challenge of when to engage with a customer. It’s a value to them if they engage with the customer sooner in an end-user kind of monitoring scenario. In many cases, they get brought in to solve a problem when it’s already happened, and it would be better for them to shift.”

New features in EG Enterprise v7 include:

  • Synthetic and real user experience monitoring: Users can create simulations and scripts of different applications that can be replayed to help diagnose problems and notify IT operations teams of impending issues.
  • Layered monitoring: Enables users to monitor every tier of an application stack via a central console.
  • Automated diagnosis: Lets users apply machine learning and automation to find the root causes of issues.
  • Optimization plan: Users can customize optimization plans through capacity and application overview reports.

“Most people look at user experience as just response time for accessing any application. We see user experience as being broader than this,” Ramanathan said. “If problems are not diagnosed correctly and they reoccur again and again, it will hurt user experience. If the time to resolve a problem is high, users will be unhappy.”

Pricing for EG Enterprise v7 begins at $2 per user per month in a digital workspace. Licensing for other workloads depends on how many operating systems are being monitored. The new version includes support for Citrix and VMware Horizon.

Red Hat OpenShift Container Storage seeks to simplify Ceph

The first Red Hat OpenShift Container Storage release to use multiprotocol Ceph rather than the Gluster file system to store application data became generally available this week. The upgrade comes months after the original late-summer target date set by open source specialist Red Hat.

Red Hat — now owned by IBM — took extra time to incorporate feedback from OpenShift Container Storage (OCS) beta customers, according to Sudhir Prasad, director of product management in the company’s storage and hyper-converged business unit.

The new OCS 4.2 release includes Rook Operator-driven installation, configuration and management so developers won’t need special skills to use and manage storage services for Kubernetes-based containerized applications. They indicate the capacity they need, and OCS will provision the available storage for them, Prasad said.

Multi-cloud support

OCS 4.2 also includes multi-cloud support, through the integration of NooBaa gateway technology that Red Hat acquired in late 2018. NooBaa facilitates dynamic provisioning of object storage and gives developers consistent S3 API access regardless of the underlying infrastructure.

Prasad said applications become portable and can run anywhere, and NooBaa abstracts the storage, whether AWS S3 or any other S3-compatible cloud or on-premises object store. OCS 4.2 users can move data between cloud and on-premises systems without having to manually change configuration files, a Red Hat spokesman added.

Customers buy OCS to use with the Red Hat OpenShift Container Platform (OCP), and they can now manage and monitor the storage through the OCP console. Kubernetes-based OCP has more than 1,300 customers, and historically, about 40% to 50% attached to OpenShift Container Storage, a Red Hat spokesman said. OCS had about 400 customers in May 2019, at the time of the Red Hat Summit, according to Prasad.

One critical change for Red Hat OpenShift Container Storage customers is the switch from file-based Gluster to multiprotocol Ceph to better target data-intensive workloads such as artificial intelligence, machine learning and analytics. Prasad said Red Hat wanted to give customers a more complete platform with block, file and object storage that can scale higher than the product’s prior OpenStack S3 option. OCS 4.2 can support 5,000 persistent volumes and will support 10,000 in the upcoming 4.3 release, according to Prasad.

Migration is not simple

Although OCS 4 may offer important advantages, the migration will not be a trivial one for current customers. Red Hat provides a Cluster Application Migration tool to help them move applications and data from OCP 3/OCS 3 to OCP 4/OCS 4 at the same time. Users may need to buy new hardware, unless they can first reduce the number of nodes in their OpenShift cluster and use the nodes they free up, Prasad confirmed.

“It’s not that simple. I’ll be upfront,” Prasad said, commenting on the data migration and shift from Gluster-based OCS to Ceph-backed OCS. “You are moving from OCP 3 to OCP 4 also at the same time. It is work. There is no in-place migration.”

One reason that Red Hat put so much emphasis on usability in OCS 4.2 was to abstract away the complexity of Ceph. Prasad said Red Hat got feedback about Ceph being “kind of complicated,” so the engineering team focused on simplifying storage through the operator-driven installation, configuration and management.

“We wanted to get into that mode, just like on the cloud, where you can go and double-click on any service,” Prasad said. “That took longer than you would have expected. That was the major challenge for us.”

OpenShift Container Storage roadmap

The original OpenShift Container Storage 4.x roadmap that Red Hat laid out last May at its annual customer conference called for a beta release in June or July, OCS 4.2 general availability in August or September, and a 4.3 update in December 2019 or January 2020. Prasad said February is the new target for the OCS 4.3 release.

The OpenShift Container Platform 4.3 update became available this week, with new security capabilities such as Federal Information Processing Standard (FIPS)-compliant encryption. Red Hat eventually plans to return to its prior practice of synchronizing new OCP and OCS releases, said Irshad Raihan, the company’s director of storage product marketing.

The Red Hat OpenShift Container Storage 4.3 software will focus on giving customers greater flexibility, such as the ability to choose the type of disk they want, and additional hooks to optimize the storage. Prasad said Red Hat might need to push its previously announced bare-metal deployment support from OCS 4.3 to OCS 4.4.

OCS 4.2 supports converged-mode operation, with compute and storage running on the same node or in the same cluster. The future independent mode will let OpenShift use any storage backend that supports the Container Storage Interface. OCS software would facilitate access to the storage, whether it’s bare-metal servers, legacy systems or public cloud options.

Alternatives to Red Hat OpenShift Container Storage include software from startups Portworx, StorageOS, and MayaData, according to Henry Baltazar, storage research director at 451 Research. He said many traditional storage vendors have added container plugins to support Kubernetes. The public cloud could appeal to organizations that don’t want to buy and manage on-premises systems, Baltazar added.

Baltazar advised Red Hat customers moving from Gluster-based OCS to Ceph-based OCS to keep a backup copy of their data to restore in the event of a problem, as they would with any migration. He said any users moving a large data set to public cloud storage need to factor in network bandwidth and migration time, and consider egress charges if they need to bring the data back from the cloud.

Power BI platform remains a vibrant, respected suite

With a rapid release schedule that enables it to keep up with emerging trends, Microsoft’s Power BI platform remains a powerful and respected business intelligence suite.

While many vendors issue quarterly updates, Microsoft rolls out minor updates to Power BI on a weekly basis and more comprehensive updates each month. And that flexibility and attention to detail has helped the Power BI platform stay current while some other longtime BI vendors battle the perception that their platforms have fallen behind the times.

Most recently, in December, Microsoft added to Power BI an updated connector to its Azure data lake, a new connector to the Power Platform application platform and new data visualization formats.

“I think they’re leading the pack, and they’re putting a lot of pressure on Tableau,” said Wayne Eckerson, president of Eckerson Group, referring to the Microsoft Power BI competitor, which was acquired last year by Salesforce. “The philosophy of a new release every week in itself puts a lot of pressure on Tableau.”

In addition, Eckerson noted, the Power BI platform’s built-in ability to integrate with other Microsoft platforms — as evidenced by the new connectors — gives it a significant advantage over BI platforms offered by some independent vendors.

“It’s part of the Azure platform and tightly integrated with SQL Server Integration Services, Data Factory, and SQL Server Reporting Services,” Eckerson said. “Most importantly, it has a data model behind it — or semantic layer, as we have called it.”

Beyond the updates, a recent focus of the Power BI platform has been data protection.

Arun Ulagaratchagan, general manager of Power BI, said that all vendors have some level of data protection, but as users export data outside of their BI products and across their organizations, the BI system can no longer secure the data.

Microsoft is trying to change that with Power BI, he said.

“We’re adding data protection to Power BI, integrating it with Microsoft Data Protection,” Ulagaratchagan said. “It secures the data when it’s exported out of Power BI so that only people who have been given prior authority can access it.”

Despite Microsoft’s ability to update the Power BI platform on an almost constant basis, its capabilities aren’t viewed as the most innovative on the market.

Those capabilities are in line with the features other vendors are offering, but with Power BI, Microsoft is not necessarily introducing revolutionary technology that the rest of the market needs to react to or risk getting left behind, analysts said.

Instead, Power BI is seen as quickly reactive to trends within the analytics space and to new features first released by other vendors.

“All of their recent updates have been incremental – there hasn’t been anything particularly exciting,” said Donald Farmer, principal at TreeHive Strategy. “It’s good work, but it’s incremental, which is as it should be.”

Similarly, Eckerson noted that while the updates are important, they don’t feature much that will force other vendors to respond.

“There’s all kinds of small stuff, which is important if you’re using the tool,” he said.

Where Microsoft is moving the market forward, and appears to be forcing competitors to respond, is with Azure Synapse Analytics, which launched in preview in November.

Synapse attempts to join data warehousing and data analytics in a single cloud service and integrates with both Power BI and Azure Machine Learning. Essentially, Synapse is the next step in the evolution of Azure SQL Data Warehouse.

“Synapse is where Microsoft has been innovative and made a big bet,” Farmer said.

Beyond placing an emphasis — from the perspective of innovation — on Synapse rather than the Power BI platform, Farmer noted that Power BI simply doesn’t need to be the most spectacular BI suite on the market.

Users of the Power BI platform often don’t seek it out the same way as they do other BI tools. Instead, many simply use Power BI because they’re Windows users and Power BI comes with Windows.

“It’s essentially a default option, but it’s a good default option,” Farmer said. “Tableau, for example, is a tool of choice. … [Microsoft] is not setting the world alight with innovation. Instead, their efforts are on integration with other Microsoft applications, and that’s where they’re interesting.”

While Microsoft doesn’t publicly disclose its product roadmap, Ulagaratchagan said BI for mobile devices, the ability to handle larger and larger data sets, and embedded analytics are important trends as BI advances, as is the idea of openness and trust with data.

Also, AI for BI will continue to advance.

“That’s an area where we have an advantage,” Ulagaratchagan asserted. “We can steal from the Azure team and take that and make it easy to use for our end users and citizen data scientists. We want to get data in the hands of everyone.”

How to install and test Windows Server 2019 IIS

In this video, I want to show you how to install Internet Information Services, or IIS, and prepare it for use.

I’m logged into a domain-joined Windows Server 2019 machine and I’ve got the Server Manager open. To install IIS, click on Manage and choose the Add Roles and Features option. This launches the Add Roles and Features wizard. Click Next on the welcome screen and choose role-based or feature-based installation for the installation type and click Next.

Make sure that My Server is selected and click Next. I'm prompted to choose the roles that I want to deploy. We have an option for Web Server (IIS); that's the option I'm going to select. When I do that, I'm prompted to install some dependency features, so I'm going to click on Add Features and I'll click Next.

I’m taken to the features screen. All the dependency features that I need are already being installed, so I don’t need to select anything else. I’ll click Next, Next again, Next again on the Role Services — although if you do need to install any additional role services to service the IIS role, this is where you would do it. You can always enable these features later on, so I’ll go ahead and click Next.

I'm taken to the Confirmation screen and I can review my configuration selections. Everything looks good here, so I'll click Install, and IIS is installed.

Testing Windows Server 2019 IIS

The next thing that I want to do is test IIS to make sure that it's functional. I'm going to go ahead and close this out and then go to Local Server. I'm going to go to IE Enhanced Security Configuration and temporarily turn it off just so that I can test IIS. I'll click OK and close Server Manager.

The next thing that I want to do is find this machine’s IP address, so I’m going to right-click on the Start button and go to Run and type CMD to open a command prompt window, and then from there, I’m going to type ipconfig.

Here I have the server’s IP address, so now I can open up an Internet Explorer window and enter this IP address and Internet Information Services should respond. I’ve entered the IP address, then I press enter and I’m taken to the Internet Information Services screen. IIS is working at this point.
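If you'd rather script that check than click through a browser, a small Python probe works too. This is only a sketch; the IP address below is a placeholder for whatever ipconfig reported on your server:

```python
from urllib.request import urlopen

def check_iis(url: str, timeout: float = 2.0) -> bool:
    """Return True if the web server answers with HTTP 200."""
    try:
        with urlopen(url, timeout=timeout) as resp:
            return resp.status == 200
    except OSError:  # connection refused, timeout, DNS failure, ...
        return False

if __name__ == "__main__":
    # Replace with the address reported by ipconfig on your server.
    print(check_iis("http://192.168.1.50/"))
```

Run it from another machine on the network to confirm the default IIS page is reachable.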

I’ll go ahead and close this out. If this were a real-world deployment, one of the next things that you would probably want to do is begin uploading some of the content that you’re going to use on your website so that you can begin testing it on this server.

I'll go ahead and open up File Explorer and go to This PC, the C: drive, the inetpub folder and the wwwroot subfolder. This is where you would copy all of your files for your website. You can configure IIS to use a different folder, but this is the one IIS uses by default for content. You can see the files right here that make up the page that you saw a moment ago.
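Deploying content then amounts to copying your site's files under that folder, preserving the subfolder layout. A minimal sketch in Python; the source path is hypothetical, and wwwroot is simply the IIS default:

```python
import shutil
from pathlib import Path

# Hypothetical staging path; adjust to wherever your site files live.
SOURCE = Path(r"C:\staging\mysite")
WWWROOT = Path(r"C:\inetpub\wwwroot")  # IIS default content folder

def deploy(source: Path, target: Path) -> list[str]:
    """Copy every file under source into target, preserving subfolders.
    Returns the relative paths of the files that were copied."""
    copied = []
    for item in source.rglob("*"):
        if item.is_file():
            dest = target / item.relative_to(source)
            dest.parent.mkdir(parents=True, exist_ok=True)
            shutil.copy2(item, dest)  # copy file contents and metadata
            copied.append(str(item.relative_to(source)))
    return copied
```

Calling `deploy(SOURCE, WWWROOT)` would mirror the staging folder into the default site's content directory.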

How to work with the Windows Server 2019 IIS bindings

Let’s take a look at a couple of the configuration options for IIS. I’m going to go ahead and open up Server Manager and what I’m going to do now is click on Tools, and then I’m going to choose the Internet Information Services (IIS) Manager. The main thing that I wanted to show you within the IIS Manager is the bindings section. The bindings allow traffic to be directed to a specific website, so you can see that, right now, we’re looking at the start page and, right here, is a listing for my IIS server.

I'm going to go ahead and expand this out and I'm going to expand the Sites container and, here, you can see the default website. This is the site that I showed you just a moment ago, and then if we look over here on the Actions menu, you can see that we have a link for Bindings. When I open up the Bindings option, you can see by default we're binding all HTTP traffic to port 80 on all IP addresses for the server.

We can edit [the site bindings] if I select [the binding] and click Edit. You can see that we can select a specific IP address. If the server had multiple IP addresses associated with it, we could link a different IP address to each site. We could also change the port that's associated with a particular website. For example, if I wanted to bind this particular website to port 8080, I could do that by changing the port number. Generally, though, you want HTTP traffic to flow on port 80. The other thing that you can do here is assign a hostname to the site, for example www.contoso.com or something to that effect.
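Conceptually, IIS matches each incoming request against this binding table (IP address, port, host name) to decide which site should serve it. Here is a simplified Python sketch of that lookup; the site names are invented, and real IIS precedence rules are more involved:

```python
# Each binding: (ip, port, hostname) -> site name.
# "*" means "all unassigned" IPs or any host name, as in the IIS UI.
bindings = {
    ("*", 80, "*"): "Default Web Site",
    ("*", 8080, "*"): "Staging Site",                # hypothetical site on 8080
    ("*", 80, "www.contoso.com"): "Contoso",         # hypothetical host header
}

def match_site(ip, port, host):
    """Return the site a request is routed to, most specific binding first."""
    # Sort so bindings with a concrete IP or host are tried before wildcards.
    for b_ip, b_port, b_host in sorted(
        bindings, key=lambda b: (b[0] == "*", b[2] == "*")
    ):
        if b_port != port:
            continue
        if b_ip not in ("*", ip):
            continue
        if b_host not in ("*", host):
            continue
        return bindings[(b_ip, b_port, b_host)]
    return None  # no binding matched; IIS would not serve this request
```

With these bindings, a request for www.contoso.com on port 80 goes to the "Contoso" site, any other host on port 80 falls through to the default site, and port 8080 traffic reaches the staging site.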

The other thing that I want to show you in here is how to associate HTTPS traffic with a site. Typically, you’re going to have to have a certificate to make that happen, but assuming that that’s already in place, you click on Add and then you would change the type to HTTPS and then you can choose an IP address; you can enter a hostname; and then you would select your SSL certificate for the site.

You’ll notice that the port number is set to 443, which is the default port that’s normally used for HTTPS traffic. So, that’s how you install IIS and how you configure the bindings for a website.


S/4HANA Cloud integrates Qualtrics for continuous improvement

SAP is focused on better understanding what's on the minds of its customers with the latest release of S/4HANA Cloud.

SAP S/4HANA Cloud 1911, which is now available, has SAP Qualtrics experience management (XM) embedded into the user interface, creating a feedback loop for the product management team about the application. This is one of the first integrations of Qualtrics XM into SAP products since SAP acquired the company a year ago for $8 billion.

“Users can give direct feedback on the application,” said Oliver Betz, global head of product management for S/4HANA Cloud at SAP. “It’s context-sensitive, so if you’re on a homescreen, it asks you, ‘How do you like the homescreen on a scale of one to five?’ And then the user can provide more detailed feedback from there.”

The customer data is consolidated and anonymized and sent to the S/4HANA Cloud product management team, Betz said.

“We’ll regularly screen the feedback to find hot spots,” he said. “In particular we’re interested in the outliers to the good and the bad, areas where obviously there’s something we specifically need to take care of, or also some areas where users are happy about the new features.”

Because S/4HANA Cloud is a cloud product that sends out new releases every quarter, the customer feedback loop that Qualtrics provides will inform developers on how to continually improve the product, Betz said.

“This is the first phase in the next iteration [of S/4HANA Cloud], which will add more granular features,” he said. “From a product management perspective, you can potentially have a new application and have some questions around the application to better understand the usage, what customers like and what they don’t like, and then to take it in a feedback loop to iterate over the next quarterly shipments so we can always provide new enhancements.”

Qualtrics integration may take time to provide value

It has taken a while, but it’s a good thing that SAP has now begun a real Qualtrics integration story, said Jon Reed, analyst and co-founder of Diginomica.com, an analysis and news site that focuses on enterprise applications. Still, SAP faces a few obstacles before the integration into S/4HANA Cloud can be a real differentiator.

“This isn’t a plug-and-play thing where customers are immediately able to use this the way you would a new app on your phone, like a new GPS app. This is useful experiential data which you must then analyze, manage and apply,” Reed said. “Eventually, you could build useful apps and dashboards with it, but you still have to apply the insights to get the value. However, if SAP has made those strides already on integrating Qualtrics with S/4HANA Cloud 1911, that’s a positive for them and we’ll see if it’s an advantage they can use to win sales.”

The Qualtrics products are impressive, but it’s still too early in the game to judge how the SAP S/4HANA integration will work out, said Vinnie Mirchandani, analyst and founder of Deal Architect, an enterprise applications focused blog.

“SAP will see more traction with Qualtrics in the employee and customer experience feedback area,” Mirchandani said. “Experiential tools have more impact where there are more human touchpoints — employees, customer service, customer feedback on product features — so I think the blend with SuccessFactors and C/4HANA is more obvious. This doesn’t mean that S/4 won’t see benefits, but the traction may be higher in other parts of the SAP portfolio.”

SAP SuccessFactors is also beginning to integrate Qualtrics into its employee experience management functions.

It’s a good thing that SAP is attempting to become a more customer-centric company, but it will need to follow through on the promise and make it a part of the company culture, said Faith Adams, senior analyst who focuses on customer experience at Forrester Research.

Many companies are making efforts to appear to be customer-centric, but aren’t following through with the best practices that are required to become truly customer-centric, like taking actions on the feedback they get, Adams said.

“It’s sometimes more of a ‘check the box’ activity rather than something that is embedded into the DNA or a way of life,” Adams said. “I hope that SAP does follow through on the best practices, but that’s to be determined.”

Bringing analytics to business users

SAP S/4HANA Cloud 1911 also now has SAP Analytics Cloud directly embedded. This will enable business users to take advantage of analytics capabilities without going to separate applications, according to SAP’s Betz.

It comes fully integrated out of the box and doesn’t require configuration, Betz said. Users can take advantage of included dashboards or create their own.

“The majority usage at the moment is in the finance application where you can directly access your [key performance indicators] there and have it all visualized, but also create and run your own dashboards,” he said. “This is about making data more available to business users instead of waiting for a report or something to be sent; everybody can have this information on hand already without having some business analyst putting [it] together.”

The embedded analytics capability could be an important differentiator for SAP in making data analytics more democratic across organizations, said Dana Gardner, president of IT consultancy Interarbor Solutions LLC. He believes companies need to break data out of “ivory towers” now as machine learning and AI grow in popularity and sophistication.

“The more people that use more analytics in your organization, the better off the company is,” Gardner said. “It’s really important that SAP gets aggressive on this, because it’s big and we’re going to see much more with machine learning and AI, so you’re going to need to have interfaces with the means to bring the more advanced types of analytics to more people as well.”

TigerGraph Cloud releases graph database as a service

With the general release of TigerGraph Cloud on Wednesday, TigerGraph introduced its first native graph database as a service.

In addition, the vendor announced that it secured $32 million in Series B funding, led by SIG.

TigerGraph, founded in 2012 and based in Redwood City, Calif., is a native graph database vendor whose products, first released in 2016, enable users to manage and access their data in different ways than traditional relational databases do.

Graph databases simplify the connection of data points and enable them to simultaneously connect with more than one other data point. Among the benefits are the ability to significantly speed up the process of developing data into insights and to quickly pull data from disparate sources.
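As a toy illustration (with invented data), the "connected data points" idea can be sketched as an adjacency list in Python, where finding everything linked to a record is a single traversal rather than a chain of relational joins:

```python
from collections import deque

# Invented example data: each node lists the nodes it connects to.
graph = {
    "customer:42": ["order:1001", "order:1002"],
    "order:1001": ["product:widget"],
    "order:1002": ["product:gadget", "product:widget"],
    "product:widget": [],
    "product:gadget": [],
}

def connected(start):
    """Breadth-first traversal: every node reachable from start."""
    seen, queue = {start}, deque([start])
    while queue:
        node = queue.popleft()
        for nbr in graph.get(node, []):
            if nbr not in seen:
                seen.add(nbr)
                queue.append(nbr)
    return seen
```

Here `connected("customer:42")` walks customer, orders and products in one pass; in a relational schema the same answer would typically require joining three tables.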

Before the release of TigerGraph Cloud, TigerGraph customers were able to take advantage of the power of graph databases, but they were largely on-premises users, and they had to do their own upgrades and oversee the management of the database themselves.

“The cloud makes life easier for everyone,” said Yu Xu, CEO of TigerGraph. “The cloud is the future, and more than half of database growth is coming from the cloud. Customers asked for this. We’ve been running [TigerGraph Cloud] in a preview for a while — we’ve gotten a lot of feedback from customers — and we’re big on the cloud. [Beta] customers have been using us in their own cloud.”

Regarding the servicing of the databases, Xu added: “Now we take over this control, now we host it, we manage it, we take care of the upgrades, we take care of the running operations. It’s the same database, but it’s an easy-to-use, fully SaaS model for our customers.”

In addition to providing graph database management as a service and enabling users to move their data management to the cloud, TigerGraph Cloud provides customers an easy entry into graph-based data analysis.

Some of the most well-known companies in the world, at their core, are built on graph databases.

Google, Facebook, LinkedIn and Twitter are all built on graph technology. Those vendors, however, have vast teams of software developers to build their own graph databases and teams of data scientists to do their own graph-based data analysis, noted TigerGraph chief operating officer Todd Blaschka.

“That is where TigerGraph Cloud fits in,” Blaschka said. “[TigerGraph Cloud] is able to open it up to a broader adoption of business users so they don’t have to worry about the complexity underneath the hood in order to be able to mine the data and look for the patterns. We are providing a lot of this time-to-value out of the box.”

TigerGraph Cloud comes with 12 starter kits that help customers quickly build their applications. It also doesn’t require users to configure or manage servers, schedule monitoring or deal with potential security issues, according to TigerGraph.

That, according to Donald Farmer, principal at TreeHive Strategy, is a differentiator for TigerGraph Cloud.

“It is the simplicity of setting up a graph, using the starter kits, which is their great advantage,” he said. “Classic graph database use cases such as fraud detection and recommendation systems should be much quicker to set up with a starter kit, therefore allowing non-specialists to get started.”

Graph databases, however, are not better for everyone and everything, according to Farmer. They are better than relational databases for specific applications, in particular those in which augmented intelligence and machine learning can quickly discern patterns and make recommendations. But they are not yet as strong as relational databases in other key areas.

“One area where they are not so good is data aggregation, which is of course a significant proportion of the work for business analytics,” Farmer said. “So relational databases — especially relational data warehouses — still have an advantage here.”

Despite drawbacks, the market for graph databases is expected to grow substantially over the next few years.

And much of that growth will be in the cloud, according to Blaschka.

Citing a report from Gartner, he said that 68% of graph database market growth will be in the cloud, while the graph database market as a whole is forecast to grow at least 100% year over year through 2022.

“The reason we’re seeing this growth so fast is that graph is the cornerstone for technologies such as machine learning, such as artificial intelligence, where you need large sets of data to find patterns to find insight that can drive those next-gen applications,” he said. “It’s really becoming a competitive advantage in the marketplace.”

The $32 million in Series B financing, according to Xu, will be used to help TigerGraph expand its reach into new markets and accelerate its emphasis on the cloud.
