
EG Enterprise v7 focuses on usability, user experience monitoring

Software vendor EG Innovations will release version 7 of EG Enterprise, its end-user experience monitoring tool, on Jan. 31.

New features and updates have been added to the IT monitoring software with the goal of making it more user-friendly. The software focuses primarily on monitoring end-user activities and responses.

“Many times, vendor tools monitor their own software stack but do not go end to end,” said Srinivas Ramanathan, CEO of EG Innovations. “Cross-tier, multi-vendor visibility is critical when it comes to monitoring and diagnosing user experience issues. After all, users care about the entire service, which cuts across vendor stacks.”

Ramanathan said IT issues are not as simple as they used to be.

“What you will see in 2020 is now that there is an ability to provide more intelligence to user experience, how do you put that into use?” said Mark Bowker, senior analyst at Enterprise Strategy Group. “EG has a challenge of when to engage with a customer. It’s a value to them if they engage with the customer sooner in an end-user kind of monitoring scenario. In many cases, they get brought in to solve a problem when it’s already happened, and it would be better for them to shift.”

New features in EG Enterprise v7 include:

  • Synthetic and real user experience monitoring: Users can create simulations and scripts of different applications that can be replayed to help diagnose problems and to notify IT operations teams of impending issues (a minimal sketch of such a check appears after this list).
  • Layered monitoring: Enables users to monitor every tier of an application stack via a central console.
  • Automated diagnosis: Uses machine learning and automation to find the root causes of issues.
  • Optimization plan: Users can customize optimization plans through capacity and application overview reports.
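As a rough illustration of what a synthetic check does, the sketch below replays a single HTTP transaction and flags it when the response is slow or fails. The URL, threshold and alerting are hypothetical placeholders, not EG Enterprise's own scripting format:

```python
import time
import urllib.request

URL = "https://intranet.example.com/login"  # hypothetical transaction to replay
THRESHOLD_SECONDS = 2.0                      # flag anything slower than this


def run_check() -> None:
    start = time.monotonic()
    with urllib.request.urlopen(URL, timeout=15) as resp:
        ok = resp.status == 200
    elapsed = time.monotonic() - start
    if not ok or elapsed > THRESHOLD_SECONDS:
        # A real monitoring tool would page the operations team; printing stands in for that.
        print(f"ALERT: status ok={ok}, response took {elapsed:.2f}s")
    else:
        print(f"OK: {elapsed:.2f}s")


if __name__ == "__main__":
    run_check()
```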

“Most people look at user experience as just response time for accessing any application. We see user experience as being broader than this,” Ramanathan said. “If problems are not diagnosed correctly and they reoccur again and again, it will hurt user experience. If the time to resolve a problem is high, users will be unhappy.”

Pricing for EG Enterprise v7 begins at $2 per user per month in a digital workspace. Licensing for other workloads depends on how many operating systems are being monitored. The new version includes support for Citrix and VMware Horizon.


Red Hat OpenShift Container Storage seeks to simplify Ceph

The first Red Hat OpenShift Container Storage release to use multiprotocol Ceph rather than the Gluster file system to store application data became generally available this week. The upgrade comes months after the original late-summer target date set by open source specialist Red Hat.

Red Hat — now owned by IBM — took extra time to incorporate feedback from OpenShift Container Storage (OCS) beta customers, according to Sudhir Prasad, director of product management in the company’s storage and hyper-converged business unit.

The new OCS 4.2 release includes Rook Operator-driven installation, configuration and management so developers won’t need special skills to use and manage storage services for Kubernetes-based containerized applications. They indicate the capacity they need, and OCS will provision the available storage for them, Prasad said.
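From the developer's side, indicating the needed capacity amounts to requesting a persistent volume claim against an OCS storage class and letting the operator do the rest. The following is a minimal sketch using the Kubernetes Python client; the namespace and the storage class name (here the Ceph RBD class OCS typically creates) are assumptions that vary by installation:

```python
from kubernetes import client, config

config.load_kube_config()  # or config.load_incluster_config() when run inside a pod

# Request 100Gi of block storage; the OCS/Rook operator provisions the backing Ceph volume.
pvc = client.V1PersistentVolumeClaim(
    metadata=client.V1ObjectMeta(name="app-data"),
    spec=client.V1PersistentVolumeClaimSpec(
        access_modes=["ReadWriteOnce"],
        storage_class_name="ocs-storagecluster-ceph-rbd",  # assumed OCS block class name
        resources=client.V1ResourceRequirements(requests={"storage": "100Gi"}),
    ),
)

client.CoreV1Api().create_namespaced_persistent_volume_claim(namespace="demo", body=pvc)
print("PVC created; OCS will bind it to a dynamically provisioned volume.")
```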

Multi-cloud support

OCS 4.2 also includes multi-cloud support, through the integration of NooBaa gateway technology that Red Hat acquired in late 2018. NooBaa facilitates dynamic provisioning of object storage and gives developers consistent S3 API access regardless of the underlying infrastructure.

Prasad said applications become portable and can run anywhere, and NooBaa abstracts the storage, whether AWS S3 or any other S3-compatible cloud or on-premises object store. OCS 4.2 users can move data between cloud and on-premises systems without having to manually change configuration files, a Red Hat spokesman added.
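Because NooBaa presents an S3-compatible endpoint, the same client code works whether the bucket ultimately lives on AWS S3 or on an on-premises object store. A minimal sketch with boto3 follows; the endpoint URL, credentials and bucket name are placeholders for values obtained from the OCS/NooBaa console:

```python
import boto3

# Hypothetical NooBaa endpoint and credentials; the standard AWS SDK works once
# it is pointed at the S3-compatible URL that NooBaa exposes.
s3 = boto3.client(
    "s3",
    endpoint_url="https://s3-openshift-storage.apps.example.com",
    aws_access_key_id="NOOBAA_ACCESS_KEY",
    aws_secret_access_key="NOOBAA_SECRET_KEY",
)

s3.put_object(Bucket="demo-bucket", Key="report.csv", Body=b"id,total\n1,42\n")
keys = [o["Key"] for o in s3.list_objects_v2(Bucket="demo-bucket").get("Contents", [])]
print(keys)
```

Repointing endpoint_url at a different S3-compatible store would be the only change the application needs, which is the portability the article describes.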

Customers buy OCS to use with the Red Hat OpenShift Container Platform (OCP), and they can now manage and monitor the storage through the OCP console. Kubernetes-based OCP has more than 1,300 customers, and historically, about 40% to 50% attached to OpenShift Container Storage, a Red Hat spokesman said. OCS had about 400 customers in May 2019, at the time of the Red Hat Summit, according to Prasad.

One critical change for Red Hat OpenShift Container Storage customers is the switch from file-based Gluster to multiprotocol Ceph to better target data-intensive workloads such as artificial intelligence, machine learning and analytics. Prasad said Red Hat wanted to give customers a more complete platform with block, file and object storage that can scale higher than the product’s prior OpenStack S3 option. OCS 4.2 can support 5,000 persistent volumes and will support 10,000 in the upcoming 4.3 release, according to Prasad.

Migration is not simple

Although OCS 4 may offer important advantages, the migration will not be a trivial one for current customers. Red Hat provides a Cluster Application Migration tool to help them move applications and data from OCP 3/OCS 3 to OCP 4/OCS 4 at the same time. Users may need to buy new hardware, unless they can first reduce the number of nodes in their OpenShift cluster and use the nodes they free up, Prasad confirmed.

“It’s not that simple. I’ll be upfront,” Prasad said, commenting on the data migration and shift from Gluster-based OCS to Ceph-backed OCS. “You are moving from OCP 3 to OCP 4 also at the same time. It is work. There is no in-place migration.”

One reason that Red Hat put so much emphasis on usability in OCS 4.2 was to abstract away the complexity of Ceph. Prasad said Red Hat got feedback about Ceph being “kind of complicated,” so the engineering team focused on simplifying storage through the operator-driven installation, configuration and management.

“We wanted to get into that mode, just like on the cloud, where you can go and double-click on any service,” Prasad said. “That took longer than you would have expected. That was the major challenge for us.”

OpenShift Container Storage roadmap

The original OpenShift Container Storage 4.x roadmap that Red Hat laid out last May at its annual customer conference called for a beta release in June or July, OCS 4.2 general availability in August or September, and a 4.3 update in December 2019 or January 2020. Prasad said February is the new target for the OCS 4.3 release.

The OpenShift Container Platform 4.3 update became available this week, with new security capabilities such as Federal Information Processing Standard (FIPS)-compliant encryption. Red Hat eventually plans to return to its prior practice of synchronizing new OCP and OCS releases, said Irshad Raihan, the company’s director of storage product marketing.

The Red Hat OpenShift Container Storage 4.3 software will focus on giving customers greater flexibility, such as the ability to choose the type of disk they want, and additional hooks to optimize the storage. Prasad said Red Hat might need to push its previously announced bare-metal deployment support from OCS 4.3 to OCS 4.4.

OCS 4.2 supports converged-mode operation, with compute and storage running on the same node or in the same cluster. The future independent mode will let OpenShift use any storage backend that supports the Container Storage Interface. OCS software would facilitate access to the storage, whether it’s bare-metal servers, legacy systems or public cloud options.

Alternatives to Red Hat OpenShift Container Storage include software from startups Portworx, StorageOS, and MayaData, according to Henry Baltazar, storage research director at 451 Research. He said many traditional storage vendors have added container plugins to support Kubernetes. The public cloud could appeal to organizations that don’t want to buy and manage on-premises systems, Baltazar added.

Baltazar advised Red Hat customers moving from Gluster-based OCS to Ceph-based OCS to keep a backup copy of their data to restore in the event of a problem, as they would with any migration. He said any users moving a large data set to public cloud storage need to factor in network bandwidth and migration time and consider egress charges if they need to bring the data back from the cloud.


Power BI platform remains a vibrant, respected suite

With a rapid release schedule that enables it to keep up with emerging trends, Microsoft’s Power BI platform remains a powerful and respected business intelligence suite.

While many vendors issue quarterly updates, Microsoft rolls out minor updates to Power BI on a weekly basis and more comprehensive updates each month. And that flexibility and attention to detail have helped the Power BI platform stay current while some other longtime BI vendors battle the perception that their platforms have fallen behind the times.

Most recently, in December, Microsoft added to Power BI an updated connector to its Azure data lake, a new connector to its Power Platform low-code application platform and new data visualization formats.

“I think they’re leading the pack, and they’re putting a lot of pressure on Tableau,” said Wayne Eckerson, president of Eckerson Group, referring to the Microsoft Power BI competitor, which was acquired last year by Salesforce. “The philosophy of a new release every week in itself puts a lot of pressure on Tableau.”

In addition, Eckerson noted, the Power BI platform’s built-in ability to integrate with other Microsoft platforms — as evidenced by the new connectors — gives it a significant advantage over BI platforms offered by some independent vendors.


“It’s part of the Azure platform and tightly integrated with SQL Server Integration Service, Data Factory, and SQL Server Reporting Services,” Eckerson said. “Most importantly, it has a data model behind it — or semantic layer as we have called it.”

Beyond the updates, a recent focus of the Power BI platform has been data protection.

Arun Ulagaratchagan, general manager of Power BI, said that all vendors have some level of data protection, but as users export data outside of their BI products and across their organizations, the BI system can no longer secure the data.

Microsoft is trying to change that with Power BI, he said.

“We’re adding data protection to Power BI, integrating it with Microsoft Data Protection,” Ulagaratchagan said. “It secures the data when it’s exported out of Power BI so that only people who have been given prior authority can access it.”

Despite Microsoft’s ability to update the Power BI platform on an almost constant basis, its capabilities aren’t viewed as the most innovative on the market.

Those capabilities are in line with the features other vendors are offering, but with Power BI, Microsoft is not necessarily introducing revolutionary technology that the rest of the market needs to react to or get left behind, analysts said.

Instead, Power BI is seen as quickly reactive to trends within the analytics space and to new features first released by other vendors.

“All of their recent updates have been incremental – there hasn’t been anything particularly exciting,” said Donald Farmer, principal at TreeHive Strategy. “It’s good work, but it’s incremental, which is as it should be.”

Similarly, Eckerson noted that while the updates are important, they don’t feature much that will force other vendors to respond.

“There’s all kinds of small stuff, which is important if you’re using the tool,” he said.

Where Microsoft is moving the market forward, and appears to be forcing competitors to respond, is Azure Synapse Analytics, which launched in preview in November.

Synapse attempts to join data warehousing and data analytics in a single cloud service and integrates with both Power BI and Azure Machine Learning. Essentially, Synapse is the next step in the evolution of Azure SQL Data Warehouse.

“Synapse is where Microsoft has been innovative and made a big bet,” Farmer said.

Beyond placing an emphasis — from the perspective of innovation — on Synapse rather than the Power BI platform, Farmer noted that Power BI simply doesn’t need to be the most spectacular BI suite on the market.

Users of the Power BI platform often don’t seek it out the same way they do other BI tools. Instead, many simply use Power BI because they’re already Microsoft customers and Power BI comes bundled with their existing Microsoft licensing.

“It’s essentially a default option, but it’s a good default option,” Farmer said. “Tableau, for example, is a tool of choice. … [Microsoft] is not setting the world alight with innovation. Instead, their efforts are on integration with other Microsoft applications, and that’s where they’re interesting.”

While Microsoft doesn’t publicly disclose its product roadmap, Ulagaratchagan said BI for mobile devices, the ability to handle larger and larger data sets, and embedded analytics are important trends as BI advances, as is the idea of openness and trust with data.

Also, AI for BI will continue to advance.

“That’s an area where we have an advantage,” Ulagaratchagan asserted. “We can steal from the Azure team and take that and make it easy to use for our end users and citizen data scientists. We want to get data in the hands of everyone.”


How to install and test Windows Server 2019 IIS


In this video, I want to show you how to install Internet Information Services, or IIS, and prepare it for use.

I’m logged into a domain-joined Windows Server 2019 machine and I’ve got the Server Manager open. To install IIS, click on Manage and choose the Add Roles and Features option. This launches the Add Roles and Features wizard. Click Next on the welcome screen and choose role-based or feature-based installation for the installation type and click Next.

Make sure that My Server is selected and click Next. I’m prompted to choose the roles that I want to deploy. We have an option for web server IIS. That’s the option I’m going to select. When I do that, I’m prompted to install some dependency features, so I’m going to click on Add Features and I’ll click Next.

I’m taken to the features screen. All the dependency features that I need are already being installed, so I don’t need to select anything else. I’ll click Next, Next again, Next again on the Role Services — although if you do need to install any additional role services to service the IIS role, this is where you would do it. You can always enable these features later on, so I’ll go ahead and click Next.

I’m taken to the Confirmation screen and I can review my configuration selections. Everything looks good here, so I’ll click install and IIS is being installed.

Testing Windows Server 2019 IIS

The next thing that I want to do is test IIS to make sure that it’s functional. I’m going to go ahead and close this out and then go to local server. I’m going to go to IE Enhanced Security Configuration. I’m temporarily going to turn this off just so that I can test IIS. I’ll click OK and I’ll close Server Manager.

The next thing that I want to do is find this machine’s IP address, so I’m going to right-click on the Start button and go to Run and type CMD to open a command prompt window, and then from there, I’m going to type ipconfig.

Here I have the server’s IP address, so now I can open up an Internet Explorer window and enter this IP address and Internet Information Services should respond. I’ve entered the IP address, then I press enter and I’m taken to the Internet Information Services screen. IIS is working at this point.
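The same check can be scripted rather than done in a browser. Here is a minimal sketch in Python; the IP address is a placeholder for the one shown by ipconfig:

```python
import urllib.request

server_ip = "192.168.1.50"  # replace with the address reported by ipconfig

# Request the default IIS welcome page over HTTP and confirm the server answers.
with urllib.request.urlopen(f"http://{server_ip}/", timeout=10) as resp:
    print(resp.status, resp.reason)            # expect "200 OK"
    print(len(resp.read()), "bytes returned")  # body of the default IIS page
```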

I’ll go ahead and close this out. If this were a real-world deployment, one of the next things that you would probably want to do is begin uploading some of the content that you’re going to use on your website so that you can begin testing it on this server.

I’ll go ahead and open up File Explorer and I’ll go to This PC, the local drive, the inetpub folder and the wwwroot subfolder. This is where you would copy all of your files for your website. You can configure IIS to use a different folder, but this is the one used by default for IIS content. You can see the files right here that make up the page that you saw a moment ago.

How to work with the Windows Server 2019 IIS bindings

Let’s take a look at a couple of the configuration options for IIS. I’m going to go ahead and open up Server Manager and what I’m going to do now is click on Tools, and then I’m going to choose the Internet Information Services (IIS) Manager. The main thing that I wanted to show you within the IIS Manager is the bindings section. The bindings allow traffic to be directed to a specific website, so you can see that, right now, we’re looking at the start page and, right here, is a listing for my IIS server.

I’m going to go ahead and expand this out and I’m going to expand the site’s container and, here, you can see the default website. This is the site that I’ve shown you just a moment ago, and then if we look over here on the Actions menu, you can see that we have a link for Bindings. When I open up the Bindings option, you can see by default we’re binding all HTTP traffic to port 80 on all IP addresses for the server.

We can edit [a binding] if I select it and click Edit. You can see that we can select a specific IP address. If the server had multiple IP addresses associated with it, we could link a different IP address to each site. We could also change the port that’s associated with a particular website. For example, if I wanted to bind this particular website to port 8080, I could do that by changing the port number. Generally, you want HTTP traffic to flow on port 80. The other thing that you can do here is to assign a hostname to the site, for example www.contoso.com or something to that effect.
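If you do move a site to a non-standard port or add a hostname, a quick way to confirm the binding is to send a request with that port and Host header. The sketch below uses Python's standard library; the IP address, port and hostname are the example values from above:

```python
import http.client

# Example values: the server IP, the alternate port from the binding, and the
# hostname entered in the binding's Host name field.
conn = http.client.HTTPConnection("192.168.1.50", 8080, timeout=10)
conn.request("GET", "/", headers={"Host": "www.contoso.com"})
resp = conn.getresponse()
print(resp.status, resp.reason)  # 200 OK means the request matched the binding
conn.close()
```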

The other thing that I want to show you in here is how to associate HTTPS traffic with a site. Typically, you’re going to have to have a certificate to make that happen, but assuming that that’s already in place, you click on Add and then you would change the type to HTTPS and then you can choose an IP address; you can enter a hostname; and then you would select your SSL certificate for the site.

You’ll notice that the port number is set to 443, which is the default port that’s normally used for HTTPS traffic. So, that’s how you install IIS and how you configure the bindings for a website.
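Once an HTTPS binding is in place, you can confirm that TLS is answering on port 443 with a short script like the one below. The IP address and hostname are placeholders, and certificate verification is disabled here only because test deployments often use self-signed certificates:

```python
import socket
import ssl

host = "192.168.1.50"             # hypothetical server IP
server_name = "www.contoso.com"   # hostname on the HTTPS binding

context = ssl.create_default_context()
# Skip verification for a self-signed or test certificate; remove these two
# lines once a trusted certificate is installed on the binding.
context.check_hostname = False
context.verify_mode = ssl.CERT_NONE

with socket.create_connection((host, 443), timeout=10) as sock:
    with context.wrap_socket(sock, server_hostname=server_name) as tls:
        print("Negotiated", tls.version(), "with cipher", tls.cipher()[0])
```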



S/4HANA Cloud integrates Qualtrics for continuous improvement

SAP is focused on better understanding what’s on the minds of their customers with the latest release of S/4HANA Cloud.

SAP S/4HANA Cloud 1911, which is now available, has SAP Qualtrics experience management (XM) embedded into the user interface, creating a feedback loop for the product management team about the application. This is one of the first integrations of Qualtrics XM into SAP products since SAP acquired the company a year ago for $8 billion.

“Users can give direct feedback on the application,” said Oliver Betz, global head of product management for S/4HANA Cloud at SAP. “It’s context-sensitive, so if you’re on a homescreen, it asks you, ‘How do you like the homescreen on a scale of one to five?’ And then the user can provide more detailed feedback from there.”

The customer data is consolidated and anonymized and sent to the S/4HANA Cloud product management team, Betz said.

“We’ll regularly screen the feedback to find hot spots,” he said. “In particular we’re interested in the outliers to the good and the bad, areas where obviously there’s something we specifically need to take care of, or also some areas where users are happy about the new features.”


Because S/4HANA Cloud is a cloud product that sends out new releases every quarter, the customer feedback loop that Qualtrics provides will inform developers on how to continually improve the product, Betz said.

“This is the first phase in the next iteration [of S/4HANA Cloud], which will add more granular features,” he said. “From a product management perspective, you can potentially have a new application and have some questions around the application to better understand the usage, what customers like and what they don’t like, and then to take it in a feedback loop to iterate over the next quarterly shipments so we can always provide new enhancements.”

Qualtrics integration may take time to provide value

It has taken a while, but it’s a good thing that SAP has now begun a real Qualtrics integration story, said Jon Reed, analyst and co-founder of Diginomica.com, an analysis and news site that focuses on enterprise applications. Still, SAP faces a few obstacles before the integration into S/4HANA Cloud can be a real differentiator.


“This isn’t a plug-and-play thing where customers are immediately able to use this the way you would a new app on your phone, like a new GPS app. This is useful experiential data which you must then analyze, manage and apply,” Reed said. “Eventually, you could build useful apps and dashboards with it, but you still have to apply the insights to get the value. However, if SAP has made those strides already on integrating Qualtrics with S/4HANA Cloud 1911, that’s a positive for them and we’ll see if it’s an advantage they can use to win sales.”

The Qualtrics products are impressive, but it’s still too early in the game to judge how the SAP S/4HANA integration will work out, said Vinnie Mirchandani, analyst and founder of Deal Architect, an enterprise applications focused blog.

“SAP will see more traction with Qualtrics in the employee and customer experience feedback area,” Mirchandani said. “Experiential tools have more impact where there are more human touchpoints — employees, customer service, customer feedback on product features — so I think the blend with SuccessFactors and C/4HANA is more obvious. This doesn’t mean that S/4 won’t see benefits, but the traction may be higher in other parts of the SAP portfolio.”


SAP SuccessFactors is also beginning to integrate Qualtrics into its employee experience management functions.

It’s a good thing that SAP is attempting to become a more customer-centric company, but it will need to follow through on the promise and make it a part of the company culture, said Faith Adams, senior analyst who focuses on customer experience at Forrester Research.

Many companies are making efforts to appear to be customer-centric, but aren’t following through with the best practices that are required to become truly customer-centric, like taking actions on the feedback they get, Adams said.

“It’s sometimes more of a ‘check the box’ activity rather than something that is embedded into the DNA or a way of life,” Adams said. “I hope that SAP does follow through on the best practices, but that’s to be determined.”

Bringing analytics to business users

SAP S/4HANA Cloud 1911 also now has SAP Analytics Cloud directly embedded. This will enable business users to take advantage of analytics capabilities without going to separate applications, according to SAP’s Betz.

It comes fully integrated out of the box and doesn’t require configuration, Betz said. Users can take advantage of included dashboards or create their own.

“The majority usage at the moment is in the finance application where you can directly access your [key performance indicators] there and have it all visualized, but also create and run your own dashboards,” he said. “This is about making data more available to business users instead of waiting for a report or something to be sent; everybody can have this information on hand already without having some business analyst putting [it] together.”


The embedded analytics capability could be an important differentiator for SAP in making data analytics more democratic across organizations, said Dana Gardner, president of IT consultancy Interarbor Solutions LLC. He believes companies need to break data out of “ivory towers” now as machine learning and AI grow in popularity and sophistication.

“The more people that use more analytics in your organization, the better off the company is,” Gardner said. “It’s really important that SAP gets aggressive on this, because it’s big and we’re going to see much more with machine learning and AI, so you’re going to need to have interfaces with the means to bring the more advanced types of analytics to more people as well.”


TigerGraph Cloud releases graph database as a service

With the general release of TigerGraph Cloud on Wednesday, TigerGraph introduced its first native graph database as a service.

In addition, the vendor announced that it secured $32 million in Series B funding, led by SIG.

TigerGraph, founded in 2012 and based in Redwood City, Calif., is a native graph database vendor whose products, first released in 2016, enable users to manage and access their data in different ways than traditional relational databases do.

Graph databases simplify the connection of data points and enable them to simultaneously connect with more than one other data point. Among the benefits are the ability to significantly speed up the process of developing data into insights and to quickly pull data from disparate sources.

Before the release of TigerGraph Cloud, TigerGraph customers were able to take advantage of the power of graph databases, but they were largely on-premises users, and they had to do their own upgrades and oversee the management of the database themselves.

“The cloud makes life easier for everyone,” said Yu Xu, CEO of TigerGraph. “The cloud is the future, and more than half of database growth is coming from the cloud. Customers asked for this. We’ve been running [TigerGraph Cloud] in a preview for a while — we’ve gotten a lot of feedback from customers — and we’re big on the cloud. [Beta] customers have been using us in their own cloud.”

Regarding the servicing of the databases, Xu added: “Now we take over this control, now we host it, we manage it, we take care of the upgrades, we take care of the running operations. It’s the same database, but it’s an easy-to-use, fully SaaS model for our customers.”

In addition to providing graph database management as a service and enabling users to move their data management to the cloud, TigerGraph Cloud provides customers an easy entry into graph-based data analysis.

Some of the most well-known companies in the world, at their core, are built on graph databases.

Google, Facebook, LinkedIn and Twitter are all built on graph technology. Those vendors, however, have vast teams of software developers to build their own graph databases and teams of data scientists to do their own graph-based data analysis, noted TigerGraph chief operating officer Todd Blaschka.

“That is where TigerGraph Cloud fits in,” Blaschka said. “[TigerGraph Cloud] is able to open it up to a broader adoption of business users so they don’t have to worry about the complexity underneath the hood in order to be able to mine the data and look for the patterns. We are providing a lot of this time-to-value out of the box.”

TigerGraph Cloud comes with 12 starter kits that help customers quickly build their applications. It also doesn’t require users to configure or manage servers, schedule monitoring or deal with potential security issues, according to TigerGraph.
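For developers, connecting to a TigerGraph Cloud instance typically goes through its GSQL and REST endpoints or a client library. The sketch below assumes the pyTigerGraph client and a hypothetical starter-kit graph; the host, credentials, secret, query name and parameter are all placeholders, and token handling details vary by client version:

```python
import pyTigerGraph as tg

# Hypothetical connection details for a TigerGraph Cloud instance created from a
# starter kit; the host, graph name, credentials and secret come from the portal.
conn = tg.TigerGraphConnection(
    host="https://mycluster.i.tgcloud.io",
    graphname="AntiFraud",
    username="tigergraph",
    password="changeme",
)
conn.getToken("MY_GRAPH_SECRET")  # authenticate REST requests with a graph secret

# Starter kits ship with installed queries; this name and parameter are illustrative.
print(conn.runInstalledQuery("fraudConnectivity", params={"inputUser": 12345}))
```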

That, according to Donald Farmer, principal at TreeHive Strategy, is a differentiator for TigerGraph Cloud.


“It is the simplicity of setting up a graph, using the starter kits, which is their great advantage,” he said. “Classic graph database use cases such as fraud detection and recommendation systems should be much quicker to set up with a starter kit, therefore allowing non-specialists to get started.”

Graph databases, however, are not better for everyone and everything, according to Farmer. They are better than relational databases for specific applications, in particular those in which augmented intelligence and machine learning can quickly discern patterns and make recommendations. But they are not yet as strong as relational databases in other key areas.

“One area where they are not so good is data aggregation, which is of course a significant proportion of the work for business analytics,” Farmer said. “So relational databases — especially relational data warehouses — still have an advantage here.”

Despite drawbacks, the market for graph databases is expected to grow substantially over the next few years.

And much of that growth will be in the cloud, according to Blaschka.

Citing a report from Gartner, he said that 68% of graph database market growth will be in the cloud, while the graph database market as a whole is forecast to grow at least 100% year over year through 2022.

“The reason we’re seeing this growth so fast is that graph is the cornerstone for technologies such as machine learning, such as artificial intelligence, where you need large sets of data to find patterns to find insight that can drive those next-gen applications,” he said. “It’s really becoming a competitive advantage in the marketplace.”

Xu said the $32 million raised in Series B financing will be used to help TigerGraph expand its reach into new markets and accelerate its emphasis on the cloud.


Dremio Data Lake Engine 4.0 accelerates query performance

Dremio is advancing its technology with a new release that supports AWS, Azure and hybrid cloud deployments, providing what the vendor refers to as a Data Lake Engine.

The Dremio Data Lake Engine 4.0 platform is rooted in multiple open source projects, including Apache Arrow, and offers the promise of accelerated query performance for data lake storage.

Dremio made the platform generally available on Sept. 17. The Dremio Data Lake Engine 4.0 update introduces a feature called column-aware predictive pipelining that helps predict access patterns, which makes queries faster. The new Columnar Cloud Cache (C3) feature in Dremio also boosts performance by caching data closer to where compute execution occurs.

For IDC analyst Stewart Bond, the big shift in the Dremio 4.0 update is how the vendor has defined its offering as a “Data Lake Engine” focused on AWS and Azure.

In some ways, Dremio had previously struggled to define what its technology actually does, Bond said. In the past, Dremio had been considered a data preparation tool, a data virtualization tool and even a data integration tool, he said. It does all those things, but in ways, and with data, that differ markedly from traditional technologies in the data integration software market.

“Dremio offers a semantic layer, query and acceleration engine over top of object store data in AWS S3 or Azure, plus it can also integrate with more traditional relational database technologies,” Bond said. “This negates the need to move data out of object stores and into a data warehouse to do analytics and reporting.”


Simply having a data lake doesn’t do much for an organization. A data lake is just data, and just as with natural lakes, water needs to be extracted, refined and delivered for consumption, Bond said.

“For data in a data lake to be valuable, it typically needs to be extracted, refined and delivered to data warehouses, analytics, machine learning or operational applications where it can also be transformed into something different when blended with other data ingredients,” Bond said. “Dremio provides organizations with the opportunity to get value out of data in a data lake without having to move the data into another repository, and can offer the ability to blend it with data from other sources for new insights.”

How Dremio Data Lake Engine 4.0 works

Organizations use technologies like ETL (extract, transform, load), among other things, to move data from data lake storage into a data warehouse because they can’t query the data fast enough where it is, said Tomer Shiran, co-founder and CTO of Dremio. That performance challenge is one of the drivers behind the C3 feature in Dremio 4.

“With C3 what we’ve developed is a patent pending real-time distributed cache that takes advantage of the NVMe devices that are on the instances that we’re running on to automatically cache data from S3,” Shiran explained. “So when the query engine is accessing a piece of data for the second time, it’s at least 10 times faster than getting it directly from S3.”
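Conceptually, the caching pattern works like the sketch below: keep a copy of each object on fast local storage and serve repeat reads from there instead of going back to S3. This is only an illustration of the idea, not Dremio's C3 implementation, and the cache directory path is an assumption:

```python
import hashlib
import os

import boto3

CACHE_DIR = "/mnt/nvme/cache"  # hypothetical local NVMe mount
s3 = boto3.client("s3")


def cached_get(bucket: str, key: str) -> bytes:
    """Return the object body, reading from the local cache when possible."""
    name = hashlib.sha256(f"{bucket}/{key}".encode()).hexdigest()
    path = os.path.join(CACHE_DIR, name)
    if os.path.exists(path):  # cache hit: skip the S3 round trip entirely
        with open(path, "rb") as f:
            return f.read()
    body = s3.get_object(Bucket=bucket, Key=key)["Body"].read()
    os.makedirs(CACHE_DIR, exist_ok=True)
    with open(path, "wb") as f:  # populate the cache for the next read
        f.write(body)
    return body
```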

Dremio data lake architecture

The new column-aware predictive pipelining feature in Dremio Data Lake Engine 4.0 further accelerates query performance for the initial access. The feature increases data read throughput to the maximum that a given network allows, Shiran explained.

While Dremio is positioning its technology as a data lake engine that can be used to query data stored in a data lake, Shiran noted that the platform also has data virtualization capabilities. With data virtualization, pointers or links to data sources are used to create a logical data layer.

Apache Arrow

One of the foundational technologies that enables the Dremio Data Lake Engine is the open source Apache Arrow project, which Shiran helped to create.

“We took the internal memory format of Dremio, and we open sourced that as Apache Arrow, with the idea that we wanted our memory format to be an industry standard,” Shiran said.

Arrow has become increasingly popular over the past three years and is now used by many different tools, including Apache Spark.
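Arrow's appeal is that the same columnar, in-memory layout can be shared by different engines without converting the data between formats. A minimal pyarrow sketch with made-up data:

```python
import pyarrow as pa

# Build a columnar, in-memory Arrow table; the same buffers can be handed to any
# Arrow-aware engine (Spark, pandas, Dremio clients) without reserializing.
table = pa.table({
    "order_id": pa.array([1001, 1002, 1003], type=pa.int64()),
    "amount": pa.array([19.99, 5.00, 42.50], type=pa.float64()),
    "region": pa.array(["us-east", "eu-west", "us-east"]),
})

print(table.schema)
print(table.column("amount").to_pylist())
```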

With the growing use of Arrow, Dremio’s goal is to make communications between its platform and other tools that use Arrow as fast as possible. Among the ways that Dremio is helping to make Arrow faster is with the Gandiva effort that is now built into Dremio 4, according to the vendor. Gandiva is an execution kernel that is based on the LLVM compiler, enabling real-time code compilation to accelerate queries.

Dremio will continue to work on improving performance, Shiran said.

“At the end of the day, customers want to see more and more performance, and more data sources,” he said. “We’re also making it more self-service for users, so for us we’re always looking to reduce friction and the barriers.”


IBM Storage syncs new DS8900F array to z15 mainframe launch

IBM Storage launched new faster all-flash, standard-rack-sized DS8900F arrays to coincide with the release of its new z15 mainframe.

The DS8900F models use IBM’s latest Power Systems Power 9 processors and an optimized software stack to boost performance over their Power 8-based DS8880 predecessors. IBM claimed users will see lower latency (from 20 microseconds to 18 μs), improved IOPS and twice the bandwidth when using the DS8900F arrays connected to z15 mainframes equipped with zHyperLink I/O adapter cards compared to using the DS8880.

IBM storage customers will note similar performance improvements when they use the DS8900F arrays with z14 mainframes that have zHyperLink cards. Those that use older z13 mainframes without the zHyperLink cards will see response time drop from 120 μs to 90 μs, IBM claims.

IDC research vice president Eric Burgener said IBM mainframe customers who use a FICON host connection and zHyperLink cards could see latency that’s lower than what any other storage array in the industry can deliver, outside of host-side storage using persistent memory, such as NetApp’s MAX Data.

New IBM storage array is flash only

The prior DS8880 family included all-flash, all-disk and hybrid options mixing disk and solid-state drives (SSDs). But the new DS8900F array that IBM plans to ship next month will use only flash-based SSDs. The maximum capacities are 5.9 PB for the DS8950F model and 2.9 PB for the DS8910F when configured with 15.36 TB flash drives.

Another difference between the DS8900F and its DS8880 predecessor is availability. The DS8900F offers seven 9s (99.99999% availability) compared to DS8880’s six 9s (99.9999% availability). Eric Herzog, CMO and vice president of storage channels at IBM, said seven 9s would translate to 3.1 seconds of statistical downtime with round-the-clock operation over the course of a year.
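As a quick sanity check on those availability figures, the sketch below converts a number of nines into expected downtime per year (using a 365.25-day year, so the exact seconds may differ slightly from IBM's figure):

```python
SECONDS_PER_YEAR = 365.25 * 24 * 3600

for nines in (5, 6, 7):
    availability = 1 - 10 ** -nines          # e.g. seven nines = 0.9999999
    downtime = (1 - availability) * SECONDS_PER_YEAR
    print(f"{nines} nines ({availability:.7%}): ~{downtime:.1f} s of downtime per year")
```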

“From five-plus minutes to less than four seconds doesn’t sound like much on an annual basis, but it really is,” said David Hill, founder and principal analyst at Mesabi Group. “It greatly decreases the chances that a system will ever go down in a year, and that is not a bad thing.”

Although the availability boost may be important for some customers, IBM partner Truenorth has found that six 9s is more than enough for its clients’ needs, according to Truenorth software principal Tomas Perez.  

Perez said a more important new feature is the industry-standard rack size that will make the DS8910F homogeneous with other equipment. That should be helpful as Puerto Rico-based Truenorth puts together disaster recovery (DR) systems for its customers. Truenorth’s clients, including Puerto Rico’s treasury department, have focused on DR in the wake of Hurricane Maria.

The new DS8900F arrays conform to industry-standard rack dimensions of 19 inches wide, 36 inches deep and 42 inches tall. Most IBM DS8880 models have a standard width of 19 inches but a non-standard depth of 54 inches and height of 40 inches, with expansion options to 46 inches. In 2018, IBM added a standard-sized DS8882F model to fit into the standard-sized z14 Model ZR1 and LinuxOne Rockhopper II mainframes released that year.

IBM storage security enhancements

With the latest systems, IBM is adding the ability to encrypt data in flight between the new z15 mainframe and the DS8900F array. IBM supported only data-at-rest encryption in prior models.

Herzog said the hardware-based data encryption would not affect performance because the system uses an encryption coprocessor. Prior models use the main CPU for hardware-based encryption, so there could be a performance impact depending on the configuration or workload, Herzog said.

Endpoint security is another new capability that IBM is adding with its Z, LinuxOne and DS8900F systems. Herzog described the new functionality as a “custom handshake” to ensure that the array and the Z system know they’re talking to each other, rather than any spoofed system. 

The DS8900F will also support previously available IBM capabilities. Safeguarded Copy enables up to 500 immutable, point-in-time snapshots of data for protection against malware and ransomware attacks. IBM Storage Insights’ predictive analytics assists with capacity and performance management. And IBM’s Transparent Cloud Tiering supports protecting and archiving encrypted block-based data to S3-based object storage, from providers such as AWS and IBM Cloud, without a separate gateway.

Besides supporting IBM Z mainframes, the DS8900F also works with non-mainframes such as Unix-, Linux-, Windows- and VMware-based systems. The new z15 mainframe is due to become generally available next week, and the DS8900F storage array will follow in October. The starting price is $134,000 for the DS8910F and $196,000 for the DS8950F, according to IBM.

IDC’s Burgener said IBM’s DS competitors, Dell EMC and Hitachi Vantara, generally support distributed systems before adding support for mainframes six to 12 months later. He said IBM’s DS arrays, by contrast, always support mainframes on day one. IBM owns 40% to 50% of the mainframe-attached storage market, Burgener said.

“We should see a noticeable bump in IBM’s overall storage revenues over the next 12 months as their customers go through refresh cycles, and that bump may be a bit higher, if for no other reason than the fact that they are including these new arrays on every new mainframe quote,” Burgener said.


Magento BI update a benefit to vendor’s e-commerce customers

With the rollout of the Magento Business Intelligence Summer 2019 Release on Thursday, the Magento BI platform will get improved scheduling capabilities along with a host of new dashboard visualizations.

Magento, founded in 2008 and based in Culver City, Calif., is primarily known for its e-commerce platform. In 2018 the vendor was acquired by Adobe for $1.7 billion and is now part of the Adobe Experience Cloud.

With the vendor’s focus on e-commerce, the Magento BI platform isn’t designed to compete as a standalone tool against the likes of Microsoft Power BI, Qlik, Tableau and other leading BI vendors. Instead, it’s designed to slot in with Magento’s e-commerce platform and is intended for existing Magento customers.

“I love the BI angle Magento is taking here,” said Mike Leone, a senior analyst at Enterprise Strategy Group. “I would argue that many folks that utilize their commerce product are by no means experts at analytics. Magento will continue to empower them to gain more data-driven insights in an easy and seamless way. It is enabling businesses to take the next step into data-driven decision making without adding complexity.”

Similarly, Nicole France, principal analyst at Constellation Research, noted the importance of enhancing the BI capabilities of Magento’s commerce customers.

“This kind of reporting on commerce systems is undoubtedly useful,” she said. “The idea here seems to be reaching a wider audience than the folks directly responsible for running commerce. That means putting the right data in the appropriate context.”

The updated Magento BI platform comes with 13 data visualization templates, now including bubble charts, and over 100 reports.

A sample bubble chart from Magento shows an organization’s customer breakdown by state.

In addition, it comes with enhanced sharing capabilities. Via email, users can schedule reports to go out to selected recipients on a one-time basis or any repeating schedule they want. They can also keep track of the relevancy of the data with time logs and take all necessary actions from a status page.

“It finds the insights merchants want,” said Daniel Rios, product manager at Adobe. “It brings BI capabilities to merchants.”

Matthew Wasley, product marketing manager at Adobe, added: “Now there’s a better way to share insights that goes right to the inbox of a colleague and is part of their daily workflow.

“They can see the things they need to see — it bridges the gap,” Wasley said. “It’s an email you actually want to open.”

According to Wasley, the Magento BI platform provides a full end-to-end data stack that serves customers from the data pipeline through the data warehouse and ultimately to the dashboard visualization layer.

While some BI vendors offer products with similar end-to-end capabilities, others offer only one layer and need to be paired with other products to help a business client take data from its raw form and develop it into a digestible form.

“We’re ahead of the curve with Magento,” Wasley said.

He added that the end-to-end capability of the Magento BI tool is something other vendors are trying to put together through acquisitions. Though he didn’t name any companies specifically, Google with its purchase of Looker and Salesforce with its acquisition of Tableau are two that fit the mold.


Still, the Magento BI tool isn’t designed to compete on the open market against vendors who specialize in analytics platforms.

“We see our BI as a differentiator for our commerce platform,” said Wasley. “Standalone BI is evolving in itself. It’s tailored, and differentiates our commerce product.”

Moving forward, like the BI tools offered by other vendors, the Magento BI platform will become infused with more augmented intelligence and machine learning capabilities, with innovation accelerated by Magento’s integration into the Adobe ecosystem.

“We’re seeing how important data is across Adobe,” said Wasley. “All together, it’s meant to … make better use of data. Because of the importance of data across Adobe, we’re able to innovate a lot faster over the next 6 – 12 months.”

And presumably, that means further enhancement of the Magento BI platform for the benefit of the vendor’s e-commerce customers.


Microsoft patches two Windows zero-days in July Patch Tuesday

The July 2019 Patch Tuesday release included fixes for 77 vulnerabilities, two of which were Windows zero-days that were actively exploited in the wild.

The two Windows zero-days are both local escalation-of-privilege flaws that cannot be used alone to perform an attack. One zero-day, CVE-2019-0880, is a flaw in how splwow64.exe handles certain calls. The issue affects Windows 8.1, Windows 10 and Windows Server 2012, 2016 and 2019.

“This vulnerability by itself does not allow arbitrary code execution; however, it could allow arbitrary code to be run if the attacker uses it in combination with another vulnerability that is capable of leveraging the elevated privileges when code execution is attempted,” according to Microsoft.

The other Windows zero-day the vendor patched was CVE-2019-1132, which caused the Win32k component to improperly handle objects in memory. This issue affects Windows 7 and Windows Server 2008.

“To exploit this vulnerability, an attacker would first have to log on to the system,” Microsoft noted. “An attacker could then run a specially crafted application that could exploit the vulnerability and take control of an affected system.”

This zero-day was reported to Microsoft by ESET. Anton Cherepanov, senior malware researcher for ESET, detailed a highly targeted attack in Eastern Europe and recommended upgrading systems as the best remediation against attacks.

“The exploit only works against older versions of Windows, because since Windows 8 a user process is not allowed to map the NULL page. Microsoft back-ported this mitigation to Windows 7 for x64-based systems,” Cherepanov wrote in a blog post. “People who still use Windows 7 for 32-bit systems Service Pack 1 should consider updating to newer operating systems, since extended support of Windows 7 Service Pack 1 ends on January 14th, 2020. Which means that Windows 7 users won’t receive critical security updates. Thus, vulnerabilities like this one will stay unpatched forever.”

Other patches

Beyond the two Windows zero-days patched this month, there were six vulnerabilities patched that had been publicly disclosed, but no attacks were seen in the wild. The disclosures could potentially aid attackers in exploiting the issues faster, so enterprises should prioritize the following:

  • CVE-2018-15664, a Docker flaw in the Azure Kubernetes Service;
  • CVE-2019-0962, an Azure Automation escalation-of-privilege flaw;
  • CVE-2019-0865, a denial-of-service flaw in SymCrypt;
  • CVE-2019-0887, a remote code execution (RCE) flaw in Remote Desktop Services;
  • CVE-2019-1068, an RCE flaw in Microsoft SQL Server; and
  • CVE-2019-1129, a Windows escalation-of-privilege flaw.

The Patch Tuesday release also included 15 vulnerabilities rated critical by Microsoft. Some standout patches in that group included CVE-2019-0785, a DHCP Server RCE issue, and four RCE issues affecting Microsoft browsers, which Trend Micro labeled as noteworthy — CVE-2019-1004, CVE-2019-1063, CVE-2019-1104 and CVE-2019-1107.
