
For Sale – MacBook Air 2019 Rose Gold 1.6GHz i5 8GB 128GB

Bought it for myself at the beginning of the year to give Mac a go, but it turns out there are quite a few little things I can’t get used to, so I’m going back to Windows. Bought it as a refurb from Apple directly, so they fitted a new body and battery. Only used it lightly and it’s still in very good condition. This is a rose gold base model and will come with the power cable in the original box.

Insured postage with Royal Mail is included.


Alteryx 2020.1 highlighted by new data profiling tool

Holistic Data Profiling, a new tool designed to give business users a complete view of their data while in the process of developing workflows, highlighted the general availability of Alteryx 2020.1 on Thursday.

Alteryx, founded in 1997 and based in Irvine, Calif., is an analytics and data management specialist, and Alteryx 2020.1 is the vendor’s first platform update of 2020. The previous update, Alteryx 2019.4, was released in December 2019 and featured a new integration with Tableau.

The vendor revealed the platform update in a blog post; in addition to Holistic Data Profiling, it includes 10 new features and upgrades. Among them is a new language-toggling feature in Alteryx Designer, the vendor’s data preparation product.

“The other big highlights are more workflow efficiency features,” said Ashley Kramer, Alteryx’s senior vice president of product management. “And the fact that Designer now ships with eight languages that can quickly be toggled without a reinstall is huge for our international customers.”

Holistic Data Profiling is a low-code/no-code feature that gives business users an instantaneous view of their data to help them better understand their information during the data preparation process — without having to consult a data scientist.

After dragging a Browse Tool — Alteryx’s means of displaying data from a connected tool as well as data profile information, maps, reporting snippets and behavior analysis information — onto Alteryx’s canvas, Holistic Data Profiling provides an immediate overview of the data.

Holistic Data Profiling is designed to help business users understand data quality, see how various columns of data may be related to one another, spot trends, and compare one data profile to another as they curate their data.

A sample Holistic Data Profiling GIF from Alteryx gives an overview of an organization’s data.

Users can zoom in on a certain column of data to gain deeper understanding, with Holistic Data Profiling providing profile charts and statistics about the data such as the type, quality, size and number of records.

That knowledge then informs the next step in the workflow on the way to making a data-driven decision.
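To make the idea concrete, here is a minimal sketch of the kind of column-level profile such a tool surfaces, written with pandas rather than Alteryx; the profile helper and the sample data are hypothetical, not part of Alteryx’s product.

```python
# Hypothetical sketch of the column-level statistics a data profiling
# tool surfaces (pandas-based illustration, not Alteryx's implementation).
import pandas as pd

def profile(df: pd.DataFrame) -> pd.DataFrame:
    """Summarize type, size and quality metrics for each column."""
    rows = []
    for col in df.columns:
        s = df[col]
        rows.append({
            "column": col,
            "dtype": str(s.dtype),                 # data type
            "records": len(s),                     # number of records
            "missing_pct": 100 * s.isna().mean(),  # simple quality signal
            "unique": s.nunique(dropna=True),      # cardinality
        })
    return pd.DataFrame(rows)

# Hypothetical sample data
sales = pd.DataFrame({
    "region": ["East", "West", "East", None],
    "revenue": [1200.0, 950.5, 1100.0, 780.0],
})
print(profile(sales))
```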


“It’s easy to get tunnel vision when analyzing data,” said Mike Leone, analyst at Enterprise Strategy Group. “Holistic Data Profiling enables end users — via low-code/no-code tooling — to quickly gain a comprehensive understanding of the current data estate. The exciting part, in my opinion, is the speed at which end users can potentially ramp up an analytics project.”

Similarly, Kramer noted the importance of being able to more fully understand data before the final stage of analysis.

“It is really important for our customers to see and understand the landscape of their data and how it is changing every step of the way in the analytic process,” she said.

Alteryx customers were previously able to view their data at any point — on a column-by-column or defined multi-column basis — but not to get a complete view, Kramer added.

“Experiencing a 360-degree view of your data with Holistic Data Profiling is a brand-new feature,” she said.

In addition to Holistic Data Profiling, the new language toggle is perhaps the other signature feature of the Alteryx platform update.

Using Alteryx Designer, customers can now switch between eight languages to collaborate using their preferred language.

Alteryx previously supported multiple languages, but for users to work in their preferred language, each individual user had to install Designer in that language. With the updated version of Designer, they can click on a new globe icon in their menu bar and select the language of their choice to do analysis.

“To truly enable enterprise-wide collaboration, breaking down language barriers is essential,” Leone said. “And with Alteryx serving customers in 80 different countries, adding robust language support further cements Alteryx as a continued leader in the data management space.”

Among the other new features and upgrades included in Alteryx 2020.1 are a new Power BI on-premises loader that will give users information about Power BI reports and automatically load those details into their data catalog in Alteryx Connect; the ability to input selected rows and columns from an Excel spreadsheet; and a new virtual folder in Alteryx Connect for saving custom queries.

Meanwhile, a streamlined loader of big data from Alteryx to the Snowflake cloud data warehouse is now in beta testing.

“This release and every 2020 release will have a balance of improving our platform … and fast-forwarding more innovation baked in to help propel their efforts to build a culture of analytics,” Kramer said.


How to Use Failover Clusters with 3rd Party Replication

In this second post, we will review the different types of replication options and give you guidance on what you need to ask your storage vendor if you are considering a third-party storage replication solution.

If you want to set up a resilient disaster recovery (DR) solution for Windows Server and Hyper-V, you’ll need to understand how to configure a multi-site cluster as this also provides you with local high-availability. In the first post in this series, you learned about the best practices for planning the location, node count, quorum configuration and hardware setup. The next critical decision you have to make is how to maintain identical copies of your data at both sites, so that the same information is available to your applications, VMs, and users.

Multi-Site Cluster Storage Planning

All Windows Server Failover Clusters require some type of shared storage so that an application can run on any host and access the same data. Multi-site clusters behave the same way, but they require independent storage arrays at each site, with the data replicated between them. The clustered application or virtual machine (VM) at each site should use its own local storage array; otherwise, every disk I/O operation would have to travel to the other location, adding significant latency.

If you are running Hyper-V VMs on your multi-site cluster, you may wish to use Cluster Shared Volumes (CSV) disks. This type of clustered storage configuration is optimized for Hyper-V and allows multiple virtual hard disks (VHDs) to reside on the same disk while allowing the VMs to run on different nodes. The challenge when using CSV in a multi-site cluster is that the VMs must make sure that they are always writing to their disk in their site, and not the replicated copy. Most storage providers offer CSV-aware solutions, and you must make sure that they explicitly support multi-site clustering scenarios. Often the vendors will force writes at the primary site by making the CSV disk at the second site read-only, to ensure that the correct disks are always being used.

Understanding Synchronous and Asynchronous Replication

As you progress in planning your multi-site cluster you will have to select how your data is copied between sites, either synchronously or asynchronously. With asynchronous replication, the application will write to the clustered disk at the primary site, then at regular intervals, the changes will be copied to the disk at the secondary site. This usually happens every few minutes or hours, but if a site fails between replication cycles, then any data from the primary site which has not yet been copied to the secondary site will be lost. This is the recommended configuration for applications that can sustain some amount of data loss, and this generally does not impose any restrictions on the distance between sites. The following image shows the asynchronous replication cycle.

Asynchronous Replication in a Multi-Site Cluster

With synchronous replication, whenever a disk write command occurs on the primary site, the change is copied to the secondary site and an acknowledgment is returned from both storage arrays before the write is committed. Synchronous replication ensures consistency between both sites and avoids data loss in the event of a crash. The challenge of writing to two sets of disks in different locations is that the sites must be physically close, or the added latency can affect the performance of the application. Even with a high-bandwidth, low-latency connection, synchronous replication is usually recommended only for critical applications that cannot sustain any data loss, and this should factor into the location of your secondary site. The following image shows the synchronous replication cycle.

Synchronous Replication in a Multi-Site Cluster
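As a rough illustration of the trade-off described above, the following Python sketch models the two write paths; the function names and the in-memory “disks” are invented for illustration, since real storage arrays implement replication in firmware, not in application code.

```python
# Conceptual comparison of synchronous and asynchronous write paths
# (illustrative only; real arrays do this in firmware).
import queue
import time

replication_queue: queue.Queue = queue.Queue()
secondary_disk: list = []

def write_secondary(block: bytes) -> None:
    time.sleep(0.01)               # simulated inter-site latency
    secondary_disk.append(block)

def synchronous_write(primary: list, block: bytes) -> None:
    """Acknowledge the write only after the secondary site holds a copy."""
    write_secondary(block)         # pays the inter-site latency on every write
    primary.append(block)

def asynchronous_write(primary: list, block: bytes) -> None:
    """Acknowledge immediately; the copy ships on the next replication cycle."""
    primary.append(block)
    replication_queue.put(block)   # anything still queued is lost if the
                                   # primary site fails before the next cycle

def replication_cycle() -> None:
    """Periodic job that drains pending changes to the secondary site."""
    while not replication_queue.empty():
        write_secondary(replication_queue.get())

primary_disk: list = []
synchronous_write(primary_disk, b"critical-record")
asynchronous_write(primary_disk, b"bulk-record")
replication_cycle()                # until this runs, only the primary holds b"bulk-record"
```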

As you continue to evaluate different storage vendors, you may also want to assess the granularity of their replication solution. Most of the traditional storage vendors will replicate data at the block-level, which means that they track specific segments of data on the disk which have changed since the last replication. This is usually fast and works well with larger files (like virtual hard disks or databases), as only blocks that have changed need to be copied to the secondary site. Some examples of integrated block-level solutions include HP’s Cluster Extension, Dell/EMC’s Cluster Enabler (SRDF/CE for DMX, RecoverPoint for CLARiiON), Hitachi’s Storage Cluster (HSC), NetApp’s MetroCluster, and IBM’s Storage System.

There are also some storage vendors which provide a file-based replication solution that can run on top of commodity storage hardware. These providers will keep track of individual files which have changed, and only copy those. They are often less efficient than the block-level replication solutions as larger chunks of data (full files) must be copied, however, the total cost of ownership can be much less. A few of the top file-level vendors who support multi-site clusters include Symantec’s Storage Foundation High Availability, Sanbolic’s Melio, SIOS’s Datakeeper Cluster Edition, and Vision Solutions’ Double-Take Availability.

The final class of replication providers will abstract the underlying sets of storage arrays at each site. This software manages disk access and redirection to the correct location. The more popular solutions include EMC’s VPLEX, FalconStor’s Continuous Data Protector and DataCore’s SANsymphony. Almost all of the block-level, file-level, and appliance-level providers are compatible with CSV disks, but it is best to check that they support the latest version of Windows Server if you are planning a fresh deployment.

By now you should have a good understanding of how you plan to configure your multi-site cluster and your replication requirements. Now you can plan your backup and recovery process. Even though the application’s data is copied to the secondary site, replication is not a substitute for a real backup: if a VM (VHD) at one site becomes corrupted, that same corruption is likely to be replicated to the secondary site. You should still regularly back up any production workloads running at either site. This means that you need to deploy your cluster-aware backup software and agents in both locations and ensure that they are regularly taking backups. The backups should also be stored independently at both sites so that they can be recovered from either location if one datacenter becomes unavailable. Testing recovery from both sites is strongly recommended. Altaro’s Hyper-V Backup is a great solution for multi-site clusters and is CSV-aware, ensuring that your disaster recovery solution is resilient to all types of disasters.

If you are looking for a more affordable multi-site cluster replication solution, only have a single datacenter, or your storage provider does not support these scenarios, Microsoft offers a few solutions. This includes Hyper-V Replica and Azure Site Recovery, and we’ll explore these disaster recovery options and how they integrate with Windows Server Failover Clustering in the third part of this blog series.

Let us know if you have any questions in the comments form below!


Author: Symon Perriman

ID@Xbox Launches a New YouTube Channel for Independent Gaming – Xbox Wire

Today marks the launch of the ID@Xbox YouTube channel! Click here to give it a visit. With this channel, we are going to present the very best in Independent games coming to Xbox One and Windows PC, and maybe even some fun original programming along the way. ID@Xbox has been around since 2013 (ed – holy cow has it been that long?). The very first ID@Xbox game, Strike Suit Zero, shipped on April 4, 2014, and since then we’ve helped Independent developers, both big and small, launch nearly 1,500 games on Xbox.

We know there’s a community out there that’s passionate about Independent gaming – we interact with them every day on social media, at shows and events, on Mixer, and in games. And we’re extremely passionate about Independent games ourselves, if you hadn’t guessed! With this channel we simply want to share stuff that we think is awesome and see what you think. Please feel free to jump in the comments and let us know what you like and want to see more of, and what games you’re excited about.

We’ve created a video to celebrate the launch – check it out to meet some members of the ID@Xbox team and hear what some of our favorite games are. The ID@Xbox YouTube channel is live now, so don’t forget to subscribe so you don’t miss any of the amazing content from Independent games going forward.

Author: Microsoft News Center

Tibco analytics capabilities get upgrade in Spotfire X

Spotfire X, the latest iteration of the Tibco analytics and data visualization platform, aims to give users a more streamlined experience by incorporating more AI and machine learning capabilities when the upgraded platform is released this fall.

Notably, the platform update, characterized by what Tibco has dubbed a new “A(X) Experience,” will enable users to type in requests to navigate and visualize their data through natural language processing (NLP) and will automatically record dataflows that can later be explored and edited. It will also natively stream data in real time from dozens of sources.

The new Spotfire X features are designed to create a faster and simpler user experience, according to Brad Hopper, vice president of product strategy, analytics and streaming at the integration and analytics software vendor. “This will allow us to take a complete novice off the street, put them in front of the tool, and no matter what they will get something back,” he said.

Search for simple

With the rise of citizen data scientists, it has become a trend for self-service analytics vendors to design platforms that are easier to use and more automated, turning to AI and machine learning algorithms to do so.

Brad Hopper, Tibco

Earlier this year, a Tibco competitor, Tableau, acquired MIT AI startup Empirical Systems, whose technology is expected to provide Tableau platforms with more advanced predictive analytics capabilities and better automated models. Also this year, Qlik, another big-name self-service analytics vendor, acquired startup Podium Data in a bid to better automate parts of its platforms and make them simpler to use.

“There is a trend in the market … for AI and machine learning to be used to explore all the possible data, all the possible variables,” said Rita Sallam, a Gartner analyst.

With the new Spotfire X features, Tibco analytics is looking forward, even if the features aren’t necessarily innovative on their own, she said.

“They’re leveraging natural language as a way to initiate a question and they are, based on that question, generating all the statistically meaningful insight on that data so the user can see all the possible insights on that data,” Sallam said.

The A(X) Experience in Tibco’s Spotfire X enables faster and easier analytics with NLP tools and improved AI.

AI advice

With the A(X) Experience, Spotfire X also will deliver AI-driven recommendations for users.

“We’ve built in a fairly sophisticated machine learning model behind the scenes,” Hopper said.

The Tibco analytics platform can already use AI to automatically index different pieces of data and suggest relationships between them.

Now from the Spotfire X’s NLP-powered search box, users will be able to receive a list of visualization recommendations, starting first with “classical recommendations” before getting to “a ranked list of interesting structural variations,” Hopper explained.
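Conceptually, that flow resembles the simple sketch below: parse the typed question, match it against known columns, and rank candidate chart types. The column names, rules, and scoring are hypothetical placeholders, not Spotfire’s actual recommendation engine.

```python
# Hypothetical NLP-to-visualization flow: match a typed question against
# known columns, then rank candidate chart types.
COLUMNS = {"sales": "numeric", "region": "categorical", "order_date": "datetime"}

CHART_RULES = [
    # (required column types, chart type, base score)
    ({"numeric", "categorical"}, "bar chart", 0.9),
    ({"numeric", "datetime"}, "line chart", 0.9),
    ({"numeric"}, "histogram", 0.6),
]

def recommend(question: str) -> list:
    """Return chart suggestions ranked by how well they fit the question."""
    mentioned = {c for c in COLUMNS if c in question.lower()}
    types = {COLUMNS[c] for c in mentioned}
    ranked = []
    for required, chart, score in CHART_RULES:
        if required <= types:      # every column type the chart needs was mentioned
            ranked.append((score, chart, sorted(mentioned)))
    return sorted(ranked, reverse=True)

print(recommend("show sales by region"))
# [(0.9, 'bar chart', ['region', 'sales']), (0.6, 'histogram', ['region', 'sales'])]
```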

Forrester analyst Boris Evelson said the Tibco analytics and Spotfire X moves are “yet another confirmation of a trend that leading BI products need a dose of AI to remain effective.”


“While AI is not replacing BI, BI tools that infuse AI functionality will displace the tools that don’t,” Evelson said.

Tibco made the Spotfire X announcements during the Tibco Now conference in Las Vegas in early September 2018. 

The enhancements to Tibco analytics capabilities were among other product developments unveiled at the event. Others included a user-partner collaboration program called Tibco Labs, more tools for Tibco Cloud, and a new collaboration between Tibco and manufacturing services company Jabil.

Slack encryption will soon include enterprise key management

Slack will soon give businesses an additional level of security by letting them manage their encryption keys. The feature will appeal to a small number of large organizations for now, but it could help the startup expand its footprint in the enterprise market.

Slack already encrypts the messages and files that flow through its premium platform for large businesses, called Enterprise Grid. Now, the vendor plans to give customers control of the keys that unlock that encryption.

“Enterprise key management is another significant step that Slack needs to take to meet increasing security demands — and according to their promise, without hurting speed or usability, [which are] common side effects of EKM,” said Wayne Kurtzman, analyst at IDC.

Slack touted the forthcoming feature as providing “all the security of an on-premises solution, with all the benefits of a cloud tool.” But the vendor clarified that the keys will be created and stored in Amazon’s public cloud.

“In the future, we may expand this offering to support an on-prem or private cloud [hardware security module] key store,” said Ilan Frank, director of Slack’s enterprise products.

Cisco Webex Teams lets businesses manage encryption keys on premises or in the cloud. It also provides end-to-end encryption. In contrast, Slack only encrypts data in transit and at rest, which means the data may get decrypted at certain routing points in the cloud.
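Enterprise key management of this kind typically follows the envelope-encryption pattern: the provider encrypts each message with a data key and stores only a copy of that key wrapped by the customer-controlled key, so revoking the customer key renders the data unreadable. The sketch below uses the Python cryptography package’s Fernet primitive to illustrate the pattern; it is a generic example, not Slack’s implementation.

```python
# Generic envelope-encryption sketch for customer-managed keys, using the
# `cryptography` package's Fernet primitive. Illustrative only; this is
# not Slack's implementation.
from cryptography.fernet import Fernet

customer_key = Fernet.generate_key()   # held by the customer (e.g. in a KMS they control)
customer_kek = Fernet(customer_key)

def encrypt_message(plaintext: bytes) -> tuple:
    """Encrypt with a fresh data key, then wrap that key with the customer's key."""
    data_key = Fernet.generate_key()
    ciphertext = Fernet(data_key).encrypt(plaintext)
    wrapped_key = customer_kek.encrypt(data_key)   # the provider stores only this
    return ciphertext, wrapped_key

def decrypt_message(ciphertext: bytes, wrapped_key: bytes) -> bytes:
    """Unwrapping needs the customer's key; revoke it and the data is unreadable."""
    data_key = customer_kek.decrypt(wrapped_key)
    return Fernet(data_key).decrypt(ciphertext)

ct, wk = encrypt_message(b"quarterly numbers shared in #finance")
print(decrypt_message(ct, wk))
```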

Slack has no plans to change its encryption model, Frank said, citing potential “usability drawbacks” related to search and advanced app and bot features.

Symphony also offers end-to-end encryption and enterprise key management. Its team collaboration app has found a niche among banks and other financial firms, which generally have strict compliance and regulatory standards.

“I think, from Slack’s case, it’s a good first step in allowing customers to control their own keys,” said Zeus Kerravala, founder and principal analyst at ZK Research in Westminster, Mass. But Slack should also ensure businesses can store those keys in their own data centers and eventually pursue end-to-end encryption, he said.

Slack’s enterprise key management feature will be particularly useful for external communications done through Slack, said Alan Lepofsky, a vice president and principal analyst at Constellation Research, based in Cupertino, Calif.

When partners communicate through a shared channel in Slack, the company that established the channel will have control over the encryption keys.

“I think this will be a very important use case, as it’s that external communication where you really want to ensure security and privacy,” Lepofsky said.

Slack expects to make enterprise key management available for purchase to Enterprise Grid customers sometime this winter.

Slack looks to appeal to more large enterprises

Slack launched Enterprise Grid last year in an attempt to expand beyond its traditional base of teams and small businesses. The platform lets large organizations unify and manage multiple Slack workspaces.

Slack said in January that more than 150 organizations had deployed Enterprise Grid, including 21st Century Fox, Target, Capital One and IBM. But the vendor did not mention the product in May when it announced that 8 million people at more than 500,000 organizations worldwide were using Slack daily.

As the vendor tries to win more contracts with large businesses, Slack faces competition from vendors that already have deep penetration in the enterprise market — notably Cisco and Microsoft.

Cisco recently tied its team collaboration app to the online meetings platform Webex, which has 140 million users. Also, Microsoft has been aggressively building out the features of Microsoft Teams, which integrates with the Office 365 productivity tools relied upon by 135 million people.

“[Enterprise key management] is an important addition to Slack as it becomes more mature for enterprise needs,” Lepofsky said.

HPE’s HCI system takes aim at space-constrained data centers

The latest addition to HPE’s HCI portfolio aims to give smaller IT shops a little less bang for a lot less buck.

The HPE SimpliVity 2600 configures up to four compute modules in a 2U space, and features “always-on” deduplication and compression. Those capabilities often appeal to businesses with space-constrained IT environments or with no dedicated data center at all, particularly ones that deploy VDI applications on remote desktops for complex workloads and require only moderate storage.
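Inline deduplication and compression of this sort generally work by hashing fixed-size blocks, storing each unique block once, and compressing what is stored. The following Python sketch illustrates the idea; it is a simplified stand-in, not HPE SimpliVity’s actual data path.

```python
# Simplified block-level deduplication plus compression: identical blocks
# are stored once, keyed by content hash, and unique blocks are compressed.
import hashlib
import zlib

BLOCK_SIZE = 4096
store: dict = {}                   # content hash -> compressed block

def write(data: bytes) -> list:
    """Store data block by block; return the hashes that reference it."""
    refs = []
    for i in range(0, len(data), BLOCK_SIZE):
        block = data[i:i + BLOCK_SIZE]
        digest = hashlib.sha256(block).hexdigest()
        if digest not in store:                    # dedup: keep one copy per unique block
            store[digest] = zlib.compress(block)   # compress what is actually stored
        refs.append(digest)
    return refs

def read(refs: list) -> bytes:
    return b"".join(zlib.decompress(store[d]) for d in refs)

vm_disk = b"A" * 8192 + b"B" * 4096   # two identical blocks plus one unique block
refs = write(vm_disk)
assert read(refs) == vm_disk
print(f"{len(refs)} logical blocks written, {len(store)} unique blocks stored")
```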

Examples include branch offices, such as supermarkets or retailers with no dedicated data center room, which are likely to keep a server in a manager’s office, said Thomas Goepel, director of HPE’s product management for hyper-converged systems.

Higher-end HPE HCI products, such as the SimpliVity 380, emphasize operational efficiencies, but their compute power may exceed the needs of many remote branch offices, and at a higher cost, so the 2600’s price-performance ratio may be more attractive, said Dana Gardner, principal analyst at Interarbor Solutions LLC in Gilford, N.H.

“Remote branch offices tend to look at lower-cost approaches over efficiencies,” he said. “Higher-end [HPE HCI systems] and in some cases the lower-end boxes, may not be the right fit for what we think of as a ROBO server.”

Dana Gardner, Interarbor Solutions

On the other hand, many smaller IT shops lack internal technical talent and may struggle to implement more complex VDI workloads.

“[VDI] requires a lot of operational oversight to get it up and rolling and tuned in with the rest of the environment,” Gardner said.

The market for higher compute density HCI to run complex workloads that involve VDI applications represents a rich opportunity, concurred Steve McDowell, a senior analyst at Moor Insights & Strategy. “It’s a smart play for HPE, and should compete well against Nutanix,” he said.


The HPE SimpliVity 2600, based on the company’s Apollo 2000 platform, also overlaps with HPE’s Edgeline systems unveiled last month, although there are distinct differences in the software stack and target applications, McDowell said. The 2600 is more of an appliance with a fixed feature set contained in a consolidated management framework.

The Edgeline offering, meanwhile, targets infrastructure consolidation out on the edge with a more even balance of compute, storage and networking capabilities.

Higher-end HPE HCI offerings have gained traction among corporate users. Revenues for these systems surged 280% in this year’s first quarter compared with a year ago, versus 76% growth for the overall HCI market, according to IDC, the market research firm based in Framingham, Mass.

“There has been a tremendous appetite for HCI products in general because they come packaged and ready to install,” Gardner said. “HPE is hoping to take advantage of this with iterations that allow them to expand their addressable market, in this case downward.”

The 2600 will be available sometime by mid-July, according to HPE.

Tableau acquisition of MIT AI startup aims at smarter BI software

Tableau Software has acquired AI startup Empirical Systems in a bid to give users of its self-service BI platform more insight into their data. The Tableau acquisition, announced today, adds an AI-driven engine that’s designed to automate the data modeling process without requiring the involvement of skilled statisticians.

Based in Cambridge, Mass., Empirical Systems started as a spinoff from the MIT Probabilistic Computing Project. The startup claims its analytics engine and data platform are able to automatically model data for analysis and then provide interactive and predictive insights into that data.

The technology is still in beta, and Francois Ajenstat, Tableau’s chief product officer, wouldn’t say how many customers are using it as part of the beta program. But he said the current use cases are broad and include companies in retail, manufacturing, healthcare and financial services. That wide applicability is part of the reason why the Tableau acquisition happened, he noted.

Catch-up effort with advanced technology

In some ways, however, the Tableau acquisition is a “catch-up play” on providing automated insight-generation capabilities, said Jen Underwood, founder of Impact Analytix LLC, a product research and consulting firm in Tampa. Some other BI and analytics vendors “already have some of this,” Underwood said, citing Datorama and Tibco as examples.


Empirical’s automated modeling and statistical analysis tools could put Tableau ahead of its rivals, she said, but it’s too soon to tell without having more details on the integration plans. Nonetheless, she said she thinks the technology will be a useful addition for Tableau users.

“People will like it,” she said. “It will make advanced analytics easier for the masses.”

Tableau already has been investing in AI and machine learning technologies internally. In April, the company released its Tableau Prep data preparation software, with embedded fuzzy clustering algorithms that employ AI to help users group data sets together. Before that, Tableau last year released a recommendation engine that shows users recommended data sources for analytics applications. The feature is similar to how Netflix suggests movies and TV shows based on what a user has previously watched, Ajenstat explained.
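As a rough idea of what fuzzy grouping does during data preparation, the sketch below clusters string values whose similarity crosses a threshold, using Python’s standard-library difflib; the threshold and greedy grouping strategy are illustrative choices, not Tableau Prep’s actual algorithm.

```python
# Toy fuzzy grouping with the standard library: values whose string
# similarity crosses a threshold land in the same group.
from difflib import SequenceMatcher

def fuzzy_groups(values: list, threshold: float = 0.6) -> list:
    """Greedily assign each value to the first group it closely matches."""
    groups: list = []
    for value in values:
        for group in groups:
            if SequenceMatcher(None, value.lower(), group[0].lower()).ratio() >= threshold:
                group.append(value)
                break
        else:
            groups.append([value])
    return groups

print(fuzzy_groups(["Acme Inc", "ACME Inc.", "Acme Incorporated", "Globex"]))
# groups the three 'Acme' variants together and leaves 'Globex' on its own
```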

Integration plans still unclear

Ajenstat wouldn’t comment on when the Tableau acquisition will result in Empirical’s software becoming available in Tableau’s platform, or whether customers will have to pay extra for the technology.

Video: Empirical CEO Richard Tibbetts on the company’s automated data modeling technology.

“Whether it’s an add-on or how it’s integrated, it’s too soon to talk about that,” he said.

However, he added that the Empirical engine will likely be “a foundational element” in Tableau, at least partially running behind the scenes, with a goal that “a lot of different things in Tableau will get smarter.”

Unlike some predictive algorithms that require large stores of data to function properly, Empirical’s software works with “data of all sizes, both large and small,” Ajenstat said. When integration does eventually begin to happen, Ajenstat said Tableau hopes to be able to better help users identify trends and outliers in data sets and point them toward factors they could drill into more quickly.
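As a toy example of the kind of automated outlier flagging that implies, the sketch below applies a simple z-score rule to a hypothetical series; Empirical’s probabilistic models are far more sophisticated than this.

```python
# Toy outlier flagging with a z-score rule on hypothetical data.
from statistics import mean, stdev

def flag_outliers(values: list, z: float = 2.0) -> list:
    """Return values lying more than z standard deviations from the mean."""
    mu, sigma = mean(values), stdev(values)
    return [v for v in values if sigma and abs(v - mu) / sigma > z]

daily_revenue = [102, 98, 105, 97, 101, 99, 480]   # made-up series with one spike
print(flag_outliers(daily_revenue))                # [480]
```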

Augmented analytics trending

Tableau’s move around augmented analytics is in line with what Gartner pointed to as a key emerging technology in its 2018 Magic Quadrant report on BI and analytics platforms.

Various vendors are embedding machine learning tools into their software to aid with data preparation and modeling and with insight generation, according to Gartner. The consulting and market research firm said the augmented approach “has the potential to help users find the most important insights more quickly, particularly as data complexity grows.”

Such capabilities have yet to become mainstream product requirements for BI software buyers, Gartner said in the February 2018 report. But they are “a proof point for customers that vendors are innovating at a rapid pace,” it added.

The eight-person team from Empirical Systems will continue to work on the software after the Tableau acquisition. Tableau, which didn’t disclose the purchase price, also plans to create a research and development center in Cambridge.

Senior executive editor Craig Stedman contributed to this story.