
AT&T design for open routing covers many uses

AT&T has introduced an ambitious open design for a distributed, disaggregated chassis that hardware makers can use to build service provider-class routers ranging from single line card systems to large clusters of routing hardware.

AT&T recently submitted the specs for its white box architecture to the Open Compute Project, an initiative to share with the general IT industry designs for server and data center components. The AT&T design builds a router chassis around Broadcom’s StrataDNX Jericho2 system-on-a-chip for Ethernet switches and routers.

AT&T has been a leading advocate of open, disaggregated hardware as a way to reduce capital expenses. It plans to use the new design for the edge and core routers that make up its global Common Backbone (CBB), the network that carries the service provider’s IP traffic.

Also, AT&T plans to use the Jericho2 chip in its design to power 400-Gbps interfaces for the carrier’s next-generation 5G wireless network services.

For several years, AT&T has advocated for an open disaggregated router, which means the hardware is responsible only for data traffic while its control plane runs in separate software. Therefore, AT&T’s new design specs are not a surprise.

“What is indeed interesting is that they are taking the approach to all router use cases including high-performance, high-capacity routing using this distributed chassis scale-out approach,” Rajesh Ghai, an analyst at IDC, said.

AT&T design committed to hardware neutrality

AT&T’s hardware-agnostic design is ambitious because its use in carrier-class routing would require a new approach to procuring, deploying, managing and orchestrating hardware, Ghai said. “I know they have tried [to develop that approach] in the lab over the past year with a startup.”

Whether hardware built on AT&T specs can find a home outside of the carrier’s data centers remains to be seen.

“AT&T’s interest in releasing the specs for everyone is to drive adoption of the open hardware approach by other SPs [service providers] and hence drive a new market for disaggregated routers,” Ghai said. “But this requires sophistication on the part of the SP that few have. So, we’ll have to see who jumps in next.”

At the very least, vendors know the specifications they must meet to sell router software to AT&T, Ghai said.

AT&T’s design specifies three key building blocks for router clusters. The smallest is a line card system that supports 40 100-Gbps ports, plus 13 400-Gbps fabric-facing ports. In the middle is a line card system supporting 10 400-Gbps client ports, plus 13 400-Gbps fabric-facing ports.

For the largest systems, there is a fabric device that supports 48 400-Gbps ports. AT&T’s specs also cover a fabric system with 24 400-Gbps ports.

AT&T has taken a more aggressive approach to open hardware than rival Verizon. The latter has said it would run its router control plane in the cloud and use it to manage devices from Cisco and Juniper Networks, Ghai said.


New Azure Blueprint enables SWIFT CSP compliance on Azure

This morning at the SIBOS conference in London, we announced a new Azure Blueprint, introduced in conjunction with recent efforts to enable SWIFT connectivity in the cloud. The blueprint supports our joint customers in compliance monitoring and auditing of SWIFT infrastructure for cloud-native payments, as described on the Official Microsoft Blog.

SWIFT is the world’s leading provider of secure financial messaging services used and trusted by more than 11,000 financial institutions in more than 200 countries and territories. Today, enterprises and banks conduct these transactions by sending payment messages over the highly secure SWIFT network which leverages on-premises installations of SWIFT technology. SWIFT Cloud Connect creates a bank-like wire transfer experience with the added operational, security, and intelligence benefits the Microsoft Cloud offers.

Azure Blueprints is a free service that enables customers to define a repeatable set of Azure resources that implement and adhere to standards, patterns, and requirements. It lets customers set up governed Azure environments that can scale to support production implementations for large-scale migrations. Azure Blueprints includes mappings for key compliance standards such as ISO 27001, NIST SP 800-53, PCI-DSS, UK OFFICIAL, IRS 1075, and UK NHS.
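
For teams that script their governance, a blueprint can also be assigned from PowerShell. The sketch below assumes the Az.Blueprint module and a published blueprint named swift-csp in the target subscription; the blueprint name, assignment name, and region are illustrative, and the exact parameter set may vary by module version.

    # A minimal sketch: assign a published blueprint to the current subscription.
    # Assumes Connect-AzAccount has already been run; 'swift-csp' is a placeholder name.
    Install-Module -Name Az.Blueprint -Scope CurrentUser
    $subId = (Get-AzContext).Subscription.Id
    $bp = Get-AzBlueprint -SubscriptionId $subId -Name 'swift-csp'
    New-AzBlueprintAssignment -Name 'swift-csp-assignment' -Blueprint $bp `
        -SubscriptionId $subId -Location 'westeurope'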

The new SWIFT blueprint maps Azure built-in policies to the security controls framework of SWIFT’s Customer Security Programme (CSP), giving financial services organizations agility in creating and monitoring secure, compliant SWIFT infrastructure environments.

The Azure blueprint includes mappings to the SWIFT CSP security controls.

We are committed to helping our customers leverage Azure in a secure and compliant manner. Over the next few months, we will release new built-in blueprints for HITRUST, FedRAMP, and Center for Internet Security (CIS) Benchmark. If you have suggestions for new or existing compliance blueprints, please share them via the Azure Governance Feedback Forum.

Learn more about the SWIFT CSP blueprint in our documentation.


TigerGraph Cloud releases graph database as a service

With the general release of TigerGraph Cloud on Wednesday, TigerGraph introduced its first native graph database as a service.

In addition, the vendor announced that it secured $32 million in Series B funding, led by SIG.

TigerGraph, founded in 2012 and based in Redwood City, Calif., is a native graph database vendor whose products, first released in 2016, enable users to manage and access their data in ways that differ from traditional relational databases.

Graph databases simplify the connection of data points and enable them to simultaneously connect with more than one other data point. Among the benefits are the ability to significantly speed up the process of developing data into insights and to quickly pull data from disparate sources.

Before the release of TigerGraph Cloud, TigerGraph customers could take advantage of the power of graph databases, but they were largely on-premises users who had to handle their own upgrades and manage the database themselves.

“The cloud makes life easier for everyone,” said Yu Xu, CEO of TigerGraph. “The cloud is the future, and more than half of database growth is coming from the cloud. Customers asked for this. We’ve been running [TigerGraph Cloud] in a preview for a while — we’ve gotten a lot of feedback from customers — and we’re big on the cloud. [Beta] customers have been using us in their own cloud.”

Regarding the servicing of the databases, Xu added: “Now we take over this control, now we host it, we manage it, we take care of the upgrades, we take care of the running operations. It’s the same database, but it’s an easy-to-use, fully SaaS model for our customers.”

In addition to providing graph database management as a service and enabling users to move their data management to the cloud, TigerGraph Cloud provides customers an easy entry into graph-based data analysis.

Some of the most well-known companies in the world, at their core, are built on graph databases.

Google, Facebook, LinkedIn and Twitter are all built on graph technology. Those companies, however, have vast teams of software developers to build their own graph databases and teams of data scientists to do their own graph-based data analysis, noted TigerGraph chief operating officer Todd Blaschka.

“That is where TigerGraph Cloud fits in,” Blaschka said. “[TigerGraph Cloud] is able to open it up to a broader adoption of business users so they don’t have to worry about the complexity underneath the hood in order to be able to mine the data and look for the patterns. We are providing a lot of this time-to-value out of the box.”

TigerGraph Cloud comes with 12 starter kits that help customers quickly build their applications. It also doesn’t require users to configure or manage servers, schedule monitoring or deal with potential security issues, according to TigerGraph.

That, according to Donald Farmer, principal at TreeHive Strategy, is a differentiator for TigerGraph Cloud.


“It is the simplicity of setting up a graph, using the starter kits, which is their great advantage,” he said. “Classic graph database use cases such as fraud detection and recommendation systems should be much quicker to set up with a starter kit, therefore allowing non-specialists to get started.”

Graph databases, however, are not better for everyone and everything, according to Farmer. They are better than relational databases for specific applications, in particular those in which augmented intelligence and machine learning can quickly discern patterns and make recommendations. But they are not yet as strong as relational databases in other key areas.

“One area where they are not so good is data aggregation, which is of course a significant proportion of the work for business analytics,” Farmer said. “So relational databases — especially relational data warehouses — still have an advantage here.”

Despite drawbacks, the market for graph databases is expected to grow substantially over the next few years.

And much of that growth will be in the cloud, according to Blaschka.

Citing a report from Gartner, he said that 68% of graph database market growth will be in the cloud, while the graph database market as a whole is forecast to grow at least 100% year over year through 2022.

“The reason we’re seeing this growth so fast is that graph is the cornerstone for technologies such as machine learning, such as artificial intelligence, where you need large sets of data to find patterns to find insight that can drive those next-gen applications,” he said. “It’s really becoming a competitive advantage in the marketplace.”

As for the $32 million in Series B financing, Xu said it will be used to help TigerGraph expand its reach into new markets and accelerate its emphasis on the cloud.


Eclipse launches Che 7 IDE for Kubernetes development

SAN FRANCISCO — The Eclipse Foundation has introduced Eclipse Che 7, a new developer workspace server and IDE to help developers build cloud-native, enterprise applications on Kubernetes.

The foundation debuted the new technology at the Oracle Code One conference here. Eclipse Che is essentially a cloud-based IDE built on technology Red Hat acquired from Codenvy, and Red Hat developers are still heavily involved with the Eclipse project. With a focus on Kubernetes, Eclipse Che 7 abstracts away some of the development complexities associated with Kubernetes and helps to close the gap between the development and operations environments, said Mike Milinkovich, executive director of the Eclipse Foundation.

“We think this is important because it’s the first cloud-based IDE that is natively Kubernetes,” he said. “It provides all of the pieces that a developer needs to be able to build and deploy a Kubernetes application.”

Eclipse Che 7 helps developers who may not be so familiar with Kubernetes by providing not just the IDE, but also its plug-ins and their dependencies. In addition, Che 7 automatically adds all the build and debugging tools developers need for their applications.

Mike Milinkovich

“It helps reduce the learning curve that’s related to Kubernetes that a lot of developers struggle with, in terms of setting up Kubernetes and getting their first applications up and running on Kubernetes,” Milinkovich said.

The technology can be deployed on a public Kubernetes cluster or an on-premises data center, and it provides centrally hosted private developer workspaces. In addition, the Eclipse Che IDE is based on an extended version of Eclipse Theia that provides an in-browser experience like Microsoft’s Visual Studio Code, Milinkovich said.

Eclipse Che and Eclipse Theia are part of cloud-native offerings from vendors such as Google, IBM and Broadcom. Che also lies at the core of Red Hat CodeReady Workspaces, a development environment for Red Hat OpenShift.

Moreover, Broadcom’s CA Brightside product uses Eclipse Che to bring a modern, open approach to the mainframe platform. Che also integrates with IBM Codewind to provide a low barrier to entry for developing in a production container environment.


“It had to happen, and it happened sooner than later: The first IDE delivered inside Kubernetes,” said Holger Mueller, an analyst at Constellation Research.

There are benefits of having developers build software with the same mechanics and platforms on the IDE side as their target production environment, he explained, including similar experience and faster code deployments.

“And Kubernetes is hard to manage, so it will be helpful to have an out-of-the-box offering from an IDE vendor,” Mueller said. “But nothing beats the advantage of being able to standardize and quickly launch uniform and consistent developer environments. This gives development team scale to build their next-gen applications and helps their enterprise accelerate.”

Eclipse joins a group that includes major vendors that want to limit the complexity of Kubernetes. IBM and VMware recently introduced technology to reduce Kubernetes complexity for developers and operations staff.

For instance, IBM’s Kabanero open source project to simplify development and deployment of apps on Kubernetes uses Che as its hosted IDE.

The future of developer tools will be cloud-based, Milinkovich said. “Because of the complexity of the application scenarios today, developers are spending a lot of their time and energy building out development environments when they could just move developer workspaces into containers,” he said. “It’s far easier to update the entire development team to new runtime requirements. And you can push out new tools across the entire development team.”

The IDE is the last big piece of technology that developers use on a daily basis that has not moved into the cloud, so moving the IDE into the cloud is the next logical step, Milinkovich said.


Equip Yourself for Battle with the Xbox Wireless Controller – Midnight Forces II Special Edition

In 2014, we introduced the fan-favorite, military-inspired Forces series for the Xbox Wireless Controller. Today, we’re introducing a new take on one of the most popular designs from the series: the Xbox Wireless Controller – Midnight Forces II Special Edition. It features the modern blue camouflage pattern you love, plus a textured grip to help you stay on target in the heat of battle and a 3.5mm stereo headset jack for a fully immersive gaming experience. Like all Xbox Wireless Controllers, the Midnight Forces II Special Edition includes Bluetooth technology for gaming on Windows 10 devices or Samsung Gear VR, as well as custom button mapping through the Xbox Accessories app so you can set up your controller just the way you like it.

Make your camo-inspired gaming set complete with the officially licensed Midnight Forces II Special Edition Xbox Pro Charging Stand by Controller Gear. This charging stand is built with the same high-quality materials as Xbox Wireless Controllers so it’s always an exact match, while the magnetic contact system ensures a perfect fit and secure charge every time. Each Xbox Pro Charging Stand comes with a premium charging stand, battery cover, rechargeable battery and 6-foot power cord.

The Midnight Forces II Special Edition Xbox Pro Charging Stand and the Xbox Wireless Controller – Midnight Forces II Special Edition are available today at the Microsoft Store in the U.S., Canada and Mexico. The controller is also available online through Walmart in the U.S. and Canada starting today, and it will arrive in Walmart’s physical stores beginning in mid-October.


New Skype features boost your productivity and enrich your chat experience

We recently introduced several features that help you boost your productivity when sending messages in Skype* and enrich your overall chat experience. New features include draft messages, the ability to bookmark messages and preview media and files before sending, as well as a new approach to display multiple photos or videos. We also launched split window, so you never mix up conversations again!

Message drafts

Now you’ll never forget about messages that didn’t get sent. Any message that you typed, but didn’t send, is saved in the corresponding conversation and marked with the [draft] tag—so you can easily recognize, finish, and send it later. Messages saved as drafts are even available when you leave and come back to your Skype app.

Message bookmarks

You can now bookmark any message in Skype—whether it’s work related or family photos—and come back to it with one click or tap anytime! Just right click or long press the message and click or tap Add bookmark. The message is added to the Bookmarks screen and is saved with your other bookmarked messages.

Preview media and files before sending

You can now preview photos, videos, and files that you’ve selected to share before sending. Once you select media and files to share, they’re displayed in the message panel, so you can ensure they’re the ones you want to share with your contact. You can also remove ones added by mistake or add new ones right from the panel. In addition, should you want to write an explanation or description for what you’re sending, you can add a message that will be sent along with the files.

New approach for displaying multiple photos or videos sent at once

If you want to share a bunch of photos with your friends or family after a great vacation or a nice event, just do it and Skype will make sure they’re nicely presented in the conversation. You’ll see a tidy album in the chat history with all the photos combined, and you can view each one by navigating and clicking between the photos or videos in the album.

Never mix up conversations in Skype again with split window

A few months back, we announced the launch of split window for Windows 10, which lets you put your contact list in one window, and each conversation you open in separate windows. We’re pleased to say that this feature is now available for all versions of Windows, Mac, and Linux on the latest version of Skype.* To learn more about how to use the split window view, visit our FAQs.

Let us know what you think

At Skype we’re driven by the opportunity to connect our global community of hundreds of millions, empowering them to feel closer and achieve more together. As we pursue these goals, we’re always looking for new ways to enhance the experience and improve quality and reliability. We listen to your feedback and are wholly committed to improving the Skype experience based on what you tell us. We’re passionate about bringing you closer to the people in your life—so if we can do that better, please let us know.

*These new features are available on the latest version of Skype across all platforms, except for split window, which is currently only available on desktop.


SAP Product Content Hub underpins CX initiatives in C/4HANA

SAP has introduced cloud-based software that lets C/4HANA customers distribute critical product information across e-commerce, marketing, sales and customer support teams.

The new Product Content Hub, released this week, includes APIs that companies can use to maintain up-to-date product catalogs, documentation and pricing on 1,500 e-commerce platforms, including Amazon, eBay and Google.

SAP built the software into C/4HANA to also provide users with a single tool for distributing vital product information across customer service, sales and marketing teams. C/4HANA is SAP’s cloud-based customer experience and e-commerce platform.

SAP developed Product Content Hub to improve product information management (PIM) within C/4HANA, said Riad Hijal, SAP vice president of commerce strategy. AI built into C/4HANA can tag and route content from the PIM platform as managers make product updates.

Keeping product data current on e-commerce sites is difficult because organizations often have disparate sources, said Constellation Research analyst Nicole France. Having one trusted place where users know the latest product information resides could save workers time.

Product Content Hub launches SAP into the PIM market, where it will compete with products from InRiver, Salsify and Pimcore. SAP aims to differentiate itself from the pack by integrating its PIM platform with the company’s flagship ERP system as well as C/4HANA.


However, using C/4HANA as a one-stop repository for managing product data will require users to change how they work, France said.

“The reason tools like these are getting more attention is that companies are figuring out that when they rethink [workflow] processes, it really doesn’t make sense to have a bunch of isolated islands of information,” France said.

Still, even with Product Content Hub, SAP customers are likely to continue using some separate content management repositories, such as Box, OpenText and Microsoft Excel.

“The idea here is not to tell customers to stop using any existing tools that do the job,” Hijal said.

SAP went live with Product Content Hub early this year, but waited to formally announce it to iron out technical issues and add a few features, Hijal said. Monthly subscription pricing varies, depending on the number of products on the SaaS platform and the number of distribution channels.


3 Fundamental Capabilities of VM Groups You Can’t Ignore

In a previous post, I introduced you to VM groups in Hyper-V and demonstrated how to work with them using PowerShell. I’m still working with them to see how I will incorporate them into my everyday Hyper-V work, but I already know that I wish the cmdlets for managing groups worked a little differently. But that’s not a problem. I can create my own tooling around these commands and build a solution that works for me. Let me share what I’ve come up with so far.

1. Finding Groups

As I explained last time, you can have a VM group that contains a collection of virtual machines, or nested management groups. By default, Get-VMGroup will return all groups. Yes, you can filter by name but you can’t filter by group type. If I want to see only Management groups, I need to use a PowerShell expression like this:
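
    # Filter Get-VMGroup output down to management groups only.
    Get-VMGroup | Where-Object { $_.GroupType -eq 'ManagementCollectionType' }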

This is not a complicated expression but it becomes tedious when I am repeatedly typing or modifying this command. This isn’t an issue in a script, but for everyday interactive work, it can be a bit much. My solution was to write a new command, Find-VMGroup, that works identically to Get-VMGroup except this version allows you to specify a group type.
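
In practice, it looks something like this (a sketch; the exact parameter and value names for the type filter may differ, and Get-Help Find-VMGroup shows the real syntax):

    # List only the management groups; -Type and 'Management' are assumed names.
    Find-VMGroup -Type Management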

Finding specific VM Group types with PowerShell

Your output might vary from the screenshot but I think you get the idea. The default is to return all groups, but then you might as well use Get-VMGroup. And because the group type is coded into the function, you can use tab complete to select a value.

Interested in getting the Find-VMGroup command? I have a section on how to install the module a little further down the page.

2. Expanding Groups

Perhaps the biggest issue (and even that might be a bit strong) I had with the VM Group command is that ultimately, what I really want are the members of the group. I want to be able to use groups to do something with all of the members of that group. And by members, I mean virtual machines. It doesn’t matter to me if the group is a VM Collection or Management Collection. Show me the virtual machines!

Again, this isn’t technically difficult.
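
For a simple VM collection, the members hang right off the group object (the group name here is just an example):

    # The VMMembers property holds the virtual machines in a VM collection.
    (Get-VMGroup -Name 'LabSetup').VMMembers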

 Getting VM Group members

If you haven’t figured it out by now, I prefer simple. Getting virtual machines from a management group requires even more steps. Once again, I wrote my own command called Expand-VMGroup.

Expanding a single VM group with a custom PowerShell command

The output has been customized a bit to provide a default, formatted view. There are in fact other properties you could work with.

Viewing all properties of an expanded VM group

Depending on the command, you might be able to pipe these results to another Hyper-V command. But I know that many of the Hyper-V cmdlets will take pipeline input by value. This allows you to pass a list of virtual machine names to a command. I added a parameter to Expand-VMGroup that will write just the virtual machine names to the pipeline as a list. Now I can run commands like this:
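
    # Send just the VM names from the group down the pipeline and start them.
    # This assumes the switch is named -List; Get-Help Expand-VMGroup has the real syntax.
    Expand-VMGroup -Name 'LabSetup' -List | Start-VM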

Piping Expand-VMGroup to another Hyper-V command

Again, the module containing this command can be found near the end of the article and can be installed using Install-Module.

3. Starting and Stopping Groups

The main reason I want to use VM groups is to start and stop groups of virtual machines all at once. I could use Expand-VMGroup and pipe results to Start-VM or Stop-VM but I decided to make specific commands for starting and stopping all virtual machine members of a group. If a member of the group is already in the targeted state, it is skipped.
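
Something along these lines, assuming the commands are named Start-VMGroup and Stop-VMGroup (the module’s help lists the actual names and parameters):

    # Start every member of the group that isn't already running.
    Start-VMGroup -Name 'LabSetup'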

Starting members of a VM group

The third member of this group was already running so it was skipped. Now I’ll shut down the group.
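
And the companion command, under the same naming assumption:

    # Shut down the members of the group that are still running.
    Stop-VMGroup -Name 'LabSetup'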

Stopping members of a VM group

It may not seem like much, but every little thing I can do to get more done with less typing and effort is worth my time. I’m using full parameter names and typing out more than I actually need to for the sake of clarity.

How Do I Get These Commands?

Normally, I would show you code samples that you could use. But in this case, I think these commands are ready to use as-is. You can get the commands from my PSHyperVTools module which is free to install from the PowerShell Gallery.
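
Installation is a one-liner (the -Scope parameter is optional; I like keeping modules per user):

    # Install the module from the PowerShell Gallery for the current user.
    Install-Module -Name PSHyperVTools -Scope CurrentUser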

If you haven’t installed anything from the PowerShell Gallery before, you might get a prompt to update the version of NuGet. Go ahead and say yes. You’ll also be prompted about installing from an untrusted repository. You aren’t installing this on a mission-critical server, so you should be OK. Once installed, you can use the commands that I’ve demonstrated. They should all have help and examples.

Getting help for Expand-VMGroup

The module is open source, so if you’d like to review the code or the README first, jump over to https://github.com/jdhitsolutions/PSHyperV. There are a few other commands and features of the module that I hope to write about in a future article or two. But for now, I hope you’ll give these commands a spin and let me know what you think in the comments section below!


CloudKnox Security adds privileged access features to platform

CloudKnox Security, a vendor in identity privilege management, introduced new features to its Cloud Security Platform, including Privilege-on-Demand, Auto-Remediation for Machine Identities and Anomaly Detection.

The offerings intend to increase enterprise protection from identity and resource risks in hybrid cloud environments. According to CloudKnox Security, the new release is an improvement on its existing Just Enough Privileges Controller, which enables enterprises to reduce overprovisioned identity privileges to appropriate levels across VMware, AWS, Azure and Google Cloud.

Privileged accounts are often targets for attack, and a successful hacking attempt can result in full control of an organization’s data and assets. The 2019 Verizon Data Breach Investigations Report highlighted privileged account misuse as the top threat for security incidents and the third-leading cause of security breaches.

The Privilege-on-Demand feature from CloudKnox Security enables companies to grant privileges to users for a certain amount of time, and on a specific resource, on an as-needed basis. The options include Privilege-on-Request, Privilege Self-Grant and Just-in-Time Privilege, which give users access to a specific resource within a set time window to perform an action.

The Auto-Remediation feature can frequently and automatically dismiss unused privileges of machine identities, according to the vendor. For example, the feature can be useful for dealing with service accounts that perform repetitive tasks with limited privileges, because when these accounts are overprovisioned, organizations are particularly vulnerable to privilege misuse.

The Anomaly Detection feature creates risk profiles for users and resources based on data obtained by CloudKnox’s Risk Management Module. According to the vendor, the software intends to detect abnormal behaviors from users, such as a profile carrying out a high-risk action for the first time on a resource they have never accessed.

The company will demonstrate the new features at Black Hat USA in Las Vegas this year for the first time. CloudKnox’s update to its Cloud Security Platform follows competitor CyberArk’s recent updates to its own privileged access management offering, including zero-trust access, full visibility and control of privileged activities for customers, biometric authentication and just-in-time provisioning. Other market competitors that promise insider risk reduction, identity governance and privileged access management include BeyondTrust and One Identity.


Adobe Experience Platform adds features for data scientists

After almost a year in beta, Adobe has introduced Query Service and Data Science Workspace to the Adobe Experience Platform to enable brands to deliver tailored digital experiences to their customers, with real-time data analytics and understanding of customer behavior.

Powered by Adobe Sensei, the vendor’s AI and machine learning technology, Query Service and Data Science Workspace intend to automate tedious, manual processes and enable real-time data personalization for large organizations.

The Adobe Experience Platform — previously the Adobe Cloud Platform — is an open platform for customer experience management that synthesizes and breaks down silos for customer data in one unified customer profile.

According to Adobe, the volume of data organizations must manage has exploded. IDC predicted the Global DataSphere will grow from 33 zettabytes in 2018 to 175 zettabytes by 2025. And while more data is better, the sheer volume makes it difficult for businesses and analysts to sort, digest and analyze all of it to find answers. Query Service intends to simplify this process, according to the vendor.

Query Service enables analysts and data scientists to perform queries across all data sets in the platform instead of manually combing through siloed data sets to find answers for data-related questions. Query Service supports cross-channel and cross-platform queries, including behavioral, point-of-sale and customer relationship management data. Query Service enables users to do the following:

  • run queries manually with interactive jobs or automatically with batch jobs;
  • subgroup records based on time and generate session numbers and page numbers;
  • use tools that support complex joins, nested queries, window functions and time-partitioned queries;
  • break down data to evaluate key customer events; and
  • view and understand how customers flow across all channels.

While Query Service simplifies the data identification process, Data Science Workspace helps to digest data and enables data scientists to draw insights and take action. Using Adobe Sensei’s AI technology, Data Science Workspace automates repetitive tasks and understands and predicts customer data to provide real-time intelligence.

Also within Data Science Workspace, users can take advantage of tools to develop, train and tune machine learning models to solve business challenges, such as calculating customer predisposition to buy certain products. Data scientists can also develop custom models to pull particular insights and predictions to personalize customer experiences across all touchpoints.

Additional capabilities of Data Science Workspace enable users to perform the following tasks:

  • explore all data stored in Adobe Experience Platform, as well as machine learning libraries like Spark ML and TensorFlow;
  • use prebuilt or custom machine learning recipes for common business needs;
  • experiment with recipes to create and train unlimited tracked instances;
  • publish intelligent services recipes without IT to Adobe I/O; and
  • continuously evaluate intelligent service accuracy and retrain recipes as needed.

Adobe data analytics features Query Service and Data Science Workspace were first introduced as part of the Adobe Experience Platform beta in September 2018. Adobe intends these tools to improve how data scientists handle data on the Adobe Experience Platform and create meaningful models that developers can build on.
