John Deere’s software and AI journey

SAN FRANCISCO — John Deere, the brand name of Deere & Company, brings to mind green tractors in a golden field. It elicits thoughts of the earth, of planting and growing, of hard labor. Yet, for this classic American corporation, those thoughts are only part of the picture.

John Deere has manufactured and sold agricultural machinery and equipment for more than 180 years. It’s one of the biggest farm machinery manufacturers in the world. Over the last few years, the multibillion-dollar company has made significant progress on its AI journey, developing AI-driven technology and embedding it in its machines.

Not necessarily new

But developing and using advanced technologies isn’t new to John Deere, said Julian Sanchez, director of precision agriculture at John Deere, in an interview at the AI Summit conference Sept. 26.

For some 25 years, the company has put GPS capabilities into its tractors and other machines, enabling farmers to track their work. John Deere has also built self-driving machines for more than two decades.

In the last decade, John Deere technology teams have worked to embed intelligent capabilities such as computer vision and machine learning into its machines.

“We’re a company that has very, very quickly reinvented itself from a hardware manufacturer to a developer” of software and AI, Sanchez said.

Despite the company’s long history of developing machinery and technology, making that major push to create advanced software didn’t happen quickly.

It really started with recognizing that we are rapidly becoming a software company.
Julian Sanchez, director of precision agriculture, John Deere

“It really started with recognizing that we are rapidly becoming a software company” more than a decade ago, Sanchez said. John Deere began recruiting heavily, looking for talent from universities and research programs.

To advance its AI journey, the company focused heavily on developing software teams and creating a software culture.

For example, John Deere maintains an internal list of the languages its employees speak, Sanchez said. The company began adding programming languages to that list and hiring software developers in large numbers.

Marrying hardware with software

It was difficult to merge the equipment with the software, Sanchez said.

“We’ve been working very hard the last decade to marry those two,” he said.

On the hardware side, the company rolled out significant changes to its machinery about once a year. With software, though, changes can be introduced much faster, sometimes as often as weekly. Pushing significant updates that bring new features to older pieces of hardware wasn’t easy, Sanchez said.

Still, he added, the last two model years of harvesting combine machines have received significant feature updates, adding new capabilities, without having to change any hardware.

The work appears to have paid off. A number of John Deere machines can now automatically perform farming actions with little to no real-time human input. For example, Sanchez said, the company makes harvesters equipped with video cameras.

With computer vision and machine learning, the harvesters can analyze the quality of the grain as it’s harvested and make adjustments to prevent damage, giving farmers consistent grain quality.
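
In outline, the loop Sanchez describes pairs a vision model with the machine’s settings. The Python below is a minimal, illustrative sketch of such a feedback loop; the classifier, thresholds and setting names are hypothetical, not John Deere’s actual control logic.

```python
# Illustrative only: score each camera frame for kernel damage and
# nudge a (hypothetical) machine setting in response.

def grain_quality_loop(frames, classify, settings):
    """classify(frame) returns the estimated fraction of damaged kernels."""
    for frame in frames:
        damage = classify(frame)
        if damage > 0.05:                    # too much cracked grain:
            settings["rotor_speed"] -= 10    # ease off the threshing rotor
        elif damage < 0.01:                  # plenty of headroom:
            settings["rotor_speed"] += 5     # harvest faster
    return settings

# Example with a stand-in classifier that flags damage on every frame:
settings = grain_quality_loop(
    frames=range(3),
    classify=lambda frame: 0.08,
    settings={"rotor_speed": 400},
)
print(settings)  # {'rotor_speed': 370}
```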

The farmers

The company’s farmer customers tend to adapt quickly to any new software, Sanchez said.

“Farmers have actually always been to a large extent early adopters of technology,” he said.

Farming is challenging, he added, and farmers move quickly to use technology that will make their lives easier or help cut costs.

John Deere tries to ensure its technology is “walk up easy,” as Sanchez called it.

“For the average farmer, we want the majority of functions for a vehicle or technology easily learnable if not in minutes then in hours,” he said.

In 2017, the agricultural company opened a technology office, John Deere Labs, in San Francisco, focused on furthering its AI journey and developing machine learning and AI-driven technologies.

The AI Summit was held Sept. 25 to 26 at the Palace of Fine Arts.

Oracle applications development EVP on Fusion, SaaS and what’s ahead

SAN FRANCISCO — Oracle executive vice president Steve Miranda has worked at the company since 1992 and leads all application development at the vendor. He was there well before Oracle made its acquisition-driven push against application rival SAP in the mid-2000s, with the purchases of PeopleSoft and Siebel.

In 2007, Oracle put Miranda in charge of Fusion Applications, the next-generation software suite that took a superset of earlier application functionality, added a modern user experience and embedded analytics, and offered both on-premises and cloud deployments. Fusion Applications became generally available in 2011, and since then Oracle has continued to flesh out its portfolio with acquisitions and in-house development.

Of the three main flavors of cloud computing, SaaS has been by far the most successful for Oracle applications, as it draws in previously on-premises workloads and attracts new customers. The competition remains fierce, with Oracle jockeying not only with longtime rival SAP but also the likes of Salesforce and Workday.

Miranda spoke to TechTarget at Oracle’s OpenWorld conference in a conversation that covered Fusion’s legacy, the success of SaaS deployments compared with on-premises ones, Oracle’s app acquisitions of late and the road ahead.

Software project cost overruns and outright failures have been an unfortunate staple of the on-premises world. The same hasn’t happened in SaaS. Part of this is because SaaS is vendor-managed from the start, but issues like change management and training are still risk factors in the cloud. Explain what’s happening from your perspective.

We have a reasonably good track record, even in the on-premises days. The noticeable difference I’ve seen [with cloud] is as follows:

In on-premise, because you had a set version, and because you knew you weren’t going to move off it for years, you started the implementation, but you had to have everything, because there wasn’t another version coming [soon].

Now, inevitably, that meant it took a while. And then what that meant is your business sometimes changed. New requirements came in. That meant you had to change configuration, or buy a third-party [product] or customize. That meant the implementation pushed out. But [initially], you had this sort of one-time cliff, where you had to go or no-go. Because you weren’t going to touch the system, forever more, because that was sort of the way it was. Or maybe you’d look at it years later. It put a tremendous amount of pressure [on customers].

Steve Miranda, executive vice president of Oracle applications development, addresses attendees at Oracle OpenWorld last week.

So what happened was, while companies tried to control scope, because there wasn’t a second phase, or the second phase was way out, it was really hard to control scope.

In SaaS, the biggest shift that I’ve seen from customers is that mentality is all different, given that they know, by the nature of the product we’ve built, they’re going to get regular updates. Their mindset is, “OK, we’re going to take advantage of new features. We’re going to continually change our development process or our business process.”

Do last-minute things pop up? Sure. Do project difficulties pop up? Sure. But [you need] the willingness to say, “You know what? We’re going to keep phase one, the date’s not moving, which means your cost doesn’t move.”

In SaaS, projects aren’t perfect, sometimes there’s [a scope issue], but you have something live. You get some payback, and there’s some kind of finish line for that. That’s the biggest difference that I’ve seen.

The Fusion Applications portfolio and brand persists today and was a big focus at OpenWorld. But Fusion was announced in 2005, and became GA in 2011. That’s eight years ago. So in total, Fusion’s application architectural pattern is about 15 years old. How old is too old?

Are they old compared to on-premise products? Definitely not. Are they old compared to our largest SaaS competitor [Editor’s note: Salesforce]? No, that’s actually an older product.

Okay, now, just in a standalone way, is Fusion old? Well, I would say a lot of the technology is not old. We are updating to OCI, the latest infrastructure; we’ve moved our customers there. We are updating to the latest version of the Oracle database, the Autonomous Database. We’ve refreshed our UI once already, and at this conference, we announced the upcoming UI.

Now. If you go through every layer of the stack, and how it’s architected and how it’s built, you know, there’s some technical debt. It depends on what you mean by old.

We’re moving to more of a microservices architecture; we build that part a piece at a time. Once we get done with that, there’s going to be something else behind it. [Oracle CTO and chairman Larry Ellison] talked about serverless and elasticity of the cloud. We’re modifying the apps architecture to more fully leverage that.

So if the question is, in hindsight, did we make mistakes? The biggest mistake for me personally is, look: We had a very large customer installed base across PeopleSoft, Siebel, E-Business Suite and JD Edwards, and the expectation from our customers is, when Oracle says we’ve got something, that they can move to it, and they can move to the cloud.

And so what we tried to do with Fusion V1, and one of the reasons it took us longer than anticipated is that we had this scope.

Any company now, it’s sort of cliche, they have this concept of minimum viable product. You introduce a product, and does it service all of the Oracle customer base? No. Will it serve a certain customer base? Sure, yeah. And then you get those customers and you add to it, you get more customers, you add to it, you improve it.

We had this vision of, let’s get a bigger and bigger scope. Had I done it over again, we would have built a minimum viable product, announced it to a subset of our customers, and then some of this noise that you hear, like, oh, Oracle took too long, or Oracle’s late to market in certain areas, wouldn’t have been there.

I would argue in a lot of the areas, while it may have taken us longer to come to market, we came out with a lot more capabilities than our competitors right out the box, because we had a different mindset.

Oracle initially stressed how Fusion Applications could be run both on-premises and as SaaS, in part to ease customer concerns over the longer-term direction for Oracle applications. But most initial Fusion customers went with SaaS because running it on-premises was too complicated. Why did things play out that way?

While it may have taken us longer to come to market, we came out with a lot more capabilities than our competitors right out the box, because we had a different mindset.
Steve Miranda, executive vice president of applications development, Oracle

I would take issue with the following: Let’s say we had the on-prem install, like, perfect. One button press, everything’s there. Do I think that we would have had a lot of uptake of Fusion on-premises as opposed to SaaS? No. I think the SaaS model is better.

Did we optimize the on-premises install? No. We didn’t intentionally make it complicated. But, you know, we were focused on the SaaS market. We were [handling] the complexity. Was it perfect? No. Did that affect some customers? Yes. Did it affect the overall market? No, because I think SaaS was going to [win] anyway.

The classic debate for application customers and vendors is best-of-breed versus suites. Each approach has its own advantages and tradeoffs. Is this the status quo today, in SaaS? Has a third way emerged?

I don’t know if it’s a third way. We believe we have best-of-breed in many, many areas. Secondly, we believe in an integrated solution. Now let’s take that again. I view the customer as having three constituents they care about. They care about their own customers, they care about their employees and they care about their stakeholders. For a public company, that’s shareholders; if it’s a private company, it’s different.

If you told me for any given company, there are two or five best-of-breed applications out for some niche feature that benefits one of those three audiences? OK. You go with it, no problem.

If you told me there were 20 or 50 best-of-breed options for a niche feature? It’s almost impossible for there to be that many niche features that matter to those three important audiences, particularly in the areas where we really specialize: ERP, supply chain, finance, HR, to a lesser extent CRM, and slightly less in some parts of HR.

So this notion of “Oh, let’s best-of-breed everything.” Good luck. I mean, you could do it. But I don’t think you’re going to be happy because of the number of integrations. I don’t believe in that at all.

Let’s move forward to today. Apart from NetSuite in 2016, there haven’t been any mega-acquisitions in Oracle applications lately. Rather, it’s been around companies that play in the CX space, particularly ones focused on data collection and curation. What’s the thinking here?

Without data, you can automate a map, right? You can find out how to go from here to Palo Alto. No problem. You have it in your phone; you can do directions, etc. But when you add data and you turn on Waze, it gives you a different route, because you have real-time data, traffic alerts and road closures. It’s a lot more powerful.

And so we think real-time data matters, especially in CRM but also, frankly, in ERP. You might have a supplier, and their status changes; they go through an M&A, or other things. You want to have an ERP and CRM system that doesn’t ignore the outside world. You actually have data much more freely available today. You want to have a system that assumes that. So that’s our investment.
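
Miranda’s Waze analogy boils down to joining live signals onto static data before making a decision. A minimal sketch of that idea in Python; the route names and travel times are invented for illustration.

```python
# Same decision logic, better answer once real-time data is joined in.

def pick_route(routes, live_minutes=None):
    """Choose the fastest route, preferring observed travel times."""
    def travel_time(route):
        if live_minutes and route["id"] in live_minutes:
            return live_minutes[route["id"]]   # real-time signal
        return route["static_minutes"]         # baked-in map estimate
    return min(routes, key=travel_time)

routes = [
    {"id": "US-101", "static_minutes": 45},
    {"id": "I-280", "static_minutes": 50},
]

print(pick_route(routes)["id"])                    # US-101 (static data only)
print(pick_route(routes, {"US-101": 75})["id"])    # I-280 (live traffic alert)
```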

Oracle has recently drawn closer to Microsoft, forming a partnership around interoperability between Azure and Oracle Cloud Infrastructure. Microsoft is placing a big bet on Graph data connect, which pulls together information from its productivity software and customers’ internal business data. It seems like a place where your partnership could expand for mutual benefit.

I’m not going to answer that. I can’t comment on that. It’s a great question.

Oracle Cloud Infrastructure SVP shares competitive strategy

SAN FRANCISCO — Oracle’s late entry to the public cloud space has been met with skepticism, but the company has a few strategies, particularly around autonomous cloud, unexpected partnerships and shared responsibility, that executives expect to make up for its tardiness.

Oracle Cloud Infrastructure (OCI), the company’s second attempt at a public IaaS, was built with the help of former AWS technical employees like Clay Magouyrk, who joined Oracle in 2014.

Now senior vice president of engineering for OCI, Magouyrk spoke with TechTarget during Oracle’s OpenWorld conference about the company’s market position, its efforts to attract customers and what the future may hold.

A common refrain is that Oracle is very late — maybe too late — to arrive at a viable IaaS strategy. What’s your response to this?

Clay Magouyrk: I think you have to understand where we’re at in our evolution of the cloud. I remember when everybody had a kind of, you know, a dumb phone. And I remember it took two years to get to a place where everyone had a smartphone. The switching speed was amazing.

So then the question is, [the industry has] been doing cloud infrastructure for 15 years. Why isn’t it all [migrated over from on-premises] yet? It’s still 90% greenfield. One of the things I worked on at Amazon before I left was the Fire Phone, and the Fire Phone was too late, in the same way that the Windows Phone reboot was too late. Once Apple and Android had 90%-plus market share it was impossible [to catch up].

The reason I spend my day here every day working really hard is because we’re still at the 10% penetration [level]. People act like we’re 90% of the way into the cloud infrastructure transition, but we’re not anywhere close to it.

One of the biggest OCI announcements at OpenWorld was Maximum Security Zones, which you positioned as a response to data breaches at IaaS providers caused by misconfigured systems. But it doesn’t fully reject the shared responsibility stance around security forwarded by AWS and others.

Magouyrk: I think, fundamentally, in the long term, you always end up with some shared responsibility. What we’re trying to say here at Oracle is that we are going to push that boundary.

If you look at cloud in general, cloud providers have the responsibility of patching the hypervisor. And then, okay, your job is to patch your OS. Cool, well, with Autonomous Linux, now it’s our job to patch the OS. So we’re bringing that up the stack.

We’re going to create a construct where you can’t misconfigure it. We’re taking the stance that this is not just your responsibility, we are going to work with you on it.
Clay Magouyrk, SVP, Oracle Cloud Infrastructure

If you look at the vast majority of security incidents these days, they’re not [committed by] massively sophisticated hackers. It’s misconfiguration. When you moved to the cloud, we gave you all these tools. And now you’ve got a million tools. And you have this programmable infrastructure. It’s so easy to do stuff, right?

We’re going to create a construct where you can’t misconfigure it. We’re taking the stance that this is not just your responsibility, we are going to work with you on it.

Think about it from our perspective on SaaS. We take on a ton of responsibility for that. There’s no way for you to mess it up. As we can make infrastructure to where you can’t mess it up, I don’t see any reason why we can’t take on more responsibility the same way we do in SaaS. The problem is that if people still need that control, to be able to mess it up, then it has to be on them.

You plan to dramatically expand OCI’s global footprint, bringing it to 36 regions by the end of 2020. Tell us about Oracle’s process here.

Magouyrk: There are things under the hood that people don’t see. One of those things is that we’ve massively optimized how we build regions, both from an infrastructure perspective, as well as a software perspective.

When I joined Oracle in 2014, we had zero OCI regions. I knew that if we were going to compete, we had to build regions that are way faster than our competitors. I knew how [AWS] built them, because I had worked there. And we hired people that also worked there. What we did is we took a much more aggressive approach.

Clay Magouyrk, senior vice president, Oracle Cloud Infrastructure

The way most companies build these regions is you have 200 teams, and they all do it by hand. So the physical hardware gets rolled in, and the actual software teams spend all their time installing the stuff.

When you bought a Windows CD back in the day, Microsoft didn’t send an engineer to install it for you, right? They had an installer. Well, you can do that for the cloud; it’s a bunch of work. But we’ve made that investment from a software perspective.
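 
The installer idea Magouyrk describes amounts to a declarative manifest of services plus an idempotent loop that converges a new region on it. A hypothetical sketch follows; the service names and helper functions are invented, not Oracle’s actual tooling.

```python
# Hypothetical region bootstrap: declare the services a region needs,
# then deploy whatever isn't already running, in dependency order.

REGION_MANIFEST = [
    "identity", "networking", "compute", "block-storage", "object-storage",
]

def bootstrap_region(region, is_deployed, deploy):
    """is_deployed and deploy are stand-ins for real provisioning calls."""
    for service in REGION_MANIFEST:
        if is_deployed(region, service):
            continue                  # idempotent: safe to rerun anytime
        deploy(region, service)

# Example with in-memory stand-ins:
deployed = set()
bootstrap_region(
    "us-sanjose-1",
    is_deployed=lambda region, service: (region, service) in deployed,
    deploy=lambda region, service: deployed.add((region, service)),
)
print(sorted(service for _, service in deployed))
```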

If you look back to when Amazon got into this, they didn’t know where they were going. It was very early days. And they were like, we’re going to build these big honkin’ regions, and they’re going to cost a jillion dollars and there is going to be so much stuff there.

We realized, by the time I came to Oracle, we were going to need to put regions everywhere, but they’re going to start small and be able to grow. We’ve engineered for that. From a technical perspective, that gives us a lot more flexibility than some of our competitors.

It’s my understanding that Oracle is doing this buildout largely through co-location agreements and not actually building its own data centers, in the interest of speed. Can you elaborate?

Magouyrk: I think the term [co-location] has changed. If you go back 10 or 15 years, co-lo meant you get one rack in a small data center, that kind of a thing. But as you actually see these large cloud providers — the way they’re doing these worldwide rollouts — co-lo providers have actually changed their model. They still do that kind of retail stuff. But they also have a wholesale model. And that’s what has enabled a lot of this rapid global expansion.

Every major cloud provider uses a ton of colocation facilities. There’s just no way that it makes sense otherwise. As a co-lo provider, you can do things where you put stuff in a campus. You buy a bunch of land, and then you start small. But then, as that fills up, you build new buildings and you can deeply interconnect that with fiber. As a [co-lo] customer, you have this expansion plan in that same area, but you don’t have to build a giant data center [yourself].

One OCI announcement this week at OpenWorld concerned a new “always free” tier, which seemed clearly aimed at attracting developers to the platform. Tell us more about your efforts in this area.

Magouyrk: A lot of it is social proof. While we would like to think that everyone’s making a fully informed, highly educated decision, the reality is that most humans do close approximations based on what everyone else does. The key is to create a flywheel that gets enough going, and then it becomes oh, well, they all chose Oracle, why can’t we?

We have dedicated startup investments where we go to startups and give them a bunch of free credits and incentives. Around developers in general, we put a lot of energy into attracting people out of college. We work with universities to give them access to cloud computing resources. They use that in their classrooms to get people familiar with it.

I view [the free tier] as the start of something … Because it’s not just making it free, it’s making the signup easy, the support experience easy. Do you have the collateral and the ecosystem around it? Do you have the right forums for people to ask questions? You do all of those things in a row, and it builds.

The initial use case you put forward with your interoperability partnership between Microsoft Azure and OCI was to split an application up, with the logic and presentation tiers on Azure tied back to an Oracle database running on Exadata inside OCI. Why would a customer want to do that?

Magouyrk: When [Oracle CTO and chairman] Larry [Ellison] gets up there, and he’s like, look, let me show you the performance difference between what you can get from this versus an Exadata, I don’t think people actually believe it, but it’s true. There are all these amazing workloads that need giant relational databases that just can’t move anywhere.

My job is to get you in OCI and then just keep pulling more. Maybe you add some Analytics Cloud on top of it to analyze that data. You just get people hooked that way.

What about going further and bringing Exadata boxes right into Azure data centers? That would eliminate the need for the high-speed interconnect between OCI sites and Microsoft’s.

Magouyrk: As you can imagine, those types of conversations do happen in the abstract. I’m sure Microsoft would love that. The thing you have to understand is that we did a lot of work in Oracle Cloud infrastructure to make Exadata run well. We offer bare-metal; we offer off-the-box virtualized networking. There’s a whole bunch of features.

Let’s say that we were to make a deal and I gave Microsoft a bunch of Exadatas. It would be off in a little tiny part of the network. It’s not actually integrated into their experience. They wouldn’t have a database service wrapped around it. The experience would be terrible. For them, it’s not important [to host Exadatas].

For us, it’s so valuable that we make it really, really good. What we’re not going to do is take the thing that we think is incredibly valuable and then have other people do it badly.

What is next for this deal? Will we see others like it, say with Google? Going even further, is a détente between Oracle and AWS possible?

Magouyrk: Right now, we have great ideas, and we have good buzz, and we have very interesting customers. What we have to do over the next six months is convert those into a bunch of very happy, very public reference customers. That’s the next level of uptick in the process. Until we have that play out, I don’t think anyone’s going to know.

In terms of where we’re taking this with Microsoft, I think it’s about us working much better together. We’re making Oracle Linux work better in [Microsoft’s] cloud. A big part of the reason we chose them is because we have such customer overlap. It might be interesting, technically, for us to do it with Google. But it’s not like Google is already in every single one of our customer accounts. We’re doing this from a customer-driven perspective.

Eclipse launches Che 7 IDE for Kubernetes development

SAN FRANCISCO — The Eclipse Foundation has introduced Eclipse Che 7, a new developer workspace server and IDE to help developers build cloud-native, enterprise applications on Kubernetes.

The foundation debuted the new technology at the Oracle Code One conference here. Eclipse Che is essentially a cloud-based IDE built on technology Red Hat acquired from Codenvy, and Red Hat developers are still heavily involved with the Eclipse project. With a focus on Kubernetes, Eclipse Che 7 abstracts away some of the development complexities associated with Kubernetes and helps to close the gap between the development and operations environments, said Mike Milinkovich, executive director of the Eclipse Foundation.

“We think this is important because it’s the first cloud-based IDE that is natively Kubernetes,” he said. “It provides all of the pieces that a cloud-native developer needs to be able to build and deploy a Kubernetes application.”

Eclipse Che 7 helps developers who may not be so familiar with Kubernetes by providing not just the IDE, but also its plug-ins and their dependencies. In addition, Che 7 automatically adds all the build and debugging tools developers need for their applications.
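
Che 7 captures a workspace’s plug-ins and container dependencies declaratively. As a rough illustration, the Python below emits a minimal workspace definition, assuming Che 7’s v1 devfile format; the plug-in ID, image and memory limit are placeholders rather than a recommended configuration.

```python
# Sketch of a minimal Che 7 workspace definition (devfile v1 schema,
# as an assumption), emitted from Python. Requires PyYAML.
import yaml  # pip install pyyaml

devfile = {
    "apiVersion": "1.0.0",
    "metadata": {"name": "java-workspace"},
    "components": [
        # IDE plug-in the workspace should load (placeholder ID)
        {"type": "chePlugin", "id": "redhat/java/latest"},
        # Build container with the tools the app needs (placeholder image)
        {"type": "dockerimage", "alias": "maven",
         "image": "maven:3.6-jdk-11", "memoryLimit": "512Mi"},
    ],
}

print(yaml.safe_dump(devfile, sort_keys=False))
```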

Mike Milinkovich

“It helps reduce the learning curve that’s related to Kubernetes that a lot of developers struggle with, in terms of setting up Kubernetes and getting their first applications up and running on Kubernetes,” Milinkovich said.

The technology can be deployed on a public Kubernetes cluster or an on-premises data center, and it provides centrally hosted private developer workspaces. In addition, the Eclipse Che IDE is based on an extended version of Eclipse Theia that provides an in-browser experience like Microsoft’s Visual Studio Code, Milinkovich said.

Eclipse Che and Eclipse Theia are part of cloud-native offerings from vendors such as Google, IBM and Broadcom. Che also lies at the core of Red Hat CodeReady Workspaces, a development environment for Red Hat OpenShift.

Moreover, Broadcom’s CA Brightside product uses Eclipse Che to bring a modern, open approach to the mainframe platform. Che also integrates with IBM Codewind to provide a low barrier to entry for developing in a production container environment.

Kubernetes is hard to manage, so it will be helpful to have an out-of-the-box offering from an IDE vendor.
Holger Mueller, analyst, Constellation Research

“It had to happen, and it happened sooner than later: The first IDE delivered inside Kubernetes,” said Holger Mueller, an analyst at Constellation Research.

There are benefits of having developers build software with the same mechanics and platforms on the IDE side as their target production environment, he explained, including similar experience and faster code deployments.

“And Kubernetes is hard to manage, so it will be helpful to have an out-of-the-box offering from an IDE vendor,” Mueller said. “But nothing beats the advantage of being able to standardize and quickly launch uniform and consistent developer environments. This gives development team scale to build their next-gen applications and helps their enterprise accelerate.”

Eclipse joins a group that includes major vendors that want to limit the complexity of Kubernetes. IBM and VMware recently introduced technology to reduce Kubernetes complexity for developers and operations staff.

For instance, IBM’s Kabanero open source project to simplify development and deployment of apps on Kubernetes uses Che as its hosted IDE.

The future of developer tools will be cloud-based, Milinkovich said. “Because of the complexity of the application scenarios today, developers are spending a lot of their time and energy building out development environments when they could just move developer workspaces into containers,” he said. “It’s far easier to update the entire development team to new runtime requirements. And you can push out new tools across the entire development team.”

The IDE is the last big piece of technology that developers use on a daily basis that has not moved into the cloud, so moving the IDE into the cloud is the next logical step, Milinkovich said.

Oracle CDP moves beyond marketing data

SAN FRANCISCO — Oracle has entered the customer data platform market, but some observers wonder whether the category is already headed into obsolescence.

The Oracle Customer Data Platform (CDP) joins Adobe’s, released earlier this year. Salesforce and SAP have their own CDPs in development. While they attempt to solve a difficult technical problem — matching, updating and deduplicating customer records across marketing, sales, service and e-commerce systems — CDPs are difficult to explain to the C-suite leaders who sign off on large IT purchases.

Moreover, for CIOs, a CDP represents another tool to support and secure in already complex cloud enterprise application stacks.

No one disputes the need for B2C and B2B companies to aggregate customer data to drive faster, more precisely personalized sales and promotions, said Paul Gaynor, a technology consulting leader for PwC, a tax and audit services firm based in London. But when selling clients on CX initiatives, he said, his team leaves the CDP discussion to the developers, instead focusing on outcomes and bottom-line potential.

“We don’t make it about the data,” Gaynor said. “The data is the currency, a really important part of the equation, for sure. But the infrastructure and how it has to pass from platform to platform to drive AI- or human-based decisions … that’s just part of workflow.”

That said, he sees potential for the Oracle CDP to derive more specific, usable insights from many more data sources than customer experience platforms, even reaching into supply-chain systems to shape personalized customer offers.

Rob Tarkoff, Oracle EVP, delivers the CX keynote at Oracle OpenWorld.

Oracle CDP goes beyond marketing

When Oracle talks to customers in advertising-heavy sectors, those users believe that CDPs are the technology answer to melding customer data from third-party advertising platforms with their own marketing data, said Oracle EVP Rob Tarkoff. In other sectors, CDPs are less important, Tarkoff said.

Yet Oracle bills its CX Unity platform as “more than a CDP,” able to reach past marketing systems and draw deeper insights from ERP and other peripheral data systems. In Tarkoff’s mind, current CDPs tend to be limited to marketing automation. Yet in conversations with some customers, CDPs “are coming up all the time,” Tarkoff said.

Whatever the platform is called, profile veracity — the ability to dedupe, normalize and resolve different data sets to real identity, at scale — is a big challenge for these data platforms.

“That, and in every industry, there’s a different schema for how you want to represent a customer profile,” Tarkoff said. “A bank has a different set of attributes than an insurance company, a communications company or a retailer.”
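
As a concrete illustration of the dedupe-and-resolve problem Tarkoff describes, the toy Python below merges records that normalize to the same email address. Real CDPs use far richer matching across many attributes; every name and record here is hypothetical.

```python
# Toy identity resolution: normalize a match key, then fold records
# from different systems into one profile per customer.
from collections import defaultdict

def normalize_email(email):
    local, _, domain = email.strip().lower().partition("@")
    local = local.split("+", 1)[0]        # drop +tags like ann+promo@
    return f"{local}@{domain}"

def resolve(records):
    profiles = defaultdict(dict)
    for record in records:
        key = normalize_email(record["email"])
        profiles[key].update(record)      # naive merge: last write wins
    return dict(profiles)

crm = {"email": "Ann.Lee+promo@Example.com", "phone": "555-0100"}
erp = {"email": "ann.lee@example.com", "account": "A-17"}
print(resolve([crm, erp]))                # one profile, both attributes
```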

Data wrangling to remain difficult

Some observers, such as Deloitte Digital principal and CTO Sam Kapreilian, believe that despite the difficulty of explaining CDPs — let alone their value — customer data platforms will become bedrock technology for garnering data insights and driving revenue in the years to come. Far from seeing the category as headed toward obsolescence, Deloitte’s customers see the potential of new versions of the tool, such as Oracle CDP.

[CDP is] an ongoing project, it’s going to take years. It’s like the journey to self-improvement — it never ends.
Michael Krigsman, analyst and founder, CXOTalk

“This stuff wasn’t possible two to three years ago,” Kapreilian said. “It just wasn’t affordable.”

Michael Krigsman, analyst and founder of CXOTalk, said whatever future platforms perform the processes currently assigned to CDPs will have to solve the same problem: Figure out how to find and track revenue in data that often is far removed from the final sales process, aggregate it in a single platform and ultimately assign a value to AI-fueled data personalization.

“It’s an ongoing project; it’s going to take years,” Krigsman said. “It’s like the journey to self-improvement — it never ends.”

Oracle and VMware forge new IaaS cloud partnership

SAN FRANCISCO — VMware’s virtualization stack will be made available on Oracle’s IaaS, in a partnership that underscores changing currents in the public cloud market and represents a sharp strategic shift for Oracle.

Under the pact, enterprises will be able to deploy certified VMware software on Oracle Cloud Infrastructure (OCI), the company’s second-generation IaaS. Oracle is now a member of the VMware Cloud Provider Program and will sell VMware’s Cloud Foundation stack for software-defined data centers, the companies said on the opening day of Oracle’s OpenWorld conference.

Oracle plans to give customers full root access to physical servers on OCI, and they can use VMware’s vCenter product to manage on-premises and OCI-based environments through a single tool.

“The VMware you’re running on-premises, you can lift and shift it to the Oracle Cloud,” executive chairman and CTO Larry Ellison said during a keynote. “You really control version management operations, upgrade time of the VMware stack, making it easy for you to migrate — if that’s what you want to do — into the cloud with virtually no change.”

The companies have also reached a mutual agreement around support, which Oracle characterized with the following statement: “[C]ustomers will have access to Oracle technical support for Oracle products running on VMware environments. … Oracle has agreed to support joint customers with active support contracts running supported versions of Oracle products in Oracle supported computing environments.”

It’s worth noting the careful language of that statement, given Oracle and VMware’s history. While Oracle has become more open to supporting its products on VMware environments, it has yet to certify any of them for VMware.

Moreover, many customers have found Oracle’s licensing policy for deploying its products on VMware devilishly complex. In fact, a cottage industry of advisory services has emerged to help customers stay compliant when running Oracle on VMware.

Nothing has changed with regard to Oracle’s existing processor license policy, said Vinay Kumar, vice president of product management for OCI. But the VMware software made available on OCI will come through bundled, Oracle-sold SKUs that encompass software and physical infrastructure. Initially, one SKU based on X7 bare-metal instances will be available, according to Kumar.

Oracle and VMware have been working on the partnership for the past nine months, he added. The first SKU is expected to be available within the next six months. Kumar declined to provide details on pricing.

Oracle, VMware relations warm in cloudier days

“It seems like there is a thaw between Oracle and VMware,” said Gary Chen, an analyst at IDC. The companies have a huge overlap in terms of customers who use their software in tandem, and want more deployment options, he added. “Oracle customers are stuck on Oracle,” he said. “They have to make Oracle work in the cloud.”

Gary Chen, analyst, IDC

Meanwhile, VMware has already struck cloud-related partnerships with AWS, IBM, Microsoft and Google, leaving Oracle little choice but to follow. Oracle has also largely ceded the general-purpose IaaS market to those competitors, and has positioned OCI for more specialized tasks as well as core enterprise application workloads, which often run on VMware today.

Massive amounts of on-premises enterprise workloads run on VMware, but as companies look to port them to the cloud, they want to do it in the fastest, easiest way possible, said Holger Mueller, an analyst at Constellation Research in Cupertino, Calif.

The biggest cost of lift-and-shift deployments to the cloud involves revalidation and testing in the new environment, Mueller added.

It seems like there is a thaw between Oracle and VMware.
Gary Chen, analyst, IDC

But at this point, many enterprises have automated test scripts in place, or even feel comfortable not retesting VMware workloads, according to Mueller. “So the leap of faith involved with deploying a VMware VM on a server in the corporate data center or in a public cloud IaaS is the same,” he said.

In the near term, most customers of the new VMware-OCI service will move Oracle database workloads over, but it will be Oracle’s job to convince them OCI is a good fit for other VMware workloads, Mueller added.

Oracle Cloud Infrastructure updates home in on security

SAN FRANCISCO — Oracle hopes a focus on advanced security can help its market-lagging IaaS gain ground against the likes of AWS, Microsoft and Google.

A new feature called Maximum Security Zones lets customers denote enclaves within their Oracle Cloud Infrastructure (OCI) environments that have all security measures turned on by default. Resources within the zones are limited to configurations that are known to be secure. The system will also prevent alterations to configurations and provide continuous monitoring and defenses against anomalies, Oracle said on the opening day of its OpenWorld conference.

Through Maximum Security Zones, customers “will be better protected from the consequences of misconfigurations than they are in other cloud environments today,” Oracle said in an obvious allusion to recent data breaches, such as the Capital One-AWS hack, which have been blamed on misconfigured systems that gave intruders a way in.
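
Oracle didn’t spell out the zone rules in detail, but the deny-by-default idea can be pictured with a short sketch: resource configurations that drift from a known-secure baseline are simply rejected. The baseline keys and values below are invented for illustration, not Oracle’s actual policy set.

```python
# Illustrative deny-by-default check in the spirit of a security zone:
# any resource whose config strays from the secure baseline is blocked.

SECURE_BASELINE = {
    "public_access": False,
    "encryption": "enabled",
    "versioning": "enabled",
}

def validate_resource(config):
    violations = [key for key, required in SECURE_BASELINE.items()
                  if config.get(key) != required]
    if violations:
        raise ValueError(f"Blocked by security zone: {violations}")
    return config

validate_resource({"public_access": False, "encryption": "enabled",
                   "versioning": "enabled"})    # passes
# validate_resource({"public_access": True})    # raises ValueError
```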

“Ultimately, our goal is to deliver to you a fully autonomous cloud,” said Oracle executive chairman and CTO Larry Ellison, during a keynote. 

“If you spend the night drinking and get into your Ford F-150 and crash it, that’s not Ford’s problem,” he said. “If you get into an autonomous Tesla, it should get you home safely.”

Oracle wants to differentiate itself and OCI from AWS, which consistently promotes a shared responsibility model for security between itself and customers. “We’re trying to leapfrog that construct,” said Vinay Kumar, vice president of product management for Oracle Cloud Infrastructure.

“The cloud has always been about, you have to bring your own expertise and architecture to get this right,” said Leo Leung, senior director of products and strategy at OCI. “Think about this as a best-practice deployment automatically. … We’re going to turn all the security on and let the customer decide what is ultimately right for them.”

Security is too important to rely solely on human effort.
Holger Mueller, vice president and principal analyst, Constellation Research

Oracle’s Autonomous Database, which is expected to be a big focal point at this year’s OpenWorld, will benefit from a new service called Oracle Data Safe. This provides a set of controls for securing the database beyond built-in features such as always-on encryption and will be included as part of the cost of Oracle Database Cloud services, according to a statement.

Finally, Oracle announced Cloud Guard, which it says can spot threats and misconfigurations and “hunt down and kill” them automatically. It wasn’t immediately clear whether Cloud Guard is a homegrown Oracle product or made by a third-party vendor. Security vendor Check Point offers an IaaS security product called CloudGuard for use with OCI.

Starting in 2017, Oracle began to talk up new autonomous management and security features for its database, and the OpenWorld announcements repeat that mantra, said Holger Mueller, an analyst at Constellation Research in Cupertino, Calif. “Security is too important to rely solely on human effort,” he said.

OCI expansions target disaster recovery, compliance

Oracle also said it will broadly expand OCI’s global cloud footprint, with the launch of 20 new regions by the end of next year. The rollout will bring Oracle’s region count to 36, spread across North America, Europe, South America, the Middle East, Asia-Pacific, India and Australia.

This expansion will add multiple regions in certain geographies, allowing for localized disaster recovery scenarios as well as improved regulatory compliance around data location. Oracle plans to add multi-region support in every country where it offers OCI and claimed this approach is superior to the practice of including multiple availability zones in a single region.

Oracle’s recently announced cloud interoperability partnership with Microsoft is also getting a boost. The interconnect that ties together OCI and Azure, now available in Virginia and London, will also be offered in the Western U.S., Asia and Europe over the next nine months, according to a statement. In most cases, Oracle is leasing data center space from providers such as Equinix, according to Kumar.

Holger Mueller

SaaS vendors are another key customer target for Oracle with OCI. To that end, it announced new integrated third-party billing capabilities for the OCI software marketplace released earlier this year. Oracle also cited SaaS providers who are taking advantage of Oracle Cloud Infrastructure for their own underlying infrastructure, including McAfee and Cisco.

There’s something of value for enterprise customers in OCI attracting more independent software vendors, an area where Oracle also lags against the likes of AWS, Microsoft and Google, according to Mueller.

“In contrast to enterprises, they bring a lot of workloads, often to be transferred from on-premises or even other clouds to their preferred vendor,” he said. “For the IaaS vendor, that means a lot of scale, in a market that lives by economies of scale: More workloads means lower prices.”

WATCH: Hackathons show teen girls the potential for AI – and themselves

This summer, young women in San Francisco and Seattle spent a weekend taking their creative problem solving to a whole new level through the power of artificial intelligence. The two events were part of a Microsoft-hosted AI boot-camp program that started last year in Athens, then broadened its reach with events in London last fall and New York City in the spring. Check out the wrap-up video from the three U.S. events:

[Embedded YouTube video]

“I’ve been so impressed not only with the willingness of these young women to spend an entire weekend learning and embracing this opportunity, but with the quality of the projects,” said Didem Un Ates, one of the program organizers and a senior director for AI within Microsoft. “It’s just two days, but what they come up with always blows our minds.” (Read a LinkedIn post from Un Ates about the events.)

The problems these girls tackled aren’t kid stuff: They chose their weekend projects from among the U.N. Sustainable Development Goals, considered among the world’s most difficult and highest-priority challenges.

The result? Dozens of innovative products that could help solve issues as diverse as ocean pollution, dietary needs, mental health, acne and climate change. Not to mention all those young women – 129 attended the U.S. events – who now feel empowered to pursue careers to help solve those problems. They now see themselves as “Alice,” a mascot created by the project team to represent the qualities young women possess that lend themselves to changing the world through AI.

Organizers plan to broaden the reach of these events, so that girls everywhere can learn about the possibility of careers in technology.

IT pros look to VMware’s GPU acceleration projects to kick-start AI

SAN FRANCISCO — IT pros who need to support emerging AI and machine learning workloads see promise in a pair of developments VMware previewed this week to bolster support for GPU-accelerated computing in vSphere.

GPUs are uniquely suited to handle the massive processing demands of AI and machine learning workloads, and chipmakers like Nvidia Corp. are now developing and promoting GPUs specifically designed for this purpose.

A previous partnership with Nvidia introduced capabilities that allowed VMware customers to assign GPUs to VMs, but not more than one GPU per VM. The latest development, which Nvidia calls its Virtual Compute Server, allows customers to assign multiple virtual GPUs to a VM.

Nvidia’s Virtual Compute Server also works with VMware’s vMotion capability, allowing IT pros to live migrate a GPU-accelerated VM to another physical host. The companies have also extended this partnership to VMware Cloud on AWS, allowing customers to access Amazon Elastic Compute Cloud bare-metal instances with Nvidia T4 GPUs.

VMware gave the Nvidia partnership prime time this week at VMworld 2019, playing a prerecorded video of Nvidia CEO Jensen Huang talking up the companies’ combined efforts during Monday’s general session. However, another GPU acceleration project also caught the eye of some IT pros who came to learn more about VMware’s recent acquisition of Bitfusion.io Inc.

VMware acquired Bitfusion earlier this year and announced its intent to integrate the startup’s GPU virtualization capabilities into vSphere. Bitfusion’s FlexDirect connects GPU-accelerated servers over the network and provides the ability to assign GPUs to workloads in real time. The company compares its GPU virtualization approach to network-attached storage because it disaggregates GPU resources and makes them accessible to any server on the network as a pool of resources.

The software’s unique approach also allows customers to assign just portions of a GPU to different workloads. For example, an IT pro might assign 50% of a GPU’s capacity to one VM and 50% to another VM. This approach can allow companies to use their investments in expensive GPU hardware more efficiently, company executives said. FlexDirect also offers extensions to support field-programmable gate arrays and application-specific integrated circuits.
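
The pooling idea can be pictured with a small allocator: GPUs sit in a network-wide pool, and workloads claim fractions of them. The Python below is a hypothetical sketch of that bookkeeping, not Bitfusion’s actual API.

```python
# Hypothetical fractional-GPU pool: workloads claim a share of any
# GPU on the network that still has enough free capacity.

class GpuPool:
    def __init__(self, gpu_ids):
        self.free = {gpu: 1.0 for gpu in gpu_ids}   # fraction free per GPU

    def allocate(self, fraction):
        for gpu, available in self.free.items():
            if available >= fraction:
                self.free[gpu] = round(available - fraction, 3)
                return gpu
        raise RuntimeError("no GPU with enough free capacity")

pool = GpuPool(["gpu-0", "gpu-1"])
print(pool.allocate(0.5))    # gpu-0: first VM gets half
print(pool.allocate(0.5))    # gpu-0: second VM gets the other half
print(pool.allocate(0.25))   # gpu-1: gpu-0 is now fully claimed
```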

“I was really happy to see they’re doing this at the network level,” said Kevin Wilcox, principal virtualization architect at Fiserv, a financial services company. “We’ve struggled with figuring out how to handle the power and cooling requirements for GPUs. This looks like it’ll allow us to place our GPUs in a segmented section of our data center that can handle those power and cooling needs.”

AI demand surging

Many companies are only beginning to research and invest in AI capabilities, but interest is growing rapidly, said Gartner analyst Chirag Dekate.

“By end of this year, we anticipate that one in two organizations will have some sort of AI initiative, either in the [proof-of-concept] stage or the deployed stage,” Dekate said.

In many cases, IT operations professionals are being asked to move quickly on a variety of AI-focused projects, a trend echoed by multiple VMworld attendees this week.

“We’re just starting with AI, and looking at GPUs as an accelerator,” said Martin Lafontaine, a systems architect at Netgovern, a software company that helps customers comply with data locality compliance laws.

“When they get a subpoena and have to prove where [their data is located], our solution uses machine learning to find that data. We’re starting to look at what we can do with GPUs,” Lafontaine said.

Is GPU virtualization the answer?

Recent efforts to virtualize GPU resources could open the door to broader use of GPUs for AI workloads, but potential customers should pay close attention to benchmark testing against bare-metal deployments in the coming years, Gartner’s Dekate said.

So far, he has not encountered a customer using these GPU virtualization tactics for deep learning workloads at scale. Today, most organizations still run these deep learning workloads on bare-metal hardware.

“The future of this technology that Bitfusion is bringing will be decided by the kind of overheads imposed on the workloads,” Dekate said, referring to the additional compute cycles often required to implement a virtualization layer. “The deep learning workloads we have run into are extremely compute-bound and memory-intensive, and in our prior experience, what we’ve seen is that any kind of virtualization tends to impose overheads. … If the overheads are within acceptable parameters, then this technology could very well be applied to AI.”
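
The acceptable-overhead question Dekate raises is ultimately empirical: time the same workload in both environments and compare. A minimal harness along those lines, with the workload step and the acceptance budget left as placeholders:

```python
# Minimal harness for the overhead comparison: run an identical step
# on bare metal and under virtualization, then compare wall time.
import time

def time_workload(step, iterations=100):
    start = time.perf_counter()
    for _ in range(iterations):
        step()                       # the training step under test
    return time.perf_counter() - start

def overhead_pct(bare_metal_seconds, virtualized_seconds):
    return 100.0 * (virtualized_seconds - bare_metal_seconds) / bare_metal_seconds

# e.g., if overhead_pct(t_bare, t_virt) stays under some budget (say 10%),
# the virtualization layer may be "within acceptable parameters."
```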

Carbon Black acquisition is ‘compelling’

SAN FRANCISCO — VMware’s acquisition of Carbon Black is “the most compelling security story” Steve Athanas has heard in a while.

“I don’t know any other vendor in the ecosystem that has more visibility to more business transactions happening than VMware does,” said Athanas, VMware User Group president and associate CIO at the University of Massachusetts Lowell.

At its annual user conference, VMware announced new features within Workspace One, its digital workspace product that enables IT to manage virtual desktops and applications, and talked up the enhanced security features the company will gain through its $2.1 billion Carbon Black acquisition. Like Athanas, VMworld attendees welcomed the news.

At the opening keynote for VMworld, VMware CEO Pat Gelsinger speaks about the recent Carbon Black acquisition.

In this podcast, Athanas said Carbon Black could provide endpoint security across an entire organization once the technology is integrated, a promise he said he’s still thinking through.

“Are [chief security officers] going to buy into this model of wanting security from one vendor? I’ve heard CSOs in the past say you don’t do that because if one fails, you want another application to be able to detect something,” he said. “I don’t know where the balance and benefit is between being able to see more through that single view from Carbon Black or to have multiple vendors.”

Aside from the Carbon Black acquisition, Athanas was drawn to newly unveiled Workspace One features aimed at making day-to-day processes easier for end users, IT and HR admins. For IT admins, a new Employee Experience Management feature enables IT to proactively diagnose whether an end user’s device has been compromised by a harmful email or cyberattack. The feature can then block the employee from accessing more company applications, limiting the attack’s spread.
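
Conceptually, the feature acts as a conditional-access gate: app launches are checked against the device’s current risk state. A hypothetical sketch follows; the field names and risk threshold are invented, not Workspace One’s actual schema.

```python
# Hypothetical conditional-access check: a compromised or high-risk
# device is cut off from further company applications.

def allow_app_access(device, app):
    if device.get("compromised", False):
        return False                           # quarantine the device
    if device.get("risk_score", 0.0) > 0.8:    # invented threshold
        return False
    return app in device.get("entitled_apps", [])

device = {"compromised": False, "risk_score": 0.2,
          "entitled_apps": ["mail", "crm"]}
print(allow_app_access(device, "crm"))   # True
device["compromised"] = True             # e.g., flagged after a phishing email
print(allow_app_access(device, "crm"))   # False: access blocked
```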

Another feature is called Virtual Assistant, which can help automate some of the onboarding and device management aspects of hiring a new employee.

“The Virtual Assistant stuff is cool, but I’m going to reserve judgment on it, because there is a ton of work that needs to go into getting that AI to give you the right answer,” Athanas said.
