
Google Cloud networking BYOIP feature could ease migrations

Google hopes a new networking feature will spur more migrations to its cloud platform and make the process easier at the same time.

Customers can now bring their existing IP addresses to Google Cloud’s network infrastructure in all of its regions around the world. Those who do can speed up migrations, cut downtime and lower costs, Google said in a blog post.
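There is a prerequisite worth noting: BYOIP programs generally accept only publicly routable blocks of a minimum size (Google's documentation lists a /24 as the IPv4 minimum, a detail worth re-verifying). As a minimal sketch of the pre-check a migration team might run, using only Python's standard ipaddress module and assuming that /24 floor:

```python
import ipaddress

MIN_PREFIX = 24  # assumed minimum block size (a /24); confirm with provider docs

def byoip_eligible(cidr: str) -> bool:
    """Rough pre-check for a BYOIP candidate: IPv4, publicly routable, >= /24."""
    net = ipaddress.ip_network(cidr, strict=True)
    if net.version != 4:
        return False       # IPv4-only sketch; IPv6 rules differ by provider
    if net.prefixlen > MIN_PREFIX:
        return False       # block is too small, e.g. a /28
    return net.is_global   # excludes RFC 1918 and other reserved space

for block in ["8.8.8.0/24", "10.0.0.0/24", "8.8.8.0/28"]:
    print(block, byoip_eligible(block))
# 8.8.8.0/24 True, 10.0.0.0/24 False (private), 8.8.8.0/28 False (too small)
```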

“Each public cloud provider is looking to reduce the migration friction between them and the customer,” said Stephen Elliot, an analyst at IDC. “Networking is a big part of that equation and IP address management is a subset.”

Bitly, the popular hyperlink-shortening service, is an early user of Google Cloud bring your own IP (BYOIP).

Many Bitly customers have custom web domains that are attached to Bitly IP addresses and switching to ones on Google Cloud networking would have been highly disruptive, according to the blog. Bitly also saved money via BYOIP because it didn’t have to maintain a co-location facility for the domains tied to Bitly IPs.

BYOIP could help relieve cloud migration headaches

IP address management is a well-established discipline in enterprise IT, and one that has become more burdensome over time, due not only to workload migrations to the cloud but also to the vast increase in internet-connected devices and web properties that companies have to wrangle.


AWS offers BYOIP through its Virtual Private Cloud service but hasn’t rolled it out in every region. Microsoft has yet to create a formal BYOIP service, but customers who want to retain their IP addresses can achieve a workaround through Azure ExpressRoute, its service for making private connections between customer data centers and Azure infrastructure.


Microsoft and AWS will surely come up to par with Google Cloud networking on BYOIP, eventually. But as the third-place contestant among hyperscale cloud providers, Google — which has long touted its networking chops as an advantage — could gain a competitive edge in the meantime.

IP address changes are a serious pain point for enterprise migrations of any sort, particularly in the cloud, said Eric Hanselman, chief analyst at 451 Research.

“Hard-coded addresses and address dependencies can be hard to find,” he added. “They wind up being the ticking time bomb in many applications. They’re hard to find beforehand, but able to cause outages during a migration that are problematic to troubleshoot.”
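That audit can at least be started mechanically: a blunt scan of the source tree for IP-looking literals surfaces most of the obvious time bombs before cutover. A minimal sketch (the regex and file extensions are illustrative, not exhaustive):

```python
import re
from pathlib import Path

# Naive IPv4 matcher; it will also catch version strings like 1.2.3.4, so
# treat hits as leads to review, not confirmed dependencies.
IPV4 = re.compile(r"\b(?:\d{1,3}\.){3}\d{1,3}\b")

def find_hardcoded_ips(root, exts=(".py", ".java", ".yaml", ".conf", ".properties")):
    """Yield (file, line_number, match) for every IP-looking literal under root."""
    for path in Path(root).rglob("*"):
        if not path.is_file() or path.suffix not in exts:
            continue
        for lineno, line in enumerate(path.read_text(errors="ignore").splitlines(), 1):
            for match in IPV4.findall(line):
                yield path, lineno, match

for path, lineno, ip in find_hardcoded_ips("src"):  # point at your source root
    print(f"{path}:{lineno}: {ip}")
```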


Overall, the BYOIP concept provides a huge benefit, particularly for large over-the-internet services, according to Deepak Mohan, another analyst at IDC.

“They often have IPs whitelisted at multiple points in the delivery and the ability to retain IP greatly simplifies the peripheral updates needed for a migration to a new back-end location,” Mohan said.


Oracle Cloud Infrastructure updates home in on security

SAN FRANCISCO — Oracle hopes a focus on advanced security can help its market-lagging IaaS gain ground against the likes of AWS, Microsoft and Google.

A new feature called Maximum Security Zones lets customers denote enclaves within their Oracle Cloud Infrastructure (OCI) environments that have all security measures turned on by default. Resources within the zones are limited to configurations that are known to be secure. The system will also prevent alterations to configurations and provide continuous monitoring and defenses against anomalies, Oracle said on the opening day of its OpenWorld conference.

Through Maximum Security Zones, customers “will be better protected from the consequences of misconfigurations than they are in other cloud environments today,” Oracle said in an obvious allusion to recent data breaches, such as the Capital One-AWS hack, which have been blamed on misconfigured systems that gave intruders a way in.
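Oracle hasn’t published the enforcement mechanics, but the general pattern, refusing any resource configuration that falls outside a known-secure allowlist, is straightforward to sketch. Everything below, field names and rules alike, is hypothetical:

```python
# Hypothetical sketch of allowlist-style configuration enforcement, the general
# pattern behind security-zone features; field names and rules are illustrative.
SECURE_DEFAULTS = {
    "public_access": False,     # no internet-facing endpoints inside the zone
    "encryption_at_rest": True,
    "logging_enabled": True,
}

def validate_resource(config: dict) -> list:
    """Return a list of violations; an empty list means the config is allowed."""
    violations = []
    for key, required in SECURE_DEFAULTS.items():
        if config.get(key, not required) != required:
            violations.append(f"{key} must be {required}")
    return violations

request = {"public_access": True, "encryption_at_rest": True}
problems = validate_resource(request)
if problems:
    print("denied:", "; ".join(problems))
# denied: public_access must be False; logging_enabled must be True
```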

“Ultimately, our goal is to deliver to you a fully autonomous cloud,” said Oracle executive chairman and CTO Larry Ellison, during a keynote. 

“If you spend the night drinking and get into your Ford F-150 and crash it, that’s not Ford’s problem,” he said. “If you get into an autonomous Tesla, it should get you home safely.”

Oracle wants to differentiate itself and OCI from AWS, which consistently promotes a shared responsibility model for security between itself and customers. “We’re trying to leapfrog that construct,” said Vinay Kumar, vice president of product management for Oracle Cloud Infrastructure.

“The cloud has always been about, you have to bring your own expertise and architecture to get this right,” said Leo Leung, senior director of products and strategy at OCI. “Think about this as a best-practice deployment automatically. … We’re going to turn all the security on and let the customer decide what is ultimately right for them.”


Oracle’s Autonomous Database, which is expected to be a big focal point at this year’s OpenWorld, will benefit from a new service called Oracle Data Safe. This provides a set of controls for securing the database beyond built-in features such as always-on encryption and will be included as part of the cost of Oracle Database Cloud services, according to a statement.

Finally, Oracle announced Cloud Guard, which it says can spot threats and misconfigurations and “hunt down and kill” them automatically. It wasn’t immediately clear whether Cloud Guard is a homegrown Oracle product or made by a third-party vendor. Security vendor Check Point offers an IaaS security product called CloudGuard for use with OCI.

Starting in 2017, Oracle began to talk up new autonomous management and security features for its database, and the OpenWorld announcements repeat that mantra, said Holger Mueller, an analyst at Constellation Research in Cupertino, Calif. “Security is too important to rely solely on human effort,” he said.

OCI expansions target disaster recovery, compliance

Oracle also said it will broadly expand OCI’s global cloud footprint, with the launch of 20 new regions by the end of next year. The rollout will bring Oracle’s region count to 36, spread across North America, Europe, South America, the Middle East, Asia-Pacific, India and Australia.

This expansion will add multiple regions in certain geographies, allowing for localized disaster recovery scenarios as well as improved regulatory compliance around data location. Oracle plans to add multi-region support in every country where it offers OCI, and claimed this approach is superior to the practice of including multiple availability zones in a single region.

Oracle’s recently announced cloud interoperability partnership with Microsoft is also getting a boost. The interconnect that ties together OCI and Azure, now available in Virginia and London, will also be offered in the Western U.S., Asia and Europe over the next nine months, according to a statement. In most cases, Oracle is leasing data center space from providers such as Equinix, according to Kumar.


SaaS vendors are another key customer target for Oracle with OCI. To that end, it announced new integrated third-party billing capabilities for the OCI software marketplace released earlier this year. Oracle also cited SaaS providers that use Oracle Cloud Infrastructure as their own underlying infrastructure, including McAfee and Cisco.

There’s something of value for enterprise customers in OCI attracting more independent software vendors, an area where Oracle also lags against the likes of AWS, Microsoft and Google, according to Mueller.

“In contrast to enterprises, they bring a lot of workloads, often to be transferred from on-premises or even other clouds to their preferred vendor,” he said. “For the IaaS vendor, that means a lot of scale, in a market that lives by economies of scale: More workloads means lower prices.”


Batista Coming to Gears 5! “It’s About Time,” the Superstar Declares – Xbox Wire

Today, WWE Superstar Batista confirmed the long-standing hopes of many, including his own.

As revealed on his personal Twitter account, Batista will officially enter the Gears of War Universe later this month, donning the armor of Marcus Fenix as a Gears 5 multiplayer character.

Fans around the world have long expressed hope that Batista will be cast for the role of Marcus Fenix in the film version of Gears of War, an opportunity he’s described as a “dream role.”

While the movie remains in development, The Coalition didn’t want any more time to go by without giving Batista a chance to don the armor, which he wore as part of an upcoming WWE Network promotion for Gears 5. According to those who were there, the armor, which was created to match the specifications of the game, “fit him perfectly.”

Rod Fergusson, who has cast and directed every Gears title, directed Batista’s performance for Gears 5, including over seven hundred lines recorded for the game.

“Adding Batista to Gears 5, we started with the fantasy of ‘Batista as Marcus,’ putting Batista into Marcus’ armor, and starting with Marcus’ script. Then we added elements of ‘The Animal’ Batista into his voice performance and onto his look by adding his signature Hollywood shades to his character. Batista was great in the booth and I can’t wait for Gears and Batista fans alike to stomp some Swarm as The Animal.”


Xbox has partnered with WWE to create a special countdown to Batista’s availability in the game, including behind-the-scenes content with Gears 5 and Batista on WWE’s UpUpDownDown gaming channel. This all leads up to WWE Network’s “Clash of Champions” event on Sunday, September 15, available through a 30-day free subscription on the Xbox app (new subscribers only).

To unlock Batista, simply play any version of Gears 5 (including with your Xbox Game Pass membership) beginning September 15. The promotion ends October 28.

Gears 5 launches on Xbox One and Windows 10 PC on September 6 for Xbox Game Pass Ultimate members; September 10 for Xbox Game Pass members. Gears 5 is also available for pre-order today from the Microsoft Store. Click here for purchase details.


Google VM, microservices tools advance cloud migration strategy

Google hopes to advance its third-place standing in the public cloud market with additional cloud migration tools and enterprise-friendly features to entice more workloads onto the platform.

Migrate for Compute Engine can now move virtual machines from Microsoft Azure to Google Cloud, although the feature is in beta. Previously, Migrate for Compute Engine supported only migrations from AWS.

Google has also expanded support for service mesh, which provides a communication layer for application components to talk to one another. It’s favored for microservices-based application architectures and relieves the burden on developers to inject and maintain networking code in their apps.

Traffic Director, Google’s take on a service mesh control plane that was unveiled at Cloud Next in April, is now generally available, according to a blog post. Google describes Traffic Director as a global traffic manager for VM and container workloads. It is now available for use with Anthos, the Kubernetes-based multi-cloud container management platform that became generally available in April.

Traffic Director provides a Google-managed version of Pilot, the traffic-management component of Istio, a popular open source service mesh launched by Google, IBM and Lyft in 2017. Pilot manages traffic between microservices on the network through the Envoy sidecar proxy, which executes the distributed networking functionality for apps.
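To make the division of labor concrete: the control plane (Pilot, or Traffic Director in Google’s managed version) pushes routing policy, and each Envoy sidecar executes it per request. A toy sketch of one such policy, a weighted canary split, assuming a simple 90/10 rule; the service names are placeholders:

```python
import random

# Toy model of a routing rule a control plane might push to a sidecar proxy:
# send 90% of traffic to the stable version, 10% to a canary.
ROUTE_WEIGHTS = [("reviews-v1", 90), ("reviews-v2", 10)]

def pick_backend(weights):
    """Weighted random choice, the core of a weighted traffic split."""
    services = [name for name, _ in weights]
    split = [w for _, w in weights]
    return random.choices(services, weights=split, k=1)[0]

counts = {"reviews-v1": 0, "reviews-v2": 0}
for _ in range(10_000):
    counts[pick_backend(ROUTE_WEIGHTS)] += 1
print(counts)  # roughly {'reviews-v1': 9000, 'reviews-v2': 1000}
```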


Google’s Layer 7 Internal Load Balancer is now in beta and based on Traffic Director and Envoy. It will give users comprehensive traffic control capabilities with the feel of a traditional load balancer, ideal for projects that involve legacy app migration to service mesh, according to Google.

Anthos is Google’s long-term multi-cloud play

Google has positioned Anthos as an ideal platform for companies that want to achieve parity for container workloads across multiple clouds and their own data centers. Anthos, which is built on Kubernetes, gives customers the ability to refactor applications once and then run them anywhere, according to Google.

While Google is happy to make money from customers that move VM workloads to Compute Engine, it also sees an opportunity for Anthos as a platform for application modernization. Companies that convert VMs to containers via Anthos Migrate gain benefits such as no longer needing to patch their OS manually, Google said in the blog.

Anthos is aimed at large enterprises and is priced accordingly. One clear rival is Red Hat’s OpenShift, which has similar intent around cross-platform workload portability, albeit with some technical differences. Now that the IBM acquisition of Red Hat is closed, Big Blue likely will unleash its massive global sales force to support OpenShift.


Although Google said it plans to offer Anthos on other public clouds, specific timelines are unclear. “I’m taking more of a wait-and-see attitude to see how that really turns out,” said Gary Chen, an analyst with IDC. Anthos is strongly tied to the Google Cloud, and its true value may be its association and integrations with other Google Cloud services, such as for big data analytics and machine learning.

“If I’m just giving you a container platform that can run anywhere, there are a lot of companies that can do that with Kubernetes,” Chen said.

Even as Google seeks to convert VM workloads to containers for Anthos, it has acknowledged a need to work with VMware on other fronts. This week, the companies announced the upcoming availability of a VMware on Google Cloud service.

The move followed the recent release of a similar service on Azure, while VMware Cloud on AWS has been available since 2017 and gained significant market traction.

Ultimately, enterprise customers have more choices to port VMware workloads onto the public cloud, but they must now weigh considerations such as relative costs and the strength of associated partner ecosystems.


Green warriors from India receive Microsoft AI for Earth grants to enable a sustainable future – Microsoft News Center India

With a trained AI algorithm, the team hopes to classify urban and rural areas, identify forest cover, river beds and other water bodies from satellite images, and create a precise grid map for the region. It also plans to apply computer vision to create a comprehensive database of biodiversity in the region to help policymakers and local communities make better-informed economic, ecological, and infrastructure-related decisions.

“You can’t save an ecosystem if you don’t fully understand it,” exclaims Dr. Mariappan. “That’s where our data along with Microsoft’s AI resources can help.”
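A common first step in this kind of pipeline, an assumption here rather than the team’s published method, is to cut each satellite scene into fixed-size tiles that a classifier can label one at a time. A minimal NumPy sketch, with the tile size and band count chosen for illustration:

```python
import numpy as np

TILE = 256  # placeholder tile edge in pixels

def tile_scene(scene, tile=TILE):
    """Split an (H, W, bands) satellite scene into non-overlapping tiles,
    dropping any partial tiles at the right/bottom edges."""
    h, w = scene.shape[:2]
    for y in range(0, h - tile + 1, tile):
        for x in range(0, w - tile + 1, tile):
            yield (y, x), scene[y:y + tile, x:x + tile]

scene = np.random.rand(1024, 1024, 4)   # stand-in for a 4-band satellite image
tiles = list(tile_scene(scene))
print(len(tiles))  # 16 tiles from a 1024x1024 scene at 256px
# Each tile would then go to a land-cover classifier (urban, forest, water, ...).
```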

Tracking the monkey population in urban areas using AI-powered image recognition

The monkey population in urban India has spiraled out of control in recent years. India’s capital city, New Delhi, alone reports at least five cases of monkey bites daily; the bites can transmit rabies and prove fatal. It is estimated that 7,000 monkeys prowl the streets of the capital, damaging public property and attacking people. With their natural habitat shrinking owing to urbanization, authorities are struggling to prevent monkey attacks.

Managing the growth of the population is critical. Currently, there is no way to identify which monkeys have already been given birth control or sterilized without further handling such as tattooing a code or embedding a microchip in the monkeys. Ankita Shukla, a PhD student at Indraprastha Institute of Information Technology Delhi (IIIT Delhi), aims to use computer vision as a non-invasive alternative for identifying and tracking monkeys as it is safer and less stressful for the animals, as well as humans.

Shukla, a native of a small town near Lucknow, had earlier worked with the Wildlife Institute of India on a project to classify endangered tigers in a nature reserve with machine learning and distance-object recognition algorithms. She wants to combine this experience in wildlife monitoring with machine learning to create a tangible solution for the simian problem in cities.

She is creating an AI-enabled app that can help the community tag monkeys in photographs and upload them to the cloud, where authorities can track the simian population’s growth, vaccination history, and movements. “With a bird’s eye view of the monkey population, we can deploy contraceptives more efficiently,” she says. “Training a deep neural network with image recognition to identify a monkey and its species, and whether it’s already been sterilized could go a long way towards solving this crisis,” Shukla adds.
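Shukla hasn’t published the architecture, but the standard recipe for this kind of task is transfer learning: take an ImageNet-pretrained backbone and retrain only its final layer on labeled photos. A minimal PyTorch sketch, with the class count and data assumed purely for illustration:

```python
import torch
import torch.nn as nn
from torchvision import models

NUM_CLASSES = 2  # assumed labels, e.g. sterilized vs. not sterilized

# ImageNet-pretrained backbone; freeze it and retrain only the classifier head.
model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
for param in model.parameters():
    param.requires_grad = False
model.fc = nn.Linear(model.fc.in_features, NUM_CLASSES)

optimizer = torch.optim.Adam(model.fc.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

# One training step on a dummy batch; a real loop would iterate a DataLoader
# of labeled monkey photos resized to 224x224.
images = torch.randn(8, 3, 224, 224)
labels = torch.randint(0, NUM_CLASSES, (8,))
optimizer.zero_grad()
loss = loss_fn(model(images), labels)
loss.backward()
optimizer.step()
print(float(loss))
```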

Having teamed up with Saket Anand, a professor at IIIT Delhi, she pitched the idea to the AI for Earth panel earlier this year. The team plans to leverage the Microsoft Azure platform for the processing power required to train the AI model.

“The Microsoft resources and technical assistance helped us develop a genuinely useful app,” says Shukla. “We’re now trying to take things to the next level so that we can find a solution to the monkey menace in a scientific and humane manner.”

Atomist extends CI/CD to automate the entire DevOps toolchain

Startup Atomist hopes to revolutionize development automation throughout the application lifecycle, before traditional application release automation vendors catch on.

Development automation has been the elusive goal of a generation of tools, particularly DevOps tools that promise continuous integration and continuous delivery. The latest is Atomist and its development automation platform, which aims to automate as many mundane tasks as possible in the DevOps toolchain.

Atomist ingests information about an organization’s software projects and processes to build a comprehensive understanding of those projects. Then it creates automations for the environment, which use programming tools such as parser generators and microgrammars to parse and contextualize code.

The system also correlates event streams pulled from various stages of development and represents them as code in a graph database known as the Cortex. Atomist’s founders believe the CI pipeline model falls short, so Atomist instead takes an event-based approach, modeling everything in an organization’s software delivery process as a stream of events. The event-driven model also enables development teams to compose development flows based on events.
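The shape of that event-driven model is easy to sketch in miniature: handlers subscribe to typed events from the stream and react as they arrive. The event names and handler registry below are hypothetical, not Atomist’s actual API:

```python
# Hypothetical sketch of the event-driven automation model described above;
# names are illustrative, not Atomist's actual API.
from collections import defaultdict

handlers = defaultdict(list)

def on(event_type):
    """Register a function to run whenever an event of this type arrives."""
    def register(fn):
        handlers[event_type].append(fn)
        return fn
    return register

@on("push")
def notify_channel(event):
    print(f"notify #{event['repo']}: build started for {event['sha'][:7]}")

@on("build_failed")
def open_issue(event):
    print(f"open issue in {event['repo']}: build {event['build_id']} failed")

def dispatch(event_type, event):
    for fn in handlers[event_type]:
        fn(event)

dispatch("push", {"repo": "web-app", "sha": "9f2c41ab0"})
dispatch("build_failed", {"repo": "web-app", "build_id": 112})
```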

In addition, Atomist automatically creates Git repositories, configures systems for issue tracking and continuous integration, and creates chat channels to consolidate notifications on the project and deliver information to the right people.

“Atomist is an interesting and logical progression of DevOps toolchains, in that it can traverse events across a wide variety of platforms but present them in a fashion such that developers don’t need to context switch,” said Stephen O’Grady, principal analyst at RedMonk in Portland, Maine. “Given how many moving parts are involved in DevOps toolchains, the integrations are welcome.”

Mik Kersten, a leading DevOps guru and CEO at Tasktop Technologies, has tried Atomist firsthand and calls it a fundamentally new approach to managing delivery. As delivery pipelines become increasingly complex, the sources of waste move well beyond the code and into the tools spread across the pipeline, Kersten noted.

The rise of microservices introduces trouble spots, as developers must collaborate on, deploy and monitor the lifecycle of tens or hundreds of services in their environments, Johnson said.

This is particularly important for security, where keeping services consistent is paramount. In last year’s Equifax breach, hackers gained access through an unpatched version of Apache Struts — but with Atomist, an organization can identify and upgrade old software automatically across potentially hundreds of repositories, Johnson said.
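The fleet-wide fix Johnson describes reduces to scanning every repository’s dependency manifest for versions below a patched floor. A deliberately naive sketch for Maven pom.xml files; real tooling would parse the XML properly and cover the 2.5.x line of Struts as well:

```python
import re
from pathlib import Path

# Struts 2 versions before 2.3.32 / 2.5.10.1 were exposed to CVE-2017-5638;
# this naive check only handles the 2.3.x line, for illustration.
PATCHED = (2, 3, 32)
DEP = re.compile(
    r"<artifactId>struts2-core</artifactId>\s*<version>([\d.]+)</version>"
)

def vulnerable_repos(root):
    """Yield repos whose pom.xml pins struts2-core below the patched version."""
    for pom in Path(root).rglob("pom.xml"):
        match = DEP.search(pom.read_text(errors="ignore"))
        if match:
            version = tuple(int(p) for p in match.group(1).split("."))
            if version < PATCHED:
                yield pom.parent.name, match.group(1)

for repo, version in vulnerable_repos("repos"):  # point at your checkout root
    print(f"{repo}: struts2-core {version} needs upgrading")
```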

Atomist represents a new class of DevOps product that goes beyond CI, which is “necessary, but not sufficient,” said Rod Johnson, Atomist CEO and creator of the Spring Framework.


Tasktop’s Kersten agreed that the approach to developer-centric automation “goes way beyond what we got with CI.” Atomist created a Slack bot that incorporates its automation facilities, driven by a development automation engine that is reminiscent of model-driven development or aspect-oriented programming but provides generative facilities not only for code but across project resources and other tools, Kersten said. A notification system informs users what the automations are doing.

Most importantly, Atomist is fully extensible, and its entire internal data model can be exposed in GraphQL.

Tasktop has already explored ways to connect Atomist to Tasktop’s Integration Hub and the 58 Agile and DevOps tools it currently supports, Kersten said.

Automation built into development

As DevOps becomes more widely adopted, integrating automation into the entire DevOps toolchain is critical to help streamline the development process so programmers can develop faster, said Edwin Yuen, an analyst at Enterprise Strategy Group in Milford, Mass.


“The market to integrate automation and development will grow, as both the companies that use DevOps and the number of applications they develop increase,” he said. Atomist’s integration in the code creation and deployment process, through release and update management processes, “enables automation not just in the development process but also in day two and beyond application management,” he said.

Atomist joins other approaches such as GitOps and Bitbucket Pipelines that target the developer who chooses the tools used across the complete lifecycle, said Robert Stroud, an analyst at Forrester Research in Cambridge, Mass.

“Selection of tooling such as Atomist will drive developer productivity allowing them to focus on code, not pipeline development — this is good for DevOps adoption and acceleration,” he said. “The challenge for these tools is although new code fits well, deployment solutions are selected within enterprises by Ops teams, and also need to support on-premises deployment environments.”

For that reason, look for traditional application release automation vendors, such as IBM, XebiaLabs and CA Technologies, to deliver features similar to Atomist’s capabilities in 2018, Stroud said.

IBM cooks up a hardware architecture for tastier cloud-based services

IBM hopes to raise its competitive profile in cloud services when it introduces new hardware and cloud infrastructure by the end of this year or early 2018.

The company will add a new collection of hardware and software products that deliver artificial intelligence (AI) and cloud-based services faster and more efficiently.

Among the server-based hardware technologies are 3D Torus, an interconnection topology for message-passing multicomputer systems, and new accelerators from Nvidia, along with advanced graphics processing unit (GPU) chips. Also included are Single Large Expensive Disk (SLED) technology, a traditional disk technology currently used in mainframes, and all-flash-based storage, according to sources familiar with the company’s plans.

The architecture achieves sub-20-millisecond performance latencies by eliminating routers and switches, and it embeds those capabilities into chips that communicate more directly with each other, one source said.
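The latency argument rests on topology: in a torus, every node links directly to its neighbors in each dimension and the edges wrap around, so worst-case hop counts stay low without routers in the path. A small sketch of the hop-distance arithmetic on a 3D torus, with an illustrative 8x8x8 layout:

```python
def torus_hops(a, b, dims):
    """Minimal hop count between nodes a and b on a 3D torus of size dims.
    Per dimension you can travel either direction, and the links wrap around."""
    return sum(
        min(abs(x - y), d - abs(x - y))
        for x, y, d in zip(a, b, dims)
    )

DIMS = (8, 8, 8)  # illustrative 512-node torus
print(torus_hops((0, 0, 0), (7, 4, 1), DIMS))  # 1 + 4 + 1 = 6 hops
# Worst case on an 8x8x8 torus is 4+4+4 = 12 hops, versus 7+7+7 = 21 on a mesh.
```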

The new collection of hardware applies some of the same concepts as IBM’s Blue Gene supercomputers, which were among the systems used to create Watson. In the mold of those special-purpose machines, the new system is designed specifically to do one thing: deliver AI-flavored cloud-based services.

These technologies, which can work with both IBM Power and Intel chips in the same box, will be used only in servers housed in IBM’s data centers. IBM will not sell servers containing these technologies commercially to corporate users. The new technologies could reach IBM’s 56 data centers late this year or early next year.

AI to the rescue for IBM’s cognitive cloud

IBM’s cloud business has grown steadily from its small base over the past three to four years to revenues of $3.9 billion in the company’s second quarter reported last month and $15.1 billion over the past 12 months. The company’s annual run rate for as-a-service revenues rose 32% from a year ago to $8.8 billion.

At the same time, sales of the company’s portfolio of cognitive solutions, with Watson at its core, took a step back, falling 1% in the second quarter after 3% growth in this year’s first quarter.

That doesn’t represent a critical setback, but it has caused some concern, because the company hangs much of its future growth on Watson.

Three years ago, IBM sank $1 billion into setting up its Watson business unit in the New York City borough of Manhattan. IBM CEO Ginni Rometty has often cited lofty goals for the unit, claiming Watson would reach 1 billion consumers by the end of 2017, $1 billion in revenues by the end of 2018 and, eventually, $10 billion in revenue by an unnamed date. To achieve those goals, IBM requires a steady infusion of AI and machine learning technologies.

IBM executives remain confident, given the technical advancements in AI and machine learning capabilities built into Watson and a strict focus on corporate business users, while competitors — most notably Amazon — pursue consumer markets.

“All of our efforts around cognitive computing and AI are aimed at businesses,” said John Considine, general manager of cloud infrastructure at IBM. “This is why we have made such heavy investments in GPUs, bare-metal servers and infrastructure, so we can deliver these services with the performance levels corporate users will require.”

However, not everyone is convinced that IBM can reach its goals for cognitive cloud-based services, at least in the predicted time frames. It will still be an uphill climb for Big Blue as it looks to catch up with cloud competitors that were faster out of the gate.

Lydia Leong, an analyst with Gartner, could not confirm details of IBM’s upcoming new hardware for cloud services, but pointed to the company’s efforts around a new cloud-oriented architecture dubbed Next Generation Infrastructure. NGI will be a new platform run inside SoftLayer facilities, but it’s built from scratch by a different team within IBM, she said.


IBM intends to catch up to the modern world of infrastructure with hardware and software more like those from competitors Amazon Web Services and Microsoft Azure, and thus deliver more compelling cloud-based services. NGI will be the foundation on which to build new infrastructure-as-a-service (IaaS) offerings, while IBM Bluemix, which remains a separate entity, will continue to run on top of bare metal.

Leong said she is skeptical, however, that any new server hardware will give the company a performance advantage to deliver cloud services.

“My expectation is IBM will not have a long-term speed advantage with this — I’m not even sure they will have a short-term one,” Leong said. “Other cloud competitors are intensely innovative and have access to the same set of technologies and tactical ideas, and they will move quickly.”

IBM has stumbled repeatedly with engineering execution in its cloud portfolio, which includes last year’s launch and demise of a new IaaS offering, OpenStack for Bluemix. “[IBM has] talked to users about this [NGI] for a while, but the engineering schedule keeps getting pushed back,” she said.

IBM now enters the cloud infrastructure market extremely late — and at a time when the core infrastructure war has been mostly won, Leong said. She suggested IBM might be better served to avoid direct competition with market leaders and focus its efforts where it has an established advantage and can differentiate with things like Watson.