Tag Archives: modern

IBM keeps pace with evolving IBM business partners

IBM has tasked itself with refocusing its channel strategy to reflect the modern challenges facing IBM business partners and push indirect business activities to outpace its internal business growth.

The vendor last week introduced an ecosystem model to benefit its traditional channel base, while simultaneously encouraging partnerships with more cutting-edge players in the market, such as ISVs, developers, managed service providers and cloud services providers. The revamped strategy streamlines benefits, tools and programs to better engage, enable and incentivize partners.

According to the vendor, partners will soon find it easier and faster to do business with IBM, including business around software-as-a-service offerings. IBM also revised its rules of engagement and said it would shift more accounts to partner coverage.

“IBM has spent the last several years transforming everything about [itself] from a hardware … a software and a services [perspective]. We know it has become very clear that the ecosystem (both our core channel partners and the new ecosystem that we are going after this year) … is requiring us to change,” said John Teltsch, general manager of global IBM business partners.

John Teltsch, general manager of global IBM business partners

IBM currently works with about 19,000 partners worldwide. Over the past several years, the company has transformed itself from hardware-focused vendor to embrace software, services and cloud computing. The transition has included a heavy investment in cognitive computing, an area that IBM has urged partners to incorporate into their offerings.

With this latest shift in IBM ecosystem strategy, the company has set its sights on even greater market dominance in a range of technology categories.

“The growth they are looking to get is huge,” said Steve White, program vice president of channels and alliances at IDC.

Adapting to digital disruption

Teltsch said the revamped strategy recognizes the changes that digital transformation has wrought on customers and IBM business partners alike. “We need to adjust how we engage our partners, as the digital disruption continues to impact every part of our clients, our partners and our distributors’ way of going to market,” he said. “As we continue to move more of our hardware and software to ‘as a service’ type offerings … we need to leverage this new ecosystem and our core set of partners as they evolve and change their businesses.”

Although firmly committed to expanding the IBM ecosystem, Teltsch acknowledged that executing the new strategy has its challenges.

For one thing, IBM must evolve internally to help its traditional partners adopt modern business models. For example, Teltsch said, many of IBM’s hardware partners are moving from selling solely hardware to offering managed services. “We have a lot of partners that are looking for our help as they transform their own businesses and modernize themselves into this digital world. As we are changing internally … we are helping [partners globally] modernize themselves,” he said.

Ginni Rometty, chairman, president and CEO of IBM, discusses Watson with IBM business partners at PartnerWorld Leadership Conference 2017.

IBM to lower barrier of entry for new partners

Another challenge IBM faces is changing how it brings new IBM business partners into the fold. Teltsch said he aims to lower the barrier of entry, especially for “the new generation of partners … that don’t traditionally think of IBM today, or think of IBM as too large, too complex [and] not really approachable.”

“We have to simplify and lower the barrier of entry for all of [the] new partners, as well as our existing core partners to come into IBM,” he added.

To help address these challenges, IBM plans to adjust its tools, certifications, systems and contracts, Teltsch said. Additionally, the vendor will continue building out its digital capabilities to better meet the needs of core partners and the expanding IBM ecosystem.

White said he thinks IBM is trying to do the right thing through its channel refocus, yet he noted that IBM’s massive size makes for a complex shift. However, partners will likely appreciate the clarity the vendor adds to its channel strategy, he said.

According to Teltsch, the new ecosystem strategy is slated to go into effect April 10.

Announcing the general availability of Azure Event Grid

Modern applications are taking maximum advantage of the agility and flexibility of the cloud by moving away from monolithic architectures and instead using a set of distinct services, all working together. This includes foundational services offered by a cloud platform like Azure (Database, Storage, IoT, Compute, Serverless Functions, etc.) and application-specific services (inventory management, payment services, manufacturing processes, mobile experiences, etc.). In these new architectures, event-driven execution has become a foundational cornerstone. It replaces cumbersome polling for communication between services with a simple mechanism. These events could include IoT device signals, cloud provisioning notifications, storage blob events, or even custom scenarios such as new employees being added to HR systems. Reacting to such events efficiently and reliably is critical in these new app paradigms.

Today, I am excited to announce the general availability of Azure Event Grid, a fully managed event routing service that simplifies the development of event-based applications.

  • Azure Event Grid is the first of its kind, enabling applications and services to subscribe to all the events they need to handle whether they come from Azure services or from other parts of the same application.
  • These events are delivered through push semantics, simplifying your code and reducing your resource consumption. You no longer need to continuously poll for changes and you only pay per event. The service automatically scales dynamically to handle millions of events per second.
  • Azure Event Grid provides multiple ways to react to these events including using Serverless offerings such as Azure Functions or Azure Logic Apps, using Azure Automation, or even custom web hooks for your code or 3rd party services. This means any service running anywhere can publish events and subscribe to reliable Azure Events.

We make it easy to react to Azure native events and build modern apps anywhere, on-premises and cloud, without restricting you to use only our public cloud services. This is unique to Azure Event Grid.
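To make the publish side concrete, here is a minimal sketch in PowerShell that posts a custom event to an Event Grid topic over its REST endpoint using the Event Grid event schema. The topic endpoint, access key, event type and payload are all placeholders rather than values from this announcement:

```powershell
# Placeholders: substitute your own custom topic endpoint and access key.
$topicEndpoint = "https://<your-topic>.<region>-1.eventgrid.azure.net/api/events"
$topicKey      = "<your-topic-access-key>"

# Event Grid accepts a JSON array of events in its event schema.
$events = @(
    @{
        id          = [guid]::NewGuid().ToString()
        eventType   = "Contoso.HR.EmployeeAdded"   # example custom event type
        subject     = "hr/employees/12345"
        eventTime   = (Get-Date).ToUniversalTime().ToString("o")
        data        = @{ name = "Jane Doe"; department = "Finance" }
        dataVersion = "1.0"
    }
)

# Publish the event; subscribers on the topic (Functions, Logic Apps, webhooks)
# have it pushed to them, so no polling loop is needed.
Invoke-RestMethod -Method Post -Uri $topicEndpoint `
    -Headers @{ "aeg-sas-key" = $topicKey } `
    -ContentType "application/json" `
    -Body (ConvertTo-Json -InputObject $events -Depth 5)
```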


In the days since we announced public preview, we have seen many customers find innovative uses for Azure Event Grid and we’ve been blown away by all the great feedback from customers and the community. 

  • Outotec used Azure Event Grid to rearchitect their hybrid integration platform:

“Azure Event Grid enabled us to simplify the architecture of our cloud-based enterprise wide hybrid integration platform, by making it easy to reliably respond to events and changes in the global business data without polling.”

– Henri Syrjäläinen, Director of Digital Enterprise Architecture, Outotec Oyj

  • Paycor unified their human capital management applications using Azure Event Grid:

“Event Grid empowers Paycor to provide a unified experience to our customers, across the suite of our human capital management applications.  It becomes the backbone for an event driven architecture, allowing each application to broadcast and receive events in a safe, reliable way.  It solves many of the operational and scalability concerns that traditional pub-sub solutions cannot.”

– Anthony Your, Director of Architecture, Paycor, Inc.

  • Microsoft Devices supply chain team utilized Azure Event Grid as part of its serverless pipeline to optimize operations and reduce time to market. The details are described in this Microsoft supply chain serverless case study.

Here is what we have newly available since our preview:

  • Richer scenarios enabled through integration with more services: Since preview, we have added General Purpose Storage and Azure IoT Hub as new event publishers and Azure Event Hubs as a new destination (great for event archival, streaming, and buffering of events). IoT Hub adds support for device lifecycle events, such as device creation and device deletion, which can then be handled in a serverless manner. These new integrations simplify the architecture and expand the possibilities for your applications, whether they run in the cloud or on premises. Please see the full current list of Azure Event Grid service integrations for details and per-region availability. We will continue to add more services throughout the year.

Event Grid service integrations

  • Availability in more regions: Azure Event Grid is globally available in the following regions: West US, East US, West US 2, East US 2, West Central US, Central US, West Europe, North Europe, Southeast Asia, and East Asia with more coming soon.
  • Increased reliability and service level agreement (SLA): We now have a 24 hour retry policy with exponential back off for event delivery. We also offer an industry-leading 99.99% availability with a financially backed SLA for your production workloads. With today’s announcement, you can confidently build your business-critical applications to rely on Azure Event Grid.
  • Better developer productivity: Today, we are also releasing new Event Grid SDKs to streamline development. Management SDKs are now available for Python, .NET, and Node.js, with support for Go, Ruby, and Java coming soon. The publish SDK is now available for .NET, with support for Python, Node.js, Go, Ruby, and Java coming soon. Additionally, we have made it easier to consume events by simply fetching the JSON schema of all supported event types from our event schema store. This removes the burden on the subscriber of understanding and deserializing the events.

With today’s GA, I think you will find that Azure Event Grid becomes a critical component in your serverless application. Go ahead, give it a try with this simple and fun Event Grid Quickstart. Remember, the first 100,000 events per month are on us!

Here are some other samples/tutorials to help you get started:

  • Build serverless applications
    • Use IoT Hub and Logic apps to react to device lifecycle events [doc | video]
    • Instantly pick up and resize images in Blob Storage using a function [doc]
  • Automate your infrastructure operations
    • Appropriately tag VMs as they are spun up and send a notification to your Microsoft Teams channel [doc]
  • Facilitate communication between the different pieces of your distributed applications
    • Stream data from Event Hubs to your data warehouse [doc]

To learn more, please join us for our upcoming webinar on Tuesday, February 13, 2018. 

Register here: Building event-driven applications using serverless architectures.

Thanks,

Corey

IT monitoring, org discipline polish Nasdaq DevOps incident response

Modern IT monitoring can bring together developers and IT ops pros for DevOps incident response, but tools can’t substitute for a disciplined team approach to problems.

Dev and ops teams at Nasdaq Corporate Solutions LLC adopted a common language for troubleshooting with AppDynamics’ App iQ platform. But effective DevOps incident response also demanded focus on the fundamentals of team building and a systematic process for following up on incidents to ensure they don’t recur.

“We had some notion of incident management, but there was no real disciplined way for following up,” said Heather Abbott, senior vice president of corporate solutions technology, who joined the New York-based subsidiary of Nasdaq Inc. in 2014. “AppDynamics has [affected] how teams work together to resolve incidents … but we’ve had other housekeeping to do.”

Shared IT monitoring tools renew focus on incident resolution

Heather Abbott, Nasdaq

Nasdaq Corporate Solutions manages SaaS offerings for customers as they shift from private to public operations. Its products include public relations, investor relations, and board and leadership software managed with a combination of Amazon Web Services and on-premises data center infrastructure, though the on-premises infrastructure will soon be phased out.

In the past, Nasdaq’s dev and ops teams used separate IT monitoring tools, and teams dedicated to different parts of the infrastructure also had individualized dashboard views. The company’s shift to cross-functional teams, focused on products and user experience as part of a DevOps transformation, required a unified view into system performance. Now, all stakeholders share the AppDynamics App iQ interface when they respond to an incident.

With a single source of information about infrastructure performance, there’s less finger-pointing among team members during DevOps incident response, which speeds up problem resolution.

“You can’t argue with the data, and people have a better ongoing understanding of the system,” Abbott said. “So, you’re not going in and hunting and pecking every time there’s a complaint or we’re trying to improve something.”

DevOps incident response requires team vigilance

Since Abbott joined Nasdaq, incidents are down more than 35%. She cited the IT monitoring tool in part, but also pointed to changes the company made to the DevOps incident response process. The company moved from an ad hoc process of incident response divided among different departments to a companywide, systematic cycle of regular incident review meetings. Her team conducts weekly incident review meetings and tracks action items from previous incident reviews to prevent incidents from recurring. Higher levels of the organization have a monthly incident review call to review quality issues, and some of these incidents are further reviewed by Nasdaq’s board of directors.

And there’s still room to improve the DevOps incident response process, Abbott said.

“We always need to focus on blocking and tackling,” she said. “We don’t have the scale within my line of business of Amazon or Netflix, but as we move toward more complex microservices-based architectures, we’ll be building things into the platform like Chaos Monkey.”

Like many companies, Nasdaq plans to tie DevOps teams with business leaders, so the whole organization can work together to improve customer experiences. In the past, Nasdaq has generated application log reports with homegrown tools. But this year, it will roll out AppDynamics’ Business iQ software, first with its investor-relations SaaS products, to make that data more accessible to business leaders, Abbott said.

AppDynamics App iQ will also expand to monitor releases through test, development and production deployment phases. Abbott said Nasdaq has worked with AppDynamics to create intelligent release dashboards that provide better automation and performance trending. “That will make it easy to see how system performance is trending over time, as we introduce change,” she said.

While Nasdaq mainly uses AppDynamics App iQ, the exchange also uses Datadog, because it offers event correlation and automated root cause analysis. AppDynamics has previewed automated root cause analysis based on machine learning techniques. Abbott said she looks forward to the addition of that feature, perhaps this year.

Beth Pariseau is senior news writer for TechTarget’s Cloud and DevOps Media Group. Write to her at bpariseau@techtarget.com or follow @PariseauTT on Twitter.

Multi-geo service tackles Office 365 data residency issues

Many modern enterprises have workers in offices spread all over the world. While there are numerous advantages to a multinational organization, the complexities of managing the data generated by a global workforce can vex even the most adept Office 365 administrator.

When the admin creates the Office 365 tenant, the Exchange Online mailboxes reside in a specific geographic region determined by the organization’s billing address. The mailboxes may be replicated to different data centers within that geographic region. To meet data residency requirements, organizations can create multiple Office 365 tenancies in different geographic regions, but this increases overall administrative complexity.

To address these Office 365 data residency needs and streamline how businesses handle them, Microsoft designed what it calls multi-geo capabilities. With multi-geo, organizations that use Exchange Online can store a mailbox in one of multiple geographic regions within a single Office 365 tenancy.

Here is some information on the multi-geo feature and its configuration for Office 365 data residency.

Multi-geo comes with restrictions

As of publication, the multi-geo feature is in a selective preview stage for Exchange Online and OneDrive for Business. Microsoft plans to release it into general availability for those services in the first half of 2018. The company intends to add multi-geo to SharePoint Online with a preview expected in the first half of 2018. Microsoft said it might add this capability to other Office 365 apps, such as Microsoft Teams, but it has not given any timelines.

However, the multi-geo service comes with restrictions. For example, the India and South Korea geographic regions are only available to organizations with licenses and billing addresses there. Other regions, such as France, are not yet available.

Microsoft recommends that an organization with questions about the multi-geo feature talk to its Microsoft account team. The company has yet to unveil licensing details for the service.

Multi-geo introduces new terminology

Home geo is the term Microsoft uses for the geographic region where the Office 365 tenancy was created. Regions that the organization adds later are known as satellite geos. The multi-geo feature provisions new mailboxes in the home geo by default, but admins can start them in a satellite geo.

The organization can move existing mailboxes between home and satellite geos. This operation should not adversely affect workers because the mailboxes will remain in the same Office 365 tenancy, and the Autodiscover service automatically locates the user’s mailbox in the background. However, Microsoft said the multi-geo service does not support Exchange public folders, which must reside in the home geo.

Organizations should monitor the Microsoft Office 365 roadmap for changes in support of the multi-geo service.

PowerShell cmdlets adjust regions

In organizations where directory synchronization hasn’t been deployed, administrators can use two PowerShell cmdlets to set configuration parameters for the multi-geo feature.

Admins can use the Set-MsolCompanyAllowedDataLocation cmdlet from the Azure Active Directory (AD) PowerShell module to set up the additional geographic regions in the Office 365 tenant.

The Set-MsolUser cmdlet features a PreferredDataLocation parameter to specify the geographic region that will store the user’s Exchange Online mailbox and OneDrive for Business files. A user account can only have one PreferredDataLocation for those services.
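For administrators who want to see the shape of these commands, here is a minimal sketch, assuming the MSOnline module is installed and you are signed in with sufficient rights. The user principal name and the "EUR" (European Union) geo code are placeholders, and the Set-MsolCompanyAllowedDataLocation parameter values shown are illustrative assumptions; confirm the exact ServiceType and location codes your tenant supports in the cmdlet help:

```powershell
# Sign in with the MSOnline (Azure AD v1) PowerShell module.
Connect-MsolService

# 1. Register an additional (satellite) geo for the tenant.
#    Parameter values are illustrative assumptions; confirm supported values with:
#    Get-Help Set-MsolCompanyAllowedDataLocation -Detailed
Set-MsolCompanyAllowedDataLocation -ServiceType "Exchange" -Location "EUR"

# 2. Pin a user's Exchange Online mailbox and OneDrive for Business data to that geo.
#    The UPN is a placeholder; "EUR" is the code for the European Union geo.
Set-MsolUser -UserPrincipalName "jane.doe@contoso.com" -PreferredDataLocation "EUR"

# 3. Verify the setting on the user object.
Get-MsolUser -UserPrincipalName "jane.doe@contoso.com" |
    Select-Object UserPrincipalName, PreferredDataLocation
```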

Considerations with directory synchronization

Businesses that have deployed directory synchronization and run a hybrid configuration of Exchange, where some mailboxes are stored on premises and others in Exchange Online, need a new version of Azure AD Connect to support the multi-geo feature. Azure AD Connect synchronizes an on-premises AD user account custom attribute into the PreferredDataLocation attribute in Azure AD.

The admin sets up the geographic region of the user’s Exchange Online mailbox with the AD on-premises custom attribute. After the value is synchronized with Azure AD, Exchange Online uses that setting to place the mailbox in the proper region. This enables admins to adjust settings in on-premises AD accounts to control the geographical region of Exchange Online mailboxes.

Next Steps

Keep Office 365 data secure

Microsoft adds data loss prevention features across services

Back up Office 365 before disaster strikes

Procurement transformation a main focus at CPO Rising Summit

BOSTON — Corporate procurement and supply chain operations must undergo a modern digital transformation, or the companies will be left behind.

This procurement transformation will be driven by real-time processes and next-generation technologies that allow procurement professionals to see what’s ahead and react immediately to any changes in the conditions, according to Tom Linton, chief procurement officer and supply chain officer for Flex, a company that designs and builds intelligent devices for a variety of industries.

Linton spoke at the CPO Rising Summit, a conference for procurement and supply chain professionals sponsored by the research firm Ardent Partners.

“We have to operate in real time and have systems and business processes that operate in real time, because the velocity of the business is going to continue to get faster,” Linton said. “Everything, whether you’re looking at technology or medicine or information systems, is moving faster. If we can’t communicate or conduct business in real time, we actually consider ourselves failing or falling behind.”

Every generation of every product today is smarter than the one that came before, Linton explained, and the average generational change is just nine months. Procurement needs to keep up with this increase in intelligence and start to take advantage of the new opportunities.

“How do we operate in an age of intelligence?” Linton asked. “How do we operate in a world which is not about the internet of things, because the things themselves are getting more intelligence? How do you develop a system of intelligence in procurement that helps us identify where we are in this progression?”

Visualization helps show where you’re going

One way to do this is through visualization, where information is presented in more digestible ways for procurement.

“What if everything you need to know about your business is available to you in the same time that you can open Uber on your smartphone?” Linton asked.

Flex built a procurement environment, called Flex Pulse, which uses a 100-foot wall of interactive monitors that display up to 58 applications that tell what’s going on with purchases and transactions in real time, according to Linton.

“The idea with Flex Pulse is to take that data and actually make it actionable,” Linton said. “It’s not doing anything truly different; it’s just taking information and restructuring it to make it more digestible for the users.”

The need for the procurement transformation to get up to speed was echoed at a subsequent expert panel.

Need to build trust in transactions

Mike Palackdharry, president and CEO of Aquiire, a Cincinnati-based B2B purchasing and supply chain process technology company, said real-time and next-generation technologies will drive the transformation.

“Things like blockchain, machine learning, AI and natural language processing are all about increasing the speed, the transparency and the trust within the supply chain. And all of that is about real time and how we create communications between buyers and sellers in real time, where we can trust the transaction and the accuracy of the data,” Palackdharry said.

The ultimate goal will be to provide systems that guide buyers to where you want them to go.

“It’s about how you use all of this real-time information that you’re gathering to guide your users to the items that you want them to buy,” said Paul Blake, technology product marketing leader for GEP, a provider of procurement technology in Clark, N.J. “It’s not just about cost savings; it’s about all the value you can bring into the supply chain and how we guide the users to those items.”

Procurement software will need to be fully functional to allow users to do everything they need to do, but underlying complexity must fall under a simple user experience, according to Blake.

“Increasingly, because of our changing expectations and innovations in technology, it has to be able to be used in the same way as all the other technologies around us,” Blake said. “The user experience, ease of use, seamless and formless interface with the technology is a major driving force in what’s going to deliver value in the future. It’s simplicity and complexity represented in the single whole — difficult to achieve, but that’s where I see it going today.”

The future is now — maybe

However, Blake cautioned the procurement transformation may not happen in the immediate future.

“In the 1990s, there were major corporations that said, ‘We think we need software that helps us to buy stuff more effectively.’ And today, there are still corporations saying the same thing,” Blake said. “There’s enormous inertia in the corporate world toward adopting new technologies, not because there isn’t the will to do something or the technology isn’t there, but because it’s extremely difficult to change. If you have a supertanker of a mammoth corporation, you need 100 miles to slow down and change direction.”

The procurement transformation is interesting and has potential, but real time may not be quite ready for the real world of procurement today, according to conference attendee Lynn Meltzer, director of sourcing for Staples, the office supply retailer based in Framingham, Mass.

Staples transitioned from a largely paper- and spreadsheet-based procurement system to Coupa, a cloud-based procurement SaaS platform, in the past year, Meltzer said.

“If you are just now getting a procure-to-pay system and you’re working to pull in your processes and your data and get there, then the timeline is highly compressed from where you are today to what they’re saying about the next 10 years,” she said. “It doesn’t mean that it can’t happen; you’ve just got to show the value and senior management fully buys in.”

It will be important to define the next step on the procurement transformation journey, said Jaime Steele, Staples’ senior director of procurement operations, and that probably won’t involve advanced AI or blockchain yet.

“The next step, not only for us but in the procurement industry, is that you’ve got to punch this out to every system and company next,” Steele said. “So, the realistic next step might be a simple chatbot, and nobody has done that well yet, so you need to solve the more basic things first.”

Meltzer agreed that certain basic things need to be taken care of before procurement organizations can use technology like blockchains.

“When you think about blockchain, you can’t move yourself to that until you figure how you can get that into a place where a robot can grab it or AI can figure out how to make some kind of decision on it,” she said. “I think those are some of the things that need to get sorted through, and it’s going to take a little bit of time. I would probably put it in five to 10 years, but I don’t see full automation getting in there anytime soon.”

AWS’ Network Load Balancer caters to new app dev methods

AWS has yet again drawn back from its one-size-fits-all approach to load balancing to address modern development methods for increasingly diverse and fragmented applications.

In just over a year, the cloud provider has released two layer-specific load balancers to succeed its original Elastic Load Balancing offering, now known as the Classic Load Balancer. Application Load Balancer, unveiled last summer, adds more granularity to routing at the application layer. And last month, AWS introduced Network Load Balancer to route TCP traffic at the transport level to targets such as containers, IP addresses and Elastic Compute Cloud (EC2) instances. Network Load Balancer can balance millions of requests per second without a warmup period — a boon for customers who receive volatile spikes in traffic, according to the company.

Network Load Balancer preserves client IP addresses, which eliminates workarounds and allows an IT team to apply firewall rules to those source addresses. It also supports one static IP address per availability zone to help corral addresses as resources scale up, and a team can assign an Elastic IP address per availability zone for more control over domain name system (DNS) records.
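As a rough sketch of how those pieces fit together, the commands below (AWS CLI calls run from a PowerShell session) create a Network Load Balancer with one Elastic IP per availability zone, an IP-target group and a TCP listener; all names and resource IDs are placeholders:

```powershell
# Assumes the AWS CLI is installed and configured; subnet, allocation and VPC IDs are placeholders.

# Network Load Balancer with one static (Elastic) IP per availability zone.
$nlbArn = aws elbv2 create-load-balancer `
    --name example-nlb `
    --type network `
    --scheme internet-facing `
    --subnet-mappings "SubnetId=subnet-aaaa1111,AllocationId=eipalloc-11111111" `
                      "SubnetId=subnet-bbbb2222,AllocationId=eipalloc-22222222" `
    --query "LoadBalancers[0].LoadBalancerArn" --output text

# Target group that balances to IP addresses (for example, container tasks) over TCP.
$tgArn = aws elbv2 create-target-group `
    --name example-tcp-targets `
    --protocol TCP --port 443 `
    --vpc-id vpc-cccc3333 `
    --target-type ip `
    --query "TargetGroups[0].TargetGroupArn" --output text

# TCP listener on port 443 that forwards to the target group.
aws elbv2 create-listener `
    --load-balancer-arn $nlbArn `
    --protocol TCP --port 443 `
    --default-actions "Type=forward,TargetGroupArn=$tgArn"
```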

IT shops need load balancing to provide low latency and extremely high throughput for applications that run across instances, containers and servers, said Brad Casemore, an analyst at IDC. Features around static IP addresses per availability zone also cater to enterprises that may have particular whitelists they have to enforce on the firewall, to cite one example.

As AWS moves up the stack to entice more customers, the cloud provider is working to improve resources at the plumbing level. Modern workloads that require higher performance and new approaches to development outstrip the Classic Load Balancer’s capabilities, and workarounds such as X-Forwarded-For headers and proxy protocols frustrate users and hinder adoption, Casemore said. That’s a problem for AWS, which strives to reduce areas of friction for its customers.

“How much friction does a customer have to go through in order to get application availability from instances running on AWS’ cloud?” Casemore said. “TCPs are rife in the enterprise world, and these sorts of features and functionality become more important and take away some of that friction when customers are moving to the cloud.”

AWS’ load-balancing options have evolved over the last eight years; network- and application-level tools are now the standard choices.

AWS recommends using the Network and Application load-balancing options when working with EC2 instances within a Virtual Private Cloud (VPC), which occurs now by default. Apps that run on EC2-Classic instances can continue working with the Classic Load Balancer.

Adapting to app development advances

Microservices and container-based applications have changed what customers require from AWS load balancing. Monitoring takes on greater importance in microservices workloads, for example, so IT teams can respond quickly when an app component crashes. Network Load Balancer conducts health checks on both the network and application, and it pushes traffic only to healthy targets. Additionally, AWS customers can integrate Amazon Route 53 for DNS failover into another availability zone.

“Certain customers looking at high-value applications that they want to move to the cloud are going to want to make sure they have an application continuity and disaster recovery option,” Casemore said.

Chris Riley, partner at cPrime, a consultancy specializing in Agile transformations in Foster City, Calif., started using Application Load Balancer shortly after its release, although his clients have yet to need additional routing at the transport layer. The service is handy for URL-based routing or container-based workloads, as containers can appear on different ports through the use of a scheduler such as Amazon EC2 Container Service, he said. And customers can load-balance to multiple ports on the same instance and to on-premises targets, a priority among some hybrid cloud customers.

It’s easy enough to replace load balancers to reduce downtime for latency-sensitive workloads and large amounts of traffic, Riley said, but not all of his clients want to shift to containers and REST APIs. Simpler workloads, such as traditional web apps, might not need the new load balancer’s capabilities, at least for now. And AWS’ insistence on using layer-specific load balancers for VPC-based instances means cPrime will work with the Network Load Balancer in the future.

AWS’ load-balancing enhancements could convert users who sought services and software from businesses like F5, Barracuda or Cisco, and open source tools, such as NGINX and HAProxy — much as its Application Load Balancer could pull business away from application delivery controller vendors. Nevertheless, some customers want greater integration between the application- and network-specific load balancers beyond an API that currently links the two, and they want to extend that control all the way up the stack, Casemore said.

“Right now, they have to do that through conjoining the two, but that’s a little pricier. Some would like to see the features melded,” he said.

David Carty is the site editor for SearchAWS. Contact him at dcarty@techtarget.com.

Satya Nadella at Ignite: “We collectively have the opportunity to lead in this transformation”

These ingredients transcend product category definitions and business models.

Together, we are building a modern workplace, which starts with empowering everyone in an organization to be more creative and collaborative, and ultimately apply technology to help shape the culture of work. And importantly, secure your organization’s digital estate. It’s this cultural shift that’s top of mind for every leader and every organization, and that’s what we want to enable with Microsoft 365.

We aim to unlock the creativity in all of us, so that you can bring out the best in everyone in your organization.

Ford, for example, is not only breaking down organizational boundaries, but is simultaneously breaking down geographical boundaries as well by using the power of mixed reality to brainstorm, design and develop new vehicles.

This represents a sea change at Ford. In the past, designing a new car involved building a clay model that weighed 5,000 pounds. This required moving it around in order for people to see it. But what if you could create a digital feedback loop where everyone is collaborating simultaneously? That’s the kind of innovation we’re unlocking with the modern workplace. And when you add mixed reality, the very nature of collaboration changes.

Microsoft is transforming the data and intelligence that powers the modern workplace, and the Microsoft graph is perhaps one of the most important data assets — core to thinking about how customers unlock value.

Today, we’re announcing the first phase of integration between the LinkedIn graph and the Microsoft graph. With this rich platform and rich data graph you can start building AI-first applications. You can bring the power of AI, the power of natural language, the power of deep learning to the enterprise and unlock it.

As we aim to empower people within an organization, Microsoft is taking it a step further, to ensure we’re also transforming the processes and functions surrounding them. Bing for Business combines data from the enterprise with public Web results to produce a seamless experience that’s intelligent, personal and contextual.

Achieving Greater Business Productivity with PowerApps

September 14, 2017 10:00-11:00 AM Pacific Time (UTC-7)

The advances in productivity that modern enterprise software has brought to business users have always been tempered by the limits of standard software in meeting industry-specific, geography-specific, or individual company-specific requirements. This need, typified by the concept of “last mile” functionality, has only increased in recent years.

Historically, the solution taken by most organizations in their quest for the “last mile” was a high degree of customization. While customization can fill in the gaps, custom apps are typically expensive to develop and expensive to maintain.

In this webinar, hear from enterprise application analyst Josh Greenbaum, as he looks at how Dynamics 365 customers are leveraging PowerApps to achieve success in:

  • filling the “last mile” gap in productivity
  • enabling greater business user empowerment
  • increasing speed and ease of development
  • leveraging existing data and applications resources
  • gathering more data and delivering faster analysis to make better decisions

In addition, you’ll also see PowerApps in action with a demo from Luis Camino, Sr. Product Marketing Manager at Microsoft.

For customers and partners looking to achieve greater productivity in the enterprise, this webinar is for you.

Join us September 14th for this free webinar