Adobe, Microsoft and SAP announce new Open Data Initiative details

Global industry partners join effort to connect data and gather powerful insights fueled by AI and intelligent services

LAS VEGAS — March 27, 2019 — Wednesday at Adobe Summit, the industry’s leading customer experience conference, Adobe (Nasdaq: ADBE) CEO Shantanu Narayen and Microsoft (Nasdaq “MSFT” @microsoft) CEO Satya Nadella revealed additional details about the Open Data Initiative (ODI). As originally announced last September, Adobe, Microsoft and SAP (NYSE: SAP) have embarked on a new approach to business data that will help companies transform their customer experiences through real-time insights delivered from the cloud.

The three partners outlined a common approach and set of resources for customers in an initial announcement last September, with the ambition of helping customers create new connections across previously siloed data, garner intelligence more seamlessly, and ultimately give brands an improved view of their customer interactions.

From the beginning, the ODI has been focused on enhancing interoperability between the applications and platforms of the three partners through a common data model with data stored in a customer-chosen data lake. This unified data lake is intended to allow customers their choice of development tools and applications to build and deploy services.

To improve that process, the three companies plan to deliver in the coming months a new approach for publishing, enriching and ingesting initial data feeds from Adobe Experience Platform (activated through Adobe Experience Cloud), Microsoft Dynamics 365 and Office 365, and SAP C/4HANA into a customer’s data lake. This will enable a new level of AI and machine learning enrichment to garner new insights and better serve customers.

Unilever, a mutual customer and one of the early global brands to express support and excitement about the ODI, today announced its intention to simplify a previously complex business outcome based on these data connections. At Adobe Summit, Unilever is demonstrating how it plans to bring together disparate customer, product and resource data and use AI-driven insights to help reduce its plastic packaging and encourage consumer recycling. By eliminating the silos of data, Unilever will be able to tie inventory and plastics data into Adobe data to enhance customer experiences and encourage participation.

To accelerate development of the initiative, Adobe, Microsoft and SAP also announced today plans to convene a Partner Advisory Council consisting of over a dozen companies, including Accenture, Amadeus, Capgemini, Change Healthcare, Cognizant, EY, Finastra, Genesys, Hootsuite, Inmobi, Sprinklr and WPP. These organizations span myriad industries and customer segments and believe there is significant opportunity in the ODI for them to drive net-new value to their customers.

“Our customers are all trying to integrate behavioral, CRM, ERP and other internal data sets to have a comprehensive understanding of each consumer, and they’re struggling with the challenges of integrating this data,” said Stephan Pretorius, CTO of WPP. “We’re excited about the initiative Adobe, Microsoft and SAP have taken in this area, and we see a lot of opportunity to contribute to the development of ODI.”

About Adobe
Adobe is changing the world through digital experiences. For more information, visit www.adobe.com.

About SAP
As market leader in enterprise application software, SAP (NYSE: SAP) helps companies of all sizes and industries run better. From back office to boardroom, warehouse to storefront, desktop to mobile device – SAP empowers people and organizations to work together more efficiently and use business insight more effectively to stay ahead of the competition. SAP applications and services enable more than 404,000 business and public sector customers to operate profitably, adapt continuously, and grow sustainably. For more information, visit www.sap.com.

About Microsoft
Microsoft (Nasdaq “MSFT” @microsoft) enables digital transformation for the era of an intelligent cloud and an intelligent edge. Its mission is to empower every person and every organization on the planet to achieve more.

For more information, press only:

Microsoft Media Relations, WE Communications for Microsoft, (425) 638-7777, [email protected]

Stefan Offermann, Adobe, (408) 536-4023, [email protected]

Stacey Hoskin, SAP, (816) 337-7476, [email protected]

Note to editors: For more information, news and perspectives from Microsoft, please visit the Microsoft News Center at http://news.microsoft.com. Web links, telephone numbers and titles were correct at time of publication but may have changed. For additional assistance, journalists and analysts may contact Microsoft’s Rapid Response Team or other appropriate contacts listed at http://news.microsoft.com/microsoft-public-relations-contacts.

Go to Original Article
Author: Microsoft News Center

Scalable learning vs. scalable efficiency in the automation age

For better or worse, automation is on a course to reshape all modern institutions, including enterprises, governments and nongovernmental organizations, said John Hagel, founder and co-chairman of the Center for the Edge at Deloitte.

Speaking at the Work Rebooted conference in San Francisco last month, Hagel said the success or failure of automation in our institutions will depend, in large part, on the mindset adopted by the leaders at their helms.

The traditional approach leaders take to automation is driven by fear and follows what Hagel called the “scalable efficiency model.” Leaders focus on driving cost out of the business as the means to keep pace with competitive threats, both from direct competitors and from potential new entrants.

“Often, people focus on one slice of automation,” Hagel said. “When we take that view of the future of work, we end up with modest results. The imperative is to take a holistic view.”

A better mindset for navigating this latest wave of automation is what he called “scalable learning,” which requires a mindset driven by curiosity and exploration, rather than by cost-cutting.

But leaders need to be pragmatic in pursuing this new paradigm, he cautioned, in order to reduce pushback.

Scalable efficiency

According to Hagel, most institutions react to market pressures by getting more efficient at ever-larger scale. As a result, managers focus on defining work as tightly specified tasks that can be routinely performed in the same way — and thus automated. One side effect of this model is the elimination of jobs.

To be sure, some of the proponents of the scalable efficiency model also talk about reskilling people whose jobs have been eliminated through automation. But Hagel argued that most new skills don’t last long in times of rapid change, so people will need to be continuously reskilled — a commitment many companies are unwilling or unable to make. Other companies use gig economy models — hiring people on an as-needed, project-by-project basis — to make fixed labor costs variable.

The downside of the scalable efficiency model is the loss of trust and loyalty, as workers — and even leaders — are left wondering how long their jobs will last.

“The institution is going to win, and the worker will lose as we become more efficient,” Hagel said. But, he added, at some point the loss of human capital catches up with the institution.

Scalable learning

Hagel contended that the institutions with the greatest chance of succeeding in the future will be driven by scalable learning. In this approach, leaders focus on how the organization can learn faster at scale. This type of learning is not about watching e-learning videos or completing training programs to do a job more efficiently.

Scalable learning involves creating the infrastructure and incentives that will make everyone think about creating new value for the business, rather than learning just enough to do the job at hand.

According to proponents of the scalable learning model, by making value creation the focus of everyone — from the workers on the factory floor to the front-line sales teams to the maintenance staff — organizations will get better at identifying problems that stand in the way of success and be more likely to think about new ways of working.

Learning versus knowing

The scalable learning model only works when leaders believe value can be created by people at all levels of the organization.

In the scalable efficiency model, leaders are expected to have answers to all the questions — and if they don’t, they get replaced. In the new model, smart leaders solve problems by asking better questions of employees, customers, business partners and so on. Leaders following the scalable learning model also have to admit when they don’t have an answer and be willing to ask for help.

“Our view is that, if you ask powerful questions, exciting questions, it excites the passion in [employees] who could make a difference,” Hagel said.

Start small

Hagel warned the audience not to expect overnight success with a scalable learning model. “If you believe transformation is a rational process, about collecting the right data and presenting it to the right people, you have already lost,” Hagel said. “Transformation is fundamentally a political process, not a rational process.”

Before doing anything, he said, it is important to identify and neutralize the “enemies of change.” Hagel said he has never seen a senior leader stand up and announce they are an enemy of change. Rather, they go back to their offices and plot how to subvert the new program.

The next step lies in identifying and strengthening the champions of change. But don’t give them too big a budget, or it will just make their project a target of other departments, he said. “Never underestimate the power of the immune system and antibodies that exist in every institution today.”

Be a better human

One approach for growing this scalable learning model may lie in finding ways to align the interests of workers and AI, said Benjamin Pring, co-founder and director of Cognizant’s Center for the Future of Work. He suggested AI can help empower front-line workers to find new ways to improve the enterprise.

One place to start may be getting people to think about beauty — not an attribute most enterprises put into their mission statements. But, as Pring noted, Apple’s success is often directly attributed to Steve Jobs’ decision to hire Jony Ive to bring beauty to the smartphone. Jobs went further and thought about bringing beauty to the factories, because he reckoned it would inspire the workers who built the products.

Not all enterprises are this enlightened today, Pring quipped, showing a picture of the Cognizant expense-tracking application. “It is ugly,” he said. “But we all know that things sucking is the real mother of invention. It is a simple equation that the future of work is to make things suck less.”

Instead of fearing that AI will automate them out of a job, workers could be asked how AI could make each of their jobs suck less, he said.

“Societies have always adapted to changes in tools, but individuals along the way haven’t,” Pring said. He has no illusions about the impact of AI and automation. He said he expects technology to automate a big swath of worker jobs, just as it did with agriculture over the last 100 years. Big business will process more insurance claims and loan applications with far fewer people, because robotic process automation can do it far more efficiently than humans.

But this may not be a bad thing. Much of human work has amounted to impersonating robots, Pring said. Now that the real robots are showing up, they will do this type of work much better than humans. The human response is to “double down on what makes you a good human being instead,” he said. “Don’t be a bad robot.”

Alluding to Peter Drucker, Steve Ardire, an AI startup consultant, said, “Efficiency should be delegated to machines, while effectiveness is a human pursuit.”

That’s all well and good, assuming companies don’t want to just settle for the robot.

Go to Original Article

Wanted – Mac Mini

Discussion in ‘Desktop Computer Classifieds‘ started by Jigga Jackson, Mar 21, 2019.

  1. Jigga Jackson

    Active Member | Joined: Jan 2, 2009 | Messages: 636 | Ratings: +377 | Location: Essex

    Hi All,

    Am after a Mac mini. Preferably nothing older than 2016, with a minimum of 8GB RAM, and it needs to be loaded with the latest software. It will be a bonus if it comes with an Apple keyboard and trackpad!!

    What have you got, lads???

    Location: Thundersley


Go to Original Article

How to Choose the Optimal Size for Your Azure Virtual Machine

You have finally decided to deploy your application in the cloud, and now you have to begin the planning process. During this period, you will make some critical decisions, such as the cloud provider, cloud deployment type and cloud capacity that you will need. Selecting the best cloud provider depends on many factors, including your current technology stack, which is outside the scope of this blog. We will use Microsoft Azure in our example, but these same best practices are applicable to all cloud providers, including Amazon AWS and Google Cloud.

Choosing your Deployment Method

Next, you need to decide whether you will use SaaS, PaaS, IaaS, or some combination of deployment types. Software as a service (SaaS) is generally the easiest to use, but it offers the least customization since you are restricted to the software’s limited APIs and features. Platform as a service (PaaS) usually gives you the best performance by letting you build apps that run directly in the cloud fabric, but these again may be limited by the APIs available. The most flexible option is infrastructure as a service (IaaS), which lets you deploy your own virtual machines (VMs), networks and storage in the cloud, allowing you to customize what runs inside those VMs. In this blog, we will help you select the best IaaS VM size to optimize your service when it runs in the public cloud.

Evaluating your Workload

Whether you are deploying a virtual machine in the public cloud or on-premises in your own datacenter, you will go through a similar process to figure out which resources need to be optimized. This means that you need to consider the virtual CPU, memory, network utilization and disk I/O, just as you would for a physical server. However, unlike with physical hardware, you are not locked into your selection once you procure it; it is easy to monitor and adjust utilization so that you only pay for the capacity you need. This can provide significant cost savings and operational flexibility, as you can grow or shrink your VMs alongside your business or seasonal fluctuations. Microsoft Azure offers six categories of VMs which are optimized for different types of workloads:

  • General Purpose (Av2, B, DC, Dsv3, Dv3, DSv2, Dv2) – Balanced ratio of CPU and memory
  • Compute optimized (Fsv2, Fs, F) – High CPU
  • Memory optimized (Esv3, Ev3, M, GS, G, DSv2, Dv2) – High memory
  • Storage optimized (Lsv2, Ls) – High disk throughput and IO
  • GPU (NV, NVv2, NC, NCv2, NCv3, ND, NDv2) – Specialized workloads for graphics or AI
  • High performance (H) – These are the fastest (and most expensive) VMs with high CPU and memory for complex computational problems

There are many different variations within a category which can make it confusing to pick the right size. To make the best choice, first understand which category of VM you need for that workload, then you can focus on evaluating the specific series type and VM size.

Selecting a VM size in Microsoft Azure

Figure 1 – Reviewing the numerous Azure VM sizes can be confusing!
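One way to cut through the noise is to pull the raw list of sizes programmatically and filter it yourself. Below is a minimal sketch, assuming the azure-identity and azure-mgmt-compute Python packages plus placeholder values for the subscription ID and region; treat it as a starting point for comparison rather than a definitive sizing tool.

    # pip install azure-identity azure-mgmt-compute
    from azure.identity import DefaultAzureCredential
    from azure.mgmt.compute import ComputeManagementClient

    SUBSCRIPTION_ID = "<your-subscription-id>"  # placeholder: use your own subscription
    LOCATION = "westus2"                        # placeholder: use your target region

    client = ComputeManagementClient(DefaultAzureCredential(), SUBSCRIPTION_ID)

    # Print every VM size offered in the region with its vCPU count, memory
    # and data disk limit, so you can shortlist candidates within the
    # category you chose above (general purpose, compute optimized, etc.).
    for size in client.virtual_machine_sizes.list(location=LOCATION):
        print(f"{size.name}: {size.number_of_cores} vCPUs, "
              f"{size.memory_in_mb / 1024:.1f} GB RAM, "
              f"{size.max_data_disk_count} max data disks")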

General Purpose VMs

The Azure general purpose VMs are recommended for workloads which do not require any significant amount of computation power, network traffic or disk IO. They are well suited to small servers, low-traffic web servers, or development environments. If you are unsure about which type of VM to use, this is a good place to start. The general purpose family spans the Av2, B, DC, Dsv3, Dv3, DSv2 and Dv2 series.

Compute Optimized VMs

Azure’s compute optimized VMs are designed for medium-traffic application servers, web servers or virtual appliances which need a greater ratio of CPU to memory. If your virtualized workload needs a lot of processing power, such as for gaming, analytics or batch processing tasks, these VMs may be a great fit; they include the multiple F-series options (Fsv2, Fs, F).

Memory Optimized VMs

If you wish to run a memory-intensive workload, such as a database server, you should select one of Azure’s memory optimized VMs. These VMs are designed to scale up to support the largest databases without performance impact. Here is an overview of the memory optimized VMs:

  • DSv2 / Dv2 / G / GS-series VMs offer a variety of memory options for different ratios of memory to CPU.
  • Esv3 / Ev3-series VMs offer hyper-threaded processors and are optimized for virtualized databases, making them more powerful than the D-series.
  • M-series VMs combine massive amounts of memory (almost 4 TB) with the highest virtual CPU count (up to 128 vCPUs), making them ideal for applications which also have a high CPU processing requirement.

Storage Optimized VMs

For workloads which require high disk throughput and IO, consider selecting a storage optimized VM. These VMs are designed for large databases, data warehousing and big data workloads, supporting SQL and NoSQL engines such as Cassandra, MongoDB, Cloudera and Redis, among others. If you are unable to use PaaS for the data layer and have to set up a full database server within a VM, then select from the Lsv2 or Ls-series.

GPU VMs

Azure’s GPU optimized VMs are designed for specialized workloads which run on NVIDIA graphics processing units (GPUs). These processors are typically used for graphics, visualizations or compute-intensive artificial intelligence and machine learning applications. The GPU series (NV, NVv2, NC, NCv2, NCv3, ND, NDv2) VMs vary mostly based on processing power; however, the NV and NVv2-series are recommended specifically for remote visualization, such as graphics rendering or computer-aided design (CAD).

High-Performance VMs

The final category of specialized VMs is for high-performance computing workloads which GPUs cannot support, such as developing neural networks for AI, DNA modeling, or prime number factorization. These servers are often configured as nodes in a high-performance computing (HPC) cluster. As these VMs are expensive, they are usually run on a case-by-case basis for testing specific applications or workloads and are decommissioned when dormant.

Sizing Guidance with the Azure Migrate Service

Although we presented you with a lot of different considerations, Microsoft has tried to make the migration to Azure easy through its Azure Migrate service. This suite of utilities will help you determine which of your existing on-premises workloads are suitable for Azure, along with sizing recommendations and estimated monthly costs. Not only will this assess your Hyper-V VMs running on Windows Server, but also any VMware VMs managed by vCenter. When looking at storage, Azure Migrate tries to map every physical disk to a disk in Azure to evaluate the disk I/O. For the network, Azure Migrate will inventory the network adapters and measure their traffic. For memory and CPU, Azure Migrate will recommend an Azure VM size which matches or exceeds the resources of the original virtual machine.

Once you have successfully deployed your VMs in a public cloud, make sure that you continually monitor them for performance and regularly check your bills. It is easy to make adjustments to virtualized resources so that you can optimize the size of your VMs for the public cloud.
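Because you can resize a VM after deployment (the platform reboots it onto matching hardware), right-sizing is a routine operation rather than a one-time bet. The following is a minimal sketch of one way to do it with the same azure-mgmt-compute package; the subscription ID, resource group, VM name and target size are placeholders, and you should first confirm the target size is available in your region.

    from azure.identity import DefaultAzureCredential
    from azure.mgmt.compute import ComputeManagementClient

    SUBSCRIPTION_ID = "<your-subscription-id>"  # placeholder
    RESOURCE_GROUP = "my-resource-group"        # placeholder
    VM_NAME = "my-vm"                           # placeholder

    client = ComputeManagementClient(DefaultAzureCredential(), SUBSCRIPTION_ID)

    # Fetch the VM definition, change only its size, and push the update
    # back; Azure restarts the VM to complete the resize.
    vm = client.virtual_machines.get(RESOURCE_GROUP, VM_NAME)
    vm.hardware_profile.vm_size = "Standard_D4s_v3"  # example target size
    client.virtual_machines.begin_create_or_update(RESOURCE_GROUP, VM_NAME, vm).result()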

Wrap-Up

What about you? What VM sizes have worked well for your own workloads? Have you found some that fit your uses better than others? Let us know in the comments section below!

Thanks for reading!

Go to Original Article
Author: Symon Perriman

A tragedy that calls for more than words: The need for the tech sector to learn and act after events in New Zealand

Four months ago, when our team at Microsoft first made plans for a visit to New Zealand that began yesterday, we did not expect to arrive on the heels of a violent terrorist attack that would kill innocent people, horrify a nation and shock the world. Like so many other people around the globe, across Microsoft we mourn the victims, and our hearts go out to their families and loved ones. Among those killed were two individuals who were part of the broader Microsoft partner community.

We appreciate the gravity of the moment. This is a time when the world needs to stand with New Zealand.

Words alone are not enough. Across the tech sector, we need to do more. Especially for those of us who operate social networks or digital communications tools or platforms that were used to amplify the violence, it’s clear that we need to learn from and take new action based on what happened in Christchurch.

Across Microsoft, we have reviewed how our various services were used by a relatively small number of individuals to try to spread the video from Christchurch. While our employees and technology tools worked quickly to stop this distribution, we have identified improvements we can make and are moving promptly to implement them. This includes the accelerated and broadened implementation of existing technology tools to identify and classify extremist violent content, and changes to the process that enables our users to flag such content. We are exploring additional steps we can take as a company and will move quickly to add to these improvements.

We recognize, however, that this is just a beginning. More fundamental progress requires that we work together across the tech sector and in collaboration with governments and nongovernmental organizations so we can take bigger steps.

What should we do?

To start, we should acknowledge that no one yet has all the answers. This is an area in which companies across the tech sector need to learn, think, work and act together. Competition is obviously indispensable to a vibrant technology sector. But when it comes to saving human lives and protecting human rights, we should act in a united way and enable every company large and small to move faster.

Ultimately, we need to develop an industrywide approach that will be principled, comprehensive and effective. The best way to pursue this is to take new and concrete steps quickly in ways that build upon what already exists.

There are in fact important recent steps on which we can build. Just over two years ago, thanks in part to the leadership and urging of the British government and the European Commission, four companies – YouTube, Facebook, Twitter and Microsoft – came together to create the Global Internet Forum to Counter Terrorism (GIFCT). Among other things, the group’s members have created a shared hash database of terrorist content and developed photo and video matching and text-based machine learning techniques to identify and thwart the spread of violence on their platforms. These technologies were used more than a million times in 24 hours to stop the distribution of the video from Christchurch.

While these are vital steps, one of the lessons from New Zealand is that the industry rightly will be judged not only by what it prevented, but by what it failed to stop. And from this perspective, there is clearly much more that needs to be done. As Prime Minister Jacinda Ardern noted last week, gone are the days when tech companies can think of their platforms as akin to a postal service, without regard to the responsibilities embraced by other content publishers. Even if the law in some countries gives digital platforms an exemption from decency requirements, the public rightly expects tech companies to apply a higher standard.

As an industry, tech companies created new services to bring out the best – not the worst – in people. To break down boundaries, not sow division. But as with virtually every technology ever invented, people are using digital services for both good and ill. Unfortunately, individuals are using online platforms to bring out the darkest sides of humanity.

The problem has multiple dimensions and we will need to address all of them. We’ve seen online platforms and digital tools used to help recruit people to violent ideologies. These same tools have been used to incite and organize violent attacks on innocent people. And as we saw in Christchurch, we’ve seen digital platforms used to amplify the impact of attacks through the widespread sharing of violent images and videos around the world.

Regardless of whether a particular technology played a big, small or no part in this event, across the industry we all can and need to be part of the solution. There is a role for everyone to play. That should be one of the most important lessons from Christchurch.

There are at least three areas where we should focus our efforts.

First, we need to focus on prevention. We need to take new steps to stop perpetrators from posting and sharing acts of violence against innocent people. New and more powerful technology tools can contribute even more than they have already. We must work across the industry to continue advancing existing technologies, like PhotoDNA, that identify and apply digital hashes (a kind of digital identifier) to known violent content. We must also continue to improve upon newer, AI-based technologies that can detect whether brand-new content may contain violence. These technologies can give us more granular control over removing violent video content. For example, while robust hashing technologies allow automated tools to detect additional copies of content already flagged as violent, we need to further advance the technology to better identify and catch edited versions of the same video.
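To make the mechanism concrete, hash matching works by computing a compact fingerprint of a file and checking it against a shared database of fingerprints of known content. The sketch below is a deliberately simplified illustration, not PhotoDNA itself (which is proprietary): a cryptographic hash only matches byte-identical copies, whereas production systems use robust perceptual hashes that also match re-encoded or lightly edited media.

    import hashlib

    # Simplified stand-in for a shared industry hash database (e.g., the
    # GIFCT database referenced above). In practice this is populated and
    # queried through vetted, access-controlled channels.
    known_violent_hashes: set[str] = set()

    def fingerprint(data: bytes) -> str:
        # Assumption: SHA-256 is used here for illustration only. It matches
        # exact copies; PhotoDNA-style perceptual hashes also tolerate
        # resizing, recompression and small edits.
        return hashlib.sha256(data).hexdigest()

    def should_block(upload: bytes) -> bool:
        # Block the upload if its fingerprint is already in the database.
        return fingerprint(upload) in known_violent_hashes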

We should also pursue new steps beyond the posting of content. For example, we should explore browser-based solutions – building on ideas like safe search – to block access to such content at the point when people attempt to view or download it.

We should pursue all these steps with a community spirit that will share our learning and technology across the industry through open source and other collaborative mechanisms. This is the only way for the tech sector as a whole to do what will be required to be more effective.

We also should recognize that technology cannot solve this problem by itself. We need to consider and discuss additional controls or other measures that human beings working at tech companies should apply when it comes to the posting of this type of violent material. There are legal responsibilities that need to be discussed as well. It’s a complicated topic with important sensitivities in some parts of the tech sector. But it’s an issue whose importance can no longer be avoided.

Second, we need to respond more effectively to moments of crisis. Even with better progress, we cannot afford to assume that there will never be another tragedy. The tech sector should consider creating a “major event” protocol, in which technology companies would work from a joint virtual command center during a major incident. This would enable all of us to share information more quickly and directly, helping each platform and service to move more proactively, while simultaneously ensuring that we avoid restricting communications that are in the public interest, such as reporting from news organizations.

We should also discuss whether to define a category of agreed “confirmed events,” upon which tech companies would jointly institute additional processes to detect and prevent sharing of these types of extremist violent content. This would better enable efforts to identify and stop this content before it spreads too broadly.

Finally, we should work to foster a healthier online environment more broadly. As many have noted, while much of the focus in recent days rightly has been on the use of digital tools to amplify this violence, the language of hate has existed for decades and even centuries. Nonetheless, digital discourse is increasingly toxic. There are too many days when online commentary brings out the worst in people. While there’s obviously a big leap from hateful speech to an armed attack, it doesn’t help when online interaction normalizes in cyberspace standards of behavior that almost all of us would consider unacceptable in the real world.

Working on digital civility has been a passion for many employees at Microsoft, who have recognized that the online world inevitably reflects the best and worst of what people learn offline. In many ways, anonymity on the internet can free people to speak and behave in ways they never would in person. This is why we believe it’s important to continue to promote four tenets to live by when engaging online. Namely, we all need to treat others with respect and dignity, respect each other’s differences, pause before replying and stand up for ourselves and for others. This too is an area on which we can build further.

We all need to come together and move faster. This is the type of serious challenge that requires broad discussion and collaboration with people in governments and across civil society around the world. It also requires us to expand and deepen industrywide groups focused on these issues, including key partners from outside the industry.

Finally, we hope this will become a moment that brings together leaders from across the tech sector.

It’s sometimes easy amidst controversy for those not on the hot seat to remain silent and on the sideline. But we believe this would be a mistake. Across the tech sector we can all contribute ideas, innovate together and help develop more effective approaches.

The question is not just what technology did to exacerbate this problem, but what technology and tech companies can do to help solve it. Put in these terms, there is room – and a need – for everyone to help.

Go to Original Article
Author: Microsoft News Center

SRE software refines DevOps incident response for enterprise

A software product for site reliability engineers might seem like a contradiction, as IT pros still mostly rely on tribal knowledge to build the role.

But site reliability engineering software that tracks incident management metrics and automates postmortem incident reviews has proved worth the investment for one enterprise DevOps shop fine-tuning its SRE workflows.

Procore Technologies, a construction management software firm in Carpinteria, Calif., had previously used a collection of tools to hack together a process for incident management. Among these tools was Atlassian’s Confluence collaboration software, which functioned as a document repository for incident response and postmortem information, said Stephen Westerman, senior director of engineering strategy.

“That was just a repository for those documents. You can’t really report on important metrics, time to resolution and stuff like that [in Confluence],” Westerman said.

Atlassian has Jira Ops for incident management automation and postmortem analysis, which it integrated with the Slack ChatOps tool and bolstered with the acquisition of Opsgenie in September 2018. But a few months earlier, Procore had engaged with a stealth startup called Blameless, which it encountered at an industry event and whose early access product was similar to Jira Ops.

Procore did not hear about or consider Jira Ops, Westerman said, but Blameless’ Slack integration was a big selling point for the product.

“It allows our engineers and response teams to manage an incident directly from a tool they’re already spending their entire day in,” he said. “The Blameless integration with Slack allows them to quickly get all the right people into a channel, manage roles, checklists and incident statuses, build timelines and create follow-up actions on the fly, without having to navigate through another system that they only have to use once in a while in a stressful situation.”

SRE software improves postmortem follow-through

As incidents unfold at Procore, Blameless SRE software defines the tasks required of each SRE team role — from the communications lead to the incident commander and the initial incident reporter — and can tie in business stakeholders as needed. As team members enter the Slack channel, Blameless posts a summary, status and list of important events for them, so other team members don’t have to catch up colleagues on the incident.

Incident postmortems are familiar territory for Procore SREs, but they previously required a team member to record and reconstruct events. The SRE software tool takes over that role, generates follow-up to-do lists, records code snippets and IT monitoring tools’ graphs, and creates a customized timeline of the incident based on respondents’ replies to its Slack messages.
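For teams without such a product, the basic pattern is straightforward to prototype. The sketch below is not Blameless’ implementation; it is a minimal illustration of the kind of Slack integration described here, assuming the slack_sdk Python package and a bot token with permission to create channels, post and pin messages. The incident_id and summary parameters are hypothetical.

    # pip install slack_sdk
    import os
    from slack_sdk import WebClient

    client = WebClient(token=os.environ["SLACK_BOT_TOKEN"])

    def open_incident_channel(incident_id: str, summary: str) -> str:
        # Create a dedicated channel for the incident (Slack channel names
        # must be lowercase), then pin a summary message so responders
        # joining later see the current status without a verbal catch-up.
        channel_id = client.conversations_create(name=f"inc-{incident_id}")["channel"]["id"]
        resp = client.chat_postMessage(
            channel=channel_id,
            text=f"*Incident {incident_id}*\n{summary}",
        )
        client.pins_add(channel=channel_id, timestamp=resp["ts"])
        return channel_id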

The ready availability of incident management data makes Procore SREs more scrupulous about postmortems, Westerman said.

“It made the process a lot less painful,” he said. “There’s a lot less reconstruction to do. We’re much more confident that when it comes time for the postmortem, we’ve actually recorded all the follow-up actions we need to do, and nothing has slipped through the cracks.”

Blameless officially launched and made its product generally available on March 20, 2019, with $20 million in venture capital funding and 20 enterprise customers that include DigitalOcean and Home Depot. The company plans to add customizable dashboards to the product in April 2019, which should help small cross-functional DevOps teams at Procore improve mean time to resolution and incident response metrics, Westerman said.

Go to Original Article

Wanted – Basic Desktop PC

Discussion in ‘Desktop Computer Classifieds‘ started by Marekj, Mar 21, 2019.

  1. Marekj

    Active Member | Joined: Apr 12, 2011 | Messages: 559 | Ratings: +24

    Hi,

    An old Dell/HP box would be ideal.

    Requirements:

    I’d need to be able to add a GPU if it doesn’t already have one (nothing meaty – a 1050/1060 mini card).
    If it already has a GPU, it needs to be roughly 1050-level performance, or thereabouts, if it’s an older card.
    8GB RAM (or 4GB if there is an additional spare slot).
    A spare SATA port to connect an SSD, should it not have one.
    It must not sound like a jet engine.
    It must be complete (i.e., there is nothing I’d need to add other than what I’ve outlined above).

    Looking to pay £150 max

    Location: Leamington Spa


Go to Original Article