Empathy and innovation: How Microsoft’s cultural shift is leading to new product development

The young Microsoft software engineer had just moved to the U.S. and was trying her best to stay in close touch with her parents back home, calling them on Skype every week.

But their internet connection in India was poor, and Swetha Machanavajhala, deaf since birth, struggled to read their lips over the glitchy video. She always had to ask her parents to turn off the lights in the background to help her focus better on their faces.

“I kept thinking, ‘Why can’t we build technology that can do this for us instead?’” Machanavajhala recalled. “So I did.”

It turned out her background-blurring feature was also useful for privacy, helping to hide messy offices during video conference calls or curious café customers during job interviews. The innovation was integrated into Microsoft Teams and Skype, and Machanavajhala soon found herself catapulted into the spotlight at Microsoft – as well as into the company’s work on inclusion, a joy to experience after a previous job where her deafness had made it hard to fully participate.

Microsoft software engineer Swetha Machanavajhala with her parents in front of the Taj Mahal in India. Photo by Swetha Machanavajhala.

Microsoft employees say those twists and turns of innovation – aiming for A and ending up with a much broader B – have become more common at Microsoft in the five years since Satya Nadella was appointed chief executive officer.

Nadella’s immediate push to embolden employees to be more creative has been exemplified by the company’s annual hackathon. Machanavajhala and others say the event has helped spark a revival where employees feel energized to innovate year-round and to seek support from their managers for their ideas – even if those have nothing to do with their day jobs.

“The company has changed culturally,” Michael A. Cusumano, a professor at the Massachusetts Institute of Technology’s Sloan School of Management who wrote a book about Microsoft 20 years ago, recently told The New York Times. “Microsoft is an exciting place to work again.”

Chris Kauffman, a marketing manager in product licensing who has worked for Microsoft for 13 years, said Nadella’s focus on fostering collaboration was a turning point for her, as she noticed silos being torn down. Kauffman also realized the advent of artificial intelligence (AI) could help business people like her broach the realm of engineers and IT specialists. She and her team capitalized on both of those developments to create a chatbot and virtual colleague, answering thousands of licensing questions from around the world and helping to handle the accelerated pace of Azure cloud computing service updates.

“I went to my first hackathon three years ago and fell back in love with Microsoft,” Kauffman said. “I realized that I now have permission to talk to anyone I want to. I’m no longer limited by my job function or level. And my experience with the chatbot is a great example of how technology can be democratized and used by everybody.”

That new openness has led to an explosion of new products and fine-tuned improvements across Microsoft, for customers as well as for internal use. Employees say the resurgence is showing up everywhere from product updates to internal events such as TechFest, an annual showcase of Microsoft research that takes place in a few weeks.

Go to Original Article
Author: Steve Clarke

IBM renews code challenge to stress-test open source projects

SAN FRANCISCO — IBM’s latest developer outreach seeks to rebuild lives with ones and zeros as it helps organizations prevent, manage and respond to natural disasters.

IBM and the Linux Foundation issued the Call for Code challenge in May 2018, a five-year, $30 million pledge to fund developer tools, technologies and training to help prevent and manage natural disasters. Their follow-up effort, the Code and Response initiative unveiled here at IBM Think 2019, aims to put those technologies into practice.

Code and Response is a $25 million, four-year program to help field-test and deploy some of the applications built for the Call for Code challenge. For instance, Project Organization, Whereabouts and Logistics (OWL), which won the 2018 Call for Code contest — Linux creator and superstar developer Linus Torvalds was one of the judges — will take its pop-up mesh network to Puerto Rico next month, a territory still recovering from the impact of natural disasters.

Project OWL  uses IoT devices at the network’s edge to connect and spread vital information in the event of a disaster. Incident management and predictive analytics based on a variety of data sources help first responders identify key problem areas. The system taps IBM Watson APIs, as well as Watson Studio and Weather Company APIs and runs on the IBM Cloud, said Bryan Knouse, lead of Project OWL’s New York-headquartered team. It also uses a physical network of “clusterducks” that float in flooded areas to create a mesh network where communications lines are down or nonexistent.

Project OWL’s DuckLinks make up a mesh network in disaster recovery situations.

“We’re going to do a full-scale test of the system in three areas — an urban, mountainous and coastal region — of Puerto Rico to see how it works at scale,” Knouse said.

IBM’s Corporate Service Corps and organizations such as the Clinton Global Initiative University will support the Code and Response effort with training and resources.

IBM also issued its next Call for Code competition, which will again focus on social impact code challenges and encourage developers to use cloud, AI, blockchain and IoT technologies to help mitigate the impacts of natural disasters.

Code challenges can be a model for companies to ensure that technological advances benefit humanity, but there’s a big hurdle to solve first, said Brandon Purcell, an analyst at Forrester Research.

“The most difficult task for the teams participating will be finding the right data,” Purcell said. “Well-structured, trustworthy data on natural disasters is probably hard to come by.”


Last year, more than 100,000 developers submitted ideas to the Call for Code challenge. Knouse’s Project OWL team took first prize and $200,000 in cash. Through IBM, it is also in discussions with venture capitalists about forming a company around the OWL team, Knouse said.

“I’d love to see one of these projects become the Linux of disaster recovery,” said Jim Zemlin, executive director of the Linux Foundation. “A small team can leverage all of the AI frameworks and open source technology and compute capacity that is now essentially freely available to go solve some pretty big problems.”

For IBM, this code competition tied to social issues helps spur interest and further adoption of complex, next-gen app development architectures, said Charlotte Dunlap, an analyst at GlobalData in Santa Cruz, Calif.

“By handing this fresh group of developers IBM tools and solutions, they’ll naturally apply their newfound knowledge and experience towards enterprise mission-critical apps going forward, leveraging IBM Cloud Platform and services,” she said.


Wanted – NVMe SSD Drive 256GB

Discussion in ‘Desktop Computer Classifieds‘ started by eugene2878, Feb 5, 2019.

  1. eugene2878

    Active Member

    Oct 16, 2005

    Hi. Looking for PCIe Gen3 NVMe drive.

    Location: Corby





Introduction to Microsoft Azure Resource Manager (ARM)

As more organizations move to the public cloud, it has become increasingly important to centralize and standardize cloud application deployment, management and security. Microsoft Azure accomplishes this through its unified operations portal, known as Azure Resource Manager, which can be accessed at https://portal.azure.com. This article explains exactly what Azure Resource Manager is, what it can do, and how you should be using it.

What is Azure Resource Manager?

Azure Resource Manager (ARM) supports modern cloud applications which are usually distributed and contain multi-tiered components, such as a frontend web server, a middle-tier application server, and a backend database server. Through the portal, these are still shown as separate entities but grouped as a connected service which can be managed as a single object. ARM is usually managed through the centralized GUI portal, but for customers with advanced needs, it also supports Azure PowerShell, Azure CLI, Azure REST APIs, and client SDKs. Let’s now look a bit deeper into Azure Resource Manager and its key components for management, templates, security, operations, monitoring, support, and troubleshooting.

Centralized Management with Azure Resource Manager (ARM)

When deploying a new application in Azure through ARM, the first step is to determine which Azure services you need. All of the core components of a cloud infrastructure are provided directly by Microsoft, such as virtual machines, networks, network interfaces, IP addresses and storage accounts. The Microsoft Azure Marketplace offers thousands of third-party applications and services, all of which have been certified and optimized to run on Azure. Once you have set up billing and subscriptions for the cloud services you will be using, you can use ARM to centrally administer them.

First, these components will be organized into a Resource Group, which is the logical management container for the related components of this distributed application.  ARM lets you see and manage everything for this workload’s lifecycle in a single operation, including deployment, updating and deleting. In the screenshot below, you can see a Resource Group which was created as a backup from a production MongoDB database in a different geographic region. This Resource Group includes a virtual machine, virtual network, storage account, public IP address, network interface and network security group.

Azure Resource Manager dashboard

ARM also gives organizations the ability to tag any resource so that it can quickly be discovered, along with its related components. Organizations can categorize their resources to make them easier to sort by resource group, type, location, development state, organizational department or cost center. Using the portal, it is then possible to see costs, events, alerts and other relevant information for the whole group at once.
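The grouping that tags enable is easy to picture in code. The sketch below is a hypothetical illustration — the resource records and tag names are invented, not Azure API output — of how tagged resources can be filtered by cost center:

```python
# Hypothetical resource records, shaped loosely like tagged Azure resources.
resources = [
    {"name": "mongo-vm",      "type": "virtualMachine", "tags": {"costCenter": "ops", "env": "prod"}},
    {"name": "mongo-storage", "type": "storageAccount", "tags": {"costCenter": "ops", "env": "prod"}},
    {"name": "dev-vm",        "type": "virtualMachine", "tags": {"costCenter": "rnd", "env": "dev"}},
]

def by_tag(items, key, value):
    """Return the resources whose tags contain key=value."""
    return [r for r in items if r.get("tags", {}).get(key) == value]

# All resources billed to the 'ops' cost center, regardless of resource type.
ops = by_tag(resources, "costCenter", "ops")
print([r["name"] for r in ops])  # ['mongo-vm', 'mongo-storage']
```

The same lookup, run against a different tag key such as `env`, yields the sorting by development state the article describes.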

Templates with Azure Resource Manager (ARM)

Each Azure resource (virtual machine, storage account, etc.) can be deployed by filling in parameters in a template, such as the name, location, availability zone, networks, security and more. These templates can be saved, then deployed and tested within a resource group, allowing the distributed application to be deployed repeatedly and consistently. The Resource Manager template is a JSON file which defines the resource group, its resources, their properties and any dependencies. This allows an identical copy of the application to be created easily so it can be deployed in testing, staging, production or in an additional geography to allow the service to scale out. The startup order and dependencies can also be defined so that the application comes online gracefully. All third-party Azure Marketplace solutions come with customizable templates which adhere to the ISV’s best practices to streamline deployment. ARM templates can be built using the Azure Portal, Visual Studio or Visual Studio Code. Make sure that you fully automate the deployment and remove any manual steps to eliminate dependencies on human configuration. In the following screenshot you can see the template for adding a new disk to an Azure Resource Group.
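To make the JSON structure concrete, the Python sketch below assembles a minimal template skeleton and serializes it. The resource shown is a bare storage account; the parameter name, location and SKU are illustrative choices, not taken from the deployment in the screenshot:

```python
import json

# Skeleton of a Resource Manager template: schema, version, parameters, resources.
template = {
    "$schema": "https://schema.management.azure.com/schemas/2019-04-01/deploymentTemplate.json#",
    "contentVersion": "1.0.0.0",
    "parameters": {
        "storageName": {"type": "string"}  # supplied per deployment
    },
    "resources": [
        {
            "type": "Microsoft.Storage/storageAccounts",
            "apiVersion": "2019-06-01",
            "name": "[parameters('storageName')]",
            "location": "eastus",
            "sku": {"name": "Standard_LRS"},
            "kind": "StorageV2",
        }
    ],
}

# The same template can be redeployed to testing, staging or production,
# varying only the parameter values.
print(json.dumps(template, indent=2))
```

Because only the parameters change between environments, the repeatable, consistent deployments the article describes fall out of the file format itself.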

Attach Unmanaged Disk ARM

Security with Azure Resource Manager (ARM)

Security is a critical component of every cloud service, and Azure Resource Manager provides a breadth of features to allow organizations to successfully manage these distributed applications using role-based access control (RBAC) or OAuth authentication. The challenge with large cloud services is that they often require multiple administrators with specialized skills to configure them, such as a cloud networking expert, a database administrator and an application owner. ARM provides granular access control, granting only specific users the ability to make changes on certain workloads. All actions are automatically logged, so there is an audit trail for every action, event and user. Critical resources can even be ‘locked’ so that they cannot be changed accidentally or deliberately, as shown in the screenshot below.

MongoBackup - Locks

Operations with Azure Resource Manager (ARM)

Azure Resource Manager also provides a suite of tools to automate standard operations for each Resource Group. These features provide the ability to automatically turn off an application, leverage Azure’s built-in backup and replication technologies, patch the services, manage the desired state configuration and track any changes. In the screenshot below, I am using ARM to configure disaster recovery of my Resource Group to a secondary site.

Configure disaster recovery

Monitoring with Azure Resource Manager (ARM)

The monitoring capabilities of ARM also provide a centralized view of the health of the cloud application. Through a single interface, each resource within the group can be analyzed for alerts, metrics, diagnostics, logs, connections and other best practices. In the following screenshot, some of the metrics of a virtual machine are displayed.

MongoBackup Metrics

Support & Troubleshooting with Azure Resource Manager (ARM)

While Microsoft has taken great strides toward making Azure resources easy to manage through ARM, some issues cannot be automatically repaired, and advanced troubleshooting may be required. ARM centralizes the troubleshooting tools so that if an issue occurs, it is relatively easy to start the initial diagnosis. This includes viewing the resource health and performance, viewing the diagnostics of the boot log, redeploying the service, troubleshooting the network connection, or escalating the issue by creating a ticket with Microsoft’s support organization. In the screenshot below, I can quickly view the health history of my resource group.

MongoBackup Resource Health


Azure Resource Manager is a great tool for centralized management, templates, security, operations, monitoring, support and troubleshooting. By combining all the key features of application lifecycle management into a single interface, Microsoft has made it easy for organizations, developers and IT professionals to make the transition to the public cloud. For more information about ARM, check out the official Azure Resource Manager Documentation from Microsoft.

What about you? Have you used ARM for cloud management yet? What have your experiences been? We’d love to hear! Let us know in the comments section below!

Thanks for reading!

Author: Symon Perriman

Microsoft’s Power Platform Aims To ‘Make Other People Cool’

A selection of PowerApps built by London Heathrow Airport, UK. Photo: Microsoft.

Microsoft has always had to straddle an arguably difficult position in the software trade. The company has always needed to appear technically intricate, granular and powerful in the eyes of hard-core software developers. At the same time, the company has always had to present its software to market with a user-friendly ‘anyone can use it’ out-of-the-box style and approach.

There’s a little of that duality in the firm’s latest power play, which is a combination pack of technologies wrapped up under the Microsoft Power Platform brand.

This is all about presenting a selection of heavyweight backend technologies to hard-core developers and data scientists, but also to so-called citizen developers, who are typically businesspeople with an interest in getting applications and data to work the way they want them to.

CEO Satya: be cool (to others)

Microsoft CEO Satya Nadella has tried to explain to his developer team that it’s not always about being the most amazing software engineer that creates the next big thing. Instead, it’s about creating amazing software power and putting that power in the hands of people who need it.

“You join here [Microsoft, the company itself], not to be cool, but to make others cool,” said Nadella, in a comment that has been widely reported internally and officially referenced on c|net.

What Nadella meant was: build something so amazing that it empowers other people. This, of course, is a platform play, not a product play i.e. he wants people to use Microsoft technologies to create something great, rather than use an existing Microsoft technology to be great per se. It’s a logical enough strategy i.e. software products come and go, but platforms are more foundational and expansive… and so (typically) form a better long term business bet.

Microsoft Power Platform

The component parts of the Microsoft Power Platform have all previously existed as more distinct entities. This is essentially a coming together of Microsoft Power BI, Microsoft PowerApps and Microsoft Flow as a more unified offering available on top of Microsoft Azure cloud services.

“Our Power Platform – spanning Power BI, PowerApps and Flow – enables anyone in an organization to start building an intelligent app or workflow where none exists. It is the only solution of its kind in the industry – bringing together no-code/low-code app development, robotic process automation and self-service analytics into a single, comprehensive platform. And it enables extensibility across Microsoft 365 and Dynamics 365 as well as the leading third-party SaaS business applications,” said Microsoft CEO Nadella, in a press statement.

So just looking at the component parts again and explaining their functions, we have Microsoft Power BI, Microsoft PowerApps and Microsoft Flow.

Microsoft Power BI is a self-service Business Intelligence (BI) app that connects to and analyzes business data and presents a graphical visualization of it on screen. It supports 43 languages, and the data it ingests can come from an Excel spreadsheet or SharePoint list, an Oracle database, or an SAP or Salesforce application. Nearly 10 petabytes of data are uploaded to the service each month, with more than 10 million report and dashboard queries executed against that data every hour.

Microsoft PowerApps forms the company’s citizen application development platform. Theoretically ‘anyone’ (says Microsoft) can use PowerApps to build web and mobile applications without writing code. There’s also a natural connection between Power BI and PowerApps so that users can put insights (from Power BI) in the hands of maintenance workers and others on the frontline in apps built using PowerApps.

Lastly, there is Flow. This is Microsoft’s user interface for working with Robotic Process Automation (RPA), a technology designed to help automate simple tasks (and reduce operational errors) through automated workflows.
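The trigger-and-action pattern behind workflows of this kind can be pictured with a small sketch. This is a conceptual illustration only — the function names and ticket records are invented, and this is not Flow’s actual API:

```python
# Conceptual sketch of a trigger/action workflow, the pattern Flow embodies.

def on_high_priority(ticket):
    """Trigger predicate: fire only for high-priority tickets."""
    return ticket["priority"] == "high"

def notify_team(ticket, log):
    log.append(f"notify: {ticket['id']}")

def create_task(ticket, log):
    log.append(f"task: {ticket['id']}")

def run_workflow(tickets, trigger, actions):
    """Apply each action, in order, to every ticket the trigger matches."""
    log = []
    for t in tickets:
        if trigger(t):
            for action in actions:
                action(t, log)
    return log

tickets = [{"id": "T1", "priority": "high"}, {"id": "T2", "priority": "low"}]
print(run_workflow(tickets, on_high_priority, [notify_team, create_task]))
# ['notify: T1', 'task: T1']
```

The point of an RPA tool is that a businessperson wires up the trigger and the actions through a visual interface instead of writing code like this by hand.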

Data flows, everywhere

Corporate vice president in Microsoft’s business applications group James Phillips explains that the team’s vision for Microsoft Power Platform started from the recognition that data is increasingly flowing from everything, and a belief that organizations that harness their data – to gain insights then used to drive intelligent business processes – will outperform those that don’t.

“We also recognize there aren’t enough programmers, data scientists and tech professionals to go around. So our goal was to build a platform not targeting these technology experts but for [ordinary] people – and the millions of other frontline workers who see opportunities every day to create something better than the status quo, but who’ve never been empowered to do anything about it,” wrote Phillips, in a lengthy Microsoft cloud blog.

Phillips and team say that the guiding vision for Microsoft Power Platform was a framework they called the ‘Triple-A Loop’ i.e. a closed-loop system allowing users to gain insights from data (Analyze) used to drive intelligent business processes via apps they build (Act) and processes they automate (Automate).

Why play platform games?

We might stand back and ask why Microsoft is so focused on its new and wider approach to platform games of this kind — and there are three fairly reasonable suggestions we can make here.

First, Microsoft has always done platforms i.e. Windows was and still is a platform and you run other things (apps, databases and other computing services) upon it.

Second, Microsoft has invested heavily in its own Azure cloud platform (which features as a key element of Microsoft Power Platform) and, over and above that, the firm has for a long time now been working to make large portions of its stack (such as Office as a platform, which we detailed here in 2015) big enough to be considered platforms in their own right.

Third, Microsoft (under CEO Nadella at least) appears to understand the power of platforms both inside the Microsoft universe and outside of it – be it Linux, Android or a major vendor’s data platform suite from the likes of SAP, Salesforce, Oracle and so on.

This is a world where data comes first — sometimes from databases, sometimes from AI computations, sometimes from the Internet of Things (IoT) and its devices and sometimes from actual users — even before the actual software applications that will feed on that data. That core fact very arguably makes any platform play strategically smarter for long term success… if perhaps not just a little cool too.


Author: Steve Clarke

Microsoft zero-day vulnerability closed on Patch Tuesday

Microsoft shut down the PrivExchange zero-day vulnerability that cropped up last month in addition to the usual fare for February Patch Tuesday.

The PrivExchange Microsoft zero-day vulnerability, publicly disclosed by security researcher Dirk-jan Mollema, allowed an attacker to exploit susceptible Exchange Server 2010 and newer systems to gain domain controller admin privileges. Microsoft initially responded with an advisory (ADV190007) and suggested administrators define a policy to prevent Exchange from sending Exchange Web Services notifications.

The root of the PrivExchange problem is that a standard installation of Exchange Server requires a lot of permissions in Active Directory, said Nathan O’Bryan, an enterprise architect at Insight and TechTarget contributor.

“Always applying more security makes managing your servers more difficult,” O’Bryan said. “Organizations have to keep up with, be aware of and make the right decision for them.”

February’s security updates delivered a fix, rated important, for the Microsoft zero-day vulnerability that the company assigned two CVE identifiers, CVE-2019-0686 and CVE-2019-0724.

Microsoft flagged the first vulnerability (CVE-2019-0686) as a public disclosure. An attacker attempting to exploit the elevation-of-privilege weakness would need to execute a man-in-the-middle attack to send an authentication request from the hacked inbox, and could then impersonate another Exchange Server user to access their mailbox. Applying the February security update to affected systems blocks these authentication notifications, stopping the bug.

Chris Goettl of Ivanti

CVE-2019-0724, which was not publicly disclosed, describes how an attacker could execute a man-in-the-middle exploit to send an authentication request to a domain controller to gain domain admin privileges. To fix this, Microsoft reduced the permissions given to Exchange servers and their administrators in Active Directory domains.

“We would escalate this [CVE-2019-0724] to priority one and assume it’s a high-risk exploit,” said Chris Goettl, director of product management at Ivanti, based in South Jordan, Utah.

Microsoft addresses another public disclosure and advisory

Among the 75 unique vulnerabilities closed by February Patch Tuesday, Microsoft addressed a public disclosure and suggested mitigations with an advisory.

Administrators should prioritize the publicly disclosed Windows information disclosure vulnerability (CVE-2019-0636), rated important, which affects all supported Windows systems. An attacker could exploit this bug by running a specially crafted application to gain unauthorized access to the file system. To address the vulnerability, the February security updates change how Windows discloses information.

Microsoft also released an advisory to diminish the chance of an Active Directory exploit (ADV190006). Active Directory forest trusts allow forests to share resources with identities from another forest. Researchers from SpecterOps found a vulnerability in a default setting when creating incoming trusts. Until Microsoft can address this bug in future security updates, the company recommends blocking “TGT delegation across an incoming trust by setting the netdom flag EnableTGTDelegation to No” using the instructions provided in Knowledge Base article 4490425.

Microsoft addressed a zero-day exploit in the Internet Explorer browser that is rated important for Windows client systems and low for Windows Server OSes (CVE-2019-0676). On unpatched systems, an attacker would need to get the victim to visit a malicious website to read file contents.

“Make sure the OS and IE are updated in your environments,” Goettl said. “Windows browsers and Office should also warrant some attention.”

CVEs on the rise, but admins shouldn’t worry

The number of vulnerabilities and patches has increased over the years — CVEs reported and resolved in 2018 were a record high — but this is not as alarming as it seems, Goettl said.

“What we have are more vendors that are taking a more disciplined role in properly identifying and resolving vulnerabilities and disclosing that information to the industry so people are aware of it,” he said.

Some vendors also have bug bounty programs that offer researchers a strong financial incentive to find more vulnerabilities. Goettl said Qualcomm’s Vulnerability Rewards Program, which has been around for two years, pays handsomely for potential security issues. To date, the company said it has paid more than $750,000 in bounties, with more than $200,000 going to one researcher. Since the program began in November 2016, Qualcomm said it has paid out for nearly 350 bounties.


Actifio GO turns Sky into SaaS backup

Actifio is going SaaS with its copy data management platform, allowing customers to back up and restore data without using on-premises infrastructure.

The idea behind the newly launched Actifio GO — based on Actifio Sky software — is to enable customers to quickly deploy the service and begin protecting data in multiple clouds without installing additional software or hardware.

The new software-as-a-service (SaaS) platform offers direct-to-cloud backup to AWS, Microsoft Azure, Google, IBM and Wasabi public clouds. The initial GO release supports VMware vSphere virtual machines (VMs).

Actifio CEO Ash Ashutosh said the copy data management pioneer will add support for more platforms — including hypervisors and databases — and other public clouds.

Ashutosh said he wants to give enterprises a data protection tool as easy to use as Salesforce is for customer relationship management.

“To get to Actifio Sky, I have to learn about it, evaluate it, try it out and deploy it,” Ashutosh said. “It’s a long process in this day and age when people are saying, ‘Give me an API, and let me get the outcome I want,’ similar to Salesforce.”

The subscription price for Actifio GO is on a per-VM basis.

He claimed customers could begin backing up data within an hour of their first visit to the Actifio GO website. Customers point the VMs they want to protect to any of the supported public clouds.

Ashutosh said Actifio is starting with VMware because “it is still the biggest workload that people are looking to take to the cloud.”

Customers will be charged on a per-VM basis.


A one-year subscription to Actifio GO costs $7 per VM, per month, for up to 499 VMs; $6.70 per VM for 500 to 999; and $6.30 per VM for 1,000 or more. Multiyear subscriptions reduce the price per VM.
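The tiering works out as a simple per-VM rate lookup. A quick sketch of the one-year subscription rates quoted above (the function name is ours, and multiyear discounts are not modeled):

```python
def monthly_cost(vm_count):
    """Monthly Actifio GO cost for a one-year subscription, per the published tiers."""
    if vm_count >= 1000:
        rate = 6.30   # 1,000 or more VMs
    elif vm_count >= 500:
        rate = 6.70   # 500 to 999 VMs
    else:
        rate = 7.00   # up to 499 VMs
    return vm_count * rate

for n in (100, 600, 1200):
    print(n, "VMs ->", f"${monthly_cost(n):,.2f}/month")
```

Note that the tiers apply a single rate to the whole fleet rather than marginal pricing, so crossing a tier boundary lowers the per-VM cost for every VM.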

Other backup vendors have been adding SaaS management tools, such as Rubrik Polaris and Cohesity Helios. And other backup software vendors — mostly in the SMB space — enable direct backups to the cloud. Christophe Bertrand, senior analyst at Enterprise Strategy Group in Milford, Mass., said Actifio GO is unusual, because it is aimed at the enterprise and does not require on-premises appliances.

Actifio claims more than 3,600 enterprise customers and caters to backup administrators, application owners and DevOps teams.

“To be competitive in this market, it is necessary to offer multiple ways of access to the technology, and one of those ways is as a service,” Bertrand said. “The ability to go direct to cloud in the enterprise space is unique and a big differentiator.”

The setup process for cloud backup in Actifio GO.

While taking backup entirely off premises has its benefits, Bertrand warned it’s not for everyone, even if it saves money. He said properly building out infrastructure is a balancing act between operational costs, metrics, compliance levels and “risk appetite.” Bertrand said he expects most enterprises will choose to go hybrid.

“From my perspective, it’s a hybrid world. No one would be 100% on or off premises, because sometimes, it’s good to have that fast recovery on premises,” Bertrand said. “The people who understand all their variables the best will be the most successful.”


For Sale – BENQ GW2765 IPS QHD Monitor 27″ Boxed

Discussion in ‘Desktop Computer Classifieds‘ started by tmurphy, Feb 6, 2019.

  1. tmurphy

    Active Member

    Mar 5, 2003


