Tag Archives: organizations

End-user security requires a shift in corporate culture

SAN FRANCISCO — An internal culture change can help organizations put end-user security on the front burner.

If an organization only addresses security once a problem arises, it’s already too late. But it’s common for companies, especially startups, to overlook security because it can get in the way of productivity. That’s why it’s important for IT departments to create a company culture where employees and decision-makers take security seriously when it comes to end-user data and devices.

“Security was definitely an afterthought,” said Keane Grivich, IT infrastructure manager at Shorenstein Realty Services in San Francisco, at last week’s BoxWorks conference. “Then we saw some of the high-profile [breaches] and our senior management fully got on board with making sure that our names didn’t appear in the newspaper.”

How to create a security-centric culture

Improving end-user security starts with extensive training on topics such as what data is safe to share and what a malicious website looks like. That forces users to take responsibility for their actions and understand the risks of certain behaviors.

Plus, if security is a priority, the IT security team will feel like a part of the company, not just an inconvenience standing in users’ way.

“Companies get the security teams they deserve,” said Cory Scott, chief information security officer at LinkedIn. “Are you the security troll in the back room or are you actually part of the business decisions and respected as a business-aligned person?”

Finger-pointing is a complete impediment to learning.
Brian Roddy, engineering executive, Cisco

When IT security professionals feel that the company values them, they are more likely to stick around as well. With the shortage of qualified security pros, retaining talent is key.

Keeping users involved in the security process helps, too. Instead of locking down a user’s PC when a suspicious file is accessed, for example, IT can send the user a message asking whether they performed the action. If the user confirms accessing the file, IT knows the account is not being impersonated; if not, IT knows there may be an intruder and must act.
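
As an illustration, here is a minimal sketch of such a confirmation workflow in Python. The send_message and lock_device helpers are hypothetical placeholders for whatever alerting and device management tools an IT team actually uses; this is not a specific product’s API.

# Hypothetical sketch: confirm a suspicious action with the user before locking the device.
def handle_suspicious_access(user, file_path, send_message, lock_device):
    """Ask the user to confirm an action instead of locking the PC outright."""
    reply = send_message(
        user,
        f"We noticed '{file_path}' was accessed from your account. Was this you? (yes/no)",
        timeout_seconds=900,  # give the user 15 minutes to respond
    )
    if reply == "yes":
        # The user confirmed the action, so the account is not being impersonated.
        print(f"Access confirmed by {user}; no action taken.")
    else:
        # No reply or a denial: treat the account as compromised and act.
        lock_device(user)
        print(f"Device locked for {user}; escalating to the security team.")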

To keep end-user security top of mind, it’s important to make things such as changing passwords easy for users. IT can make security easier for developers as well by setting up security frameworks that they can apply to applications they’re building.

It’s also advisable to take a blameless approach when possible.

“Finger-pointing is a complete impediment to learning,” said Brian Roddy, an engineering executive who oversees the cloud security business at Cisco, in a session. “The faster we can be learning, the better we can respond and the more competitive we can be.”

Don’t make it easy for attackers

Once the end-user security culture is in place, IT should take steps to shore up the simple things.

Unpatched software is one of the easiest ways for attackers to enter a company’s network, said Colin Black, COO at CrowdStrike, a cybersecurity technology company based in Sunnyvale, Calif.

IT can also make it harder for hackers by adding extra security layers such as two-factor authentication. 
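
For instance, time-based one-time passwords (TOTP) are a common second factor. A minimal sketch using the open source pyotp Python library (an illustrative choice, not something the article names) looks like this:

# Minimal TOTP two-factor sketch using pyotp (pip install pyotp).
import pyotp

# Generate a per-user secret once at enrollment and store it server-side.
secret = pyotp.random_base32()
totp = pyotp.TOTP(secret)

# The user's authenticator app derives the same rolling 6-digit code
# from this shared secret, typically provisioned via a QR code.
print("Provisioning URI:", totp.provisioning_uri(name="user@example.com", issuer_name="ExampleCorp"))

# At login, verify the code the user typed alongside their password.
code = input("Enter the 6-digit code: ")
print("Second factor OK" if totp.verify(code) else "Second factor rejected")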

The financial services industry banks on the Microsoft Cloud for digital transformation – The Official Microsoft Blog

Financial services organizations are at a transformational tipping point. Faced with fierce market pressures – nimble disruptors, complex regulations, digital native customers – technology transformation in the industry is essential, and it’s increasingly becoming a competitive edge. Firms leading the charge into this new era are transforming customer experiences, fostering a new culture of work, optimizing operations and driving product innovation, and they are using Microsoft cloud, AI and blockchain technologies in their journey to become digital leaders of the future. TD Bank, Sumitomo Mitsui and Crédito Agricola are among the organizations that have embraced this digital transformation happening in every part of the world.

Next week at Sibos, the premier financial services industry event, Microsoft CEO Satya Nadella and others from Microsoft, including me, will speak alongside other innovative business leaders driving transformation. Here are some of the scenarios and solutions Microsoft customers and partners are implementing along their journey:

Fostering a new culture of work

Fostering a digital culture is a paradigm shift, one on which financial institutions must embark. Jumpstarting this culture shift means designing a workplace where every work style can thrive – one that harnesses digital intelligence to improve experiences, unleashes creativity and enables mobility while keeping organizations, people and information secure. With modern tools, employees from the first line to the back office can do their best work.

State Bank of India, a 200-year-old institution, is working with Microsoft to create a modern workplace for its 263,000 employees across 23,423 branches servicing more than 500 million customer accounts, including some in the most remote locations of the country. SBI has adopted Office 365 to enhance communication and collaboration among its workforce, and believes that Microsoft is delivering the best productivity experience for its employees while ensuring the highest standards of security, compliance and adherence to regulatory requirements.

Emirates NBD is building more creative and collaborative workspaces for their employees, and has selected Office 365 as well as Surface Hub to deliver on their digital transformation efforts.

Transforming customer experiences

Customer expectations have changed and will continue to evolve at the speed of technology. The imperative for financial services firms is to engage customers in a way that is natural, tailored and delightful at every turn. The most innovative firms will create experiences that harness data from customers to derive actionable insights and deliver greater market success.

TD Bank is known for legendary customer service and aims to match their marquee in-person experience through digital wherever customers may be – at home, at work or on-the-go. With more than 12 million digitally active customers and expectations evolving at the speed of technology, TD has turned to Microsoft Azure and data platform services to help deliver on their promise of legendary service at every touchpoint.

Sumitomo Mitsui Banking Corporation is transforming their employee workstyles and enhancing their customer experience with the Microsoft Cloud. In addition to building a secure and highly productive office environment utilizing Azure and Office 365, SMBC has also introduced an interactive automatic response system for customer service, powered by the Microsoft Cognitive Toolkit and AI services.

Optimizing operations

Shifting line-of-business applications and other capabilities to the cloud is a foundational step along the digital transformation journey that enables banks to save big and reinvest dollars in more innovative, value-add banking services. It also enables firms to be more agile like their industry disruptor counterparts.

UBS is using Azure to power its risk management platform, technology that requires enormous computing power, to run millions of calculations daily on demand. The result — speeding calculation time by 100 percent, saving 40 percent in infrastructure costs, gaining nearly infinite scale within minutes — means the firm can have more working capital on hand and employees can make quicker, more informed decisions for their clients.

Société Générale is using Azure’s high-performance computing capabilities to power the credit value adjustment (CVA) intraday calculation, a critical front-office function, enabling cost-effective, almost limitless on-demand computing power.

Driving product innovation

Driving product and business model innovation allows organizations to harness data as a strategic asset, shifting from hindsight to foresight, automating manual processes, delivering personalization to customers, and innovating with new business models, services, products and experiences to differentiate and capture emerging opportunities.

Crédito Agricola considers open banking a future standard for the financial services industry. The bank’s digital strategy on open banking seeks to leverage the cloud to provide seamless and frictionless interactions that delight their customers, while complying with regulation. Crédito Agricola is using Microsoft Azure to deliver open banking capabilities like API management gateway, security and identity, as well as analytics and insights, with plans to extend their core banking services to third-party service providers.

Microsoft is collaborating with The Monetary Authority of Singapore and The Association of Banks in Singapore to support Project Ubin 2, using Azure to explore the use of blockchain for the clearing and settlement of payments and securities, a key milestone in Singapore’s ambition of becoming a Smart Financial Center.

For the financial services industry, a firm’s deliberate and strategic move to the cloud hinges on security, privacy and regulatory compliance. Microsoft is committed to earning the trust of our financial services customers. We engage with regulators in key markets to share information about our services, the related controls and our approach to enabling customer compliance. We also take input from leading banks and insurers across the globe on an ongoing basis to improve our offerings to help meet regulatory requirements. We do all of this to continue building our customers’ trust in cloud technology.

It’s incredible and humbling to be on this transformational journey with so many ambitious digital leaders. Come visit us at booth No. C46 if you’re in Toronto or follow our stories at our press site. You can also join my session at the Innotribe, “Creating Space for Innovation,” on Tuesday, Oct. 17, and see Microsoft CEO Satya Nadella provide the Sibos closing plenary on Thursday, Oct. 19. Satya’s plenary will also be livestreamed here.


Tags: financial services, Sibos

Vatican congress calls on public, private, faith sectors to combat child sexual abuse online – Microsoft on the Issues

On Oct. 6, a group of 150 experts from various disciplines and organizations around the world presented “The Declaration of Rome” to Pope Francis at the Vatican, pledging to protect children and young people in the digital age.

“In this era of the internet, the world faces unprecedented challenges if it is to preserve the rights and dignity of children and protect them from abuse and exploitation,” the declaration states. “These challenges require new thinking and approaches, heightened global awareness and inspired leadership.”

Screenshot excerpt from letter from Child Dignity in the Digital World

The declaration contains commitments for world leaders in government, religion and law enforcement, as well as technology companies and other private- and public-sector groups. It was produced at the World Congress on Child Dignity in the Digital World sponsored by the Centre for Child Protection at the Pontifical Gregorian University in Rome in partnership with the WePROTECT Global Alliance to End Child Sexual Exploitation Online and Telefono Azzurro, Italy’s first helpline for children at risk.

Call-to-action for technology companies
The declaration calls on technology companies to “commit to the development and implementation of tools and technologies to attack the proliferation of sex abuse images on the internet, and to interdict the redistribution of the images of identified child victims.”

In my role as an industry representative to the international advisory board of the WePROTECT Global Alliance, I had the great privilege of attending the World Congress and speaking about Microsoft’s long-standing commitment to protecting children online, and specifically our contributions to the fight against online child sexual exploitation and abuse.

I outlined Microsoft’s four-pronged strategy to protect children and our work to eliminate illegal child sexual abuse imagery from our platforms and services. As a technology company, we have a responsibility to create software, devices and services that have safety features built in from the outset. We devise and implement internal policies, standards and procedures that extend beyond legal requirements and put child safety at the center of our product development and operational efforts. Because no one entity or organization can successfully tackle these issues alone, we embrace a multi-stakeholder model, and partner with others in the tech industry, civil society and elsewhere to develop tools and techniques for identifying, reporting and removing illegal material on the open web and from our hosted consumer services. We also work to educate users about online risks and offer them tools and resources, so they can help protect themselves and their families.

As noted in my remarks, across our platforms and services, child sexual exploitation and abuse receives our utmost attention. As global citizens, we at Microsoft believe that eradicating the online creation and distribution of child sexual abuse material is a universal call to action. This sentiment was expressed by many throughout the congress, with all noting that significant progress will only be made if all groups work together to stamp out this heinous imagery.

Actions for other stakeholder groups
The World Congress brought together representatives from government, law enforcement, civil society, the technology industry, the public health sector, the Catholic church and other faith-based organizations, to learn from one another, share best practices and discuss a collective way forward to safeguard children from a range of potential online harms.

“While undoubtedly the internet creates numerous benefits and opportunities in terms of social inclusion and educational attainment, today, content that is extreme and dehumanizing is available literally at children’s fingertips,” the declaration states. “The proliferation of social media means insidious acts, such as cyberbullying, harassment and sextortion, are becoming commonplace.”

The declaration calls on other stakeholder groups to engage by, among other things:

  • Launching a global awareness campaign
  • Mobilizing members of every faith to join a global child-protection movement
  • Strengthening laws to better protect children and hold abusers accountable
  • Improving treatment programs for child victims, and
  • Enhancing training for medical professionals to more readily recognize indicators of sexual abuse.

Workshop discussions highlight other focus areas
In addition to the plenary sessions, I had the pleasure of serving as a workshop leader, organizing discussions among a small group of attendees from academia, government, civil society and the faith-based community. Our sessions were lively and engaging, and I thank the members of my working group for their participation and contributions.

We surfaced the need for common definitions and terminology, additional research, awareness-raising, education, culturally specific engagements and participation from even more stakeholder groups. At the end of the first day of the workshops, one participant concluded our session with this comment, “I love the internet age. The internet is a mirror on society; it’s facilitating and enabling (some bad acts and bad actors), but it’s helping to expose these issues, as well.”

Pope Francis expresses concern and commitment
In his address to the congress on the final day, Pope Francis accepted the Declaration of Rome and made clear his concern for the issues and the commitment of the church and its readiness to help. The pope expressed concern over the spread of extreme pornography online, sexting among young people, and online bullying, as well as sextortion, the solicitation of minors for sexual purposes, and the online trafficking of persons, prostitution and the commissioning and live-viewing of rape and violence against minors in certain parts of the world.

“As all of you know, in recent years the church has come to acknowledge her own failures in providing for the protection of children: extremely grave facts have come to light, for which we have to accept our responsibility before God, before the victims and before public opinion,” Pope Francis said. “For this very reason, as a result of these painful experiences and the skills gained in the process of conversion and purification, the church today feels especially bound to work strenuously and with foresight for the protection of minors and their dignity, not only within her own ranks, but in society as a whole and throughout the world.”

There was mention of convening a follow-up congress in perhaps two or three years’ time, once progress has been made and can be measured.

Learn more
The pope’s full address can be found here, while a list of speakers and other information about the congress is available at www.childdignity2017.org. To learn more about what Microsoft is doing to protect children online, see my remarks to the congress posted here, and visit our online safety website:  www.microsoft.com/saferonline.

Tags: Online Safety, YouthSpark

Big data systems up ante on data quality measures for users

NEW YORK — In the rush to capitalize on deployments of big data platforms, organizations shouldn’t neglect data quality measures needed to ensure the information used in analytics applications is clean and trustworthy, experienced IT managers said at the 2017 Strata Data Conference here last week.

Several speakers pointed to data quality as a big challenge in their big data environments — one that required new processes and tools to help their teams get a handle on quality issues, as both the volumes of data being fed into corporate data lakes and use of the info by data scientists and other analysts grow.

“The more of the data you produce is used, the more important it becomes, and the more important data quality becomes,” said Michelle Ufford, manager of core innovation for data engineering and analytics at Netflix Inc. “But it’s very, very difficult to do it well — and when you do it well, it takes a lot of time.”

Over the past 12 months, Ufford’s team worked to streamline the Los Gatos, Calif., company’s data quality measures as part of a broader effort to boost data engineering efficiency based on a “simplify and automate” mantra, she said during a Strata session.

A starting point for the data-quality-upgrade effort was “acknowledging that not all data sets are created equal,” she noted. In general, ones with high levels of usage get more data quality checks than lightly used ones do, according to Ufford, but trying to stay on top of that “puts a lot of cognitive overhead on data engineers.” In addition, it’s hard to spot problems just by looking at the metadata and data-profiling statistics that Netflix captures in an internal data catalog, she said.

Calling for help on data quality

To ease those burdens, Netflix developed a custom data quality tool, called Quinto, and a Python library, called Jumpstarter, which are used together to generate recommendations on quality coverage and to set automated rules for assessing data sets. When data engineers run Spark-based extract, transform and load (ETL) jobs to pull in data on use of the company’s streaming media service for analysis, transient object tables are created in separate partitions from the production tables, Ufford said. Calls are then made from the temporary tables to Quinto to do quality checks before the ETL process is completed.
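
Quinto and Jumpstarter are internal Netflix tools, but the pattern Ufford described, validating a transient staging table before it is promoted to production, can be sketched in a few lines of PySpark. The table names and thresholds below are hypothetical:

# Hypothetical sketch of the stage-then-validate ETL pattern described above.
from pyspark.sql import SparkSession
from pyspark.sql.functions import col

spark = SparkSession.builder.appName("etl-quality-check").getOrCreate()

staged = spark.table("staging.play_events")  # transient table written by the ETL job
prod = spark.table("prod.play_events")       # current production table

checks = {
    # Row counts should not drop sharply versus the production table.
    "row_count": staged.count() >= 0.9 * prod.count(),
    # Key columns must not contain nulls.
    "no_null_keys": staged.filter(col("account_id").isNull()).count() == 0,
}

failed = [name for name, passed in checks.items() if not passed]
if failed:
    raise RuntimeError(f"Quality checks failed; not promoting table: {failed}")
# Only after every check passes is the staging partition swapped into production.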

In the future, Netflix plans to expand the statistics it tracks when profiling data and implement more robust anomaly detection capabilities that can better pinpoint “what is problematic or wrong” in data sets, Ufford added. The ultimate goal, she said, is making sure data engineering isn’t a bottleneck for the analytics work done by Netflix’s BI and data science teams and its business units.

2017 Strata Data Conference in New York
Data quality in big data systems was among the topics discussed at the 2017 Strata Data Conference in New York.

Improving data consistency was one of the goals of a cloud-based data lake deployment at Financial Industry Regulatory Authority Inc., an organization in Washington, D.C., that creates and enforces rules for financial markets. Before the big data platform was set up, fragmented data sets in siloed systems made it hard for data scientists and analysts to do their jobs effectively, said John Hitchingham, director of performance engineering at the not-for-profit regulator, more commonly known as FINRA.

A homegrown data catalog, called herd, was “a real key piece for making this all work,” Hitchingham said in a presentation at the conference. FINRA collects metadata and data lineage info in the catalog; it also lists processing jobs and related data sets there, and it uses the catalog to track schemas and different versions of data in the big data architecture, which runs in the Amazon Web Services (AWS) cloud.

To help ensure the data is clean and consistent, Hitchingham’s team runs validation routines after it’s ingested into Amazon Simple Storage Service (S3) and registered in the catalog. The validated data is then written back to S3, completing a process that he said also reduces the amount of ETL processing required to normalize and enrich data sets before they’re made available for analysis.
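
herd is FINRA’s own catalog, but the ingest, validate and write-back loop Hitchingham described can be sketched with the AWS boto3 SDK. The bucket, keys and validation rule below are hypothetical:

# Hypothetical sketch: validate ingested data in S3 before publishing it.
import csv
import io

import boto3

s3 = boto3.client("s3")
BUCKET = "example-data-lake"

def validate_and_publish(raw_key, clean_key):
    """Read a raw CSV object from S3, keep rows that pass checks, write it back."""
    body = s3.get_object(Bucket=BUCKET, Key=raw_key)["Body"].read().decode("utf-8")
    rows = list(csv.DictReader(io.StringIO(body)))
    if not rows:
        return

    # Example check: required fields must be present and non-empty.
    clean = [r for r in rows if r.get("trade_id") and r.get("timestamp")]

    out = io.StringIO()
    writer = csv.DictWriter(out, fieldnames=rows[0].keys())
    writer.writeheader()
    writer.writerows(clean)
    s3.put_object(Bucket=BUCKET, Key=clean_key, Body=out.getvalue().encode("utf-8"))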

Data quality takes a business turn

Brendan Aldrich, CDO at Ivy Tech Community College

The analytics team at Ivy Tech Community College in Indianapolis also does validation checks as data is ingested into its AWS-based big data system — but only to make sure the data matches what’s in the source systems from which it’s coming. The bulk of the school’s data quality measures are now carried out by individual departments in their own systems, said Brendan Aldrich, Ivy Tech’s chief data officer.

“Data cleansing is a never-ending process,” Aldrich said in an interview before speaking at the conference. “Our goal was, rather than getting on that treadmill, why not engage users and get them involved in cleansing the data where it should be done, in the front-end systems?”

That process started taking shape when Ivy Tech, which operates 45 campuses and satellite locations across Indiana, deployed the cloud platform and Hitachi Vantara’s Pentaho BI software several years ago to give its business users self-service analytics capabilities. And it was cemented in July 2016 when the college hired a new president who mandated that business decisions be based on data, Aldrich said.

The central role data plays in decision-making gives departments a big incentive to ensure information is accurate before it goes into the analytics system, he added. As a result, data quality problems are being found and fixed more quickly now, according to Aldrich. “Even if you’re cleansing data centrally, you usually don’t find [an issue] until someone notices it and points it out,” he said. “In this case, we’re cleansing it faster than we were before.”

Marea: The future of subsea cables

Microsoft, Facebook and Telxius complete the highest-capacity subsea cable to cross the Atlantic

People and organizations rely on global networks every day to provide access to internet and cloud technology. Those systems enable tasks both simple and complex, from uploading photos and searching webpages to conducting banking transactions and managing air-travel logistics. Most people are aware of their daily dependency on the internet, but few understand the critical role played by the subsea networks spanning the planet in providing that connectivity.

Read more

Satya Nadella: The C In CEO Stands For Culture


The CEO is the curator of an organization’s culture. Anything is possible for a company when its culture is about listening, learning, and harnessing individual passions and talents to the company’s mission. Creating that kind of culture is my chief job as CEO.

Microsoft’s culture had been rigid. Each employee had to prove to everyone that he or she was the smartest person in the room. Accountability—delivering on time and hitting numbers—trumped everything. Meetings were formal. If a senior leader wanted to tap the energy and creativity of someone lower down in the organization, she or he needed to invite that person’s boss, and so on. Hierarchy and pecking order had taken control, and spontaneity and creativity had suffered.

The culture change I wanted was centered on exercising a growth mind-set every day in three distinct ways. First, at the core of our business must be the curiosity and desire to meet a customer’s unarticulated and unmet needs with great technology. This was not abstract: We all get to practice each day. When we talk to customers, we need to listen. We need to be insatiable in our desire to learn from the outside and bring that learning into Microsoft.

Second, we are at our best when we actively seek diversity and inclusion. If we are going to serve the planet as our mission states, we need to reflect the planet. The diversity of our workforce must continue to improve, and we need to include a wide range of opinions and perspectives in our thinking and decision making. In every meeting, don’t just listen—make it possible for others to speak so that everyone’s ideas come through. Inclusiveness will help us become open to learning about our own biases and changing our behaviors so we can tap into the collective power of everyone in the company. As a result, our ideas will be better, our products will be better, and our customers will be better served.

Finally, we are one company, one Microsoft—not a confederation of fiefdoms. Innovation and competition don’t respect our silos, so we have to learn to transcend those barriers. It’s our ability to work together that makes our dreams believable and, ultimately, achievable.

Taken together, these concepts embody the growth in culture I set out to inculcate at Microsoft. I talked about these ideas every chance I got, but the last thing I wanted was for employees to think of culture as “Satya’s thing.” I wanted them to see it as their thing. The key to the culture change was individual empowerment. We sometimes underestimate what we each can do to make things happen, and overestimate what others need to do for us. I became irritated once during an employee Q&A when someone asked me, “Why can’t I print a document from my mobile phone?” I politely told him, “Make it happen. You have full authority.”


Because I’ve made culture change at Microsoft such a high priority, people often ask how it’s going. My response is very Eastern: We’re making great progress, but we should never be done. This is a way of being. It’s about questioning ourselves each day.

I’m not exempt from having to ask myself these questions. Do a search for me and karma. It’s a fall day in Phoenix, Arizona, and I am attending the Grace Hopper Celebration of Women in Computing, the world’s largest gathering of women technologists. Diversity and inclusion is a bedrock strategy in building the culture we need and want, but I recognize that as a company and as an industry we’ve come up far too short. Which makes what I said that day in Phoenix all the more perplexing, not to mention embarrassing.

Near the end of my interview onstage, Dr. Maria Klawe—a computer scientist, president of Harvey Mudd College, and a former Microsoft board member—asked me what advice I had for women seeking a pay raise who are not comfortable asking. It’s a great question, because we know women leave the industry when they are not properly recognized and rewarded.

I only wish my answer had been great. I paused for a moment and remembered an early president at Microsoft who had told me once that human resource systems are long-term efficient but short-term inefficient. “It’s not really about asking for the raise but knowing and having faith that the system will actually give you the right raises as you go along,” I responded. “And that might be one of the additional superpowers that women who don’t ask for the raise have, because that’s good karma. It’ll come back. Long-term efficiency solves it.”

Dr. Klawe, whom I respect enormously, kindly pushed back. She used it as a teaching moment, directing her comments to the women in the audience but clearly giving me a lesson I won’t forget. She told the story of a time when she was asked how much pay would be sufficient, and she just said whatever is fair. By not advocating for herself, she didn’t get what was fair. She encouraged the audience to do their homework and to know what the proper salary is. Afterward, we hugged and left the stage to warm applause. But the damage was done.

The criticism, deserved and biting, came swiftly through waves of social media and international radio, TV, and newspaper coverage. My chief of staff smugly read me a tweet capturing the moment: “I hope Satya’s comms person is a woman and is asking for a raise right now.” I was frustrated, but I also was determined to use the incident to demonstrate what a growth mind-set looks like under pressure.


Hit Refresh: The Quest to Rediscover Microsoft’s Soul and Imagine a Better Future for Everyone

A few hours later I shot off an email to everyone in the company. I encouraged them to watch the video, and I was quick to point out that I had answered the question completely wrong. “When it comes to career advice on getting a raise when you think it’s deserved, Maria’s advice was the right advice.” A few days later, in my regular all-employee Q&A, I apologized, and explained that I had received this advice from my mentors and had followed it. But this advice underestimated exclusion and bias—conscious and unconscious. Any advice that advocates passivity in the face of bias is wrong.

Since my remarks at Grace Hopper, Microsoft has made the commitment to drive real change—linking executive compensation to diversity progress, investing in diversity programs, and sharing data publicly about pay equity for gender, racial, and ethnic minorities. In some ways, I’m glad I messed up in such a public forum because it helped me confront an unconscious bias I didn’t know I had, and it helped me find a new sense of empathy for the great women in my life and at my company.

I had gone to Phoenix to learn, and I certainly did.


Adapted from Hit Refresh by Satya Nadella. Copyright © 2017 by Satya Nadella. To be published September 26 by HarperBusiness, an imprint of HarperCollins Publishers. Reprinted by permission.

Enterprise compliance with PCI DSS is up, says Verizon

Verizon has some good news and some bad news about organizations’ compliance with PCI DSS.

In its 2017 “Payment Security Report,” Verizon analyzed the “compliance patterns and control failures” of organizations subject to PCI DSS. The report also pulled information from Verizon’s annual “Data Breach Investigations Report” and looked at the correlation between the findings of each.

The good news in the report is that more companies reached full compliance with PCI DSS in 2016 than in 2015.

“For the first time, more than half (55.4%) of companies we assessed were fully compliant at interim validation, compared to 48.4% in 2015,” Verizon wrote. “But that means that nearly half of stores, hotels, restaurants, practices and other businesses that take card payments are still failing to maintain compliance from year to year.”

While having more than half of organizations compliant is a positive trend, Verizon also noted that compliance doesn’t necessarily mean security, particularly because organizations tend to “lose focus” once they achieve compliance. The trick, according to the report, is not to focus purely on meeting the compliance requirements, but to “make sustainability and resilience part of their larger security program.”

The bad news is that those organizations not fully in compliance with PCI DSS are missing the mark by a wider margin than before. The companies that failed their compliance assessments in 2015 were missing 12.4% of the required controls, and in 2016, 13% of the controls were missing.

“Many of the security controls that weren’t in place cover fundamental security principles with broad applicability, and their absence could be material to the likelihood of suffering a data breach,” said Verizon.

However, the report said that this isn’t necessarily happening because companies aren’t putting effort into security, but one factor is that the controls they do implement are ineffective. This can be due to controls losing effectiveness over time or to controls that don’t adapt to other changes in the environment. Either way, the problem is significant.

“Over the past five years we’ve analyzed PCI DSS compliance, the proportion of companies achieving 100% has gone up almost fivefold,” Verizon said. “Despite this general improvement, the control gap of companies failing their interim assessment has actually grown worse. Looking at it requirement by requirement, five out of six of the worst performers are the same now as they were in 2012.”

In comparing the data in the “Payment Security Report” to the “Data Breach Investigation Report,” Verizon noticed another significant connection.

“Of all the payment card data breaches that Verizon has investigated between 2010 and 2016 — nearly 300 — not a single organization was fully PCI DSS compliant at the time of the breach.”

So, while compliance with PCI DSS may not guarantee the security of an organization, it likely decreases the odds of it being the victim of a data breach.

In other news:

  • The cyber-espionage group Turla has developed a new backdoor attack called WhiteBear. Kaspersky Lab APT Intelligence Reporting has been tracking these attacks, which use Gazer — the name given by Eset to the second-stage backdoor used in WhiteBear — since 2016. Turla, which is allegedly based in Russia, had been targeting computers at various embassies and diplomatic and foreign affairs organizations, but has recently turned its focus to defense-related organizations. “WhiteBear infections appear to be preceded by a condensed spear phishing dropper, lack Firefox extension installer payloads, and contain several new components signed with a new code signing digital certificate, unlike WhiteAtlas incidents and modules,” Kaspersky Lab wrote on its site SecureList. “The exact delivery vector for WhiteBear components is unknown to us, although we have very strong suspicion the group spear phished targets with malicious pdf files.” Turla was also behind the recent plot to use Britney Spears’ Instagram account to conceal and spread malware.
  • A firmware update is now available to patients with a radio frequency-enabled St. Jude Medical implantable pacemaker or defibrillator. The devices from St. Jude Medical — now Abbott’s — have a firmware flaw that enabled attackers to remotely access them, causing rapid battery depletion or dangerous pacing and shocks. The patch was issued in January 2017 after months of drama: a security research firm discovered the flaw and, after allegedly shorting the company’s shares, went public with it, while St. Jude Medical at first denied it existed. Now, the FDA has approved the patch and patients can start getting updates that don’t require surgery. However, the FDA does warn that the update has potential issues, including the reloading of an earlier firmware version, the loss of preprogrammed device settings, the loss of diagnostic data, and the complete loss of device functionality.
  • A group of security and technology companies banded together to take down the WireX Android DDoS botnet this month. Researchers from Google, Cloudflare, Flashpoint, Akamai, Oracle, RiskIQ and Team Cymru shut down the WireX botnet that at its peak may have infected hundreds of thousands of Android devices with malicious apps. The apps were sending a huge number of requests to websites through HTTPS, which would deplete the resources of the servers the websites were hosted on. Google found the malware that infected the Android devices in its Play Store and removed the hundreds of infected applications. WireX may have been active as early as Aug. 2, but the attacks on the websites on Aug. 15 are what prompted the companies to work together. A blog post explaining the botnet and the efforts to take it down said that, though the attacks leveraged user apps, the users seemed to not be affected. “The applications that housed these attack functions, while malicious, appeared to be benign to the users who had installed them,” Akamai wrote. The post also praises the collaboration efforts, saying, “These discoveries were only possible due to open collaboration between DDoS targets, DDoS mitigation companies and intelligence firms. Every player had a different piece of the puzzle; without contributions from everyone, this botnet would have remained a mystery.”

VDI hurdles push IT toward virtual, published apps

LAS VEGAS — Application virtualization is gaining steam as more organizations shy away from delivering full desktops to users.

The cost and complexity of desktop virtualization, plus the growing variety of devices that users bring to work, have driven IT professionals to reconsider their application delivery strategies. Two new offerings aim to help them take a more application-centric approach.

“People aren’t going to continue to do full-blown VDI,” said Brad Tompkins, CEO of the VMware User Group. “It’s just gotten too clunky.”

Here at VMworld, VMware launched Workspace One App Express, a cloud service for published apps. And Dell EMC added application virtualization support to its VDI Complete bundle.

“It’s really easy to provide a basic desktop and then deliver apps based on users’ entitlements,” said Mark Ellersick, client technology analyst at Western Carolina University in Cullowhee, N.C.

But application virtualization presents some challenges as well, such as getting multiple apps to work well together — especially when they live on different hosts, Ellersick said.

Published apps head to the cloud

Workspace One App Express results from a partnership with Frame, a cloud workspace provider. IT pros do not need to create a separate infrastructure as they typically would; they can instead publish applications directly through a Frame-provided Workspace One portal in the cloud. Users then access the published apps from a web browser.

Apps are far less expensive to deploy.
Jeff McNaught, Dell

Advancements in storage, networking and compute technology have made it easier to publish applications using traditional methods, but it still requires a significant level of IT skill. The cloud will lower the barriers to application publishing and allow organizations to deploy more quickly, Frame CTO Ruben Spruijt said.

“It takes a while on-premises before you are able to publish apps,” Spruijt said.

And a benefit of publishing applications to browsers is that IT does not have to manage domains, as would be the case when publishing directly to users’ PCs, said Courtney Burry, a VMware senior director of product marketing.

Application virtualization cost savings

VDI Complete, which includes VMware Horizon View, hyper-converged infrastructure and thin clients, now offers application virtualization at a starting price that works out to $7 per user per month. The desktop virtualization version starts at $11.

The cost of VDI is driving more organizations to consider application virtualization, said Jeff McNaught, chief strategy officer for Dell client computing.

“Apps are far less expensive to deploy,” he said.

At the University of Arkansas, which runs VMware Horizon on Dell EMC infrastructure and thin clients in its student computer labs, IT leaders plan to take advantage of these cost benefits. They are launching a Workspace One pilot program for delivering applications to staff and hope to eventually do the same for all 27,000 students.

The cost to deliver apps to students on 27,000 virtual desktops would be prohibitive, but application virtualization makes it feasible, said Jon Kelley, associate director of enterprise systems at the school in Fayetteville, Ark.

Removing vGPU roadblocks

Horizon View administrators with graphics-intensive workloads also received some welcome enhancements. VMware’s vMotion technology, which migrates virtual resources without downtime, is coming to virtual GPUs (vGPUs). And Horizon View shops that use Nvidia Grid to allocate vGPUs will be able to manage that process through VMware vRealize Operations Manager instead of relying on a separate console.

At Western Carolina, which delivers applications through a mix of VDI, VMware App Volumes and Microsoft Remote Desktop Session Host, vMotion will help IT reallocate vGPUs among virtual desktops without having to shut them down, Ellersick said.

“That’s a roadblock for us currently,” he said. “Right now, we have to kill everything.”

Announcing the public preview of Azure Archive Blob Storage and Blob-Level Tiering

From startups to large organizations, our customers in every industry have experienced exponential growth of their data. A significant amount of this data is rarely accessed but must be stored for a long period of time to meet business continuity and compliance requirements. Examples include employee data, medical records, customer information, financial records, backups, etc. Additionally, recent and coming advances in artificial intelligence and data analytics are unlocking value from data that might have previously been discarded. Customers want to keep more of these data sets for a longer period but need a scalable and cost-effective solution to do so.

Last year, we launched Cool Blob Storage to help customers reduce storage costs by tiering their infrequently accessed data to the Cool tier. Today we’re announcing the public preview of Archive Blob Storage designed to help organizations reduce their storage costs even further by storing rarely accessed data in our lowest-priced tier yet. Furthermore, we’re excited to introduce the public preview of Blob-Level Tiering enabling you to optimize storage costs by easily managing the lifecycle of your data across these tiers at the object level.

The CEO of HubStor, a leading enterprise backup and archiving company, stated: “We are jumping for joy to see the amazing design Microsoft successfully implemented. Azure Archive Blob Storage is indeed an excellent example of Microsoft leapfrogging the competition.”

Azure Archive Blob Storage

Azure Archive Blob storage is designed to provide organizations with a low cost means of delivering durable, highly available, secure cloud storage for rarely accessed data with flexible latency requirements (on the order of hours). See Azure Blob Storage: Hot, cool, and archive tiers to learn more.

The Archive tier, in addition to Hot and Cool access tiers, is now available in Blob Storage accounts. Archive Storage characteristics include:

  • Cost-effectiveness: Archive access tier is our lowest priced storage offering. Customers with long-term storage which is rarely accessed can take advantage of this. For more details on regional preview pricing, see Azure Storage Pricing.
  • Seamless Integration: Customers use the same familiar operations on blobs in the Archive tier as on blobs in the Hot and Cool access tiers. This will enable customers to easily integrate the new access tier into their applications.
  • Availability: The Archive access tier will provide the same 99% availability SLA (at General Availability (GA)) offered by the Cool access tier.
  • Durability: All access tiers including Archive are designed to offer the same high durability that you have come to expect from Azure Storage with the same data replication options available today.
  • Security: All data in the Archive access tier is automatically encrypted at rest.

Blob-Level Tiering:  easily optimize storage costs without moving your data

To simplify data lifecycle management, we now allow customers to tier their data at the blob level.  Customers can easily change the access tier of a blob among the Hot, Cool, or Archive tiers as usage patterns change, without having to move data between accounts. Blobs in all three access tiers can co-exist within the same account.
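
As a rough illustration, here is how a blob’s tier can be changed in place with the azure-storage-blob Python SDK (v12, which postdates this preview; the connection string, container and blob names are placeholders):

# Sketch: move a single blob among Hot, Cool and Archive tiers in place.
from azure.storage.blob import BlobServiceClient

service = BlobServiceClient.from_connection_string("<storage-connection-string>")
blob = service.get_blob_client(container="backups", blob="2017/records.bak")

# Archive the blob; the data stays in the same account and container.
blob.set_standard_blob_tier("Archive")

# Later, rehydrate it by requesting Hot or Cool (rehydration can take hours).
blob.set_standard_blob_tier("Cool")
print(blob.get_blob_properties().blob_tier)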

Flexible management

Archive Storage and Blob-level Tiering will be available on all Blob Storage accounts. For customers with large volumes of data in General Purpose accounts, we will allow upgrading your account to get access to Cool, Archive, and Blob-level Tiering at GA.

Initially, users can access the feature through the .NET (see Figure 1), Python (preview) and Node.js client libraries, or through the REST APIs. Support for the Java client library and the portal (see Figure 2) will roll out over the next week. Other SDKs and tools will be supported in the next few months.


Figure 1: Set blob access tier using .NET client library


Figure 2: Set blob access tier in portal

Pricing

Azure Archive Blob Storage is offered at reduced pricing during the preview. Please refer to the Azure Blob Storage Pricing page for more details.

How to get started

To enroll in the public preview, you will need to submit a request to register this feature for your subscription. After your request is approved (within 1-2 days), any new LRS Blob Storage account you create in US East 2 will have the Archive access tier enabled, and all new accounts in all public regions will have blob-level tiering enabled. During the preview, only LRS accounts will be supported, but we plan to extend support to GRS and RA-GRS accounts (new and existing) at GA. Blob-level tiering will not be supported for any blob with snapshots. As with most previews, this should not be used for production workloads until the feature reaches GA.

To submit a request, run the following PowerShell or CLI commands.

PowerShell

Register-AzureRmProviderFeature -FeatureName AllowArchive -ProviderNamespace Microsoft.Storage

This will return the following response:

FeatureName         ProviderName      RegistrationState 
-----------         ------------      ----------------- 
AllowArchive        Microsoft.Storage   Pending 

It may take 1-2 days to receive approval.  To verify successful registration approval, run the following command:

Get-AzureRmProviderFeature -FeatureName AllowArchive -ProviderNamespace  Microsoft.Storage

If the feature was approved and properly registered, you should receive the following output:

FeatureName         ProviderName      RegistrationState 
-----------         ------------      ----------------- 
AllowArchive        Microsoft.Storage   Registered  

CLI 2.0

az feature register --namespace Microsoft.Storage --name AllowArchive

This will return the following response:

{
  "id": "/subscriptions/xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx/providers/Microsoft.Features/providers/Microsoft.Storage/features/AllowArchive",
  "name": "Microsoft.Storage/AllowArchive",
  "properties": {
    "state": "Pending"
  },
  "type": "Microsoft.Features/providers/features"
}

It may take 1-2 days to receive approval.  To verify successful registration approval, run the following command:

az feature show --namespace Microsoft.Storage --name AllowArchive

If the feature was approved and properly registered, you should receive the following output:

{
  "id": "/subscriptions/xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx/providers/Microsoft.Features/providers/Microsoft.Storage/features/AllowArchive",
  "name": "Microsoft.Storage/AllowArchive",
  "properties": {
    "state": "Registered"
  },
  "type": "Microsoft.Features/providers/features"
}

Get it, use it, and tell us about it

We’re confident that Azure Archive Blob Storage will provide another critical element for optimizing your organization’s cloud data storage strategy. As this is a preview, we look forward to hearing your feedback on these features, which you can send by email to us at archivefeedback@microsoft.com.

Microsoft stitches up Windows Server 2003 on busy June Patch Tuesday

Organizations that still use Windows Server 2003 got a surprise on June Patch Tuesday, with a Microsoft security update for the unsupported server operating system.

A month after the company issued patches for legacy systems to ward off the WannaCry ransomware attacks that affected thousands of computers, Microsoft released a free patch for Windows Server 2003, which has been unsupported since 2015. Microsoft addressed the exploit used in the WannaCry attacks in its March Patch Tuesday, but that only applied to supported Windows systems. The company later issued updates to protect unsupported Windows XP, Windows 8 and Windows Server 2003 operating systems.

This most recent course reversal — which also applies to other unsupported systems, such as Windows XP — comes alongside June Patch Tuesday updates that addressed an eye-opening 94 vulnerabilities.

“In reviewing the updates for this month, some vulnerabilities were identified that pose elevated risk of cyberattacks by government organizations, sometimes referred to as nation-state actors or other copycat organizations,” Adrienne Hall, general manager of Microsoft’s Cyber Defense Operations Center, wrote in a blog post. Hall indicated Microsoft chose to issue these additional security updates to protect unsupported systems from threats that may be similar to WannaCry.

Microsoft encourages businesses to migrate from legacy systems, such as Windows Server 2003, through end-of-life support deadlines. By releasing a security update for an unsupported product, Microsoft risks setting a precedent that businesses can stay with legacy products and still receive critical security updates.

In a separate blog post, Eric Doerr, general manager of the Microsoft Security Response Center, cautioned that this “should not be viewed as a departure from our standard servicing policies,” and businesses will be best-served by staying on Microsoft’s roadmap with supported Windows systems.

“It’s sort of a double-edged sword,” said Amol Sarwate, director of vulnerability labs for Qualys Inc., based in Redwood City, Calif. “For things like WannaCry, when the exploitation is so high and everyone and anyone is affected, Microsoft did the right thing by releasing patches for an end-of-life operating system.”

At the same time, “if they do this more often, people will start thinking the patches will be there, and that takes them away from the goal of moving away from the old operating systems,” he said.

Patch for in-the-wild vulnerability

Of the 94 vulnerabilities Microsoft identified for June Patch Tuesday, 27 are remote code execution (RCE) vulnerabilities that could allow an attacker to take control of a machine.

Sarwate said the top priority for Windows Server administrators should be CVE-2017-8543, which affects Windows Server 2008 and above, and is currently exploited in the wild. On an unpatched system, attackers can send a specially crafted Server Message Block request to the Windows Search service to gain control of a computer.

Administrators should also give prompt attention to CVE-2017-8507, an RCE vulnerability in Microsoft Outlook that an attacker could use to gain control of a system when a user views an email message, Sarwate said.

For more information about the remaining security vulnerabilities released on June Patch Tuesday, visit Microsoft’s Security Update Guide.

Dan Cagen is the associate site editor for SearchWindowsServer.com. Write to him at dcagen@techtarget.com.

Next Steps

How to adapt to Microsoft’s patching changes

New patching process may mean less control

Security Update Guide brings growing pains


