
Microsoft seizes malicious domains used in COVID-19 phishing

Microsoft has seized control of several malicious domains that were used in COVID-19-themed phishing attacks against its customers in 62 countries around the world.

Last month, the technology giant filed a complaint with the U.S. District Court for the Eastern District of Virginia in order to stop cybercriminals from “exploiting the pandemic by attempting to obtain personal access and confidential information of its customers.” The court documents were unsealed on Tuesday as Microsoft secured control of the domains, which were used in a variety of phishing and business email compromise (BEC) attacks.

In a blog post Tuesday, Tom Burt, corporate vice president of customer security and trust at Microsoft, wrote that the “civil case resulted in a court order allowing Microsoft to seize control of key domains in the criminals’ infrastructure so that it can no longer be used to execute cyberattacks.”

Microsoft’s Digital Crimes Unit first observed a new phishing scheme in December of 2019, which was designed to compromise customers’ Office 365 accounts. While efforts to block the sophisticated scheme were successful, Microsoft recently observed renewed attempts by the same threat actors, this time with a COVID-19 lure.

“Specifically, defendants in this action are part of an online criminal network whose tactics evolved to take advantage of global current events by deploying COVID-19 themed phishing campaign targeting Microsoft customers around the world. This sophisticated phishing campaign is designed to compromise thousands of Microsoft customer accounts and gain access to customer email, contact lists, sensitive documents and other personal information,” Microsoft wrote in the complaint.

Microsoft seized six primary domains, five of which were revealed to have the name “Office” in them; the sixth domain was mailitdaemon[.]com, which is used to receive forwarded mail from compromised Office 365 accounts.
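The seized names illustrate a common lure pattern: embedding a trusted brand such as "Office" in an unrelated domain. As a rough illustration of that pattern (this is not Microsoft's detection method, and the domain names below are hypothetical), a simple keyword heuristic can surface candidates for review:

```python
# A rough keyword heuristic (not Microsoft's method) for surfacing lookalike
# domains that embed a trusted brand name. All domains here are hypothetical.
BRAND_KEYWORDS = {"office", "o365", "outlook"}
OFFICIAL_DOMAINS = {"office.com", "office365.com", "outlook.com"}

def is_suspicious(domain):
    """Flag a domain that mentions a brand keyword but is not an official domain."""
    domain = domain.lower().rstrip(".")
    if domain in OFFICIAL_DOMAINS:
        return False
    return any(keyword in domain for keyword in BRAND_KEYWORDS)

print(is_suspicious("office365-login-update.example"))  # True
print(is_suspicious("office.com"))                      # False
```

A real triage pipeline would also weigh registration age, certificate data and string-distance checks; this only shows the shape of the idea.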

Burt wrote in the blog post that BEC threats have “increased in complexity, sophistication and frequency in recent years.” As BEC rises, threat actors have become equipped with new tactics that take impersonation to the next level. “These phishing emails are designed to look like they come from an employer or trusted source,” Microsoft wrote in the complaint.

In these coronavirus phishing emails, threat actors included messages with a COVID-19 theme to lure in victims, playing on the fear and uncertainty caused by the pandemic. For example, threat actors do this by “using terms such as ‘COVID-19 bonus,'” Burt wrote.

According to the FBI, BEC scams alone accounted for half of all cybercrime losses in 2019. Some experts say BEC attacks have led to as many cyber insurance payouts as ransomware, and in some cases more.

Microsoft isn’t alone in seizing coronavirus-related malicious domains. In April, the Department of Justice announced the disruption of hundreds of online COVID-19 related scams, through public and private sector cooperative efforts.

“As of April 21, 2020, the FBI’s Internet Crime Complaint Center has received and reviewed more than 3,600 complaints related to COVID-19 scams, many of which operated from websites that advertised fake vaccines and cures, operated fraudulent charity drives, delivered malware or hosted various other types of scams,” the DOJ wrote in the announcement.

Like many security vendors, Microsoft said it has observed cybercriminals adapting their lures this year to take advantage of current events such as COVID-19. The company recommended several steps to prevent credential theft, including implementing two-factor authentication on all business and personal accounts.
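Two-factor authentication blunts credential phishing because a stolen password alone no longer grants access. For readers curious what the second factor involves, here is a minimal sketch of time-based one-time password (TOTP) generation per RFC 6238, using only the Python standard library; real deployments should rely on an established authenticator app or library rather than hand-rolled code.

```python
import hashlib
import hmac
import struct
import time

def totp(secret, for_time=None, digits=6, step=30):
    """Generate an RFC 6238 time-based one-time password (HMAC-SHA-1 variant)."""
    counter = int((time.time() if for_time is None else for_time) // step)
    msg = struct.pack(">Q", counter)                     # 8-byte big-endian counter
    digest = hmac.new(secret, msg, hashlib.sha1).digest()
    offset = digest[-1] & 0x0F                           # dynamic truncation (RFC 4226)
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

# RFC 6238 Appendix B test vector: the 8-digit code for T=59 is "94287082".
print(totp(b"12345678901234567890", for_time=59, digits=8))
```

The server and the authenticator share the secret and both derive the code from the current 30-second window, so a phished code expires almost immediately.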

“While the lures may have changed, the underlying threats remain, evolve and grow,” Burt wrote.

Go to Original Article

IBM stretches its Elastic Storage line to speed AI, big data

IBM is expected to unveil this week several new and updated storage offerings to help large businesses build infrastructure that supports AI-optimized software to analyze and organize data.

The centerpiece is the new IBM Elastic Storage System (ESS) 5000, a data lake system capable of delivering up to 55 GBps of throughput from a single eight-disk enclosure node. IBM said the ESS 5000 handles configurations of up to 8 yottabytes. The new IBM storage system is particularly suited for data collection and longer-term storage, according to IBM product documents viewed by SearchStorage.
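To put the 55 GBps figure in perspective, a back-of-the-envelope calculation (decimal units assumed; sustained real-world throughput will vary with workload and configuration) shows how quickly data moves at that peak rate:

```python
# Back-of-the-envelope: time to move 1 PB at the quoted 55 GBps peak rate.
# Decimal units assumed; sustained throughput varies with workload and config.
throughput_gb_per_s = 55
petabyte_in_gb = 1_000_000
hours = petabyte_in_gb / throughput_gb_per_s / 3600
print(f"~{hours:.1f} hours to move 1 PB")  # ~5.1 hours
```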

The forthcoming products underscore the growing use of object storage as a target for AI and high-density analytics. IBM also built an enhanced version of its IBM Elastic Storage System 3000 that allows access and data movement between IBM Spectrum Scale and object storage, both on premises and in the cloud. The disk-based system adds a Data Acceleration for AI feature to IBM Spectrum Scale software. IBM claims the feature lowers costs by eliminating the need for an extra cloud copy of the data. It moves data using automatic or controlled acceleration of lower-cost object storage.

IBM’s AI and analytics offerings form a cornerstone of its overall corporate strategy, said Steve McDowell, a senior analyst at Moor Insights & Strategy, based in Austin, Texas.

“The ESS 5000 is a product that is only designed to solve big data problems for big data customers,” McDowell said. “There are only a handful of IT shops in the world today who need the combination of 55 GBps performance that is scalable to yottabyte capacities. Those that do need it are almost all IBM customers.”

IBM Elastic Storage: Mainframe to cloud

IBM ESS 5000 will compete with the Dell EMC Isilon A2000 and NetApp FAS 6000 big data storage systems.

The underpinnings of the 2U ESS 3000 building block stem from IBM’s long expertise in mainframes and traditional high-end storage, McDowell said. ESS 3000 systems are based on the IBM FlashSystem NVMe flash platform.


“The ESS 3000 building block addresses the kind of performance required for enterprise AI workloads, and the data lakes that emerge around those workloads, for organizations building out those capabilities,” McDowell said.

IBM also upgraded its Cloud Object Storage system, adding support for shingled magnetic recording hard disk drives and expanding its capacity to 1.9 petabytes (PB) in a 4U enclosure.

Aside from the storage hardware, McDowell said Spectrum Scale enhancements include some compelling features, including the new Data Acceleration for AI to help balance data between storage tiers.

“One of the biggest challenges of hybrid cloud is keeping data where you need it, when you need it. It’s also a costly challenge, as the egress charges encountered when moving data out of public cloud can become very expensive,” McDowell said.

Greater flexibility to move data across different storage tiers should appeal to corporate IT shops that need to keep sensitive data on premises or perhaps in a hybrid cloud.

“No one wants to leave all their sensitive or strategic data in the cloud,” said one analyst familiar with the company’s plans. “If you are coming up with the next vaccine for the coronavirus that could end up being worth $3 billion, you are not going to put that up in anyone’s public cloud. Especially massive data sets that can be hard to manage across multiple environments.”

IBM has supported data movement to other vendors’ storage for years, and recently added support for Dell EMC PowerScale and NetApp filers to its IBM Spectrum Discover metadata management software.

The AI software IBM has added makes it easier to locate and manage data spread across multiple vendors’ clouds, and makes a difference in the way large enterprises build object storage and discover information, one analyst said.

IBM also upgraded its Spectrum Discover Policy Engine to optimize data migration to less expensive archive tiers.

IBM enhances Red Hat storage

Along with the IBM Elastic Storage hardware, IBM also debuted the Storage Suite for IBM Cloud Paks, which combines open source Red Hat storage with IBM Spectrum Storage software.

Red Hat is a key part of IBM’s cloud strategy. IBM acquired Red Hat in a $34 billion deal in 2019, vowing to run it as an independent engineering arm.

Offerings in the new bundle include Red Hat Ceph, OpenShift Container Platform, IBM Spectrum Virtualize, IBM Spectrum Scale, IBM Cloud Object Storage and IBM Spectrum Discover.

IBM claims Spectrum Discover can search billions of files or objects in 0.5 seconds and automatically deploy that data on Red Hat OpenShift. The product is intended to improve users’ insights into data to eliminate rescanning. A storage data catalog can be integrated with the IBM Cloud Pak for Data with one click.

Some of the AI-driven capabilities built into the new or enhanced offerings ease installation and maintenance. Integration with existing infrastructure will be a factor in convincing users to adopt the products in what figures to be challenging economic times this year and likely into next.

“Adoptability will be key with this,” said another analyst familiar with the company’s plans. “But the Fortune 50 to Fortune 100 companies are watching pennies these days and could be reluctant to spend money until they have a better idea of what the returns are going to be. With this virus, no one knows what they will need, or need over the long term.”


Global smartphone sales took sizable hit in Q1

Industry observers said a recent decline in global smartphone sales was caused by several factors, among them a mature enterprise mobility market and pandemic-related economic uncertainty.

Gartner reported this week that first-quarter 2020 global smartphone sales had dropped by 20.2% compared to the first quarter of 2019.

Anshul Gupta, senior research analyst at Gartner, said this was the worst-ever decline for the market. He attributed the decline to both supply chain disruptions and weaker demand — a function, he said, of shelter-in-place restrictions and COVID-19-related economic uncertainty.


“There has been a drop in business spending on smartphones, though the drop in demand from business users was not as much as from customers,” he said.

Per Gartner, the top three phone manufacturers all saw a decline in the first quarter of 2020. Samsung, Huawei and Apple saw drops of 22.7%, 27.3% and 8.2%, respectively.

A mature market

Analysts' opinions as to what caused the global smartphone sales decline varied, although they agreed that enterprise mobility challenges remain.


Holger Mueller, vice president and principal analyst at Constellation Research, said companies have slowed the rate at which they are purchasing new phone hardware. He cited a mature market as the primary driver of that trend.

“Growth can only come from the consumer side,” he said. “Workers who need a smartphone have one.”

Enterprise investment in mobility has moved away from smartphones and tablets and toward security, he said, pointing to the uptick in interest in device management software.


Ray Wang, founder and principal analyst at Constellation Research, said the variety of mobile devices has grown and enterprises are beginning to branch out.

“We expect [companies] to go with devices other than mobile phones,” he said. “We see an increase of MiFi [portable broadband] cards [and] IoT devices … for mobility, which is cutting into the mobile market.”


Independent analyst Eric Klein said some of the most notable new features included in high-end phones — like cameras with ever-higher numbers of megapixels — aren’t driving enterprise purchasing decisions.

“In general, people are just holding onto their devices longer, because that need to upgrade isn’t quite there yet,” he said. “The features we’re seeing, and the innovation we’re seeing, have really plateaued.”

The future of the market

Experts said the market should recover somewhat over the near term. Gupta said Gartner was forecasting lower smartphone sales in 2020 than in 2019, but sales should trend higher as the year progresses and pandemic-related issues fade.

“Demand should pick up in the [second half of 2020], and we project positive growth in 2021,” he said.

Klein said that with little business reason to purchase high-end devices, manufacturers may look to pivot to lower-end phones.

“I think the majority of sales are going to be in the entry-level or mid-market space,” he said.

In time, Klein said, new features like 5G capability or foldable displays may drive sales, but the main reason for users and businesses to upgrade in the short term will be to keep pace with the latest iOS and Android OS updates.


Cisco servers breached through SaltStack vulnerabilities

Cisco revealed threat actors had compromised several of its servers by exploiting two previously disclosed SaltStack vulnerabilities.

The networking giant published a security advisory Thursday regarding two products — Cisco Modeling Labs (CML) Corporate Edition and Cisco Virtual Internet Routing Lab Personal Edition (VIRL-PE) — that were affected by the critical SaltStack framework vulnerabilities disclosed last month. The advisory contained patches for both products, but it also noted that six salt-master servers were compromised by threat actors who exploited the SaltStack flaws in Cisco VIRL-PE.

“Cisco identified that the Cisco-maintained salt-master servers that are servicing Cisco VIRL-PE releases 1.2 and 1.3 were compromised. The servers were remediated on May 7, 2020,” the advisory said.

Those servers are the following:

  • us-1.virl.info
  • us-2.virl.info
  • us-3.virl.info
  • us-4.virl.info
  • vsm-us-1.virl.info
  • vsm-us-2.virl.info

A Cisco spokesperson told SearchSecurity: “At this time, we have no evidence of customer data exposure related to this vulnerability.”

The two SaltStack flaws — CVE-2020-11651, an authentication bypass vulnerability, and CVE-2020-11652, a directory traversal vulnerability — were fixed in version 3000.2 of the framework, which was released on April 29. The vulnerabilities, which were discovered by researchers at F-Secure, were disclosed the following day.
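As a generic illustration of the directory traversal class of bug (this is not Salt's actual code, and the paths are illustrative), the defense amounts to confining any path built from untrusted input to an expected base directory:

```python
import os

# Generic illustration of the directory traversal bug class (not Salt's actual
# code): a path built from untrusted input must stay inside a base directory.
def safe_join(base, untrusted):
    """Join untrusted input to base, rejecting paths that escape it."""
    candidate = os.path.normpath(os.path.join(base, untrusted))
    if candidate != base and not candidate.startswith(base + os.sep):
        raise ValueError(f"path escapes base directory: {untrusted!r}")
    return candidate

print(safe_join("/var/cache/salt", "minions/web1"))   # /var/cache/salt/minions/web1
# safe_join("/var/cache/salt", "../../etc/shadow")    # raises ValueError
```

Without the normalization-and-prefix check, `..` components in attacker-supplied input let a request read or write files outside the intended tree.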

Cisco said it updated its salt-master servers on May 7. However, CML and VIRL-PE, which use a version of SaltStack that runs the salt-master service affected by the two vulnerabilities, were not patched and were left exposed. When asked why these patches came weeks later, the Cisco spokesperson offered the following response:

The Cisco-hosted servers were patched on May 7. For Cisco CML and VIRL-PE deployments, customers download software that contains SaltStack. Cisco PSIRT [Product Security Incident Response Team] became aware of attempted exploitation of these vulnerabilities the week of May 18. We made fixed software available and issued the security advisory on May 28 to inform our customers and provide mitigation instructions so they can take appropriate action. We ask our customers to please review the advisory for complete detail.


Steer clear of trouble while following the Office 365 roadmap

Microsoft will make several changes to the Office 365 platform this year that will affect enterprise users. Email client changes and new features in the Office suite and subscriptions can increase support calls, but administrators can help themselves through training and engagement.

Microsoft, which was once tolerant of customers on older products, is pushing customers to adopt the latest Windows 10 build and Office suite to take advantage of new Office 365 functionality and capabilities. At the time of publication, the Office 365 roadmap shows nearly 250 features in development and nearly 150 rolling out. Some of the changes include:

  • After October 2020, only Office 2019 and Office ProPlus will be allowed to connect to Office 365 services, such as email on Exchange Online and SharePoint Online;
  • Microsoft Outlook will receive several changes to its user interface throughout 2020;
  • Office Groups and Microsoft Teams will be the focus for collaboration tool development;
  • Office ProPlus is no longer supported on Windows 8.1, Windows 7 or older client operating systems, or on Windows Server 2012, 2012 R2 and 2016 on the server side.

Given the number of updates in the works, many administrators realize that the wave of change will affect many of their users, especially if it requires upgrading any legacy Office suite products such as Office 2013, 2016 and even 2010. To ensure a smooth transition with many of the new Office 365 tools and expected changes, IT workers must take several steps to prepare.

Develop an Office 365 or Office 2019 adoption plan

One of the first steps for IT is to plot out a strategy that outlines the upcoming changes and what needs to be done to complete the adoption process. During this step, the IT team must detail the various software changes to implement — upgrades to the Office suite, introduction of Microsoft Teams and other similar items. The adoption plan can define the details around training material, schedules, resources and timelines needed.

Identify platform champions to help encourage adoption

To gain end users' trust and keep them invested in the upcoming Office 365 roadmap features, administrators must identify a few platform champions within the business who can help build support among end-user groups outside of IT.

Build excitement around the upcoming changes

Changes are generally met with some resistance from end users, and this is especially the case when it comes to changing tools that are heavily used such as Outlook, Word, Excel and certain online services. To motivate end users to embrace some of the new applications coming out in 2020, administrators must highlight the benefits such as global smart search, a new look and feel for the email client and several enhancements coming in Microsoft Teams.

Be flexible with training materials and methods

Everyone learns differently, so any training content that administrators provide to the end users must come in several formats. Some of the popular delivery mechanisms include short videos, one-page PDF guides with tips and tricks, blog postings and even podcasts. One other option is to outsource the training process by using a third-party vendor that can deliver training material, tests and other content through an online learning system. Some of the groups that offer this service include BrainStorm, Microsoft Learning and Global Knowledge Training.

Monitor progress and highlight success stories

Once IT begins to roll out the adoption plan and the training to the end users, it is important to monitor progress with frequent checks to identify the users actively participating in the training and using the different tools available to them. One way for administrators to monitor Office activation is through the reports section of the Office 365 admin portal. The Office usage and activation reports identify who is making full use of the platform and who is lagging behind and might require extra assistance to build their skills.
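The check itself is simple set logic. As a trivial sketch (the column names below are assumptions for illustration, not the actual admin portal report schema), finding the laggards from an exported activation report looks like this:

```python
import csv
import io

# Hypothetical sketch: given an activations export (the column names are
# assumptions, not the real admin portal schema), list unactivated users.
EXPORT = """\
user,activated
alice@example.com,Yes
bob@example.com,No
carol@example.com,Yes
"""

def lagging_users(export_text):
    reader = csv.DictReader(io.StringIO(export_text))
    return [row["user"] for row in reader if row["activated"] != "Yes"]

print(lagging_users(EXPORT))  # ['bob@example.com']
```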

Stay on top of the upcoming changes from Microsoft

End users are not the only ones who need training. Given the fast rate that the Office 365 platform changes, IT administrators have a full-time job in continuing to review the new additions and changes to the applications and services. Online resources like Microsoft 365 Roadmap and blog posts by Microsoft and general technology sites provide valuable insights into what is being rolled out and what upcoming changes to expect.

Share stories and keep the door open for continuous conversations

Microsoft Teams and Yammer give administrators a recommended channel for interacting with end users as they adopt new Office 365 tools. These tools let end users share feedback and allow others to join the conversation, helping IT gauge the overall sentiment around the changes in Office 365. They also give IT an avenue to announce major future changes and evaluate how end users respond.


Critical SaltStack vulnerabilities exploited in several data breaches

Several technology organizations have reported data breaches stemming from two critical SaltStack vulnerabilities that were first disclosed last week.

SaltStack’s infrastructure automation and configuration management software, which is used to maintain cloud servers and data centers, is built on the company’s open source Salt framework. Last Thursday, F-Secure publicly disclosed two critical remote code execution vulnerabilities in the Salt framework — CVE-2020-11651, an authentication bypass flaw, and CVE-2020-11652, a directory traversal bug; both flaws were patched in release 3000.2 of the framework, which SaltStack released the day before the disclosure.

The SaltStack vulnerabilities, which were first discovered by F-Secure researchers in March, allow an unauthorized individual who can connect to a Salt installation’s “request server” port to circumvent any authorization requirements or access controls. As a result, an attacker can gain root control of both the “Master” Salt installation and the “minions” or agents that connect to it, according to F-Secure.
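A quick local gauge of this exposure (a rough sketch, not an official Salt tool) is to check which interface the master's request server is bound to; Salt listens on ports 4505/4506 and binds every interface by default:

```python
import ipaddress

# Rough sketch (not an official Salt tool): inspect a salt-master config
# excerpt for an `interface` binding reachable beyond the local machine.
SAMPLE_CONFIG = """\
# /etc/salt/master (excerpt)
interface: 0.0.0.0
publish_port: 4505
"""

def master_interface(config_text, default="0.0.0.0"):
    for line in config_text.splitlines():
        stripped = line.strip()
        if stripped.startswith("interface:"):
            return stripped.split(":", 1)[1].strip()
    return default  # Salt binds all interfaces when the setting is absent

def is_exposed(addr):
    """True when the bind address could be reachable from outside the host."""
    ip = ipaddress.ip_address(addr)
    if ip.is_unspecified:  # 0.0.0.0 means "every interface"
        return True
    return not (ip.is_loopback or ip.is_private)

print(is_exposed(master_interface(SAMPLE_CONFIG)))  # True
```

A bind address alone is not proof of internet exposure, and firewall rules matter just as much, but masters that pass this check and sit behind no filtering are exactly the population F-Secure's scan found.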

“A scan revealed over 6,000 instances of this service exposed to the public internet,” F-Secure said in its advisory. “Getting all of these installs updated may prove a challenge as we expect that not all have been configured to automatically update the salt software packages.”

F-Secure did not publish any proof-of-concept exploit code for the SaltStack vulnerabilities because of the “reliability and simplicity of exploitation.” The cybersecurity vendor also warned that attacks were imminent. “We expect that any competent hacker will be able to create 100% reliable exploits for these issues in under 24 hours,” the advisory said.

Exploitation in the wild didn’t occur quite that quickly, but it was close.

The data breaches

Several technology organizations were breached over the weekend in attacks that exploited the SaltStack vulnerabilities.

On May 2, LineageOS, an open source Android distribution, was breached. The organization announced on Twitter that “an attacker used a CVE in our saltstack master to gain access to our infrastructure” but that signing keys, builds and source code were unaffected. A timeline of the attack with additional details was documented on the LineageOS status page.

Also, on May 2, certificate authority DigiCert was breached. According to a public post in the Mozilla security group forum by Jeremy Rowley, executive vice president of product at DigiCert, a key used for signed certificate timestamps (SCTs) on the company’s Certificate Transparency (CT) 2 log server was exposed in the breach. “The remaining logs remain uncompromised and run on separate infrastructure,” Rowley wrote in a post on Sunday.

Update: In a statement to SearchSecurity, Rowley said the CT2 log server was separated from the rest of DigiCert’s network, and therefore no CA systems or other log servers were affected by the intrusion. “The Salt environment was not actually tied to DigiCert’s corporate environment. It was its own segmented environment,” he said.

DigiCert announced Monday that it was deactivating the CT2 log server, though it didn’t believe the exposed key was used to sign SCTs outside of the CT2 log server. However, as a precaution the company advised other certificate authorities that received DigiCert SCTs after 5 p.m. MDT on May 2 to obtain alternative SCTs.

Software maker Xen Orchestra was also breached over the weekend, according to a company blog post. The company documented the attack timeline, which began at 1:18 a.m. on May 3 when it discovered some parts of its infrastructure were unreachable. After launching a full investigation, Xen Orchestra identified the culprit as a “rogue” Salt minion process for cryptocurrency mining, which was found to be running on some of its VMs, according to the blog.

Xen Orchestra said it was fortunate in that no RPMs or GNU Privacy Guard (GPG) signing keys were affected in the breach, and there was no evidence that customer data or other sensitive information was compromised.

The company admitted it was caught off guard and underestimated the risk of having Salt Masters exposed to the public internet. “Luckily, the initial attack payload was really dumb and not dangerous,” Xen Orchestra said in the post. “We are aware it might have been far more dangerous and we take it seriously as a big warning.”

Open source blogging platform Ghost became yet another victim, suffering an attack that began at 1:30 a.m. on May 3, according to a report on its status page. The organization determined an attacker used the CVEs to gain access to its infrastructure, which affected both Ghost(Pro) sites and Ghost.org billing services. Like Xen Orchestra, Ghost determined the attackers deployed cryptomining malware on its infrastructure.

“The mining attempt spiked CPUs and quickly overloaded most of our systems, which alerted us to the issue immediately,” the company wrote in its update, adding that fixes for the vulnerabilities were implemented. “At this time there is no evidence of any attempts to access any of our systems or data.”

Ghost verified that no customer payment card data was affected in the breach, but that all sessions, passwords and keys were being reset and all servers were being reprovisioned as a precaution. In an updated status post on Monday, Ghost said all traces of the cryptomining malware had been eliminated.

The attacks continued after the weekend. Code 42, an IT services firm based in Nantes, France, (not to be confused with Code42, a U.S.-based backup and data protection vendor), took to Twitter Monday to announce its infrastructure was under attack through a “zeroday” in SaltStack. [Editor’s note: The SaltStack vulnerabilities were not zero days as they had been patched prior to public disclosure and exploitation in the wild.]

SaltStack issued a statement confirming that attacks had occurred and urging customers to update their software to prevent further breaches and follow best practices to harden their Salt environments.

“Upon learning of the CVE, SaltStack took immediate action to develop and publish patches, and to communicate update instructions to our customers and users,” Moe Abdula, senior vice president of engineering at SaltStack, wrote in a blog post. “Although there was no initial evidence the CVE had been exploited, we have confirmed that some vulnerable, unpatched systems have been accessed by unauthorized users since the release of the patches.”


The Acid Test for Your Backup Strategy

For the first several years that I supported server environments, I spent most of my time working with backup systems. I noticed that almost everyone did their due diligence in performing backups. Most people took an adequate responsibility to verify that their scheduled backups ran without error. However, almost no one ever checked that they could actually restore from a backup — until disaster struck. I gathered a lot of sorrowful stories during those years. I want to use those experiences to help you avert a similar tragedy.

Successful Backups Do Not Guarantee Successful Restores

Fortunately, a lot of the problems that I dealt with in those days have almost disappeared due to technological advancements. But, that only means that you have better odds of a successful restore, not that you have a zero chance of failure. Restore failures typically mean that something unexpected happened to your backup media. Things that I’ve encountered:

  • Staff inadvertently overwrote a full backup copy with an incremental or differential backup
  • No one retained the necessary decryption information
  • Media was lost or damaged
  • Media degraded to uselessness
  • Staff did not know how to perform a restore — sometimes with disastrous outcomes

I’m sure that some of you have your own horror stories.

These risks apply to all organizations. Sometimes we manage to convince ourselves that we have immunity to some or all of them, but you can’t get there without extra effort. Let’s break down some of these line items.

People Represent the Weakest Link

We would all like to believe that our staff will never make errors and that the people that need to operate the backup system have the ability to do so. However, as a part of your disaster recovery planning, you must expect an inability to predict the state or availability of any individual. If only a few people know how to use your backup application, then those people become part of your risk profile.

You have a few simple ways to address these concerns:

  • Periodically test the restore process
  • Document the restore process and keep the documentation updated
  • Give non-IT personnel knowledge of, and practice with, backup and restore operations
  • Make sure non-IT personnel know how to get help with the application

It’s reasonable to expect that you would call your backup vendor for help in the event of an emergency that prevented your best people from performing restores. However, in many organizations without a proper disaster recovery plan, no one outside of IT even knows who to call. The knowledge inside any company naturally tends to arrange itself in silos, but you must make sure to spread at least the bare minimum information.

Technology Does Fail

I remember many shock and horror reactions when a company owner learned that we could not read the data from their backup tapes. A few times, these turned into grief and loss counselling sessions as they realized that they were facing a critical — or even complete — data loss situation. Tape has its own particular risk profile, and lots of businesses have stopped using it in favour of on-premises disk-based storage or cloud-based solutions. However, all backup storage technologies present some kind of risk.

In my experience, data degradation occurred most frequently. You might see this called other things, my favourite being “bit rot”. Whatever you call it, it all means the same thing: the data currently on the media is not the same data that you recorded. That can happen just because magnetic storage devices have susceptibilities. That means that no one made any mistakes — the media just didn’t last. For all media types, we can establish an average for failure rates. But, we have absolutely no guarantees on the shelf life for any individual unit. I have seen data pull cleanly off decade-old media; I have seen week-old backups fail miserably.
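Silent degradation is detectable long before a disaster if you record content hashes when backups are written and recompute them on a schedule. A minimal sketch (the paths and the simulated corruption are illustrative):

```python
import hashlib
import os
import tempfile

def sha256_file(path):
    """Return the SHA-256 digest of a file's contents."""
    with open(path, "rb") as f:
        return hashlib.sha256(f.read()).hexdigest()

def build_manifest(paths):
    """Record a digest per file at backup time."""
    return {p: sha256_file(p) for p in paths}

def find_corrupted(manifest):
    """Return the files whose current contents no longer match the manifest."""
    return [p for p, digest in manifest.items() if sha256_file(p) != digest]

# Demo: write a "backup" file, record it, then silently flip its contents.
with tempfile.TemporaryDirectory() as d:
    path = os.path.join(d, "backup.img")
    open(path, "wb").write(b"important data")
    manifest = build_manifest([path])
    open(path, "wb").write(b"important d4ta")   # simulated bit rot
    print(find_corrupted(manifest))             # lists backup.img as corrupted
```

Dedicated backup products do this internally, but the principle is the same: a digest recorded at write time is the only reliable witness to what the media held back then.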

Unexpectedly, newer technology can make things worse. In our race to cut costs, we frequently employ newer ways to save space and time. In the past, we had only compression and incremental/differential solutions. Now, we have tools that can deduplicate across several backup sets and at multiple levels. We often put a lot of reliance on the single copy of a bit.

How to Test your Backup Strategy

The best way to identify problems is to break-test your strategy and find its weaknesses. Performing test restores will help you identify backup reliability issues and solve them. Simply put, you cannot know that you have a good backup unless you can perform a good restore. You cannot know that your staff can perform a restore unless they perform a restore. For maximum effect, plan tests to occur on a regular basis.

Some products, like Altaro VM Backup, include built-in features to make tests easy. Altaro VM Backup provides a “Test & Verify Backups” wizard to help you perform on-demand tests and a “Schedule Test Drills” feature to help you automate the process.


If your tool does not have such a feature, you can still use it to make certain that your data will be there when you need it. It should have some way to restore a separate or redirected copy. So, instead of overwriting your live data, you can create a duplicate in another place where you can safely examine and verify it.
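One straightforward way to verify such a redirected restore is to hash every file in the duplicate and compare it against the live copy. A minimal sketch in Python — the paths in the usage example are hypothetical, and you would point it at your own live and restored locations:

```python
import hashlib
from pathlib import Path

def hash_tree(root):
    """Map each file's path (relative to root) to its SHA-256 digest."""
    digests = {}
    for path in Path(root).rglob("*"):
        if path.is_file():
            digests[path.relative_to(root).as_posix()] = hashlib.sha256(
                path.read_bytes()
            ).hexdigest()
    return digests

def compare_restore(original_root, restored_root):
    """Return the set of relative paths that are missing from or differ in the restore."""
    original = hash_tree(original_root)
    restored = hash_tree(restored_root)
    return {p for p, digest in original.items() if restored.get(p) != digest}
```

For example, `compare_restore(r"\\fileserver\data", r"\\fileserver\restore-test")` would return an empty set for a clean restore; anything it returns is a file you need to investigate.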

Test Restore Scenario

In the past, we would often simply restore some data files to a shared location and use a simple comparison tool. Now that we use virtual machines for so much, we can do a great deal more. I’ll show one example of a test that I use. In my system, all of these are Hyper-V VMs. You’ll have to adjust accordingly for other technologies.

Using your tool, restore copies of:

  • A domain controller
  • A SQL server
  • A front-end server dependent on the SQL server

On the host that you restored those VMs to, create a private virtual switch. Connect each virtual machine to it. Spin up the copied domain controller, then the copied SQL server, then the copied front-end. Use the VM connect console to verify that all of them work as expected.

Create test restore scenarios of your own! Make sure that they match a real-world scenario that your organization would rely on after a disaster.

Go to Original Article
Author: Eric Siron

Community and Connection to Drive Change

Reflections on International Women’s Day and Women’s History Month

In recent weeks, I have had several individuals share with me their admiration for the amount of time I spend listening to, advocating for and simply being there for women. Of course I was humbled by what felt like a compliment, but hearing this gave me pause. Why did these individuals see my actions as deserving of admiration as opposed to a core way of how we show up for each other in the workplace, the industry and our lives in general? What path led me to this way of being, how might I expand my impact and how might I encourage others to take a more active role?

This way of being has been part of who I am for my entire working life. When I joined Microsoft full time in 1998, my first manager was a role model for me. Laurie Litwack spent time getting to know me personally as well as understanding my passion and hopes and what unique perspective I brought. She thoughtfully created my first assignment to both leverage my skills and challenge me. Laurie showed me not only what it meant to bring your authentic self to work but also how it felt to be supported. Under her leadership I not only grew in the technical aspects of my role; she also nurtured my appreciation for people. Looking back, this experience was unique, especially for that era in engineering, when there were fewer women and even fewer women managers. It shaped my values as a leader and my view on how you best engage people and support their development. It showed me the importance of being present.

Early in my career, the VP of our engineering organization, Bill Veghte, brought a group of women employees together to better understand our experiences in the organization. He genuinely wanted to learn from us what the organization could be doing better to support our growth and satisfaction. At the time, the number of women in the organization was low, and this forum was the first opportunity many of us had to meet and spend time with each other. The most valuable thing we learned from the experience was the personal support and enjoyment that came from simply making time for each other. The isolation we each felt melted away when we got to spend time with others like us: creating connections, sharing experiences, learning from each other. We grew more collectively than we ever would have individually, and I personally benefited from both the friendship and wisdom of many of the women in this community: Terrell Cox, Jimin Li, Anna Hester, Farzana Rahman, Deb MacFadden, Molly Brown, Linda Apsley, Betsy Speare. This was true many years ago when this community was created and holds true today, even as this community has scaled from a handful of women to thousands of women across our Cloud + AI Division who make up this Women’s Leadership Community (WLC) under sponsorship from leaders such as Bob Muglia, Bill Laing, Brad Anderson and currently Scott Guthrie.

As I grew in my career, the importance of intentionally building connections with other women only became more clear. In the early 2010s, as I joined the technical executive community, I looked around and felt a similar experience to my early career days. There were very few technical executives who were women, and we were spread across the organization, meaning we rarely had the opportunity to interact and in some cases had never met! It was out of a desire to bring the WLC experience to this group that our Life Without Lines Community of technical women executives across Microsoft grew, based on the founding work of Michele Freed, Lili Cheng, Roz Ho and Rebecca Norlander. This group represents cross-company leadership, and as the connections deepened, so did the impact on each other in terms of peer mentoring, career sponsorship and engineering and product collaboration.

Together we are more powerful than we are individually, amplifying each other’s voices.       

Although the concept of community might seem simple and obvious in the ongoing conversations about inclusion, the key in my experience is how the connections in these communities were built. This isn’t just about networking for the sake of networking; we come together with a focus on being generous with our time and our experiences, challenging each other and our organization to address issues in a new way, and with the space to be authentic within our own community by not feeling like we need to be a monolith in our perspectives or priorities. We advocate for one another, we leverage our networks, we create space and we amplify the voices of others. This community names the challenges these women face, names the hopes they have for themselves and future women in our industry, and names what is most important to our enjoyment of our work. My job, and the job of other leaders, is to then listen to these voices, leverage the insights to advocate for what is needed in the organization, and drive systemic changes that will create the best lived experience for all women at Microsoft and in the industry.

I have found that members of the community want to be heard, if you are willing to be present, willing to bring your authentic self and willing to take action on what you learn. I’m reflecting on this, in particular, as I think about International Women’s Day (IWD). From its beginnings in the early 1900s through to present day, IWD strives to recognize the need for active participation, equality and development of women and acknowledge the contribution of women globally.

This year I am reflecting on the need to ensure that our communities of women accurately represent the diverse range of perspectives and experiences of employees and customers, making sure that even in a community about including others, we are not unintentionally excluding certain groups of women who may not have the same experiences, priorities or privileges as others. It is a chance to reflect on how I can expand my impact. I challenge all of us to take this time to recognize those who are role models for us and those voices who may not be heard, and to determine what role each of us can play in achieving this goal for everyone.

Author: Microsoft News Center

How to install and test Windows Server 2019 IIS


In this video, I want to show you how to install Internet Information Services, or IIS, and prepare it for use.

I’m logged into a domain-joined Windows Server 2019 machine and I’ve got the Server Manager open. To install IIS, click on Manage and choose the Add Roles and Features option. This launches the Add Roles and Features wizard. Click Next on the welcome screen and choose role-based or feature-based installation for the installation type and click Next.

Make sure that my server is selected in the server pool and click Next. I’m prompted to choose the roles that I want to deploy. We have an option for Web Server (IIS). That’s the option I’m going to select. When I do that, I’m prompted to install some dependency features, so I’m going to click on Add Features and I’ll click Next.

I’m taken to the features screen. All the dependency features that I need are already being installed, so I don’t need to select anything else. I’ll click Next, Next again, Next again on the Role Services — although if you do need to install any additional role services to service the IIS role, this is where you would do it. You can always enable these features later on, so I’ll go ahead and click Next.

I’m taken to the Confirmation screen and I can review my configuration selections. Everything looks good here, so I’ll click install and IIS is being installed.

Testing Windows Server 2019 IIS

The next thing that I want to do is test IIS to make sure that it’s functional. I’m going to go ahead and close this out and then go to the Local Server tab. I’m going to go to IE Enhanced Security Configuration. I’m temporarily going to turn this off just so that I can test IIS. I’ll click OK and I’ll close Server Manager.

The next thing that I want to do is find this machine’s IP address, so I’m going to right-click on the Start button and go to Run and type CMD to open a command prompt window, and then from there, I’m going to type ipconfig.

Here I have the server’s IP address, so now I can open up an Internet Explorer window and enter this IP address and Internet Information Services should respond. I’ve entered the IP address, then I press enter and I’m taken to the Internet Information Services screen. IIS is working at this point.
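The same reachability check can also be scripted instead of clicked through a browser. A minimal sketch in Python — the address in the usage example is a placeholder for whatever ipconfig reported on your server:

```python
import urllib.request

def check_iis(host, port=80, timeout=5):
    """Return True if the web server at host:port answers an HTTP GET successfully."""
    url = f"http://{host}:{port}/"
    try:
        with urllib.request.urlopen(url, timeout=timeout) as resp:
            # Any 2xx/3xx status means the server is up and serving content.
            return 200 <= resp.status < 400
    except OSError:
        # Covers connection refused, timeouts and HTTP-level failures.
        return False
```

For example, `check_iis("192.0.2.10")` should return True once IIS is serving its default page, which makes it easy to fold this check into scheduled monitoring.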

I’ll go ahead and close this out. If this were a real-world deployment, one of the next things that you would probably want to do is begin uploading some of the content that you’re going to use on your website so that you can begin testing it on this server.

I’ll go ahead and open up File Explorer and I’ll go to This PC, the C: drive, the inetpub folder and the wwwroot subfolder. This is where you would copy all of your files for your website. You can configure IIS to use a different folder, but this is the one used by default for IIS content. You can see the files right here that make up the page that you saw a moment ago.

How to work with the Windows Server 2019 IIS bindings

Let’s take a look at a couple of the configuration options for IIS. I’m going to go ahead and open up Server Manager and what I’m going to do now is click on Tools, and then I’m going to choose the Internet Information Services (IIS) Manager. The main thing that I wanted to show you within the IIS Manager is the bindings section. The bindings allow traffic to be directed to a specific website, so you can see that, right now, we’re looking at the start page and, right here, is a listing for my IIS server.

I’m going to go ahead and expand this out and I’m going to expand the Sites container and, here, you can see the default website. This is the site that I showed you just a moment ago, and then if we look over here on the Actions menu, you can see that we have a link for Bindings. When I open up the Bindings option, you can see that, by default, we’re binding all HTTP traffic to port 80 on all IP addresses for the server.

We can edit a binding if I select it and click Edit. You can see that we can select a specific IP address. If the server had multiple IP addresses associated with it, we could link a different IP address to each site. We could also change the port that’s associated with a particular website. For example, if I wanted to bind this particular website to port 8080, I could do that by changing the port number. Generally, though, you want HTTP traffic to flow on port 80. The other thing that you can do here is to assign a hostname to the site, for example www.contoso.com or something to that effect.
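Behind the UI, IIS stores each binding as a single ip:port:hostname string (for example, *:80: for the default site). A small hypothetical helper, not part of IIS itself, to illustrate that format:

```python
def parse_binding(info):
    """Split an IIS bindingInformation string of the form 'ip:port:hostname'.

    An empty IP field means "all unassigned" addresses, shown as * in the UI;
    an empty hostname means the binding matches any host header.
    """
    ip, port, hostname = info.split(":", 2)
    return {"ip": ip or "*", "port": int(port), "hostname": hostname}
```

So `parse_binding("*:8080:www.contoso.com")` describes the port-8080, hostname-restricted binding discussed above.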

The other thing that I want to show you in here is how to associate HTTPS traffic with a site. Typically, you’re going to have to have a certificate to make that happen, but assuming that that’s already in place, you click on Add and then you would change the type to HTTPS and then you can choose an IP address; you can enter a hostname; and then you would select your SSL certificate for the site.

You’ll notice that the port number is set to 443, which is the default port that’s normally used for HTTPS traffic. So, that’s how you install IIS and how you configure the bindings for a website.



4 SD-WAN vendors integrate with AWS Transit Gateway

Several software-defined WAN vendors have announced integration with Amazon Web Services’ Transit Gateway. For SD-WAN users, the integrations promise simplified management of policies governing connectivity among private data centers, branch offices and AWS virtual networks.

Stitching together workloads across cloud and corporate networks is complex and challenging. AWS tackles the problem by making AWS Transit Gateway the central router of all traffic emanating from connected networks.

Cisco, Citrix Systems, Silver Peak and Aruba, a Hewlett Packard Enterprise Company, launched integrations with the gateway this week. The announcements came after AWS unveiled the AWS Transit Gateway at its re:Invent conference in Las Vegas.

SD-WAN vendors lining up quickly to support the latest AWS integration tool didn’t surprise analysts. “The ease and speed of integration with leading IaaS platforms are key competitive issues for SD-WAN for 2020,” said Lee Doyle, the principal analyst for Doyle Research.

By acting as the network hub, Transit Gateway reduces operational costs by simplifying network management, according to AWS. Before the new service, companies had to make individual connections between networks outside of AWS and those serving applications inside the cloud provider.

The potential benefits of Transit Gateway made connecting to it a must-have for SD-WAN suppliers. However, tech buyers should pay close attention to how each vendor configures its integration.

“SD-WAN vendors have different ways of doing things, and that leads to some solutions being better than others,” Doyle said.

What the 4 vendors are offering

Cisco said its integration would let IT teams use the company’s vManage SD-WAN controller to administer connectivity from branch offices to AWS. As a result, engineers will be able to apply network segmentation and data security policies universally through the Transit Gateway.

Aruba will let customers monitor and manage connectivity either through the Transit Gateway or Aruba Central. The latter is a cloud-based console used to control an Aruba-powered wireless LAN.

Silver Peak is providing integration between the Unity EdgeConnect SD-WAN platform and Transit Gateway. The link will make the latter the central control point for connectivity.

Finally, Citrix’s Transit Gateway integration would let its SD-WAN orchestration service connect branch offices and data centers to AWS. The connections will be particularly helpful to organizations running Citrix’s virtual desktops and associated apps on AWS.
