Tag Archives: create

For Sale – Core i7-7700K, 32GB DDR4-2400, Asus Prime B250M-K, Carbide Air 240 White, 256GB SSD

Hello! Having upgraded my PC, it’s time to create some space. Note some key components are missing (PSU and GPU as I kind of need those!) so you’ll need to obtain those separately to make a complete PC.

Case: Corsair Carbide Air 240 White with window, boxed and in excellent condition

Mainboard: Asus Prime B250M-K, with box and complete

Processor: Intel Core i7-7700K, boxed version (unused Intel sticker!)

Cooler: Arctic Freezer i11, boxed

Memory: Corsair VENGEANCE LPX 32GB (2 x 16GB) DDR4 2400MHz C16, with box

Fans: 2 x Arctic Freezer F12 PWM PST case fans with PWM sharing (pass-thru connector), one boxed

SSD: Samsung 830 256GB SATA SSD, £35 inc. P&P

Currently the CPU, cooler, RAM and mainboard are mounted in the case. I would prefer collection of these components as they are (observing Covid distancing precautions), but I’ll consider splitting if there are no takers for the combination. With the case fans, I’m asking for £360 (reduced from £400).

Thanks for looking, pictures will be coming soon…

Go to Original Article

Accessibility tools support Hamlin Robinson students learning from home | Microsoft EDU

More than ever, educators are relying on technology to create inclusive learning environments that support all learners. As we recognize Global Accessibility Awareness Day, we’re pleased to mark the occasion with a spotlight on an innovative school that is committed to digital access and success for all.

Seattle-based Hamlin Robinson School, an independent school serving students with dyslexia and other language-based learning differences, didn’t set a specific approach to delivering instruction immediately after transitioning to remote learning. “Our thought was to send home packets of schoolwork and support the students in learning, and we quickly realized that was not going to work,” Stacy Turner, Head of School, explained in a recent discussion with the Microsoft Education Team.

About a week into distance learning, the school moved to more robust online instruction. The school serves grades 1-8, and students in fourth grade and up were already using Office 365 Education tools, including Microsoft Teams, so leveraging those same resources for distance learning was natural.

Built-in accessibility features

Stacy said the school was drawn to Microsoft resources for schoolwide use because of built-in accessibility features, such as dictation (speech-to-text), and the Immersive Reader, which relies on evidence-based techniques to help students improve at reading and writing.

“What first drew us to Office 365 and OneNote were some of the assistive technologies in the toolbar,” Stacy said. Learning and accessibility tools are embedded in Office 365 and can support students with visual impairments, hearing loss, cognitive disabilities, and more.

Josh Phillips, Head of Middle School, says for students at Hamlin Robinson, finding the right tools to support their learning is vital. “When we graduate our students, knowing that they have these specific language-processing needs, we want them to have fundamental skills within themselves and strategies that they know how to use. But we also want them to know what tools are available to them that they can bring in,” he said.

For example, for students who have trouble typing, a popular tool is the Dictate, or speech-to-text, function of Office 365. Josh said that a former student took advantage of this function to write a graduation speech at the end of eighth grade. “He dictated it through Teams, and then he was able to use the skills we were practicing in class to edit it,” Josh said. “You just see so many amazing ideas get unlocked and be able to be expressed when the right tools come along.”

Supporting teachers and students

Providing teachers with expertise around tech tools also is a focus at Hamlin Robinson. Charlotte Gjedsted, Technology Director, said the school introduced its teachers to Teams last year after searching for a platform that could serve as a digital hub for teaching and learning. “We started with a couple of teachers being the experts and helping out their teams, and then when we shifted into this remote learning scenario, we expanded that use,” Charlotte said.

“Teams seems to be the easiest platform for our students to use in terms of the way it’s organized and its user interface,” added Josh.

He said it was clear in the first days of distance learning that using Teams would be far better than relying on packets of schoolwork and the use of email or other tools. “The fact that a student could have an assignment issued to them, could use the accessibility tools, complete the assignment, and then return the assignment all within Teams is what made it clear that this was going to be the right app for our students,” he said. 

A student’s view

Will Lavine, a seventh-grade student at the school, says he appreciates the stepped-up emphasis on Teams and tech tools during remote learning, and says those are helping meet his learning needs. “I don’t have to write that much on paper. I can use technology, which I’m way faster at,” he said.

“Will has been using the ease of typing to his benefit,” added Will’s tutor, Elisa Huntley. “Normally, when he is faced with a handwritten assignment, he spends quite a bit of time refining his work using only a pencil and eraser. But when he works in Microsoft Teams, Will doesn’t feel the same pressure to get it right the first time. It’s much easier for him to re-type something. His ideas are flowing in ways that I have never seen before.”

Will added that he misses in-person school, but likes the collaborative nature of Teams, particularly the ability to chat with teachers and friends.

With the technology sorted out, Josh said educators have been very focused on ensuring students are progressing as expected. He says that teachers are closely monitoring whether students are joining online classes, engaging in discussions, accessing and completing assignments, and communicating with their teachers.

Connect, explore our tools

We love hearing from our educator community and students and families. If you’re using accessibility tools to create more inclusive learning environments and help all learners thrive, we want to hear from you! One great way to stay in touch is through Twitter by tagging @MicrosoftEDU.

And if you want to check out some of the resources Hamlin Robinson uses, remember that students and educators at eligible institutions can sign up for Office 365 Education for free, including Word, Excel, PowerPoint, OneNote, and Microsoft Teams.

In honor of Global Accessibility Awareness Day, Microsoft is sharing some exciting updates from across the company. To learn more, visit the links in the original article.

Go to Original Article
Author: Microsoft News Center


Virtualization-Based Security: Enabled by Default

Virtualization-based Security (VBS) uses hardware virtualization features to create and isolate a secure region of memory from the normal operating system. Windows can use this “virtual secure mode” (VSM) to host a number of security solutions, providing them with greatly increased protection from vulnerabilities in the operating system, and preventing the use of malicious exploits which attempt to defeat operating system protections.

The Microsoft hypervisor creates VSM, enforces restrictions that protect vital operating system resources, provides an isolated execution environment for privileged software, and can protect secrets such as authenticated user credentials. With the increased protections offered by VBS, even if malware compromises the operating system kernel, the possible exploits can be greatly limited and contained, because the hypervisor can prevent the malware from executing code or accessing secrets.

The Microsoft hypervisor has supported VSM since the earliest versions of Windows 10. However, until recently, Virtualization-based Security has been an optional feature that is most commonly enabled by enterprises. This was great, but the hypervisor development team was not satisfied. We believed that all devices running Windows should have Microsoft’s most advanced and most effective security features enabled by default. In addition to bringing significant security benefits to Windows, achieving default enablement status for the Microsoft hypervisor enables seamless integration of numerous other scenarios leveraging virtualization. Examples include WSL2, Windows Defender Application Guard, Windows Sandbox, Windows Hypervisor Platform support for 3rd party virtualization software, and much more.

With that goal in mind, we have been hard at work over the past several Windows releases optimizing every aspect of VSM. We knew that getting to the point where VBS could be enabled by default would require reducing the performance and power impact of running the Microsoft hypervisor on typical consumer-grade hardware like tablets, laptops and desktop PCs. We had to make the incremental cost of running the hypervisor as close to zero as possible and this was going to require close partnership with the Windows kernel team and our closest silicon partners – Intel, AMD, and ARM (Qualcomm).

Through software innovations like HyperClear and by making significant hypervisor and Windows kernel changes to avoid fragmenting large pages in the second-level address translation table, we were able to dramatically reduce the runtime performance and power impact of hypervisor memory management. We also heavily optimized hot hypervisor codepaths responsible for things like interrupt virtualization – taking advantage of hardware virtualization assists where we found that it was helpful to do so. Last but not least, we further reduced the performance and power impact of a key VSM feature called Hypervisor-Enforced Code Integrity (HVCI) by working with silicon partners to design completely new hardware features including Intel’s Mode-based execute control for EPT (MBEC), AMD’s Guest-mode execute trap for NPT (GMET), and ARM’s Translation table stage 2 Unprivileged Execute-never (TTS2UXN).

I’m proud to say that as of Windows 10 version 1903 9D, we have succeeded in enabling Virtualization-based Security by default on some capable hardware!
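As an aside, one way to check whether VBS is actually running on a given machine is to query the Win32_DeviceGuard WMI class from an elevated PowerShell prompt; a minimal sketch:

```powershell
# Query Device Guard / VBS status (run from an elevated PowerShell session)
$dg = Get-CimInstance -Namespace root\Microsoft\Windows\DeviceGuard `
                      -ClassName Win32_DeviceGuard

# VirtualizationBasedSecurityStatus: 0 = disabled, 1 = enabled, 2 = enabled and running
$dg.VirtualizationBasedSecurityStatus

# SecurityServicesRunning lists active VSM services (1 = Credential Guard, 2 = HVCI)
$dg.SecurityServicesRunning
```

On a Galaxy Book2 or any other machine with VBS enabled by default, the status value should report as enabled and running.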

The Samsung Galaxy Book2 is officially the first Windows PC to have VBS enabled by default. This PC is built around the Qualcomm Snapdragon 850 processor, a 64-bit ARM processor. This is particularly exciting for the Microsoft hypervisor development team because it also marks the first time that enabling our hypervisor is officially supported on any ARM-based device.

Keep an eye on this blog for announcements regarding the default-enablement of VBS on additional hardware and in future versions of Windows 10.

Go to Original Article
Author: brucesherwin

AI vendors to watch in 2020 and beyond

There are thousands of AI startups around the world. Many aim to do similar things — create chatbots, develop hardware to better power AI models or sell platforms to automatically transcribe business meetings and phone calls.

These AI vendors, or AI-powered product vendors, have raised billions over the last decade, and will likely raise even more in the coming years. Among the thousands of startups, a few shine a little brighter than others.

To help enterprises keep an eye on some of the most promising AI startups, here is a list of those founded within the past five years. The startups listed are all independent companies, not subsidiaries of a larger technology vendor. The chosen startups also cater to enterprises rather than consumers, and focus on explainable AI, hardware, transcription and text extraction, or virtual agents.

Explainable AI vendors and AI ethics

As the need for more explainable AI models has skyrocketed over the last couple of years and the debate over ethical AI has reached government levels, the number of vendors developing and selling products to help developers and business users understand AI models has increased dramatically. Two to keep an eye on are DarwinAI and Diveplane.

DarwinAI uses traditional machine learning to probe and understand deep learning neural networks to optimize them to run faster.

Founded in 2017 and based in Waterloo, Ontario, the startup creates mathematical models of the networks, and then uses AI to create a model that infers faster, while claiming to maintain the same general levels of accuracy. While the goal is to optimize the deep learning models, a 2018 update introduced an “explainability toolkit” that offers optimization recommendations for specific tasks. The platform then provides detailed breakdowns on how each task works, and how exactly the optimization will improve them.

Founded in 2017, Diveplane claims to create explainable AI models based on historical data observations. The startup, headquartered in Raleigh, N.C., runs its outputs through a “conviction” metric that scores how well new or changed data fits the model. A low score indicates a potential anomaly; a very low score means the system is highly surprised and the data likely doesn’t belong in the model’s data set.

There are thousands of AI startups in the world today, and it looks like there will be many more over the coming years.

In addition to the explainability product, Diveplane also sells a product that creates an anonymized digital twin of a data set. It doesn’t necessarily help with explainability, but it does help with issues around data privacy.

According to Diveplane CEO Mike Capps, Diveplane Geminai takes in data, understands it and then generates new data from it without carrying over personal data. In healthcare, for example, the product can input patient data and scrub personal information like names and locations, while keeping the patterns in the data. The outputs can then be fed into machine learning algorithms.

“It keeps the data anonymous,” Capps said.

AI hardware

To help power increasingly complex AI models, more advanced hardware — or at least hardware designed specifically for AI workloads — is needed. Major companies, including Intel and Nvidia, have quickly stepped up to the challenge, but so, too, have numerous startups. Many are doing great work, but one stands out.

Cerebras Systems, a 2016 startup based in Los Altos, Calif., made headlines around the world in 2019 when it created what it dubbed the world’s largest computer chip designed for AI workloads. The chip, about the size of a dinner plate, has some 400,000 cores and 1.2 trillion transistors. By comparison, the largest GPU has around 21.1 billion transistors.

The company has shipped a limited number of chips so far, but with a valuation expected to be well over $1 billion, Cerebras looks to be going places.

Automatic transcription companies

It’s predicted that more businesses will use natural language processing (NLP) technology in 2020 and that more BI and AI vendors will integrate natural language search functions into their platforms in the coming years.

Numerous startups, as well as many established companies, sell transcription and text-capturing platforms. It’s hard to judge them, as their platforms and services are generally comparable; however, two companies stand out.

Fireflies.ai sells a transcription platform that syncs with users’ calendars to automatically join and transcribe phone meetings. According to CEO and co-founder Krish Ramineni, the platform can transcribe calls with over 90% accuracy levels after weeks of training.

The startup, founded in 2016, presents transcripts within a searchable and editable platform. The transcription is automatically broken into paragraphs and includes punctuation. Fireflies.ai also automatically extracts and bullets information it deems essential. This feature does “a fairly good job,” one client said earlier this year.

The startup plans to expand that function to automatically label more types of information, including tasks and questions.

Meanwhile, Trint, founded in late 2014 by former broadcast journalist Jeff Kofman, is an automatic transcription platform designed specifically for newsrooms, although it has clients across several verticals.

The platform can connect directly with live video feeds, such as the streaming of important events or live press releases, and automatically transcribe them in real time. Transcriptions are collaborative, as well as searchable and editable, and include embedded time codes to easily go back to the video.

“It’s a software with an emotional response, because people who transcribe generally hate it,” Kofman said.

Bots and virtual agents

As companies look to cut costs and process client requests faster, the use of chatbots and virtual agents has greatly increased across numerous verticals over the last few years. While there are many startups in this field, a couple stand out.

Boost.ai, a Scandinavian startup founded in 2016, sells an advanced conversational agent that it claims is powered by a neural network. Automatic semantic understanding technology sits on top of the network, enabling the agent to read textual input word by word, and then as a whole sentence, to understand user intent.

Agents are pre-trained on one of several verticals before they are trained on the data of a new client, and the Boost.ai platform is quick to set up and has a low count of false positives, according to co-founder Henry Vaage Iversen. It can generally understand the intent of most questions within a few weeks of training, and will find a close alternative if it can’t understand it completely, he said.

The platform supports 25 languages and offers pre-trained modules for a number of verticals, including the banking, insurance and transportation industries.

Formed in 2018, EyeLevel.ai doesn’t create virtual agents or bots; instead, it has a platform for conversational AI marketing agents. The San Francisco-based startup has more than 1,500 chatbot publishers on its platform, including independent developers and major companies.

EyeLevel.ai is essentially a marketing platform: it advertises for numerous clients through the bots in its marketplace. Earlier this year, EyeLevel.ai co-founder Ryan Begley offered an example.

An independent developer on its platform created a bot that quizzes users on their Game of Thrones knowledge. The bot operates on social media platforms, and, besides providing a fun game for users, it also collects marketing data on them and advertises products to them. The data it collects is fed back into the EyeLevel platform, which then uses it to promote products through its other bots.

By opening the platform to independent developers, EyeLevel.ai gives individuals a chance to bring their bots to a broader audience while making some extra cash. It offers tools to help new bot developers get started, too.

“Really, the key goal of the business is help them make money,” Begley said of the developers.

Startup launches continuing to surge

This list of AI-related startups represents only a small percentage of the startups out there. Many offer unique products and services to their clients, and investors have widely picked up on that.

According to the comprehensive AI Index 2019 report, a nearly 300-page report on AI trends compiled by the Human-Centered Artificial Intelligence initiative at Stanford University, global private AI investment in startups reached $37 billion in 2019 as of November.

The report notes that since 2010, which saw $1.3 billion raised, investments in AI startups have increased at an average annual growth rate of over 48%.

The report, which considered only AI startups with more than $400,000 in funding, also found that more than 3,000 AI startups received funding in 2018. That number is on the rise, the report notes.

Go to Original Article

Failures must be expected in pursuit of digital innovation

Any CIO who expects employees to innovate must create a culture that accepts and even encourages failure along the way — something risk-averse executives understand but rarely put into practice.

“Culture is the software of the mind,” said Sandy Carter, vice president of Amazon Web Services, during a session at the Gartner IT Symposium in Orlando this week. “If an experiment failing comes with a price to pay, the culture of experimentation fails.”

That sentiment was reiterated time and again from prominent speakers during the Gartner conference, where IT leaders listened intently to advice on how to improve the pace of their organizations’ digital innovation efforts. In fact, lagging behind in digitization topped business leaders’ concerns in Gartner’s Emerging Risks Monitor Report released this week.

The research firm surveyed 144 senior executives across industries and found that “digitalization misconceptions” tops the list of concerns, while “lagging digitalization” ranks second. Last quarter’s top emerging risk, “pace of change,” continues to rank high, as it has in four previous emerging risk reports.

Sixty percent of the survey respondents said slow strategy execution and insufficient digital capabilities were top concerns for 2019, Gartner reports. Given the high stakes of digital innovation and the changes it brings, such projects certainly merit concern. Digitization projects lead to changes to business capabilities, profit models, value propositions and customer behavior, according to Gartner.

Think like a startup

In session after session at the IT Symposium, experts shared ways to hasten innovation — beginning with the top. It’s up to CIOs and other leaders to encourage innovative practices and to organize teams in ways that support digital innovation.

One example is team size. Big teams typically slow or completely stall the testing of new ideas, Carter said. She encourages keeping teams within companies small to maintain a startup feel, where interesting ideas are encouraged and actually implemented.

Gartner analyst Mark Raskino echoed that advice during his session on digital transformation mistakes to avoid. Executing on innovative ideas is a problem for big teams, which often spend too much time planning, he said.

Gartner analyst Mark Raskino tells IT Symposium attendees that digital innovation is often slowed by big teams who spend too much time planning and not enough time on execution.

“They’ll have one consulting firm come in, then they have a debate, set up a transformation program, they spend a couple of months on it, someone leaves the company, then another consultant comes in who redefines what digital is … you can see walls covered in plans that aren’t being executed by anybody,” Raskino told session attendees. “It’s a corporate disease, you’ve seen it before, and it has to stop.”

Lean startup thinking and taking action to create the minimum viable product gets you out of that trap, he added.

Large companies should also borrow from the startup mentality of growth mindsets — one of the “culture recoding” requirements for innovation, Raskino said.

“Unless upper and middle management is prepared to learn new stuff, and comfortable with doing that every day, it can’t expect anyone else in the organization to learn new things, and the organization becomes too stodgy,” Raskino said. “The more people we can get into that mindset, the more we can shift the cultural balance.”

Everyone must have ‘permission to disrupt’

Jennifer Hyman, co-founder and CEO of Rent the Runway, a privately held startup now valued at $1 billion, said innovation must be everyone’s responsibility.


“Do not start an innovation team at your organization, because that team is doomed for failure,” Hyman said in a keynote, where she was interviewed by Gartner analyst Helen Huntley. “Innovation is the responsibility of every group within the company, and every team has to be given permission to disrupt itself.”

If a CTO or CIO is not given permission to fail, they won’t come up with innovative technology, she said.

“Innovation inherently means risk-taking. It inherently means that a portion of your revenue, or your systems, are going to go down, while something else is going right,” she said. RTR experienced this type of disruption during a major software upgrade last month, which caused short-term shipping delays but led to a 35% improvement in inventory availability.

Talking upfront about what is likely to suffer through the innovation process, at least in the short term, and committing to that process over the long term is critical, according to Hyman.

Jennifer Hyman, co-founder and CEO of Rent the Runway, discusses the company’s Closet in the Cloud service during the Gartner IT Symposium 2019.

“Sometimes large companies give up too quickly on innovation because they expect that the growth rate of that innovation is going to take off as if they were regular divisions within the company, but true disruption takes a really long time within an organization,” Hyman said. “It has to come from the top, it has to be a part of people’s goals, and failure and risk taking has to be encouraged.”

Indeed, innovation is about reinvention, and that could require very long-range thinking, Gartner’s Raskino said. Misreading how deep the change is going to be in an industry in five to 10 years is often where executive boards make their first mistake, he told attendees.

“You have to look a long way out and bring it back to the question of, ‘What competencies are necessary now and what assets do we need now, to secure that future?'” he said.

And companies shouldn’t expect to obtain the assets their employees need to execute a deep transformation by pulling from their existing budgets, he said.

“[Executives] think they will be able to rob a bit from this budget and save some from the existing IT budget to do digital,” Raskino said. “You can do digital optimization, potentially, within existing budgets, but if you think you are going to do a transformation, without net new investment somewhere, you’re fooling yourself.”

Go to Original Article

How to Customize Site-Aware Clusters and Fault Domains

In this guide, we’ll cover how to create fault domains and configure them in Windows Server 2019. We will also run down the different layers of resiliency provided by Windows Server and fault domain awareness with Storage Spaces Direct. Let’s get started!

Resiliency and High-Availability

Many large organizations deploy their services across multiple data centers, not only to provide high-availability but also to support disaster recovery (DR). This allows services to move from servers, virtualization hosts, cluster nodes or clusters in one site to hardware in a secondary location. Prior to the Windows Server 2016 release, this was usually done by deploying a multi-site (or “stretched”) failover cluster. This solution worked well, but it had some manageability gaps; namely, it was never easy to determine which hardware was running at each site. Virtual machines (VMs) could move between cluster nodes and sites, but offered no finer-grained placement, even though most datacenters organize their hardware by chassis and rack. With Windows Server 2016 and 2019, Microsoft now provides organizations with not only server high-availability but also resiliency to chassis or rack failures and integrated site awareness through “fault domains”.

What is a Fault Domain?

A fault domain is a set of hardware components that have a shared single point of failure, such as a single power source. To provide fault tolerance, you need to have multiple fault domains so that a VM or service can move from one fault domain to another fault domain, such as from one rack to another.

The following image helps you identify these various datacenter components.

Defining a Node, Chassis, Rack and Site for Fault Domains


Source: https://ccsearch.creativecommons.org/photos/c461e460-6a99-4421-b5f1-906e74c9446b

Configuring Fault Domains in Windows Server 2019

First, let’s review the different layers of resiliency now provided by Windows Server. The following table shows the hierarchy of the Windows Server hardware stack:

Fault Domain: High-Availability

Application: Failover clustering is used to automatically restart an application’s services or move them to another cluster node. If the application is running inside a virtual machine (VM), guest clustering can be used, which creates a cluster of virtualized guests.

Virtual Machine: VMs run on a failover cluster and can be restarted on, or fail over to, another cluster node. A virtualized application can run inside the VM.

Node (Server / Host): A server can move its application to another node in the same chassis using failover clustering. The server itself is the single point of failure, for example through an operating system crash.

Chassis: A server affected by a chassis failure can move to another chassis in the same rack. A chassis is commonly used with blade servers, and its single point of failure could be a single power source or fan.

Rack: A server affected by a rack failure can move to another rack in the same site. A rack’s single point of failure may be its top-of-rack (TOR) switch.

Site: If an entire site is lost, such as in a natural disaster, a server can move to a secondary site (datacenter).

This implementation of fault domains lets you place nodes within different chassis, racks, and sites. Remember that these fault domains are defined purely in software and do not change the physical configuration of your datacenter. This means that if two nodes share the same physical chassis and it fails, both will go offline, even if you have declared them to be in different fault domains via the management interface.

This blog will specifically focus on the latest site availability and fault domain features, but check out Altaro’s blogs on Failover Clustering for more information. Additionally, as a side-note, Altaro VM Backup can provide DR and recovery functionality with its replication engine if desired.

Fault Domain Awareness with Storage Spaces Direct

A key scenario for using fault domains is to distribute your data, not just across different disks and nodes, but also across different chassis, racks and sites, so that it is always available in case of an outage. Microsoft implements this using Storage Spaces Direct (S2D), which distributes copies of data across these different fault domains. This allows you to deploy commodity storage drives in your datacenters, with data automatically replicated between disks. In the initial release of S2D, disks were mirrored between two cluster nodes, so that if one failed, the data was already available on the second server. With the added layers of chassis and rack awareness, additional copies can be created and distributed across different nodes, chassis, racks, and sites, providing granular resiliency throughout the different hardware layers. This means that if a node crashes, the data is still available elsewhere within the same chassis. If an entire chassis loses its power, a copy is on another chassis within the same rack. If a rack becomes unavailable due to a TOR switch misconfiguration, the data can be recovered from another rack. And if the datacenter fails, a copy of the data is available at the secondary site.

One important consideration is that site awareness and fault domains must be configured before Storage Spaces Direct is set up. If your S2D cluster is already running and you are configuring fault domains later, you must manually move your nodes into the correct fault domains, first evicting each node from your cluster and removing its drives from your storage pool using the appropriate cmdlets.
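The cmdlets themselves were elided from the article; a plausible sketch of the eviction step is shown below. The node name "N1", the pool name "S2D Pool", and the PhysicalLocation-based filter are all placeholder assumptions, not taken from the original.

```powershell
# Placeholder names: adjust "S2D Pool" and "N1" for your environment.
$pool  = Get-StoragePool -FriendlyName "S2D Pool"
$disks = $pool | Get-PhysicalDisk | Where-Object { $_.PhysicalLocation -like "*N1*" }

# Retire the node's drives and remove them from the storage pool.
$disks | Set-PhysicalDisk -Usage Retired
Remove-PhysicalDisk -StoragePoolFriendlyName $pool.FriendlyName -PhysicalDisks $disks -Confirm:$false

# Finally, evict the node from the cluster.
Remove-ClusterNode -Name "N1" -Force
```

Wait for the pool to finish repairing between drive removals before evicting the node itself.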

Creating Fault Domains

Once you have deployed your hardware and understand the different layers in your hardware stack, you will need to enable fault domain awareness. Since only a minority of Windows Server deployments span multiple datacenters, this is enabled through a command run from any node rather than through the GUI; Microsoft wanted to avoid inexperienced users accidentally turning it on and expecting behavior their hardware could not support. Fault domains are enabled and defined using PowerShell.
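The original snippet was omitted here; fault domains are declared with the New-ClusterFaultDomain cmdlet, for example (the site name and description are illustrative):

```powershell
# Declare a Site-type fault domain; run from any cluster node.
New-ClusterFaultDomain -Type Site -Name "Primary" -Description "Main datacenter"
```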

Remember that this hardware configuration is hierarchical: nodes are part of chassis, which are stored in racks, which reside in sites. Each node's fault domain takes its actual node name and is set automatically, such as N1.contoso.com. Next, you can define all of the different chassis, racks, and sites in your environment using friendly names and descriptions. This is helpful because your event logs will reflect your naming conventions, making troubleshooting easier.

You can name each of your chassis to match your hardware specs, such as “Chassis 1”.
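A sketch of the elided snippet, assuming the standard cmdlet syntax:

```powershell
# Define a chassis-level fault domain with a friendly name.
New-ClusterFaultDomain -Type Chassis -Name "Chassis 1"
```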

Next you can assign names to your racks, such as “Rack 1”.
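Likewise for racks (a reconstruction, not the article's original snippet):

```powershell
# Define a rack-level fault domain.
New-ClusterFaultDomain -Type Rack -Name "Rack 1"
```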

Finally define any sites you have and provide them with a friendly name, like “Primary” or “Seattle”.
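And for sites (again a reconstruction using the names from the text):

```powershell
# Define a site-level fault domain for each datacenter.
New-ClusterFaultDomain -Type Site -Name "Seattle"
```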

For each of these types, you can also use the -Description or -Location switch to add additional contextual information which is displayed in event logs, making troubleshooting and hardware maintenance easier.

Configuring Fault Domains

Once you have defined the different fault domains, you can configure their hierarchical structure using a parent (and child) relationship. Starting with the node, you define which chassis they belong to and then move up the chain. For example, you may configure nodes N1 and N2 to be part of chassis C1 using PowerShell:
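The snippet was elided; assuming the standard Set-ClusterFaultDomain syntax, it likely resembled:

```powershell
# Place nodes N1 and N2 inside chassis C1.
Set-ClusterFaultDomain -Name "N1","N2" -Parent "C1"
```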

Similarly, you may set chassis C1 and C2 to reside in rack R1:
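A reconstruction of the chassis-to-rack assignment:

```powershell
# Place chassis C1 and C2 inside rack R1.
Set-ClusterFaultDomain -Name "C1","C2" -Parent "R1"
```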

Then configure racks R1 and R2 within the primary datacenter using:
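A reconstruction of the rack-to-site assignment (the site name "Primary" is assumed):

```powershell
# Place racks R1 and R2 inside the Primary site.
Set-ClusterFaultDomain -Name "R1","R2" -Parent "Primary"
```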

To view your configuration and these relationships, run the Get-ClusterFaultDomain cmdlet.

You can also define the relationship of the hardware in an XML file. This method is described in Microsoft’s Fault domain awareness page. If you want to dig deeper, check out the full PowerShell syntax.


Now you are able to take advantage of the latest site-awareness features for Windows Server Failover Clustering, giving you additional resiliency throughout your hardware stack. We’ll have further content focused on this area in the near future, so stay tuned!

Finally, what about you? Do you see this being useful in your organization? Do you see any barrier to implementation? Let us know in the comments section below!

Author: Symon Perriman

CleverTap, Phiture launch customer engagement analytics framework

Customer retention platform CleverTap has collaborated with mobile consultancy Phiture to create a new acknowledgment, interest, and conversion framework to better understand customer interactions.

To enable app companies to understand the depth and intensity of user interactions, the framework categorizes levels of user activity into one of three tiers of engagement: acknowledgment, interest, or conversion (AIC), with conversion holding the highest value. CleverTap intends the customer experience analytics framework to help marketers improve user retention strategies and move customers from the acknowledgment layer through to the conversion layer.

With the AIC framework, marketers can recognize which in-app activities contribute the most to revenue growth. Marketers can choose which customer activities trigger classification into which layer for their specific app.

For the acknowledgment layer, triggers might include launching the app, opening an app-generated email or interacting with push notifications. Interest triggers involve activity that shows more intent but stops short of conversion, like sharing in-app links, scrolling through a news feed or moving through different pages within the app.

The conversion layer triggers include activities that suggest users are committed to the app and have taken actions, such as making a purchase, booking a flight, posting a message to a news feed or completing a level in a game.

Gartner noted customer engagement center interaction analytics as a top priority in its 2019 Hype Cycle for Customer Service and Customer Engagement report. According to Gartner, visualizing the customer journey and predicting customer behavior are essential abilities of service leaders and agents to bolster customer experience and, ultimately, company growth.

CleverTap and Phiture’s framework competes with products such as Google Analytics, Adobe Analytics, Mixpanel and Smartlook that also offer tools and dashboards that break down customer interaction and provide insight for better marketing.

CleverTap claims that typical dashboard metrics such as daily active users and monthly active users are not enough to see a clear picture of engagement and don’t provide actionable insight. The AIC framework incorporates the relative value of actions to inform user engagement strategy, according to Phiture.

The AIC framework accounts for the fact that users may not stay within one level and may move from interest to acknowledgment, or vice versa, or may even exist within multiple layers; tracking this movement in addition to activity contributes to informing and improving campaign strategies, according to the vendors.


PowerShell backup scripts: What are 3 essential best practices?

Although Windows PowerShell is not a backup tool per se, it can be used to create data backups. In fact, there are several PowerShell backup scripts available for download.

For those who may be considering backing up data using PowerShell, there are several best practices to keep in mind.

Don’t use internet scripting as-is

Even though there are some good PowerShell backup scripts available for download, none of those scripts should be used as-is. At the very least, you will probably need to modify the script to instruct PowerShell as to what data should and should not be backed up, and where to save the backup.

Additionally, a script might be designed to create a full backup every time that it is run, as opposed to creating an incremental or differential backup.
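As an illustrative sketch (the paths and the timestamp-file approach are placeholder assumptions, not from the article), a downloaded full-backup script could be adapted to copy only files changed since the last run:

```powershell
# Placeholder paths -- adjust for your environment.
$source = "C:\Data"
$dest   = "D:\Backup"
$stamp  = "D:\Backup\last-run.txt"

# Read the time of the previous run; default to the epoch on first run.
$since = if (Test-Path $stamp) { Get-Date (Get-Content $stamp) } else { [datetime]::MinValue }

# Copy only files modified since the last run, preserving folder structure.
Get-ChildItem $source -Recurse -File |
    Where-Object { $_.LastWriteTime -gt $since } |
    ForEach-Object {
        $target = $_.FullName.Replace($source, $dest)
        New-Item -ItemType Directory -Path (Split-Path $target) -Force | Out-Null
        Copy-Item $_.FullName $target -Force
    }

# Record this run's timestamp for the next incremental pass.
Get-Date -Format o | Set-Content $stamp
```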

Be mindful of permissions

Another best practice for PowerShell backup scripts is to be mindful of permissions. When a PowerShell script runs interactively, that script inherits the permissions of the user who executed the script. It is possible, however, to force PowerShell to get its permissions from a different set of credentials by using the Get-Credential cmdlet.

This brings up an important point. When a script includes the Get-Credential cmdlet, the script will cause Windows to display a prompt asking the user to enter a set of credentials. If a script is designed to be automated, then such behavior is not desirable.

PowerShell makes it possible to export a set of credentials to an encrypted file. The file can then be used to supply credentials to a script. Such a file must, however, be carefully protected. Otherwise, someone could make a copy of the file and use it to supply credentials to other PowerShell scripts.
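One common way to do this (a sketch; the file path is a placeholder) is with Export-Clixml, which on Windows encrypts the password using DPAPI so the file can only be decrypted by the same user on the same machine:

```powershell
# Run once interactively: capture credentials and store them encrypted.
Get-Credential | Export-Clixml -Path "C:\Scripts\backup-cred.xml"

# Inside the automated backup script: load them without prompting.
$cred = Import-Clixml -Path "C:\Scripts\backup-cred.xml"
```

Because decryption is tied to the user and machine, the exported file is useless if copied elsewhere, but it should still be protected with restrictive file permissions.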

Don’t rely on manual script execution

Finally, with PowerShell backup scripts, try not to rely on manual script execution. While there is nothing wrong with running a PowerShell script as a way of creating a one-off backup, a script is likely to be far more useful if it is configured to automatically run on a scheduled basis.

The Windows operating system includes a built-in tool for task automation, called Task Scheduler. By using Task Scheduler, you can automatically execute PowerShell backup scripts on a scheduled basis.
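A scheduled run can also be registered from PowerShell itself using the ScheduledTasks cmdlets (a sketch; the script path and task name are placeholders):

```powershell
# Run the backup script nightly at 2:00 AM.
$action  = New-ScheduledTaskAction -Execute "powershell.exe" `
           -Argument "-NoProfile -ExecutionPolicy Bypass -File C:\Scripts\Backup.ps1"
$trigger = New-ScheduledTaskTrigger -Daily -At 2am
Register-ScheduledTask -TaskName "NightlyBackup" -Action $action -Trigger $trigger
```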
