Tag Archives: around

AI vendors to watch in 2020 and beyond

There are thousands of AI startups around the world. Many aim to do similar things — create chatbots, develop hardware to better power AI models or sell platforms to automatically transcribe business meetings and phone calls.

These AI vendors, or AI-powered product vendors, have raised billions over the last decade, and will likely raise even more in the coming years. Among the thousands of startups, a few shine a little brighter than others.

To help enterprises keep an eye on some of the most promising AI startups, here is a list of those founded within the past five years. The startups listed are all independent companies, meaning they are not subsidiaries of a larger technology vendor. The chosen startups also cater to enterprises rather than consumers, and focus on explainable AI, hardware, transcription and text extraction, or virtual agents.

Explainable AI vendors and AI ethics

As the need for more explainable AI models has skyrocketed over the last couple of years and the debate over ethical AI has reached government levels, the number of vendors developing and selling products to help developers and business users understand AI models has increased dramatically. Two to keep an eye on are DarwinAI and Diveplane.

DarwinAI uses traditional machine learning to probe and understand deep learning neural networks to optimize them to run faster.

Founded in 2017 and based in Waterloo, Ontario, the startup creates mathematical models of the networks, and then uses AI to create a model that infers faster, while claiming to maintain the same general levels of accuracy. While the goal is to optimize the deep learning models, a 2018 update introduced an “explainability toolkit” that offers optimization recommendations for specific tasks. The platform then provides detailed breakdowns on how each task works, and how exactly the optimization will improve them.

Founded in 2017, Diveplane claims to create explainable AI models based on historical data observations. The startup, headquartered in Raleigh, N.C., puts its outputs through a conviction metric that ranks how well new or changed data fits into the model. A low ranking indicates a potential anomaly. A ranking that's too low indicates that the system is highly surprised, and that the data likely doesn't belong in the model's data set.

[Image caption: There are thousands of AI startups in the world today, and it looks like there will be many more over the coming years.]

In addition to the explainability product, Diveplane also sells a product that creates an anonymized digital twin of a data set. It doesn’t necessarily help with explainability, but it does help with issues around data privacy.

According to Diveplane CEO Mike Capps, Diveplane Geminai takes in data, understands it and then generates new data from it without carrying over personal data. In healthcare, for example, the product can input patient data and scrub personal information like names and locations, while keeping the patterns in the data. The outputs can then be fed into machine learning algorithms.

“It keeps the data anonymous,” Capps said.

AI hardware

To help power increasingly complex AI models, more advanced hardware — or at least hardware designed specifically for AI workloads — is needed. Major companies, including Intel and Nvidia, have quickly stepped up to the challenge, but so, too, have numerous startups. Many are doing great work, but one stands out.

Cerebras Systems, a 2016 startup based in Los Altos, Calif., made headlines around the world in 2019 when it created what it dubbed the world’s largest computer chip designed for AI workloads. The chip, about the size of a dinner plate, has some 400,000 cores and 1.2 trillion transistors. By comparison, the largest GPU has around 21.1 billion transistors.

The company has shipped a limited number of chips so far, but with a valuation expected to be well over $1 billion, Cerebras looks to be going places.

Automatic transcription companies

It’s predicted that more businesses will use natural language processing (NLP) technology in 2020 and that more BI and AI vendors will integrate natural language search functions into their platforms in the coming years.

Numerous startups sell transcription and text capturing platforms, as do many established companies. It's hard to judge them, as their platforms and services are generally comparable; however, two companies stand out.

Fireflies.ai sells a transcription platform that syncs with users' calendars to automatically join and transcribe phone meetings. According to CEO and co-founder Krish Ramineni, the platform can transcribe calls with over 90% accuracy after weeks of training.

The startup, founded in 2016, presents transcripts within a searchable and editable platform. The transcription is automatically broken into paragraphs and includes punctuation. Fireflies.ai also automatically extracts and bullets information it deems essential. This feature does “a fairly good job,” one client said earlier this year.

The startup plans to expand that function to automatically label more types of information, including tasks and questions.

Meanwhile, Trint, founded in late 2014 by former broadcast journalist Jeff Kofman, is an automatic transcription platform designed specifically for newsrooms, although it has clients across several verticals.

The platform can connect directly with live video feeds, such as the streaming of important events or live press releases, and automatically transcribe them in real time. Transcriptions are collaborative, as well as searchable and editable, and include embedded time codes to easily go back to the video.

“It’s a software with an emotional response, because people who transcribe generally hate it,” Kofman said.

Bots and virtual agents

As companies look to cut costs and process client requests faster, the use of chatbots and virtual agents has greatly increased across numerous verticals over the last few years. While there are many startups in this field, a couple stand out.

Boost.ai, a Scandinavian startup founded in 2016, sells an advanced conversational agent that it claims is powered by a neural network. Automatic semantic understanding technology sits on top of the network, enabling the agent to read textual input word by word, and then as a whole sentence, to understand user intent.

Agents are pre-trained on one of several verticals before they are trained on the data of a new client, and the Boost.ai platform is quick to set up and has a low count of false positives, according to co-founder Henry Vaage Iversen. It can generally understand the intent of most questions within a few weeks of training, and will find a close alternative if it can’t understand it completely, he said.

The platform supports 25 languages and offers pre-trained modules for a number of verticals, including the banking, insurance and transportation industries.

Formed in 2018, EyeLevel.ai doesn’t create virtual agents or bots; instead, it has a platform for conversational AI marketing agents. The San Francisco-based startup has more than 1,500 chatbot publishers on its platform, including independent developers and major companies.

EyeLevel.ai is essentially a marketing platform — it advertises for numerous clients through the bots in its marketplace. Earlier this year, EyeLevel.ai co-founder Ryan Begley offered an example.

An independent developer on its platform created a bot that quizzes users on their Game of Thrones knowledge. The bot operates on social media platforms, and, besides providing a fun game for users, it also collects marketing data on them and advertises products to them. The data it collects is fed back into the Eyelevel platform, which then uses it to promote through its other bots.

By opening the platform to independent developers, EyeLevel.ai gives individuals a chance to get their bots in front of a broader audience while making some extra cash. EyeLevel.ai offers tools to help new bot developers get started, too.

“Really, the key goal of the business is help them make money,” Begley said of the developers.

Startup launches continue to surge

This list of AI-related startups represents only a small percentage of the startups out there. Many offer unique products and services to their clients, and investors have widely picked up on that.

According to the comprehensive AI Index 2019 report, a nearly 300-page report on AI trends compiled by the Human-Centered Artificial Intelligence initiative at Stanford University, global private AI investment in startups reached $37 billion in 2019 as of November.

The report notes that since 2010, which saw $1.3 billion raised, investments in AI startups have increased at an average annual growth rate of over 48%.

The report, which considered only AI startups with more than $400,000 in funding, also found that more than 3,000 AI startups received funding in 2018. That number is on the rise, the report notes.


Preserving cultural heritage one language at a time – Microsoft on the Issues

There are close to 7,000 languages spoken around the world today. Yet, sadly, every two weeks a language dies with its last speaker, and it is predicted that between 50% and 90% of endangered languages will disappear by the next century. When a community loses a language, it loses its connection to the past – and part of its present. It loses a piece of its identity. As we think about protecting this heritage and the importance of preserving language, we believe that new technology can help.

More than many nations, the people of New Zealand are acutely aware of this phenomenon. Centuries ago, the Māori people arrived on the islands to settle in and create a new civilization. Through the centuries and in the isolation of the South Pacific, the Māori developed their own unique culture and language. Today, in New Zealand, 15% of the population is Māori, yet only a quarter of the Māori people speak their native language, and only 3% of all people living in New Zealand speak te reo Māori. Statistically, fluency in the language is extremely low.

New Zealand and its institutions have taken notice and are actively taking steps to promote the use of te reo Māori in meaningful ways. More and more schools are teaching te reo Māori, and city councils are revitalizing the country's indigenous culture by giving new, non-colonial names to sites around their cities. Prime Minister Jacinda Ardern has promoted the learning of te reo Māori, calling for 1 million new speakers by 2040. In a simple, yet profound, statement Ardern said, "Māori language is a part of who we are." Despite all these efforts, fluency in te reo Māori remains low today.

For the past 14 years, Microsoft has been collaborating with te reo Māori experts and Te Taura Whiri i te Reo Māori (the Māori Language Commission) to weave te reo Māori into the technology that thousands of Kiwis use every day with the goal of ensuring it remains a living language with a strong future. Our collaboration has already resulted in translations of Minecraft educational resources and we recently commissioned a game immersed entirely in the traditional Māori world, Ngā Motu (The Islands).

To focus only on shaping the future ignores the value of the past, as well as our responsibility to preserve and celebrate te reo Māori heritage. This is why we are proud to announce the inclusion of te reo Māori as a language officially recognized in our free Microsoft Translator app. Microsoft Translator supports more than 60 languages, which means the free application can translate te reo Māori text into English and vice versa, as well as between Māori and every other language Microsoft Translator supports. This is really all about breaking the language barrier at home, at work, anywhere you need it.
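
For anyone who wants to build on the same capability programmatically, the Translator Text API (v3) exposes te reo Māori under the language code "mi". Below is a minimal PowerShell sketch; the subscription key, region and sample sentence are placeholders, not values from this announcement.

# Sketch: translate English into te reo Maori with the Microsoft Translator Text API v3.
# The key, region and sample text are placeholders (assumptions) to replace with your own.
$key    = '<your-translator-resource-key>'
$region = '<your-resource-region>'   # only required for regional Translator resources

$uri = 'https://api.cognitive.microsofttranslator.com/translate?api-version=3.0&from=en&to=mi'
$headers = @{
    'Ocp-Apim-Subscription-Key'    = $key
    'Ocp-Apim-Subscription-Region' = $region
}
$body = '[{"Text": "The language we speak is the heart of our culture."}]'

$response = Invoke-RestMethod -Method Post -Uri $uri -Headers $headers -Body $body -ContentType 'application/json'
$response[0].translations[0].text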

Dr. Te Taka Keegan, senior lecturer of computer science at the University of Waikato and one of the many local experts who have helped guide the project from its inception, says: “The language we speak is the heart of our culture. The development of this Māori language tool would not have been possible without many people working towards a common goal over many years. We hope our work doesn’t simply help revitalize and normalize te reo Māori for future generations of New Zealanders, but enables it to be shared, learned and valued around the world. It’s very important for me that the technology we use reflects and reinforces our cultural heritage, and language is the heart of that.”

Te reo Māori support will employ Microsoft's Neural Machine Translation (NMT) techniques, which can be more accurate than statistical translation models. We recently achieved human parity in translating news from Chinese to English, and the advanced machine learning used for te reo Māori will continue to improve as more documents are used to "teach" it every nuance of the language. This technology will be leveraged across all our M365 products and services.

But while the technology is exciting, it’s not the heart of this story. This is about collaborating to develop the tools that boost our collective well-being. New Zealand’s government is also spearheading a “well-being” framework for measuring a nation’s progress in ways that don’t solely reflect economic growth. We need to look at cultural heritage the same way. Preserving our cultural heritage isn’t just a “nice thing to do” – according to the U.N., it’s vital to our resilience, social cohesion and sense of belonging, celebrating the values and stories we have in common.

I was fortunate to visit New Zealand this year, and it is a country that is genuinely working to achieve a delicate cultural balance, one that keeps in mind growth as well as guardianship, which maintains innovation and a future focus whilst preserving a deep reverence for its past. This kind of balance is something all nations should be striving for.

Globally, as part of our AI for Cultural Heritage program, Microsoft has committed $10 million over five years to support projects dedicated to the preservation and enrichment of cultural heritage that leverage the power of artificial intelligence. The ultimate role of technology is to serve humankind, not to replace it. We can harness the latest tools in ways that support an environment rich in diversity, perspectives and learnings from the past. And when we enable that knowledge and experience to be shared with the rest of the world, every society benefits.

For more information on Microsoft Translator please visit: https://www.microsoft.com/translator


Author: Microsoft News Center

DreamWorks animates its cloud with NetApp Data Fabric

Although it’s only been around since 2001, DreamWorks Animation has several blockbuster movies to its credit, including How to Train Your Dragon, Kung Fu Panda, Madagascar and Shrek. To get the finished product ready for the big screen, digital animators at the Hollywood, Calif., studio share huge data sets across the internal cloud, built around NetApp Data Fabric and its other storage technologies.

An average film project takes several years to complete and involves up to tens of thousands of data sets. At each stage of production, different animation teams access the content to add to or otherwise enhance the digital images, with the cloud providing the connective tissue. The “lather, rinse, repeat” process occurs up to 600 times per frame, said Skottie Miller, a technology fellow at the Los Angeles-area studio.

“We don’t make films — we create data. Technology is our paintbrush. File services and storage is our factory floor,” Miller told an audience of NetApp users recently.

‘Clouds aren’t cheap’

DreamWorks has a mature in-house cloud that has evolved over the years. In addition to NetApp file storage, the DreamWorks cloud incorporates storage kits from Hewlett Packard Enterprise (HPE). The production house runs the Qumulo Core file system on HPE Apollo servers and uses HPE Synergy composable infrastructure for burst compute, networks and storage.

Miller said DreamWorks views its internal cloud as a “lifestyle, a way to imagine infrastructure” that can adapt to rapidly changing workflows.

“Clouds aren’t magic and they’re not cheap. What they are is capable and agile,” Miller said. “One of the things we did was to start acting like a cloud by realizing what the cloud is good at: [being] API-driven and providing agile resources on a self-service basis.”


DreamWorks set up an overarching virtual studio environment that provides production storage, analytics on the storage fabric and automated processes. The studio deploys NetApp All-Flash FAS to serve hot data and NetApp FlexCache for horizontal scale-out across geographies, especially for small files.

The DreamWorks cloud relies on various components of the NetApp Data Fabric. NetApp FAS manages creative workflows. NetApp E-Series block storage is used for postproduction. NetApp HCI storage (based on SolidFire all-flash arrays) serves Kubernetes clusters and a virtual machine environment.

To retire tape backups, DreamWorks added NetApp StorageGrid as back-end object storage with NetApp FabricPool tiering for cold data. The company uses NetApp SnapMirror to get consistent point-in-time snapshots. Along with StorageGrid, Miller said DreamWorks has adopted NetApp Data Availability Services (NDAS) to manage OnTap file storage across hybrid clouds.

“NDAS has an interesting characteristic. Imagine a cloud thumb drive with a couple hundred terabytes. You can use it to guard against cyberattacks or environmental disasters, or to share data sets with somebody else,” Miller said.

The need for storage analytics

The sheer size of the DreamWorks cloud — a 20 PB environment with more than 10,000 pieces of media — underscored the necessity for deep storage analytics, Miller said.

“We rely on OnTap automation for our day-to-day provisioning and for quality of service,” he said.

In addition to being a NetApp customer, DreamWorks has partnered with NetApp to further the development of Data Fabric innovations.

A DreamWorks cloud controller helps inform development of NetApp Data Fabric under a co-engineering agreement. The cloud software invokes APIs in NetApp Kubernetes Service.

The vendor and customer have joined forces to build an OnTap analytics hub that streams telemetry data in real time to pinpoint anomalies and automatically open service tickets. DreamWorks relies on open source tools that it connects to OnTap using NetApp APIs.


For Sale – EK Supremacy Evo waterblock

Got these just lying around so looking to get rid of them.

EK Supremacy Evo waterblock, nickel plexi, Intel fittings only; the bolts have been painted black.
£30ono

EK Supremacy waterblock, full nickel top with a copper base plate, AMD fittings only, no other bits. £20 ono delivered SOLD

XSPC EX240 radiator in white, used, could do with a repaint. £25 ono delivered SOLD

EK LAING D5 pump top combi (no pump) with white top. £25ono delivered. SOLD


Azure AD + F5—helping you secure all your applications


Howdy folks,

We often hear from our customers about the complexities around providing seamless and secure user access to their applications—from cloud SaaS applications to legacy on-premises applications. Based on your feedback, we’ve worked to securely connect any app, on any cloud or server—through a variety of methods. And today, I’m thrilled to announce our deep integration with F5 Networks that simplifies secure access to your legacy applications that use protocols like header-based and Kerberos authentication.

By centralizing access to all your applications, you can leverage all the benefits that Azure AD offers. Through the F5 and Azure AD integration, you can now protect your legacy-auth based applications by applying Azure AD Conditional Access policies to leverage our Identity Protection engine to detect user risk and sign-in risk, as well as manage and monitor access through our identity governance capabilities. Your users can also gain single sign-on (SSO) and use passwordless authentication to these legacy-auth based applications.

To help you get started, we made it easier to publish these legacy-auth based applications by making the F5 BIG-IP Access Policy Manager (APM) available in the Azure AD app gallery. You can learn how to configure your legacy-auth based applications by reviewing our documentation for each app type and scenario.

As always, let us know your feedback, thoughts, and suggestions in the comments below, so we can continue to build capabilities that help you securely connect any app, on any cloud, for every user.

Best regards,

Alex Simons (@Alex_A_Simons)

Corporate VP of Program Management

Microsoft Identity Division

Author: Microsoft News Center

How to Create and Manage Hot/Cold Tiered Storage

When I was working in Microsoft's File Services team around 2010, one of the primary goals of the organization was to commoditize storage and make it more affordable to enterprises. Legacy storage vendors offered expensive products that often consumed a majority of the IT department's budget, and they were slow to make improvements because customers were locked in. Since then, every release of Windows Server has included storage management features which were previously only provided by storage vendors, such as deduplication, replication, and mirroring. These features could be used to manage commodity storage arrays and disks, reducing costs and eliminating vendor lock-in. Windows Server now offers a much-requested feature: the ability to move files between different tiers of "hot" (fast) storage and "cold" (slow) storage.

Managing hot/cold storage is conceptually similar to computer memory cache but at an enterprise scale. Files which are frequently accessed can be optimized to run on the hot storage, such as faster SSDs. Meanwhile, files which are infrequently accessed will be pushed to cold storage, such as older or cheaper disks. These lower priority files will also take advantage of file compression techniques like data deduplication to maximize storage capacity and minimize cost. Identical or varying disk types can be used because the storage is managed as a pool using Windows Server’s storage spaces, so you do not need to worry about managing individual drives. The file placement is controlled by the Resilient File System (ReFS), a file system which is used to optimize and rotate data between the “hot” and “cold” storage tiers in real-time based on their usage. However, using tiered storage is only recommended for workloads that are not regularly accessed. If you have permanently running VMs or you are using all the files on a given disk, there would be little benefit in allocating some of the disk to cold storage. This blog post will review the key components required to deploy tiered storage in your datacenter.

Overview of Resilient File System (ReFS) with Storage Tiering

The Resilient File System was first introduced in Windows Server 2012 with support for limited scenarios, but it has been greatly enhanced through the Windows Server 2019 release. It was designed to be efficient, support multiple workloads, avoid corruption and maximize data availability. More specific to tiering, though, ReFS divides the pool of storage into two tiers automatically: one for high-speed performance and one for maximizing storage capacity. The performance tier receives all the writes on the faster disk for better performance. If those new blocks of data are not frequently accessed, the files will gradually be moved to the capacity tier. Reads will usually happen from the capacity tier, but can also happen from the performance tier as needed.

Storage Spaces Direct and Mirror-Accelerated Parity

Storage Spaces Direct (S2D) is one of Microsoft’s enhancements designed to reduce costs by allowing servers with Direct Attached Storage (DAS) drives to support Windows Server Failover Clustering. Previously, highly-available file server clusters required some type of shared storage on a SAN or used an SMB file share, but S2D allows for small local clusters which can mirror the data between nodes. Check out Altaro’s blog on Storage Spaces Direct for in-depth coverage on this technology.

With Windows Server 2016 and 2019, S2D offers mirror-accelerated parity, which is used for tiered storage but is generally recommended for backups and less frequently accessed files, rather than heavy production workloads such as VMs. In order to use tiered storage with ReFS, you will use mirror-accelerated parity. This provides decent storage capacity by using both mirroring and a parity drive to help prevent and recover from data loss. In the past, mirroring and parity would conflict and you would usually have to select one or the other. Mirror-accelerated parity works with ReFS by taking writes and mirroring them (hot storage), then using parity to optimize their storage on disk (cold storage). By switching between these storage optimization techniques, ReFS provides admins with the best of both worlds.

Creating Hot and Cold Tiered Storage

When configuring hot and cold storage, you define the ratio between the hot and cold tiers. For most workloads, Microsoft recommends allocating 20% to hot and 80% to cold. If you are running high-performance workloads, consider allocating more hot storage to support more writes. On the flip side, if you have a lot of archival files, then allocate more cold storage. Remember that with a storage pool you can combine multiple disk types under the same abstracted storage space. The following PowerShell cmdlets show you how to configure a 1,000 GB disk to use 20% (200 GB) for performance (hot storage) and 80% (800 GB) for capacity (cold storage).
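
A minimal sketch of that configuration on an S2D cluster follows, assuming a pool whose friendly name matches 'S2D*' and the default 'Performance' and 'Capacity' tier templates (confirm both with Get-StoragePool and Get-StorageTier before running it):

# Create a 1,000 GB mirror-accelerated parity volume: 200 GB mirror (hot) and 800 GB parity (cold).
# The pool and tier names below are assumptions; adjust them to match your environment.
New-Volume -FriendlyName 'TieredVolume' `
    -FileSystem CSVFS_ReFS `
    -StoragePoolFriendlyName 'S2D*' `
    -StorageTierFriendlyNames Performance, Capacity `
    -StorageTierSizes 200GB, 800GB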

Managing Hot and Cold Tiered Storage

If you want to increase the performance of your disk, then you will allocate a greater percentage of the disk to the performance (hot) tier. In the following example we use the PowerShell cmdlets to create a 30:70 ratio between the tiers:
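
A sketch of that resize, assuming the per-volume tiers created above are named 'TieredVolume-Performance' and 'TieredVolume-Capacity' (list the actual names with Get-StorageTier first):

# List the current tier names and sizes before resizing.
Get-StorageTier | Select-Object FriendlyName, Size

# Resize the tiers of the 1,000 GB volume to a 30:70 split; the tier names are assumptions.
Resize-StorageTier -FriendlyName 'TieredVolume-Performance' -Size 300GB
Resize-StorageTier -FriendlyName 'TieredVolume-Capacity' -Size 700GB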

Unfortunately, this resizing only changes the ratio between the tiers but does not change the size of the underlying partition or volume, so you will likely also want to resize those with the corresponding cmdlets, such as Resize-Partition.

Optimizing Hot and Cold Storage

Based on the types of workloads you are using, you may wish to further optimize when data is moved between hot and cold storage, which is known as the "aggressiveness" of the rotation. By default, the hot storage will wait until 85% of its capacity is full before it begins to send data to the cold storage. If you have a lot of write traffic going to the hot storage, then you want to reduce this value so that performance-tier data gets pushed to the cold storage quicker. If you have fewer write requests and want to keep data in hot storage longer, then you can increase this value. Since this is an advanced configuration option, it must be configured via the registry on every node in the S2D cluster, and it also requires a restart. Here is a sample script to run on each node if you want to change the aggressiveness so that it swaps files when the performance tier reaches 70% capacity:
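
A per-node sketch follows; the registry value name is drawn from Microsoft's mirror-accelerated parity guidance and should be treated as an assumption to verify against current documentation before use.

# Lower the rotation threshold from the default 85% to 70% on this node.
# Verify the value name (assumed here to be DataDestageSsdFillRatioThreshold) in the official docs.
Set-ItemProperty -Path 'HKLM:\SYSTEM\CurrentControlSet\Policies' `
    -Name 'DataDestageSsdFillRatioThreshold' -Value 70 -Type DWord

# A reboot of the node is required before the new threshold takes effect.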

You can apply this setting cluster-wide by using the following cmdlet:
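
For example, a sketch that pushes the same change to every node with Invoke-Command (again, verify the registry value name first, and reboot one node at a time afterward):

# Apply the threshold change on all cluster nodes in one pass.
# Run from a node with the FailoverClusters module available.
$nodes = (Get-ClusterNode).Name
Invoke-Command -ComputerName $nodes -ScriptBlock {
    Set-ItemProperty -Path 'HKLM:\SYSTEM\CurrentControlSet\Policies' `
        -Name 'DataDestageSsdFillRatioThreshold' -Value 70 -Type DWord
}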

NOTE: If this is applied to an active cluster, make sure that you reboot one node at a time to maintain service availability.

Wrap-Up

Now you should be fully equipped with the knowledge to optimize your commodity storage using the latest Windows Server storage management features. You can pool your disks with storage spaces, use storage spaces direct (S2D) to eliminate the need for a SAN, and ReFS to optimize the performance and capacity of these drives.  By understanding the tradeoffs between performance and capacity, your organization can significantly save on storage management and hardware costs. Windows Server has made it easy to centralize and optimize your storage so you can reallocate your budget to a new project – or to your wages!

What about you? Have you tried any of the features listed in the article? Have they worked well for you? Have they not worked well? Why or why not? Let us know in the comments section below!


Author: Symon Perriman

Global cryptomining attacks use NSA exploits to earn Monero

A new threat group has launched cryptomining attacks around the globe and is using exploits from the National Security Agency to spread its malware.

The threat group, dubbed ‘Panda,’ was revealed this week in a new report from Cisco Talos. Christopher Evans and Dave Liebenberg, threat researcher and head of strategic intelligence, respectively, at Cisco Talos, wrote that although the group is “far from the most sophisticated” it has been very active and willing to “update their infrastructure and exploits on the fly as security researchers publicize indicators of compromises and proof of concepts.”

“Panda’s willingness to persistently exploit vulnerable web applications worldwide, their tools allowing them to traverse throughout networks, and their use of RATs, means that organizations worldwide are at risk of having their system resources misused for mining purposes or worse, such as exfiltration of valuable information,” Evans and Liebenberg wrote in a blog post. “Our threat traps show that Panda uses exploits previously used by Shadow Brokers and Mimikatz, an open-source credential-dumping program.”

The NSA exploits include EternalBlue, which attacks a vulnerability in Microsoft’s Server Message Block (SMB) protocol. The researchers first became aware of Panda’s cryptomining attacks in the summer of 2018 and told SearchSecurity that over the past year they’ve seen daily activity in the organization’s honeypots.

“We see them in several of our honeypots nearly every day, which tells me they’re targeting a large portion of the internet,” Evans said. “Our honeypots are deployed throughout the world, and I’ve never seen a geographic focus of their attacks in the data. The applications they target are widely deployed, and without patching are easy targets.”

Since January, the researchers saw Panda’s cryptomining attacks changing by targeting different vulnerabilities — first a ThinkPHP web framework issue, then an Oracle WebLogic flaw — and using new infrastructure both in March and again over the past month.

“They also frequently update their targeting, using a variety of exploits to target multiple vulnerabilities, and [are] quick to start exploiting known vulnerabilities shortly after public POCs become available, becoming a menace to anyone slow to patch,” the researchers wrote. “And, if a cryptocurrency miner is able to infect your system, that means another actor could use the same infection vector to deliver other malware.”

Liebenberg told SearchSecurity, “It appears that instead of employing good OpSec they focus on volume. That’s one reason why they’ll keep using old, burned infrastructure while still deploying new ones.” 

Evans and Liebenberg said in their research that the Panda group has made approximately 1,215 Monero (a cryptocurrency that emphasizes privacy), which equates to almost $100,000 today. One Monero is currently equal to $78, but the value of Monero has fluctuated — beginning the year around $50 and peaking over $110 in June.

The researchers have confirmed Panda cryptomining attacks against organizations in the banking, healthcare, transportation, telecommunications and IT services industries. Evans and Liebenberg also told SearchSecurity that the best way for organizations to detect if they have been attacked would be to “look for prolonged high system utilization, connections to mining pools using common mining ports (3333, 4444), watching for common malware persistence mechanisms, watching for DNS traffic to known mining pools and enabling the appropriate rules in your IDS.”
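
The port-based part of that advice can be illustrated with a short PowerShell check on a Windows host. This is only a rough heuristic sketch, since legitimate services can use these ports and miners can be configured to use others:

# List established outbound connections to the common mining-pool ports 3333 and 4444.
# Treat any hits as a starting point for investigation, not proof of compromise.
Get-NetTCPConnection -State Established |
    Where-Object { $_.RemotePort -in 3333, 4444 } |
    Select-Object LocalAddress, RemoteAddress, RemotePort, OwningProcess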


How to transfer FSMO roles with PowerShell

In the process of migrating to a new server or spreading the workload around, you might need to transfer FSMO roles in Active Directory from one domain controller to another.

AD relies on a concept called flexible single master operation roles, commonly referred to as FSMO roles. Domain controllers in an AD forest and domain hold one or more of these roles that handle different duties, such as keeping the AD schema in sync and synchronizing passwords across all domain controllers. You might need to spread these roles to other domain controllers to make AD operate more efficiently. As is the case when managing a Windows shop, you can manage much of your infrastructure either through the GUI or with PowerShell. There is no right or wrong way, but a script can be customized and reused, which saves some time and effort.

It's not always easy to figure out which domain controller holds a particular role since FSMO roles tend to get spread out among various domain controllers. Then, once you've found the FSMO role, you need to juggle multiple windows if you try to manage them with the GUI. However, with PowerShell you can both find where these FSMO roles live and easily move them to any domain controller with a script.

Before you get started

Before you can find and move FSMO roles with PowerShell, be sure to install the Remote Server Administration Tools (RSAT), which include the AD module. The computer you use PowerShell on should be joined to the domain, and you should have the appropriate permissions to move FSMO roles.

Use PowerShell to find FSMO roles

It’s not necessary to find the FSMO role holders before moving them, but it’s helpful to know the state before you make these types of significant changes.

There are two PowerShell commands we'll use first to find the FSMO roles: Get-ADDomain and Get-ADForest. You need to use both commands since some FSMO holders reside at the forest level and some at the domain level. The AD module contains these cmdlets, so if you have that installed, you're good to go.

First, you can find all the domain-based FSMO roles with the Get-ADDomain command. Since Get-ADDomain returns a lot more than just the FSMO role holders, you can reduce the output a bit with Select-Object:

Get-ADDomain | Select-Object InfrastructureMaster,PDCEmulator,RIDMaster | Format-List

[Embedded video: Migrating an AD domain controller]

This command returns all the domain-based roles, including the Primary Domain Controller (PDC) emulator and Relative Identifier (RID) master, but we need to find the forest-level FSMO roles called domain naming master and schema master. For these FSMO roles, you need to use the Get-ADForest command.

Since Get-ADForest returns other information besides the FSMO role holders, limit the output using Select-Object to find the role holders we want.

Get-ADForest | Select-Object DomainNamingMaster,SchemaMaster | Format-List

How to transfer FSMO roles


To save some time in the future, you can write a PowerShell function called Get-ADFSMORole that returns the FSMO role holders at the domain and the forest level in one shot.

function Get-ADFSMORole {
    [CmdletBinding()]
    param ()

    # Domain-level role holders: Infrastructure Master, PDC Emulator, RID Master
    Get-ADDomain | Select-Object InfrastructureMaster,PDCEmulator,RIDMaster
    # Forest-level role holders: Domain Naming Master, Schema Master
    Get-ADForest | Select-Object DomainNamingMaster,SchemaMaster
}

Now that you have a single function to retrieve all the FSMO role holders, you can get to the task of moving them. To do that, call the function you made and assign a before state to a variable.

$roles = Get-ADFSMORole

With all the roles captured in a variable, you can transfer FSMO roles with a single command called Move-ADDirectoryServerOperationMasterRole. This command just handles moving FSMO roles. You can move each role individually by looping over each role name and calling the command, or you could do them all at once. Both methods work depending on how much control you need.

$destinationDc = 'DC01'

## Method 1: move each role individually
'DomainNamingMaster','PDCEmulator','RIDMaster','SchemaMaster','InfrastructureMaster' | ForEach-Object {
    Move-ADDirectoryServerOperationMasterRole -OperationMasterRole $_ -Identity $destinationDc
}

## Method 2: move all five roles in a single call
Move-ADDirectoryServerOperationMasterRole -OperationMasterRole DomainNamingMaster,PDCEmulator,RIDMaster,SchemaMaster,InfrastructureMaster -Identity $destinationDc

After you run the command, use the custom Get-ADFSMORole function created earlier to confirm the roles now reside on the new domain controller.
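
For example, comparing the state captured earlier with the current output makes the move easy to verify:

# Role holders captured before the transfer
$roles | Format-List

# Current role holders; all five roles should now point at the new domain controller
Get-ADFSMORole | Format-List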


Four steps to get involved in the Microsoft Office Specialist World Championship | | Microsoft EDU

Students around the world are using their Microsoft Office Specialist (MOS) certification to show colleges and future employers that they have a true mastery of the Microsoft Office suite. In fact, some talented students even go on to compete in a world competition for Microsoft Office.

Each year, Certiport, a Pearson VUE business, and the leading provider of learning curriculum, practice tests and performance-based IT certification exams that accelerate academic and career opportunities, hosts the MOS World Championship. This event is a global competition, testing top students’ skills on Microsoft Office Word, Excel and PowerPoint.

Are you a hard-working student, looking to show the world your Microsoft Office skills? Check out these four easy steps below to find out how you can get involved in the Microsoft Office Specialist Championship.

  1. Learn. Before you’re ready to compete, make sure you’re a master of the Microsoft Office Suite. Certiport has collaborated with multiple learning partners to make preparation easy. You can learn the skills you need to earn a top score on your MOS exam.
  2. Practice. Now that you’ve expanded your knowledge, it’s time to apply it. You can hone your Microsoft Office skills using various practice exams. These performance-based assessment and test preparation tools will prepare you to earn your MOS certification by creating a true “live in the app” experience. You’ll be a master in no time, because you’ll be practicing skills as if in the Microsoft Office application.
  3. Certify. You’re ready to show your skills! Microsoft Office Specialist exams are only delivered in Certiport Authorized Testing Centers. However, with more than 14,000 testing centers worldwide, there’s bound to be one close by. Find a testing center near you, and don’t forget to reach out to the testing center to schedule an appointment. Make sure to push for a score over 700 to be eligible for the MOS World Championship!
  4. Compete. If you’ve earned a top score, then the MOS Championship is your next step. Qualification is simple. When you take a Microsoft Office Specialist exam in Word, Excel or PowerPoint, you’ll automatically enter the MOS Championship and could be chosen to represent your country.

To represent your country at the MOS World Championship, you’ll need to first be named your nation’s champion by competing in a regional competition hosted by Certiport’s network of Authorized Partners. You can see the full list of partners and nations that compete here. In addition, each country has its own selection process, so make sure to connect with your local Partner to find out how you can prepare to compete in the MOS World Championship in 2020.

Interested in learning more about the MOS World Championship? Connect with us at [email protected].


Author: Microsoft News Center

Accelerating innovation with Dynamics 365 and the Power Platform – Dynamics 365 Blog

Every year, we bring together thousands of Microsoft partners from around the world to celebrate, connect, and learn about what’s coming in the next year. Just last week, I had the chance to join this year’s Microsoft Inspire with 13,000 of our partners in Las Vegas, and it was an amazing week full of energy and excitement.

I had the opportunity to share how we’re accelerating innovation with Dynamics 365 and the Power Platform in two featured sessions. Hayden Stafford joined me to talk about the business opportunity in FY20 for Business Applications, and Charles Lamanna and Arun Ulagaratchagan showed off the innovation and partner investments across the Power Platform. In both sessions, partners from Hitachi Solutions, VeriPark, WorkSpan, and EY joined me on stage to talk about the opportunities they’re seeing in the market. Check out the session recordings to see more.

We also shared a few major announcements with the partner community throughout the week that will deepen our partnership, align with how our business is growing, and help customers realize the potential of digital transformation. Take a look at a few highlights below.

Launching the new Microsoft Business Applications ISV Connect Program

ISVs that build on Microsoft Dynamics 365 and PowerApps can offer their customers innovative apps they couldn’t build elsewhere using connections to industry-leading cloud services on Azure and developer tools built just for them.

After previewing elements of the program in the spring, we officially launched the Business Applications ISV Connect Program on July 15th. This program brings together platform and program benefits created specifically for ISVs to support their success. We’re laying a path for a stronger partnership with our ISVs through ongoing platform investments, development tooling enhancements, and go-to-market support for mutual success. Read Steven Guggenheimer’s blog for more information about the program’s requirements and benefits.

Expanding our Dynamics 365 Industry Accelerators

We know our partners want to go to market fast. Our industry accelerators help them do just that.

At Inspire, we introduced two new industry accelerators: the Dynamics 365 Automotive Accelerator and Dynamics 365 Banking Accelerator. We also announced significant updates to the Dynamics 365 Nonprofit Accelerator.

Each of these accelerators was developed in deep partnership with industry experts to provide pre-built dashboards, sample data, and workflows that align with common industry scenarios. Steven Guggenheimer’s blog has more on this update as well.

Updating our Dynamics 365 packaging model

In October, Microsoft will be moving from “one-size-fits-all” Microsoft Dynamics 365 licensing plans to focus on providing customers with the specific Dynamics 365 applications that meet the unique needs of their organization. Customers need software that aligns to their functional roles and scenarios, and they require the ability to add or remove applications as their company grows and changes over time. The new licensing model will allow customers to purchase the applications they need, when they need them. Each application is extensible and applications can be easily mixed and matched to configure integrated solutions that align to a customer’s unique business requirements.

To learn more about these new options for your customers, check out the Inspire sessions “Customer Engagement Licensing Updates” and “Unified Operations Licensing Updates.”

Introducing new licensing options for PowerApps and Microsoft Flow

Over the past few months, we’ve announced a set of continued investments in the Power Platform and vision for one connected platform that enables everyone, regardless of their skill set or background, to innovate.

In addition to the general availability of innovations like AI Builder and PowerApps Portals, we will be introducing new licensing options in October 2019 shaped by the feedback we’ve received from our customers and the community of makers and creators who have been with us on the Power Platform journey. At Inspire we previewed new licensing options for PowerApps and Microsoft Flow that:

  • Make it easier to get started with a single use case before rolling out the full platform for all users.
  • Make licensing easier to understand by simplifying the complex feature-level differences between P1 and P2 plans.

To learn more about these new options for your customers, check out the Inspire session Microsoft Power Platform business model and licensing updates or read the PowerApps community blog post with additional details.

Customers realizing the potential of digital transformation

Throughout the event, we heard inspiring stories from customers and partners around the world.

One highlight came in Monday’s Corenote with Judson Althoff. We saw how Unilever is transforming its business with Microsoft technology, including Power BI and PowerApps. With connected data at the core, Unilever is able to increase productivity with more accessible insights and empower individuals to solve problems on their own, without a developer.

On Wednesday, Satya Nadella talked about Crane Worldwide Logistics, a Dynamics 365 customer who grew from a startup to a major player in the global logistics industry over the past 10 years. With Dynamics 365 for Sales and LinkedIn Sales Navigator, Crane can seamlessly combine customer engagement data with data about customer and seller activities from Office 365, as well as from LinkedIn – even when these interactions take place outside their CRM. And with Dynamics 365 Sales Insights, sellers are able to focus on the right customers, thanks to AI-driven insights that flag existing accounts that need extra attention, as well as leads that offer the most potential.

These are just two of the stories we shared at Microsoft Inspire this year. If you missed any part of this year’s event, take some time to check out the content on-demand to learn how you can build success in the coming year.

Author: Microsoft News Center