
Creating a more accessible world with Azure AI

At Microsoft, we are inspired by how artificial intelligence is transforming organizations of all sizes, empowering them to reimagine what’s possible. AI has immense potential to unlock solutions to some of society’s most pressing challenges.

One challenge is that, according to the World Health Organization, globally only 1 in 10 people with a disability have access to assistive technologies and products. We believe that AI solutions can have a profound impact on this community. To meet this need, we aim to democratize AI to make it easier for every developer to build accessibility into their apps and services, across language, speech, and vision.

Ahead of the upcoming Bett Show in London, we’re shining a light on how Immersive Reader enhances reading comprehension for people regardless of their age or ability, and we’re excited to share how Azure AI is broadly enabling developers to build accessible applications that empower everyone.

Empowering readers of all abilities

Immersive Reader is an Azure Cognitive Service that helps users of any age and reading ability with features like reading aloud, translating languages, and focusing attention through highlighting and other design elements. Millions of educators and students already use Immersive Reader to overcome reading and language barriers.

The Young Women’s Leadership School of Astoria, New York, brings together an incredible diversity of students with different backgrounds and learning styles. The teachers at The Young Women’s Leadership School support many types of learners, including students who struggle with text comprehension due to learning differences, or language learners who may not understand the primary language of the classroom. The school wanted to empower all students, regardless of their background or learning styles, to grow their confidence and love for reading and writing.


Watch the story here

Teachers at The Young Women’s Leadership School turned to Immersive Reader and an Azure AI partner, Buncee, as they looked for ways to create a more inclusive and engaging classroom. Buncee enables students and teachers to create and share interactive multimedia projects. With the integration of Immersive Reader, students who are dyslexic can benefit from features that help focus attention in their Buncee presentations, while those who are just learning English can have content translated into their native language.

Like Buncee, companies including Canvas, Wakelet, ThingLink, and Nearpod are also making content more accessible with Immersive Reader integration. To see the entire list of partners, visit our Immersive Reader Partners page. Discover how you can start embedding Immersive Reader into your apps today. To learn more about how Immersive Reader and other accessibility tools are fostering inclusive classrooms, visit our EDU blog.
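For developers curious what that embedding step involves, the sketch below assembles the structured content payload that the Immersive Reader JavaScript SDK’s launchAsync call accepts, shown in Python for illustration. Treat the field names (title, chunks, content, lang, mimeType) as assumptions to verify against the current SDK reference.

```python
# Sketch: building the content payload for Immersive Reader.
# Field names follow the SDK's chunk schema as best recalled here;
# verify against the current Immersive Reader documentation.

def build_reader_content(title, paragraphs, lang="en"):
    """Assemble a title plus plain-text chunks for Immersive Reader."""
    return {
        "title": title,
        "chunks": [
            {"content": text, "lang": lang, "mimeType": "text/plain"}
            for text in paragraphs
        ],
    }

payload = build_reader_content("Chapter 1", ["Once upon a time...", "The end."])
```

In a real integration, this payload would be serialized to JSON and passed to launchAsync along with an Azure AD token and the resource’s subdomain.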

Breaking communication barriers

Azure AI is also making conversations, lectures, and meetings more accessible to people who are deaf or hard of hearing. With conversations transcribed and translated in real time, individuals can follow and fully engage with presentations.

The Balavidyalaya School in Chennai, Tamil Nadu, India teaches speech and language skills to young children who are deaf or hard of hearing. The school recently held an international conference with hundreds of alumni, students, faculty, and parents. With live captioning and translation powered by Azure AI, attendees were able to follow conversations in their native languages, while the presentations were given in English.

Learn how you can easily integrate multi-language support into your own apps with Speech Translation, and see the technology in action with Translator, with support for more than 60 languages, today.
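As a rough sketch of what such an integration involves, the snippet below assembles (but does not send) a request for the Translator Text REST API, version 3.0. The endpoint, header names, and body shape follow the public API reference as best recalled here; confirm them in the documentation before relying on them.

```python
import json
import uuid

# Sketch: assembling a Translator Text API v3.0 request.
# Nothing is sent over the network; this only shows the shape.
ENDPOINT = "https://api.cognitive.microsofttranslator.com/translate"

def build_translate_request(text, to_langs, subscription_key):
    """Return the URL, query parameters, headers, and JSON body."""
    params = {"api-version": "3.0", "to": to_langs}
    headers = {
        "Ocp-Apim-Subscription-Key": subscription_key,
        "Content-Type": "application/json",
        "X-ClientTraceId": str(uuid.uuid4()),
    }
    body = json.dumps([{"text": text}])
    return ENDPOINT, params, headers, body

url, params, headers, body = build_translate_request(
    "Welcome to the conference", ["es", "ta"], "<your-key>")
```

Passing several codes in the `to` parameter requests multiple target languages in one call, which is how a single transcript can serve attendees with different native languages.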

Engaging learners in new ways

We recently announced the Custom Neural Voice capability of Text to Speech, which enables customers to build a unique voice, starting from just a few minutes of training audio.

The Beijing Hongdandan Visually Impaired Service Center leads the way in applying this technology to empower users in incredible ways. Hongdandan produces educational audiobooks featuring the voice of Lina, China’s first blind broadcaster, using Custom Neural Voice. While creating audiobooks can be a time-consuming process, Custom Neural Voice allows Lina to produce high-quality audiobooks at scale, enabling Hongdandan to support over 105 schools for the blind in China like never before.

“We were amazed by how quickly Azure AI could reproduce Lina’s voice in such a natural-sounding way with her speech data, enabling us to create educational audiobooks much more quickly. We were also highly impressed by Microsoft’s commitment to protecting Lina’s voice and identity.”—Xin Zeng, Executive Director at Hongdandan

Learn how you can give your apps a new voice with Text to Speech.
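A Text to Speech request carries its text as SSML, with the target voice named in a voice element. The sketch below builds such a body; the voice name "LinaNeural" is purely hypothetical, since real custom neural voice names come from your own deployment.

```python
# Sketch: building an SSML body for a Text to Speech request.
# "LinaNeural" is a hypothetical custom voice name for illustration.

def build_ssml(text, voice_name, lang="zh-CN"):
    """Wrap text in a minimal SSML document targeting one voice."""
    return (
        f"<speak version='1.0' xml:lang='{lang}'>"
        f"<voice name='{voice_name}'>{text}</voice>"
        "</speak>"
    )

ssml = build_ssml("你好", "LinaNeural")
```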

Making the world visible for everyone

According to the International Agency for the Prevention of Blindness, more than 250 million people are blind or have low vision across the globe. Last month, in celebration of the United Nations International Day of Persons with Disabilities, Seeing AI, a free iOS app that describes nearby people, text, and objects, expanded support to five new languages. The additional language support for Spanish, Japanese, German, French, and Dutch makes it possible for millions of blind or low vision individuals to read documents, engage with people around them, hear descriptions of their surroundings in their native language, and much more. All of this is made possible with Azure AI.

Try Seeing AI today or extend vision capabilities to your own apps using Computer Vision and Custom Vision.

Get involved

We are humbled and inspired by what individuals and organizations are accomplishing today with Azure AI technologies. We can’t wait to see how you will continue to build on these technologies to unlock new possibilities and design more accessible experiences. Get started today with a free trial.

Check out our AI for Accessibility program to learn more about how companies are harnessing the power of AI to amplify capabilities for the millions of people around the world with a disability.

Author: Microsoft News Center

Data-driven storytelling makes data accessible

As organizations wrestle with an abundance of data and a dearth of experts in data interpretation, data-driven storytelling helps those organizations make sense of their information and drive their business forward.

Most business intelligence platforms help organizations transform their information into digestible data visualizations. Many, however, don’t give the data context — they don’t attempt to explain why sales dropped in a given month, or rose in another, for example.

Some BI vendors — Tableau and Yellowfin, for example — have added data-driven storytelling capabilities.

Narrative Science, a vendor based in Chicago and founded in 2010, meanwhile, is among a group of vendors whose sole focus is data storytelling, offering a suite of tools that give information context. Narrative Science recently introduced Lexio, a tool that turns data into digestible stories and is particularly suited for mobile devices.

 Nate Nichols, vice president of product architecture at Narrative Science, and Anna Schena Walsh, director of growth marketing at the company, co-authored a book on storytelling entitled Let Your People Be People. In the book, published Jan. 6, the authors look at 36 ways storytelling — with a particular emphasis on data-driven storytelling — can help change organizations, improving operations as well as helping employees not trained in data science use data to their advantage.

Nichols and Schena Walsh recently answered questions about data-driven storytelling. Here, in Part I of a two-part Q&A, they discuss the importance of data-driven storytelling. In Part II, they delve into how data-driven storytelling can improve an organization’s operations.

In a business sense, what is storytelling?


Anna Schena Walsh: When we think of storytelling at Narrative Science, we spend a lot of time thinking about what makes a good story, because our software is data storytelling — we go from data to story. What we realized is that the arc of a good story, when it comes to software, also applies to people. No matter where you sit in a business, you are a storyteller, whether you are a salesperson, a marketer or an engineer, and at some point in your career you need to be able to tell a good story. Whether it’s to advocate for yourself, to sell a product or something else, that’s an essential skill for everyone to do precisely and to do well.

Homing in more narrowly on business intelligence and analytics, how do you define data-driven storytelling?


Nate Nichols: It’s what Anna said about storytelling and applying it to data. The real shift here is that there’s been this idea that getting to the right number, or doing some analysis and looking at a number or a chart, was sufficient, and that was where the process stopped. You got a number and then it’s an executive’s job to figure out what to go do with that, or someone else’s job to figure out what to go and do with those numbers. I think what our customers are looking at and what the world is waking up to is that the right answer is just the beginning of the problem. You have the right answer, but then the real work is the communication, and that’s the piece — the storytelling part — that can actually change the world and bring people along with you.


No movement was ever led by just stating an answer and then everyone realizing that was right and joining up of their own accord. It’s really telling the story, giving it the cause and effect, the context, the why. With all of the data and analysis that’s out there, you need to still actually do the work of mobilizing it.

How does data-driven storytelling manifest itself — how do you take the information and turn it into a story?

Nichols: One of the key components is using language, so when our system is writing stories it starts with a question from the user. They want to know how their sales pipeline is, how [operations] were last quarter. There are a lot of systems that can answer that — our system can answer that and tell you how many deals were made, but then it goes into a storytelling mode where it gives a reader the context, why this is happening or what else is happening around this — that context becomes really important. It’s cause and effect, and knowing why things are happening becomes super important.

It starts with an answer, and then brings in all those storytelling elements to express things in a way that makes sense to a person. A computer is good at saying, ‘Sales increased 22 percent week over week,’ but a human would say, ‘Sales are doing great,’ or, ‘Sales jumped a lot, sales shot up.’ It may be less numerically precise, but it’s a lot more intuitive and works with our brains better. Our system is adding on that layer, bringing in the context, bringing in the characters, and then doing a lot of work to put that in a single story that someone can sit down and read and has a beginning and an end.
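The number-to-language step Nichols describes can be sketched in a few lines: map a precise metric change onto the kind of qualitative phrase a human storyteller would use. The thresholds below are invented for illustration.

```python
# Sketch: translating a precise percentage change into the intuitive
# phrasing a storyteller would use. Thresholds are illustrative only.

def describe_change(metric, pct_change):
    """Return a human-sounding headline for a week-over-week change."""
    if pct_change >= 20:
        return f"{metric} shot up"
    if pct_change >= 5:
        return f"{metric} rose"
    if pct_change > -5:
        return f"{metric} held steady"
    return f"{metric} fell sharply"

headline = describe_change("Sales", 22)  # the 22 percent example above
```

A production system layers context on top of this, explaining the why, but the loss of numeric precision in exchange for intuition is the core trade Nichols describes.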

Your book looks at different ways storytelling can be used by businesses — what are some of them?

Schena Walsh: The book looks at 36 different ways you can use storytelling. One is how to tell a better story, then how to create a storytelling environment, and then, at the end, how to use data storytelling to enable you to realize your talent. Here at Narrative Science we have software that surfaces the story and brings the data we need to us, which means employees don’t have to spend time digging through analytics, and it also gives them the data points they need to tell their own stories. So we actually spend a lot of time training our people to tell their stories. Nate leads a storytelling workshop here at Narrative Science, and a lot of what we teach our employees and our clients is … in the book.

Why do businesses need to improve their data-driven storytelling abilities — what does it enable them to do that they might not otherwise be able to?

Schena Walsh: One big trend I’ve seen is companies leaning into what were previously referred to as soft skills. As you see a lot more automation of tasks happening, these skills have become more and more important. For us, we truly believe that storytelling unlocks incredible potential for companies. We know we need to spend a lot of time with data, we need that information, and data storytelling allows us to tell stories about ourselves, about our companies and about our jobs really well and really precisely.

Editor’s note: This interview has been edited for clarity and conciseness.


SAP Data Hub opens predictive possibilities at Paul Hartmann

Organizations have access to more data than they’ve ever had, and the number of data sources and volume of data just keeps growing.

But how do companies deal with all the data and can they derive real business use from it? Paul Hartmann AG, a medical supply company, is trying to answer those questions by using SAP Data Hub to integrate data from different sources and use the data to improve supply chain operations. The technology is part of the company’s push toward a data-based digital transformation, where some existing processes are digitized and new analytics-based models are being developed.

The early results have been promising, said Sinanudin Omerhodzic, Paul Hartmann’s CIO and chief data officer.

Paul Hartmann is a 200-year-old firm in Heidenheim, Germany, that supplies medical and personal hygiene products to customers such as hospitals, nursing homes, pharmacies and retail outlets. The main product groups include wound management, incontinence management and infection management.

Paul Hartmann is active in 35 countries and turns over around $2.2 billion in sales a year. Omerhodzic described the company as a pioneer in digitizing its supply chain operations, running SAP ERP systems for 40 years. However, changes in the healthcare industry have led to questions about how to use technology to address new challenges.

For example, an aging population increases demand for certain medical products and services, as people live longer and consume more products than before.

One prime area for digitization was in Paul Hartmann’s supply chain, as hospitals demand lower costs to order and receive medical products. Around 60% of Paul Hartmann’s orders are still handled by email, phone calls or fax, which means that per-order costs are high, so the company wanted to begin to automate these processes to reduce costs, Omerhodzic said.

One method was to install boxes stocked with products and equipped with sensors in hospital warehouses that automatically re-order products when stock reaches certain levels. This process reduced costs by not requiring any human intervention on the customer side. Paul Hartmann installed 9,000 replenishment boxes in about 100 hospitals in Spain, which proved adept at replacing stock when needed. But it then began to consider the next step: how to predict with greater accuracy what products will be needed when and where to further reduce the wait time on restocking supplies.  
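The replenishment rule behind those boxes is simple to sketch: when a sensor reports stock at or below a threshold, trigger a reorder with no human intervention. The field names and quantities below are illustrative, not Paul Hartmann’s actual schema.

```python
# Sketch: the sensor-driven replenishment rule described above.
# Quantities and thresholds are invented for illustration.

def check_reorder(stock_level, reorder_point, reorder_qty):
    """Return the quantity to order, or 0 if stock is still sufficient."""
    if stock_level <= reorder_point:
        return reorder_qty
    return 0

# A box reporting 12 units against a threshold of 20 triggers an order.
order = check_reorder(stock_level=12, reorder_point=20, reorder_qty=100)
```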

Getting predictive needs new data sources

This new level of supply chain predictive analytics requires accessing and analyzing vast amounts of data from a variety of new sources, Omerhodzic said. For example, weather data could show that a storm may hit a particular area, which could result in more accidents, leading hospitals to stock more bandages in preparation. Data from social media sources that refer to health events such as flu epidemics could lead to calculations on the number of people who could get sick in particular regions and the number of products needed to fight the infections.

“All those external data sources — the population data, weather data, the epidemic data — combined with our sales history data, allow us to predict and forecast for the future how many products will be required in the hospitals and for all our customers,” Omerhodzic said.
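The kind of signal blending Omerhodzic describes can be sketched as a baseline forecast scaled by external indicators. The multipliers below are invented purely for illustration; a production model would learn such effects from historical sales, weather and epidemic data.

```python
# Sketch: adjusting a baseline demand forecast with external signals.
# The multipliers are illustrative assumptions, not learned values.

def adjusted_forecast(baseline_units, storm_expected=False, flu_outbreak=False):
    """Scale a baseline forecast by simple external-event factors."""
    forecast = baseline_units
    if storm_expected:
        forecast *= 1.3   # more accidents -> more wound-care products
    if flu_outbreak:
        forecast *= 1.5   # more infections -> more infection-management products
    return round(forecast)

demand = adjusted_forecast(1000, storm_expected=True)
```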

Paul Hartmann worked with SAP to implement a predictive system based on SAP Data Hub, a software service that enables organizations to orchestrate data from different sources without having to extract the data from the source. AI and machine learning are used to analyze the data, including the entire history of the company’s sales data, and after just a few months the pilot project was making better predictions than the sales staff, Omerhodzic said.

“We have 200 years selling our product, so the sales force has a huge wealth of information and experience, but the new system could predict even better than they could,” he said. “This was a huge wake-up call for us, and we said we need to learn more about our data, we need to pull more data inside and see how that could improve or maybe create new business models. So we are now in the process of implementing that.”

Innovation on the edge less disruptive

The use of SAP Data Hub as an innovation center is one example of how SAP can foster digital transformation without directly changing core ERP systems, said Joshua Greenbaum, principal analyst at Enterprise Applications Consulting. This can result in new processes that aren’t as costly or disruptive as a major ERP upgrade.

Joshua GreenbaumJoshua Greenbaum

“Eventually this touches your ERP because you’re going to be making and distributing more bandages, but you can build the innovation layer without it being directly inside the ERP system,” Greenbaum said. “When I discuss digital transformation with companies, the easy wins don’t start with the statement, ‘Let’s replace our ERP system.’ That’s the road to complexity and high costs — although, ultimately, that may have to happen.”

For most organizations, Greenbaum said, change management — not technology — is still the biggest challenge of any digital transformation effort.

Change management challenges

At Paul Hartmann, change management has been a pain point. The company is addressing the technical issues of the SAP Data Hub initiative through education and training programs that enhance IT skills, Omerhodzic said, but getting the company to work with data is another matter.

“The biggest change in our organization is to think more from the data perspective side and the projects that we have today,” he said. “To have this mindset and understanding of what can be done with the data requires a completely different approach and different skills in the business and IT. We are still in the process of learning and establishing the appropriate organization.”

Although the sales organization at Paul Hartmann may feel threatened by the predictive abilities of the new system, change is inevitable and affects the entire organization, and the change must be managed from the top, according to Omerhodzic.

“Whenever you have a change there’s always fear from all people that are affected by it,” he said. “We will still need our sales force in the future — but maybe to sell customer solutions, not the products. You have to explain it to people and you have to explain to them where their future could be.”


Manual mainframe testing persists in the age of automation

A recent study indicates that although most IT organizations recognize software test automation benefits their app development lifecycle, the majority of mainframe testing is done manually, which creates bottlenecks in the implementation of modern digital services.

The bottom line is that mainframe shops that want to add new, modern apps need to adopt test automation and they need to do it quickly or get left behind in a world of potential backlogs and buggy code.

However, while it’s true that mainframe shops have been slow to implement automated testing, it’s mostly because they haven’t really had to: most mainframe shops are in maintenance mode, said Thomas Murphy, an analyst at Gartner.

“There is a need to clean up crusty old code, but that is less automated ‘testing’ and more automated analysis like CAST,” he said. “In an API/service world, I think there is a decent footprint for service virtualization and API testing and services around this. There are a lot of boutique consulting firms that also do various pieces of test automation.”

Yet, Detroit-based mainframe software maker Compuware, which commissioned the study conducted by Vanson Bourne, a market research firm, found that as many as 86% of respondents to its survey said they find it difficult to automate the testing of mainframe code. Only 7% of respondents said they automate the execution of test cases on mainframe code and 75% of respondents said they do not have automated processes that test code at every stage of development.

The survey polled 400 senior IT leaders responsible for application development in organizations with a mainframe and more than 1,000 employees.

Overall, mainframe app developers — as opposed to those working in distributed environments — have been slow to automate mainframe testing of code, but demand for new, more complex applications continues to grow to the point where 92% of respondents said their organization’s mainframe teams spend much more time testing code than was required in the past. On average, mainframe app development teams spend 51% of their time on testing new mainframe applications, features or functionality, according to the survey.

Shift left

To remedy this, mainframe shops need to “shift left” and bring automated testing, particularly automated unit testing, into the application lifecycle earlier to avoid security risks and improve the quality of their software. But only 24% of organizations reported that they perform both unit and functional mainframe testing on code before it is released into production. Moreover, automation and the shift to Agile and DevOps practices are “crucial” to the effort to both cut the time required to build and improve the quality of mainframe software, said Chris O’Malley, CEO of Compuware.
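In miniature, shifting left means an automated unit test that runs on every change instead of a manual check before release. The sketch below uses Python and a stand-in business rule for illustration; mainframe shops would apply the same idea to COBOL or PL/I routines with their own tooling.

```python
# Sketch: an automated unit test that runs on every commit, catching
# regressions before release. The interest routine is a stand-in for
# the kind of business logic a mainframe shop would test.

def monthly_interest(balance_cents, annual_rate):
    """Business rule under test: simple monthly interest, in cents."""
    return round(balance_cents * annual_rate / 12)

def test_monthly_interest():
    assert monthly_interest(120_000, 0.06) == 600  # $1,200 at 6% -> $6.00/month
    assert monthly_interest(0, 0.06) == 0          # zero balance accrues nothing

test_monthly_interest()
```

Wired into a CI pipeline, a suite like this runs unattended at every stage of development, which is exactly the automated gate the survey respondents say they lack.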

Yet, 53% of mainframe application development managers said the time required to conduct thorough testing is the biggest barrier to integrating the mainframe into Agile and DevOps.

IBM System z13 mainframe
Mainframes continue to be viewed as the gold standard for data privacy, security and resiliency, though IT pros say there is not enough automated software testing for systems like the IBM System z13, pictured here.

Eighty-five percent of respondents said they feel pressure to cut corners in testing that could result in compromised code quality and bugs in production code. Fifty percent said they fear cutting corners could lead to potential security flaws, 38% said they are concerned about disrupting operations and 28% said they are most concerned about the potential negative impact on revenue.

In addition, 82% of respondents said that the paucity of automated test cases could lead to poor customer experiences, and 90% said that automating more test cases could be the single most important factor in their success, with 87% noting that it will help organizations overcome the shortage of skilled mainframe app developers.

Automated mainframe testing tools in short supply

Truth be told, there are far fewer tools available to automate the testing of mainframe software than of distributed software, and there is not much to be found in the open source market.

And though IBM — and its financial results after every new mainframe introduction — might beg to differ, many industry observers, like Gartner’s Murphy, view the mainframe as dead.


“The mainframe isn’t where our headspace is at,” Murphy said. “We use that new mainframe — the cloud — now. There isn’t sufficient business pressure or mandate. If there were a bunch of recurring issues, if the mainframe was holding us back, then people would address the problem. Probably by shooting the mainframe and moving elsewhere.”

Outside of the mainframe industry, companies such as Parasoft, SmartBear and others regularly innovate and deliver new automated testing functionality for developers in distributed, web and mobile environments. For instance, Parasoft earlier this fall introduced Selenic, its AI-powered automated testing tool for Selenium. Selenium is an automated testing suite for web apps that has become a de facto standard for testing user interfaces. Parasoft’s Selenic integrates into existing CI/CD pipelines to ease the way for organizations that employ DevOps practices. Selenic’s AI capabilities provide recommendations that automate the “self-healing” of any broken Selenium scripts and provide deep code analysis to users.

For its part, Gartner named SmartBear, another prominent test automation provider, as a leader in the 2019 Gartner Magic Quadrant for Software Test Automation. Among its 2019 highlights for developers, SmartBear expanded into CI/CD pipeline integration for native mobile test automation with the acquisition of Bitbar, added new tools for behavior-driven development and introduced testing support for GraphQL.


How technology intensity accelerates business value

Organizations that embrace technology intensity are inherently more successful. What exactly is technology intensity, and why is it critical for today’s enterprises to build a cohesive digital strategy?

Technology intensity defined

Technology intensity has three components:

  1. Rapid adoption of hyper-scale cloud platforms
  2. A rational business decision to invest in digital capabilities
  3. A relentless focus on building technology that customers can trust, relying on credible suppliers and building security into new products

As Microsoft CEO Satya Nadella notes, “We must adopt technology in ways that are much faster than what we have done in the past. Each one of us, in our organizations, will have to build our own digital capability on top of the technology we have adopted. Tech intensity is one of the key considerations you have to get right.”

Technology intensity as a critical enabler

Simply put, technology intensity is a critical part of business strategy today. I meet regularly with leaders from companies around the world. In my experience, high-performance companies invest the most in digital capabilities and skillsets. In fact, there is a productivity gap between these top performers and their lesser-performing counterparts that directly correlates with the scale of digital investments.

Other research shows that technology spending on hiring developers and creating innovative software that a company owns and uses exclusively is a key competitive advantage. These companies cultivate the ability to develop their own “digital IP,” building exclusive software and tools that only their customers have access to. Resources are always scarce, so these companies build differentiated IP on top of existing best-in-class technology platforms.

In Putting Customers at the Center of the OEM Supply Chain, a report from the Economist Intelligence Unit (EIU) sponsored by Microsoft, Lorenzo Fornaroli, Senior Director of Global Logistics and Supply Chain at China-based ICT company Huawei Technologies, highlights an advantage of embracing technology intensity: “…as an ICT company, we have the internal resources needed to identify new technologies early and deploy them effectively. Skills and experience with these technologies are readily available in-house.”

New business models

Manufacturers are increasingly using technology intensity principles to extend their supply chain well past the point of delivery. They are creating smart, connected products and digitizing their businesses with Azure IoT, Microsoft AI, Azure Blockchain Service, Dynamics 365, and Microsoft 365.

Rolls-Royce, for example, takes a monthly fee from customers of its jet engines that is based on flying hours. Sellers of industrial machinery such as Sandvik Coromant and Tetrapak are exploring the concept of charging customers for parts machined and containers filled.

Using the data that connected products transmit back about their condition and usage, manufacturers build new digital services. For these forward-thinking manufacturers, the extended supply chain is an opportunity to move away from selling products to customers based on a one-off, upfront purchase price, and to charge for subscriptions based on performance guarantees. This is “technology intensity in action”, as manufacturers become digital software companies.
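That outcome-based model can be sketched as an invoice computed from usage telemetry rather than an upfront purchase price. The hourly rate, uptime guarantee, and service-credit logic below are illustrative assumptions, not any vendor’s actual terms.

```python
# Sketch: billing from usage telemetry with a performance guarantee.
# Rate, guarantee, and credit percentage are invented for illustration.

def monthly_invoice(flying_hours, rate_per_hour, uptime_pct,
                    guaranteed_uptime=0.98):
    """Compute a usage-based fee, discounted if the uptime guarantee is missed."""
    fee = flying_hours * rate_per_hour
    if uptime_pct < guaranteed_uptime:
        fee *= 0.9  # service credit when the performance guarantee is missed
    return round(fee, 2)

invoice = monthly_invoice(flying_hours=320, rate_per_hour=150.0, uptime_pct=0.995)
```

The design point is that the connected product’s own telemetry (hours flown, parts machined, containers filled) is both the billing meter and the performance evidence.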

Technology intensity in action

Bühler is a global market leader in die casting technology and has adopted technology intensity for connected products. Looking to continue driving innovation in the die casting process, Bühler aggregated data from different die casting cell components under a single cell-management system. Bühler can now monitor, control, and manage a complete die casting cell in a unified, easy-to-use system. The company is also exploring additional avenues for digital transformation, including real-time fleet learning by fine-tuning its artificial intelligence (AI) models to generate new insights.

Lexmark, a manufacturer of laser printers and imaging products, now offers Lexmark Cloud Print Infrastructure as a Service (CPI). Customers no longer manage onsite print infrastructure. Instead, Lexmark installs its own IoT-enabled devices and activates smart services, creating an always-on print environment. To roll out CPI, Lexmark worked with Microsoft to quickly adopt new Internet of Things (IoT), Customer Relationship Management (CRM), AI and collaboration tools to successfully grow their internal digital capabilities.

Colfax is another great example of a manufacturer embracing technology intensity. A global industrial technology company, Colfax recognized the need to adopt industrial IoT technologies and embrace a comprehensive digital transformation initiative to expand the offerings of two of its business platforms—ESAB, a welding and cutting solutions provider, and Howden, an air and gas handling engineering company. Working with Microsoft and PTC, the company adopted advanced cloud technologies while building up a digital skillset.

Investing in new digital skillsets

Savvy manufacturers harness data from connected products and combine it with data from many other sources, in unprecedented volumes. These copious amounts of data demand a mindset shift, requiring organizations to build skills for a new digital era. Most manufacturers concur that they need to expand their internal capabilities in this way.

“Senior supply-chain professionals have typically been accustomed to working with a fairly limited set of data to drive their decisions—but that’s all changed now,” says Daniel Helmig, group head of quality and operations at Swiss-Swedish industrial group ABB, in the EIU report Putting Customers at the Center of the OEM Supply Chain. He continues, “But being able to take advantage of the huge volumes of data available to us—and these are growing every day—demands a mind shift among supply-chain professionals, based on using new levels of visibility to respond to issues quickly and decisively.”

From the same report, Sheri Henck, vice president of global supply chain, distribution and logistics at Medtronic, a US-based manufacturer of medical equipment and devices, commented, “In the past, a great deal of supply-chain decision-making was based on intuition, because data wasn’t available. Today, there is plenty of data available, but there’s also a recognition that skills and competencies for supply-chain leaders and their teams need to change if we are to make the most of data and use it to make data-driven recommendations and decisions.”

Explore how to optimize your digital operations with business solutions for intelligent factories with the e-book, “Factory of the future: Achieving digital excellence in manufacturing today.”

Go to Original Article
Author: Microsoft News Center

RPA in manufacturing increases efficiency, reduces costs

Robotic process automation software is increasingly being adopted by organizations to improve processes and make operations more efficient.

In manufacturing, the use cases for RPA range from reducing errors in payroll processes to eliminating unneeded processes before undergoing a major ERP system upgrade.

In this Q&A, Shibaji Das, global director of finance and accounting and supply chain management for UiPath, discusses the role of RPA in manufacturing ERP systems and how it can help improve efficiency in organizations.

UiPath, based in New York, got its start in 2005 as DeskOver, which made automation scripts. In 2012, the company relaunched as UiPath and shifted its focus to RPA. UiPath markets applications that enable organizations to examine processes and create bots, or software robots that automate repetitive, rules-based manufacturing and business processes. RPA bots are usually infused with AI or machine learning so that they can take on more complex tasks and learn as they encounter more processes.

What is RPA and how does it relate to ERP systems?

Shibaji Das: When you’re designing a city, you put down the freeways. Implementing RPA is a little like putting down those freeways with major traffic flowing through, with RPA as the last mile automation. Let’s say you’re operating on an ERP, but when you extract information from the ERP, you still do it manually to export to Excel or via email. A platform like [UiPath] forms a glue between ERP systems and automates repetitive rules-based stuff. On top of that, we have AI, which gives brains to the robot and helps it understand documents, processes and process-mining elements.
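The "last mile" Das describes can be sketched in a few lines. The example below is a toy illustration of automating a manual export, not UiPath's actual API; the record fields and function name are invented for the sketch.

```python
import csv
import io

def export_invoices(records, fieldnames=("invoice_id", "vendor", "amount")):
    """Simulate the 'last mile': take records pulled from an ERP and
    produce the spreadsheet a person would otherwise build by hand."""
    buf = io.StringIO()
    writer = csv.DictWriter(buf, fieldnames=list(fieldnames))
    writer.writeheader()
    for rec in records:
        # A real bot would also apply business rules here, e.g. flagging
        # invoices above an approval threshold.
        writer.writerow({k: rec.get(k, "") for k in fieldnames})
    return buf.getvalue()

# Records as they might come back from an ERP query (illustrative data).
erp_rows = [
    {"invoice_id": "INV-001", "vendor": "Acme", "amount": 120.50},
    {"invoice_id": "INV-002", "vendor": "Globex", "amount": 89.99},
]
print(export_invoices(erp_rows))
```

The value of automating even this small step is that it runs identically every time, which is the repeatable, rules-based behavior Das attributes to bots later in the interview.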

Why is RPA important for the manufacturing industry?

Shibaji Das, UiPath

Das: When you look at the manufacturing industry, the challenges are always the cost pressure of having lower margins or the resources to get innovation funds to focus on the next generation of products. Core manufacturing is already largely automated with physical robots; for example, in the automotive industry, robots work on the assembly lines. The question is how can RPA enable the supporting functions of manufacturing to work more efficiently? For example, how can RPA enable upstream processes like demand planning, sourcing and procurement? Then for downstream processes when the product is ready, how do you look at the distribution channel, warehouse management and the logistics? Those are the two big areas where RPA plus AI play an important role.

What are some steps companies need to take when implementing RPA for manufacturing?

Das: Initially, there will be a process mining element or process understanding element, because you don’t want to automate bad processes. That’s why having a thought process around making the processes efficient first is critical for any bigger digital transformation. Once that happens, and you have more efficient processes running, which will integrate with multiple ERP systems or other legacy systems, you could go to task automation. 

What are some of the ways that implementing RPA will affect jobs in manufacturing? Will it lead to job losses if more processes are automated?

Das: Will there be a change in jobs as we know them? Yes, but at the same time, there’s a very positive element that will create a net positive impact from a jobs perspective, experience perspective, cost, and the overall quality of life perspective. For example, the moment computers came in, someone’s job was to push hundreds of piles of paper, but now, because of computing, they don’t have to do that. Does that mean there was a negative effect? Probably not, in the long run. So, it’s important to understand that RPA — and RPA that’s done in collaboration with AI — will have a positive impact on the job market in the next five to 10 years.

Can RPA help improve operations by eliminating mistakes that are common in manual processes?

Das: Robots do not make mistakes unless you code it wrong at the beginning, and that’s why governance is so important. Robots are trained to do certain things and will do them correctly every time — 100%, 24/7 — without needing coffee breaks.

What are some of the biggest benefits of RPA in manufacturing?

Das: From an ROI perspective, one benefit of RPA is the cost element because it increases productivity. Second is revenue; for example, at UiPath, we are using our own robots to manage our cash application process, which has impacted revenue collection [positively]. Third is around speed, because what an individual can do, a robot can do much faster. However, this depends on the system, as a robot will only operate as fast as the mother system of the ERP system works — with accuracy, of course. Last, but not least, the most important part is experience. RPA plus AI will enhance the experience of your employees, of your customers and vendors. This is because the way you do business becomes easier, more user-friendly and much more nimble as you get rid of the most common frustrations that keep coming up, like a vendor not getting paid.

What’s the future of RPA and AI in organizations?

Das: The vision of Daniel Dines [UiPath’s co-founder and CEO] is to have one robot for every individual. It’s similar to every individual having access to Excel or Word. We know the benefits of the usage of Excel or Word, but RPA access is still a little technical and there’s a bit of coding involved. But UiPath is focused on making this as code free as possible. If you can draw a flowchart and define a process clearly through click level, our process mining tool can observe it and create an automation for you without any code. For example, I have four credit cards, and every month, I review it and click the statement for whatever the amount is and pay it. I have a bot now that goes in at the 15th of the month and logs into the accounts and clicks through the process. This is just a simple example of how practical RPA could become.

Go to Original Article

Amazon cloud database and data analytics expand

Amazon Web Services is quite clear about it: it wants organizations of all sizes, with nearly any use case, to run databases in the cloud.

At the AWS re:Invent 2019 conference in Las Vegas, the cloud giant outlined the Amazon cloud database strategy, which hinges on wielding multiple purpose-built offerings for different use cases. 

AWS also revealed new services on Dec. 3, the first day of the conference, including the Amazon Managed Apache Cassandra Service, a supported cloud version of the popular Cassandra NoSQL database. The vendor also unveiled several new features for the Amazon Redshift data warehouse, providing enhanced data management and analytics capabilities.

“Quite simply, Amazon is looking to provide one-stop shopping for all data management and analytics needs on AWS,” said Carl Olofson, an analyst at IDC. “For those who are all in for AWS, this is all good. For their competitors, such as Snowflake competing with Redshift and DataStax competing with the new Cassandra service, this will motivate a stronger competitive effort.”

Amazon cloud database strategy

AWS CEO Andy Jassy, in his keynote, detailed the rationale behind Amazon’s cloud database strategy and why one database isn’t enough.

“A lot of companies primarily use relational databases for every one of their workloads, and the day of customers doing that has come and gone,” Jassy said.

There is too much data, cost and complexity involved in using a relational database for all workloads. That has sparked demand for purpose-built databases, according to Jassy.

AWS CEO Andy Jassy gives the keynote at the AWS re:Invent 2019 conference.

For example, Jassy noted that ride sharing company Lyft has millions of drivers and geolocation coordinates, which isn’t a good fit for a relational database.

For the Lyft use case and others like it, there is a need for a fast, low-latency key-value store, which is why AWS has the DynamoDB database. For workloads that require sub-microsecond latency, an in-memory database is best, and that is where ElastiCache fits in. For those looking to connect data across multiple big datasets, a graph database is a good option, which is what the Amazon Neptune service delivers. DocumentDB, on the other hand, is a document database intended for those who work with documents and JSON.
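The appeal of a key-value store for a workload like Lyft's is that lookups by key cost the same no matter how many records exist. The toy class below illustrates that access pattern only; the names are invented and this is not DynamoDB's actual API.

```python
# A toy key-value store illustrating constant-time lookup by partition key,
# the access pattern that suits a geolocation workload. Illustrative only;
# not DynamoDB's API.

class ToyKeyValueStore:
    def __init__(self):
        self._items = {}  # partition key -> item

    def put(self, key, item):
        self._items[key] = item

    def get(self, key):
        # Hash lookup: cost does not grow with the number of drivers stored.
        return self._items.get(key)

store = ToyKeyValueStore()
store.put("driver#42", {"lat": 40.74, "lon": -73.99})
print(store.get("driver#42"))
```

A relational database can of course index a key too, but it carries joins, transactions and query planning that this workload doesn't need, which is the overhead Jassy's "purpose-built" argument targets.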


“Swiss Army knives are hardly ever the best solution for anything other than the most simple tasks,” Jassy said, referring to the classic multi-purpose tool. “If you want the right tool for the right job that gives you differentiated performance productivity and customer experience, you want the right purpose-built database for that job.”

Amazon Managed Apache Cassandra Service

While AWS offers many different databases as part of the Amazon cloud database strategy, one variety it did not possess was Apache Cassandra, a popular open source NoSQL database.

It’s challenging to manage and scale Cassandra, which is why Jassy said he sees a need for a managed version running as an AWS service. The Amazon Managed Apache Cassandra Service launched as a preview on Dec. 3, with general availability set for sometime in 2020.

With the managed service there are no clusters for users to manage, and the platform provides single-digit millisecond latency, Jassy noted. He added that existing Cassandra tools and drivers will all work, making it easier for users to migrate on-premises Cassandra workloads to the cloud.

Redshift improvements

AWS also detailed a series of moves at the conference that enhance its Redshift data warehouse platform. Among the new features Jassy talked about was Lake House, which enables data queries not just in local Redshift nodes but also across multiple data lakes and S3 cloud storage buckets.

“Not surprisingly, as people start querying across both Redshift and S3 they also want to be able to query across their operational databases where a lot of important data sets live,” Jassy said. “So today, we just released something called federated query which now enables users to query across Redshift, S3 and our relational database services.”
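The idea behind federated query, one SQL statement spanning the warehouse and an operational database, can be mimicked with SQLite's ATTACH. The analogy is loose (Redshift's implementation differs entirely), and the tables and data below are invented for illustration.

```python
import sqlite3

# Two separate in-memory databases stand in for Redshift and an operational
# RDS database; ATTACH lets one SQL statement join across both.
warehouse = sqlite3.connect(":memory:")
warehouse.execute("CREATE TABLE sales (product TEXT, units INTEGER)")
warehouse.execute("INSERT INTO sales VALUES ('widget', 100), ('gadget', 40)")

# Attach a second database under the schema name "ops".
warehouse.execute("ATTACH DATABASE ':memory:' AS ops")
warehouse.execute("CREATE TABLE ops.prices (product TEXT, price REAL)")
warehouse.execute(
    "INSERT INTO ops.prices VALUES ('widget', 2.5), ('gadget', 10.0)")

# One query spans both databases -- the essence of a federated query.
rows = warehouse.execute(
    """SELECT s.product, s.units * p.price AS revenue
       FROM sales s JOIN ops.prices p ON s.product = p.product
       ORDER BY s.product"""
).fetchall()
print(rows)
```

The benefit Jassy describes is the same: analysts query operational data in place instead of first copying it into the warehouse.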

Storage and compute for a data warehouse are closely related, but there is often a need to scale storage and compute independently. To that end, AWS announced as part of the Amazon cloud database strategy its new Redshift RA3 instances with managed storage. Jassy explained that as users exhaust the amount of storage available in a Redshift local instance, the RA3 service will move the less frequently accessed data over to S3.
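The tiering behavior Jassy describes, spilling the least recently used data to cheaper remote storage when the local tier fills, can be sketched with a small LRU structure. Capacity, class name and data here are invented; this only illustrates the policy, not Redshift's implementation.

```python
from collections import OrderedDict

class TieredStore:
    """Toy sketch of RA3-style managed storage: hot data stays local,
    cold data spills to a remote tier (standing in for S3)."""

    def __init__(self, local_capacity):
        self.local = OrderedDict()   # hot tier, least recently used first
        self.remote = {}             # stand-in for S3
        self.capacity = local_capacity

    def write(self, key, value):
        self.local[key] = value
        self.local.move_to_end(key)
        while len(self.local) > self.capacity:
            # Evict the least recently used item to the remote tier.
            cold_key, cold_val = self.local.popitem(last=False)
            self.remote[cold_key] = cold_val

    def read(self, key):
        if key in self.local:
            self.local.move_to_end(key)  # refresh recency on access
            return self.local[key]
        return self.remote.get(key)      # slower remote fetch

store = TieredStore(local_capacity=2)
for k in ("a", "b", "c"):
    store.write(k, k.upper())
print(sorted(store.local), sorted(store.remote))
```

The point of the design is that eviction is automatic: users keep writing, and the service decides what stays on fast local storage.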

Redshift AQUA

As data is spread across different resources, it generates a need to accelerate query performance. Jassy introduced the new Advanced Query Accelerator (AQUA) for Redshift to help meet that challenge.

Jassy said that AQUA provides an innovative way to do hardware-accelerated caching to improve query performance. With AQUA, AWS has built a high-speed cache architecture on top of S3 that scales out in parallel across many different nodes. Each node hosts custom-designed AWS processors to speed up operations.
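The scale-out idea, pushing computation to many nodes near the data and merging partial results rather than shipping raw rows to one compute node, can be sketched as a scatter-gather aggregation. Node count, hashing scheme and data are invented; this illustrates the pattern, not AQUA's hardware design.

```python
# Toy scatter-gather: shard rows across cache nodes by key, aggregate
# locally on each node, then merge the partial results centrally.

def node_for(key, num_nodes):
    return hash(key) % num_nodes

NUM_NODES = 4
nodes = [[] for _ in range(NUM_NODES)]
for row in [("widget", 3), ("gadget", 5), ("widget", 2), ("doohickey", 7)]:
    nodes[node_for(row[0], NUM_NODES)].append(row)

# Each node aggregates its own shard ("compute on the raw data")...
partials = []
for shard in nodes:
    local = {}
    for product, units in shard:
        local[product] = local.get(product, 0) + units
    partials.append(local)

# ...and only the small partial results travel to be merged.
totals = {}
for partial in partials:
    for product, units in partial.items():
        totals[product] = totals.get(product, 0) + units
print(totals)
```

Only the compact partial aggregates cross the network, which is why Jassy can claim the compute happens "on the raw data without having to move it."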

“This makes your processing so much faster that you can actually do the compute on the raw data without having to move it,” Jassy said.

Go to Original Article

Consider these Office 365 alternatives to public folders

As more organizations consider a move from Exchange Server, public folders continue to vex many administrators for a variety of reasons.

Microsoft supports public folders in its latest Exchange Server 2019 as well as Exchange Online, but it is pushing companies to adopt some of its newer options, such as Office 365 Groups and Microsoft Teams. An organization pursuing alternatives to public folders will find there is no direct replacement for this Exchange feature. The reason for this is the nature of the cloud.

Microsoft set its intentions early on under Satya Nadella’s leadership with its “mobile first, cloud first” initiative back in 2014, and it aggressively expanded its cloud suite with new services and features. This fast pace meant the migration experience for cloud services such as Office 365 depended on timing: moving to Office 365 in one month could offer different features than waiting several more. This was the case for migrating public folders from on-premises Exchange Server to Exchange Online, which evolved over time and coincided with the introduction of Microsoft Teams, Skype for Business and Office 365 Groups.

The following breakdown of how organizations use public folders can help Exchange administrators with their planning when moving to the new cloud model on Office 365.

Organizations that use public folders for email only

Public folders are a great place to store email that multiple people within an organization need to access. For example, an accounting department can use public folders to let department members use Outlook to access the accounting public folders and corresponding email content.

Office 365 offers similar functionality to public folders through its shared mailbox feature in Exchange Online. A shared mailbox stores email in folders, which is accessible by multiple users.

A shared mailbox has a few advantages over a public folder, with the primary one being accessibility through the Outlook mobile app or from Outlook via the web. This allows users to connect from their smartphones or a standard browser to review email going to the shared mailbox. This differs from public folder access, which requires opening the Outlook client.

Organizations that use public folders for email and calendars

For organizations that rely on both email and calendars in their public folders, Microsoft has another cloud alternative that comes with a few extra perks.

Office 365 Groups not only lets users collaborate on email and calendars, but also stores files in a shared OneDrive for Business page, tasks in Planner and notes in OneNote. Office 365 Groups is another option for email and calendars made available on any device. Office 365 Groups owners manage their own permissions and membership to lift some of the burden of security administration from the IT department.

Microsoft provides migration scripts to assist with the move of content from public folders to Office 365 Groups.

Organizations that use public folders for data archiving

Some organizations that prefer to stay with a known quantity and keep the same user experience also have the choice to keep using public folders in Exchange Online.

The reasons for this preference will vary, but the most likely scenario is a company that wants to keep email for archival purposes only. The migration from Exchange on-premises public folders requires administrators to use Microsoft’s migration scripts.

Organizations that use public folders for project communication and data sharing repository

The Exchange public folders feature is excellent for sharing email, contacts and calendar events. For teams working on projects, the platform shines as a way to centralize information that’s relevant to the specific project or department. But it’s not as expansive as other collaboration tools on Office 365.

Take a closer look at some of the other modern collaboration tools available in Office 365 in addition to Microsoft Teams and Office 365 Groups, such as Kaizala. These offerings extend the organization’s messaging abilities to include real-time chat, presence status and video conferencing.

Go to Original Article

What are the Azure Stack HCI deployment, management options?

There are several management approaches and deployment options for organizations interested in using the Azure Stack HCI product.

Azure Stack HCI is a hyper-converged infrastructure product, similar to other offerings in which each node holds processors, memory, storage and networking components. Third-party vendors sell the nodes, which can scale should the organization need more resources. A purchase of Azure Stack HCI includes the hardware, the Windows Server 2019 operating system, management tools, and service and support from the hardware vendor. At the time of publication, Microsoft’s Azure Stack HCI catalog lists more than 150 offerings from 19 vendors.

Azure Stack HCI, not to be confused with Azure Stack, gives IT pros full administrator rights to manage the system.

Tailor the Azure Stack HCI options for different needs

The basic components of an Azure Stack HCI node might be the same, but an organization can customize them for different needs, such as better performance or lowest price. For example, a company that wants to deploy a node in a remote office/branch office might select Lenovo’s ThinkAgile MX Certified Node, or its SR650 model. The SR650 scales to two nodes that can be configured with a variety of processors offering up to 28 cores, up to 1.5 TB of memory, hard drive combinations providing up to 12 TB (or SSDs offering more than 3.8 TB), and networking with 10/25 GbE. Each node comes in a 2U physical form factor.

If the organization needs the node for more demanding workloads, one option is the Fujitsu Primeflex line. Node models such as the all-SSD Fujitsu Primergy RX2540 M5 scale to 16 nodes. Each node can range from 16 to 56 processor cores, with up to 3 TB of SSD storage and 25 GbE networking.

Management tools for Azure Stack HCI systems

Microsoft positions the Windows Admin Center (WAC) as the ideal GUI management tool for Azure Stack HCI, but other familiar utilities will work on the platform.

The Windows Admin Center is a relatively new browser-based tool for consolidated management for local and remote servers. The Windows Admin Center provides a wide array of management capabilities, such as managing Hyper-V VMs and virtual switches, along with failover and hyper-converged cluster management. While it is tailored for Windows Server 2019 — the server OS used for Azure Stack HCI — it fully supports Windows Server 2012/2012 R2 and Windows Server 2016, and offers some functionality for Windows Server 2008 R2.

Azure Stack HCI users can also use more established management tools such as System Center. The System Center suite components handle infrastructure provisioning, monitoring, automation, backup and IT service management. System Center Virtual Machine Manager provisions and manages the resources to create and deploy VMs, and handle private clouds. System Center Operations Manager monitors services, devices and operations throughout the infrastructure.

Other tools are also available including PowerShell, both the Windows and the PowerShell Core open source versions, as well as third-party products, such as 5nine Manager for Windows Server 2019 Hyper-V management, monitoring and capacity planning.

It’s important to check over each management tool to evaluate its compatibility with the Azure Stack HCI platform, as well as other components of the enterprise infrastructure.

Go to Original Article

What’s new with the Exchange hybrid configuration wizard?

Exchange continues to serve as the on-ramp into Office 365 for many organizations. One big reason is the hybrid capabilities that connect on-premises Exchange and Exchange Online.

If you use Exchange Server, it’s not difficult to join it to Exchange Online for a seamless transition into the cloud. Microsoft refined the Exchange hybrid configuration wizard to remove a lot of the technical hurdles to shift one of the more important IT workloads into Exchange Online. If you haven’t seen the Exchange hybrid experience recently, you may be surprised about some of the improvements over the last few years.

Exchange hybrid setups have come a long way

I started configuring Exchange hybrid deployments the first week Microsoft made Office 365 publicly available in June 2011 with the newest version of Exchange at the time, Exchange 2010. Setting up an Exchange hybrid deployment was a laborious task. Microsoft provided a 75-page document with the Exchange hybrid configuration steps, which would take about three workdays to complete. Then I could start the troubleshooting process to fix the innumerable typos I made during the setup.

In December 2011, Microsoft released Exchange 2010 Service Pack 2, which included the Exchange hybrid configuration wizard. The wizard reduced that 75-page document to a few screens of information that cut down the work from three days to about 15 minutes. The Exchange hybrid configuration wizard did not solve all the problems of an Exchange hybrid deployment, but it made things a lot easier.

What the Exchange hybrid configuration wizard does

The Exchange hybrid configuration wizard is just a PowerShell script that runs all the necessary configuration tasks. The original hybrid configuration wizard completed seven key tasks:

  1. verified prerequisites for a hybrid deployment;
  2. configured Exchange federation trust;
  3. configured relationships between on-premises Exchange and Exchange Online;
  4. configured email address policies;
  5. configured free/busy calendar sharing;
  6. configured secure mail flow between the on-premises and Exchange Online organizations; and
  7. enabled support for Exchange Online archiving.
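Since the wizard is, at heart, a script that runs configuration tasks in a fixed order, the pattern can be sketched as an ordered task runner. Task names loosely mirror the list above; the real wizard is PowerShell and does far more, so everything here is illustrative.

```python
# Minimal orchestration pattern: run configuration tasks in order against
# shared state, with prerequisites verified before anything else proceeds.

def verify_prerequisites(state):
    state["prereqs_ok"] = True

def configure_federation_trust(state):
    if not state.get("prereqs_ok"):
        raise RuntimeError("prerequisites not verified")
    state["federation_trust"] = "configured"

def configure_secure_mail_flow(state):
    if not state.get("prereqs_ok"):
        raise RuntimeError("prerequisites not verified")
    state["secure_mail_flow"] = "configured"

TASKS = [
    verify_prerequisites,
    configure_federation_trust,
    configure_secure_mail_flow,
]

def run_wizard():
    state = {}
    for task in TASKS:
        task(state)  # each step records its result in shared state
    return state

print(run_wizard())
```

Encoding the steps as an ordered list is what turned three days of manual, typo-prone work into a 15-minute run: the order and checks live in the script, not in an administrator's memory.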

How the Exchange hybrid configuration wizard evolved

Since the initial release of the Exchange hybrid configuration wizard, Microsoft expanded its capabilities in multiple ways with several major improvements over the last few years.

Exchange hybrid configuration wizard decoupled from service pack updates: This may seem like a minor change, but it’s a significant development. Having the Exchange hybrid configuration wizard as part of the standard Exchange update cycle meant that any updates to the wizard had to wait until the next service pack update.

Now the Exchange hybrid configuration wizard is an independent component from Exchange Server. When you run the wizard, it checks for a new release and updates itself to the most current configuration. This means you get fixes or additional features without waiting through that quarterly update cycle.

Minimal hybrid configuration: Not every migration has the same requirements. Sometimes a quicker migration with fewer moving parts is needed, and Microsoft offered an update in 2016 for a minimal hybrid configuration feature for those scenarios.

The minimal hybrid configuration helps organizations that cannot use the staged migration option, but want an easy switchover without worrying about configuring extras, such as free/busy federation for calendar availability.

The minimal hybrid configuration leaves out the following functionality from a full hybrid configuration:

  • cross-premises free/busy calendar availability;
  • Transport Layer Security secured mail flow between on-premises Exchange and Exchange Online;
  • cross-premises eDiscovery;
  • automatic Outlook on the web (OWA) and ActiveSync redirection for migrated users; and
  • automatic retention for archived mailboxes.

If these features aren’t important to your organization and speed is of the essence, the minimal hybrid configuration is a good option.

Recent update goes further with setup work

Microsoft designed the Exchange hybrid configuration wizard to migrate mailboxes without interrupting the end user’s ability to work. The wizard gives users a full global address book, free/busy calendar availability and some of the mailbox delegation features used with an on-premises Exchange deployment.

A major new addition to the hybrid configuration wizard is its ability to transfer some of the on-premises Exchange configuration to the Exchange Online tenant. The Hybrid Organization Configuration Transfer feature pulls configuration settings from your Exchange organization and does a one-time setup of the same settings in your Exchange Online tenant.

Microsoft expanded the abilities of Hybrid Organization Configuration Transfer in November 2018 so it configures the following settings: Active Sync Mailbox Policy, Mobile Device Mailbox Policy, OWA Mailbox Policy, Retention Policy, Retention Policy Tag, Active Sync Device Access Rule, Active Sync Organization Settings, Address List, DLP Policy, Malware Filter Policy, Organization Config and Policy Tip Configuration.

The Exchange hybrid configuration wizard only handles these settings once. If you make changes in your on-premises Exchange organization after you run the Exchange hybrid configuration wizard, those changes will not be replicated in the cloud automatically.
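The one-time nature of the transfer, and the drift it allows afterward, can be sketched as a snapshot copy followed by a comparison. The setting names and values below are invented for illustration; the real transfer covers the policies listed above.

```python
import copy

def one_time_transfer(on_prem_settings):
    """Snapshot the on-premises settings at wizard run time.
    There is no ongoing synchronization afterward."""
    return copy.deepcopy(on_prem_settings)

# Illustrative settings; real names come from the Exchange organization.
on_prem = {"RetentionPolicy": "Default-2Y", "OWAMailboxPolicy": "Standard"}
tenant = one_time_transfer(on_prem)

# Later, an admin changes a policy on-premises only...
on_prem["RetentionPolicy"] = "Default-5Y"

# ...and the tenant silently falls out of sync until someone checks.
drift = {k: (v, tenant.get(k))
         for k, v in on_prem.items() if tenant.get(k) != v}
print(drift)
```

This is why administrators who change on-premises policies after running the wizard need to reapply those changes in Exchange Online themselves.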

Go to Original Article