Instaclustr CTO on open source database as a service

In recent years, organizations of all sizes have increasingly come to rely on open source database technologies, including Apache Cassandra.

The complexity of deploying and managing Cassandra at scale has led to a rise in database-as-a-service (DBaaS) providers offering managed Cassandra services in the cloud. Among the vendors that provide managed Cassandra today are DataStax, Amazon and Instaclustr.

Instaclustr, based in Redwood City, Calif., got its start in 2013 and has grown over the past eight years to offer managed services for a number of open source data layer projects, including Kafka event streaming, the Redis database and cache, and Elasticsearch data query and visualization.

In this Q&A, Ben Bromhead, co-founder and CTO of Instaclustr, discusses the intersection of open source and enterprise software and why database as a service is a phenomenon that is here to stay.

How has Instaclustr changed over the last eight years?

Ben Bromhead

Ben Bromhead: Our original vision was wildly different and, like all good startups, we had a pretty decent pivot. When the original team got together, we were working on a marketplace for high value data sets. We took a data warehouse approach for the different data sets we provided and the access model was pure SQL. It was kind of interesting from a computer science perspective, but we probably weren’t as savvy as we needed to be to take that kind of business to market.

But one of the things we learned along the way was there was a real need for Apache Cassandra database services. We had to spend a lot of time getting our Cassandra database ready and managing it. We quickly realized that there was a market for that, so we built a web interface for a service with credit card billing, wrote a few blog posts and within a few months we had our first production customers. That’s how we kind of pivoted and got into the Cassandra database-as-a-service space.

Originally, when we built Instaclustr, the idea was very much about democratizing Cassandra for smaller users and smaller use cases. Over the years, we very clearly started to move into medium and large enterprises because they tend to have bigger deployments. They also tend to have more money and are less likely to go out of business.

There are a few Cassandra DBaaS vendors now (including Amazon). How do you see the expansion of the market?

Bromhead: We’re very much of the view that having more players in the market validates the market. But sure, it does make our jobs a little bit harder.

Our take on it [managed Cassandra as a service] is also a little bit different from some of the other vendors in that we really take a multi-technology approach. So you know, not only are we engaging with our customers around their Cassandra cluster, but we’re also helping them with the Kafka cluster, Elasticsearch and Redis.

So what ends up happening is we end up becoming a trusted partner for a customer’s data layer and that’s our goal. We certainly got our start with Cassandra, that’s our bread and butter and what we’re known for, but in terms of the business vision, we want to be there as a data layer supporting different use cases.

You know, it’s great to see more Cassandra services come in. They’ve got a particular take on it and we’ve got a particular take on it. I’m very much a believer that a rising tide lifts all boats.

How does Instaclustr select and determine which open source data layer technologies you will support and turn into a managed service?

Bromhead: We’re kind of 100 percent driven by customers. So you know, when they ask us for something, they’re like, ‘Hey, you do a great job with our Elasticsearch cluster, can you look after our Redis or our Mongo?’ That’s probably the major signal that we pay the most attention to. We also look at the market and certainly look at what other technologies are getting deployed side by side.


We very clearly look for and prefer technologies where the core IP or the majority of the IP is owned by an open source foundation. So whether that’s Apache or the Cloud Native Computing Foundation, whatever they may be. It’s one thing to have an open source license. It’s another thing to have strong governance and strong IP and copyright protection.

What are the challenges for Instaclustr in taking an open source project and turning it into an enterprise-grade DBaaS?

Bromhead: The open source versus enterprise grade production argument is starting to become a little bit of a false dichotomy to some degree. One thing we’ve been super focused on in the open source space around Cassandra is getting it to be more enterprise-grade and doing it in an open source way.

So a great example of that is: We have released a bunch of authentication improvements to Apache Cassandra that typically you only see in the enterprise distributions. We’ve also released backup and audit capabilities as well.

It’s one thing to have the features and to be able to tick the feature box as you kind of go down the list. It’s another thing to run a technology in a production-grade way. We take a lot of the pain out of that, in an easily reproducible, repeatable manner so that our support team can make sure that we’re delivering on our core support promises. Some of the challenges of getting stuff set up in a production-grade manner are going to get a little bit easier, particularly with the rise of Kubernetes.

The core challenge, however, for a lot of companies is actually just the expertise of being skilled in particular technologies.

We don’t live in a world where everything just lives on an Oracle or a MySQL database. You know, more and more teams are dealing with two or three or four different databases.

What impact has the COVID-19 pandemic had on Instaclustr?

Bromhead: On the business side of things it has been a mixed bag. As a DBaaS, we’re exposed to many different industries. Some of the people we work with have travel booking websites or event-based businesses, and those have either had to pack up shop or go into hibernation.

On the flip side, we work with a ton of digital entertainment companies, including video game platforms, and that traffic has gone through the roof. We’re also seeing some people turn to Instaclustr as a way to reduce costs, to get out of expensive, unnecessary licensing agreements that they have.

We’re still on a pretty good path for growth this year, so I think that speaks volumes to the resilient nature of the business and the diversity we have in the customer base.

Editor’s note: This interview has been edited for clarity and conciseness.

Go to Original Article

Creating a more accessible world with Azure AI

At Microsoft, we are inspired by how artificial intelligence is transforming organizations of all sizes, empowering them to reimagine what’s possible. AI has immense potential to unlock solutions to some of society’s most pressing challenges.

One challenge is that, according to the World Health Organization, globally only 1 in 10 people with a disability have access to assistive technologies and products. We believe that AI solutions can have a profound impact on this community. To meet this need, we aim to democratize AI to make it easier for every developer to build accessibility into their apps and services, across language, speech, and vision.

In view of the upcoming Bett Show in London, we’re shining a light on how Immersive Reader enhances reading comprehension for people regardless of their age or ability, and we’re excited to share how Azure AI is broadly enabling developers to build accessible applications that empower everyone.

Empowering readers of all abilities

Immersive Reader is an Azure Cognitive Service that helps users of any age and reading ability with features like reading aloud, translating languages, and focusing attention through highlighting and other design elements. Millions of educators and students already use Immersive Reader to overcome reading and language barriers.

The Young Women’s Leadership School of Astoria, New York, brings together an incredible diversity of students with different backgrounds and learning styles. The teachers at The Young Women’s Leadership School support many types of learners, including students who struggle with text comprehension due to learning differences, or language learners who may not understand the primary language of the classroom. The school wanted to empower all students, regardless of their background or learning styles, to grow their confidence and love for reading and writing.


Teachers at The Young Women’s Leadership School turned to Immersive Reader and an Azure AI partner, Buncee, as they looked for ways to create a more inclusive and engaging classroom. Buncee enables students and teachers to create and share interactive multimedia projects. With the integration of Immersive Reader, students who are dyslexic can benefit from features that help focus attention in their Buncee presentations, while those who are just learning the English language can have content translated into their native language.

Like Buncee, companies including Canvas, Wakelet, ThingLink, and Nearpod are also making content more accessible with Immersive Reader integration. To see the entire list of partners, visit our Immersive Reader Partners page. Discover how you can start embedding Immersive Reader into your apps today. To learn more about how Immersive Reader and other accessibility tools are fostering inclusive classrooms, visit our EDU blog.

Breaking communication barriers

Azure AI is also making conversations, lectures, and meetings more accessible to people who are deaf or hard of hearing. By enabling conversations to be transcribed and translated in real-time, individuals can follow and fully engage with presentations.

The Balavidyalaya School in Chennai, Tamil Nadu, India teaches speech and language skills to young children who are deaf or hard of hearing. The school recently held an international conference with hundreds of alumni, students, faculty, and parents. With live captioning and translation powered by Azure AI, attendees were able to follow conversations in their native languages, while the presentations were given in English.

Learn how you can easily integrate multi-language support into your own apps with Speech Translation, and see the technology in action with Translator, with support for more than 60 languages, today.

Engaging learners in new ways

We recently announced the Custom Neural Voice capability of Text to Speech, which enables customers to build a unique voice, starting from just a few minutes of training audio.

The Beijing Hongdandan Visually Impaired Service Center leads the way in applying this technology to empower users in incredible ways. Hongdandan produces educational audiobooks featuring the voice of Lina, China’s first blind broadcaster, using Custom Neural Voice. While creating audiobooks can be a time-consuming process, Custom Neural Voice allows Lina to produce high-quality audiobooks at scale, enabling Hongdandan to support over 105 schools for the blind in China like never before.

“We were amazed by how quickly Azure AI could reproduce Lina’s voice in such a natural-sounding way with her speech data, enabling us to create educational audiobooks much more quickly. We were also highly impressed by Microsoft’s commitment to protecting Lina’s voice and identity.”—Xin Zeng, Executive Director at Hongdandan

Learn how you can give your apps a new voice with Text to Speech.

Making the world visible for everyone

According to the International Agency for the Prevention of Blindness, more than 250 million people are blind or have low vision across the globe. Last month, in celebration of the United Nations International Day of Persons with Disabilities, Seeing AI, a free iOS app that describes nearby people, text, and objects, expanded support to five new languages. The additional language support for Spanish, Japanese, German, French, and Dutch makes it possible for millions of blind or low vision individuals to read documents, engage with people around them, hear descriptions of their surroundings in their native language, and much more. All of this is made possible with Azure AI.

Try Seeing AI today or extend vision capabilities to your own apps using Computer Vision and Custom Vision.

Get involved

We are humbled and inspired by what individuals and organizations are accomplishing today with Azure AI technologies. We can’t wait to see how you will continue to build on these technologies to unlock new possibilities and design more accessible experiences. Get started today with a free trial.

Check out our AI for Accessibility program to learn more about how companies are harnessing the power of AI to amplify capabilities for the millions of people around the world with a disability.

Author: Microsoft News Center

Amazon cloud database and data analytics expand

Amazon Web Services is quite clear about it: it wants organizations of all sizes, with nearly any use case, to run databases in the cloud.

At the AWS re:Invent 2019 conference in Las Vegas, the cloud giant outlined the Amazon cloud database strategy, which hinges on wielding multiple purpose-built offerings for different use cases. 

AWS also revealed new services on Dec. 3, the first day of the conference, including the Amazon Managed Apache Cassandra Service, a supported cloud version of the popular Cassandra NoSQL database. The vendor also unveiled several new features for the Amazon Redshift data warehouse, providing enhanced data management and analytics capabilities.

“Quite simply, Amazon is looking to provide one-stop shopping for all data management and analytics needs on AWS,” said Carl Olofson, an analyst at IDC. “For those who are all in for AWS, this is all good. For their competitors, such as Snowflake competing with Redshift and DataStax competing with the new Cassandra service, this will motivate a stronger competitive effort.”

Amazon cloud database strategy

AWS CEO Andy Jassy, in his keynote, detailed the rationale behind Amazon’s cloud database strategy and why one database isn’t enough.

“A lot of companies primarily use relational databases for every one of their workloads, and the day of customers doing that has come and gone,” Jassy said.

There is too much data, cost and complexity involved in using a relational database for all workloads. That has sparked demand for purpose-built databases, according to Jassy.

AWS CEO Andy Jassy gives the keynote at the AWS re:Invent 2019 conference

For example, Jassy noted that ride sharing company Lyft has millions of drivers and geolocation coordinates, which isn’t a good fit for a relational database.

For the Lyft use case and others like it, there is a need for a fast, low-latency key-value store, which is why AWS has the DynamoDB database. For workloads that require sub-microsecond latency, an in-memory database is best, and that is where ElastiCache fits in. For those looking to traverse relationships across large, highly connected datasets, a graph database is a good option, which is what the Amazon Neptune service delivers. DocumentDB, on the other hand, is a document database intended for those who work with documents and JSON.
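To make the key-value pattern concrete, here is a minimal sketch of DynamoDB-style access in plain Python: rows are fetched directly by a partition key (a driver ID) with a hash lookup, no joins or table scans. The driver IDs, field names and coordinates are invented for illustration and are not an actual Lyft or AWS schema.

```python
from collections import defaultdict

# Toy key-value store: partition key -> list of geolocation pings.
locations = defaultdict(list)

def record_location(driver_id: str, lat: float, lon: float, ts: int):
    """Append a geolocation ping under the driver's partition key."""
    locations[driver_id].append({"lat": lat, "lon": lon, "ts": ts})

def latest_location(driver_id: str):
    """O(1) hash lookup by key, then pick the newest ping."""
    pings = locations[driver_id]
    return max(pings, key=lambda p: p["ts"]) if pings else None

record_location("driver-17", 37.77, -122.42, ts=100)
record_location("driver-17", 37.78, -122.41, ts=160)
```

The point of the sketch is the shape of the access path: every read is addressed by a single key, which is what lets key-value stores keep latency low and predictable at scale.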


“Swiss Army knives are hardly ever the best solution for anything other than the most simple tasks,” Jassy said, referring to the classic multi-purpose tool. “If you want the right tool for the right job that gives you differentiated performance productivity and customer experience, you want the right purpose-built database for that job.”

Amazon Managed Apache Cassandra Service

While AWS offers many different databases as part of the Amazon cloud database strategy, one variety it did not possess was Apache Cassandra, a popular open source NoSQL database.

It’s challenging to manage and scale Cassandra, which is why Jassy said he sees a need for a managed version running as an AWS service. Amazon Managed Apache Cassandra Service launched as a preview on Dec. 3, with general availability set for sometime in 2020.

With the managed service there are no clusters for users to manage, and the platform provides single-digit millisecond latency, Jassy noted. He added that existing Cassandra tools and drivers will all work, making it easier for users to migrate on-premises Cassandra workloads to the cloud.
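Part of what makes Cassandra hard to operate yourself is its distributed design: each row's partition key is hashed onto a token ring, and the nodes that own the following ring positions store its replicas. The toy sketch below illustrates that idea only; it uses MD5 and invented node names, whereas real Cassandra defaults to the Murmur3 partitioner and virtual nodes.

```python
import bisect
import hashlib

class HashRing:
    """Toy consistent-hash ring in the spirit of Cassandra-style
    partitioning. Illustrative only, not Cassandra's actual scheme."""

    def __init__(self, nodes, replicas=3):
        self.replicas = replicas
        # Each node gets a token; the ring is the sorted token list.
        self.ring = sorted((self._hash(n), n) for n in nodes)

    @staticmethod
    def _hash(key: str) -> int:
        return int.from_bytes(hashlib.md5(key.encode()).digest()[:8], "big")

    def nodes_for(self, partition_key: str):
        """Walk clockwise from the key's token, collecting distinct owners."""
        token = self._hash(partition_key)
        idx = bisect.bisect(self.ring, (token, ""))
        owners = []
        for i in range(len(self.ring)):
            node = self.ring[(idx + i) % len(self.ring)][1]
            if node not in owners:
                owners.append(node)
            if len(owners) == self.replicas:
                break
        return owners

ring = HashRing(["node-a", "node-b", "node-c", "node-d"], replicas=3)
owners = ring.nodes_for("driver:42")
```

Because placement is pure arithmetic over the ring, any node can route any request, which is also why rebalancing, repair and expansion are the operational chores a managed service absorbs.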

Redshift improvements

AWS also detailed a series of moves at the conference that enhance its Redshift data warehouse platform. Among the new features Jassy talked about was Lake House, which enables data queries not just in local Redshift nodes but also across multiple data lakes and S3 cloud storage buckets.

“Not surprisingly, as people start querying across both Redshift and S3 they also want to be able to query across their operational databases where a lot of important data sets live,” Jassy said. “So today, we just released something called federated query which now enables users to query across Redshift, S3 and our relational database services.”

Storage and compute for a data warehouse are closely related, but there is often a need to scale them independently. To that end, AWS announced as part of the Amazon cloud database strategy its new Redshift RA3 instances with managed storage. Jassy explained that as users exhaust the storage available in a local Redshift instance, the RA3 service will move the less frequently accessed data over to S3.
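The RA3 behavior Jassy describes is, in essence, recency-based tiering. The sketch below models the concept with a least-recently-used policy in plain Python; the class, tier names and block IDs are invented for illustration, and the real service's placement logic is not public.

```python
from collections import OrderedDict

class TieredStore:
    """Minimal sketch of hot/cold tiering: a bounded 'local' tier keeps
    the most recently accessed blocks, everything else spills to a
    'cold' tier standing in for S3. Illustrative names throughout."""

    def __init__(self, local_capacity: int):
        self.local_capacity = local_capacity
        self.local = OrderedDict()   # hot: recently used blocks
        self.cold = {}               # cold: spilled blocks ("S3")

    def put(self, block_id, data):
        self.local[block_id] = data
        self.local.move_to_end(block_id)
        self._spill()

    def get(self, block_id):
        if block_id in self.local:
            self.local.move_to_end(block_id)   # refresh recency
            return self.local[block_id]
        data = self.cold.pop(block_id)         # pull back into hot tier
        self.put(block_id, data)
        return data

    def _spill(self):
        while len(self.local) > self.local_capacity:
            victim, data = self.local.popitem(last=False)  # least recent
            self.cold[victim] = data

store = TieredStore(local_capacity=2)
for block_id in ("a", "b", "c"):
    store.put(block_id, f"data-{block_id}")
```

After the three puts, block "a" has been spilled to the cold tier; reading it pulls it back into local storage and evicts the new least-recently-used block.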

Redshift AQUA

As data is spread across different resources, there is a need to accelerate query performance. Jassy introduced the new Advanced Query Accelerator (AQUA) for Redshift to help meet that challenge.

Jassy said that AQUA provides an innovative, hardware-accelerated cache to improve query performance. With AQUA, AWS has built a high-speed cache architecture on top of S3 that scales out in parallel across many different nodes. Each node hosts custom-designed AWS processors to speed up operations.

“This makes your processing so much faster that you can actually do the compute on the raw data without having to move it,” Jassy said.


Top Office 365 MFA considerations for administrators

With the rise in data breach incidents reported by companies of all sizes, it doesn’t take much effort to find a cache of leaked passwords that can be used to gain unauthorized access to email or another online service.

Administrators can require users to produce complex passwords, change them frequently and set a different password for each application or system. It’s a helpful way to keep hackers from guessing a login, but it’s a practice that can backfire. Many users struggle to memorize password variations, which tends to lead to one complex password used across multiple systems. Industrious hackers who find a password dump can assume some end users will use the same password — or a variation of it — across multiple workloads online, making it easier to pry their way into other systems.
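The reuse risk described above is one reason well-run services store salted, deliberately slow password hashes rather than passwords: a random per-user salt means the same password produces a different stored value in every system, so a hash leaked from one service cannot be replayed or cross-checked against another. A minimal sketch using only Python's standard library (the iteration count and salt size here are illustrative choices, not a policy recommendation):

```python
import hashlib
import hmac
import os

def hash_password(password: str, salt: bytes = None):
    """Derive a salted PBKDF2-HMAC-SHA256 hash; returns (salt, digest)."""
    salt = salt or os.urandom(16)   # fresh random salt per user
    digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 200_000)
    return salt, digest

def verify_password(password: str, salt: bytes, expected: bytes) -> bool:
    """Recompute with the stored salt; compare in constant time."""
    candidate = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 200_000)
    return hmac.compare_digest(candidate, expected)

# The same password hashed twice yields two unrelated stored values.
salt_a, hash_a = hash_password("correct horse battery staple")
salt_b, hash_b = hash_password("correct horse battery staple")
```

Even so, hashing only limits the damage of a breach; it does not stop a user from reusing a password, which is why the article turns to MFA next.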

IT departments in the enterprise realize that unless they implement specific password policies and enforce them, their systems may be at risk of a hack attempt. To mitigate these risks, many administrators will try multifactor authentication (MFA) products to address some of the identity concerns. MFA is the technology that adds another layer of authentication after users enter their password to confirm their identity, such as a biometric verification or a code sent via text to their phone. An organization that has moved its collaboration workloads to Microsoft’s cloud has a few Office 365 MFA options.

When considering an MFA product, IT administrators must consider several key areas, especially when some of the services they may subscribe to, such as Microsoft Azure and Office 365, include MFA functionality from Microsoft. Depending on the level of functionality needed and services covered by MFA, IT administrators might consider selecting a third-party vendor, even when that choice will require more configuration work with Active Directory and cloud services. IT workers unfamiliar with MFA technology can look over the following areas to help with the selection process.


Choosing the right authentication options for end users

IT administrators must investigate what will work best for their end users because there are several options to choose from when it comes to MFA. Some products use phone calls for confirmation, code via text messaging, key fobs, an authenticator app and even facial recognition. Depending on what the consensus is in the organization, the IT decision-makers have to work through the evaluation process to make sure the vendor supports the option they want.
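Of the options above, authenticator apps generally implement the TOTP algorithm from RFC 6238: a shared secret and the current 30-second time window are fed through HMAC-SHA1, and a few truncated digits of the result become the one-time code. A minimal standard-library sketch of the algorithm, not any particular vendor's implementation:

```python
import hashlib
import hmac
import struct
import time

def totp(secret: bytes, timestamp=None, step=30, digits=6) -> str:
    """Time-based one-time password (RFC 6238) over HMAC-SHA1."""
    if timestamp is None:
        timestamp = int(time.time())
    counter = timestamp // step            # current 30-second window
    msg = struct.pack(">Q", counter)       # 8-byte big-endian counter
    digest = hmac.new(secret, msg, hashlib.sha1).digest()
    offset = digest[-1] & 0x0F             # dynamic truncation (RFC 4226)
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

# RFC 6238 reference vector: ASCII secret "12345678901234567890" at t=59s.
code = totp(b"12345678901234567890", timestamp=59, digits=8)  # "94287082"
```

Because both sides derive the code from the shared secret and the clock, no network round trip is needed at login time, which is one reason app-based codes are often preferred over SMS delivery.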

Identifying which MFA product supports cloud workloads

More organizations have adopted some cloud service, such as Office 365, Azure, AWS and other public clouds. The MFA product must adapt to the needs of the organization as it adds more cloud services. While Microsoft offers its own MFA technology that works with Office 365, other vendors such as Duo Security — owned by Cisco — and Okta support Office 365 MFA for companies that want to use a third-party product.

Potential problems that can affect Office 365 MFA users

Using Office 365 MFA helps improve security, but there is potential for trouble that blocks access for end users. This can happen when a phone used for SMS confirmation breaks or is out of the user’s possession. Users might not gain access to the system or the services they need until they recover their device or change their MFA configuration.

Another possible problem with the authentication process can occur on the provider’s end, if the MFA product itself goes down and blocks access for everyone who has enabled MFA. IT should discuss and plan for these possibilities before implementing Office 365 MFA so the appropriate steps can be taken if these issues arise.

Evaluate the overall costs and features related to MFA

For the most part, MFA products are subscription-based, charging a monthly fee per user. Some vendors, such as Microsoft, bundle MFA with self-service identity, access management, access reporting and self-service group management. Third-party vendors might offer different MFA features; as one example, Duo Security includes self-enrollment and management, user risk assessment with phishing simulation, and device access monitoring and identification with its MFA product.

Single sign-on, identity management and identity monitoring are all valuable features that, if included with an MFA offering, should be worth considering when it’s time to narrow the vendor list.


Announcing the Lv2-Series VMs powered by the AMD EPYC™ processor

Providing a diverse set of virtual machine sizes and the latest hardware is crucial to making sure that our customers get industry-leading performance for every one of their workloads. Today, I am excited to announce that we are introducing the next generation of storage-optimized L-series VMs powered by AMD EPYC™ processors.

We’re thrilled to have AMD available as part of the Azure Compute family. We’ve worked closely with AMD to develop the next generation of storage optimized VMs called Lv2-Series, powered by AMD’s EPYC™ processors. The Lv2-Series is designed to support customers with demanding workloads like MongoDB, Cassandra, and Cloudera that are storage intensive and demand high levels of I/O.

Lv2-Series VMs use the AMD EPYC™ 7551 processor, featuring a core frequency of 2.2 GHz and a maximum single-core turbo frequency of 3.0 GHz. Lv2-Series VMs will come in sizes ranging up to 64 vCPUs and 15 TB of local resource disk.

If you’re interested in being one of the first to try out these new sizes, sign up for the preview of Lv2 VMs.

Size   vCPUs   Memory (GiB)   Local SSD
L8s      8          64        1 x 1.9 TB
L16s    16         128        2 x 1.9 TB
L32s    32         256        4 x 1.9 TB
L64s    64         512        8 x 1.9 TB

See ya around,


Microsoft IoT Central broadens reach with simplicity of SaaS for enterprise-grade IoT

IoT is fast becoming a key strategy for companies of all sizes, as they strive to get closer to their customers and offer great product experiences—all while reducing operational expenditures. Until now, however, it’s been a major hurdle to gain the skills needed to build and manage connected solutions. This obstacle has been further compounded by concerns about security, scalability, and difficulties finding an IoT solution that has built-in best practices gained from years of experience in the sector.

This is why today we are pleased to launch the public preview of Microsoft IoT Central to address these barriers. Microsoft IoT Central is the first true highly scalable IoT software-as-a-service (SaaS) solution that offers built-in support for IoT best practices and world-class security along with the reliability, regional availability, and global scale of the Microsoft Azure cloud. Microsoft IoT Central allows companies worldwide to build production-grade IoT applications in hours—without having to manage all the necessary back-end infrastructure or learn new skills. In short, Microsoft IoT Central enables everyone to benefit from IoT.

IoT Solutions without the hassle

Microsoft IoT Central takes the hassle out of creating an IoT solution by eliminating the complexities of initial setup as well as the management burden and operational overhead of a typical IoT project. That means you can bring your connected product vision to life faster while staying focused on your customers and products. The end-to-end IoT SaaS solution equips you to harness the “digital feedback loop” to draw better insights from your data and convert them into intelligent actions that result in better products and experiences for your customers.

By reducing the time, skills, and investment required to develop a robust enterprise-grade IoT solution, Microsoft IoT Central also sets you up to quickly reap the powerful business benefits of IoT. You can get started quickly, connecting devices in seconds and moving from concept to production in hours. The complete IoT solution lets you seamlessly scale from a few to millions of connected devices as your IoT needs grow. Moreover, it removes guesswork thanks to simple and comprehensive pricing that makes it easier for you to plan your IoT investments and achieve your IoT goals.

On the security front, Microsoft IoT Central leverages industry-leading privacy standards and technologies to help ensure your data is only accessible to the right people in your organization. With IoT privacy features such as role-based access and integration with Azure Active Directory permissions, you stay in control of your information.

From years of working in the commercial space, we understand organizations’ need to take advantage of existing applications and data to glean richer insights, integrate business workflows, and take more effective actions. So, in the coming months, Microsoft IoT Central will also be able to integrate with customers’ existing business systems—such as Microsoft Dynamics 365, SAP, and Salesforce—to accelerate more proactive sales, service, and marketing.

Several customers have already started building solutions for their businesses with Microsoft IoT Central. Here’s what they have to say:

  • “Small-scale IoT use cases are rare, even though they can have profound social impact. Why? Because each use case has unique needs that in turn require special sensor configurations and secure provisioning to the cloud before the solution can even be turned on. Arrow has simplified this process by bringing together Microsoft’s IoT Central platform and Libelium’s Plug & Sense IoT Toolkits, which help small, medium, and even large businesses get their IoT projects up and running sooner. Microsoft’s IoT Central solution helped us pilot in weeks, at minimal cost, a public school environmental monitoring solution that would have taken a year to develop from scratch. School and government officials can now monitor and improve the safety of public spaces without the cost and duration of typical IoT projects.” – Jeff Reed, PhD, VP Microsoft Global Alliance at Arrow Electronics
  • “Mesh Systems is passionate about the work Microsoft is doing with the release of Microsoft IoT Central. We recognize how Microsoft IoT Central accelerates projects that need entry-level simplicity while also being extendable to meet more complex requirements. We value this level of SaaS offering from Microsoft because it allows us to focus on identifying and iterating on the application business transformation, which is critical across the IoT market.” – Uri Kluk, CTO, Mesh Systems
  • “With Microsoft IoT Central and partner VISEO, we created and deployed IoT solutions quickly, securely, and at scale—with the reach and resources of the global Azure cloud platform. The solution we implemented enables us to collect telemetry data on thousands of our devices. We are now able to do predictive maintenance and ensure our firmware is always up to date—critical advantages in the health field. With this data, we are better able to serve our market and adapt our service to the needs of our customers.” – Philippe Angotta, Director of Customer Relations, LPG
  • “Patterson Companies believes there is an opportunity to realize significant improvement in dental device fix/repair service-level outcomes for its customers via an IoT Remote Monitoring & Diagnostics solution. The OEMs that manufacture dental devices are actively implementing and enhancing their IoT capabilities to provide ongoing performance data from devices connected at the dental office. Microsoft IoT Central provides a highly configurable and intuitive solution to define the criteria needed to monitor and diagnose any variety of connected devices. This in turn equips Patterson service technicians with current and past performance data, allowing them to transition from a reactive stance to one that is proactive and results in higher levels of customer satisfaction.” – Nate Hill, Principal Architect, Patterson Dental
  • “The Umbra Group is excited to work with Microsoft IoT Central and Microsoft Dynamics 365 Finance & Operations to monitor performance and health of our systems in ways we never have been able to do before. These new tools enable us to integrate commercial, supply chain, production, and product data from the time an order is placed all the way through to serving up insights for how and when to service a device. Umbra expects to see tremendous benefits during product development and testing by being able to see and act on real-time performance data regardless of location. Our customers will be thrilled to be able to have maintenance activities performed during scheduled machine down time instead of experiencing interruptions in service, since machine conditions will now be predictable.” – David Manzanares, Vice President of Engineering, Umbra Group
  • “Digital transformation will drive mass-scale growth of the IoT market. Scalable, secure, reliable, and pay-per-use solutions are needed to handle these volumes efficiently. ICT Group has a strong focus on the Industrial IoT market, and Microsoft IoT Central offers us the ability to create insights and add real business value. ICT Group has been involved in the development of Microsoft IoT Central from the start. Microsoft IoT Central has enabled us to gather more valuable insights to inform how we manage our products with this digital feedback loop.” – Aart Wegink, Director Digital Transformation, ICT Group, The Netherlands

Microsoft is leading the way in IoT innovation, and we are committed to introducing new features at a rapid pace so customers can quickly and continually reap benefits and stay ahead of the game. As a true IoT SaaS solution, Microsoft IoT Central gives customers automatic access to new features as they’re released. It also frees customers from updating the underlying hardware.

Azure IoT Hub Device Provisioning Service now available

To further simplify IoT, we are also announcing the availability of Azure IoT Hub Device Provisioning Service. Azure IoT Hub Device Provisioning Service enables zero-touch device provisioning and configuration of millions of devices to Azure IoT Hub in a secure and scalable manner. Device Provisioning Service adds important capabilities that, together with Azure IoT Hub device management, help customers easily manage all stages of the IoT device lifecycle.

For a deeper look into the features of Microsoft IoT Central, check out the new Microsoft IoT Central website and demo, and start your free trial today. Also, for a deeper dive be sure to see our blog post, “Microsoft IoT Central delivers low-code way to build IoT solutions fast.”

Tags: Azure IoT Hub, Azure IoT Hub Device Provisioning Service, device management, Microsoft IoT Central

Dynamics 365 CRM can help drive the digital transformation process

ORLANDO, Fla. — Companies of all sizes in all industries are currently in a state of transition: Either update those legacy systems or risk being left behind.

But successfully enacting a digital transformation process can be time-consuming, costly and difficult, depending on the workforce. Yet, as consumers demand personalized customer experiences and services, the companies that can adapt will continue to thrive.

“Personalization is becoming the de facto standardization,” said Winston Hait, senior product marketing manager for Microsoft. Hait was speaking at a session on Dynamics 365 for retail at Microsoft Ignite. Dynamics 365 CRM had a substantial presence at the conference, with Microsoft CEO Satya Nadella highlighting the potential for Dynamics 365 CRM moving forward.

Over the past year and a half, Microsoft has brought together its ERP and CRM systems into Dynamics 365, while also releasing industry-specific Dynamics software, including Dynamics 365 for retail, for talent, for service, and for finance and operations — and with an eye toward further industry segmentation.

“Retail used to be one of the solutions within Dynamics for finance and operations,” Hait said. “But we began to break out the different workloads — and this is just the beginning. We’ll have a finance-specific [product], we’ll have an operations [product] and a warehousing [product].”

And to help keep up with that modern digital transformation process, Dynamics 365 products will also see an artificial intelligence (AI) influence embedded into the software, according to Nadella.

“It’s not about building individual tools, but creating that platform to drive digital transformation,” Nadella said. “When you create such a rich data asset, what you enable is AI-first workloads.”

Updating legacy systems

At one of the sessions focusing on the digital transformation process, there were representatives from four different companies — two in the public sector, one in food services and one in finance — all of which were going through some digital transformation process.

"It's not about building individual tools, but creating that platform to drive digital transformation."
— Satya Nadella, CEO, Microsoft

“We’re moving from a legacy-based Siebel system to a cloud-based Dynamics 365 platform,” said Pierre Nejam, program director at New York City’s Department of Information Technology and Telecommunications. “We have some difficult customers — New Yorkers — and we’re always trying to find better ways to do things.”

Others, like Richard Wilson, head of single customer view and CRM product for Wesleyan, a financial services company based in the United Kingdom, are tasked with updating decades of legacy systems.

“We’ve been around for 176 years,” Wilson said, “and so has some of our technology.”

In Wilson's view, the financial industry tends to adopt technology more slowly than other industries because of complex processes and strict regulations. But when modernizing its platform to benefit its customers, the company had to look beyond financial services for its digital transformation process.

“Everyone’s expectations are changing,” Wilson said. “We have to stop comparing ourselves to other financial companies. You have to compare yourself to what everyone is doing.”

Transformation ‘is a permanent effort’

Digital transformations aren’t a one-time IT project or objective that is ever really completed, according to Nejam.

“In a city like New York, transformation isn’t something you do once or twice; you do it all the time,” Nejam said. “It’s a permanent effort.”

This is why Microsoft has continuously updated and modernized its business applications products, offering hybrid Dynamics 365 options and cloud-based Office suite products.

“It’s a matter of taking the customer relationships and changing the current business model and enhancing new technology,” said Kathy Piontek, global Microsoft executive for IBM’s global business services. “We hear that folks are drowning in data. There’s an explosion of data, and we want to take it and leverage that data and use it to make decisions moving forward.”

And while data and the mobility of cloud technology are key drivers in a digital transformation process, the focus, according to those going through the transformation, should remain on the customer.

“People have no patience for being behind or slowed down,” said Mary Alice Callaway, vice president of sales and marketing for ABC/Interbake Foods LLC, which is a manufacturer of Girl Scout cookies. “There’s really no choice on the business side for much longer.”

‘Not for the sake of change’

While a digital transformation process can sound like fun to the IT department and act as a potential source of job security as new processes are put into place, the project needs a purpose beyond adopting new technology for its own sake.

“We’re doing a transformation not for the sake of change, but to meet our needs,” said John Harrison, director of information technology for New Jersey’s Department of Community Affairs.

When approaching this project, Harrison said he had familiarity with Salesforce, but ultimately chose Dynamics 365 after comparing several CRM platforms.

“With Dynamics, we can keep modernizing,” Harrison said. “It’s an evergreen system and is constantly updated.”

Dynamics 365 CRM pricing varies depending on the product bundle and whether your company is looking for a business or enterprise edition. More pricing information can be found at Microsoft’s Dynamics 365 website.

Span multiple services with Office 365 data loss prevention policies

As Office 365 gains more traction among organizations of all sizes, Microsoft refines the collaboration platform's security features to help administrators secure their perimeters. Office 365 now includes a data loss prevention feature that works across multiple services.

Administrators can enlist data loss prevention policies to scan both message text and message attachments for sensitive data, such as social security numbers or credit card numbers. These policies can now extend into Microsoft Office attachments and scan files in SharePoint and OneDrive for Business.

Build the data loss prevention policies

In the Exchange admin center, administrators can choose to build a single data loss prevention (DLP) policy (Figure 1) in the Office 365 Security and Compliance Center to guard data and messages in SharePoint, OneDrive and Exchange, or stick with the existing DLP option.

Office 365 DLP policy
Figure 1. Administrators can create unified data loss prevention policies through the Office 365 Security and Compliance Center.

Administrators develop data loss prevention policies from rules. Each rule has a condition and an action. Administrators can apply the policy to specific locations within Office 365.

To create a DLP policy, open the Office 365 Security & Compliance Admin Center, expand the Data loss prevention container and click on the Policy container. Then click on the Create a policy button.

Now choose the information to protect. As is the case in Exchange Server, the Security & Compliance Center in Office 365 contains DLP templates to assist with regulatory compliance. For example, there are templates designed for the financial services industry (Figure 2) as well as templates meant for healthcare providers. Administrators can always create a custom policy to fit organizational needs.

DLP policy templates
Figure 2. Administrators can use templates in the Office 365 Security & Compliance portal or choose the custom setting to build their own data loss prevention policies.

Name the policy

Naming the policy also means adding a description to it. In some cases, Office 365 automatically assigns a policy name, which the administrator can modify if necessary.

Choose the locations to apply the policy. By default, data loss prevention policies extend to all locations within Office 365, but administrators can also specify policy locations. In Figure 3, manual location assignments allow for finer control. Administrators can choose which Office 365 services to apply the policy to and whether to include or exclude specific SharePoint sites or OneDrive accounts. For example, it may be permissible for members of the legal team to transmit sensitive information, but not for a salesperson.

DLP locations
Figure 3. An administrator can choose which services to apply the new policy to and make adjustments.

While this wizard does not expose the individual rules that make up a policy, the Advanced Settings option allows the administrator to edit the policy rules and create additional ones.

Refine the policy settings

Next, customize the types of sensitive information to protect with DLP policies. Figure 4 shows one policy that detects when a worker sends a message that shares credit card numbers outside of the organization. The administrator can configure the policy to monitor the use of other data types. Data loss prevention policies can also monitor when sensitive information gets shared within the organization.

DLP policy wizard
Figure 4. The DLP policy wizard allows administrators to customize the types of sensitive information to protect.

The wizard allows the administrator to choose an action to take when sensitive information is shared, such as displaying a policy tip, blocking the content from being shared, or sending a report to someone in the organization.

After the configuration process, the wizard will ask whether to enable the policy right away or test it.

The last step in the process is to review your selections and, if everything appears to be correct, click the Create button to generate the data loss prevention policy.
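The wizard is the simplest route, but the same kind of unified policy can also be sketched in Security & Compliance Center PowerShell. This is a minimal sketch, assuming an authenticated remote session is already established; the policy and rule names are illustrative:

```powershell
# Sketch: create a unified DLP policy covering Exchange, SharePoint and OneDrive
New-DlpCompliancePolicy -Name "Credit Card Policy" `
    -ExchangeLocation All -SharePointLocation All -OneDriveLocation All

# Add a rule that blocks credit card numbers shared outside the organization
New-DlpComplianceRule -Name "Block external credit cards" `
    -Policy "Credit Card Policy" `
    -ContentContainsSensitiveInformation @{Name = "Credit Card Number"} `
    -AccessScope NotInOrganization `
    -BlockAccess $true `
    -GenerateIncidentReport SiteAdmin
```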

Next Steps

How to craft the best DLP policies

Choose the right DLP template in Exchange 2013 SP1

The top email security gateways on the market

Essential Guide

What data loss prevention systems and tactics can do now


Using extended srcset and the picture element to tailor your image to every device and layout

Starting in Windows Insider Preview build 10547, Microsoft Edge supports srcset, sizes, and picture―the suite of technologies that make up responsive images. With these, you can tailor your image size and art direction to adapt to diverse devices and layouts. Prior to these features, you needed to provide a full responsive images solution via JavaScript, which can result in duplicate downloads and slower load times due to having to execute logic on the UI thread.

Want to skip right to it? Visit our interactive Test Drive demo to see srcset and picture in action.
Take me there!

An image is fully responsive if it has three principal characteristics: First, it should download at an appropriate resolution to provide the best quality image for the user's device, based on the expected layout dimensions of the image. Second, it should be served in an efficient format that is supported by the user's browser, to achieve smaller file sizes without compromising quality. Finally, the focus of the image should adapt to device and viewport dimensions to ensure the primary subject of an image is always prominently in view. The combination of srcset, sizes, and picture allows you to embed images with all of these characteristics so that your users have a great experience on any device or screen size. Let's dig into the new technologies to understand how each of these complement and build upon one another to provide a comprehensive responsive images solution.

The width descriptor

The first characteristic we discussed above is resolution switching. A responsive site should deliver the most appropriate image to every device, taking into account how the image is being displayed and even what sort of network its users are on.

Basic srcset provides simple resolution switching, but only takes the device resolution into account. This is sufficient for sites with a static layout, but in a responsive layout it can result in the browser downloading a larger asset than the image's size on screen requires.

With srcset and sizes, the browser takes both the device resolution and the layout size of the image into account when selecting the best image. Normally browsers don’t know anything about an image until they download it, so sizes allows a site to provide some of that information ahead of time so the browser can make smarter decisions about which images it should download.

Let’s take a look at an example using the original srcset syntax:

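A minimal sketch of the original srcset syntax using pixel density descriptors (the filenames are assumptions):

```html
<img srcset="hero.jpg 1x, hero-1_5x.jpg 1.5x, hero-2x.jpg 2x"
     alt="Featured painting">
```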

Note: For simplicity, we didn’t include the src attribute. In practice, this should be included so there is a fallback image for browsers that do not support srcset.

Using srcset, we tell the browser to fetch the 1.5x or 2x images if the user is on a higher definition device. Unfortunately, the browser will always select the same image for a given device, even if one of the other images would have been sufficient for the current layout of the page.

Now let's take a look at what this would look like using extended srcset:

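A sketch of the same image using width descriptors instead (filenames and widths are assumptions, matched to the hero renditions used later in this post):

```html
<img srcset="hero-small.jpg 480w, hero-med.jpg 960w, hero-large.jpg 1920w"
     alt="Featured painting">
```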

Here, we have replaced the pixel densities with the intrinsic width of each image followed by a w, known as the width descriptor. So how does the browser determine the best image to show? That's where the sizes attribute comes in.

The sizes attribute

At this point, we only have a list of images and their respective widths in pixels. The sizes attribute tells the browser how to organize them and determine the best image in a way that is backwards compatible with the original srcset. To make explaining sizes easier, let’s look at an example:

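A sketch with a sizes attribute (the 350px breakpoint and the 200px/400px widths follow the explanation below; filenames are assumptions):

```html
<img srcset="hero-small.jpg 480w, hero-med.jpg 960w, hero-large.jpg 1920w"
     sizes="(max-width: 350px) 200px, 400px"
     alt="Featured painting">
```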

sizes is a layout hint to the browser that reflects what the expected width of the image will be after layout. When the browser encounters an image with a sizes attribute, it goes through the media conditions in order from left to right. So in this case, if the device viewport has a max-width of 350px, the image will be 200px wide; otherwise, it will be 400px wide.

If you have used media queries this should look familiar and it works mostly the same with one major caveat: you can’t use percentages within sizes. This is because the browser evaluates sizes well before layout occurs. Take for example:

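A sketch of a sizes value that will not work because it uses a percentage (hypothetical filenames):

```html
<!-- Invalid: percentages are not allowed in sizes, because the parent
     container's width is unknown when sizes is evaluated -->
<img srcset="hero-small.jpg 480w, hero-med.jpg 960w"
     sizes="(max-width: 350px) 50%, 100%"
     alt="Featured painting">
```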

When sizes is evaluated, the browser doesn’t know what the width of the parent container is, so the 50% value is useless.

Tying sizes together with srcset

Now that we know what srcset with a width descriptor is and what sizes is, let’s put them together and see how the browser handles this. Let’s start off with a simple example:

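A simple sketch (the 100/200/400 widths are chosen so the arithmetic below works out; the filenames are assumptions):

```html
<img srcset="tile-100.jpg 100w, tile-200.jpg 200w, tile-400.jpg 400w"
     sizes="100px"
     alt="Tile">
```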

The best way to think about this example is that the browser uses the output of sizes as the input to srcset to normalize the width descriptors. The formula for normalization is pixelDensity = width / computed sizes value, so in the case of our example we would divide each width by 100. After the conversion, our srcset should look familiar:

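The normalized equivalent (100/100 = 1x, 200/100 = 2x, 400/100 = 4x):

```html
<img srcset="tile-100.jpg 1x, tile-200.jpg 2x, tile-400.jpg 4x"
     alt="Tile">
```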


From here the selection algorithm, which is specific to each browser, runs as it did for the original srcset. When selecting an image, the browser tries to find the image that best matches its optimal density (as defined by the device's display properties). After the selection occurs, the browser sets the intrinsic size of the image by scaling the image based on the resulting pixel density divided by the optimal density. In our example above, if a device has an optimal density of 1x, the browser selects the first image, resulting in a 100px image (100/1).
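The normalization arithmetic can be sketched in a few lines of script. This is a toy model of the formula, not the browser's actual selection algorithm:

```javascript
// Toy model of width-descriptor normalization: divide each width
// descriptor by the computed sizes value to get a pixel density.
function normalize(widths, computedSize) {
  return widths.map(function (w) {
    return w / computedSize;
  });
}

// 100w, 200w, 400w with sizes resolving to 100px
var densities = normalize([100, 200, 400], 100); // densities is [1, 2, 4]

// With an optimal density of 1x, the 1x candidate (the 100px image) wins,
// and its intrinsic size is 100 / 1 = 100px.
```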

How to utilize currentSrc

The currentSrc property provides a means to tell which source was selected from either srcset or src. It is resolved asynchronously and will return the selected source no matter where it was found.

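A sketch of reading the selected source once the image has loaded (the id and filenames are assumptions):

```html
<img srcset="tile-100.jpg 1x, tile-200.jpg 2x" alt="Tile" id="hero">
<script>
  var img = document.getElementById("hero");
  // currentSrc is resolved asynchronously, so read it after the load event
  img.addEventListener("load", function () {
    console.log(img.currentSrc);
  });
</script>
```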

Note: We mentioned in June that we had to remove the currentSrc API, which gives you access to the selected source in srcset, but since we’ve implemented picture we have added it back in. This will be available starting in an upcoming Windows Insider preview build.

Introducing the picture element

Now that we have gone over the building blocks of how to do resolution switching for images, let’s talk about art direction. Often, you not only want to select a different image based on pixel density, but also want to select the image that ensures the main subject of the image is visible. You can do this with the picture element.

The picture element cannot stand alone, which is by design. You must include an img element inside the picture element as a fallback for browsers that don't support the picture element. In most browsers, when the HTML parser comes across a tag that it doesn't recognize, it converts the tag to an inline box (e.g. <span>). That means older browsers (including IE11) will ignore the picture element and expose the img element within. Perfect progressive enhancement!

The source element

The source element is what gives picture its power. Here is an example of a source element:

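A sketch of a source element using both media and type (image/vnd.ms-photo is the JPEG-XR MIME type; the breakpoint and filenames are assumptions):

```html
<source media="(min-width: 750px)"
        type="image/vnd.ms-photo"
        srcset="hero-large.jxr 1920w, hero-xlarge.jxr 2500w">
```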

The media attribute takes a media query list and if the media query returns true, then the browser begins to parse srcset and pick the correct image using the logic described earlier. In evaluating source elements, the first match always wins—pay close attention to the order. A good rule of thumb is to start with the largest image first if you are using min-width (a.k.a. “small screen first”) media queries; start smallest first if you’re using max-width (“large screen first”) media queries.

You can use the type attribute to serve specific image formats to browsers that support them. In the example above, only browsers that support the JPEG-XR image format would return true and thus, parse the srcset list. If present, both the media and type attributes need to resolve to true in order to analyze the srcset attribute.

Putting it all together

Now that you have a good sense of how all of this stuff works from a technical standpoint, let’s dig into a practical use case. Suppose you want to produce a blog post describing the story behind a painting, using as much imagery as possible. Your designer mocks up a blog where the featured art piece is in the header with the title on top of it. On viewports smaller than 750 pixels wide, the hero image becomes like a magazine cover, with the image and title filling the viewport. Here is an example wireframe:


Let’s get started with the basic layout. Initially, we’ll start with no responsive images, so we can see how they help us out. Let’s see what we have:
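As a starting point, the non-responsive markup might look like this (a sketch; the class names and filename are assumptions):

```html
<!-- Initial, non-responsive hero: one large image with the title on top -->
<header class="hero">
  <img src="hero-xlarge.jpg" alt="Featured painting">
  <h1>The Story Behind the Painting</h1>
</header>
```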

Improving performance by adding resolution switching

Now that we have the initial layout, it is responsive, but some issues do exist that responsive images can help with. The first issue is that while the image we included will work fine on all devices, it won't be the best experience for people on a slower connection. The image that we are currently using is 2500×1435 and weighs in at 567KB (this is with 60% quality JPG compression), which is only desirable if you are on a high definition device or a large display. Especially on mobile devices, you want to save memory usage and battery life by serving an appropriately sized image. To do this, we create three additional sizes along with the original extra-large version:

Image Dimensions File Size
hero-xlarge.jpg 2500 x 1435 567 KB
hero-large.jpg 1920 x 912 235 KB
hero-med.jpg 960 x 456 62 KB
hero-small.jpg 480 x 228 22 KB

Adding srcset to our hero image will allow the browser to pick the best resource necessary for our viewport’s size, thus improving load time, memory usage and battery life:

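A sketch using the four renditions from the table above (sizes="100vw" assumes the hero fills the viewport width):

```html
<img src="hero-med.jpg"
     srcset="hero-small.jpg 480w, hero-med.jpg 960w,
             hero-large.jpg 1920w, hero-xlarge.jpg 2500w"
     sizes="100vw"
     alt="Featured painting">
```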

We now have a responsive site that loads the necessary image depending on the width of the viewport, which gives us better performance—but we can do better. By using a more efficient image format, we can further reduce the file size for browsers that support it. In this example, we’ll only be using JPEG-XR, supported by Internet Explorer 9+ and Microsoft Edge, but you might consider other formats depending on your use case. We have created JPEG-XR images for the larger images, so now let’s compare them with the other files:

Image Dimensions File Size
hero-xlarge.jpg 2500 x 1435 567 KB
hero-xlarge.jxr 2500 x 1435 277 KB
hero-large.jpg 1920 x 912 235 KB
hero-large.jxr 1920 x 912 121 KB
hero-med.jpg 960 x 456 62 KB
hero-med.jxr 960 x 456 42 KB
hero-small.jpg 480 x 228 22 KB

Note: In this example, I did not convert the hero-small.jpg because the benefits were negligible.

In order to update our solution to both use resolution switching and only serve JPEG-XR to supported browsers, we need the picture element:

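A sketch combining resolution switching with a JPEG-XR source (only browsers that support image/vnd.ms-photo parse the .jxr list; everyone else falls back to the img element):

```html
<picture>
  <source type="image/vnd.ms-photo"
          srcset="hero-med.jxr 960w, hero-large.jxr 1920w,
                  hero-xlarge.jxr 2500w">
  <img src="hero-med.jpg"
       srcset="hero-small.jpg 480w, hero-med.jpg 960w,
               hero-large.jpg 1920w, hero-xlarge.jpg 2500w"
       sizes="100vw"
       alt="Featured painting">
</picture>
```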

Using the media attribute for art direction

Resolution switching improves the performance of our images, and we can now focus on how the image actually looks when the viewport is at various sizes. On desktop viewports, the layout works exactly as our wireframe suggested, but once we get to smaller viewports you start to see issues where the image is too small and the headline begins to cover up the image.

Animation showing a situation where the image is too small and the headline begins to cover up the image

To make this look like our wireframe, we need to create some new images designed for vertical devices. After creating those images and harnessing the power of the media attribute to have the browser pick one of the vertical images on smaller devices, we end up with the following result:

Animation showing adaptive image with art direction using the picture element

All we did to accomplish this was add the vertical images and the following source to our picture element:

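A sketch of the added source (the vertical filenames and the 750px breakpoint are assumptions based on the wireframe described earlier):

```html
<source media="(max-width: 750px)"
        srcset="hero-vertical-small.jpg 480w, hero-vertical-med.jpg 960w">
```

Because the first matching source wins, this element needs to appear before any wider-viewport sources inside the picture element.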

The combination of extended srcset and the picture element gives you the power to optimize your images for various viewports, MIME type support and even art direction. We're excited to bring this capability to Microsoft Edge and to see how you'll use it on your sites!

To view the live examples shown in this post along with the code to make it happen, we encourage you to check out our Test Drive demo.

Greg Whitworth, Program Manager, Microsoft Edge