Tag Archives: developers

AWS and Microsoft announce Gluon, making deep learning accessible to all developers

New open source deep learning interface allows developers to more easily and quickly build machine learning models without compromising training performance. Jointly developed reference specification makes it possible for Gluon to work with any deep learning engine; support for Apache MXNet available today and support for Microsoft Cognitive Toolkit coming soon.

SEATTLE and REDMOND, Wash. — Oct. 12, 2017 — On Thursday, Amazon Web Services Inc. (AWS), an Amazon.com company (NASDAQ: AMZN), and Microsoft Corp. (NASDAQ: MSFT) announced a new deep learning library, called Gluon, that allows developers of all skill levels to prototype, build, train and deploy sophisticated machine learning models for the cloud, devices at the edge and mobile apps. The Gluon interface currently works with Apache MXNet and will support Microsoft Cognitive Toolkit (CNTK) in an upcoming release. With the Gluon interface, developers can build machine learning models using a simple Python API and a range of prebuilt, optimized neural network components. This makes it easier for developers of all skill levels to build neural networks using simple, concise code, without sacrificing performance. AWS and Microsoft published Gluon’s reference specification so other deep learning engines can be integrated with the interface. To get started with the Gluon interface, visit https://github.com/gluon-api/gluon-api/.

Developers build neural networks using three components: training data, a model and an algorithm. The algorithm trains the model to understand patterns in the data. Because the volume of data is large and the models and algorithms are complex, training a model often takes days or even weeks. Deep learning engines like Apache MXNet, Microsoft Cognitive Toolkit and TensorFlow have emerged to help optimize and speed the training process. However, these engines require developers to define the models and algorithms up front using lengthy, complex code that is difficult to change. Other deep learning tools make model-building easier, but this simplicity can come at the cost of slower training performance.

The Gluon interface gives developers the best of both worlds — a concise, easy-to-understand programming interface that enables developers to quickly prototype and experiment with neural network models, and a training method that has minimal impact on the speed of the underlying engine. Developers can use the Gluon interface to create neural networks on the fly, and to change their size and shape dynamically. In addition, because the Gluon interface brings together the training algorithm and the neural network model, developers can perform model training one step at a time. This means it is much easier to debug, update and reuse neural networks.
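
To make this concrete, here is a minimal sketch of that workflow using the Gluon package bundled with Apache MXNet; the network shape, learning rate, and random example batch are illustrative only:

# A minimal sketch of the Gluon interface in Apache MXNet. The layer
# sizes, learning rate, and random example batch are illustrative only.
import mxnet as mx
from mxnet import autograd, gluon, nd

# Build a network from prebuilt, optimized components in plain Python.
net = gluon.nn.Sequential()
with net.name_scope():
    net.add(gluon.nn.Dense(64, activation='relu'))  # hidden layer
    net.add(gluon.nn.Dense(10))                     # output layer
net.collect_params().initialize(mx.init.Xavier())

loss_fn = gluon.loss.SoftmaxCrossEntropyLoss()
trainer = gluon.Trainer(net.collect_params(), 'sgd', {'learning_rate': 0.1})

# One training step on a random batch. Because the forward pass is
# defined imperatively, it can be run, inspected, and debugged one
# step at a time, and the network's shape can change on the fly.
data = nd.random_normal(shape=(32, 784))
label = nd.array([i % 10 for i in range(32)])
with autograd.record():
    loss = loss_fn(net(data), label)
loss.backward()
trainer.step(batch_size=32)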

“The potential of machine learning can only be realized if it is accessible to all developers. Today’s reality is that building and training machine learning models require a great deal of heavy lifting and specialized expertise,” said Swami Sivasubramanian, VP of Amazon AI. “We created the Gluon interface so building neural networks and training models can be as easy as building an app. We look forward to our collaboration with Microsoft on continuing to evolve the Gluon interface for developers interested in making machine learning easier to use.”

“We believe it is important for the industry to work together and pool resources to build technology that benefits the broader community,” said Eric Boyd, corporate vice president of Microsoft AI and Research. “This is why Microsoft has collaborated with AWS to create the Gluon interface and enable an open AI ecosystem where developers have freedom of choice. Machine learning has the ability to transform the way we work, interact and communicate. To make this happen we need to put the right tools in the right hands, and the Gluon interface is a step in this direction.”

“FINRA is using deep learning tools to process the vast amount of data we collect in our data lake,” said Saman Michael Far, senior vice president and CTO, FINRA. “We are excited about the new Gluon interface, which makes it easier to leverage the capabilities of Apache MXNet, an open source framework that aligns with FINRA’s strategy of embracing open source and cloud for machine learning on big data.”

“I rarely see software engineering abstraction principles and numerical machine learning playing well together — and something that may look good in a tutorial could be hundreds of lines of code,” said Andrew Moore, dean of the School of Computer Science at Carnegie Mellon University. “I really appreciate how the Gluon interface is able to keep the code complexity at the same level as the concept; it’s a welcome addition to the machine learning community.”

“The Gluon interface solves the age-old problem of having to choose between ease of use and performance, and I know it will resonate with my students,” said Nikolaos Vasiloglou, adjunct professor of Electrical Engineering and Computer Science at Georgia Institute of Technology. “The Gluon interface dramatically accelerates the pace at which students can pick up, apply and innovate on new applications of machine learning. The documentation is great, and I’m looking forward to teaching it as part of my computer science course and in seminars that focus on teaching cutting-edge machine learning concepts across different cities in the U.S.”

“We think the Gluon interface will be an important addition to our machine learning toolkit because it makes it easy to prototype machine learning models,” said Takero Ibuki, senior research engineer at DOCOMO Innovations. “The efficiency and flexibility this interface provides will enable our teams to be more agile and experiment in ways that would have required a prohibitive time investment in the past.”

The Gluon interface is open source and available today in Apache MXNet 0.11, with support for CNTK in an upcoming release. Developers can learn how to get started using Gluon with MXNet through tutorials for both beginners and experts at https://mxnet.incubator.apache.org/gluon/.

About Amazon Web Services

For 11 years, Amazon Web Services has been the world’s most comprehensive and broadly adopted cloud platform. AWS offers over 90 fully featured services for compute, storage, networking, database, analytics, application services, deployment, management, developer, mobile, Internet of Things (IoT), Artificial Intelligence (AI), security, hybrid and enterprise applications, from 44 Availability Zones (AZs) across 16 geographic regions in the U.S., Australia, Brazil, Canada, China, Germany, India, Ireland, Japan, Korea, Singapore, and the UK. AWS services are trusted by millions of active customers around the world — including the fastest-growing startups, largest enterprises, and leading government agencies — to power their infrastructure, make them more agile, and lower costs. To learn more about AWS, visit https://aws.amazon.com.

About Amazon

Amazon is guided by four principles: customer obsession rather than competitor focus, passion for invention, commitment to operational excellence, and long-term thinking. Customer reviews, 1-Click shopping, personalized recommendations, Prime, Fulfillment by Amazon, AWS, Kindle Direct Publishing, Kindle, Fire tablets, Fire TV, Amazon Echo, and Alexa are some of the products and services pioneered by Amazon. For more information, visit www.amazon.com/about and follow @AmazonNews.

About Microsoft

Microsoft (Nasdaq “MSFT” @microsoft) is the leading platform and productivity company for the mobile-first, cloud-first world, and its mission is to empower every person and every organization on the planet to achieve more.

For more information, press only:

Microsoft Media Relations, WE Communications for Microsoft, (425) 638-7777, rrt@we-worldwide.com

Note to editors: For more information, news and perspectives from Microsoft, please visit the Microsoft News Center at http://news.microsoft.com. Web links, telephone numbers and titles were correct at time of publication, but may have changed. For additional assistance, journalists and analysts may contact Microsoft’s Rapid Response Team or other appropriate contacts listed at http://news.microsoft.com/microsoft-public-relations-contacts.

Announcing the preview of Java support for Azure Functions

Serverless provides a great model for accelerating app development, but developers want to do it using the programming languages and development tools of their choice. Ever since we first released Azure Functions, support for Java has been a top request. Today, at JavaOne in San Francisco, we’re announcing the public preview of Java support in Azure Functions.

With the recently announced capability to run the open source Azure Functions runtime on cross-platform .NET Core, we’ve architected our runtime to allow broader support for different programming languages. Java is the first new language we are introducing in this public preview. The new Java runtime will share all the differentiated features provided by Azure Functions, such as the wide range of triggering options and data bindings, a serverless execution model with auto-scale, and pay-per-execution pricing.

As a Java developer, you don’t need any new tools to develop with Azure Functions. In fact, with our newly released Maven plugin, you can create, build, and deploy Azure Functions from your existing Maven-enabled projects. The new Azure Functions Core Tools let you run and debug your Java Functions code locally on any platform.

Figure 1: Azure Functions project in Java created using the Maven archetype
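
The command-line flow looks roughly like this; the archetype coordinates and plugin goals below are assumptions based on the Microsoft-published Maven archetype, so check the tutorial linked below for current versions:

# Hedged sketch of the Maven flow; archetype coordinates and plugin goals
# are assumptions -- check the Azure Functions Java tutorial for details.
mvn archetype:generate -DarchetypeGroupId=com.microsoft.azure -DarchetypeArtifactId=azure-functions-archetype
mvn clean package            # build the function app
mvn azure-functions:run      # run and debug locally with the Core Tools
mvn azure-functions:deploy   # deploy to Azure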

What is even more exciting is that popular IDEs and editors like Eclipse, IntelliJ, and VS Code can be used to develop and debug Azure Functions locally.

Figure 2: A serverless function in Java debugged using Visual Studio Code

To get started, look at the Azure Functions Java tutorial to create your first Java function and deploy it to Azure using Maven and Jenkins today. Also, if you’re attending JavaOne, join our sessions and swing by the Azure booth to learn more about building serverless apps in Azure with Java!


SAP Data Hub debuts at SAP TechEd 2017

At SAP TechEd 2017 in Las Vegas, SAP developers got a good look at a cloud-centric future.

During the opening keynote address, Bjorn Goerke, CTO and president of SAP Cloud Platform, donned the guise of Star Trek‘s Capt. James T. Kirk and compared digital business transformation to the “Kobayashi Maru test” of the Star Trek movies — a no-win conundrum. Because organizations don’t see an obvious answer to the problems posed by digital transformation, they are often paralyzed into inaction.

There are three building blocks to getting around this and solving the Kobayashi Maru test, Goerke said: truth through trusted data, agility in application design and development, and a superior user experience.

To help organizations meet these digital transformation goals, SAP demonstrated a number of new products and services at SAP TechEd 2017. One of these was the new SAP Data Hub, a data management system that was unveiled on Monday at a separate event in New York. SAP also firmly committed itself as a cloud-first company with the following announcements:

  • SAP Cloud Platform will be available on Google Cloud Platform.
  • The SAP ABAP development platform will be available in the cloud.
  • SAP is joining the Cloud Native Computing Foundation.

One hub for organizational data

SAP Data Hub will help organizations manage data from a variety of sources that currently sit in separate silos, Goerke said.

“The challenge that companies face is that there are various data silos, ERP data, data warehouses [and] big data sitting in data lakes,” he said. “The question is, how do you put a consistent layer on top of it so you can make sense out of all those different data sets and correlate them and process them so that you can develop new applications on top of it, do analytics, do data science and drive insights? You need to manage the whole pipeline of data flowing through your different data sets, and this is what SAP Data Hub does.”

The partnership with Google is also very important, according to Goerke.

“Google has provided us with the necessary hardware. So, we have certified SAP HANA and S/4HANA and additional products like the SAP analytics portfolio to run on that infrastructure, and we released SAP Cloud Platform on Google Cloud Platform,” he said. “With that, we have taken the last steps in completing our picture toward a real multicloud platform. It gives our customers a choice to run their digital transformation workloads in the future — whether they want to run them in an SAP cloud, or whether they want to run it through us on an AWS [Amazon Web Services], Azure or Google Cloud Platform; that’s the kind of flexibility our customers have asked us to provide for them.”

ABAP gets cloudy

The biggest cheer from the developer-heavy audience at the opening keynote came when Goerke announced SAP’s venerable ABAP programming environment would be made available in the SAP Cloud Platform — but not until 2018.

“SAP has been around for a while, and ABAP is still one of the key languages and environments that we have to build enterprise applications. S/4HANA itself is built on the ABAP stack, so it’s a rock-solid, extremely powerful enterprise environment,” he said. “We have a few million ABAP developers around the world who have built skills using ABAP to build applications that extend or enhance SAP ERP. But the question is, what do we do with those developers? Do we take them into the cloud? Of course, we have to, so ABAP will be supported in the cloud.”

SAP’s cloud credentials were bolstered by its joining the Cloud Native Computing Foundation, which fosters cloud adoption through systems like Kubernetes.

“The foundation has a lot of big names behind it, like Google and Microsoft, who are putting forces together to drive the technology behind a cloud-native computing model,” Goerke said. “[Technologies] like containerization of IT workloads, with Kubernetes as an orchestration and management environment for containerized workloads. We are joining as a platinum member, [and] that allows us to not only consume the technologies, but influence the direction the technologies take. And there are a number of things that we can contribute from an enterprise perspective.”

Cloud direction is good, but challenge is in details

Gavin Quinn, founder of Mindset Consulting, a Minneapolis-based firm that specializes in Fiori and mobile app development, said SAP’s direction is good, but you have to look into the details.


“I saw a lot of promising things for the future; Leonardo is a tremendous idea, and there’s a lot of really tremendous things that you can do, in theory, if all this works,” Quinn said. “But what does it actually cost? How do you roll it out? And what’s the nitty-gritty behind the services? It’s a great direction, but the meat is still developing. Our customers love the concept of it and want in on it, but they don’t know where to start, and that’s where the challenge lies.”

The business transformation SAP talks about may still be beyond the capabilities of most customers, Quinn said.

“For many [of the developers who attend SAP TechEd], 99% of their time [is on things like] how to get an ABAP report, how to build a BW cube, or maybe how to get to HANA. They’re barely into HANA, and S/4 is well beyond that. So, for some of our base customers, that stuff is hardly on their roadmap,” Quinn said. “There are examples out there, and everyone loves the idea of getting there, but it just takes time. It’s going to be a challenge, but they’re setting the direction.”

ABAP in the cloud is a huge deal

SAP’s cloud message was strong, but the move to make ABAP available in the cloud is a huge deal, according to Josh Greenbaum, analyst and founder of Enterprise Applications Consulting. Many CIOs were concerned that ABAP programmers would be left behind in the cloud-centric digital transformation.

“This extends the knowledge and these assets, both human and technological,” Greenbaum said. “Many CIOs were asking … what happens to my ABAP programmers? Where do they go? What do I do with them? How do I reposition them? Now, by having ABAP available in the cloud, you can develop extensions and new apps in ABAP and run them in the cloud. You can be ready for the next generation of apps. They have to be cloud-ready, so this is good news for developers and IT organizations.”

Why private APIs are the hottest thing around and other news

Nearly three-quarters of software developers spend at least 25% of their time each week working with APIs, according to a recently released survey from Postman. And those aren’t just any APIs: The majority of developers spend 90% of their time working with internal or private APIs.

At a time when there’s never been more pressure on developers to produce software faster, it’s not surprising that usage of public and private APIs is so high. In the Postman survey, internal or private APIs dominate, but developers still said they spend about 20% of their time using public APIs.

Private APIs are very useful for other internal development practices, like microservices, so it’s not surprising the Postman survey found microservices are considered the “most exciting technology” of 2017. Overall, 27% of the developers surveyed said they were very interested in microservices.

But whether they’re using public or private APIs, the Postman survey takers weren’t completely satisfied with the tools they have, as 80% said they wanted more offerings to help them better utilize APIs. Typically, developers use two tools to manage their workflows at any given time, whether with public or private APIs. And their other complaint was documentation; most felt the supporting information provided with the public or private APIs was insufficient.

How do you stack up?

According to Stack Overflow, the median salary of a developer in the United States just starting out is $75,000. With 15 years of experience, that number rises to just shy of $125,000.

How well do you compare? Well, now there’s a tool that can tell you exactly how you stack up — also from Stack Overflow. The Stack Overflow Salary Calculator looks at location, education, years of experience, what kind of tools you use and what kind of developer you are. Right now, the calculator is limited to the United States, Canada, France, Germany and the United Kingdom.

In the big picture, where you live matters the most when it comes to a paycheck; salaries in the United States are substantially higher than in any of the other countries. The second-most important factor seems to be type of developer, with DevOps developers getting the highest salaries, followed closely by data scientists.

Testing for the iPhone 8

Because timing is everything with software testing, cloud-based testing provider Sauce Labs announced it can now offer same-day testing of iPhone 8 and iPhone 8 Plus applications, as well as support for testing Apple’s new iOS 11 operating system.

These new releases continue to add pressure to software development teams to get applications out quickly and bug-free. In many companies, the answer is to automate testing, but that is far easier said than done in most organizations. And with the growing number of devices that require testing, many teams are turning to third-party test providers. With its latest release, Sauce Labs can now offer customers over 1,000 actual devices to test — either by hand or through an automated process — in a public or private cloud.

Custom Vision Service introduces classifier export, starting with CoreML for iOS 11

To enable developers to build for the intelligent edge, Custom Vision Service from Microsoft Cognitive Services has added mobile model export.

Custom Vision Service is a tool for easily training, deploying, and improving custom image classifiers. With just a handful of images per category, you can train your own image classifier in minutes. Today, in addition to hosting your classifiers at a REST endpoint, you can now export models to run offline, starting with export to the CoreML format for iOS 11. Export will allow you to embed your classifier directly in your application and run it locally on a device. The models you export are optimized for the constraints of a mobile device, so you can classify on device in real time.
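
For a sense of the hosted option, here is a hedged Python sketch that calls a classifier’s REST prediction endpoint; the URL and key are placeholders you would copy from your project in the Custom Vision portal, and the requests library is assumed:

# Hedged sketch: calling a hosted Custom Vision classifier. The endpoint
# URL and key below are placeholders; copy the real values from your
# project's Prediction API page in the Custom Vision portal.
import requests

PREDICTION_URL = "https://<region>.api.cognitive.microsoft.com/customvision/<...>"  # placeholder
PREDICTION_KEY = "<your-prediction-key>"  # placeholder

with open("pineapple.jpg", "rb") as f:   # any local test image
    image_bytes = f.read()

response = requests.post(
    PREDICTION_URL,
    headers={"Prediction-Key": PREDICTION_KEY,
             "Content-Type": "application/octet-stream"},
    data=image_bytes,
)
response.raise_for_status()
print(response.json())  # per-tag probabilities for the image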

Custom Vision Service is designed to build quality classifiers with very small training datasets, helping you build a classifier that is robust to differences in the items you are trying to recognize and that ignores the things you are not interested in. With today’s update, you can easily add real-time image classification to your mobile applications. Creating, updating, and exporting a compact model takes only minutes, making it easy to build and iteratively improve your application. More export formats and supported devices are coming in the near future.

A sample app and tutorial for adding real-time image classification to an iOS app are now available.

To learn more and start building your own image classifier, visit www.customvision.ai.

Figure: Screenshot of a fruit recognition classifier in the sample app

Delivering Safer Apps with Windows Server 2016 and Docker Enterprise Edition

Windows Server 2016 and Docker Enterprise Edition are revolutionizing the way Windows developers can create, deploy, and manage their applications on-premises and in the cloud. Microsoft and Docker are committed to providing secure containerization technologies and enabling developers to implement security best practices in their applications. This blog post highlights some of the security features in Docker Enterprise Edition and Windows Server 2016 designed to help you deliver safer applications.

For more information on Docker and Windows Server 2016 Container security, check out the full whitepaper on Docker’s site.

Introduction

Today, many organizations are turning to Docker Enterprise Edition (EE) and Windows Server 2016 to deploy IT applications consistently and efficiently using containers. Container technologies can play a pivotal role in ensuring the applications being deployed in your enterprise are safe — free of malware, up-to-date with security patches, and known to come from a trustworthy source. Docker EE and Windows each play a hand in helping you develop and deploy safer applications according to the following three characteristics:

  1. Usable Security: Secure defaults with tooling that is native to both developers and operators.
  2. Trusted Delivery: Everything needed to run an application is delivered safely and guaranteed not to be tampered with.
  3. Infrastructure Independent: Application and security configurations are portable and can move between developer workstations, testing environments, and production deployments regardless of whether those environments are running in Azure or your own datacenter.

Usable Security

Resource Isolation

Windows Server 2016 ships with support for Windows Server Containers, which are powered by Docker Enterprise Edition. Docker EE for Windows Server is the result of a joint engineering effort between Microsoft and Docker. When you run a Windows Server Container, key system resources are sandboxed for each container and isolated from the host operating system. This means the container does not see the resources available on the host machine, and any changes made within the container will not affect the host or other containers. Some of the resources that are isolated include:

  • File system
  • Registry
  • Certificate stores
  • Namespace (privileged API access, system services, task scheduler, etc.)
  • Local users and groups

Additionally, you can limit a Windows Server Container’s use of the CPU, memory, disk usage, and disk throughput to protect the performance of other applications and containers running on the same host.
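
With the Docker CLI, such limits are set per container at run time; a hedged one-liner (the image name and limit values below are placeholders):

# Illustrative only: cap a container at 2 GB of RAM, one CPU, and
# 10 MB/s of disk bandwidth (the last flag is Windows-only).
docker run -m 2g --cpus 1 --io-maxbandwidth 10m <image>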

Hyper-V Isolation

For even greater isolation, Windows Server Containers can be deployed using Hyper-V isolation. In this configuration, the container runs inside a specially optimized Hyper-V virtual machine with a completely isolated Windows kernel instance. Docker EE handles creating, managing, and deleting the VM for you. Better yet, the same Docker container images can be used for both process isolated and Hyper-V isolated containers, and both types of containers can run side by side on the same host.

Application Secrets

Starting with Docker EE 17.06, support for delivering secrets to Windows Server Containers at runtime is now available. Secrets are simply blobs of data that may contain sensitive information best left out of a container image. Common examples of secrets are SSL/TLS certificates, connection strings, and passwords.

Developers and security operators use and manage secrets in the exact same way — by registering them on manager nodes (in an encrypted store), granting applicable services access to obtain the secrets, and instructing Docker to provide the secret to the container at deployment time. Each environment can use unique secrets without having to change the container image. The container can just read the secrets at runtime from the file system and use them for their intended purposes.
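
On the consuming side, the application simply treats the secret as a file; here is a minimal Python sketch, assuming a hypothetical secret named db_password and the documented Windows secrets path:

# Minimal sketch: reading a Docker secret inside a Windows Server
# Container. The secret name "db_password" is hypothetical; secrets
# surface as read-only files under C:\ProgramData\Docker\secrets.
import os

SECRET_DIR = r"C:\ProgramData\Docker\secrets"

def read_secret(name):
    """Return the contents of a secret delivered to this container."""
    with open(os.path.join(SECRET_DIR, name)) as f:
        return f.read().strip()

db_password = read_secret("db_password")  # use it, but never log it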

Trusted Delivery

Image Signing and Verification

Knowing that the software running in your environment is authentic and came from a trusted source is critical to protecting your information assets. With Docker Content Trust, which is built into Docker EE, container images are cryptographically signed to record the contents present in the image at the time of signing. Later, when a host pulls the image down, it will validate the signature of the downloaded image and compare it to the expected signature from the metadata. If the two do not match, Docker EE will not deploy the image since it is likely that someone tampered with the image.
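
With the Docker CLI, content trust is typically switched on per session through an environment variable; shown here in PowerShell with a placeholder image reference:

# Enable Docker Content Trust for this session; subsequent pulls fail
# if a signed image with a matching signature cannot be verified.
$env:DOCKER_CONTENT_TRUST = 1
docker pull <registry>/<image>:<tag>   # placeholder image reference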

Image Scanning and Antimalware

Beyond checking if an image has been modified, it’s important to ensure the image doesn’t contain malware or libraries with known vulnerabilities. When images are stored in Docker Trusted Registry, Docker Security Scanning can analyze images to identify libraries and components in use that have known vulnerabilities in the Common Vulnerabilities and Exposures (CVE) database.

Further, when the image is pulled on a Windows Server 2016 host with Windows Defender enabled, the image will automatically be scanned for malware to prevent malicious software from being distributed through container images.

Windows Updates

Working alongside Docker Security Scanning, Microsoft Windows Update can ensure that your Windows Server operating system is up to date. Microsoft publishes two pre-built Windows Server base images to Docker Hub: microsoft/nanoserver and microsoft/windowsservercore. These images are updated the same day new Windows security updates are released. When you use the “latest” tag to pull these images, you can rest assured that you’re working with the most up-to-date version of Windows Server. This makes it easy to integrate updates into your continuous integration and deployment workflow.
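
For example, with one of the images named above:

docker pull microsoft/windowsservercore:latest   # base image patched the same day updates ship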

Infrastructure Independent

Active Directory Service Accounts

Windows workloads often rely on Active Directory for authentication of users to the application and authentication between the application itself and other resources like Microsoft SQL Server. Windows Server Containers can be configured to use a Group Managed Service Account when communicating over the network to provide a native authentication experience with your existing Active Directory infrastructure. You can select a different service account (even belonging to a different AD domain) for each environment where you deploy the container, without ever having to update the container image.
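
At run time this is a hedged one-liner; the credential spec JSON file name below is a placeholder for one generated against your own Active Directory domain:

# Attach a Group Managed Service Account via a credential spec file
# (placeholder name) stored in Docker's CredentialSpecs directory on
# the host; the container image itself is unchanged.
docker run --security-opt "credentialspec=file://webapp01.json" <image>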

Docker Role Based Access Control

Docker Enterprise Edition allows administrators to apply fine-grained role based access control to a variety of Docker primitives, including volumes, nodes, networks, and containers. IT operators can grant users predefined permission roles to collections of Docker resources. Docker EE also provides the ability to create custom permission roles, providing IT operators tremendous flexibility in how they define access control policies in their environment.

Conclusion

With Docker Enterprise Edition and Windows Server 2016, you can develop, deploy, and manage your applications more safely using the variety of built-in security features designed with developers and operators in mind. To read more about the security features available when running Windows Server Containers with Docker Enterprise Edition, check out the full whitepaper and learn more about using Docker Enterprise Edition in Azure.

Azure Log Analytics – Container Monitoring Solution general availability, CNCF Landscape

Docker containers are an emerging technology that helps developers and DevOps teams with easy provisioning and continuous delivery on modern infrastructure. Because containers can be ubiquitous in an environment, monitoring is essential. We’ve developed a monitoring solution that provides deep insights into containers, supporting the Kubernetes, Docker Swarm, Mesos DC/OS, and Service Fabric container orchestrators on multiple OS platforms. We are excited to announce the general availability of the Container Monitoring management solution on Azure Log Analytics, available in the Azure Marketplace today.

“Every community contribution helps DC/OS become a better platform for running modern applications, and the addition of Azure Log Analytics Container Monitoring Solution into DC/OS Universe is a meaningful contribution, indeed,” said Ravi Yadav, Technical Partnership Lead at Mesosphere. “DC/OS users are running a lot of Docker containers, and having the option to manage them with a tool like Azure Log Analytics Container Monitoring Solution will result in a richer user experience.”

Microsoft recently joined the Cloud Native Computing Foundation (CNCF), and we continue to invest in open source projects. Azure Log Analytics is now part of the CNCF Landscape under the Monitoring category.

With the Container Monitoring solution, you can:

  • See information about all container hosts in a single location
  • Know which containers are running, what image they’re running, and where they’re running
  • See an audit trail for actions on containers
  • Troubleshoot by viewing and searching centralized logs without remote login to the Docker hosts
  • Find containers that may be “noisy neighbors” and consuming excess resources on a host
  • View centralized CPU, memory, storage, and network usage and performance information for containers


We’ve added new features that provide better insight into your Kubernetes clusters, making it easier to narrow down container issues. You can now search and filter on your own custom pod labels and on Kubernetes cluster hierarchies, and container process information lets you quickly check process status for deeper health analysis. These features are currently Linux-only; additional Windows features are coming soon. The new features available as part of the general availability include:

  • Kubernetes cluster awareness, with an at-a-glance hierarchy inventory from cluster down to pods
  • New Kubernetes events
  • Custom pod label capture, with complex custom search filters
  • Container process information
  • Container node inventory, including storage, network, orchestration type, and Docker version

For more information about how to use the Container Monitoring solution, as well as the insights you can gather, see the Containers solution in Log Analytics documentation.

Learn more by reading previous blogs on Azure Log Analytics Container Monitoring.

How do I try this?

You can get a free subscription for Microsoft Azure so that you can test the Container Monitoring solution features.

How can I give you guys feedback?

There are a few different routes to give feedback, and we plan on enhancing monitoring capabilities for containers. If you have feedback or questions, please feel free to contact us!

Learn the basics of PowerShell for Azure Functions

Azure Functions isn’t just for developers; several scripting languages open up new opportunities for admins and systems analysts as well.

Scripting options for Azure Functions

Azure Functions is a collection of event-driven application components that can interact with other Azure services. It’s useful for asynchronous tasks such as data ingestion and processing; extract, transform and load (ETL) processes or other data pipelines; and microservices or cloud service integration.

In general, functions are well-suited as integration and scripting tools for legacy enterprise applications due to their event-driven, lightweight and infrastructure-free nature. The ability to use familiar languages, such as PowerShell, Python and Node.js, makes that case even stronger. Since PowerShell is popular with Windows IT shops and Azure users, the best practices below focus on that particular scripting language but apply to others as well.

PowerShell for Azure Functions

The initial implementation of PowerShell for Azure Functions uses PowerShell version 4 and only supports scripts (PS1 files), not modules (PSM1 files), which makes it best for simpler tasks and rapid development. To use PowerShell modules in Azure Functions, users can update the PSModulePath environment variable to point to a folder that contains custom modules and connect to it through FTP.

When you use scripts, pass data to PowerShell functions through files or environment variables, because a function won’t store or cache the runtime environment. Incoming data, whether from an event trigger or an input binding, is passed as a file whose path the runtime exposes through an environment variable named after the binding; the same scheme works in reverse for output (see the sketch after the list below). Since the input data is just a raw file, users must know what to expect and parse accordingly. Functions itself won’t format data but will support most formats, including:

  • string;
  • int;
  • bool;
  • object/JavaScript Object Notation;
  • binary/buffer;
  • stream; and
  • HTTP
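
As a sketch of this file-and-environment-variable pattern (shown with the equivalent experimental Python support, since the same mechanics apply across the scripting languages), assume an HTTP input binding named req and an output binding named res in function.json:

# Hedged sketch of the binding pattern described above, using the
# equivalent experimental Python support. The binding names "req" and
# "res" are assumptions from a typical function.json.
import json
import os

# The environment variable named after the input binding holds the
# path of a temporary file containing the raw incoming data.
with open(os.environ['req']) as f:
    body = json.loads(f.read() or '{}')  # raw file: parse it yourself

name = body.get('name', 'world')

# Output is the mirror image: write to the file whose path is in the
# environment variable named after the output binding.
with open(os.environ['res'], 'w') as f:
    f.write('Hello ' + name)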

PowerShell functions can be triggered by HTTP requests, an Azure service queue, such as when a message is added to a specified storage queue, or a timer (see Figure 1). Developers can create Azure Functions with the Azure portal, Visual Studio — C# functions only — or a local code editor and integrated development environment, although the portal is the easiest option.

Figure 1: Triggers for PowerShell functions

Recommendations

Azure Functions works the same whether the code is in C#, PowerShell or Python, which lets teams use a language they already know or can easily master. The power of Functions stems from its integration with other Azure services and its built-in runtime environments. For simple tasks, such as triggering a webhook from an HTTP request, writing a function is more efficient than creating a standalone app.

While PowerShell is an attractive option for Windows teams, they should proceed with caution, since support in Azure Functions is still a work in progress. The implementation details will likely change, though most likely for the better.

Vagrant and Hyper-V — Tips and Tricks

A few months ago, I went to DockerCon as a Microsoft representative. While I was there, I had the chance to ask developers about their favorite tools.

The most common tool mentioned (outside of Docker itself) was Vagrant. This was interesting — I was familiar with Vagrant, but I’d never actually used it. I decided that needed to change. Over the past week or two, I took some time to try it out. I got everything working eventually, but I definitely ran into some issues on the way.

My pain is your gain — here are my tips and tricks for getting started with Vagrant on Windows 10 and Hyper-V.

NOTE: This is a supplement for Vagrant’s “Getting Started” guide, not a replacement.

Tip 0: Install Hyper-V

For those new to Hyper-V, make sure you’ve got Hyper-V running on your machine. Our official docs list the exact steps and requirements.

Tip 1: Set Up Networking Correctly

Vagrant doesn’t know how to set up networking on Hyper-V right now (unlike other providers), so it’s up to you to get things working the way you like them.

There are a few NAT networks already created on Windows 10 (depending on your specific build). If you’re a Windows Insider, you can try Layered_ICS, which should work but is under active development; Layered_NAT doesn’t have DHCP. If that doesn’t work, the safest option is to create an external switch via Hyper-V Manager, which is the approach I took. If you go this route, a friendly reminder: the external switch is tied to a specific network adapter, so if you make it for Wi-Fi, it won’t work when you hook up the Ethernet, and vice versa.

You can also do this with PowerShell.
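
For example, a hedged one-liner (the switch name and network adapter name below are placeholders for your own; run from an elevated prompt):

New-VMSwitch -Name "VagrantSwitch" -NetAdapterName "Wi-Fi" -AllowManagementOS $true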

Figure: Adding an external switch in Hyper-V Manager

Tip 2: Use the Hyper-V Provider

Unfortunately, the Getting Started guide uses VirtualBox, and you can’t run other virtualization solutions alongside Hyper-V. You need to change the “provider” Vagrant uses at a few different points.

When you install your first box, add --provider:

vagrant box add hashicorp/precise64 --provider hyperv

And when you boot your first Vagrant environment, again, add --provider. Note: you might run into the error mentioned in Tip 4, so skip ahead if you see something like “mount error(112): Host is down”.

vagrant up --provider hyperv

Tip 3: Add the basics to your Vagrantfile

Adding the provider flag is a pain to do every single time you run vagrant up. Fortunately, you can set up your Vagrantfile to automate things for you. After running vagrant init, modify your Vagrantfile with the following:

Vagrant.configure(2) do |config|
  config.vm.box = "hashicorp/precise64"   # the box added in Tip 2
  config.vm.provider "hyperv"             # use the Hyper-V provider
  config.vm.network "public_network"      # ask for a network on the virtual switch
end

One additional trick here: vagrant init will create a Vagrantfile that appears to be full of commented-out items. However, there is one line that is not commented out: config.vm.box = "base".

Make sure you delete that line (or replace it with your own box, as above)! Otherwise, you’ll end up with an error like this:

Bringing machine 'default' up with 'hyperv' provider...
==> default: Verifying Hyper-V is enabled...
==> default: Box 'base' could not be found. Attempting to find and install...
    default: Box Provider: hyperv
    default: Box Version: >= 0
==> default: Box file was not detected as metadata. Adding it directly...
==> default: Adding box 'base' (v0) for provider: hyperv
    default: Downloading: base
    default:
An error occurred while downloading the remote file. The error
message, if any, is reproduced below. Please fix this error and try
again.

Tip 4: Shared folders use SMBv1 for hashicorp/precise64

For the image used in the “Getting Started” guide (hashicorp/precise64), Vagrant tries to use SMBv1 for shared folders. However, if you’re like me and have SMBv1 disabled, this will fail:

Failed to mount folders in Linux guest. This is usually because
the "vboxsf" file system is not available. Please verify that
the guest additions are properly installed in the guest and
can work properly. The command attempted was:

mount -t cifs -o uid=1000,gid=1000,sec=ntlm,credentials=/etc/smb_creds_e70609f244a9ad09df0e760d1859e431 //10.124.157.30/e70609f244a9ad09df0e760d1859e431 /vagrant

The error output from the last command was:

mount error(112): Host is down
Refer to the mount.cifs(8) manual page (e.g. man mount.cifs)

You can check whether SMBv1 is enabled with this PowerShell cmdlet:

Get-SmbServerConfiguration | Select-Object EnableSMB1Protocol

If you can live without synced folders, here’s the line to add to the Vagrantfile to disable the default synced folder:

config.vm.synced_folder ".", "/vagrant", disabled: true

If you can’t, you can try installing cifs-utils in the VM and re-provision. You could also try another synced folder method. For example, rsync works with Cygwin or MinGW. Disclaimer: I personally didn’t try either of these methods.

Tip 5: Enable Nifty Hyper-V Features

Hyper-V has some useful features that improve the Vagrant experience. For example, a pretty substantial portion of the time spent running vagrant up goes to cloning the virtual hard drive. A faster way is to use differencing disks with Hyper-V. You can also turn on virtualization extensions, which allow nested virtualization within the VM (e.g. Docker with Hyper-V containers). Here are the lines to add to your Vagrantfile to enable these features:

config.vm.provider "hyperv" do |h|
  h.enable_virtualization_extensions = true   # allow nested virtualization
  h.differencing_disk = true                  # clone with a differencing disk
end

There are many more customization options that can be added here (e.g. VMName, CPU/memory settings, integration services). You can find the details in the Hyper-V provider documentation.

Tip 6: Filter for Hyper-V compatible boxes on Vagrant Cloud

You can find more boxes to use in the Vagrant Cloud (formerly called Atlas). It lets you filter by provider, so it’s easy to find all of the Hyper-V compatible boxes.

Tip 7: Default to the Hyper-V Provider

While adding the default provider to your Vagrantfile is useful, it means you need to remember to do it in each new Vagrantfile you create. If you don’t, Vagrant will try to download VirtualBox the first time you vagrant up the new box. Again, VirtualBox doesn’t work alongside Hyper-V, so this is a problem.

PS C:\vagrant> vagrant up
==>  Provider 'virtualbox' not found. We'll automatically install it now...
     The installation process will start below. Human interaction may be
     required at some points. If you're uncomfortable with automatically
     installing this provider, you can safely Ctrl-C this process and install
     it manually.
==>  Downloading VirtualBox 5.0.10...
     This may not be the latest version of VirtualBox, but it is a version
     that is known to work well. Over time, we'll update the version that
     is installed.

You can set your default provider at the user level by using the VAGRANT_DEFAULT_PROVIDER environment variable. For more options (and details), see the relevant page of Vagrant’s documentation.

Here’s how I set the user-level environment variable in PowerShell:

[Environment]::SetEnvironmentVariable("VAGRANT_DEFAULT_PROVIDER", "hyperv", "User")

Again, you can also set the default provider in the Vagrantfile (see Tip 3), which will prevent this issue on a per-project basis. You can also just add --provider hyperv when running vagrant up. The choice is yours.

Wrapping Up

Those are my tips and tricks for getting started with Vagrant on Hyper-V. If there are any you think I missed, or anything you think I got wrong, let me know in the comments.

Here’s the complete version of my simple starting Vagrantfile:

# -*- mode: ruby -*-
# vi: set ft=ruby :

# All Vagrant configuration is done below. The "2" in Vagrant.configure
# configures the configuration version (we support older styles for
# backwards compatibility). Please don't change it unless you know what
# you're doing.
Vagrant.configure("2") do |config|
  config.vm.box = "hashicorp/precise64"
  config.vm.provider "hyperv"
  config.vm.network "public_network"
  config.vm.synced_folder ".", "/vagrant", disabled: true
  config.vm.provider "hyperv" do |h|
    h.enable_virtualization_extensions = true
    h.differencing_disk = true
  end
end


Build 2016: Conversational intelligence, new innovations for Windows 10 and cloud tools for all developers — Weekend Reading: April 1 edition

Welcome to the Build 2016 edition of Weekend Reading, highlighting all the news from Microsoft’s annual developer’s conference this week.

Microsoft CEO Satya Nadella outlined the company’s vision to help developers embrace a new era of conversational intelligence, with additions to the Cortana Intelligence Suite and new cloud services and toolkits to help us understand the world around us and create intelligent, helpful bots.

“As an industry, we are on the cusp of a new frontier that pairs the power of natural human language with advanced machine intelligence,” Nadella said. “At Microsoft, we call this Conversations as a Platform, and it builds on and extends the power of the Microsoft Azure, Office 365 and Windows platforms to empower developers everywhere.”

The Cortana Intelligence Suite – the new name for the Cortana Analytics Suite – can transform lives, with a prime example being Seeing AI, an app in development to help people with visual impairment understand their surroundings.

Additions to the Cortana suite include Microsoft Cognitive Services, a collection of intelligence application programming interfaces (APIs) that allow systems to see, hear, speak, understand and interpret our needs with natural communication. Also new is the Microsoft Bot Framework, which helps developers build intelligent bots that allow users to chat in natural language on many platforms. Both additions are in preview.

Microsoft also announced at Build the Skype Bot Platform, which allows developers to create bots that leverage Skype’s many ways to communicate, including text, voice, video and 3D interactive characters.

Terry Myerson, Microsoft executive vice president of Windows and Devices Group, shared the company’s next chapter to create more personal computing, with the Windows 10 Anniversary Update and new capabilities for the Universal Windows Platform.

The Windows 10 Anniversary Update features Windows Ink, which lets you handwrite on your device and create sticky notes. The update includes a proactive Cortana that can guide you even when your device is locked. And it has new Windows Hello features that extend the security of Windows 10 to multiple devices and Microsoft Edge.

“With Windows 10 now running on over 270 million active devices, we’re celebrating with our fans by delivering the Windows 10 Anniversary Update,” Myerson said. “This significant update will help you interact with your Windows 10 devices as naturally as you interact with the world around you — using your pen, presence and voice.”

New developer capabilities for Windows 10 include full access to Cortana’s intelligence, and new APIs and tools to integrate Windows Ink, Windows Hello and other Windows 10 innovations into apps. And Microsoft HoloLens Development Edition shipped, allowing developers to start building the future of holographic computing. Plus a new Xbox Dev Mode turns any Xbox One into a dev tool, enabling anyone to develop for the living room.

On Thursday at Build, Scott Guthrie, Microsoft executive vice president of the Cloud and Enterprise Group, announced new tools and resources for developers to tap into the cloud’s possibilities.

“Today, we made targeting every device and platform a lot easier by making Xamarin available to every Visual Studio developer for free, including the free Visual Studio Community Edition,” Guthrie wrote in his blog post.

“We are also making available a free Xamarin Studio Community Edition for OS X. Developers worldwide can now easily create apps using an end-to-end mobile development solution – joining companies like Slack, Pinterest, Alaska Airlines and more.”

Figure: BMW Connected

Guthrie also announced new Azure services to help developers address operational realities and take advantage of emerging trends, including the Internet of Things and microservices. BMW demonstrated how it’s using Azure, with the launch of its new digital mobility app, BMW Connected, which is based on a flexible platform the automaker built using Azure.

Also on Thursday, Qi Lu, Microsoft executive vice president of the Applications and Services Group, highlighted how developers can use the Office platform to create new business opportunities. Office developers can now build apps and place them into Word, Excel and PowerPoint ribbons. And on the Build stage, Starbucks showed off an Outlook add-in that enables people to send Starbucks e-gifts within Outlook and schedule meetings at Starbucks locations.

For the closing keynote, Steven Guggenheimer, Microsoft corporate vice president of Developer eXperience and chief evangelist, showed how partners are innovating with Azure, Office and Windows. And actor Kevin Hart weighed in with a hilarious video on how everyone wants to be a developer, highlighting Muzik LLC’s software development kit that turns headphones into a platform.

For more wrap-ups of Build, check out the Top 10 ways Build rocked it for developers on Day One and Day Two.

And finally, we heard from Nadella, Myerson, Guthrie and Lu across the Microsoft social communities as they announced the latest Microsoft updates at Build.

Thanks for reading and see you next week!

Vanessa Ho
Microsoft News Center Staff
