
Docker Enterprise spun off to Mirantis, company restructures

In a startling turn of events, Docker as the industry knows it is no more.

Mirantis, a privately held company based in Campbell, Calif., acquired the Docker Enterprise business from Docker Inc. for an undisclosed sum, including Docker employees, Docker Enterprise partnerships and some 750 Docker Enterprise customer accounts. The IP acquired in the deal, announced today, includes Docker Engine – Enterprise, Docker Trusted Registry, Docker Universal Control Plane and the Docker CLI.

“This is the end of Docker as we knew it, and it’s a stunning end,” said Jay Lyman, an analyst at 451 Research. The industry as a whole had been skeptical of Docker’s business strategy for years, particularly in the last six months as the company went quiet. The company underwent a major restructuring in the wake of the Mirantis deal today, naming longtime COO Scott Johnston as CEO. Johnston replaces Robert Bearden, who served just six months as the company’s chief executive.

“This validates a lot of the questions and uncertainty that have been surrounding Docker,” Lyman said. “We certainly had good reasons for asking the questions that we were.”

While not the end for Docker Enterprise, it appears to be the end for Docker’s Swarm orchestrator, which Mirantis will support for another two years. The primary focus will be on Kubernetes, Mirantis CEO Adrian Ionel wrote in a company blog post.

This is the end of Docker as we knew it, and it’s a stunning end.
Jay Lyman, Analyst, 451 Research

Docker Enterprise customers are already being directed to Mirantis for support, though Docker account managers and points of contact remain the same for now, as they transition over to Mirantis. Going forward, Mirantis will incorporate Docker Kubernetes into its Kubernetes as a Service offering, which analysts believe will give it a fresh toehold in public and hybrid cloud container orchestration.

However, it’s a market already crowded with vendors. Competitors include big names such as Google, which offers hybrid Kubernetes services with Anthos, and IBM-Red Hat, which so far has dominated the enterprise market for on-premises and hybrid Kubernetes management with more than 1,000 customers.

A surprising exit for Docker Inc.

While the value of the deal remains unknown, it’s unlikely that Mirantis, which numbers 400 employees and is best known for its on-premises OpenStack and Kubernetes-as-a-service business, could afford a blockbuster sum equivalent to the hundreds of millions of dollars in funding Docker Inc. received since it launched Docker Engine 1.0 in 2014.

“I thought Docker would find a bigger buyer — I’m not sure Mirantis has the resources or name to do a very large deal,” said Gary Chen, an analyst at IDC.

Analysts were also surprised that Docker split off Docker Enterprise rather than being acquired as a whole, though it’s possible a second deal for Docker’s remaining Docker Hub and Docker Desktop IP could follow.

“It could be another buyer only wanted that part of the business, but Docker put so much into Docker Enterprise for quite a while — this is a complete turnaround,” Chen said.

Docker Enterprise hit scalability, reliability snags for some

As Docker looked to differentiate its Kubernetes implementation within Docker Enterprise last year, one customer who used the Swarm orchestrator for some workloads hoped that Kubernetes support would alleviate scalability and stability concerns. Mitchell International, an auto insurance software company in San Diego, said it suffered a two-hour internal service outage when a Swarm master failed and the quorum algorithm that elects a new master node also failed. This outage prompted Mitchell to move Linux containers to Amazon EKS, but members of its IT team hoped Docker Enterprise with Kubernetes support would replace Swarm for Windows containers.
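The failure mode described above comes down to quorum arithmetic: a Raft-style manager set can elect a new leader only while a strict majority of managers is reachable. A minimal sketch of that rule (illustrative only, not Mitchell's actual configuration):

```python
def has_quorum(total_managers, reachable):
    """Raft-style clusters need a strict majority of voting members
    to elect a leader; below that threshold, elections stall."""
    return reachable >= total_managers // 2 + 1

# A 3-manager cluster tolerates one failure, but not two:
assert has_quorum(3, 2)
assert not has_quorum(3, 1)
# An even-sized cluster buys no extra tolerance: 4 managers
# still need 3 reachable, so they too survive only a single loss.
assert not has_quorum(4, 2)
```

When the reachable count drops below the majority, the remaining managers cannot safely elect a leader, which matches the stalled-election outage described above.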

However, about a month ago, a senior architect at a large insurance company on the East Coast told SearchITOperations he’d experienced similar issues in his deployment, including the software’s Kubernetes integration.

This company’s environment comprises thousands of containers and hundreds of host nodes, and according to the architect, the Docker Enterprise networking implementation can become unstable at that scale. He traced this to its use of the Raft consensus algorithm, an open source protocol that maintains consistency in distributed systems, and to how it stores data in the open source RethinkDB, which can become corrupt when it processes high volumes of data and fall out of sync with third-party overlay networks in the environment.

“The Docker implementation gives you the native Kubernetes APIs, but we do have concerns with how some of the core networking with their Universal Control Plane is implemented,” the architect said, speaking on condition of anonymity because he is not permitted to speak for his company in the press. “This is challenging at scale, and that carries forward into Kubernetes.”

The insurance company has been able to address this by running a greater number of relatively small Docker Enterprise clusters, but wasn’t satisfied with that as a long-term approach, and has begun to evaluate different Kubernetes distros from vendors such as Rancher and VMware to replace Docker Enterprise.

The senior architect was briefed on Mirantis’ managed service plans prior to the acquisition this week, and said his company will still move away from Docker Enterprise next year.

“We talked to Mirantis’ leadership team before [the acquisition] became public, but we don’t see a managed service as a strategic piece for us,” he said in an interview today. “I’m sure some customers will continue to ride out [the transition], but we’re not looking for a vendor to come in and manage our platform.”

Mirantis CEO pledges support, tech stability for customers

In response to Mitchell International’s report of a problem, Docker reps said last year that many customers use Docker Enterprise with Windows and Swarm without running into the issue. A company spokesperson did not respond to requests for comment about the more recent customer report of issues with Kubernetes last month.

Mirantis CEO Ionel said he hasn’t yet dug into that level of detail on the product, but that his company’s tech team will take the lead on Kubernetes product development going forward.

“Mirantis will contribute our Kubernetes expertise, including around scalability, robustness, ease of management and operation to the platform,” he said in an interview with SearchITOperations today. “That’s part of the unique value that we bring — the [Docker] brand will remain [Universal Control Plane], since that’s what customers are used to, but the technology underneath the hood is going to get an upgrade.”

At least for the foreseeable future, most Docker Enterprise customers will probably wait and see how the platform changes under Mirantis before they make a decision, consultants said.

“I know of only one Docker Enterprise customer, and I am sure they will stay on the platform, as it supports their production environment, until they see what Mirantis provides going forward,” said Chris Riley, DevOps delivery director at Cprime Inc., an Agile software development consulting firm in San Mateo, Calif.

Most enterprises have yet to deploy full container platforms in production; those of his enterprise clients that have are either focused on OpenShift for its hybrid cloud support or using a managed Kubernetes service from a public cloud provider, Riley said.

Docker intends to refocus its efforts around Docker Desktop, but that product won’t be of interest to the insurance company’s senior architect and his team, who have developed their own process for moving apps from the developer desktop into the CI/CD pipeline.

In fact, the senior architect said he’d become frustrated by the company’s apparent focus on Docker Desktop over the last 18 months, while some Docker Enterprise customers waited for features such as blue-green container cluster upgrades, which Docker shipped in Docker Enterprise 3.0 in July.

“We’d been asking for ease of upgrade features for two years — it’s been a big pain point for us, to the point where we developed our own [software] to address it,” he said. “They finally started to get there [with version 3.0], but it’s a little too late for us.”

Mirantis’ Ionel said the company plans to include seamless upgrades as a major feature of its managed service offering. Other areas of focus will be centralized management of a fleet of Kubernetes clusters rather than just one, and self-service features for development teams.

Mirantis will acquire all of Docker’s customer support and customer success team employees, as well as the systems they use to support Docker Enterprise shops and all historical customer support data, Ionel said.

“Nothing there has changed,” he said. “They are still doing today what they were doing yesterday.”


Vancouver Canucks defend data with Veeam backup

As host of the ice hockey events at the 2010 Winter Olympics, Aquilini Investment Group, owner of Rogers Arena and the Vancouver Canucks, had to rethink its entire IT game plan.

Rogers Arena has a capacity of around 18,000 people, and its IT infrastructure had to ensure all ticket scanners, Wi-Fi and point-of-sale systems would never go down during the heavy influx of attendees. In 2010, Aquilini revamped its legacy systems, moving away from physical servers and tape to virtualization and VM backup. It deployed VMware and Veeam backup.

“We were starting to see the serious benefits of virtualization compared to traditional physical servers,” said Olly Prince, manager of infrastructure at Canucks Sports & Entertainment and Aquilini Group.

The switch dramatically changed how the Canucks handled backup. Prince described the old system as “hit-or-miss.” Backup copies of data were stored on tapes that were then sent to an off-site facility. When a user needed something restored, the correct tape had to be found and then delivered back to the data center for restoration. The whole process took four or five business days, and there was no guarantee that the restoration would succeed.

With Veeam backup, Prince said, he’s now able to restore data in 10 minutes.

Cloud considerations hinge on cost

As part of the IT revamp, Aquilini has been looking at the cloud more closely, but has only dipped a toe in. So far, there is a single test/dev workload deployed on AWS that isn’t being backed up because of how inconsequential it is. Prince had conducted a cost analysis and found that it’s still cheaper to run most workloads in VMs on premises.


However, Aquilini wants to dive deeper into cloud. The company wants to use the cloud for disaster recovery (DR), Office 365 backup and giving coaches a way to upload videos or access useful player metadata while they are on the road. Right now, the last of these is handled by having the team carry a “travel server” wherever it goes.

“We’re looking at everything as a whole and strategizing what makes sense for our organization to do on cloud or on prem,” said Margaret Pawlak, IT business strategy and project manager at Aquilini Group, Canucks Sports & Entertainment.


Aquilini recently finished a proof of concept with Microsoft Azure for DR. Prince said he was able to replicate on-premises applications and run them on the cloud, but the next step is factoring in costs. The company’s current DR plan involves replicating and failing over to an off-site facility about 60 kilometers away from the main data center. That site also houses its own separate production environment, so while it has enough storage to bring enough VMs back online to keep the business running, it won’t include absolutely everything.

Although Pawlak and Prince said they’re actively working on pushing some of these cloud strategies, they’re having difficulty convincing the rest of the organization that changes are necessary.

Horror stories don’t get you a [cloud backup] budget.
Olly Prince, Manager of infrastructure, Canucks Sports & Entertainment

In the case of Office 365 backup, there is a pervasive myth that its native long-term retention policy is a suitable replacement for true, point-in-time backup. Prince pointed out that retention doesn’t help when trying to restore a corrupted or deleted file.

In the case of DR, Pawlak said it is hard to put a business case forward for what is essentially insurance. The benefit is not something tangible until a real disaster hits, and there’s a belief that such an event will never actually happen. Prince said it’s a difficult attitude to overcome until it’s too late — no matter how many times he’s shared IT horror stories from his peers in the industry.

“Horror stories don’t get you a budget,” Prince said.

Backup strategies beyond the rink

Prince’s team of four IT personnel, himself included, is responsible not just for the Canucks franchise and Rogers Arena, but for hotels, wineries and other properties owned by Aquilini Group. A total of 180 TB from 60 VMware VMs are being protected by Veeam backup. Aside from the daily business data generated by Rogers Arena, some of the VMs also house audio and visual data, as well as player performance metadata that the Canucks franchise uses for scouting, training and coaching.

Aquilini uses Darktrace for cyberdefense, but Prince focuses much of his attention on user training as well. He said ransomware is more likely to get through unaware staff than through vulnerabilities in devices or workstations they use, so he trains them on how to spot phishing and avoid executing programs they’re unsure of. A good backup system is also an important part of the overall security package.

Aquilini would not comment on other data protection vendors that were considered besides Veeam, but Prince said ease of deployment and use were huge factors in the decision, given how small his IT staff is.

Prince said he wants Veeam to work natively with Azure cold storage, which it currently doesn’t. On top of certain files that need to be retained for compliance reasons, the Canucks franchise has a large amount of audio and visual files that need to be archived for potential future use. Not all the footage is mission-critical, but some clips might be useful for pulling together a promotional video.

“It would be nice to take a backup of that and shove it somewhere cheap,” Prince said.


Ecstasy programming language targets cloud-native computing

While recent events have focused on Java and how it will fare as computing continues to evolve to support modern platforms and technologies, a new language is targeted directly at the cloud-native computing space — something Java continues to adjust to.

This new language, known as the Ecstasy programming language, aims to address programming complexity and to enhance security and manageability in software, which are key challenges for cloud app developers.

Oracle just completed its Oracle OpenWorld and Oracle Code One conferences, where Java was dominant. Indeed, Oracle Code One was formerly known as JavaOne until last year, when Oracle renamed it to be more inclusive of other languages.

Ironically, Cameron Purdy, a former senior vice president of development at Oracle and now CEO of Xqiz.it (pronounced “exquisite”), based in Lexington, Mass., is the co-creator of the Ecstasy language. Purdy joined Oracle in 2007, when the database giant acquired his previous startup, Tangosol, for its Coherence in-memory data grid technology, which remains part of Oracle’s product line today.

Designed for containerization and the cloud-native computing era

Purdy designed Ecstasy for what he calls true containerization. It will run on a server, in a VM or in an OS container, but that is not the kind of container that Ecstasy containerization refers to. Ecstasy containers are a feature of the language itself, and they are secure, recursive, dynamic and manageable runtime containers, he said.

  • For security, all Ecstasy code runs inside an Ecstasy container, and Ecstasy code cannot even see the container it’s running inside of — let alone anything outside that container, like the OS, or even another container.
  • For recursivity, Ecstasy code can create nested containers inside the current container, and the code running inside those containers can create its own containers, and so on.
  • For dynamism, containers can be created and destroyed dynamically, but they also can grow and shrink within a common, shared pool of CPU and memory resources.
  • For manageability, any resources — including CPU, memory, storage and any I/O — consumed by an Ecstasy container can be measured and managed in real time, and all the resources within a container, including network and storage, can be virtualized, with the possibility of each container being virtualized in a completely different manner.

Overall, the goal of Ecstasy is to solve a set of problems that are intrinsic to the cloud:

  • the ability to modularize application code, so that some portions could be run all the way out on the client, or all the way back in the heart of a server cluster, or anywhere in-between — including on shared edge and CDN servers;
  • to make code that is portable and reusable across all those locations and devices;
  • to be able to securely reuse code by supporting the secure containerization of arbitrary modules of code;
  • to enable developers to manage and virtualize the resources used by this code to enhance security, manageability, real-time monitoring and cloud portability; and
  • to provide an architecture that would scale with the cloud but could also scale with the many core devices and specialized processing units that lie at the heart of new innovation — like machine learning.

General-purpose programming language

Ecstasy, like C, C++, Java, C# and Python, is a general-purpose programming language — but its most compelling feature is not what it contains, but rather what it purposefully omits, Purdy said.

For instance, all of the aforementioned general-purpose languages adopted the underlying hardware architecture and OS capabilities as the foundation on which they built, but they also exposed the details of that hardware and OS to the developer. This added complexity and introduced a source of vulnerability and deployment inflexibility.

As a general-purpose programming language, Ecstasy will be useful for most application developers, Purdy said. However, Xqiz.it is still in “stealth” mode as a company and in the R&D phase with the language. Its design targets all the major client device hardware and OSes, all the major cloud vendors, and all of the server back ends.

“We designed the language to be easy to pick up for anyone who is familiar with the C family of languages, which includes Java, C# and C++,” he said. “Python and JavaScript developers are likely to recognize quite a few language idioms as well.”

Ecstasy is not a superset of Java, but [it] definitely [has] a large syntactic intersection. Ecstasy adds lots and lots onto Java to improve both developer productivity, as well as program correctness.
Mark Falco, Senior principal software development engineer, Workday

Ecstasy is heavily influenced by Java, so Java programmers should be able to read lots of Ecstasy code without getting confused, said Mark Falco, a senior principal software development engineer at Workday who has had early access to the software.

“To be clear, Ecstasy is not a superset of Java, but [it] definitely [has] a large syntactic intersection,” Falco said. “Ecstasy adds lots and lots onto Java to improve both developer productivity, as well as program correctness.” The language’s similarity to Java also should help with developer adoption, he noted.

However, Patrick Linskey, a principal engineer at Cisco and another early Ecstasy user, said, “From what I’ve seen, there’s a lot of Erlang/OTP in there under the covers, but with a much more accessible syntax.” Erlang/OTP is a development environment for concurrent programming.

Falco added, “Concurrent programming in Ecstasy doesn’t require any notion of synchronization, locking or atomics; you always work on your local copy of a piece of data, and this makes it much harder to screw things up.”

Compactness, security and isolation

Moreover, compactness, security and isolation are a few key reasons to create a new programming language for serverless, cloud and connected-device apps, Falco added.

“Ecstasy starts off with complete isolation at its core; an Ecstasy app literally has no conduit to the outside world, not to the network, not to the disk, not to anything at all,” Falco said. “To gain access to any aspect of the outside world, an Ecstasy app must be injected with services that provide access to only a specific resource.”
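The injection model Falco describes resembles capability-style dependency injection, sketched loosely here in Python as an analogy (this is not Ecstasy syntax, and the class and file names are invented for illustration):

```python
class ReadOnlyDirectory:
    """A capability object granting read access to one directory and nothing else."""
    def __init__(self, files):
        self._files = dict(files)

    def read(self, name):
        return self._files[name]

def run_app(config_dir):
    # The app can use only what was injected into it: it was never
    # handed a network socket, a clock or the rest of the filesystem,
    # so it has no way to reach them through this parameter.
    return config_dir.read("app.conf")

result = run_app(ReadOnlyDirectory({"app.conf": "mode=safe"}))
```

Python itself cannot truly enforce this isolation (the function could still import `os`); the point of Ecstasy's design is that its runtime can, because no conduit to the outside world exists unless one is injected.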

“The Ecstasy runtime really pushes developers toward safe patterns, without being painful,” Linskey said. “If you tried to bolt an existing language onto such a runtime, you’d end up with lots of tough static analysis checks, runtime assertions” and other performance penalties.

Indeed, one of the more powerful components of Ecstasy is the hard separation of application logic and deployment, noted Rob Lee, another early Ecstasy user who is vice president and chief architect at Pure Storage in Mountain View, Calif. “This allows developers to focus on building the logic of their application — what it should do and how it should do it, rather than managing the combinatorics of details and consequences of where it is running,” he noted.

What about adoption?

However, adoption will be the “billion-dollar” issue for the Ecstasy programming language, Lee said, noting that he likes the language’s chances based on what he’s seen. Yet, building adoption for a new runtime and language requires a lot of careful and intentional community-building.

Cisco is an easy potential candidate for Ecstasy usage, Linskey said. “We build a lot of middlebox-style services in which we pull together data from a few databases and a few internal and external services and serve that up to our clients,” he said. “An asynchronous-first runtime with the isolation and security properties of Ecstasy would be a great fit for us.”

Meanwhile, Java aficionados expect that Java will continue to evolve to meet cloud-native computing needs and future challenges. At Oracle Code One, Stewart Bryson, CEO of Red Pill Analytics in Atlanta, said he believes Java has another 10 to 20 years of viability, but there is room for another language that will better enable developers for the cloud. However, that language could be one that runs on the Java Virtual Machine, such as Kotlin, Scala, Clojure and others, he said.


SIEM benefits include efficient incident response, compliance

Security information and event management systems collect security log events from numerous hosts within an enterprise and store their relevant data centrally. By bringing this log data together, these SIEM products enable centralized analysis and reporting on an organization’s security events.

SIEM benefits include detecting attacks that other systems missed. Some SIEM tools also attempt to stop attacks — assuming the attacks are still in progress.

SIEM products have been available for many years, but initial tools were targeted at large organizations with sophisticated security capabilities and ample security analyst staffing. Only relatively recently have SIEM systems emerged that are well suited to the needs of small and medium-sized organizations.

SIEM architectures available today include SIEM software installed on a local server, a local hardware or virtual appliance dedicated to SIEM, and a public cloud-based SIEM service.

Different organizations use SIEM systems for different purposes, so SIEM benefits vary across organizations. This article looks at the three top SIEM benefits, which are:

  • streamlining compliance reporting;
  • detecting incidents that would otherwise not be detected; and
  • improving the efficiency of incident handling.

1. Streamline compliance reporting

Many organizations deploy SIEM tools for this benefit alone: streamlining enterprise compliance reporting through a centralized logging solution. Each host that needs its logged security events included in reporting regularly transfers its log data to a SIEM server. A single SIEM server receives log data from many hosts and can generate one report that addresses all of the relevant logged security events across those hosts.

An organization without a SIEM system is unlikely to have robust centralized logging capabilities that can create rich customized reports, such as those necessary for most compliance reporting efforts. In such an environment, it may be necessary to generate individual reports for each host or to manually retrieve data from each host periodically and reassemble it at a centralized point to generate a single report.


The latter can be incredibly difficult, in no small part because different operating systems, applications and other pieces of software are likely to log their security events in various proprietary ways, making correlation a challenge. Converting all of this information into a single format may require extensive code development and customization.
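That conversion work is essentially log normalization: mapping each product's proprietary event format onto one shared schema before reporting. A hypothetical sketch of the idea (the formats and field names here are invented for illustration, not any vendor's actual log layouts):

```python
import json
from datetime import datetime, timezone

def parse_syslog_like(line):
    # Hypothetical space-delimited format, e.g.
    # "1699999999 host1 sshd LOGIN_FAIL user=root"
    ts, host, source, event, detail = line.split(" ", 4)
    return {
        "time": datetime.fromtimestamp(int(ts), tz=timezone.utc).isoformat(),
        "host": host,
        "source": source,
        "event": event,
        "detail": detail,
    }

def parse_json_audit(line):
    # Hypothetical JSON audit format with different field names.
    rec = json.loads(line)
    return {
        "time": datetime.fromtimestamp(rec["when"], tz=timezone.utc).isoformat(),
        "host": rec["node"],
        "source": rec["app"],
        "event": rec["type"],
        "detail": rec["msg"],
    }

# Once every record shares the same keys, a single report can
# cover both sources.
records = [
    parse_syslog_like("1699999999 host1 sshd LOGIN_FAIL user=root"),
    parse_json_audit('{"when": 1699999999, "node": "host2", '
                     '"app": "iis", "type": "LOGIN_FAIL", "msg": "bad password"}'),
]
failures = [r for r in records if r["event"] == "LOGIN_FAIL"]
```

A SIEM product ships hundreds of such parsers out of the box, which is precisely the development effort an organization avoids by buying one.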

Another reason why SIEM tools are so useful is that they often have built-in support for most common compliance efforts. Their reporting capabilities are compliant with the requirements mandated by standards such as the Health Insurance Portability and Accountability Act (HIPAA), the Payment Card Industry Data Security Standard (PCI DSS) and the Sarbanes-Oxley Act.

By using SIEM logs, an organization can save considerable time and resources when meeting its security compliance reporting requirements, especially if it is subject to more than one such compliance initiative.

2. Detect the undetected

SIEM systems are able to detect otherwise undetected incidents.

Many hosts that log security events do not have built-in incident detection capabilities. Although these hosts can observe events and generate audit log entries for them, they lack the ability to analyze those entries to identify signs of malicious activity. At best, hosts such as end-user laptops and desktops might be able to alert someone when a particular type of event occurs.

SIEM tools offer increased detection capabilities by correlating events across hosts. By gathering events from hosts across the enterprise, a SIEM system can see attacks that have different parts on different hosts and then reconstruct the series of events to determine what the nature of the attack was and whether or not it succeeded.

In other words, while a network intrusion prevention system might see part of an attack and a laptop’s operating system might see another part, a SIEM system can correlate the log data for all of these events. A SIEM tool can determine, for example, that a laptop was infected with malware that then caused it to join a botnet and start attacking other hosts.
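Correlation of this kind reduces to grouping normalized events by a shared key, such as the host, and looking for telltale sequences across sources. A toy sketch, assuming events have already been normalized to a common schema (the event-type names are purely illustrative):

```python
from collections import defaultdict

# Event types that, seen in order on one host, suggest a malware
# infection followed by botnet activity (illustrative names).
SUSPICIOUS_SEQUENCE = ["MALWARE_DETECTED", "OUTBOUND_C2", "PORT_SCAN"]

def correlate(events):
    """Return hosts whose time-ordered events contain the sequence."""
    by_host = defaultdict(list)
    for e in sorted(events, key=lambda e: e["time"]):
        by_host[e["host"]].append(e["event"])
    flagged = set()
    for host, seq in by_host.items():
        it = iter(seq)  # subsequence check: each step must appear after the last
        if all(step in it for step in SUSPICIOUS_SEQUENCE):
            flagged.add(host)
    return flagged

# An endpoint agent, a firewall and a network IPS each saw one piece:
events = [
    {"time": 1, "host": "laptop-7", "event": "MALWARE_DETECTED"},
    {"time": 2, "host": "laptop-7", "event": "OUTBOUND_C2"},
    {"time": 2, "host": "server-1", "event": "PORT_SCAN"},
    {"time": 3, "host": "laptop-7", "event": "PORT_SCAN"},
]
flagged = correlate(events)
```

No single source above sees the full attack; only the combined, time-ordered view flags the infected laptop while leaving the server's lone port-scan event unflagged.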

It is important to understand that while SIEM tools have many benefits, they should not replace enterprise security controls for attack detection, such as intrusion prevention systems, firewalls and antivirus technologies. A SIEM tool on its own has no ability to observe raw security events as they happen throughout the enterprise in real time; it works from the log data that other software records.

Many SIEM products also have the ability to stop attacks while they are still in progress. The SIEM tool itself doesn’t directly stop an attack; rather, it communicates with other enterprise security controls, such as firewalls, and directs them to block the malicious activity. This incident response capability enables the SIEM system to prevent security breaches that other systems might not have noticed elsewhere in the enterprise.

To take this a step further, an organization can choose to have its SIEM tool ingest threat intelligence data from trusted external sources. If the SIEM tool detects any activity involving known malicious hosts, it can then terminate those connections or otherwise disrupt the malicious hosts’ interactions with the organization’s hosts. This surpasses detection and enters the realm of prevention.
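With a threat feed ingested, that step from detection to prevention becomes a set-membership check against each event's network peers, followed by directives to other controls. A hypothetical sketch (the addresses are placeholder values from documentation ranges, and the action strings stand in for real firewall API calls):

```python
# Known-bad addresses from an external threat intelligence feed
# (placeholder values from the IPv4 documentation ranges).
MALICIOUS_IPS = {"203.0.113.7", "198.51.100.23"}

def actions_for(event):
    """Suggest containment actions when an event touches a known-bad host."""
    actions = []
    if event.get("remote_ip") in MALICIOUS_IPS:
        actions.append("block " + event["remote_ip"] + " at the firewall")
        actions.append("terminate connections from " + event["host"])
    return actions

alert = actions_for({"host": "db-2", "remote_ip": "203.0.113.7"})
quiet = actions_for({"host": "web-1", "remote_ip": "192.0.2.10"})
```

In a real deployment the SIEM would not print suggestions but push these directives to the firewall or endpoint control, which is what turns the lookup into prevention.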

3. Improve the efficiency of incident handling activities

Another of the many SIEM benefits is that SIEM tools significantly increase the efficiency of incident handling, which in turn saves time and resources for incident handlers. More efficient incident handling ultimately speeds incident containment, thus reducing the amount of damage that many security breaches and incidents cause.

A SIEM tool can improve efficiency primarily by providing a single interface to view all the security log data from many hosts. Examples of how this can expedite incident handling include:

  • it enables an incident handler to quickly identify an attack’s route through the enterprise;
  • it enables rapid identification of all the hosts that were affected by a particular attack; and
  • it provides automated mechanisms to stop attacks that are still in progress and to contain compromised hosts.

The benefits of SIEM products make them a necessity

The benefits of SIEM tools enable an organization to get a big-picture view of its security events throughout the enterprise. By bringing together security log data from enterprise security controls, host operating systems, applications and other software components, a SIEM tool can analyze large volumes of security log data to identify attacks, security threats and compromises. This correlation enables the SIEM tool to identify malicious activity that no other single host could because the SIEM tool is the only security control with true enterprise-wide visibility.      

Businesses turn to SIEM tools, meanwhile, for a few different purposes. One of the most common SIEM benefits is streamlined reporting for security compliance initiatives — such as HIPAA, PCI DSS and Sarbanes-Oxley — by centralizing the log data and providing built-in support to meet the reporting requirements of each initiative.

Another common use for SIEM tools is detecting incidents that would otherwise be missed and, when possible, automatically stopping attacks that are in progress to limit the damage.

Finally, SIEM products can also be invaluable to improve the efficiency of incident handling activities, both by reducing resource utilization and allowing real-time incident response, which also helps to limit the damage.

Today’s SIEM tools are available for a variety of architectures, including public cloud-based services, which makes them suitable for use in organizations of all sizes. Considering their support for automating compliance reporting, incident detection and incident handling activities, SIEM tools have become a necessity for virtually every organization.


If you long to become a swingin’ cat who makes the rounds of local cultural events like outdoor concerts, book-club meetups, film festivals, and sake tastings, you should check out Eventbrite. The service boasts a robust event listing and the ability to purchase tickets. You won’t be able to use it to buy tickets for sporting events or high-profile shows, and you can’t create your own events directly from the mobile app. Still, Eventbrite is worth checking out, if you’re looking for cool smaller-scale happenings in your town.

Welcome to the Party, Pal
The web-based Eventbrite puts the most important thing, search, front and center. Like Songkick, Eventbrite’s homepage automatically detects your location and pre-fills the location search criteria; you simply key in an event name or category. You can even filter your searches by date, if you’re specifically looking for a good time this upcoming weekend.
Beneath that search bar lives a large area that’s filled with upcoming events. If you haven’t created an account, you see a simple listing of activities and performances in your area. If you create an account, however, the app shows you events that match the topics of interest you selected when you signed up. For example, I selected the Cultural, Food & Drink, and Festival categories when I signed up, so my Eventbrite homepage is often filled with the likes of A Decadent Evening of Chocolate and Cocktails, Pop Up Dinner NYC, and New York Cocktail Expo. The bottom of the page displays trending topics, such as Networking and Sports & Wellness.

It’s important to understand what kind of bookings you can make with the service—and what kinds you can’t. Eventbrite specializes in smaller, cooler events, such as tastings, indie music performances, conventions, and readings. Ticketmaster, on the other hand, offers tickets for high-profile shows, such as The Book of Mormon and The Lion King. This is not the sort of thing you’ll find on Eventbrite. That said, Tribeca Film Festival uses Eventbrite to handle its ticketing, so I used the service to gain access to a Cobra Kai screening before the television show made its debut on YouTube Premium.
Selecting an event takes you to a page where you can view the venue name and address, date, and start and end times, and see the location on an embedded Google Map, much as you can with Songkick. I also like that each event has tags that lead you to similar events when you click through. Unfortunately, you can’t track recurring events, such as the NYC Craft Beer Festival, to receive alerts for an impending show. This kind of tracking is an area in which Songkick truly excels.

Saving and Purchasing Tickets
You can save events by clicking the bookmark icon located just beneath the listing’s main image. Alternately, you can buy tickets by clicking the large green Tickets icon. Saved events live in the Saved section, while purchased tickets live in the Tickets section. I like that Eventbrite separates those two categories, as it makes identifying which is which much simpler. Ticketmaster does the same, but Songkick, sadly, combines favorites and purchases into one category in its Plans section. You can also add events to your calendar, be it from Apple, Google, Outlook, or Yahoo.
The ticket purchase process requires that you input your name, email address, and credit/debit card information within a 15-minute time frame. If you don’t complete the purchase within the allotted time, you lose the held ticket and have to begin the process anew. In my experience with the service, Eventbrite handles the ticketing; I can’t recall a time when I was shuttled to Ticketmaster or StubHub to complete the transaction. Songkick, on the other hand, sends you to a third party to make a ticket purchase.
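The hold-and-expire flow described above is simple to model. This toy sketch assumes only what the review states (a 15-minute checkout window); the class and method names are hypothetical and are not Eventbrite's API:

```python
class TicketHold:
    """Toy model of a timed ticket hold: the buyer has a fixed window
    to complete checkout, after which the hold lapses and the
    purchase must start over."""
    HOLD_SECONDS = 15 * 60  # 15-minute window, per the review

    def __init__(self, created_at):
        self.created_at = created_at  # seconds since some epoch

    def expired(self, now):
        # The held ticket is released once the window has fully elapsed.
        return now - self.created_at > self.HOLD_SECONDS

hold = TicketHold(created_at=0)
print(hold.expired(now=10 * 60))  # → False: checkout can still complete
print(hold.expired(now=16 * 60))  # → True: ticket released, start anew
```

Timed holds like this are a common ticketing pattern: they keep inventory accurate while preventing abandoned carts from locking up seats indefinitely.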
Sadly, Eventbrite doesn’t support multiple account logins. This makes things a bit frustrating for people like me who use Eventbrite for both professional (E3 after parties) and personal reasons (ramen festivals). The ability to switch between accounts would be a welcome addition. Instead, you have to create two accounts, with two different email addresses, and then log in and out as needed.
Eventbrite keeps a full record of all your ticket purchases. In fact, I scrolled back to 2009 (nearly a decade ago!) to find an invite for a housewarming party. I didn’t expect Eventbrite to serve up some warm and fuzzy memories when I started testing it for this review.

The Eventbrite Mobile Apps
Eventbrite has apps for the Android and iOS platforms. I mainly tested Eventbrite on my Google Pixel XL, but I spent a bit of time with the iPhone version, too. The apps are very similar to each other in terms of design, and both offer the browser-based version’s many useful features. That said, they differ from the web version in small ways.
For example, you can use your phone as a barcode-based e-ticket instead of printing out a physical ticket—that’s very convenient. In addition, the mobile apps let you edit your user profile to add or remove event topics. Unfortunately, that option isn’t available in the web version.
Paint the Town Red
Overall, Eventbrite is a useful tool for discovering interesting events in your neck of the woods. Depending on what you’re into, Eventbrite could serve as your main method of discovering local activities, or, if you’re like me, you may find it works best as a companion to the likes of StubHub and Ticketmaster. Eventbrite is excellent at serving up cupcake bake-offs, walking tours, and other relatively small-scale events, but if it’s big-name Broadway shows you’re after, StubHub and Ticketmaster are better choices.

Liquidware user experience monitoring fills gap in DaaS migration

When IT leaders at a global events and publishing company chose to move their physical and virtual desktops to the cloud, they quickly discovered they couldn’t do it alone.

By early 2016, Informa had more than 1,000 employees using Citrix virtual desktops. As that number had grown, the desktops and their support infrastructure became increasingly difficult to manage.

“Complexity is the biggest enemy in IT,” said Martin van Nijnatten, head of end-user computing at the London-based company. “That was the key argument for moving from doing your own VDI to desktop as a service.”

At the same time, the VDI user experience was getting worse.

“There was a big gap between the physical desktops and the VDI estate,” said Peter MacNamara, senior VDI engineer at Informa. “Your user experience would not be the same wherever you went.”

The end-user computing team decided to migrate from physical and virtual desktops to Amazon Web Services’ desktop as a service (DaaS) offering, WorkSpaces. The move was made possible by Liquidware, whose products — particularly its user experience monitoring software — identified potential problems and provided much-needed management capabilities for the new cloud desktops and applications.

“[With WorkSpaces], you don’t have the tools that Citrix and VMware have natively,” MacNamara said. “So we had to fill that gap. Liquidware, especially with their monitoring tool, let us do that.”

User experience monitoring gets proactive

It’s moving from being reactive to proactive.
Martin van Nijnatten, head of end-user computing, Informa

After selecting AWS, Informa evaluated several virtual desktop management and user experience monitoring vendors to assist with the migration. The company considered RES Software (which Ivanti has since acquired), Unidesk (which Citrix has since acquired) and FSLogix in addition to Liquidware.

After a proof-of-concept deployment that ran through late 2016, Liquidware won out. Its Liquidware Essentials Bundle — which includes Stratusphere for user experience monitoring, ProfileUnity for user environment management and FlexApp for application layering — provided the capabilities Informa needed, and it wasn’t overly complicated to use, MacNamara said. It took less than a day to set up Stratusphere, which is available as an appliance in the Amazon Marketplace, and get it monitoring the Citrix virtual desktops, he said.

Peter MacNamara, senior VDI engineer at Informa

The user experience monitoring tool immediately paid dividends, identifying applications that could potentially cause problems when they moved to the cloud. The performance hit that McAfee’s antivirus software caused on the virtual desktops, for example, would have been too much to bear on WorkSpaces, MacNamara said. Armed with this information, the IT department was able to address the issue before it affected users.

“It’s moving from being reactive to proactive,” van Nijnatten said.

Informa used the information gleaned from Stratusphere to right-size its Amazon WorkSpaces deployment, making sure it allocated enough resources so as to not cause any performance problems, said Dave Johnson, who worked with Informa on this project as a Liquidware sales manager. And Liquidware’s ProfileDisks feature helped Informa capture user profiles on physical and Citrix virtual desktops and migrate them to Amazon WorkSpaces, Johnson said.

The performance data Stratusphere provided proved so valuable that Informa rolled the product out to its physical desktops as well. There are some improvements that van Nijnatten said he would like to see, however. Tops on that list is the incorporation of machine learning technology.

“Right now, you still have to do a lot of digging and conclusion-drawing yourself by looking at the data,” he said. “I think that there’s an opportunity to collate that data and create some more intelligence out of it.”

Liquidware ProfileUnity screenshot

VDI-to-DaaS migrations catching on

For most of DaaS’ existence, organizations considered it almost exclusively for greenfield deployments. Migrating from VDI to DaaS was too complex, and it was a waste to abandon investments in on-premises virtual desktops, the thinking went.

That’s slowly changing. At Informa, it was more important to embrace the future than to hold on to the past, MacNamara said.

Informa is one of many Liquidware customers that have moved or are considering moving from VDI to DaaS, Johnson said.

“A number of organizations have moved their core infrastructure to the cloud, and now they’re looking at moving their desktops,” he said. “To the user, it’s a very minimal impact, because the look and feel of the desktop is the same.”

Informa has run its IT infrastructure on AWS for more than a decade, dating back to a time when “everybody said you were out of your mind” if you moved core services to the cloud, van Nijnatten said. That familiarity led the company to choose Amazon WorkSpaces over DaaS offerings from Citrix and VMware, because those vendors still have a certain level of reliance on their on-premises VDI products, he said.

“Amazon was born in the cloud, and Citrix and VMware [weren’t],” van Nijnatten added.

An ongoing process

Informa’s work with Liquidware and Amazon WorkSpaces is not complete; the company still plans to move the remaining pockets of Citrix users to AWS and is also in the process of migrating from Windows 7 to Windows 10. The scale of that operating system upgrade would have been impossible for Informa’s Citrix infrastructure to handle, van Nijnatten said.

“We would’ve needed to redesign the whole setup,” he said.

The ultimate goal is to offer nonpersistent cloud desktops that rely on ProfileUnity to provide a consistent user experience and an added level of security.

“Now what we’re working towards is, you can log on to an Amazon workspace and your settings follow you,” MacNamara said. “Your documents follow you. It’s all there.”

2017 tech events: The year in photos

It’s been another whirlwind year for SearchCIO. Our 2017 tech events coverage not only spanned multiple IT disciplines and technology trends, but also multiple states. Our team journeyed to California, Nevada, Texas, Florida and a few places in between to report on the year’s most notable IT conferences, summits, symposiums and forums.

At these events, we gathered expert insight on topics like AI, chatbots, wearable technology, drones, cybersecurity, digital transformation, changing C-suite roles and much more.

Throughout the year, our team turned to Instagram to help document our time at these events. This 2017 tech events roundup provides a sampling of the best and most interesting photos from the SearchCIO team’s travels.

RSA Conference

February 2017, San Francisco

How are artificial intelligence and machine learning already being used by companies and consumers? What can we expect from AI in the years to come? Those were some of the questions posed during a keynote discussion with Alphabet Inc. executive chairman Eric Schmidt (right) at the RSA Conference 2017. Schmidt said we will eventually move from a mobile-first world to an AI-centered one, but admitted that AI is still in its early stages.

Gartner Data & Analytics Summit

March 2017, Grapevine, Texas

In their opening keynote, Gartner’s Debra Logan and Kurt Schlegel said that although there’s an abundance of data, we’re still lacking in areas like budgeting, skill development and establishing the right culture to truly take advantage of analytics opportunities. One way to overcome those shortfalls, they explained, is through organizational restructuring — including the addition of a chief data officer.

Chatbots and Virtual Assistants for the Enterprise

May 2017, San Francisco

Beerud Sheth, founder and CEO of smart messaging platform Gupshup, said the next frontier of bot evolution is interbot communication. He foresees a future in which bots can collaborate, multiply and upgrade themselves.

MIT Sloan CIO Symposium

May 2017, Cambridge, Mass.

One of the signature 2017 tech events we covered was the MIT Sloan CIO Symposium, where Andrew McAfee and Erik Brynjolfsson of the MIT Initiative on the Digital Economy said every job and business process will be disrupted by AI. According to them, we’re in the second stage of the machine age: getting machines to learn.

Keynote panelists at the MIT CIO Symposium discussed lessons they learned from their digital transformation journeys and their predictions for the future of digital organizations. Jim Fowler, group CIO at GE, said future workers will act more as data modelers and will need extensive coding skills.

Argyle CIO Leadership Forum

June 2017, New York

Michael Herman of consulting firm KPMG challenged attendees at the Argyle CIO Leadership Forum to think differently about the clichéd term “digital transformation.” One of Herman’s key pieces of advice was that digital transformation projects should be shaped with a human experience in mind — as well as with a strong tie-in to business needs.

InterDrone 2017

September 2017, Las Vegas

Michael Huerta of the FAA said the drone industry is still very much in its infancy, but massive progress is being made. Huerta noted that drones were critical in hurricane response efforts this year. The next steps in drone development include remote identification and tracking capabilities, he said.

Gartner Symposium/ITxpo

October 2017, Orlando, Fla.

Gartner’s Daryl Plummer shared the research firm’s top predictions, which touched on chatbots, AI, IoT and — most surprisingly — fake news. By 2022, Plummer posits that the majority of individuals in a mature economy will consume more false information than true information. The rise of “counterfeit reality” — driven by AI — will contribute to digital distrust, he said.

Gerri Martin-Flickinger, EVP and CTO at Starbucks, said transforming the coffee giant for the digital world meant embracing digital natives, a cloud-based platform model, Agile methodologies and a variety of other emerging technologies.

Enterprise Wearable Technology Summit

October 2017, Boston

Jay Kothari, project lead for Glass at X, the moonshot factory, discussed the lessons learned from Glass’s past struggles in the consumer market and described how Google’s wearable efforts have found new life in the enterprise. “We went from what we thought was a consumer fashion device to something that’s very function-oriented and has a very clear use case,” Kothari said.

MIT Sloan CFO Summit

November 2017, Newton, Mass.

Expert panelists from different industries discussed the changing CFO role and the importance of experience over certifications or degrees. They emphasized leadership, a good relationship with IT, cross-functional ability, global experience and cultural awareness as vital to being successful in the CFO role.

Forrester New Tech Forum

December 2017, Boston

Forrester’s Julie Ask said that smart conversational tech is the future of consumer experiences, but it will take at least 10 years for it to mature. A key challenge for conversational AI is understanding intent, which requires context. Current chatbots don’t have that understanding, Ask said.

AI World Conference

December 2017, Boston

Security guru Bruce Schneier said he doesn’t worry about an apocalypse spawned by AI. Instead, he worries about near-term — and more realistic — dangers like the weaponization of AI to do things like remotely hack airliners and self-driving cars, or alter medical records. As AI continues to evolve to protect assets and information, so does the likelihood of “bad guys” using AI to attack these systems, Schneier said.

Now that you’ve looked back at our 2017 tech events coverage, check out our Instagram account and give us a follow! It’s your resource for event photos, videos and session snippets.

Push for public, private sector cybersecurity cooperation continues

Recent events such as the Equifax data breach and allegations regarding Russian interference with the 2016 presidential election are sobering reminders of cybersecurity holes in both the public and private sectors.

Cooperation between government and businesses has long been heralded as vital to protect digital assets and improve U.S. cybersecurity, which is why such cooperation is becoming part of U.S. cybersecurity strategy, said acting FBI Director Andrew McCabe.

“There is no law enforcement or exclusive intelligence answer to these questions,” McCabe said about cybersecurity strategy during the Cambridge Cyber Summit hosted by CNBC and the Aspen Institute earlier this month. “We’ve got to work together with the private sector to get there.”

Achieving this goal was the main topic presented at the annual conference, which examines how the public and private sectors can work together to safeguard economic, financial and government assets, while also maintaining convenience and protecting online privacy.

Regulations are usually anathema to a tech industry that worries cybersecurity mandates hinder the innovation upon which their industry thrives. There has been headway of late, however: In response to claims that Russian agents bought social media advertisements designed to sow discord in American politics, Facebook CEO Mark Zuckerberg announced policy changes to “protect election integrity.”

McCabe admitted that the relationship between the federal government and the private sector has had its ups and downs through the years. Edward Snowden’s disclosures about U.S. digital surveillance practices and law enforcement’s confrontation with Apple over the San Bernardino, Calif., shooter’s iPhone, for example, have hindered public and private sector cybersecurity cooperation.

“I see things like this and I hope that we are now edging back into a warmer space … to actually work on solutions,” McCabe said.

The public sector is doing its part to help facilitate these partnerships: The New Democrat Coalition has established a Cybersecurity Task Force that promotes “public-private sector cooperation and innovation” designed to protect against cyberattacks. The U.S. House of Representatives recently passed the National Institute of Standards and Technology (NIST) Small Business Cybersecurity Act, which sets “guidelines,” as opposed to mandatory requirements, for small businesses.

If you try to put too much constraint and mandatory check boxes on the security of a device, you will find that the manufacturers are going to be slowed in their ability to innovate.
Rob Joyce, cybersecurity coordinator, U.S. White House

Incentives are a big part of these types of efforts. Last month, senators introduced a cybersecurity bill that would establish a reward program designed to incentivize private researchers to identify security flaws in U.S. election systems.

These types of partnerships are beneficial for both sides, said Rod Rosenstein, deputy attorney general at the Department of Justice, at the Cambridge Cyber Summit. Law enforcement investigations can help a company understand what happened, share context and information about related incidents, and even provide advice to shore up defenses if the hackers act again, he said.

“We can inform regulators about your cooperation, and we are uniquely situated to pursue the perpetrators through criminal investigation and prosecution,” Rosenstein said. “In appropriate cases that involve overseas actors, we can also pursue economic sanctions, diplomatic pressure and intelligence operations ourselves.”

International efforts, global companies

The “overseas” variable doesn’t end with nefarious foreign actors hacking U.S. companies. Public and private sector cybersecurity cooperation is further complicated in the global economy with enterprises that have customers, headquarters and employees stationed all over the world. This makes it difficult to incorporate cybersecurity best practices as digital information moves across borders.

Different countries have different rules when it comes to handling digital information, leaving international organizations to navigate conflicting international laws.

“They have different threats to their systems, to their data, to their employees in many different places,” McCabe said. “I think we have a clear and important role in helping them address those threats and those challenges.”

McCabe was quick to add, however, that U.S.-based security professionals and law enforcement prioritize U.S. cybersecurity standards.

“Although we acknowledge that [global companies] have responsibilities in other parts of the world, we expect them to live up to our norms of behavior and in compliance with U.S. law and all the ways that that’s required here in the United States,” McCabe said.

The power of voluntary enforcement

When it comes to cybersecurity, White House Cybersecurity Coordinator Rob Joyce said he is a fan of “voluntary enforcement” among industry. If industry groups can rise up to identify unique risks and push best cybersecurity practices, it could create a sort of peer pressure for other organizations to step up their cybersecurity game, he said at the summit.

The goal is to give consumers the opportunity to choose companies that have voluntarily implemented well-planned cybersecurity best practices and compliance standards, as opposed to security protocols that are slapped together just so new products can be put on the market quickly, he said.

“We would expect industry groups to start labeling themselves as compliant and then consumers to make smart choices about what they’re buying,” Joyce said.

Forcing cybersecurity standards on the technology industry through government regulation poses problems, Joyce said, mostly because the industry evolves so fast. A cybersecurity standard that provides effective data protection and enforcement today could quickly become obsolete when the next iteration of technology is introduced.

“The problem with forcing it through government regulation is you snap a chalk line today, and this industry moves fast,” Joyce said. “You impede good security because people have to do the thing to regulate it instead of doing the thing that’s right.”

The trick is to find that balance between innovation and cybersecurity protection, Joyce added.

“If you try to put too much constraint and mandatory check boxes on the security of a device, you will find that the manufacturers are going to be slowed in their ability to innovate and give us that next better product,” Joyce said. “But we’ve got to have the ability to drive that next better product to have some base security.”

Developing for the intelligent cloud and intelligent edge at Microsoft Connect(); 2017 – The Official Microsoft Blog

Today we’re kicking off Connect(); 2017, one of my favorite annual Microsoft developer events, where over three days we get to host approximately 150 livestreamed and interactive sessions for developers everywhere — no matter the tools they use or the platforms they prefer. Today at Connect(); 2017 I’m excited to share news that will help developers build for the intelligent cloud and the intelligent edge. There’s never been a better time to be a developer: developers are at the forefront of building the apps driving monumental change across organizations and entire industries. At Microsoft, we’re laser-focused on delivering tools and services that make developers more productive, helping developers create in the open, and putting AI into the hands of every developer so they can unleash the power of data and reimagine possibilities that will improve our world.

Any developer, any application, any platform

In previous years at Connect(); we announced the open-sourcing of .NET Core. Last year we announced Microsoft joining the Linux foundation and shared SQL Server on Linux. This year we’re continuing to deliver on our commitment to the open source community and making sure we can support customers no matter their platform of choice.

Azure Databricks — preview: Built in collaboration with the founders of Apache® Spark, Azure Databricks is a fast, easy and collaborative Apache® Spark-based analytics platform optimized for Azure. Azure Databricks combines the best of Databricks and Azure to help customers accelerate innovation with one-click setup, streamlined workflows and an interactive workspace. Native integration with Azure SQL Data Warehouse, Azure Storage, Azure Cosmos DB and Power BI simplifies the creation of modern data warehouses that enable organizations to provide self-service analytics and machine learning over both relational and non-relational data with enterprise-grade performance and governance. Customers inherently benefit from enterprise-grade Azure security, compliance and SLAs, as well as simplified security and identity control with Azure Active Directory integration. With these innovations, Azure is the one-stop destination to unlock powerful scenarios that make AI easy.

Microsoft joins MariaDB Foundation: Today we’re excited to be joining the MariaDB community as a platinum member of the MariaDB Foundation. As part of this membership, we’re committed to working closely with the foundation, actively contributing to MariaDB and the MariaDB community. We’re also announcing we’ll be delivering a preview of Azure Database for MariaDB, which will bring the fully managed service capabilities to MariaDB. Developers can sign up for the upcoming preview for Azure Database for MariaDB.

Azure Cosmos DB with Apache® Cassandra API — preview: With this preview, developers get Cassandra as a service powered by Azure Cosmos DB, using the Cassandra SDKs and tools they are already familiar with. Developers can reuse existing code and build new applications using the Cassandra API against Azure Cosmos DB’s globally distributed, multi-model database service. Azure Cosmos DB has been designed to scale throughput and storage across any number of geographic regions, with comprehensive SLAs and tunable consistency levels for more precise data latency management.

GitHub Partnership on GVFS: Today we’re announcing that Microsoft and GitHub are partnering to bring GVFS to GitHub’s 25 million users. GVFS is an open-source extension to the Git version control system developed by Microsoft to support the world’s largest repositories.

Helping developers be more productive

At Microsoft our mission is to empower every person and every organization on the planet to achieve more, and developers are no exception to this. We have a strong set of new announcements to help developers, as well as whole development teams, be more productive as they move into a world of continual innovation and continual development of their apps. At Connect(); we’re announcing:

Visual Studio App Center — general availability: The most comprehensive app development lifecycle solution for Objective-C, Swift, Java, Xamarin and React Native, Visual Studio App Center helps developers automate and manage the lifecycle of their iOS, Android, Windows and macOS apps. Developers can connect their repos and within minutes automate their builds, test on real devices in the cloud, distribute apps to beta testers and monitor real-world usage with crash and analytics data, all in one place.

Visual Studio Live Share — first look: Visual Studio is delivering the next major advancement in developer productivity with Visual Studio Live Share, which enables true real-time collaboration within both Visual Studio and Visual Studio Code. It lets developers seamlessly and securely share their project with other developers so that they can collaboratively edit and debug in real time together without having to sit in front of the same screen or in the same room. Rather than just screen sharing, Visual Studio Live Share lets developers share their full project context with a bi-directional, instant and familiar way to jump into opportunistic, collaborative programming.

Visual Studio Connected Environment for Azure Container Service (AKS) — upcoming preview: Visual Studio and Visual Studio Code will now use the Connected Environment for AKS features, making Kubernetes development a natural fit for Visual Studio developers. Developers will be able to easily edit and debug cloud-native applications running on Kubernetes in the cloud with the speed, ease and full functionality and productivity they’ve come to expect from Visual Studio.

Azure DevOps Projects — preview: Available in the Azure management portal, Azure DevOps Projects will deliver a guided experience, helping developers easily explore the many Azure platform services available to help build their apps and in the process, configure a full DevOps pipeline powered by Visual Studio Team Services.  In less than five minutes, this feature will ensure that DevOps is not an afterthought, but instead the foundation for new projects and one that works with many application frameworks, languages and Azure hosted deployment endpoints.

Take a look at how Columbia Sportswear is leveraging Microsoft’s developer tools and DevOps platform to drive their own digital transformation.

Putting AI in the hands of every developer

As AI becomes more pervasive and developers are able to harness the vast amounts of data being created every day, coupled with the power and scale of the cloud, we want to make it easy for developers to create the next generation of intelligent applications. We want to put AI in the hands of every developer with the tools and platforms they are most familiar with. With the announcements below, we’re delivering new AI tools and bringing machine learning and intelligence to the edge.

Visual Studio Tools for AI — preview: This is an extension of our popular Visual Studio IDE, which will allow developers and data scientists to create AI models with maximum productivity. Visual Studio Tools for AI delivers debugging and rich editing, with support for popular deep learning frameworks such as Cognitive Toolkit, TensorFlow and Caffe. With this addition, developers and data scientists have a full development experience at their fingertips to create, train, manage and deploy models locally, and scale to Azure.

Azure IoT Edge — preview: Today we’re making available the preview of Azure IoT Edge, a service that deploys cloud intelligence to IoT devices via containers, and we’re introducing a new set of breakthrough cloud capabilities to run on IoT Edge, with Azure Machine Learning, Azure Functions and Azure Stream Analytics. Azure IoT Edge enables developers to build and test container-based workloads using C, Java, .NET, Node.js and Python, and simplifies the deployment and management of workloads at the edge. Azure IoT Edge can run on IoT devices with as little as 128MB of memory. As part of this announcement, we’re also releasing Azure Machine Learning updates, which enables AI models to be deployed and run on edge devices through the Azure IoT Edge service. Additional updates include easier AI model deployment on iOS devices with Core ML, as well as updates to the Azure Machine Learning Workbench tool.

Every year at Connect(); we get to share new tools and services that we hope will empower and inspire developers to build great apps. I encourage you to tune into Connect(); 2017 to learn more about all of the new innovations we’re announcing today, and to see what you can reimagine.

Tags: Azure, Azure Machine Learning, connect, data, developers, IoT, open source, Visual Studio