Missions acquisition will simplify Slack integrations

Slack plans to use the technology gained from its acquisition of Missions, a division of the startup Robots & Pencils, to make it easier for non-developers to customize workflows and integrations within its team collaboration app.

A Slack user with no coding knowledge can use Missions to build widgets for getting more work done within the Slack interface. For example, a human resources department could use a Missions widget to track and approve interviews with job applicants.

The Missions tool could also power an employee help desk system within Slack, or be used to create an onboarding bot that keeps new hires abreast of the documents they need to sign and the orientations they must attend. 

“In the same way that code libraries make it easier to program, Slack is trying to make workflows easier for everyone in the enterprise,” said Wayne Kurtzman, an analyst at IDC. “Without training, users will be able to create their own automated workflows and integrate with other applications.”

Slack said it would take a few months to add Missions to its platform. It will support existing Missions customers for free during that time. In a note to its 200,000 active developers, Slack said the Missions purchase would benefit them too, by making it easier to connect their Slack integrations to other apps.

Slack integrations help startup retain market leadership

The acquisition is Slack’s latest attempt to expand beyond its traditional base of software engineers and small teams. More than 8 million people in 500,000 organizations now use the platform, which was launched in 2013, and 3 million of those users have paid accounts.

With more than 1,500 third-party apps available in its directory, Slack has more outside developers than competitors such as Microsoft Teams and Cisco Webex Teams. The vendor has sought to capitalize on that advantage by making Slack integrations more useful.

Earlier this year, Slack introduced a shortcut that lets users send information from Slack to business platforms like Zendesk and HubSpot. Slack could be used to create a Zendesk ticket asking the IT department for a new desktop monitor, for example.
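Under the hood, an integration like that is typically a small web service that receives a Slack interaction payload and calls the Zendesk REST API. Here is a minimal sketch in Python; the endpoint route, Zendesk subdomain and credentials are hypothetical placeholders, not Slack's or Zendesk's actual integration code.

```python
# Minimal sketch: a Slack message action that files a Zendesk ticket.
# The endpoint URL, subdomain and credentials are hypothetical placeholders.
import json
import os

import requests
from flask import Flask, request

app = Flask(__name__)

ZENDESK_URL = "https://example.zendesk.com/api/v2/tickets.json"  # placeholder subdomain
ZENDESK_AUTH = (os.environ["ZENDESK_USER"] + "/token", os.environ["ZENDESK_TOKEN"])

@app.route("/slack/actions", methods=["POST"])
def create_ticket_from_message():
    # Slack delivers interactive payloads as a form-encoded "payload" field.
    payload = json.loads(request.form["payload"])
    message_text = payload["message"]["text"]
    user = payload["user"]["username"]

    # File the Slack message as a new Zendesk ticket.
    resp = requests.post(
        ZENDESK_URL,
        auth=ZENDESK_AUTH,
        json={"ticket": {
            "subject": f"Request from @{user} via Slack",
            "comment": {"body": message_text},
        }},
        timeout=10,
    )
    resp.raise_for_status()
    return "", 200
```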

The automation of workflows, including through chatbots, is becoming increasingly important to enterprise technology buyers, according to Alan Lepofsky, an analyst at Constellation Research, based in Cupertino, Calif.

But it remains to be seen whether the average Slack user with no coding experience will take advantage of the Missions tool to build Slack integrations.

“I believe the hurdle in having regular knowledge workers create them is not skill, but rather even knowing that they can, or that they should,” Lepofsky said.

Cost, doubt about tech hold back AI for HR investment

Concerns about the cost and maturity of AI for HR technology are keeping HR departments from deploying this tech. Even so, users believe the technology will improve productivity and cut labor costs.

Those were some of the findings in a new survey of more than 1,300 HR professionals about AI for HR deployments. The survey was conducted by Future Workplace — an HR research and networking group — and Oracle.

More than half of the survey respondents believe the leading benefit of AI for HR technology will be an increase in worker productivity. These respondents said they expect HR technology will become interactive — not dissimilar from how people communicate with Apple’s Siri or Amazon’s Alexa. The simplification of this technology was the second leading benefit cited by survey respondents.

The third benefit — cited by 41% of the respondents — is AI for HR’s ability to eliminate labor costs. But this labor-cost elimination won’t happen quickly, and there’s debate about the real impact of this AI tech on labor.

Nearly 70% of the respondents to the Future Workplace and Oracle survey reported cost was a barrier to AI tool adoption. This was followed by “failure of technology” at 66%, meaning users don’t see the technology as mature and ready for adoption. The third leading impediment, at 55%, was security risks.

About half of HR processes can be automated

The reason AI for HR tech may be disruptive is that approximately half of all HR spending goes to transactional processes and routine administrative activities, according to The Hackett Group, a management consulting organization based in Miami. These are processes that are primed for automation.

The expectation is that chatbots and machine-to-voice interactions will take over much of the work of HR help desks and redundant administrative tasks.

In the coming years, all the administrative jobs in HR “are going to be wiped out,” said Dan Schawbel, research director at Future Workplace, based in New York.

As automation arrives, these HR employees will have to take on new work, shift to other jobs in HR or deal with layoffs. The people remaining will be focused on the strategic work, Schawbel said.

HR employees need to get training on these new technologies and skills, Schawbel said, “so when these shifts happen, you’re prepared and set up for success.”

Schawbel said he believes AI for HR technology is ready for enterprise use. Chatbots and voice user interfaces have demonstrated that they can handle initial employee inquiries, such as answering basic questions about benefits or onboarding. But, according to some, this doesn’t mean HR staffs will necessarily shrink. 
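As a rough illustration of what handling initial employee inquiries can mean in practice, here is a toy keyword-matching responder. Real HR chatbots use far more sophisticated natural language models; the topics and answers below are invented for the example.

```python
# Toy sketch of a first-line HR chatbot: match an employee question against
# canned answers for common topics, and escalate anything unmatched.
# Topics and answers are illustrative placeholders.

FAQ = {
    ("benefit", "insurance", "401k"): "Benefits enrollment is in the HR portal under 'My Benefits'.",
    ("onboard", "first day", "orientation"): "New-hire orientation runs every Monday; check your welcome email.",
    ("vacation", "pto", "leave"): "PTO balances appear on your pay stub and in the HR portal.",
}

def answer(question: str) -> str:
    q = question.lower()
    for keywords, reply in FAQ.items():
        if any(k in q for k in keywords):
            return reply
    return "I'll route this to an HR specialist."  # escalate unmatched questions

print(answer("How do I enroll in insurance benefits?"))
```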

HR workloads are rising

As these [AI] technologies get deployed, I don’t think you’re going to see a net loss of people.
Tony DiRomualdo, senior research director for global HR executive advisory, The Hackett Group

HR workloads “are rising every year. They’ve got more work, and they have to do more with the same or fewer employees,” said Tony DiRomualdo, senior research director for global human resources executive advisory at The Hackett Group.

DiRomualdo said workloads are increasing because of difficulties in recruiting, as well as demands by the business on HR to help improve the productivity of the workforce, develop better leaders and exploit human capital for competitive advantage.

“As these technologies get deployed, I don’t think you’re going to see a net loss of people,” DiRomualdo said. But HR workers will have to learn how to handle more sophisticated tools, he said.

Even though AI technology creates some uncertainty about the future of work, people in HR aren’t necessarily opposed to its adoption. Failure to adopt this technology may hurt their careers and leave them with obsolete skills.

The use of advanced tech tools in the home life of HR employees may help drive adoption in business, similar to what happened with mobile devices, according to Emily He, Oracle’s senior vice president of the human capital management cloud business group.

“Employees are more ready [to use advanced tech] than the enterprise,” He said. “There’s a gap between the rate at which employees are ready to adopt new technology and the rate at which enterprises are adopting technology.”

DHS, SecureLogix develop TDoS attack defense

The U.S. Department of Homeland Security has partnered with security firm SecureLogix to develop technology to defend against telephony denial-of-service attacks, which remain a significant threat to emergency call centers, banks, schools and hospitals.

The DHS Science and Technology (S&T) Directorate said this week the office and SecureLogix were making “rapid progress” in developing defenses against call spoofing and robocalls — two techniques used by criminals in launching telephony denial-of-service (TDoS) attacks to extort money. Ultimately, the S&T’s goal is to “shift the advantage from TDoS attackers to network administrators.”

To that end, S&T and SecureLogix, based in San Antonio, are developing two TDoS attack defenses. First is a mechanism for identifying the voice recording used in call spoofing, followed by a means to separate legitimate emergency calls from robocalls.

“Several corporations, including many banks and DHS components, have expressed interest in this technology, and SecureLogix will release it into the market in the coming months,” William Bryan, interim undersecretary for S&T at DHS, said in a statement.

In 2017, S&T handed SecureLogix a $100,000 research award to develop anti-call-spoofing technology. The company was one of a dozen small tech firms that received similar amounts from S&T to create a variety of security applications.

Filtering out TDoS attack calls

SecureLogix’s technology analyzes and assigns a threat score to each incoming call in real time. Calls with a high score are either terminated or redirected to a lower-priority queue or a third-party call management service.
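SecureLogix has not published its scoring model, but the flow described above can be sketched generically: compute a score from per-call signals, then terminate, deprioritize or answer based on thresholds. The features, weights and thresholds below are illustrative assumptions, not the vendor's actual logic.

```python
# Generic sketch of threat-score call filtering; weights and thresholds
# are made-up illustrations, not SecureLogix's proprietary model.
from dataclasses import dataclass

@dataclass
class Call:
    caller_id: str
    calls_last_minute: int   # burst volume seen from this source
    spoof_likelihood: float  # 0..1, e.g., from voice-recording fingerprinting

def threat_score(call: Call) -> float:
    # Weighted blend of simple signals; weights are illustrative only.
    burst = min(call.calls_last_minute / 50.0, 1.0)
    return 0.6 * call.spoof_likelihood + 0.4 * burst

def route(call: Call) -> str:
    score = threat_score(call)
    if score > 0.8:
        return "terminate"
    if score > 0.5:
        return "low-priority queue"  # or hand off to a call management service
    return "answer"

print(route(Call("+15551234567", calls_last_minute=120, spoof_likelihood=0.9)))
```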

SecureLogix built its prototype on existing voice security technologies, so it can be deployed in complex voice networks, according to S&T. It also contains a business rules management system and a machine learning engine “that can be extended easily, with limited software modifications.”

Over the last year, SecureLogix deployed the prototype within a customer facility, a cloud environment and a service provider network. The vendor also worked with a 911 emergency call center and large financial institutions.

In March 2013, a large-scale TDoS attack highlighted the threat against the telephone systems of public-sector agencies. An alert issued by DHS and the FBI said extortionists had launched dozens of attacks against the administrative telephone lines of air ambulance and ambulance organizations, hospitals and financial institutions.

Today, the need for TDoS protection has expanded from on-premises systems to the cloud, where an increasing number of companies and call centers are signing up for unified communications as a service. In 2017, nearly half of organizations surveyed by Nemertes Research were using or planned to use cloud-based UC.

The Complete Guide to Azure Virtual Machines: Part 1

Azure Virtual Machines make virtualization, an already hugely flexible technology, even more adaptable through remote hosting.

Virtual machines are part of Azure’s Infrastructure as a Service (IaaS) offering, which gives you the flexibility of virtualization without having to invest in the underlying infrastructure. In simpler terms, you pay Microsoft to run a virtual machine of your choosing in its Azure environment while it provides you access to the VM.

One of the biggest misconceptions I see in the workplace is that managing Cloud Infrastructure is the same as or very similar to managing on-premises infrastructure. THIS IS NOT TRUE. Cloud Infrastructure is a whole new ball game. It can be a great tool in our back pockets for certain scenarios, but only if used correctly. This blog series will explain how you can determine if a workload is suitable for an Azure VM and how to deploy it properly.

Why Use Azure Virtual Machines Over On-Premises Equipment?

One of the biggest features of the public cloud is its scalability. If you write an application and need to scale up resources dramatically for a few days, you can create a VM in Azure, install your application, run it there and turn it off when done. You only pay for what you use. If you haven’t already invested in your own physical environment, this is a very attractive alternative. The agility this gives software developers is on a whole new level, enabling companies to create applications more efficiently and to scale whenever they need to.

Should I Choose IaaS or PaaS?

When deploying workloads in Azure, it is important to determine whether an application or service should run on Platform as a Service (PaaS) or on a Virtual Machine (IaaS). For example, let’s say you are porting an application into Azure that runs on SQL. Do we want to build a Virtual Machine and install SQL, or do we want to leverage Azure’s PaaS services and use one of their SQL instances? There are many factors in deciding between PaaS and IaaS, but one of the biggest is how much control you require for your application to run effectively. Do you need to make a lot of changes to the registry, and do you require many tweaks within the SQL install? If so, the virtual machine route is a better fit.

How To Choose The Right Virtual Machine Type

In Azure, the Virtual Machine resource specifications are cookie cutter. You don’t get to customize down to the details of how much CPU and memory you want. VMs come in an offering of different sizes, and you have to make those resource templates work for your computing needs. Making sure the correct size of VM is selected is crucial in Azure, not only because of performance implications for your applications but also because of pricing. You don’t want to be paying more for a VM that is too large for your workloads.

Make sure you do your homework to determine which size is right for your needs. Also, pay close attention to I/O requirements. Storage is almost always the most common performance killer, so do your due diligence and make sure you’re getting a VM that meets your IOPS (input/output operations per second) requirements. For Windows licensing, Microsoft covers the license, as well as the Client Access License if you’re running a VM that needs CALs. For Linux VMs, the licensing differs per distribution.
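One way to do that homework is to pull the catalog of sizes for a region programmatically. Here is a sketch using the Azure SDK for Python; it assumes the azure-identity and azure-mgmt-compute packages and a subscription ID in an environment variable.

```python
# List the VM sizes available in a region, with cores, memory and disk counts.
import os

from azure.identity import DefaultAzureCredential
from azure.mgmt.compute import ComputeManagementClient

credential = DefaultAzureCredential()
compute = ComputeManagementClient(credential, os.environ["AZURE_SUBSCRIPTION_ID"])

# Print the key specs for each size offered in East US.
for size in compute.virtual_machine_sizes.list(location="eastus"):
    print(f"{size.name}: {size.number_of_cores} vCPU, "
          f"{size.memory_in_mb} MB RAM, {size.max_data_disk_count} data disks")
```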

Before we go and create a Virtual Machine inside Azure, let’s go over one of the gotchas you might run into if you’re not aware of it. Since everything in Azure is “pay as you go”, if you’re not aware of the pricing at all times, you or your company may get a hefty bill from Microsoft. One of the common mistakes with VMs is that if you don’t completely remove a VM, you can still be charged for it. Simply shutting down the VM will not stop the meter from running: you’re still reserving the hardware from Microsoft, so you’ll still be billed. Also, when you delete the VM, you have to delete its managed disk separately. The VM itself is not the only cost applied when running virtual machines.
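In SDK terms, the gotcha is the difference between powering off and deallocating. A sketch with the Azure SDK for Python; the resource group and VM names are placeholders in the spirit of this walkthrough.

```python
# Stop the billing meter by deallocating instead of merely powering off.
import os

from azure.identity import DefaultAzureCredential
from azure.mgmt.compute import ComputeManagementClient

credential = DefaultAzureCredential()
compute = ComputeManagementClient(credential, os.environ["AZURE_SUBSCRIPTION_ID"])

# begin_power_off() leaves the VM "stopped" but still allocated (still billed);
# begin_deallocate() releases the hardware so compute charges stop.
compute.virtual_machines.begin_deallocate("LukeLabRG", "LukeLabVM1").result()
```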

Getting Started – Creating the Virtual Network

We will now demonstrate how to configure a Virtual Machine on Azure and how to connect to it. First, we will need to create the virtual networking so that the VM has a network to talk out on. Afterward, we will create the Network Security Group, which acts like a “firewall” for the VM, and then finally we will create the VM itself. To create the Virtual Network, log into the Azure Portal and select “Create a Resource”, then click Networking > Virtual Network:


Now we can specify the settings for our Virtual Network. First, we’ll give it a name. I’ll call mine “LukeLabVnet1”. I’ll leave the address space at its default here, but we could make it smaller if we chose to. Then we will select our subscription. You can use multiple subscriptions for different purposes, like a development subscription and a production subscription. Resource groups are a way for you to manage and group your Azure resources for billing, monitoring and access control purposes. We already have a resource group created for this VM and its components, so I will go ahead and select that; if we wanted, we could create a new one on the fly here. Then we pick the location, which is East US for me. Next, we’ll give the subnet a name, since we can create multiple subnets on this virtual network later; I’ll call it “LukeLabSubnet”. I’ll leave the default address range for the subnet, since we are just configuring one VM and setting up access to it. Once we are done, we hit “Create”:
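If you would rather script this than click through the portal, the same virtual network and subnet can be sketched with the Azure SDK for Python. The resource group name, region and address ranges below are assumptions matching the walkthrough, not required values.

```python
# Create the VNet and subnet from the walkthrough via the Azure SDK for Python.
import os

from azure.identity import DefaultAzureCredential
from azure.mgmt.network import NetworkManagementClient

credential = DefaultAzureCredential()
network = NetworkManagementClient(credential, os.environ["AZURE_SUBSCRIPTION_ID"])

poller = network.virtual_networks.begin_create_or_update(
    "LukeLabRG",        # existing resource group (placeholder name)
    "LukeLabVnet1",
    {
        "location": "eastus",
        "address_space": {"address_prefixes": ["10.0.0.0/16"]},
        "subnets": [{"name": "LukeLabSubnet", "address_prefix": "10.0.0.0/24"}],
    },
)
vnet = poller.result()
print(f"Created {vnet.name} with subnet {vnet.subnets[0].name}")
```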

Now, to get to our newly created Virtual Network, on the left-hand side of the portal we select “Virtual Networks” and click on the one we just deployed:

We can configure all of the settings for our Virtual Network here. However, to keep the demonstration simple, we will leave everything as it is for now:

Now that we have our virtual network in place, we will need to create our Network Security Group and then finally deploy our VM, which we will do in part 2 of this series. As you can see, there are a lot of components to learn when deploying VMs in Azure.

Comments/Feedback?

If you’re unsure about anything stated here let me know in the comments below and I’ll try to explain it better.

Have you tried Azure Virtual Machines? Let us know your verdict!

MU-MIMO technology boosts system capacity for WLANs

It was legendary science-fiction writer Arthur C. Clarke who wrote, “Any sufficiently advanced technology is indistinguishable from magic.”

Many would put wireless LAN in the magical category. And if that’s the case, multiple-input, multiple-output (MIMO) and, most recently, multiuser MIMO (MU-MIMO) technology would really have to be the rabbits in the hat. Even those of us who’ve spent the majority of our careers in wireless networking are amazed at what those technologies can do.

MIMO’s been with us since 802.11n, and it unlocked the amazing performance improvements that extend into the current era. Yet, MIMO is still little understood, even among many network engineers. No surprise there, as MIMO seems to violate many of the laws that govern communications theory.

In a nutshell, MIMO technology takes two-dimensional radio waves — you can think of them as having just frequency and amplitude, which are the variables that we use to modulate a carrier signal, thus embossing the information we wish to transmit on this simple structure — and makes them three-dimensional. The third dimension is space, and that’s why MIMO is often referred to as spatial multiplexing. More dimensions result in a greater capacity to carry information. That’s why MIMO yields such amazing results, without violating any physical laws whatsoever.
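In textbook terms (this is the standard information-theoretic result, not the article's own math), the capacity of a MIMO channel with $N_t$ transmit and $N_r$ receive antennas, channel matrix $\mathbf{H}$ and signal-to-noise ratio $\rho$ is

$$C = \log_2 \det\!\left(\mathbf{I}_{N_r} + \frac{\rho}{N_t}\,\mathbf{H}\mathbf{H}^{\mathsf{H}}\right) \ \text{bits/s/Hz},$$

which in a rich-scattering environment grows roughly in proportion to $\min(N_t, N_r)$: the extra spatial dimension multiplies capacity rather than merely nudging it.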

Overcoming the wireless interference problem

But there’s more to the performance of wireless communications than simply cramming more bits into a given transmission cycle. Wireless has historically been a serial-access medium: Only one transmitter can be active in any given location at a particular frequency at any given moment in time. Two transmitters in close proximity attempting simultaneous transmission will likely cause mutual interference and at least some degradation to overall system capacity. While licensed radio services, like cellular, can schedule transmissions to avoid this problem, the unlicensed bands have no such controls; stations cannot coordinate with one another and must accept as reality any interference they encounter.


In response, the wireless industry developed many techniques to deal with the potential damage inherent in interference, mostly related to how information is modulated and coded before it’s sent over the air. But a large problem remains: Only one station can transmit at any moment in time. Until recently, this has meant only one receiving client station could be served — again, at any moment in time — and any others desiring communication would have to wait. The result: Overall system capacity was limited.

And that’s just the problem MU-MIMO technology solves. In yet another incarnation of magic, MU-MIMO enables an access point to transmit — simultaneously — to multiple clients in one transmit cycle, with each client receiving a unique data stream from the others also sent at the same time. Now, several stations can receive the data they need with less overall waiting. System capacity goes up. And more users go home — or, better said, stay at work — happy.

802.11ax standard introduces new flavor of MU-MIMO technology

Now, how this technique is actually implemented is so complex that the math involved would make great bedtime reading. Suffice it to say, MU-MIMO technology works very, very well — Farpoint Group’s own testing shows performance gains close to the theoretical maximum. Your mileage will likely vary, but the potential here is enormous. The upcoming 802.11ax standard is expected to add bi-directional MU-MIMO, meaning stations will be able to transmit simultaneously to an access point. Now, we’re talking real magic.

Farpoint Group recommends new Wi-Fi infrastructure and client purchases specify support for MU-MIMO — yes, you’ll need new gear on both ends; field upgrades are not possible in most cases. But you’ll be glad you specified this requirement. Of course, MU-MIMO technology isn’t really magic, but it certainly looks that way.

CIO position: Evolve conference shows many ways to manage IT

The annual Evolve Technology Conference, which ran last month in Las Vegas, put the spotlight on the CIO position and drove home one point in particular on the top-level technology management job: There’s more than one way to do it.

Trace3, an IT solutions provider based in Irvine, Calif., hosts the Evolve leadership and technology event, which attracts numerous CIOs and IT managers who discuss emerging technology and business trends. The conference program, which this year included keynotes from retired NFL quarterback and five-time league MVP Peyton Manning and entrepreneur and author Peter Hinssen, culminates with the Outlier Award. The award recognizes a technology manager who “consistently delivers dynamic innovation and outstanding leadership,” according to Trace3.

I had the opportunity to speak with the eight finalists for the Outlier Award at the Evolve conference. I was one of seven judges pulled together from the ranks of CIOs and tech writers to evaluate the candidates and cast our votes. The two-day process revealed a variety of takes on IT management philosophy among those holding a CIO position or similar tech role.

Harnessing emerging technologies

Some of the finalists take a deep dive in technology. Darren Haas, senior vice president of software engineering at GE Digital and the Outlier Award winner, is the technologist’s technologist. Haas co-founded Siri and is one of the personal assistant application’s original developers, harnessing in recent years such technologies as Apache Mesos, an open source cluster manager. After Apple’s Siri acquisition in 2010, Haas helped devise Apple’s proprietary cloud services platform. Haas now is pursuing a similar task at GE Digital, where he supports a number of initiatives, including an edge-to-cloud IoT deployment.

Outlier finalist Ravi Nekkalapu also deals with cutting-edge technology in his role as CIO and head of IT at Drive Shack, a company that’s building virtual reality golfing complexes. He didn’t have much of a choice: The virtual reality- and augmented reality-driven golfing systems Drive Shack envisioned didn’t exist when he took the job in 2016. Nekkalapu had to evaluate technology and vendors and essentially build everything from scratch to equip Drive Shack’s 60,000-square-foot facility in Orlando, Fla. The prospect of fielding new technology attracted Nekkalapu to the Drive Shack assignment, but perhaps not the sport of golf. He acknowledged he had little interest in golf prior to joining Drive Shack from a previous IT management role at Wyndham Worldwide.

Building the foundation for innovation

Innovation is well and good, but the technology foundation must be rock solid. Philip Irby, Outlier finalist and CIO at The Cosmopolitan of Las Vegas, said he takes an architecture-first view of IT, in which stability is the core objective and security is “part of our DNA.” In his IT management philosophy, innovative systems can be built on a reliable and secure platform. To wit, the hotel resort and casino has launched a new online component to its rewards program that delivers offers directly to customers’ mobile devices.

Stability as the foundation of innovation is also a key theme for Michael McGibbney, senior vice president of delivery and operations at SAP SuccessFactors and an Outlier finalist. For a SaaS company such as SAP SuccessFactors, which provides cloud-based human capital management software, the ability to handle peak usage periods is critical for customer satisfaction and retention. McGibbney’s IT team created a “Service Delivery & Operations” organization to focus on peak-season performance.

For Paul Chapman, an Outlier finalist who holds the CIO position at Box, the role might be seen as creating the cultural conditions in which innovation can occur. He emphasizes people, rather than technology, as the key to maintaining the accelerating pace of transformation. At Box, which has a heavily millennial workforce, Chapman’s to-do list includes working to create a new-look workplace that’s collaborative and employs such features as voice-enabled conference rooms.

Aligning with the business mission

Some CIOs may see their roles as driving new technology adoption or taking a more pragmatic line on innovation. Others, meanwhile, put a premium on the CIO position as business partner.


For Michelle McKenna-Doyle, her role as senior vice president and CIO at the National Football League ranges from working with team owners to upgrade stadium infrastructure and the fan’s digital experience to expanding the league’s business partnerships. As for the latter, the Outlier finalist sparked an initiative to land multimillion-dollar corporate partnerships with technology giants such as Microsoft; the NFL had previously cultivated ties with established consumer brands. The CIO also created a career path for technical personnel within the NFL. Today positions such as chief architect carry the same weight as senior vice president.

At Western Digital Corp., Terry Dembitz, vice president of IT and Outlier finalist, helped build out the Office of the CIO to include an IT Business Partner Program, which serves as the business advocate within the IT organization. IT staffers within the IT Business Partner Program work with the business side on technology roadmaps and individual projects. One especially large project was getting Western Digital and two acquired companies — Hitachi Global Storage Technologies and SanDisk — on one ERP system. Instead of selecting one of the three companies’ ERP systems to standardize on, Western Digital opted for a fourth approach: adopt a cloud-based ERP system. Dembitz’s thinking was to “turn integration into transformation” and position the company for the future.

The “businessperson-first” philosophy informs Bryan Kissinger’s outlook as vice president and CISO at Banner Health. A few months after joining Banner Health in 2017, the Outlier finalist gained support from the health system’s clinical leadership to deploy a single sign-on system that aims to save each clinician several hours a week in multiple, manual logins. Banner Health uses Imprivata’s single sign-on technology, which integrates with its Cerner electronic health records system. In another IT initiative, Kissinger said Banner Health is looking to invest in technology startups that can advance the health system’s patient care mission.

The CIO position: Horses for courses

The differences in technology management approaches stem to some degree from the workplace milieu. A greenfield operation, for instance, is going to call for a tech-heavy approach, at least during the early going. A manager’s educational and professional experiences also play an important role in shaping a CIO’s IT management philosophy. The CIOs and technology managers participating in the Trace3 event come from a range of backgrounds, including finance, business administration, IT and the military.

Indeed, the Outlier finalists demonstrate there’s no common path to a CIO position or an IT management role.

Unchecked cloud IoT costs can quickly spiral upward

The convergence of IoT and cloud computing can tantalize enterprises that want to delve into new technology, but it’s potentially a very pricey proposition.

Public cloud providers have pushed heavily into IoT, positioning themselves as a hub for much of the storage and analysis of data collected by these connected devices. Managed services from AWS, Microsoft Azure and others make IoT easy to initiate, but users who don’t properly configure their workloads quickly encounter runaway IoT costs.

Cost overruns on public cloud deployments are nothing new, despite lingering perceptions that these platforms are always a cheaper alternative to private data centers. But IoT architectures are particularly sensitive to metered billing because of the sheer volume of data they produce. For example, a connected device in a factory setting could generate hundreds of unique streams of data every few milliseconds that record everything from temperatures to acoustics. All told, that could add up to a terabyte of data uploaded daily to cloud storage.
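A quick back-of-envelope check shows how fast that adds up; the stream count, sampling interval and message size here are assumed figures, not taken from the article.

```python
# Rough sizing of daily upload volume for one heavily instrumented machine.
streams = 200               # distinct sensor streams (assumption)
interval_s = 0.005          # one reading every 5 ms (assumption)
bytes_per_reading = 256     # payload plus protocol overhead (assumption)

readings_per_sec = streams / interval_s          # 40,000 readings/s
bytes_per_day = readings_per_sec * bytes_per_reading * 86_400
print(f"{bytes_per_day / 1e12:.2f} TB/day")      # ~0.88 TB/day at these figures
```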

“The amount of data you transmit and store and analyze is potentially infinite,” said Ezra Gottheil, an analyst at Technology Business Research Inc. in Hampton, N.H. “You can measure things however often you want. And if you measure it often, the amount of data grows without bounds.”

Users must also consider networking costs. Most large cloud vendors charge based on communications between the device and their core services. And in typical public cloud fashion, each vendor charges differently for those services.

Predictive analytics reveals, compares IoT costs

To parse the complexity and scale of potential IoT cost considerations, analyst firm 451 Research built a Python simulation and applied predictive analytics to determine costs for 10 million IoT workload configurations. It found Azure was largely the least-expensive option — particularly if resources were purchased in advance — though AWS could be cheaper on deployments with fewer than 20,000 connected devices. It also illuminated how vast pricing complexities hinder straightforward cost comparisons between providers.

In a VM, you have a comparison with dedicated servers from before. But with IoT, it’s a whole new world.
Owen Rogers, analyst, 451 Research

For example, Google charges in terms of data transferred, while AWS and Azure charge against the number of messages sent. Yet, AWS and Azure treat messages differently, which can also affect IoT costs; Microsoft caps the size of a message, potentially requiring a customer to send multiple messages.

There are other unexpected charges, said Owen Rogers, a 451 analyst. Google, for example, charges for ping messages, which check that the connection is kept alive. That ping may only be 64 bytes, but Google rounds up to the kilobyte. So, customers essentially pay for unused capacity.
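To see why those nuances matter, compare how the same traffic prices out under a per-message model versus a per-megabyte model that rounds each message up to a kilobyte. All rates and volumes below are hypothetical placeholders, not the providers' actual prices.

```python
# How identical traffic prices out under two charging models. All rates and
# volumes are hypothetical placeholders, not actual provider prices.
messages_per_month = 50_000 * 2_600   # 50k devices sending ~2,600 msgs each

# Model A: charge per message sent.
price_per_million_msgs = 1.00         # hypothetical rate
cost_model_a = messages_per_month / 1e6 * price_per_million_msgs

# Model B: charge per MB transferred, but round every message (even a
# 64-byte keep-alive ping) up to a full kilobyte, as described above.
price_per_mb = 0.0045                 # hypothetical rate
billed_mb = messages_per_month * 1024 / 1e6
cost_model_b = billed_mb * price_per_mb

print(f"per-message model: ${cost_model_a:,.2f}/month")               # $130.00
print(f"per-MB model with 1 KB rounding: ${cost_model_b:,.2f}/month") # $599.04
```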

“Each of these models has nuances, and you only really discover them when you look through the terms and conditions,” Rogers said.

Some of these nuances aim to protect the provider or hide complexity from the users, but users may scratch their heads. Charging discrepancies are endemic to the public cloud, but IoT costs present new challenges for those deciding which cloud to use — especially those who start out with no past experience as a reference point.

“How are you going to say it’s less or more than it was before? At least in a VM, you have a comparison with dedicated servers from before. But with IoT, it’s a whole new world,” Rogers said. “If you want to compare providers, it would be almost impossible to do manually.”

There are many unknowns in building an IoT deployment compared with more traditional applications, some of which apply regardless of whether it’s built on the public cloud or in a private data center. Software asset management can be a huge cost at scale. In the case of a connected factory or building, greater heterogeneity affects time and cost, too.

“Developers really need to understand the environment, and they have to be able to program for that environment,” said Alfonso Velosa, a Gartner analyst. “You would set different protocols, logic rules and processes when you’re in the factory for a robot versus a man[-operated] machine versus the air conditioners.”

Data can also get stale rather quickly and, in some cases, become useless if it’s not used within seconds. Companies must put policies in place to make sure they understand how frequently to record data and how much of it to transmit back to the cloud. That includes when to move data from active storage to cold storage and if and when to completely purge those records.

“It’s really sitting down and figuring out, ‘What’s the value of this data, and how much do I want to collect?'” Velosa said. “For a lot of folks, it’s still not clear where that value is.”

AARP, startups partner to study digital healthcare technology

Research from AARP has found 90% of adults aged 50 and older use technology to stay connected. Based on that research, AARP has partnered with two Boston-based digital health startups that combine technology and healthcare with a friendly face: a health-focused robotic companion that will be placed in the homes of individuals selected to participate in a pilot study of the product.

Pillo, a HIPAA-compliant digital healthcare companion robot, will be placed in the homes of six to 10 pilot study participants later this month for about four weeks to determine how the robot can improve disease management for individuals who have been newly diagnosed with diabetes.

Pillo, which was created by Pillo Health and given a voice through Orbita’s voice experience management platform, is a voice- and video-enabled intelligent assistant that’s able to dispense medication, connect to caregivers, issue voice reminders and perform daily tasks, like reporting the weather and playing radio stations. Emanuele Musini, CEO and co-owner of Pillo Health, said the robot features a 7-inch touchscreen and facial recognition technology. Once Pillo recognizes the patient, it is able to dispense medication that has been preloaded into the robot.

In-home digital healthcare technology is “the future of healthcare,” said Brian Jack, chief of family medicine at Boston Medical Center. Jack said, over the next several years, he expects there will be gradual to rapid movement of care from the office and hospital settings to the home. And he said he believes in-home digital healthcare technology is an opportunity to “provide better care at a lower cost.”

Investing in digital health startups

AARP chose to partner with Orbita and Pillo Health in May as a result of the PULSE@MassChallenge event — a digital health innovation hub established by the city of Boston, MassChallenge and other entities to support digital health startups. AARP launched its $40 million Innovation Fund in 2015; the fund allows the organization to invest in companies working in three major health-related areas: aging at home, convenience and access to healthcare, and preventive health.

We want to help bring solutions to market that make life better for people 50-plus and increase their health security, financial well-being and personal fulfillment.
Andy Miller, senior vice president of innovation and product development, AARP

AARP’s purpose is to “empower” people to choose how they live as they age, said Andy Miller, senior vice president of innovation and product development at AARP, based in Washington, D.C.

“Innovation is a major way to make this happen,” Miller said. “We want to help bring solutions to market that make life better for people 50-plus and increase their health security, financial well-being and personal fulfillment.”

Technology makes it easier for providers to monitor and diagnose patients at critical moments and to provide ongoing care without having the patient always in the room with them, Miller said.

Bringing robotics into the home

Orbita CEO Bill Rogers said Pillo will empower older adults by reminding them to take their medication on time and providing education about diabetes. Pillo can also communicate information to caregivers, alerting them if a person’s medication has not been taken or if some other issue occurs. 

Rogers said the challenge with mobile applications and web portals is the user needs to learn that experience to be able to collaborate with their doctors and physicians. Voice technology “changes the whole game of engagement,” he explained.  

“It allows people to be able to engage and interact with their voice, which is the natural way people engage,” Rogers said.

Pillo’s Musini said the idea to create Pillo stemmed from his own personal experience with his father, who had serious health issues and would forget to take his medication and follow the doctor’s orders.

“We started it with a mission to empower older adults living at home with chronic conditions,” Musini said. “The approach I had was, ‘What if there was someone with my father at that time?’ There was something that could be with him 24 hours a day, 7 days a week and was alert.”

Providing in-home aftercare help

Jack, who leads Project Re-Engineered Discharge (RED), a Boston University Medical Center research group responsible for developing and testing strategies to improve the hospital discharge process, helped design an animated health information technology system named Louise that provides aftercare information to people recently discharged from the hospital.

Project RED studied the system and found twice as many people who used Louise preferred to receive their discharge information from the system rather than from a doctor or nurse, for reasons including Louise’s availability and accessibility. After returning home, Jack said, patients and their caregivers are able to sign onto the Louise technology and learn about medication, proper care and follow-up appointments, as well as easily connect with their clinicians.

“When patients leave the hospital, in our studies, when we ask them what they are most worried about, they say that, ‘I’m all by myself,'” Jack said. “When there are at-home technologies, where the patient can access the technology, the technology can access the clinicians, and the patients are super happy. Plus, they can get their problem fixed in a timely way, rather than waiting for an appointment.”

Identifying best practices for digital healthcare technology

Jack said thorough study of in-home digital healthcare technology is critical before sending it out into the public — a sentiment echoed by John Torous, co-director of the digital psychiatry division at Beth Israel Deaconess Medical Center in Boston.

Torous said it’s up to researchers and groups like AARP to find best practices for in-home digital healthcare technology to avoid potentially harmful consequences.

“I think together we can learn how to use this technology in a productive, ethical and meaningful way, and it will have a bigger role in healthcare,” Torous said.

Miller said the goal of AARP’s collaborations with companies like Pillo Health and Orbita is to “gain useful and impactful information that can be used to continue to improve the customer experience and help make these products as beneficial as possible.”

Along with Orbita and Pillo, AARP has partnered with digital health startups like Folia Health and One Medical Group.

“When considering which startups to work with, we are looking for mission-aligned companies who have transformational solutions and those we can work with to co-create ageless design solutions that could have meaningful impact in the lives of the 50-plus consumer,” Miller said.

Hortonworks cloud options grow via Google, Microsoft, IBM

Hadoop distribution provider Hortonworks is expanding technology partnerships with Google, Microsoft and IBM to broaden the options for users looking to deploy Hortonworks cloud systems.

Most notably, Hortonworks now supports the Google Cloud Storage (GCS) service, with the ability to run applications against data stored there. Cloud-based object stores like GCS have gained greater prominence, at times supplanting the Hadoop Distributed File System (HDFS) as a repository for Hadoop-based big data applications in the cloud.

For Google, the expanded deal announced June 18 furthers its efforts to close a gap with cloud platform market leaders Amazon Web Services and Microsoft. For Hortonworks, the move is part of its efforts to enable users to run big data workloads on multiple clouds, according to Ovum analyst Tony Baer.

Baer said that for many organizations — particularly ones that are a step below the size of the biggest enterprises — big data analytics will largely be done on the cloud going forward.

“For people just getting started, even with the work done by the distribution providers, Hadoop is a complicated platform with a lot of moving parts,” Baer said. “There’s a lot of knowledge needed just to set it up, and that is not a skill most organizations have.”

When moving big data workloads to the cloud, users often see a money-saving opportunity in cloud storage tools like GCS, the Amazon Simple Storage Service (S3) and Microsoft’s Azure Blob Storage. Such technologies may provide slower performance than HDFS, but Baer said that gap could close with improvements over time. Among the users of GCS now are Spotify, Coca-Cola, the Broad Institute and others.

Cold data play


Hortonworks CTO Scott Gnau said interest in cloud object stores doesn’t prefigure a complete move away from HDFS for Hortonworks cloud users.

“What we see is customers looking to take advantage of different options,” Gnau said. Running applications against data stored natively in GCS or S3 lets users “play the data where it lies without having to move it” to HDFS first, he noted. Object stores are also typically less expensive than keeping data in HDFS, according to Gnau.
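In practice, playing the data where it lies often just means pointing a job at a gs:// URI instead of an hdfs:// one. A minimal PySpark sketch follows; the bucket, path and field name are placeholders, and the cluster needs Google's Cloud Storage connector for Hadoop installed.

```python
# Read data in place from a GCS bucket instead of copying it into HDFS first.
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("gcs-read-demo").getOrCreate()

# Same DataFrame API as an hdfs:// path; only the URI scheme changes.
events = spark.read.json("gs://example-bucket/events/2018/06/*.json")
events.groupBy("event_type").count().show()
```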

However, users are likely to continue using HDFS for Hortonworks cloud applications that require high-performance and sophisticated data analysis, Gnau added. Object storage “has advantages, but it also has difficulties,” he said. “It’s not as performant as HDFS.”

What we see is customers looking to take advantage of different options.
Scott Gnau, CTO, Hortonworks

As a result, Gnau said he sees the best immediate role for cloud-based object storage in handling “colder data” — that is, data that isn’t an immediate part of an analytics workflow.

Sudhir Hasbe, director of product management for the Google Cloud Platform, said Hortonworks users can now decouple storage and compute by using GCS instead of HDFS. That could make it more cost-effective for on-premises HDFS users to use Hortonworks cloud systems for their big data workloads, he continued.

IBM, Microsoft clouds also in sight

The Google deal complements other Hortonworks cloud pacts with AWS, IBM and Microsoft. Coming on the first day of Hortonworks’ DataWorks Summit 2018 conference in San Jose, Calif., the addition of the GCS support was accompanied by updates to the alliances that the big data platform vendor has with IBM and Microsoft.

Video: Hortonworks and Microsoft execs discuss moves to the cloud.

Hortonworks said organizations can now run its Hortonworks Data Platform (HDP) software natively on the Microsoft Azure cloud, in addition to using the HDP-based Azure HDInsight managed service that Microsoft sells to customers. Hortonworks DataFlow and Hortonworks DataPlane Service, two related technologies offered by the Santa Clara, Calif., company, also are now available for native deployments on Azure.

Meanwhile, in a blog post, Rob Thomas, general manager of IBM Analytics, said IBM is adding a managed service on its cloud platform called IBM Hosted Analytics with Hortonworks, or IHAH. The new service combines HDP with IBM’s Db2 Big SQL query engine and Data Science Experience workbench platform, extending a relationship that began last year when IBM dropped its own Hadoop distribution and agreed to resell HDP instead.

In addition to the expanded cloud deals, Hortonworks detailed plans for an HDP 3.0 release that will let users put big data applications in Docker containers to help speed up deployments and make it easier to move processing workloads to different servers. Due out in the third quarter, HDP 3.0 also adds the ability to run deep learning applications on GPU-based systems, plus support for Apache Hive 3.0, an update of the open source SQL query engine and data warehouse environment that was released in May.

Hive 3.0 functions as a real-time database for analytics applications that require fast query response rates, Gnau said. “It really is a database now versus Hive historically being viewed as a SQL programming environment that ran on Hadoop.”

Senior executive editor Craig Stedman contributed to this story.

Microsoft expands commitment to military spouse community

Today in San Francisco, Microsoft Military Affairs will join our partners from LinkedIn to each share new commitments to the military spouse community.

Military spouses are an integral supporting force for members of our military, but face staggering 18 percent unemployment and 53 percent underemployment due to moves every two to three years, according to a 2016 study from Blue Star Families on the social cost of unemployment and underemployment of military spouses.

As part of our commitment to the military spouse community, Microsoft will launch a pilot program to provide spouses with technology skills training beginning in September.

Microsoft has successfully opened a technology career pipeline for transitioning service members and veterans via the Microsoft Software & Systems Academy (MSSA) program, which has expanded coast to coast and has a graduation rate of over 90 percent. We are excited to explore how to expand and tailor these opportunities to military spouses, who represent a diverse talent pool: adaptable, resilient, highly educated and ready to take on new and exciting opportunities to further their professional and personal goals.

The U.S. government projects information technology occupations will grow 12 percent from 2014 to 2024, faster than the average for all occupations. And because there are 500,000 open technology jobs annually, we know that career programs are needed to help close the technology skills gap.

“Microsoft is excited to work with technology leaders and other organizations committed to supporting military spouses, and to find avenues that lead to meaningful career opportunities for active duty military spouses,” said U.S. Marine Corps Major General (Ret.) Chris Cortez, Vice President of Microsoft Military Affairs.

LinkedIn also announced today that it is expanding its military and veterans program to include military spouses through a new partnership with the U.S. Department of Defense’s Spouse Education and Career Opportunities program. Beginning this July, LinkedIn will provide one year of LinkedIn Premium to every military spouse during each of their moves to new installations to facilitate their career transitions, and once again upon conclusion of military service. This will include free access to LinkedIn’s online library of more than 12,000 LinkedIn Learning courses, including its newly launched learning path designed to help military spouses succeed in flexible, freelance or remote-work opportunities.

The Microsoft Military Affairs team is working closely with military spouses and nonprofit organizations to understand firsthand the unique challenges this community faces as we build out and learn from our pilot program.

We are thrilled to begin our pilot program in the fall and to continue our support of military spouses and their community by providing the skills they need to enter technology careers.