Tag Archives: make

Netflix launches tool for monitoring AWS credentials

LAS VEGAS — A new open source tool looks to make monitoring AWS credentials easier and more effective for large organizations.

The tool, dubbed Trailblazer, was introduced during a session at Black Hat USA 2018 on Wednesday by William Bengtson, senior security engineer at Netflix, based in Los Gatos, Calif. During his session, Bengtson discussed how his security team took a different approach to reviewing AWS data in order to find signs of potentially compromised credentials.

Bengtson said Netflix’s methodology for monitoring AWS credentials was fairly simple and relied heavily on AWS’ own CloudTrail log monitoring tool. However, Netflix couldn’t rely solely on CloudTrail to effectively monitor credential activity; Bengtson said a different approach was required because of the sheer size of Netflix’s cloud environment, which is 100% AWS.

“At Netflix, we have hundreds of thousands of servers. They change constantly, and there are 4,000 or so deployments every day,” Bengtson told the audience. “I really wanted to know when a credential was being used outside of Netflix, not just AWS.”

That was crucial, Bengtson explained, because an unauthorized user could set up infrastructure within AWS, obtain a user’s AWS credentials and then log in using those credentials in order to “fly under the radar.”

However, monitoring credentials for usage outside of a specific corporate environment is difficult, he explained, because of the sheer volume of data regarding API calls. An organization with a cloud environment the size of Netflix’s could run into challenges with pagination for the data, as well as rate limiting for API calls — which AWS has put in place to prevent denial-of-service attacks.

“It can take up to an hour to describe a production environment due to our size,” he said.
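
For context, reading every event back out of CloudTrail at that scale means paginated API calls that AWS will throttle. A hedged boto3 sketch (not from the talk) of how that reading loop is typically written, with automatic backoff for rate limits:

```python
# pip install boto3 -- illustrative only; event fields follow the
# documented CloudTrail LookupEvents response shape.
import boto3
from botocore.config import Config

# "adaptive" retry mode backs off automatically when AWS throttles calls.
cloudtrail = boto3.client(
    "cloudtrail",
    config=Config(retries={"max_attempts": 10, "mode": "adaptive"}),
)

# The paginator hides the NextToken bookkeeping behind one loop.
paginator = cloudtrail.get_paginator("lookup_events")
for page in paginator.paginate(MaxResults=50):
    for event in page["Events"]:
        print(event["EventName"], event.get("Username", "-"))
```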

To get around those obstacles, Bengtson and his team crafted a new methodology that didn’t require machine learning or any complex technology, but rather a “strong but reasonable assumption” about a crucial piece of data.

“The first call wins,” he explained, referring to how the tool records the source IP address of the first API call made with a temporary AWS credential. “As we see the first use of that temporary [session] credential, we’re going to grab that IP address and log it.”

The methodology, which is built into the Trailblazer tool, collects the first API call IP address and other related AWS data, such as the instance ID and assumed role records. The tool, which doesn’t require prior knowledge of an organization’s IP allocation in AWS, can quickly determine whether the calls for those AWS credentials are coming from outside the organization’s environment.
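
Trailblazer itself is available on GitHub; purely to illustrate the “first call wins” idea described above, here is a minimal Python sketch (not the actual Trailblazer code) that tracks the first source IP seen for each temporary credential in already-parsed CloudTrail records:

```python
# Sketch of the core assumption: the first IP that uses a temporary
# credential is the trusted one; any later IP is worth alerting on.
first_ip_seen = {}  # temporary access key ID -> first source IP observed


def check_event(event: dict) -> None:
    key_id = event.get("userIdentity", {}).get("accessKeyId", "")
    source_ip = event.get("sourceIPAddress", "")
    # Temporary (STS session) credentials have access key IDs that
    # begin with the "ASIA" prefix.
    if not key_id.startswith("ASIA") or not source_ip:
        return
    if key_id not in first_ip_seen:
        # First call wins: record and trust this IP.
        first_ip_seen[key_id] = source_ip
    elif first_ip_seen[key_id] != source_ip:
        # Same credential, new IP: possible use outside the environment.
        print(f"ALERT: {key_id} first used from {first_ip_seen[key_id]}, "
              f"now calling from {source_ip}")
```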

“[Trailblazer] will enumerate all of your API calls in your environment and associate that log with what is actually logged in CloudTrail,” Bengtson said. “Not only are you seeing that it’s logged, you’re seeing what it’s logged as.”

Bengtson said the only requirement for using Trailblazer is a high level of familiarity with AWS — specifically how AssumeRole calls are logged. The tool is currently available on GitHub.

Vendors race to adopt Google Contact Center AI

Google has released a development platform that will make it easier for businesses to deploy virtual agents and other AI technologies in the contact center. The tech giant launched the product in partnership with several leading contact center vendors, including Cisco and Genesys. 

The Google Contact Center AI platform includes three main features: virtual agents, AI-powered assistance for human agents and contact center analytics. Google first released a toolkit for building conversational AI bots in November and updated the platform this week, with additional tools for contact centers.

The virtual agents can help resolve common customer inquiries using Google’s natural language processing platform, which recognizes voice and textual inputs. Genesys, for example, demonstrated how the chatbot could help a customer return ill-fitting shoes before passing the phone call to a human agent, who could help the customer order a new pair.

Google’s agent assistance system scans a company’s knowledge bases, such as FAQs and internal documents, to help agents answer customer questions faster. The analytics tool reviews chats and call recordings to identify customer trends, assisting in the training of live agents and the development of virtual agents.
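
The article doesn’t name the underlying toolkit, but Google’s conversational bot platform is generally identified as Dialogflow. As a hedged sketch of how a virtual agent resolves a typed customer inquiry, here is a minimal detect-intent call with the Python client; the project and session IDs are placeholders:

```python
# pip install google-cloud-dialogflow -- a sketch, not Google's
# Contact Center AI integration code.
from google.cloud import dialogflow


def detect_intent(project_id: str, session_id: str, text: str) -> str:
    client = dialogflow.SessionsClient()
    session = client.session_path(project_id, session_id)
    query_input = dialogflow.QueryInput(
        text=dialogflow.TextInput(text=text, language_code="en")
    )
    response = client.detect_intent(
        request={"session": session, "query_input": query_input}
    )
    # The bot's natural-language reply, e.g. return instructions.
    return response.query_result.fulfillment_text


print(detect_intent("my-gcp-project", "session-123",
                    "I want to return these shoes"))
```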

Vendors rush to adopt Google Contact Center AI

Numerous contact center vendors that directly compete with one another sent out strikingly similar press releases on Tuesday about their adoption of Google Contact Center AI. The Google platform is available through partners Cisco, Genesys, Mitel, Five9, RingCentral, Vonage, Twilio, Appian and Upwire.

“I don’t think I’ve ever heard of a launch like this, where almost every player — except Avaya — is announcing something with the same company,” said Jon Arnold, principal analyst of Toronto-based research and analysis firm J Arnold & Associates.

Avaya was noticeably absent from the list of partners. The company spent most of 2017 in bankruptcy court and was previously faulted by critics for failing to pivot to the cloud quickly enough. The company said at a conference earlier this year it was developing AI capabilities internally, said Irwin Lazar, an analyst at Nemertes Research, based in Mokena, Ill.

An Avaya spokesperson said its platforms integrated with a range of AI technologies from vendors, including Google, IBM, Amazon and Nuance. “Avaya does have a strong relationship with Google, and we continue to pursue opportunities for integration on top of what already exists today,” the spokesperson said.

Google made headlines last month with the release of Google Duplex, a conversational AI bot targeting the consumer market. The company demonstrated how the platform could pass as human during short phone conversations with a hair salon and restaurant. Google’s Contact Center AI was built on some of the same infrastructure, but it’s a separate platform, the company said.

“Google has been pretty quiet. They are not a contact center player. But as AI keeps moving along the curve, everyone is trying to figure out what to do with it. And Google is clearly one of the strongest players in AI, as is Amazon,” Arnold said.

Because it relies overwhelmingly on advertising revenue, Google doesn’t need its Contact Center AI to make a profit. Google will be able to use the data that flows through contact centers to improve its AI capabilities. That should help it compete against Amazon, which entered the contact center market last year with the release of Amazon Connect.

The contact center vendors now partnering with Google had already been racing to develop or acquire AI technologies on their own, and some highlighted how their own AI capabilities would complement Google’s offering. Genesys, for example, said its Blended AI platform — which combines chatbots, machine learning and analytics — would use predictive routing to transfer calls between Google-powered chatbots and live agents.  

“My sense with AI is that it will be difficult for vendors to develop capabilities on their own, given that few can match the computing power required for advanced AI that vendors like Amazon, Google and Microsoft can bring to the table,” Lazar said.

Healthcare APIs get a new trial run for Medicare claims

In the ongoing battle to make healthcare data ubiquitous, the U.S. Digital Service for the Department of Health and Human Services has developed a new API, Blue Button 2.0, aimed at sharing Medicare claims information.

Blue Button 2.0 is part of an API-first strategy within HHS’ Centers for Medicare and Medicaid Services, and it comes at a time when a number of major companies, including Apple, have embraced the potential of healthcare APIs. APIs are the building blocks of applications and make it easier for developers to create software that can easily share information in a standardized way. Like Apple’s Health Records API, Blue Button 2.0 is based on a widely accepted healthcare API standard known as Fast Healthcare Interoperability Resources, or FHIR.

Blue Button 2.0 is the API gateway to the claims data of 53 million Medicare beneficiaries, including comprehensive Part A, B and D data. “We’re starting to recognize that claims data has value in understanding the places a person has been in the healthcare ecosystem,” said Shannon Sartin, executive director of the U.S. Digital Service at HHS.

“But the problem is, how do you take a document that is mostly codes with very high-level information that’s not digestible and make it useful for a nonhealth-savvy individual? You want a third-party app to add value to that information,” Sartin said.

So, her team was asked to work on this problem. And out of their work, Blue Button 2.0 was born.
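
To give a flavor of what those developers work with, here is a hedged Python sketch of pulling a beneficiary’s claims from the Blue Button 2.0 sandbox as FHIR ExplanationOfBenefit resources. The base URL, token handling, and IDs are assumptions based on standard OAuth 2.0 and FHIR conventions, not details from the article:

```python
# pip install requests -- illustrative sketch only.
import requests

BASE = "https://sandbox.bluebutton.cms.gov/v1/fhir"  # assumed sandbox URL
TOKEN = "<oauth-access-token>"  # obtained through the OAuth 2.0 consent flow

resp = requests.get(
    f"{BASE}/ExplanationOfBenefit/",
    params={"patient": "<beneficiary-id>"},  # placeholder beneficiary ID
    headers={"Authorization": f"Bearer {TOKEN}"},
    timeout=30,
)
resp.raise_for_status()

# The response is a FHIR Bundle; each entry is one claim resource.
bundle = resp.json()
for entry in bundle.get("entry", []):
    resource = entry["resource"]
    print(resource["resourceType"], resource["id"])
```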

More than 500 developers have signed on

To date, over 500 developers are working with the new API to develop applications that bring claims data to consumers, providers, hospitals and, ultimately, into an EHR, Sartin said. But while there is a lot of interest, Sartin said this is just the first step when it comes to healthcare APIs.

“The government does not build products super well, and it does not do the marketing engagement necessary to get someone interested in using it,” she said. “We’re taking a different approach, acting as evangelists, and we’re spending time growing the community.”

And while a large number of developers are experimenting with Blue Button 2.0, Sartin said her group will vet them heavily, eventually narrowing the field to a much smaller number that will release applications, because of privacy concerns around the claims data.

Looking for a user-friendly approach

We’re … acting as evangelists, and we’re spending time growing the community.
Shannon Sartin, executive director of the U.S. Digital Service at HHS

In theory, the applications will make it easier for a Medicare consumer to let third parties access their claims information and then, in turn, make that data meaningful and actionable. But Arielle Trzcinski, senior analyst serving application development and delivery at Forrester Research, said she is concerned Blue Button 2.0 isn’t pushing the efforts around healthcare APIs far enough.

“Claims information is not the full picture,” she said. “If we’re truly making EHR records portable and something the consumer can own, you have to have beneficiaries download their medical information. That’s great, but how are they going to share it? What’s interesting about the Apple effort as a consumer is that you’re able to share that information with another provider. And it’s easy, because it’s all on your phone. I haven’t seen from Medicare yet how they might do it in the same user-friendly way.”

Sartin acknowledged Blue Button 2.0 takes aim at just a part of the bigger problem.

“My team is focused just on CMS and healthcare in a very narrow way. We recognize there are broader data and healthcare issues,” she said.

But when it comes to the world of healthcare APIs, it’s important to take that first step. And it’s also important to remember the complexity of the job ahead, something Sartin said her team — top-notch developers from private industry who chose government service to help — realized after they jumped into the world of healthcare APIs.

“We have engineers who’ve not worked in healthcare who thought the FHIR standard was overly complex,” she said. “But when you start to dig into the complexity of health data, you recognize sharing health data with each doctor means something different. This is not as seamless as with banks that can standardize on numbers. There, a one is a one. But in health terminology, a one can mean 10 different things. You can’t normalize it. Having an outside perspective forces the health community to question it all.”

For Sale – Breaking Up Old but Hi Spec (at the time) Windows PC

My very old but fully working Windows PC has been gathering dust, so I need to get rid of it to make space. Just wondered if any of the bits are of use to anyone? All prices negotiable, as I’m not too sure what these bits are worth.

Please assume no original packaging unless stated, but everything will be well packaged.

Postage to be discussed for each item.

Can take individual pics as needed.

Collection from London SE9 also possible, or from London Charing Cross on work days.

Case £50
Antec Dark Fleet DF85 (case only; no power supply or wires unless fitted originally)
Specifications

  • Dimensions (mm) 213 x 505 x 577 mm (W x D x H)
  • Material Steel
  • Colour Black
  • Weight 11kg
  • Front Panel Power and reset switches, 1 x USB 3, 3 x USB 2, Stereo, Mic
  • Drive Bays 3 x external 5.25in drive bays, 9 x internal 3.5in drive bays, 1 internal 2.5in drive bay, 1 external 2.5in drive bay
  • Form factor(s) ATX, micro-ATX
  • Cooling 3 x front 120mm fan mounts (fans included), 2 x rear 120mm fan mounts (fans included), 2 x 140mm roof fan mounts (fans included) plus extra Akasa Apache Black Fan on side
  • Graphics card dimensions supported 318mm long, dual slot, full height

Power £75
Corsair TX750W Power Supply

Motherboard £50

Manufacturer: ASUSTeK Computer Inc.
Model: Rampage Formula (LGA775), Rev 1.xx
Chipset: Intel X48, revision 01
Southbridge: Intel 82801IR (ICH9R), revision 02
BIOS: American Megatrends Inc., version 0902, date 28/04/2009

CPU £20
Intel Core 2 Quad Q9550 / Cores 4 / Threads 4
Name Intel Core 2 Quad Q9550
Code Name Yorkfield
Package Socket 775 LGA
Technology 45nm
Specification Intel Core2 Quad CPU Q9550 @ 2.83GHz
CPU only, no cooler

CPU Cooler £15
Arctic Freezer 7 Pro Rev. 2

RAM £60
4 sticks of Corsair Dominator CM2X2048-8500C5D, 1066MHz, 5-5-5-15, 2.1V, ver 1.1

GPU £50
NVIDIA GeForce GTX 560 (Palit Sonic Platinum)
Memory Type GDDR5
Physical Memory 1023 MB
Virtual Memory 1024 MB
Bus Width 64×4 (256 bit)
Shaders 384 unified
Palit GeForce GTX 560 Sonic Platinum Specs

Sound Card £30
ASUS Xonar HDAV 1.3 Deluxe True HDMI 1.3a 7.1Ch Soundcard, Dolby True HD/DTS Master Audio PCI-E
Just the card itself. I have the original box, but it’s not in the best condition.

Price and currency: various
Delivery: Delivery cost is not included
Payment method: PPG or BT
Location: London
Advertised elsewhere?: Not advertised elsewhere
Prefer goods collected?: I have no preference


Lilly strives to speed innovation with help from Microsoft 365 Enterprise – Microsoft 365 Blog


The nearly 40,000 employees of Eli Lilly and Company are on a mission to make medicines that help people live longer, healthier, and more active lives. But they know that developing new treatments for cancer, diabetes, and other debilitating diseases requires collaboration with the best minds working together to foster innovation.

That’s why Lilly takes a collaborative approach to discovering and developing new medicines—between lab researchers and the rest of the company—as well as with a global network of physicians, medical researchers, and healthcare organizations. Working together—creatively and efficiently—can help generate new ideas that fuel innovation. To bring together scientists across hundreds of locations and organizations and truly empower the workforce, Lilly selected Microsoft 365 Enterprise.

While Lilly is in the early stage of deployment, these cloud-based collaboration tools, including Microsoft Teams, are making an impact. Mike Meadows, vice president and chief technology officer at Lilly, says that the technology will allow for enhanced productivity and teamwork, while helping to protect IP:

“Collaboration tools like Microsoft Teams enhance our ability for researchers and other employees to work together in faster and more creative ways, advancing our promise to make life better through innovative medicines. Microsoft 365 helps us bring the best minds together while keeping data secure and addressing regulatory compliance requirements.”

Like enterprise customers across the globe, Lilly sees Microsoft 365 as a robust, intelligent productivity and collaboration solution that empowers employees to be creative and work together. And when deployment of Windows 10 is complete, employees across the company will advance a new culture of work where creative collaboration that sparks critical thinking and innovation happens anywhere, anytime.

At Microsoft, we’re humbled to play a role in helping Lilly make life better for people around the world.

—Ron Markezich


The Complete Guide to Azure Virtual Machines: Part 1

Azure Virtual Machines make an already hugely flexible technology in virtualization even more adaptable through remote hosting.

Virtual machines are part of Azure’s Infrastructure as a Service (IaaS) offering, which gives you the flexibility of virtualization without having to invest in the underlying infrastructure. In simpler terms, you pay Microsoft to run a virtual machine of your choosing in its Azure environment while it provides you access to the VM.

One of the biggest misconceptions I see in the workplace is that managing cloud infrastructure is the same as, or very similar to, managing on-premises infrastructure. THIS IS NOT TRUE. Cloud infrastructure is a whole new ball game. It can be a great tool in our back pockets for certain scenarios, but only if used correctly. This blog series will explain how you can determine whether a workload is suitable for an Azure VM and how to deploy it properly.

Why Use Azure Virtual Machines Over On-Premise Equipment?

One of the biggest features of the public cloud is its scalability. If you write an application and need to scale up resources dramatically for a few days, you can create a VM in Azure, install your application, run it there and turn it off when done. You only pay for what you use. If you haven’t already invested in your own physical environment, this is a very attractive alternative. The agility this gives software developers is on a whole new level, enabling companies to create applications more efficiently; being able to scale on demand is huge.

Should I Choose IaaS or PaaS?

When deploying workloads in Azure, it is important to determine whether an application or service should run on Platform as a Service (PaaS) or on a virtual machine (IaaS). For example, let’s say you are porting an application into Azure that runs on SQL. Do we want to build a virtual machine and install SQL, or do we want to leverage Azure’s PaaS services and use one of their SQL instances? There are many factors in deciding between PaaS and IaaS, but one of the biggest is how much control you require for your application to run effectively. Do you need to make a lot of changes to the registry, and do you require many tweaks within the SQL install? If so, the virtual machine route is the better fit.

How To Choose The Right Virtual Machine Type

In Azure, virtual machine resource specifications are cookie cutter. You don’t get to customize exactly how much CPU and memory you want; VMs come in a set of predefined sizes, and you have to make those resource templates work for your computing needs. Selecting the correct VM size is crucial in Azure, not only because of the performance implications for your applications but also because of the pricing. You don’t want to pay for a VM that is too large for your workloads.

Make sure you do your homework to determine which size is right for your needs. Also, pay close attention to I/O requirements. Storage is almost always the most common performance killer, so do your due diligence and make sure the VM meets your IOPS (input/output operations per second) requirements. For Windows licensing, Microsoft covers the license, and the Client Access License if you’re running a VM that needs CALs. For Linux VMs, the licensing differs per distribution.

Before we go and create a virtual machine in Azure, let’s go over one of the gotchas you might run into if you’re not aware. Since everything in Azure is “pay as you go,” if you’re not aware of the pricing at all times, you or your company may get a hefty bill from Microsoft. One common mistake with VMs is that if you don’t completely remove a VM, you can still be charged. Simply shutting down the VM will not stop the meter from running; you’re still reserving the hardware from Microsoft, so you’ll still be billed. And when you delete the VM, you will have to delete the managed disk separately as well. The VM itself is not the only cost that applies when running virtual machines.
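
To make that gotcha concrete, here is a hedged sketch of cleaning up with the azure-mgmt-compute Python SDK. The resource group, VM, and disk names are placeholders; the point is that deallocating (not merely powering off) stops compute billing, and the managed disk must be deleted separately:

```python
# pip install azure-identity azure-mgmt-compute -- a sketch, not a
# prescribed cleanup procedure.
from azure.identity import DefaultAzureCredential
from azure.mgmt.compute import ComputeManagementClient

compute = ComputeManagementClient(DefaultAzureCredential(), "<subscription-id>")

# Deallocate releases the reserved hardware, so the compute meter stops.
# A plain power-off inside the guest OS keeps the reservation and the bill.
compute.virtual_machines.begin_deallocate("LukeLabRG", "LukeLabVM1").result()

# Deleting the VM does not delete its managed disk; remove it separately
# or storage charges continue.
compute.virtual_machines.begin_delete("LukeLabRG", "LukeLabVM1").result()
compute.disks.begin_delete("LukeLabRG", "LukeLabVM1_OsDisk").result()
```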

Getting Started – Creating the Virtual Network

We will now demonstrate how to configure a Virtual Machine on Azure and connect to it. First, we need to create the virtual network so that the VM has a network to talk out on. Afterward, we will create the Network Security Group, which acts as the “firewall” for the VM, and then finally we will create the VM itself. To create the Virtual Network, log into the Azure Portal and select “Create a Resource”. Then click on Networking > Virtual Network:

Now we can specify the settings for our Virtual Network. First, we’ll give it a name; I’ll call mine “LukeLabVnet1”. I’ll leave the address space at the default here, but we could make it smaller if we chose to. Then we select our subscription. You can use multiple subscriptions for different purposes, like a Development subscription and a Production subscription. Resource groups are a way to manage and group your Azure resources for billing, monitoring, and access control purposes. We already have a resource group created for this VM and its components, so I will go ahead and select that; if we wanted, we could create a new one on the fly here. Then we fill in the location, which is East US for me. Next, we’ll give the subnet a name, since we can create more subnets on this virtual network later; I’ll call it “LukeLabSubnet”. I’ll leave the subnet’s address space at the default since we are just configuring one VM and setting up access to it. Once we are done, we will hit “Create”:
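
If you would rather script this step than click through the portal, here is a minimal sketch using the azure-mgmt-network Python SDK. The resource group name, region, and address prefixes are assumptions chosen to match the walkthrough:

```python
# pip install azure-identity azure-mgmt-network -- illustrative sketch.
from azure.identity import DefaultAzureCredential
from azure.mgmt.network import NetworkManagementClient

client = NetworkManagementClient(DefaultAzureCredential(), "<subscription-id>")

poller = client.virtual_networks.begin_create_or_update(
    "LukeLabRG",        # assumed resource group name
    "LukeLabVnet1",     # the virtual network from the walkthrough
    {
        "location": "eastus",
        "address_space": {"address_prefixes": ["10.0.0.0/16"]},
        "subnets": [{"name": "LukeLabSubnet",
                     "address_prefix": "10.0.0.0/24"}],
    },
)
vnet = poller.result()  # blocks until deployment completes
print(vnet.name, vnet.provisioning_state)
```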

Now, to get to our newly created Virtual Network, on the left-hand side of the portal we select “Virtual Networks” and click on the one we just deployed:

We can configure all of the settings for our Virtual Network here. However, to keep the demonstration simple, we will leave everything as it is for now:

Now that we have our virtual network in place, we will need to create our Network Security Group and then finally deploy our VM, which we will do in part 2 of this series. As you can see, there are a lot of components to learn when deploying VMs in Azure.

Comments/Feedback?

If you’re unsure about anything stated here let me know in the comments below and I’ll try to explain it better.

Have you tried Azure Virtual Machines? Let us know your verdict!

RADWIN and Microsoft announce strategic partnership to deliver innovative TV White Space solutions | Stories

The partnership will help make broadband more affordable and accessible for underserved and unserved customers in the rural U.S. and around the world

REDMOND, Wash. — July 2, 2018 — On Monday, RADWIN and Microsoft Corp. announced a new strategic partnership to address the rural broadband gap. RADWIN, a world leader in delivering high-performance broadband wireless access solutions, will be developing and introducing to the market TV White Space solutions to deliver broadband internet to unserved communities. Focused on introducing innovative technologies into the TV White Space market, the partnership will expand the TV White Space ecosystem, making broadband more affordable and accessible for customers in the rural U.S. and around the world. This partnership is part of Microsoft’s Airband Initiative, which aims to expand broadband coverage using a mixture of technologies including TV White Space.

Broadband is a vital part of 21st century infrastructure. Yet, only about half of the world’s population is connected to the internet. New cloud services and other technologies make broadband connectivity a necessity to starting and growing small businesses and taking advantage of advances in agriculture, telemedicine and education. According to findings by the Boston Consulting Group, a connectivity model that uses a combination of technologies, including TV White Space, can reduce the cost of extending broadband coverage in rural communities. TV White Space is an important part of the solution, creating broadband connections in UHF bands and enabling communication in challenging rural terrains and highly vegetated areas, all while protecting broadcasters and other licensees from harmful interference.

“The TV White Space radio ecosystem is rapidly growing, and we are excited to work with RADWIN to bring innovative technologies to market at a global scale,” said Paul Garnett, senior director of the Microsoft Airband Initiative. “Our partnership with RADWIN, a recognized global leader in fixed wireless broadband access, will help address the rural broadband gap for residents and businesses, enabling farmers, healthcare professionals, educators, business leaders and others to fully participate in the digital economy.”

“RADWIN is a leading provider of broadband access solutions, enabling service providers globally to connect unserved and underserved homes and businesses,” said Sharon Sher, RADWIN’s president and CEO. “We are therefore very excited to be Microsoft’s partner in leading a global effort to connect rural communities and grow the TVWS ecosystem in the U.S. and around the world. The addition of innovative TV White Space solutions to RADWIN’s portfolio, which complements our sub-6GHz and mmWave fixed wireless offering, would enable our service provider customers and partners to extend their footprint by connecting more remote subscribers in challenging deployment use cases, penetrating through terrain obstructions and vegetation, and therefore helping to close the digital divide.”

In addition to the partnerships with companies like RADWIN, Microsoft’s Airband Initiative invests in partnerships with internet service providers (ISPs) and other telecommunications companies, introduces innovative solutions for rural connectivity, and provides digital skills training for people in newly connected communities. RADWIN and Microsoft will be introducing the innovative TV White Space solutions to these Airband Initiative partners, as well as to the global telecommunications industry, during the second half of 2019.

About RADWIN

RADWIN is a leading provider of broadband wireless solutions. Deployed in over 150 countries, RADWIN’s solutions power applications including fixed wireless access, backhaul, private network connectivity, video surveillance transmission as well as delivering broadband on the move for trains, vehicles and vessels. RADWIN’s solutions are adopted and deployed by tier 1 service providers globally as well as by large corporations.

About Microsoft

Microsoft (Nasdaq “MSFT” @microsoft) enables digital transformation for the era of an intelligent cloud and an intelligent edge. Its mission is to empower every person and every organization on the planet to achieve more.

For more information, press only:

Microsoft Media Relations, WE Communications for Microsoft, +1 (425) 638-7777, rrt@we-worldwide.com

RADWIN Media Contact, Tammy Levy, Marketing Communications Manager, +972-3-766-2916, pr@radwin.com

Note to editors: For more information, news and perspectives from Microsoft, please visit the Microsoft News Center at http://news.microsoft.com. Web links, telephone numbers and titles were correct at time of publication, but may have changed. For additional assistance, journalists and analysts may contact Microsoft’s Rapid Response Team or other appropriate contacts listed at http://news.microsoft.com/microsoft-public-relations-contacts.

How to muster the troops

A digital transformation journey, make no mistake, is no walk in the park. It involves major course corrections to technology, to business processes, to how people do their jobs and how they think about their roles. So, how does a company make something so radical as digital transformation part of its DNA?

Gail Evans, who was promoted in June from global CIO at Mercer to the consulting firm’s global chief digital officer, believes an important first step is getting people to see what’s in it for them, “because once you see the value, you’re all in.”

In this video recorded in May at the MIT Sloan CIO Symposium, then-CIO Evans provided some insight into how she musters the troops at Mercer, explaining that a digital transformation journey is, by nature, long and iterative, requiring people to see value all along the way.

Editor’s note: The following was edited for clarity and brevity.

What can companies do to get started on a digital transformation journey?

Gail Evans: Actually, I think there are a couple of things. I think digital transformation, at its core, is the people. At the very core of any transformation, it is about how do you inspire a team to align to this new era — this new era of different tools, different technologies that can be applied in many different, creative ways to create new business models or to drive efficiencies in your organization.

So, I think the leaders in the enterprise are ones who understand the dynamics of taking your core and moving it up the food chain. Where does it start? I think it starts with creating a beachhead, creating a platform of digital, and then allowing that to grow and swell with training and opportunities, webinars, blogs so that it becomes a part of a company’s DNA. Because I believe digital isn’t a thing — it’s a new way of doing things to create value through the application of technology and data.

Which departments at Mercer are in the vanguard of digital transformation? Which are the laggards?

Evans: One would argue that marketing is already digital, right? I mean, they are already using digital technologies to drive personalized experiences on the web and have been doing that for many years. I would say that it starts in, probably, technology. Technology will embrace it, and also it needs to be infused into the business leaders.

I think the laggards are typically … I guess I wouldn’t necessarily call them ‘laggards.’ I think I would refer to them as not yet seeing the value of digital, because once you see the value, you’re all in.

Pockets of resistance

[Digital transformation is] humans plus technology and new business models. That is what digital transformation is all about and it’s fun!
Gail Evans, global chief digital officer, Mercer

Evans: There are teams or pockets of folks who have done things the same way for a long time and there is a resistance there. It’s kind of the, ‘If it ain’t broke, don’t fix it.’ Those are pockets, but you’d find those pockets in every transformation, whether it’s digital or [moving into the] information age, whatever — you’ll find pockets of people who are not ready to go.

And so, I think there are pockets of people in our core legacy who are holding onto their technology of choice and may not have up-skilled themselves, so they are holding on and they are resisting.

And then there are business folks who have been used to one-to-one relationships and built their whole career — a very successful career — with those one-to-one relationships. And now digital is coming from a different place where some of what you might have thought was your IP in value is now in algorithms. What will you do differently and how do you manage those dynamics differently?

I think there’s education [that needs to happen] because I think it’s humans plus technology, it’s not just technology; it’s humans plus technology and new business models. That is what digital transformation is all about and it’s fun! It is a new way to just have fun. It will be something else two, three, five years from now.

Speaking to ‘hearts and minds’

What strategies do you have for getting people to sign on for that ‘fun’ digital transformation journey?

Evans: At Mercer, what I’ve done was, first, you have to create, I think, a very strong digital strategy that is not just textbook strategy, but one that speaks to the hearts and minds from the executive team down to the person who’s coding, so that they can relate to it and become a part of it. Many people wonder, ‘What’s in it for me? Yeah, I get that technology stuff, but what is in it for me?’ [Showing] what is in it for them and for the business, bringing that strategy together and having proof points along the way [is important].

It’s not a big bang approach; it’s really very agile and iterative. And so, as you iterate and show value, people will become more open to change as you train them. So, build a strategy and inspire your team, inspire your executive leadership team, because that’s where all the money is. You need the money, so they need to believe in the digital transformation [journey] and the revenue aspect and the stakeholder value that it would bring.

Basically, create a strong vision that applies to the team, create a strategy that is based on efficiencies and revenue and also create what many call a bimodal [IT approach] because you need to continue to drive the core legacy systems and optimize. They’re still the bread and butter of the company. So, you have to find a strategy that allows both to grow.


MongoDB 4.0, Stitch aim to broaden use of NoSQL database

MongoDB Inc. is releasing several technologies designed to make its namesake NoSQL database a viable option for more enterprise applications, led by a MongoDB 4.0 update with expanded support for the ACID transactions that are a hallmark of mainstream relational databases.

Beyond MongoDB 4.0, the company, at its MongoDB World user conference in New York, also launched a serverless platform called Stitch that’s meant to streamline application development, initially for use with the MongoDB Atlas hosted database service in the cloud.

In addition, MongoDB made a mobile version of the database available for beta testing and enabled Atlas users to distribute data to different geographic areas globally for faster performance and regulatory compliance.

While MongoDB is one of the most widely used NoSQL technologies, the open source document database still has a tiny presence compared to relational behemoths like Oracle Database and Microsoft SQL Server. MongoDB, which went public in October 2017, reported total revenue of just $154.5 million for its fiscal year that ended in January — amounting to a small piece of the overall database market.

But MongoDB 4.0’s support for ACID transactions across multiple JSON documents could make it a stronger alternative to relational databases, according to Stephen O’Grady, an analyst at technology research and consulting firm RedMonk in Portland, Maine.

The ACID properties — atomicity, consistency, isolation and durability — ensure that database transactions are processed accurately and reliably. Previously, MongoDB only offered a form of such guarantees at the individual document level. MongoDB 4.0, which has been in beta testing since February, supports multi-document ACID transactions — a must-have requirement for many enterprise users with transactional workloads to run, O’Grady said.

“Particularly in financial shops, if you can’t give me an ACID guarantee, that’s just a non-starter,” he said.

O’Grady said he doesn’t expect companies to replace the back-end relational databases that run their ERP systems with MongoDB, but he added that the document database is now a more feasible option for users who are looking to take advantage of the increased data flexibility and lower costs offered by NoSQL software in other types of transactional applications.
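
For a sense of what the new guarantee looks like in code, here is a minimal PyMongo sketch of a multi-document transaction; the connection string and collection names are placeholders, and transactions require a MongoDB 4.0+ replica set:

```python
# pip install pymongo -- a sketch of the multi-document ACID support
# described above, not any particular user's code.
from pymongo import MongoClient

client = MongoClient("mongodb://localhost:27017/?replicaSet=rs0")
db = client.shop

with client.start_session() as session:
    with session.start_transaction():
        # Both writes commit together or abort together.
        db.orders.insert_one({"item": "abc", "qty": 1}, session=session)
        db.inventory.update_one(
            {"sku": "abc"}, {"$inc": {"qty": -1}}, session=session
        )
```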

[Figure: MongoDB’s new product offerings, at a glance]

Moving from Oracle to MongoDB

That’s the case at Acxiom Corp., which collects and analyzes customer data to help companies target their online marketing efforts to web users.

Acxiom already converted two Oracle-based systems to MongoDB: a metadata repository three years ago, and a real-time operational data store (ODS) that was switched over in January. And the Conway, Ark., company wants to move more data processing work to MongoDB in the future, said Chris Lanaux, vice president of its product and engineering group.

Oracle and other relational databases are much more expensive to run and aren’t as cloud-friendly as MongoDB is, Lanaux said.

When you’re moving 90 miles per hour, it’s helpful to have guaranteed consistency. Now we don’t have to worry about that anymore.
John Riewerts, senior director of engineering, Acxiom Corp.

John Riewerts, senior director of engineering on Lanaux’s team, added that Amazon Web Services and other cloud platform providers each offer their own flavors of relational databases. With MongoDB, “it’s just a flip of a switch for us to decide which cloud platform to put it on,” he said.

The ACID transactions support in MongoDB 4.0 is a big step forward for the NoSQL database, Riewerts said. Acxiom writes transactions to multiple documents in both the metadata system and the ODS; currently, it uses workarounds to make sure that all of the data gets updated properly, but that isn’t optimal, according to Riewerts.

“When you’re moving 90 miles per hour, it’s helpful to have guaranteed consistency,” he said. “Now we don’t have to worry about that anymore.”

Acxiom also was an early user of the MongoDB Stitch backend-as-a-service platform, which was released for beta testing a year ago. Stitch gives developers an API that connects to MongoDB at the back end, plus built-in capabilities for creating JavaScript functions, integrating with other cloud services and setting triggers to automatically invoke real-time actions when data is updated.

Scott Jones, a principal architect at Acxiom, said the serverless technology enabled two developers in the product and engineering group to deploy the ODS on the MongoDB Atlas cloud service without having to wait for the company’s IT department to set up the system.

“We’re not dealing with anything really but the business logic of what we’re trying to build,” he noted.

More still needed from MongoDB

Lanaux said MongoDB still has to deliver some additional functionality before Acxiom can move other applications to the NoSQL database. For example, improvements to a connector that links MongoDB to SQL-based BI and analytics tools could pave the way for some data analytics jobs to be shifted.

“But we’re betting on [MongoDB],” he said. “Thus far, they’ve checked every box that they’ve promised us.”

Ovum analyst Tony Baer said MongoDB also needs to stay focused on competing against its primary document database rivals, including DataStax Enterprise and Amazon DynamoDB, as well as Microsoft’s Azure Cosmos DB multimodel database.

Particularly in the cloud, DynamoDB and Azure Cosmos DB “are going to challenge them,” Baer said, noting that Amazon and Microsoft can bill their products as the default NoSQL offerings for their cloud platforms. Stitch may help counter that, though, by keeping MongoDB “true to its roots as a developer-friendly database,” he added.

MongoDB 4.0 lists for $14,990 per server. MongoDB Stitch users will be charged 50 cents for each GB of data transferred between Stitch and their front-end applications, as well as back-end services other than Atlas. They’ll also pay for using compute resources at a rate of $0.000025 per GB-second, which is calculated by multiplying the execution time of each processing request by the amount of memory that’s consumed.
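
As a worked example of that compute rate (all traffic numbers below are hypothetical):

```python
# Hypothetical monthly Stitch bill at the listed rates.
data_transfer_gb = 20        # GB moved between Stitch and the app (assumed)
requests = 1_000_000         # processing requests per month (assumed)
avg_exec_seconds = 0.2       # average execution time per request (assumed)
avg_memory_gb = 0.5          # average memory consumed per request (assumed)

transfer_cost = data_transfer_gb * 0.50                    # $0.50 per GB
gb_seconds = requests * avg_exec_seconds * avg_memory_gb   # 100,000 GB-s
compute_cost = gb_seconds * 0.000025                       # $0.000025/GB-s

print(f"transfer ${transfer_cost:.2f} + compute ${compute_cost:.2f} "
      f"= ${transfer_cost + compute_cost:.2f}")
# transfer $10.00 + compute $2.50 = $12.50
```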