
Hackers use ATM jackpotting technique to steal $1M in US

A group of hackers stole over $1 million through ATM jackpotting in the United States.

The hacking group, believed to be an international cybercrime gang, used a technique seen in other countries over the past few years to get ATMs to rapidly spit out cash on demand. Called “jackpotting” because the cash shoots out of the machine the way winnings do on a slot machine, the attack requires the hackers to have physical access to the ATM. Once they have that access, the hackers can install malware or replace the hard drive with an infected one and take control of the system.

ATM jackpotting attacks have happened in other parts of the world — including Central America, Europe and Asia — for several years, but now the attacks have made their way to America, according to a warning sent out to financial organizations by the U.S. Secret Service.

The confidential Secret Service alert, which investigative cybersecurity journalist Brian Krebs reported on, said that ATMs running Windows XP were at the greatest risk of being jackpotted and the hackers were targeting ATMs located in pharmacies, big box retailers and drive-thrus. The Secret Service recommended that ATM operators upgrade to Windows 7 to minimize the risk.

According to Krebs, the Secret Service alert explained that once the hackers have physical access to an ATM, they use an endoscope — an instrument typically used in medicine — to locate where they need to plug a cord into the inside of the cash machine to sync their laptop with the ATM.

The attackers then use an advanced strain of malware called Ploutus.D, which was first reported to have been used in jackpotting attacks in 2013 in Mexico.


How ATM jackpotting works

The hackers reportedly disguise themselves as ATM maintenance crews to gain access to the machines without raising suspicion. Once the malware has been installed, the compromised ATM will appear to be out of order to potential users. Then one attacker can go up to the machine while remote accomplices trigger the malicious program, and the hacker who appears to be an ordinary ATM user receives the outpouring of cash. The Secret Service report said that in an average Ploutus.D attack, money is dispensed continuously at a rate of 40 bills every 23 seconds until the machine is totally empty.
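The dispense rate cited in the Secret Service report makes it easy to estimate how quickly a machine is drained. A minimal sketch (the 2,000-bill cash load is a hypothetical figure, not from the alert):

```python
import math

# Dispense rate from the Secret Service report on Ploutus.D attacks:
# 40 bills every 23 seconds.
BILLS_PER_BURST = 40
SECONDS_PER_BURST = 23

def time_to_empty(total_bills: int) -> int:
    """Return seconds needed to dispense `total_bills` at the reported rate."""
    bursts = math.ceil(total_bills / BILLS_PER_BURST)
    return bursts * SECONDS_PER_BURST

# A hypothetical cash load of 2,000 bills (actual cassette sizes vary by model):
seconds = time_to_empty(2000)
print(f"{seconds / 60:.1f} minutes")  # prints "19.2 minutes"
```

At that pace, even a well-stocked machine is emptied in well under half an hour, which is why the attackers can afford a second visit disguised as a maintenance crew.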

After they’ve emptied the ATM, the hackers disguised as the maintenance crew come back and remove their tools to return the ATM to normal operations — without any available cash.

In his blog post about the recent wave of ATM jackpotting attacks, Krebs noted that the hacking group has been targeting Diebold Nixdorf ATMs, which are vulnerable to the Ploutus.D malware. Specifically, Secret Service warned that the attacks have focused on the Opteva 500 and 700 series from Diebold.

Krebs also said the Secret Service had evidence that further attacks were being planned across the country.

Diebold issued a warning about the attacks and suggested countermeasures against ATM jackpotting, including limiting physical access to the ATM, making sure the machines' firmware is up to date with the latest security updates, and monitoring the physical activity of the machines. Without physical access, ATM jackpotting is not possible.

In other news

  • A fitness tracking app accidentally exposed the locations of military bases around the world. Strava, an app that logs walking, running and other movements, published an interactive map with over 13 trillion GPS points from its users a few months ago. The map has since been used to confirm the locations of military bases, which show extra activity along specific routes in otherwise remote areas; these are believed to be jogging and even patrol routes on the bases. Nathan Ruser, an analyst at the Institute for United Conflict Analysts, noticed the data last week, and Twitter users have since posted now-confirmed locations of military bases. The data exists because military personnel didn’t turn off their fitness trackers while on base, despite Strava’s customizable privacy settings.
  • Google Cloud has teamed up with enterprise mobility management company MobileIron to build a new cloud service. The companies announced that they will combine Google Cloud’s Orbitera commerce platform and MobileIron’s enterprise mobility management and app distribution platform. The enterprise applications and services portal is expected to be released later in 2018 and will mostly be built on top of the security assertion markup language standard. The service will enable resellers, enterprises and others to buy cloud services and distribute them to customers and employees. It will include customized service bundles, customized branding, unified billing, secure cloud access, and usage analytics, according to Google. “We hope this collaboration simplifies and streamlines enterprise application management for businesses, and helps them unlock additional value for their employees and customers,” the companies said in a blog post announcing the joint effort.
  • Researchers discovered that Oracle Micros point-of-sale (POS) systems have been breached. ERPScan researchers published details of the vulnerability, which affects Oracle's Micros POS terminals and enables an attacker to read any file and receive information from the devices without authentication. The vulnerability was discovered in September 2017 by Dmitry Chastuhin, security researcher at ERPScan, and was fixed and disclosed this month. “[The flaw is] a directory traversal vulnerability in Oracle MICROS EGateway Application Service,” ERPScan explains in its blog post. “In case an insider has access to the vulnerable URL, he or she can pilfer numerous files from the MICROS workstation including services logs and read files like SimphonyInstall.xml or Dbconfix.xml that contain usernames and encrypted passwords to connect to DB, get information about ServiceHost, etc.” This means an attacker can run a brute-force login attack against the POS devices to gain full access. Micros is used on more than 330,000 cash registers across 180 countries.
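Directory traversal, the class of flaw ERPScan describes, arises when a server joins attacker-controlled input onto a base directory without checking the result. A generic illustration of the bug and its fix, not Oracle's actual code:

```python
# A generic sketch of the directory-traversal flaw class described above,
# not Oracle's code. The vulnerable handler joins user input onto a base
# directory; the safe version resolves the path and checks containment.
from pathlib import Path

BASE = Path("/srv/app/public").resolve()

def read_file_vulnerable(user_path: str) -> bytes:
    # BAD: a request for "../../etc/passwd" escapes the base directory.
    return (BASE / user_path).read_bytes()

def read_file_safe(user_path: str) -> bytes:
    # GOOD: resolve ".." and symlinks first, then verify the result
    # still lives under BASE before touching the filesystem.
    target = (BASE / user_path).resolve()
    if not target.is_relative_to(BASE):  # Python 3.9+
        raise PermissionError(f"traversal attempt: {user_path}")
    return target.read_bytes()
```

The containment check runs before any file is opened, so a request like `../../etc/passwd` is rejected outright rather than served.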

What is happening with AI in cybersecurity?

Jon Oltsik, an analyst with Enterprise Strategy Group in Milford, Mass., wrote about the growing role of AI in cybersecurity. Two recent announcements sparked his interest.

The first was by Palo Alto Networks, which rolled out Magnifier, a behavioral analytics system. Second, Alphabet deployed Chronicle, a cybersecurity intelligence platform. Both rely on AI in cybersecurity and machine learning to sort through massive amounts of data. Vendors are innovating to bring AI in cybersecurity to the market, and ESG sees growing demand for these forms of advanced analytics.

Twelve percent of enterprises have already deployed AI in cybersecurity. ESG research found 29% of respondents want to accelerate incident detection, while similar numbers demand faster incident response or the ability to better identify and communicate risk to the business. An additional 22% want AI cybersecurity systems to improve situational awareness.

Some AI applications work on a stand-alone basis, often tightly coupled with security information and event management or endpoint detection and response; in other cases, machine learning is applied as a helper app. This is true of Bay Dynamics’ partnership with Symantec, applying Bay’s AI engine to Symantec data loss prevention.

Oltsik cautioned that most chief information security officers (CISOs) don’t understand AI algorithms and data science, so vendors will need to focus on what they can offer to enhance security. “In the future, AI could be a cybersecurity game-changer, and CISOs should be open to this possibility. In the meantime, don’t expect many organizations to throw the cybersecurity baby out with the AI bath water,” Oltsik said.

Read more of Oltsik’s ideas about AI in cybersecurity.

Simplify networks for improved security and performance

Russ White, blogging in Rule 11 Tech, borrowed a quote from a fellow blogger. “The problem is that once you give a monkey a club, he is going to hit you with it if you try to take it away from him.”

In this analogy, the club is software intended to simplify the work of a network engineer. But in reality, White said, making things easier can also create a new attack surface that cybercriminals can exploit.

To that end, White recommended removing unnecessary components and code to reduce the attack surface of a network. Routing protocols, quality-of-service controls and transport protocols can all be trimmed back, along with some virtual networks and overlays.

In addition to beefing up security, resilience is another key consideration, White said. When engineers think of network failure, their first thoughts include bugs in the code, failed connectors and faulty hardware. In reality, however, White said most failures stem from misconfiguration and user error.

“Giving the operator too many knobs to solve a single problem is the equivalent of giving the monkey a club. Simplicity in network design has many advantages — including giving the monkey a smaller club,” he said.

Explore more from White about network simplicity.

BGP in data centers using EVPN

Ivan Pepelnjak, writing in ipSpace, focused on running Ethernet VPN, or EVPN, in a single data center fabric with either VXLAN or MPLS encapsulation. He contrasts this model with running EVPN between data center fabrics, where most implementations require domain isolation at the fabric edge.

EVPN is a Border Gateway Protocol address family that can run over either external BGP (EBGP) or internal BGP (IBGP) sessions. Within a single data center fabric, engineers can use either approach to build the EVPN infrastructure, Pepelnjak said.

He cautioned, however, that spine switches shouldn’t be involved in intra-fabric customer traffic forwarding. The BGP next-hop in an EVPN update can’t be changed on the path between ingress and egress switch, he said. Instead, the BGP next-hop must always point to the egress fabric edge switch.

To exchange EVPN updates across EBGP sessions within a data center fabric, the implementation needs to support functionality similar to MPLS VPN. Pepelnjak added that many vendors have not fully integrated EVPN, and users often run into issues that can require numerous configuration changes.

Pepelnjak recommended avoiding vendor designs that run EBGP between leaf-and-spine switches with IBGP sessions for EVPN on top of the intra-fabric EBGP. If engineers are stuck with an inflexible vendor, it may be best to use an Interior Gateway Protocol as the routing protocol.

Dig deeper into Pepelnjak’s ideas on EVPN.

Cybersecurity skills shortage continues to worsen

Jon Oltsik, an analyst at Enterprise Strategy Group in Milford, Mass., said the global cybersecurity skills shortage is bad and getting worse. According to Oltsik, skills shortages among various networking disciplines have not eased — and the cybersecurity shortage is particularly acute — citing ESG’s annual survey on the state of IT. For instance, in 2014, 23% of respondents said that their organization faced a problematic shortage of cybersecurity skills. In the most recent survey, which polled more than 620 IT and cybersecurity professionals, 51% said they faced a cybersecurity skills shortage. The data aligns with the results of an ESG-ISSA survey in 2017 that found 70% of cybersecurity professionals reporting their organizations were affected by the skills shortage — resulting in increased workloads and little time for planning.

“I can only conclude that the cybersecurity skills shortage is getting worse,” Oltsik said. “Given the dangerous threat landscape and a relentless push toward digital transformation, this means that the cybersecurity skills shortage represents an existential threat to developed nations that rely on technology as the backbone of their economy.”

Chief information security officers (CISOs), Oltsik said, need to consider the implications of the cybersecurity skills shortage. Smart leaders are doing their best to cope by consolidating systems, such as integrated security operations and analytics platform architecture, and adopting artificial intelligence and machine learning. In other cases, CISOs automate processes, adopt a portfolio management approach and increase staff compensation, training and mentorship to improve retention.

Dig deeper into Oltsik’s ideas on the cybersecurity skills shortage.

Building up edge computing power

Erik Heidt, an analyst at Gartner, spent part of 2017 discussing edge computing challenges with clients as they worked to improve computational power for IoT projects. Heidt said a variety of factors drive compute to the edge (and in some cases, away), including availability, data protection, cycle times and data stream filtering. In some cases, computing capability is added directly to an IoT endpoint. But in many situations, such as data protection, it may make more sense to host an IoT platform in an on-premises location or private data center.

Yet the private approach poses challenges, Heidt said, including licensing costs, capacity issues and hidden costs from IoT platform providers that limit users to certified hardware. Heidt recommends purchasing teams look carefully at what functions are being offered by vendors, as well as considering data sanitization strategies to enable more use of the cloud. “There are problems that can only be solved by moving compute, storage and analytical capabilities close to or into the IoT endpoint,” Heidt said.

Read more of Heidt’s thoughts on the shift to the edge.

Meltdown has parallels in networking

Ivan Pepelnjak, writing in ipSpace, responded to a reader’s question about how hardware issues become software vulnerabilities in the wake of the Meltdown vulnerability. According to Pepelnjak, there has always been privilege-level separation between the kernel and user space, with the kernel mapped into the high end of each process's address space. In more recent CPUs, however, executing even a single instruction often involves a pipeline of dozens of operations running in parallel, which opens the window an attack like Meltdown can exploit.

In these situations, the kernel-space permission test fails once the access is checked against the access control list (ACL), but by then other parts of the CPU have already executed the instructions that fetch the memory location.

Parallelized execution isn’t unique to CPU vendors. Pepelnjak said at least one hardware vendor created a version of IPv6 neighbor discovery that suffers from the same class of vulnerability. In response, operating system vendors are rolling out patches that unmap the kernel from user-space page tables. This approach prevents the exploit but removes the kernel's direct access to user space when it is needed; as a result, the kernel must switch virtual-to-physical page tables on every transition. Every single system call, even reading a byte from a file, means the kernel page tables need to be mapped in and unmapped again.
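The race Pepelnjak describes can be modeled in a few lines. This is a toy simulation of the out-of-order hazard, not an actual exploit: the "CPU" performs the load and the ACL check as separate pipeline steps, and the load's side effect (a cache fill) lands before the failed check can squash it.

```python
# Toy model of the Meltdown-style hazard: a simulation, not an exploit.
memory = {0x1000: 42}          # a "kernel" address and its secret value
acl = {0x1000: "kernel-only"}  # user code may not read this address
cache = set()                  # observable microarchitectural state

def speculative_read(addr: int, privilege: str) -> int:
    # Step 1: the load unit fetches the value speculatively...
    value = memory[addr]
    cache.add(addr)            # ...leaving a trace in the cache.
    # Step 2: the permission check completes and delivers a fault,
    # discarding the architectural result -- but not the cache fill.
    if acl.get(addr) == "kernel-only" and privilege != "kernel":
        raise PermissionError("fault delivered, value squashed")
    return value

try:
    speculative_read(0x1000, privilege="user")
except PermissionError:
    pass
print(0x1000 in cache)  # prints "True": the side effect survives the fault
```

The attacker never sees the value directly; the leak comes from measuring which addresses are now cached, which is exactly why unmapping the kernel from user-space page tables closes the hole.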

Explore more of Pepelnjak’s thoughts on network hardware vulnerabilities.

Campus network architecture strategies to blossom in 2018

Bob Laliberte, an analyst with Enterprise Strategy Group in Milford, Mass., said even though data center networking is slowing down in the face of cloud and edge computing, local and campus network architecture strategies are growing in importance as the new year unfolds.

After a long period of data center consolidation, demand for reduced latency — spurred by the growth of the internet of things (IoT) — and the evolution of such concepts as autonomous vehicles are driving a need for robust local networks.

At the same time, organizations have moved to full adoption of the cloud for roles beyond customer relationship management, and many have a cloud-first policy. As a result, campus network architecture strategies need to allow companies to use multiple clouds to control costs. In addition, good network connectivity is essential to permit access to the cloud on a continuous basis.

Campus network architecture plans must also accommodate Wi-Fi to guarantee user experience and to enable IoT support. The emergence of 5G will also continue to expand wireless capabilities.

Intent-based networks, meanwhile, will become a tool for abstraction and the introduction of automated tasks. “The network is going to have to be an enabler, not an anchor,” with greater abstraction, automation and awareness, Laliberte said.

Laliberte said he expects intent-based networks to be deployed in phases, in specific domains of the network, or to improve verification and insights. “Don’t expect your network admins to have Alexa architecting and building out your network,” he said, although he added that systems modeled after Alexa will become interfaces for network systems.

Explore more of Laliberte’s thoughts on networking.

BGP route selection and intent-based networking

Ivan Pepelnjak, writing in ipSpace, said pundits who predict the demise of Border Gateway Protocol (BGP) at the hands of new SDN approaches often praise the concept of intent-based networking.

Yet, the methodologies behind intent-based networks fail when it comes to BGP route selection, he said. Routing protocols were, in fact, an early approach to the intent-based idea, although many marketers now selling intent-based systems are criticizing those very same protocols, Pepelnjak said. Without changing the route algorithm, the only option is for users to tweak the intent and hope for better results.

To deal with the challenges of BGP route selection, one option might involve a centralized controller with decentralized local versions of the software for fallback in case the controller fails. Yet, few would want to adopt that approach, Pepelnjak said, calling such a strategy “messy” and difficult to get right. Route selection is now being burdened with intent-driven considerations, such as weights, preferences and communities.

“In my biased view (because I don’t believe in fairy tales and magic), BGP is a pretty obvious lesson in what happens when you try to solve vague business rules with [an] intent-driven approach instead of writing your own code that does what you want to be done,” Pepelnjak wrote. “It will be great fun to watch how the next generation of intent-based solutions will fare. I haven’t seen anyone beating laws of physics or RFC 1925 Rule 11 yet,” he added.  

Dig deeper into Pepelnjak’s ideas about BGP route selection and intent-based networking.

Greater hybridization of data centers on the horizon

Chris Drake, an analyst with Current Analysis in Sterling, Va., said rising enterprise demand for hybrid cloud systems will fuel partnerships between hyperscale public cloud providers and traditional vendors. New alliances, such as the one between Google and Cisco, join existing cloud-vendor partnerships like those between Amazon and VMware and between Microsoft and NetApp, Drake said, and more are in the offing.

Moving and managing workloads across hybrid IT environments will be a key point of competition between providers, Drake said, perhaps including greater management capabilities to oversee diverse cloud systems.

Drake said he also expects a proliferation of strategies aimed at edge computing. The appearance of micro data centers and converged edge systems may decentralize large data centers. He said he also anticipates greater integration with machine learning and artificial intelligence. However, as a result of legacy technologies, actual deployments of these technologies will remain gradual.

Read more of Drake’s assessment of 2018 data center trends.

NVMe flash storage doesn’t mean tape and disk are dying

Not long ago, a major hardware vendor invited me to participate in a group chat where we would explore the case for flash storage and software-defined storage. On the list of questions sent in advance was that burning issue: Has flash killed disk? Against my better judgment, I accepted the offer. Opinions being elbows, I figured I had a couple to contribute.

I joined a couple of notable commentators from the vendor’s staff and the analyst community, who I presumed would echo the talking points of their client like overzealous high school cheerleaders. I wasn’t wrong.

Shortly after it started, I found myself drifting from the nonvolatile memory express (NVMe) flash storage party line. I also noted that software-defined storage (SDS) futures weren’t up and to the right in the companies I was visiting, despite one analyst's projections of 30%-plus growth rates over the next couple of years. Serious work remained to be done to improve the predictability, manageability and orchestration of software-defined and hyper-converged storage, I said, and the SDS stack itself needed to be rethought to determine whether the right services were being centralized.

Yesterday’s silicon tomorrow

I also took issue with the all-silicon advocates, stating my view that NVMe flash storage might just be “yesterday’s silicon storage technology tomorrow,” or at least a technology in search of a workload. I wondered aloud whether NVMe, the “shiny new thing,” mightn’t shortly be usurped by capacitor-backed dynamic RAM (DRAM) that’s significantly less expensive and faster. DRAM also has much lower latency than NVMe flash storage because it’s directly connected to the memory channel rather than the PCI bus or a SAS or SATA controller.

The vendor tried to steer me back into the fold, saying “Of course, you need the right tool for the right job.” Truer words were never spoken. I replied that silicon storage was part of a storage ecosystem that would be needed in its entirety if we were to store the zettabytes of data coming our way. The vendor liked this response since the company had a deep bench of storage offerings that included disk and tape.

I then took the opportunity to further press the notion that disk isn’t dead any more than tape is dead, despite increasing claims to the contrary. (I didn’t share a still developing story around a new type of disk with a new form factor and new data placement strategy that could buy even more runway for that technology. For now, I am sworn to secrecy, but once the developers give the nod, readers of this column will be the first to know.)

I did get some pushback from analysts about tape, which they saw as completely obsolete in the next-generation, all-silicon data center. I could have pushed them over to Quantum Corp. for another view.

The back story

A few columns back, I wrote something about Quantum exiting the tape space based on erroneous information from a recently released employee. I had to issue a retraction, and I contacted Quantum and spoke with Eric Bassier, senior director of data center products and solutions, who set the record straight. Seems Quantum — like IBM and Spectra Logic — is excited about LTO-8 tape technology and how it can be wed to the company’s Scalar tape products and StorNext file system.

Bassier said Quantum was “one of only a few storage companies [in 2016] to demonstrate top-line growth and profitability,” and its dedication to tape was not only robust, it succeeded with new customers seeking to scale out capacity. In addition to providing a dense enterprise tape library, the Scalar i6000 has 11,000 or more slots, a dual robot and as many as 24 drives in a single 19-inch rack frame, all managed with web services using representational state transfer, or RESTful API calls.


Quantum was also hitting the market with a 3U rack-mountable, scalable library capable of delivering 150 TB of uncompressed LTO-7 tape storage or 300 TB of uncompressed LTO-8 storage for backup, archive or additional secondary storage for less frequently used files and objects. Add compression and you more than double these capacity numbers. That, Bassier asserted, was more data than many small and medium-sized companies would generate in a year.

Disk also has a role in Quantum’s world; its DXi product provides data deduplication that’s a significant improvement over the previous-generation model. It offers performance and density improvements through the application of SSDs and 8 TB HDDs, as well as a reduction in power consumption.

All the storage buckets

Quantum, like IBM and Spectra Logic, is articulating a product strategy that has fingers in all the popular buckets, including tape, disk and NVMe flash storage. After years of burying its story under a rock by providing OEM products to other vendors who branded them as their own, Quantum now derives 90% of its revenue from its own brand.

Bottom line: We might eventually get to an all-silicon data center. In the same breath, I could say that we might eventually get that holographic storage the industry has promised since the Kennedy administration. For planning 2018, your time is better spent returning to basics. Instead of going for the shiny new thing, do the hard work of understanding your workload, then architecting the right combination of storage and software to meet your needs. Try as you might, the idea of horizontal storage technology — one size fits most — with simple orchestration and administration, remains elusive.

That’s my two elbows.

Zayo SD-WAN available as a managed or on-premises product

Zayo Group Holdings Inc., a bandwidth infrastructure provider in the U.S. and Europe, is offering SD-WAN as a managed or do-it-yourself product.

The technology, launched this week, lets companies distribute network traffic across MPLS, broadband and Long Term Evolution connections. The product is an extension of Zayo’s fiber-based IP/MPLS backbone offerings for enterprises.

Zayo, based in Boulder, Colo., provides an online portal that companies can use to monitor and modify traffic flows down to the application level. Customers have the option of outsourcing network management to Zayo.

In general, SD-WAN uses software-defined networking concepts to automatically determine the most cost-effective route to and from branch offices, data centers and cloud-based applications. Network operators manage the SD-WAN through a centralized controller.
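The controller's core decision can be sketched in a few lines. This is an illustrative model of per-application path selection, not Zayo's implementation; the link names, costs and latency figures are made up:

```python
# Illustrative SD-WAN path selection: pick the cheapest link that meets
# an application's latency requirement. Figures below are hypothetical.
from dataclasses import dataclass

@dataclass
class Link:
    name: str
    cost_per_gb: float   # relative transport cost
    latency_ms: float    # current measured latency

LINKS = [
    Link("mpls", cost_per_gb=8.0, latency_ms=20),
    Link("broadband", cost_per_gb=1.0, latency_ms=45),
    Link("lte", cost_per_gb=12.0, latency_ms=60),
]

def select_path(max_latency_ms: float) -> Link:
    """Cheapest link whose latency meets the app's requirement."""
    eligible = [l for l in LINKS if l.latency_ms <= max_latency_ms]
    if not eligible:  # nothing qualifies: fall back to the lowest latency
        return min(LINKS, key=lambda l: l.latency_ms)
    return min(eligible, key=lambda l: l.cost_per_gb)

print(select_path(30).name)   # latency-sensitive app -> "mpls"
print(select_path(100).name)  # bulk transfer -> "broadband"
```

In a real deployment the controller re-measures latency, loss and jitter continuously and re-runs a decision like this per flow, which is what makes hybrid MPLS/broadband/LTE transport cost-effective.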

Zayo’s metro networks

Zayo has a 126,000-mile network that provides metro connectivity to buildings and data centers. The company offers high-capacity dark fiber, Ethernet and other connectivity options, as well as carrier-neutral colocation and cloud infrastructure services in its data centers.

Zayo’s latest product launch was expected. Last November, the company told financial analysts it planned to launch an SD-WAN offering in the first quarter of 2018, according to transcripts from the business site Seeking Alpha.

Zayo is entering a consolidating market of more than 40 vendors. Last year, VMware announced plans to acquire SD-WAN vendor VeloCloud, roughly three months after Cisco completed the purchase of Viptela for $610 million. VMware expects to complete the transaction next month.

The market for SD-WAN technology, infrastructure and services will reach $6 billion by 2020, according to IDC. The research firm expects service providers to account for more than half of the market.

Most of Zayo’s network is located in North America. The company, however, has increased its footprint in the United Kingdom and Europe. In 2014, Zayo acquired Geo Networks, based in London, for an undisclosed sum. Geo provided dark fiber and open-access networks.

North Korea’s Lazarus Group sets sights on cryptocurrency

The North Korean state-sponsored hacking outfit known as Lazarus Group has moved beyond ransomware attacks and shifted its focus to cryptocurrency.

Lazarus Group stands accused of perpetrating the widespread WannaCry ransomware attacks earlier this year. Several private companies and governments, including the U.S., have attributed the attacks to the North Korean hacker group. Now, researchers from cybersecurity vendors Proofpoint, Inc., and RiskIQ say Lazarus Group has initiated attacks on cryptocurrency exchanges and owners in at least two different countries.

“Earlier this year, the activities of the Lazarus group in South Korea were discussed and analyzed, as they managed to compromise accounts on various South Korean cryptocurrency exchanges,” wrote Yonathan Klijnsma, threat researcher at RiskIQ, in a blog post. “More recently, they were seen targeting a United Kingdom-based cryptocurrency exchange.”

Several cryptocurrency exchanges have been hit by cyberattacks in recent weeks, including South Korean exchange Youbit, which declared bankruptcy after it lost 17% of its assets in a breach last week. While the Youbit attack hasn’t been attributed to the Lazarus Group or other North Korean nation-state hackers, other incidents, including a massive spearphishing campaign targeting a U.K.-based cryptocurrency business, have been connected to the group.

“The Lazarus Group has increasingly focused on financially motivated attacks and appears to be capitalizing on both the increasing interest and skyrocketing prices for cryptocurrencies,” wrote Darien Huss, senior security researcher at Proofpoint, in the company’s report.

While Proofpoint and RiskIQ don’t name the organizations victimized by the Lazarus Group, researchers from the two vendors outlined the group’s new techniques for stealing cryptocurrency from both exchanges and owners. Proofpoint, for example, described several “multistage attacks” that lure victims into downloading malware, including a backdoored version of PyInstaller, a free application that bundles Python programs into a single executable package, and PowerShell malware known as “PowerRatankba” used for reconnaissance. After the initial infections are completed, Huss said, the attackers hit victims with a second wave of malware that harvests credentials for both individual cryptocurrency wallets and exchange accounts.

RiskIQ, meanwhile, identified a large phishing campaign that claimed to be bitcoin wallet software and featured links that impersonated the domain of Bitcoin Gold. According to RiskIQ researchers, Lazarus Group hackers abused internationalized domain name registration to trick victims into believing the malicious site was genuine. In addition, Proofpoint’s report highlights a new type of point-of-sale (POS) malware, dubbed “RatankbaPOS,” that targets the POS framework of KSNET, a major South Korean payment provider.
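Internationalized domain names enable "homograph" impersonation: a registered name substitutes visually identical non-Latin characters for Latin ones. A sketch of how a defender might flag such look-alikes (the confusables table is a tiny illustrative subset; production tools use the full Unicode confusables data):

```python
# Sketch of homograph detection for internationalized domain names.
# The confusables map below is a small illustrative subset.
CONFUSABLES = {
    "\u0430": "a",  # Cyrillic a
    "\u043e": "o",  # Cyrillic o
    "\u0456": "i",  # Cyrillic i
    "\u0455": "s",  # Cyrillic dze
}

PROTECTED_BRANDS = {"bitcoingold.com"}

def skeleton(domain: str) -> str:
    """Map visually confusable characters to their ASCII look-alikes."""
    return "".join(CONFUSABLES.get(ch, ch) for ch in domain.lower())

def is_homograph(domain: str) -> bool:
    """A domain that is not the brand but collapses to it visually."""
    return domain not in PROTECTED_BRANDS and skeleton(domain) in PROTECTED_BRANDS

print(is_homograph("bitc\u043eingold.com"))  # True: contains a Cyrillic 'o'
print(is_homograph("bitcoingold.com"))       # False: the genuine domain
```

In the browser's address bar the forged name renders identically to the real one, which is why victims in the campaign RiskIQ describes believed the malicious site was genuine.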

Huss warned the Lazarus Group has a financially motivated arm that has branched out beyond typical nation-state activity and is targeting individuals the same way that organized cybercrime outfits have.

“This group now appears to be targeting individuals rather than just organizations: individuals are softer targets,” Huss wrote, “often lacking resources and knowledge to defend themselves and providing new avenues of monetization for a state-sponsored threat actor’s toolkit.”

Looking ahead to the biggest 2018 cybersecurity trends

Jon Oltsik, an analyst with Enterprise Strategy Group in Milford, Mass., examined some of the top 2018 cybersecurity trends. While some analysts have focused on ransomware, and others made dire pronouncements about nationwide power-grid attacks, Oltsik said he’s more concerned about cloud security, where easily exploitable vulnerabilities are becoming increasingly likely.

Security teams — many of which are facing a severe lack of cybersecurity skills — are struggling with the rapid deployment of cloud technologies, such as virtual machines, microservices and containers in systems such as Amazon Web Services or Azure. Many organizations are switching to high-end security options from managed security service providers or SaaS providers. ESG research indicated 56% of organizations are interested in security as a service.

Among other 2018 cybersecurity trends, Oltsik said he foresees greater integration of security products and the continued expansion of the security operations and analytics platform architecture model. As large vendors like Cisco, Splunk and Symantec scramble to catch up, they will fill holes in existing portfolios. Although he said he sees machine learning technology stuck in the hype cycle, in 2018, Oltsik projects machine learning will grow as a “helper app” in roles such as endpoint security or network security analytics.

With the introduction of the European Union’s General Data Protection Regulation (GDPR) on May 25, 2018, Oltsik said a major fine — perhaps as much as $100 million — may serve as a wake-up call to enterprises whose security platforms don’t meet the standard.

“One U.K. reseller I spoke with compared GDPR to Y2K, saying that service providers are at capacity, so if you need help with GDPR preparation, you are out of luck. As GDPR anarchy grips the continent next summer, look for the U.S. Congress to (finally) start engaging in serious data privacy discussions next fall,” he added.

Dig deeper into Oltsik’s ideas on 2018 cybersecurity trends.

The challenges of BGP

Ivan Pepelnjak, writing in ipSpace, said when Border Gateway Protocol (BGP) incidents occur, commentators often call for a better approach. “Like anything designed on a few napkins, BGP has its limits. They’re well-known, and most of them have to do with trusting your neighbors instead of checking what they tell you,” he said.

To resolve problems with BGP, Pepelnjak recommended the following: First, IT teams need to build a global repository of who owns which address. Second, they need to document who connects to whom and understand their peering policies. Third, they need to filter traffic from addresses that are obviously spoofed.
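The first and third steps amount to origin validation: checking an announcement against a repository of prefix ownership before accepting it. A minimal Python sketch, using an invented repository (the AS numbers and prefixes below are illustrative documentation values, not real assignments):

```python
# Sketch of BGP origin validation against a hypothetical prefix-ownership repository.
import ipaddress

# Global repository: which autonomous system may originate which prefixes.
PREFIX_OWNERS = {
    64500: [ipaddress.ip_network("192.0.2.0/24")],
    64501: [ipaddress.ip_network("198.51.100.0/24")],
}

def accept_announcement(origin_as: int, prefix: str) -> bool:
    """Accept an announcement only if the origin AS owns the prefix
    (or a covering aggregate of it)."""
    net = ipaddress.ip_network(prefix)
    allowed = PREFIX_OWNERS.get(origin_as, [])
    return any(net.subnet_of(owned) for owned in allowed)

print(accept_announcement(64500, "192.0.2.0/24"))  # legitimate origin: accepted
print(accept_announcement(64501, "192.0.2.0/25"))  # more-specific hijack: rejected
```

Production systems implement this idea with route filters built from IRR data or RPKI validators, but the accept/reject decision reduces to the same ownership lookup.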

The good news, Pepelnjak said, is most BGP issues can be solved with guidance from Best Current Practice 194 (BCP 194), the latest update. In Pepelnjak’s view, internet service providers (ISPs) are often the problem. ISPs have little incentive to resolve BGP issues or reprimand customers, who can easily switch to more permissive providers. An additional problem stems from internet exchange points running route servers without filters.

According to Pepelnjak, because engineers hate confrontation, they often turn to cryptographic tools, such as resource public key infrastructure, rather than fixing chaotic or nonexistent operational practices. “What we’d really need to have are (sic) driving licenses for ISPs, and some of them should be banned for good, due to repetitive drunk driving. Alas, I don’t see that happening in my lifetime,” he added.

Read more of Pepelnjak’s thoughts on BGP issues.

Artificial intelligence, low-code and abstracting infrastructure

Charlotte Dunlap, an analyst with GlobalData’s Current Analysis group in Sterling, Va., blogged about the repositioning of mobile enterprise application platforms (MEAP) to address app development and internet of things. Dunlap said advancements in AI, API management and low-code tools play into DevOps’ need for abstracted infrastructure.

GlobalData research indicated that MEAP is widely used to abstract complexity, particularly in use cases such as application lifecycle management related to AI-enabled automation or containerization.

GlobalData awarded high honors to vendors that integrated back-end data for API management, such as IBM MobileFirst and Kony AppPlatform. Dunlap said mobile service provider platform strategies have increasingly shifted to the needs of a DevOps model.

“Over the next 12 months, we’ll see continued momentum around a growing cloud ecosystem in order to stay competitive with broad platform services, including third-party offerings. Most dominant will be partnerships with Microsoft and Amazon for offering the highest levels of mobile innovation to the broadest audiences of developers and enterprises,” Dunlap said.

Explore more ideas from Dunlap on the changing nature of MEAP.

Set Office 365 group limits to avoid administrative hassles

Set Office 365 group limits to rein in unchecked access, which could otherwise lead to unintended consequences.

An Office 365 group not only contains the membership list for a collection of people, but also manages provisioning and access to multiple services, such as Exchange and SharePoint. At a fundamental level, this means each time a user creates a group for something — a project, or perhaps a team — they add a SharePoint site, group inbox, calendar, Planner, OneNote and more.

Groups is also the foundation behind new services such as Microsoft Teams, Office 365’s chat-based collaboration app. In addition to messaging via channels, Teams enables users to chat with colleagues over voice and video calls, collaborate on documents and use tabs to display other relevant team information. Teams uses Office 365 Groups to produce a team within Teams, not only for the membership list, but also to connect the underlying group-enabled services for data storage.

Why Office 365 group limits are crucial

By default, Office 365 users can create groups without any restrictions. While this appears to be a great idea to prompt viral adoption, it is likely to backfire.

The strength of Office 365 Groups is that only one group is needed to manage a team’s calendar, share files among colleagues, and hold group video calls and chats. However, this is not immediately obvious to workers as they explore available services.

For example, a user starts work on a project and, being new to Microsoft Planner, decides to add a plan with the name Project Z Plan. The user also sees he can create a group calendar in Outlook, which he names Project Z Calendar. He feels he could also use a SharePoint site for the project, so he makes one called Project Z. Later, the user discovers Microsoft Teams and feels it can help with the project collaboration efforts, so he generates a new team named Project Z Team.

Each of those actions creates a new group in Office 365. A combined lack of guidance and structure means the worker’s actions — intended to build a seamless fabric that connects multiple Office 365 services — added multiple silos and redundant resources.

This scenario illustrates the need for administrators to develop Office 365 group limits to avoid similar issues. Users need instruction on what tool to use and when, but also some understanding of what a group is in the context of the organization.

Checklist for a proper Office 365 Groups configuration

Before enabling Office 365 Groups for widespread adoption, the administrator should adjust the basic settings to provide limits and help users adhere to corporate standards.

At a minimum, the IT department should consider the following Office 365 Groups configuration:

  • the email address policy for group Simple Mail Transfer Protocol addresses;
  • usage guidelines;
  • group creation restrictions; and
  • group classifications.

Apart from the email address policy, all other configurations require an Azure Active Directory Premium license, as documented here.

Next, define the settings to adjust:

  • Email address: groupname@contoso.com. The company will use the main domain name because all the mailboxes were moved to Office 365.
  • Usage guideline URL: https://contoso.sharepoint.com/usage. This shows users best practices for producing Office 365 Groups.
  • Group creation restrictions: enable the line managers group to add Office 365 Groups, so only managers can create new groups.
  • Group classifications: low risk, medium risk and high risk. This enables users to classify groups and be aware of the sensitivity of the information within them.

To make these changes, we use PowerShell to change the configuration in multiple places.

For the email address policy configuration, add a new policy that applies to all groups with the New-EmailAddressPolicy cmdlet:

$UserCredential = Get-Credential

$Session = New-PSSession -ConfigurationName Microsoft.Exchange -ConnectionUri https://outlook.office365.com/powershell-liveid/ -Credential $UserCredential -Authentication Basic -AllowRedirection

Import-PSSession $Session

New-EmailAddressPolicy -Name GroupsPolicy -IncludeUnifiedGroupRecipients -EnabledEmailAddressTemplates "SMTP:@contoso.com" -Priority 1

For the group configuration settings, use the Azure AD preview module. After connecting to Azure AD, use this code to confirm there is a template for group settings:

Connect-AzureAD -Credential $UserCredential

$Template = Get-AzureADDirectorySettingTemplate | Where {$_.DisplayName -eq "Group.Unified"}

$Settings = $Template.CreateDirectorySetting()

Next, define the group settings based on the configuration defined in the table and apply it:

# Configure the URL for our guidelines

$Settings["UsageGuidelinesUrl"] = "https://contoso.sharepoint.com/usage"

# Disable group creation except for the Line Managers group

$Settings["EnableGroupCreation"] = $False

$Settings["GroupCreationAllowedGroupId"] = (Get-AzureADGroup -SearchString "Line Managers").ObjectID

# Create our list of classifications

$Settings["ClassificationList"] = "Low Risk,Medium Risk,High Risk"

# Apply the settings

New-AzureADDirectorySetting -DirectorySetting $Settings

Verify those settings with the following command:

(Get-AzureADDirectorySetting -All $true).Values

Office 365 Groups configuration
Use PowerShell to check the settings for Office 365 Groups.

With those adjustments in place, the new Office 365 Groups creation process changes, as shown below.

Office 365 Groups plan
A new plan shows the configuration settings defined by the Office 365 administrator.

Now, new Groups will have appropriate email addresses assigned — existing groups remain unchanged.

Office 365 Groups email
With a configuration in place for Office 365 Groups, the proper email address gets produced automatically.

Add boundaries and reduce complications

It’s important for administrators to employ Office 365 group limits. This practice prevents unchecked access to resources in the collaboration platform, which maintains order and avoids problems with redundancy and wasted resources.

Change key settings to put basic governance in place to steer users toward usage guidelines for Office 365 Groups. This helps the administrator ensure the groups are created correctly and can be managed properly as adoption grows.

Box using Azure is now available | Box Blog

A few weeks ago at BoxWorks 2017, Scott Guthrie, EVP of Microsoft’s Cloud and Enterprise group, joined our CEO Aaron Levie to announce some exciting news: Box using Azure will be generally available in November. The day has come!

What is Box using Azure?

Box using Azure is the first product milestone in the expanded partnership between Box and Microsoft. Now customers can benefit from combining Box’s cloud content management platform with Microsoft’s global-scale Azure cloud platform, to:

  • Simplify cross-company collaborative processes between employees and external stakeholders.
  • Securely manage content for the enterprise, with integrations for 1,400 best-of-breed SaaS apps, including Office 365 apps, while allowing users to work in their familiar productivity and line-of-business tools.
  • Bring Box cloud content management capabilities to their own custom applications that deliver new digital content experiences and streamline business processes for their employees, customers and partners.

Today, thousands of businesses get work done using Box with Microsoft Office 365, including the new Microsoft Teams. This new integration with Azure is another step toward delivering a great user experience for our customers using Box with the Microsoft stack.

“Flex has successfully been using Box as our primary platform for digital content sharing, storage and collaboration globally. We also use Microsoft Azure as one of our cloud computing services for our global IT infrastructure,” said Gus Shahin, CIO of Flex. “We look forward to seeing how Box and Microsoft Azure Cognitive Services work together to deploy next generation A.I. and machine learning capabilities.”

What’s coming next?

Microsoft and Box engineering teams are working hard to build out even more capabilities over the coming months, such as:

  • Powering Box content with intelligent capabilities from Microsoft Cognitive Services that enable customers to automatically identify and categorize content, trigger workflows and tasks, and make content more discoverable for users.
  • Leveraging Azure’s broad global footprint to meet data sovereignty requirements and ensure compliance with industry regulations or corporate policies.

“The integration of Box and Azure services is a welcome development for our digital transformation journey as a company. This can help deliver a more streamlined approach to our content management and ensures that Schneider Electric employees can securely and quickly work together and with customers and partners in a much more productive way, adding more value to our use of Box and Microsoft solutions,” said Herve Coureil, Chief Digital Officer, Schneider Electric.

Box using Azure is currently available with content storage in US data centers. Box add-on packages can be used with Box using Azure, including information governance to meet your organization’s security requirements and compliance standards, customer-managed encryption keys to take ownership of your encryption keys, and workflow automation to streamline business processes.

How do I get started?

If you’re interested in Box using Azure, learn more or get in touch with Box Sales.