Are software-defined WAN security features sufficient to handle the demands of most enterprises? That’s the question addressed by author and engineer Christoph Jaggi, whose SD-WAN security concerns were cited in a recent blog post on IPSpace. The short answer? No — primarily because of the various connections that can take place over an SD-WAN deployment.
“The only common elements between the different SD-WAN offerings on the market are the separation of the data plane and the control plane and the takeover of the control plane by an SD-WAN controller,” Jaggi said. “When looking at an SD-WAN solution, it is part of the due diligence to look at the key management and the security architecture in detail. There are different approaches to implement network security, each having its own benefits and challenges.”
Organizations contemplating SD-WAN rollouts should determine whether prospective products meet important security thresholds. For example, products should support cryptographic protocols and algorithms and meet current key management criteria, Jaggi said.
Read what Jaggi had to say about the justification for SD-WAN security concerns.
Wireless ain’t nothing without the wire
You can have the fanciest access points and the flashiest management software, but without good and reliable wiring underpinning your wireless LAN, you’re not going to get very far. So said network engineer Lee Badman as he recounted a situation where a switch upgrade caused formerly reliable APs to lurch to a halt.
“I’ve long been a proponent of recognizing [unshielded twisted pair] as a vital component in the networking ecosystem,” Badman said. Flaky cable might still be sufficient in a Fast Ethernet world, but with multigig wireless now taking root, old cable can be the source of many problems, he said.
For Badman, the culprit was PoE-related; once the cable was re-terminated and retested, the APs again worked like a charm. A good lesson.
See what else Badman had to say about the issues that can plague a WLAN.
The long tail and DDoS attacks
Now there’s something new to worry about with distributed denial of service, or DDoS, attacks. Network engineer Russ White has examined another tactic, dubbed tail attacks, which can just as easily clog networking resources.
Unlike traditional DDoS or DoS attacks that overwhelm bandwidth or TCP sessions, tail attacks concentrate on resource pools, such as storage nodes. In this scenario, a targeted node might be struggling because of full queues, White said, and that can cause dependent nodes to shut down as well. These tail attacks don’t require a lot of traffic and, what’s more, are difficult to detect.
For now, tail attacks aren’t common; they require attackers to know a great deal about a particular network before they can be launched. That said, they are something network managers should be aware of, White added.
Google’s disclosure policy and Android security in general came under question after the company disclosed a flaw in the Android installer for the world’s most popular game, Fortnite. The flawed installer is only for Android users because Fortnite developer Epic Games bypassed security protections available for apps distributed through the Google Play Store, in order to maximize profits and avoid paying distribution fees to Google.
On Friday, Google disclosed the Fortnite vulnerability and described it as a risk for a man-in-the-disk attack where any “fake [Android Package Kit] with a matching package name can be silently installed” by the Fortnite installer. Google disclosed the flaw to Epic Games on Aug. 15, and Epic had produced a patch within 24 hours.
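Google's report did not detail Epic's patch, but one general defense against this class of man-in-the-disk flaw is to verify a downloaded package against a digest pinned inside the app before invoking the installer. The sketch below is illustrative only (the pinned value and byte strings are placeholders, not Fortnite's): a fake APK that merely matches the package name fails the check.

```python
import hashlib

# Illustrative sketch, not Epic's actual fix: pin the digest of the
# genuine package and refuse to install anything that doesn't match it.
PINNED_SHA256 = hashlib.sha256(b"genuine-apk-bytes").hexdigest()  # placeholder

def is_genuine(apk_bytes: bytes) -> bool:
    """Accept the downloaded package only if its digest matches the pin."""
    return hashlib.sha256(apk_bytes).hexdigest() == PINNED_SHA256

print(is_genuine(b"genuine-apk-bytes"))                 # True
print(is_genuine(b"fake-apk-same-package-name"))        # False
```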
After testing the patch and deploying it to users on Aug. 16, Epic asked Google on the issue tracker page if they could have “the full 90 days before disclosing this issue so our users have time to patch their devices.” Google did not respond on the issue tracker until Aug. 24, when it noted that “now the patched version of Fortnite Installer has been available for 7 days we will proceed to unrestrict this issue in line with Google’s standard disclosure practices.”
Epic Games founder Tim Sweeney accused Google on Twitter of wanting “to score cheap PR points” by disclosing the Fortnite vulnerability because Epic Games had released the game outside of the Google Play Store.
Epic Games had previously claimed the reason for not releasing Fortnite for Android through the Play Store was twofold: to maintain a “direct relationship” with customers and to avoid the 30% cut Google would take from in-app purchases. Security experts immediately expressed skepticism about the move because of the security checks in Android that need to be turned off in order to sideload an app from outside of the Play Store and the risk of malicious fakes.
Sweeney admitted on Twitter that the Fortnite vulnerability was Epic’s responsibility, but took issue with Google’s fast disclosure.
"I grant that Google finding a flaw in our software and sourcing stories about the fact of it is a valid PR strategy," Sweeney tweeted. "But why the rapid public release of technical details? That does nothing but give hackers a chance to target unpatched users."
Liviu Arsene, senior e-threat analyst at Romania-based antimalware firm Bitdefender, said that “from a security perspective there’s no right or wrong in this scenario.”
“As soon as the vulnerability was reported, Epic fixed [it] within 24 hours, which is commendable, and then Google publicly disclosed it according to their policy. Technically, users are now safe and informed regarding a potential security vulnerability that could have endangered their privacy and devices,” Arsene wrote via email. “Granted, not all users will receive and install the update instantly, but the same can be said for most security patches and updates. As long as Epic is committed to delivering patches for their apps, regardless if they’re in Google Play or not, and Google is committed to finding and responsibly disclosing vulnerabilities, security is enforced and users are the ones that benefit most.”
How secure are we? That’s the most common — and challenging — question that CISOs get asked by their board members, according to a recent report published by Kudelski Security. While there is no clear yes or no answer, the key is to first understand exactly what and why the board is asking, said John Hellickson, managing director of global strategy and governance at Kudelski Security.
“It is important to make it clear to the board that there is no such thing as perfect security,” Hellickson said.
The report, titled “Cyber Board Communications & Metrics — Challenging Questions from the Boardroom,” highlights top questions CISOs are asked by their board members and offers strategies to address them. For example, one idea to help facilitate an effective CISO-board communication is to bolster board presentations with metrics and visuals.
The biggest takeaway for CISOs is that boards of directors are taking more interest in the security posture of their organizations, Kudelski Security CEO Rich Fennessy said. This provides both a challenge and an opportunity for CISOs, Fennessy added.
“The challenge is that a majority of CISOs, even seasoned ones, have difficulty understanding what boards are looking for and then providing this in a way that resonates,” Fennessy said. “We feel that a new approach to communicating cyber risk is needed and this represents the opportunity.”
A new approach to CISO-board communication
One of the most important findings from the report is the need for a new approach to communication between the CISOs and their organization’s board members.
In today’s volatile security landscape, it is vital that CISOs present the need to invest in a robust and mature cybersecurity program, Fennessy stressed. A partnership between CISOs and their board of directors is crucial to this end, he added, and the effectiveness of any company’s security program depends on it.
To improve CISO-board communication, CISOs need to explain cybersecurity issues to the board in layman’s terms, according to Bryce Austin, CEO at TCE Strategy and author of Secure Enough? 20 Questions on Cybersecurity for Business Owners and Executives.
“Explain the concepts of multifactor authentication, encryption in motion and at rest, zero-day vulnerabilities and GDPR,” Austin said. “The board needs to understand what these concepts and regulations are and how they impact their company.”
But because CISOs are given limited time to interact with the board, they have to learn how to engage quickly and partner for the common cause, Hellickson said. This means getting to know their organization, its vision and mission. CISO-board communication should become easier as CISOs learn more about the board’s goals for the organization, share relevant security information and consider business needs in their presentations, he added.
“CISOs will start to create a bridge between the technology and the organizations’ broader issues and challenges; linking security with the ability of the organization to go to market, operate efficiently, minimize downtime, reduce costs and finally become a key partner to the board,” Hellickson said.
Metrics are an important tool for CISOs because they help answer key questions the board is likely to ask and help CISOs make their case, Hellickson said. Boards prefer objective, quantitative evidence, but both quantitative and qualitative metrics can be effective, he added.
Even the most seasoned CISOs find it challenging to translate security and risk information into business language that provides meaningful insight to boards and business leaders, he said.
“Traditionally, CISOs have presented boards with metrics related to technical and security operations, which are hard to understand,” he added. “Presenting them can even reduce trust in their ability as security leaders.”
Boards are fact-driven and financially driven, Austin emphasized. They want relevant data presented to them so that they can make the best decisions for their organization.
Core quantitative metrics like dwell time, details of new vulnerabilities discovered versus remediated, patch management data, number of incidents and vulnerabilities, and number of non-remediated risks should be part of the presentation, Hellickson said.
Other metrics to include are outcomes of initiatives aimed at reducing risk; how security has been integrated with application development; actions taken to improve the company’s security risk posture; and risks the organization has accepted, along with how they align with the company’s agreed-upon risk tolerance, he added.
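The quantitative metrics Hellickson lists are simple to compute once incident data is tracked consistently. A minimal sketch, using made-up incident records (not data from the Kudelski report):

```python
from datetime import date

# Hypothetical incidents, invented for illustration:
# (detected, contained, remediated?)
incidents = [
    (date(2018, 3, 1),  date(2018, 3, 15), True),
    (date(2018, 4, 2),  date(2018, 4, 4),  True),
    (date(2018, 5, 10), date(2018, 6, 9),  False),
]

# Dwell time: days between detection and containment, per incident.
dwell_days = [(contained - detected).days for detected, contained, _ in incidents]
mean_dwell = sum(dwell_days) / len(dwell_days)

# Remediation rate: share of incidents fully remediated.
remediation_rate = sum(1 for *_, fixed in incidents if fixed) / len(incidents)

print(f"Mean dwell time: {mean_dwell:.1f} days")           # 15.3
print(f"Remediated: {remediation_rate:.0%} of incidents")  # 67%
```

Numbers like these translate directly into the kind of objective, trend-able evidence Hellickson says boards prefer.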
“We also think it is helpful to talk about security as a journey, showing where you’re at today, where you want to get to and where you’ve made noteworthy progress,” Hellickson said.
Ivan Pepelnjak asks the question most network engineers are already deliberating: Why is networking automation so hard?
In a post on IPSpace, Pepelnjak said the challenges facing automation — such as a lack of good tools and APIs — obscure the biggest reason networking automation is so tough to do. And that’s because, as Pepelnjak termed it, “every network is a unique snowflake.”
“You can buy dozens of network management products, download numerous open source tools, and yet you won’t be a single step closer to offering service-level abstraction of your network to your users,” he said.
A better approach might be to build a customizable tool to meet your needs, or to construct a network management system based on Ansible that’s integrated with an orchestration platform.
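The "build a customizable tool" idea often starts very small: render per-device configuration from one structured source of truth instead of hand-editing each box. A minimal sketch (device names and VLAN values are invented for illustration):

```python
# Render device configs from a single source of truth. In a real tool this
# grows into proper templating (e.g., Jinja2) plus a push mechanism, but
# the abstraction step is the same.
TEMPLATE = """hostname {name}
vlan {vlan}
 name {vlan_name}"""

devices = [
    {"name": "sw-edge-1", "vlan": 110, "vlan_name": "users"},
    {"name": "sw-edge-2", "vlan": 120, "vlan_name": "voice"},
]

configs = {d["name"]: TEMPLATE.format(**d) for d in devices}
print(configs["sw-edge-1"])
```

Even a sketch this small captures Pepelnjak's point: the hard part isn't the rendering, it's agreeing on a data model that fits your particular snowflake of a network.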
Or you could just give up, although Pepelnjak advises his readers to consider that decision very carefully before proceeding.
Mobile carriers have been spending plenty of money constructing the frameworks necessary for 5G, the next generation of cellular technology. But telcos are still making these investments without knowing if there is a large enough business case to justify the sums they’re spending.
While many investors anticipate the enterprise networking market will be the first real use case for 5G, GlobalData analyst Josh Hewer has a better idea: gaming.
Gamers require connections with low latency and high availability, attributes that 5G connectivity offers, he said. Today, most players are harnessed to Wi-Fi networks, but that could change if higher-speed cellular is an option.
Epic Games, the developer of the smash hit Fortnite Battle Royale, is already pulling in $1 million a day from mobile users — illustrating how lucrative the market could be for 5G carriers.
“Operators are going to have to look outside of the enterprise market to justify early 5G investments,” Hewer said.
With one in three people on the planet paying for games on PC and mobile, according to Hewer, perhaps 5G investors should start to take gaming more seriously.
According to Gartner analyst Rajesh Kandaswamy, it’s simply too early to tell how blockchain will evolve and what its impact on networking will be.
Kandaswamy tackled two related questions: Is blockchain about decentralization? And can blockchain survive without decentralization?
Alas, a definitive answer is still not possible, Kandaswamy wrote, adding that it’s possible the ledger system can survive either way.
Regardless, he remained encouraged about the role blockchain may play in future networking environments. “Who are we to assume that the paths of future innovation will be restricted to certain ways, when the technology itself is evolving and humankind’s potential for ingenuity is vast?”
The first part of this series on Storage Spaces Direct addresses the fundamental question: What is it, and why is it useful? S2D was originally included in Windows Server 2016 but has experienced a huge surge in adoption lately. Find out why.
Read the post here: Storage Spaces Direct Series Part 1 – What is S2D?
Are you hiring the right kind of software developer?
That might seem like a ridiculous question considering developers are in short supply around the world. In fact, a recent survey on software developer hiring highlighted the high demand for developers. On average, developers get 11 headhunter calls a year. But with the explosion of low-code/no-code development platforms, a rapidly changing technology landscape and a strong move to DevOps, it’s more important than ever that the assembled developer team fits the job description.
In the old days, it was enough to know a developer had programming chops and knowledge of the latest tools. Today, everything from communication skills to experience on the business side, an ability to integrate and a deep understanding of user experience all need to be on the résumé. And that’s just for starters.
What’s not necessarily on that résumé is a computer science degree. Jeffrey Hammond, a principal analyst at Forrester Research, pointed out that he’s seeing more and more developers at conferences without traditional development backgrounds. Part of the reason is the tools that make it easier than ever to code — low-code/no-code platforms — and part of it is that the job of the developer itself is changing.
“Salesforce is building a community of developers right now without traditional backgrounds,” he explained. So a key question to ask yourself about software developer hiring is whether those traditional backgrounds are really what will get the job done in your organization.
Also keep in mind whether you need truly custom software. In the recent Harvey Nash 2017 Technology Survey, 47% of the 3,000 participants said custom software is in decline. Even two years ago, that thought would have been dismissed as crazy. But between low-code/no-code platforms, easy access to APIs and the move to microservices, the idea of using building blocks created by others and essentially rearranging them to make new software is no longer far-fetched.
If a custom product doesn’t add value to your organization, then why choose — and pay a huge premium for — a full-fledged developer? Instead, you’ll want to look for someone who is more of an integrator, comfortable with weaving bits and pieces together to create the right solution. That person often won’t have a traditional development background, but on the flip side, you’ll avoid paying the higher salary — or “hiring tax” — for a software developer.
Sometimes, you’ll need serious development chops, but only for a particular job. That’s what happened to Lucy Warner, CEO at the U.K.’s National Health Service (NHS) Practitioner Health Programme. Her users are NHS doctors looking for physicians to treat them, but they don’t want to go to a close working colleague or a former medical school classmate. She needed an app that would let them find local doctors, with enough information to ensure they’d feel comfortable with their choice. And she wanted to use the “swipe right, swipe left” interface that’s become ubiquitous.
The only problem: her IT team didn’t have the skills to make this happen. She reached out to low-code platform maker OutSystems and a third-party development team from Portugal that had already created about 70% of the type of app she wanted. In nine weeks, the app was up and running. “It was fantastic,” Warner said. “We didn’t have to hire a single staffer or look for more space. And our patients love it; the feedback is wonderful.”
For companies in the throes of software developer hiring and needing classic development skills, it’s still important to match the business need with the skill set. Despite the custom development pessimism, the Harvey Nash survey indicated that 36% of respondents believe this type of development will continue to grow. But the survey also suggested that custom software development won’t necessarily be used across the board — 56% said it’s going to be used to drive innovation. And for those respondents already working for “highly innovative companies,” the percentage jumped to 67%.
Likewise, keep in mind that DevOps and its probable successor BizDevOps tweak developer requirements substantially. Even with software developer hiring today in DevOps shops, Forrester’s Hammond is seeing a split between front-end and back-end developers. In a BizDevOps team, the lines could be drawn even finer. His bottom line: Get ready for a time of intense specialization when it comes to the development team.
But also be ready to know software developer hiring will remain competitive, perhaps indefinitely. “Businesses embracing technology and those who choose to be digital-product focused need strong engineers to help them execute ideas,” said David Savage, associate director of Harvey Nash. “[T]he need to compete also sees increased importance on being different from peers.”
Alan Turing asked the question “can machines think?” in 1950 and it still intrigues us today. At The Alan Turing Institute, the United Kingdom’s national institute for data science in London, more than 150 researchers are pursuing this question by bringing their thinking to fundamental and real-world problems to push the boundaries of data science.
One year ago, The Turing first opened its doors to 37 PhD students, 117 Turing Fellows and visiting researchers, 6 research software engineers and more than 5,000 researchers for its workshops and events. I have been privileged to be one of these visiting fellows, helping the researchers take a cloud-first approach through our contribution of $5 million of Microsoft Azure cloud computing credits to The Turing. To be part of this world-leading center of data science research is exhilarating. Cloud computing is unlocking an impressive level of ambition at The Turing, allowing researchers to think bigger and unleash their creativity.
“We have had an exceptional first year of research at The Turing. Working with Microsoft, our growing community of researchers have been tooled up with skills and access to Azure for cloud computing and as a result they’ve been able to undertake complex data science tasks at speed and with maximum efficiency, as illustrated by some of the stories of Turing research showcased today. We look forward to growing our engagement with the Azure platform to help us to undertake even bigger and more ambitious research over the coming academic year.”
~ Andrew Blake, Research Director, The Alan Turing Institute
Human society is one of the most complex systems on the planet and measuring aspects of it has been extremely difficult until now. Merve Alanyali and Chanuki Seresinhe are graduate students from the University of Warwick who are spending a year at The Turing applying novel computational social science techniques to understand human happiness and frustration. They are using AI and deep neural networks to analyze millions of online photos with Microsoft Azure and their findings are providing deeper insights into the human condition.
Kenneth Heafield, Turing Fellow from the University of Edinburgh, has been using thousands of Azure GPUs (graphics processing units) to explore and optimize neural machine translation systems for multiple languages in the Conference on Machine Translation. Azure GPUs enabled the group to participate in more languages, producing substantially better results than last year and winning first place in some language pairs. The team is working closely with Intel on using new architectures, including FPGAs (field-programmable gate arrays) like Microsoft’s Project Catapult, to make even bigger gains in machine translation.
Microsoft is delighted to see The Alan Turing Institute setting up a deep research program around ethics, a crucial topic in data science, AI and machine learning. Our own human-centered design principles are that AI technology should be transparent, secure, inclusive and respectful, and also maintain the highest degree of privacy protection. We are pleased that Luciano Floridi is leading the Data Ethics research group at The Turing as his perspectives on areas such as healthcare are helping us to think about how we can ensure that technology is used in the most constructive ways.
The first year at The Turing has been impressive. We look forward to another exciting year as we work together on projects in data-centric engineering, blockchain, healthcare and secure cloud computing. Along with Microsoft’s data science collaborations at University of California, Berkeley, and through the National Science Foundation Big Data Innovation Hubs, we are perhaps getting closer to answering Alan Turing’s profound question from 67 years ago.
Oracle continues to do everything it can to compete with Amazon Web Services, but the question remains whether IT pros will take the bait.
This week, the company introduced Oracle Universal Credits for cloud consumption to allow customers under one contract to spend on a pay-as-you-go, monthly or yearly basis. Oracle claims its service-level agreement (SLA) will guarantee Oracle databases can run on Oracle Cloud for 50% less than on Amazon Web Services (AWS). Oracle Universal Credits can be used for infrastructure as a service and platform as a service (PaaS) across Oracle Cloud services, such as Oracle Cloud and Oracle Cloud at Customer. Customers are allowed to switch across services at any time.
Oracle also introduced a “Bring Your Own License” program for customers to use their own existing licenses for PaaS.
Oracle Universal Credits is something that piques the interest of at least one current Oracle Cloud customer.
“Universal Credits are a great program for budgeting and controlling spend, while still providing great flexibility,” said Nikunj Mehta, founder and CEO of Falkonry Inc., a Sunnyvale, Calif., startup that provides artificial intelligence for operational intelligence. “Oracle’s move toward pricing innovation parallels T-Mobile’s business model disruption and has the real potential of shaking up the industry.”
But it won’t be easy for Oracle to sustain, analysts said.
“The SLA guaranteeing that Oracle databases are cheaper than AWS by 50% is a big commitment,” said Jean Atelsek, analyst with 451 Research. “As AWS continues to cut its prices, it’ll effectively squeeze Oracle to cut its pricing.”
Few would argue that Oracle licensing methods could use some clarification. In February of this year, Oracle effectively doubled its licensing requirements for customers that run its software on other cloud platforms, such as AWS and Azure.
While Oracle continues to make strategic moves aimed at AWS, the licensing model could also serve as a bridge between the company’s on-premises and cloud businesses, smoothing the transition from legacy products into the cloud.
“It’s about time Oracle figured out those of us with on-premises licenses were not going to just abandon our perpetual license and jump to the cloud,” said Brian Peasland, an Oracle database administrator and TechTarget contributor, about Oracle Universal Credits. “Oracle licensing is not cheap, and it is a long-term investment. It’s nice to know we can leverage that investment to help move to the cloud.”
Oracle autonomous database details emerge
The company has also provided more details on its autonomous database, which also aims to lower overall cloud costs. The automated database will be based on machine learning, and Oracle said it guarantees 99.995% uptime, which amounts to less than 30 minutes of planned downtime per year.
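The downtime figure follows directly from the availability percentage; a quick back-of-the-envelope check:

```python
# Sanity-check the claim: 99.995% availability over a calendar year
# leaves roughly 26 minutes of downtime, which is indeed under the
# "less than 30 minutes" Oracle cites.
uptime = 0.99995
minutes_per_year = 365 * 24 * 60            # 525,600 minutes
downtime_minutes = (1 - uptime) * minutes_per_year
print(f"{downtime_minutes:.1f} minutes of downtime per year")  # ~26.3
```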
Automated operations for databases are an important part of Oracle’s effort to become a full-fledged cloud provider, both for customers looking to move work to the cloud and for providers such as Oracle that must efficiently take over day-to-day administration work from customers.
Success on the cloud is crucial to Oracle. It is, in the estimation of IDC analyst Carl Olofson, an issue of “survival.”
“The big picture for Oracle is to lead the customer to the cloud,” Olofson said, while cautioning that the move to cloud for most organizations is “still in its early days.”
The biggest challenge for Oracle going forward with its new licensing scheme is to convince customers it truly is simplifying how they pay.
“[Oracle] is notorious for complex licensing structures. … A lot of customers [and ex-customers] are very wary when it comes to Oracle licensing,” said Bob Sheldon, an analyst and TechTarget contributor. “Oracle could have an uphill battle convincing them that the company can be trusted.”
One notable omission from Oracle’s announcements this week was hybrid, said Sheldon, who noted that on paper the new Oracle licensing structure could be a “boon to hybrid implementations.”
At next month’s Oracle OpenWorld event, the company is expected to provide details on Oracle 18, a version number named for its planned 2018 release rather than a continuation of the old numbering scheme. As recently reported, Oracle will change the cadence of its release cycles and a version-numbering scheme that had become a bit creaky.
In July, Oracle added its SaaS offering to its Oracle Cloud at Customer portfolio. Built on Oracle Database Exadata Cloud, Oracle Integration Cloud, Identity Services and more, this package is seen as a way to prepare Oracle customers for the flight to cloud.
SearchOracle Senior News Writer Jack Vaughn also contributed to this report.
Who is Mia Ash? That was the question security analysts at Dell SecureWorks found themselves pondering earlier this year while investigating a flurry of phishing attacks against targets in the Middle East. Analysts believed a sophisticated advanced persistent threat (APT) group was behind the attack, for two reasons. First, the emails contained PupyRAT, a cross-platform remote access Trojan that was first discovered in 2015 and had been used by an Iranian threat actor group Dell refers to as “Cobalt Gypsy” (also known as Threat Group 2889 or “OilRig”). And second, the email addresses used in the attacks weren’t spoofed.
“Many of the phishing emails were coming from legitimate addresses at other companies, which led us to believe those companies had been compromised,” Allison Wikoff, intelligence analyst at Dell SecureWorks, told SearchSecurity.
The email addresses used by the attackers belonged to Saudi Arabian IT supplier National Technology Group and Egyptian IT services firm ITWorx. But as sophisticated as the phishing attacks were, the targeted companies — which included energy, telecommunications, and financial services firms, as well as government agencies in the EMEA region — were largely successful in repelling the attacks and preventing the spread of PupyRAT in their environments.
But after the unsuccessful phishing attacks, Dell SecureWorks’ Counter Threat Unit (CTU) observed something else that alarmed them. Instead of another wave of phishing emails, CTU tracked a complex social media attack that indicated a resourceful, patient and knowledgeable nation-state threat actor.
Who is Mia Ash?
On Jan. 13, after the phishing attacks had ended, an employee at one of the companies targeted by Cobalt Gypsy received a message via LinkedIn from Mia Ash, a London-based photographer in her mid-20s, who said she was reaching out to various people as part of a global exercise. The employee, who SecureWorks researchers refer to anonymously as “Victim B,” connected to the photographer’s LinkedIn profile. To Victim B or the casual observer, Ash’s profile seemed legitimate; it contained a detailed work history and had more than 500 connections to professionals in the photography field, as well as individuals in the same regions and industries as Victim B.
After about a week of exchanged messages about photography and travel, Ash requested that Victim B add her as a friend on Facebook so the two could continue their conversation on that platform. According to SecureWorks’ new report, Victim B instead moved the correspondence to WhatsApp, a messaging service owned by Facebook, as well as email. Then on Feb. 12, Ash sent an email to Victim B’s personal email account with a Microsoft Excel file that was purportedly a photography survey. Ash requested that Victim B open the file at work in his corporate environment so that the file could run properly.
Victim B honored the request and opened the Excel file on his company workstation; the file contained macros that downloaded the same PupyRAT that Cobalt Gypsy used in the barrage of phishing attacks several weeks earlier. “It was the same organization that was hit before, within a month, and that was a big red flag,” Wikoff said.
Luckily, Victim B’s company antimalware defenses blocked the PupyRAT download. But the incident alarmed the company; Dell SecureWorks was asked to investigate the matter, and the CTU team soon discovered that “Mia Ash” wasn’t a professional photographer — in fact, she likely didn’t exist at all — and that another person was targeted long before Victim B.
Behind the online persona
When CTU researchers started digging into the Mia Ash online persona, they discovered more red flags. While Ash’s LinkedIn profile was populated with connections to legitimate professionals, half of the connections bore striking similarities: all were male, between their early 20s and 40s, working in midlevel positions as software developers, engineers and IT administrators. In addition, these connections worked at various oil and gas, financial services and aerospace companies in countries such as Saudi Arabia, India and Israel — all of which had been targeted by the Iranian APT group Cobalt Gypsy.
“We saw a good cross section of LinkedIn connections — half of them were what looked like legitimate photographers and photography professionals, and the other half appeared to be potential targets,” Wikoff said.
This wasn’t the first time threat actors used fake social media accounts for malicious purposes, but this was one of the most complex efforts the researchers had ever seen. The CTU team discovered Mia Ash had been active long before January and that Victim B wasn’t actually the first target to fall prey to this complex social media attack. The CTU team discovered a Blogger website called “Mia’s Photography” that had been created in April 2016. They also found that two other domains apparently belonging to Ash were registered in June and September of last year using a combination of Ash’s information and that of a third party, whom CTU refers to as “Victim A.”
It’s unclear why the domains were registered — they don’t contain malware or any malicious operations — or why Victim A participated. Wikoff said there are a number of possibilities; it’s likely that either Victim A registered both domains as a friendly or romantic gesture to Ash, believing she was real, or that Victim A registered the first domain as a gift for Ash and then the attackers behind the persona registered the second on behalf of Victim A to reciprocate the gesture.
Whatever the case, it appears Victim A was used as a sort of "patient zero" from whom the attackers could establish other social media connections. Wikoff said SecureWorks made attempts to contact Victim A, who, like other Mia Ash targets, had worked at energy and aerospace companies in the Middle East/Asia region, but has so far not heard back from him. The ironic part is that Victim A is currently an information security manager for a large consulting company, and even he was apparently fooled by the online persona.
There was more to Mia Ash than just the LinkedIn profile and Blogger site; the persona’s Facebook account was populated with personal details (her relationship status, for example, was listed as “It’s complicated”), posts about photography and images of herself, as well as her own professional photos. However, the images were stolen from the social media accounts of a Romanian photographer (Dell SecureWorks did not disclose the woman’s identity in order to protect her privacy).
“At first pass, it looks like a legitimate Facebook profile,” Wikoff said. “The attackers spent a lot of time and effort building this persona, and they knew how to avoid detection.”
For example, Wikoff said, the threat actors rotated or flipped many of the images stolen from the Romanian woman so the pictures would not show up in a reverse image search. The attackers also kept the social media accounts active with fresh postings and content to make them appear authentic and to lure potential targets like Victim A to interact with them; in fact, Victim A interacted with Mia Ash’s Facebook page as recently as March.
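The flipping trick works because a simple, exact-match image hash is not mirror-invariant: a horizontally flipped photo produces a different fingerprint, so a naive hash lookup finds no match. A minimal sketch (pure Python, using a toy 8x8 grayscale grid and a basic average hash; real reverse-image-search services use far more robust perceptual hashing) illustrates the effect:

```python
# Sketch: why a mirrored image can slip past naive hash-based image lookup.
# The "image" here is a toy 8x8 grayscale grid, not a real photo.

def average_hash(pixels):
    """Compute a 64-bit average hash from an 8x8 grayscale grid:
    each pixel contributes a 1 bit if it is brighter than the mean."""
    flat = [p for row in pixels for p in row]
    avg = sum(flat) / len(flat)
    bits = 0
    for p in flat:
        bits = (bits << 1) | (1 if p > avg else 0)
    return bits

def mirror(pixels):
    """Flip the image horizontally."""
    return [list(reversed(row)) for row in pixels]

# A toy asymmetric image: a bright band on the left edge.
image = [[255 if col < 3 else 10 for col in range(8)] for row in range(8)]

original = average_hash(image)
flipped = average_hash(mirror(image))

print(original == flipped)  # False: the two hashes differ, so an
                            # exact-hash lookup of the flipped copy misses
```

More sophisticated matching (flip-normalized hashes, feature-based search) can defeat this, which is why the technique buys evasion only against the simplest lookups.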
Online personas as social media attacks
The CTU team determined with a high confidence level that Mia Ash was a fake online persona created by threat actors to befriend employees at targeted organizations and lure those individuals into executing malware in their corporate environments. The CTU team also believes with “moderate confidence” (according to the scale used by the U.S. Office of the Director of National Intelligence) that Mia Ash was created and managed by the Cobalt Gypsy APT group.
The Mia Ash LinkedIn account disappeared before the CTU team could contact LinkedIn; the team alerted Facebook, which removed the Mia Ash profile. The CTU team wasn’t able to determine what Cobalt Gypsy’s ultimate goal was with this social media attack; they only know the threat actors were attempting to harvest midlevel network credentials with the PupyRAT malware.
While the motive for the Mia Ash campaign is still a mystery, Wikoff said it was clear the APT group had done its homework, both on the organizations it was targeting and on what was required to build and maintain a convincing online persona. In addition, the threat actors specifically targeted employees they knew had the desired network credentials and would likely respond to and engage the Mia Ash persona.
This isn’t the first time Cobalt Gypsy has used social media attacks; in 2015, SecureWorks reported the APT group used 25 fake LinkedIn accounts in a social engineering scheme. In that case, the attackers created profiles of employment recruiters for major companies like Teledyne and Northrop Grumman and used them as malicious honeypots or "honey traps." Once victims made contact with the fake profiles, attackers would lure them into filling out fraudulent employment applications.
The Mia Ash campaign demonstrates the evolution of such social media attacks. Instead of just composing a single LinkedIn profile, the attackers expanded their online footprint with other social media accounts. And the larger the online presence, Wikoff said, the more convincing the persona becomes.
“Cobalt Gypsy’s continued social media use reinforces the importance of recurring social engineering training,” the SecureWorks report states. “Organizations must provide employees with clear social media guidance and instructions for reporting potential phishing messages received through corporate email, personal email, and social media platforms.”
But Wikoff said awareness training isn’t enough to stop advanced social engineering attacks like the Mia Ash campaign. “You can train people with security awareness, but someone is always going to click,” she said. “And the attackers know this.”
In the case of Victim B, the campaign would have been successful if not for antimalware defenses that prevented PupyRAT (which, it should be noted, was a known malware signature) from downloading. But other organizations might not be as lucky, especially if these attacks use new malware types with no known signatures.
In addition, social media services offer an enormous opportunity for threat actors. Wikoff said attackers can easily set up accounts for LinkedIn, Facebook, Twitter and other services, free of charge, and use them for malicious purposes without running afoul of the sites' terms of service. While the Mia Ash profiles for LinkedIn and Facebook were removed after the fact, Wikoff said it's difficult for social media services to spot APT activity like the Mia Ash campaign before a user is victimized.
SecureWorks believes that Cobalt Gypsy has more online personas actively engaged in malicious activity, but finding them before they compromise their potential targets will be a challenge.
“It shows how much bigger the threat landscape has gotten,” Wikoff said. “It’s a case study on persistent threat actors and the effort they will go to in order to achieve their goals.”
It’s a question posed by none other than Workday CEO Aneel Bhusri in a blog entry he posted on July 11, 2017. The answer appears to be October 2017. In his post, Bhusri said Workday plans to enter the platform-as-a-service (PaaS) market, opening its Workday Cloud Platform to customers, partners and independent software vendors.
Bhusri’s question has been asked before. “Workday has been asked about this for a number of years,” said Chris Pang, a Gartner research director focused on human capital management (HCM) and ERP technology. “From a technical perspective, Workday is catching up to both Oracle and SAP. This Workday PaaS has been a long time coming.”
A Workday PaaS should come as good news to businesses considering a revamp of their legacy HCM environments. As enterprises work to reduce their reliance on legacy applications developed in-house, the value of specialized, subscription cloud-based software-as-a-service (SaaS) offerings rises.
If a SaaS speed bump exists, it is the need to customize those offerings to fit each organization’s unique methods, whether it is Salesforce for sales force automation, Concur for travel and expense reporting, Slack for collaboration or Workday for HCM. This ability to customize — and integrate — these services through a PaaS to meet individual companies’ needs is essential.
In his blog post, Bhusri addressed that very point, writing, “Workday intends to enable customers and our broader ecosystem to use our platform services to build custom extensions and applications.”
According to Dan Beck, Workday’s senior vice president for technology strategy, the Workday PaaS will provide developers with the ability to work outside the confines of the Workday ecosystem, making it easier to build multi-SaaS integrations. That capability could lead to developers recommending Workday, despite its high cost, to IT decision-makers who might have been reluctant to select an HCM platform perceived as not fully open.
Why wait until now to introduce a Workday PaaS? “We’ve been busy building out our applications,” Beck said. “We’re at a point now where we can be open and support the tools that you want to use — Java, Node.js or clean RESTful APIs.” Pointing out a key difference with Oracle, he said, “Oracle is pushing its databases on people, but we don’t care how you persist data. You want to use S3 [Simple Storage Service] or Redshift — both from Amazon — knock yourself out.”
A spokeswoman for the global consultancy Accenture acknowledged the plans for a Workday PaaS, but declined to provide further insight, saying only that Workday “plans to share more details about the Workday Cloud Platform at Workday Rising in October.” Workday Rising, slated for Oct. 9 to 12 in Chicago, is the company’s customer and partner conference. A European counterpart is scheduled for Barcelona, Spain, in November.
Second-generation Workday PaaS
Though Bhusri and Beck said the time is right, it turns out this initiative is not Workday’s first PaaS go-around. In March 2011, the company launched the Workday Integration Cloud Platform, making it an early player in providing customers with a limited development platform.
From that PaaS, based on technology that would now be considered passé, Beck said thousands of integrations were built. “We took the tools that our own developer team was using to build integrations and made it available to our customers,” he said. “In 2011, that was more about system-to-system integration. Now, it’s about building apps where you can control the user interface and have access to our libraries.”
Another key generational difference is the older platform relied on SOAP technology for data exchange, whereas the new Workday PaaS is built atop thoroughly modern REST technology, Beck said.
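The generational shift Beck describes shows up in the wire formats themselves. A hypothetical sketch (the endpoint, element names and fields below are invented for illustration; they are not Workday's actual API) contrasts the two styles:

```python
import json
import xml.etree.ElementTree as ET

# Hypothetical worker-lookup request, expressed both ways.

# SOAP style (2011-era platform): even a simple read requires a full
# XML envelope with namespaced wrapper elements around the operation.
envelope = ET.Element("soapenv:Envelope", {
    "xmlns:soapenv": "http://schemas.xmlsoap.org/soap/envelope/",
})
body = ET.SubElement(envelope, "soapenv:Body")
request = ET.SubElement(body, "GetWorkerRequest")
ET.SubElement(request, "WorkerID").text = "12345"
soap_payload = ET.tostring(envelope, encoding="unicode")

# REST style (new platform): the resource is addressed by its URL,
# the HTTP verb carries the intent, and any body is plain JSON.
rest_url = "https://api.example.com/workers/12345"
rest_body = json.dumps({"fields": ["name", "title"]})

print(soap_payload)
print("GET", rest_url, rest_body)
```

The practical upshot is lower friction for developers: a REST call can be composed and tested with any HTTP client, while SOAP generally requires envelope construction and WSDL-driven tooling.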
To provide developers with the ability to exchange techniques and ideas, Beck said the company is gearing up a community called cloud.workday.com, expected to launch before the end of 2017. Staffed around the clock, the site is slated to feature tutorials and offer a feedback loop. “We’ll spin up dedicated environments for developers,” Beck said. “The act of getting a Workday environment, historically, was reasonably challenging. It was too much friction for developers.”
Pang said with each passing year, Workday has made its platform more configurable, allowing the creation of custom fields, but it has taken a cautious position when it came to throwing the doors wide open to custom development. “As competitors have become more sophisticated in their usage of PaaS, Workday has had to follow suit,” he said.
The company will provide a series of API tools for developers, but Beck said it was too early to provide details. To help educate developers, Workday has begun offering eight-week, intensive developer training courses tailored to the Workday platform.
HCM segment is growing
The market for HCM software — what used to be known as human resources software — is soaring. According to a forecast from MarketsAndMarkets, the global market for HCM software is expected to grow from $14.5 billion in 2017 to $22.5 billion in 2022, a compound annual growth rate of 9.2%. Workday itself scored a big win in January 2017 when it landed Walmart as a client for the Workday subscription-based SaaS HCM platform, according to CNBC, which also noted that the bakery-café chain Panera Bread signed on recently. In February 2017, Workday announced it had signed Amazon as a customer.
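The quoted growth rate can be checked directly from the forecast figures. Compound annual growth rate is the constant yearly rate that carries the starting value to the ending value over the period:

```python
# Verify the MarketsAndMarkets forecast math:
# $14.5B in 2017 growing to $22.5B in 2022 (a 5-year span).
# CAGR = (end / start) ** (1 / years) - 1

start, end, years = 14.5, 22.5, 2022 - 2017
cagr = (end / start) ** (1 / years) - 1

print(f"{cagr:.1%}")  # → 9.2%
```

The result rounds to 9.2%, matching the figure cited in the forecast.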
By 2020, Gartner predicts 30% of midmarket and large enterprises will have invested in cloud-based HCM, according to the most recent Gartner Magic Quadrant report, published in June 2016.
Joel Shore is a news writer for TechTarget's Business Applications and Architecture Media Group. Write to him at firstname.lastname@example.org or follow @JshoreTT on Twitter.