
CMS takes Blue Button 2.0 API offline due to coding error

A bug in the Blue Button 2.0 API codebase has potentially exposed the protected health information of 10,000 beneficiaries and caused the Centers for Medicare & Medicaid Services to pull the service offline.

Blue Button 2.0 is a standards-based API that gives Medicare beneficiaries the ability to connect their claims data to apps and services they trust.

In a blog post, CMS said a third-party application partner reported a data anomaly with the Blue Button 2.0 API on Dec. 4. CMS verified the anomaly and immediately suspended API access. The bug could cause beneficiary PHI to be shared with another beneficiary, or the wrong Blue Button 2.0 application, according to the post.

CMS said access to the API will remain closed while the agency conducts a full review, and restoration of the service is pending. The agency has not detected intrusion by unauthorized users or an outside source.

The incident is playing out against a backdrop of federal regulators like CMS pushing for healthcare organizations to use APIs that would give patients greater access to their health data. Yet a concern among healthcare CIOs is that the drive toward interoperability is ahead of app developers’ technical ability to safely facilitate that sharing of health data, said Clyde Hewitt, executive advisor for healthcare cybersecurity firm CynergisTek Inc., in Austin, Texas.


“There is a massive push for data interoperability, and organizations that spend a lot of time looking at the security and privacy issues around this realize that the need to share data is probably outrunning the technical savvy of the developers to get solid interface specification,” Hewitt said.

The issue

Medicare beneficiaries authorize third-party apps to use their Medicare claims data through Blue Button 2.0, and the Blue Button 2.0 system verifies users through a CMS identity management system. The identity management system uses a code to provide randomly generated, unique user IDs, which Blue Button 2.0 uses to identify each beneficiary.

The data anomaly was “truncating” user IDs from a 128-bit user ID to a 96-bit user ID, which was too short to be sufficiently random to “uniquely identify a single user,” according to the blog post. As a result, Blue Button 2.0 began assigning the same user IDs to different beneficiaries.
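The mechanics here are the birthday bound: the fewer random bits in an ID, the sooner two users draw the same one. A minimal Python sketch (not CMS's code; the bit widths below are chosen only to make collisions visible at small scale) illustrates the effect:

```python
import secrets

def collision_count(n_users: int, bits: int) -> int:
    """Count how many of n_users end up sharing a randomly generated ID of `bits` bits."""
    ids = [secrets.randbits(bits) for _ in range(n_users)]
    return len(ids) - len(set(ids))

# With ample entropy, 10,000 users essentially never collide.
print(collision_count(10_000, 64))   # 0
# Shrink the ID space and duplicates appear quickly: 10,000 draws from only
# 65,536 possible 16-bit values produce hundreds of collisions.
print(collision_count(10_000, 16))
```

Note that a *uniformly* random 96-bit ID would still be collision-free for 10,000 users; the account in the blog post suggests the truncated value was no longer sufficiently random, which is what made duplicate IDs likely.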

The root cause of the problem is unclear. CMS said the code causing the bug was implemented on Jan. 11, 2018, and that a comprehensive review of the code, which might have caught the error, was not completed at the time.

CMS also said the identity management system code was not tested; "assumptions were made" by the Blue Button 2.0 team that the code worked, but those assumptions were never validated.

The coding error should be a warning to healthcare organizations as they march toward interoperability and the use of APIs, according to Hewitt. They should, for example, put greater emphasis on regression testing, which is used to make sure a recent code change hasn’t negatively impacted existing software. CMS failed to do just that.
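A regression test for this class of bug can be tiny. The sketch below is hypothetical (the function name and ID format are assumptions, not CMS's actual code), but checks like these run on every code change would have flagged a truncated ID immediately:

```python
import secrets

def generate_user_id() -> str:
    # Stand-in for the identity management system's ID generator (hypothetical).
    return secrets.token_hex(16)  # 128 bits rendered as 32 hex characters

def test_user_id_keeps_full_length():
    # A 96-bit truncation would shorten the ID to 24 hex characters and fail here.
    assert len(generate_user_id()) == 32

def test_user_ids_do_not_collide():
    n = 100_000
    assert len({generate_user_id() for _ in range(n)}) == n

test_user_id_keeps_full_length()
test_user_ids_do_not_collide()
```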

“You can’t make changes to your system without looking at how it’s going to impact other systems,” Hewitt said. “As this spider web continues to grow, doing an end-to-end test becomes more and more complicated.”

What CMS is doing now

The Blue Button 2.0 team has implemented a new review and validation process to make sure coding errors are caught before being implemented within Blue Button 2.0 or other CMS APIs, according to the blog post.

The team is also adding additional monitoring and alerting for Blue Button 2.0, and CMS is updating Blue Button 2.0 code to store full user IDs instead of shortened versions, meaning all users will be asked to re-authenticate with Blue Button 2.0 so the system can generate new user IDs.

Fewer than 10,000 beneficiaries and 30 apps were affected by the issue, CMS said, and it was contained to Blue Button 2.0 users and developers. The issue didn’t affect Medicare beneficiaries who do not use the API.

Before bringing the API back online, CMS said the Blue Button 2.0 team will add auditing layers at both the database and API levels to give more detail into user activity and greater traceability of the actions the API takes. Monitoring and alerting capabilities are also being enhanced to notify CMS of unexpected changes in data.


David Chou, vice president and principal analyst at Constellation Research in Cupertino, Calif., said that while the PHI exposure from this incident may not be as damaging as in other incidents, it will cause alarm in the industry if CMS discovers more security issues during its review.

“This is a learning experience and I am optimistic that CMS will get past this with a new and improved Blue Button,” he said.

Yet Chou believes the Blue Button 2.0 initiative has been a good thing overall and said CMS should be recognized for its effort to improve interoperability in healthcare.


Session cookie mishap exposed HackerOne private reports

A researcher discovered a session cookie risk that could have exposed private bugs on HackerOne, and questions remain about whether data may have been taken.

The risk for vulnerability coordination and bug bounty site HackerOne stemmed from a HackerOne security analyst accidentally including a valid session cookie in a communication with community member haxta4ok00. According to the HackerOne incident report attached to the original bug report, which was first reported by Ars Technica, the session cookie was disclosed due to human error and revoked exactly two hours and three minutes after the company learned of the issue.

“Session cookies are tied to a particular application, in this case hackerone.com. The application won’t block access when a session cookie gets reused in another location. This was a known risk. As many of HackerOne’s users work from mobile connections and through proxies, blocking access would degrade the user experience for those users,” HackerOne wrote in the incident report. “A short-term mitigation of this vulnerability is to bind the user’s session to the IP address used at initial sign-in. If an attempt is made to utilize the session from a different IP address, the session is terminated.”

HackerOne added that longer-term mitigations will include detecting session cookies and authentication tokens in user comments and blocking submission, binding sessions to devices rather than IP addresses, improving employee education, and overhauling the permission model for HackerOne security analysts.

Craig Young, computer security researcher for Tripwire’s vulnerability and exposure research team, told SearchSecurity, “The first rule of session cookies is don’t share your session cookies.”

“That being said, accidents and oversights can happen. The general idea here is to bind the session cookies with some other identifying attribute of the expected client. This is commonly done by associating session cookies with some additional fingerprint of the authorized user,” Young said. “This can be as simple as restricting session cookies based on IP address or region. More sophisticated methods might involve client-side scripting to fingerprint a specific client browser.”
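The IP-binding mitigation that Young and the incident report describe can be sketched in a few lines. This is an illustrative in-memory version (hypothetical, not HackerOne's implementation):

```python
# Session store that binds each session to the IP address seen at sign-in.
sessions: dict[str, dict] = {}

def sign_in(session_id: str, user: str, ip: str) -> None:
    sessions[session_id] = {"user": user, "ip": ip}

def validate(session_id: str, ip: str):
    """Return the user for a valid session, or None; terminate on IP mismatch."""
    sess = sessions.get(session_id)
    if sess is None:
        return None
    if sess["ip"] != ip:
        # A reused cookie arriving from a different address kills the session.
        del sessions[session_id]
        return None
    return sess["user"]
```

As the incident report notes, binding to IP degrades the experience for mobile and proxied users whose addresses change, which is why HackerOne's longer-term plan binds sessions to devices instead.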

After seeing “the amount of sensitive information that could have been accessed” as a result of the session cookie account takeover, HackerOne decided the submission was a critical vulnerability and awarded a $20,000 bug bounty.

Data access still in question

Haxta4ok00 wrote in the report that they had “HackerOneStaff Access” and could “read all reports” and edit private programs. However, they asserted multiple times that all actions were in the spirit of white hat hacking.

In the discussion about the issue in the bug report, Reed Loden, director of security at HackerOne, asked haxta4ok00 to “delete all screenshots, exports, etc.” and confirm they had “no other copies of vulnerability data” captured as part of the report submission. While haxta4ok00 claimed they only took screenshots, they admitted they didn’t understand how to prove such data was deleted. Even so, Loden thanked the member “for confirming your removal of all screenshots and other data you may have downloaded as part of your report submission.” 

Following this exchange, Jobert Abma, co-founder of HackerOne, joined the conversation to ask why haxta4ok00 had “opened all the reports and pages in order to validate you had access to the account,” noting the HackerOne team found the extent of the member’s actions unnecessary.

Again, the member claimed they meant no harm and that answer seemed to be accepted by HackerOne staff. The member went on to claim they had previously reported the session cookie risk and nothing was done.

Katie Moussouris, founder and CEO of Luta Security, pointed out on Twitter that the discussion between haxta4ok00 and HackerOne staff raised more questions.

Loden told SearchSecurity that “asking the reporting hacker to validate what we are seeing on our end is one of many steps in our investigation process.”

“HackerOne always conducts comprehensive investigations for all vulnerabilities reported to our own bug bounty program. In this case HackerOne’s bug bounty program operated exactly as intended, it gave us a way to identify an unknown risk fast so we could safely eliminate it,” he wrote via email. “Less than 5% of programs were impacted by this issue, the risk was eliminated within two hours of receipt and long-term fixes were pushed within days.”

Loden also clarified why action was not taken on the first report about session cookie issues.

“HackerOne’s bug bounty program is focused on identifying real-world vulnerabilities impacting the Platform, and we require hackers to provide a valid proof of concept with submissions,” Loden said. “The report in question from three years ago was a purely theoretical scenario focused on older browsers that were not, and are still not, supported by the HackerOne Platform.”


Assessing the value of personal data for class action lawsuits

When it comes to personal data exposed in a breach, assessing the value of that data for class action lawsuits is more of an art than a science.

As interest in protecting and controlling personal data has surged among consumers lately, there have been several research reports that discuss how much a person’s data is worth on the dark web. Threat intelligence provider Flashpoint, for example, published research last month that said access to a U.S. bank account, or “bank log,” with a $10,000 balance was worth about $25. However, the price of a package of personally identifiable information (PII) or what’s known as a “fullz” is much less, according to Flashpoint; fullz for U.S. citizens that contain data such as victims’ names, Social Security numbers and birth dates range between $4 and $10.

But that’s the value of personal data to the black market. What’s the value of personal data when it comes to class action lawsuits that seek to compensate individuals who have had their data exposed or stolen? How is the value determined? If an organization has suffered a data breach, how would it figure out how much money they might be liable for?

SearchSecurity spoke with experts in legal, infosec and privacy communities to find out more about the obstacles and approaches for assessing personal data value.

The legal perspective

John Yanchunis leads the class action department of Morgan & Morgan, a law firm based in Orlando, Fla., that has handled the plaintiff end for a number of major class action data breach lawsuits, including Equifax, Yahoo and Capital One.

The 2017 Equifax breach exposed the personal information of over 147 million people, and resulted in the credit reporting company creating a $300 million settlement fund for victims (which doesn’t even account for the hundreds of millions of dollars paid to other affected parties). Yahoo, meanwhile, was hit with numerous data breaches between 2013 and 2016. In the 2013 breach, every single customer account was affected, totaling 3 billion users. Yahoo ultimately settled a class action lawsuit from customers for $117.5 million.

When it comes to determining the value of a password, W-2 form or credit card number, Yanchunis called it “an easy question but a very complex answer.”

“Is all real estate in this country priced the same?” Yanchunis asked. “The answer’s no. It’s based on location and market conditions.”

Yanchunis said dark web markets can provide some insight into the value of personal data, but there are challenges to that approach. “In large part, law enforcement now monitors all the traffic on the dark web,” he said. “Criminals know that, so what are they doing? They’re using different methods of marketing their product. Some sell it to other criminals who are going to use it, some put it on a shelf and wait until the dust settles so to speak, while others monetize it themselves.”

As a result, several methods are used to determine the value of breached personal data for plaintiffs. “You’ll see in litigation we’ve filed, there are experts who’ve monetized it through various ways in which they can evaluate the cost of passwords and other types of data,” Yanchunis said. “But again, to say what it’s worth today or a year ago, it really depends upon a number of those conditions that need to be evaluated in the moment.”

David Berger, partner at Gibbs Law Group LLP, was also involved in the Equifax class action lawsuit and has represented plaintiffs in other data breach cases. Berger said that it was possible to assess the value of personal data, and discussed a number of damage models that have been successfully asserted in litigation to establish value.

One way is to look at the value of a piece of information to the company that was breached, he said.

“In other words, how much a company can monetize basically every kind of PII or PHI, or what they are getting in different industries and what the different revenue streams are,” Berger said. “There’s been relatively more attention paid to that in data breach lawsuits. That can be one measure of damages.”

Another approach looks at the value of an individual’s personal information to that individual. Berger explained that this can be measured in multiple different ways. In litigation, economic modeling and “fairly sophisticated economic techniques” would be employed to figure out the market value of a piece of data.

Another approach to assessing personal data value is determining the cost of what individuals need to do to protect themselves from misuse of their data, such as credit monitoring services. Berger said the "benefit-of-the-bargain" rule can also help; the legal principle dictates that a party that breaches a contract must pay the victim an amount in damages that puts them in the same financial position they would have been in had the contract been fulfilled.

For example, Berger said, say a consumer purchases health insurance and is promised reasonable data security, but if the insurance carrier was breached then “[they] got health insurance that did not include reasonable data security. We can use those same economic modeling techniques to figure out what’s the delta between what they paid for and what they actually received.”

Berger also said the California Consumer Privacy Act (CCPA), which he called “the strongest privacy law in the country,” will also help because it requires companies to be transparent about how they value user data.

“The regulation puts a piece on that and says, ‘OK, here are eight different ways that the company can measure the value of that information.’ And so we will probably soon have a bunch of situations where we can see how companies are measuring the value of data,” Berger said.

The CCPA will go into effect in the state on Jan. 1 and will apply to organizations that do business in the state and either have annual gross revenues of more than $25 million; possess personal information of 50,000 or more consumers, households or devices; or generate more than half their annual revenue from selling personal information of consumers.

Security and privacy perspectives

Some security and privacy professionals are reluctant to place a dollar value on specific types of exposed or breached personal data. While some advocates have pushed the idea of valuing consumers' personal data as commodities or goods to be purchased by enterprises, others, such as the Electronic Frontier Foundation (EFF), an international digital rights group founded 29 years ago to promote and protect internet civil liberties, are against it.

An EFF spokesperson shared the following comment, part of which was previously published in a July blog post titled, "Knowing the 'Value' of Our Data Won't Fix Our Privacy Problems."

“We have not discussed valuing data in the context of lawsuits, but our position on the concept of pay-for-privacy schemes is that our information should not be thought of as our property this way, to be bought and sold like a widget. Privacy is a fundamental human right. It has no price tag.”

Harlan Carvey, senior threat hunter at Digital Guardian, an endpoint security and threat intelligence vendor, agreed with Yanchunis that assessing the value of personal data depends on the circumstances of each incident.

“I don’t know that there’s any way to reach a consensus as to the value of someone’s personally identifiable data,” Carvey said via email. “There’s what the individual believes, what a security professional might believe (based on their experience), and what someone attempting to use it might believe.”

However, he said the value of traditionally low-value or high-value data might be different depending on the situation.

“Part of me says that on the one hand, certain classes of personal data should be treated like a misdemeanor, and others like a felony. Passwords can be changed, as can credit card numbers; SSNs cannot. Not easily,” Carvey said. “However, having been a boots-on-the-ground, crawling-through-the-trenches member of the incident response industry for a bit more than 20 years, I cringe when I hear or read about data that was thought to have been accessed during a breach. Even if the accounting is accurate, we never know what data someone already has in their possession. As such, what a breached company may believe is low-value data is, in reality, the last piece of the puzzle someone needed to completely steal my identity.”

Jeff Pollard, vice president and principal analyst at Forrester Research, said concerns about personal data privacy have expanded beyond consumers and security and privacy professionals to the very enterprises that use and monetize such data. There may be certain kinds of personal data that can be extremely valuable to an organization, but the fear of regulatory penalties and class action lawsuits are causing some enterprises to limit the data they collect in the first place.

“Companies may look at the data and say, ‘Sure, it’ll make our service better, but it’s not worth it’ and not collect it all,” Pollard said. “A lot of CISOs feel like they’ll be better off in the long run.”

Editor’s note: This is part one of a two-part series on class action data breach lawsuits. Stay tuned for part two.

Security news director, Rob Wright, contributed to this report.


LifeLock vulnerability exposed user email addresses to public

Symantec’s identity theft protection service, LifeLock, exposed millions of customers’ email addresses.

According to security journalist Brian Krebs, the LifeLock vulnerability was in the company's website, and it enabled unauthorized third parties to collect email addresses associated with LifeLock user accounts or to unsubscribe users from the company's communications. Account numbers, called subscriber keys, appear in the URL of the unsubscribe page on the LifeLock website; each key corresponds to a customer record, and the keys appear to be sequential, Krebs reported, which lends itself to writing a simple script to collect the email address of every subscriber.
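The danger of sequential keys is that anyone can walk the entire keyspace. A short sketch (illustrative only, not LifeLock's code) contrasts guessable sequential keys with opaque random tokens:

```python
import secrets

# Sequential subscriber keys: an attacker who sees one key can simply
# iterate its neighbors to enumerate every customer record.
def sequential_keys(first_seen: int, count: int) -> list:
    return [first_seen + i for i in range(count)]

# Safer alternative: an unguessable opaque token per subscriber, so unsubscribe
# URLs cannot be enumerated (the endpoint should require authentication regardless).
def opaque_key() -> str:
    return secrets.token_urlsafe(16)  # 128 bits of randomness
```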

The biggest threat with this LifeLock vulnerability is that attackers could launch a targeted phishing scheme, and the company boasted more than 4.5 million users as of January 2017.

“The upshot of this weakness is that cyber criminals could harvest the data and use it in targeted phishing campaigns that spoof LifeLock’s brand,” Krebs wrote. “Of course, phishers could spam the entire world looking for LifeLock customers without the aid of this flaw, but nevertheless the design of the company’s site suggests that whoever put it together lacked a basic understanding of web site authentication and security.”

Krebs notified Symantec of the LifeLock vulnerability, and the security company took the affected webpage offline shortly thereafter. Krebs said he was alerted to the issue by Atlanta-based independent security researcher Nathan Reese, a former LifeLock subscriber who received an email offering him a discount if he renewed his membership. Reese then wrote a proof of concept and was able to collect 70 email addresses — enough to prove the LifeLock vulnerability worked.

Reese emphasized to Krebs how easy it would be for a malicious actor to use the two things he knows about the LifeLock customers — their email addresses and the fact that they use an identity theft protection service — to create a “sharp spear” for a spear phishing campaign, particularly because LifeLock customers are already concerned about cybersecurity.

Symantec, which acquired the identity theft protection company in 2016, issued a statement after Krebs published his report on the LifeLock vulnerability:

This issue was not a vulnerability in the LifeLock member portal. The issue has been fixed and was limited to potential exposure of email addresses on a marketing page, managed by a third party, intended to allow recipients to unsubscribe from marketing emails. Based on our investigation, aside from the 70 email address accesses reported by the researcher, we have no indication at this time of any further suspicious activity on the marketing opt-out page.

LifeLock has faced problems in the past with customer data. In 2015, the company paid out $100 million to the Federal Trade Commission to settle charges that it allegedly failed to secure customers' personal data and ran deceptive advertising.

In other news:

  • The American Civil Liberties Union (ACLU) of Northern California said Amazon’s facial recognition program, Rekognition, falsely identified 28 members of Congress as people who were arrested for a crime in its recent test. The ACLU put together a database of 25,000 publicly available mugshots and ran the database against every current member of the House and Senate using the default Rekognition settings. The false matches represented a disproportionate amount of people of color — 40% of the false matches, while only 20% of Congress members are people of color — and spanned both Democrats and Republicans and men and women of all ages. One of the falsely identified individuals was Rep. John Lewis (D-Ga.), who is a member of the Congressional Black Caucus; Lewis previously wrote a letter to Amazon’s CEO, Jeff Bezos, expressing concern for the potential implications of the inaccuracy of Rekognition and how it could affect law enforcement and, particularly, people of color.
  • Researchers have discovered another Spectre vulnerability variant that enables attackers to access sensitive data. The new exploit, called SpectreRSB, was detailed by researchers at the University of California, Riverside, in a paper titled, “Spectre Returns! Speculation Attacks using the Return Stack Buffer.” “Rather than exploiting the branch predictor unit, SpectreRSB exploits the return stack buffer (RSB), a common predictor structure in modern CPUs used to predict return addresses,” the research team wrote. The RSB aspect of the exploit is what’s new, compared with Spectre and its other variants. It’s also why it is, so far, unfixed by any of the mitigations put in place by Intel, Google and others. The researchers tested SpectreRSB on Intel Haswell and Skylake processors and the SGX2 secure enclave in Core i7 Skylake chips.
  • Google Chrome implemented its new policy this week that any website not using HTTPS with a valid TLS certificate will be marked as “not secure.” In the latest version of the browser, Google Chrome version 68, users will see a warning message stating that the site is not secure. Google first announced the policy in February. “Chrome’s new interface will help users understand that all HTTP sites are not secure, and continue to move the web towards a secure HTTPS web by default,” Emily Schechter, Chrome Security product manager, wrote in the announcement. “HTTPS is easier and cheaper than ever before, and it unlocks both performance improvements and powerful new features that are too sensitive for HTTP.”

Exactis leak exposes database with 340 million records

A marketing firm exposed records on most adults in the U.S., but experts weren’t surprised at the number of people affected and said the lesson should be about the depth of data gathered.

Marketing firm Exactis, a data company based in Palm Coast, Fla., exposed 340 million records — 230 million for individuals and 110 million for business customers — via a publicly accessible server, meaning anyone who knew where to look could have taken the data. Vinny Troia, security researcher and founder of NightLion Security, headquartered in St. Louis, Mo., discovered the potential Exactis leak and wrote on Twitter that he is working with the company to determine if anyone accessed the data. Exactis has since secured the server.

The data potentially exposed in the Exactis leak added up to 2 terabytes of information, including phone numbers, home and email addresses, but Bruce Silcoff, CEO of Shyft Network International, a cybersecurity company based in Barbados, said the Exactis leak is noteworthy “not only for the number of customers impacted, but also for the depth of compromised data.”

“It’s been reported that every record includes more than 400 variables of personal characteristics,” Silcoff wrote via email. “The reality is that we live in a digitized world and all our interactions on social channels are recorded, and this isn’t stopping anytime soon. The centralized storage of user information makes institutions like Exactis hacker bait. Never has there been such urgency nor opportunity to introduce a disruptive alternative to an antiquated system and solve an urgent global problem.”

Wired’s original report on the Exactis leak noted that the personal characteristics data could include information such as personal interests and habits, whether the person smokes or has pets, and the number, age and gender of the person’s children.


Troia told Wired that he found the Exactis leak with a simple Shodan search for ElasticSearch databases on publicly accessible servers in the U.S. While there is a huge trove of personal information, the dataset does not include Social Security numbers or credit cards, so experts said it would be more useful for social engineering.

Nico Fischbach, global CTO at Forcepoint, said the highly sensitive data in the Exactis leak “could be exploited by malicious actors to carry out a number of different types of attacks.”

“If an attacker combined this intel with data from the 2015 OPM breach, they could run human intelligence-type special operations attacks against cleared personnel. It’s also a huge asset to criminals using impersonation as a tool for phishing. Further, as 110 million of the records pertain to businesses, criminals could utilize the data for spear-phishing campaigns aimed at data exfiltration,” Fischbach wrote via email. “In the case of Cambridge Analytica, attackers had to ‘steal’ this type of profile data from Facebook, but, with Exactis, the data was publicly accessible on a server with weak or no authentication. This further underscores the need for enterprises to focus on knowing how their people interact with their data, have insight to risky activity and to think ahead on how vulnerabilities like this could be mitigated against, or prevented entirely.”

Ruchika Mishra, director of products and solutions at Balbix, a cybersecurity company headquartered in San Jose, Calif., said this was likely a problem of Exactis not understanding the mindset of an attacker.

“There’s no doubt in my mind that Exactis knew exactly what type of information they had and the ramifications there would be if there was a breach,” Mishra wrote via email. “But the problem with most enterprises today is that they don’t have the foresight and visibility into the hundreds of attack vectors — be it misconfigurations, employees at risk of being phished, admin using credentials across personal and business accounts — that could be exploited.”

Robert Capps, vice president and authentication strategist for NuData Security, a behavioral biometrics company based in Vancouver, British Columbia, said “if U.S. citizens did not think their personal information has ever been compromised, this should convince them it definitely is.”

“Unfortunately, breaches are here to stay, but government agencies, businesses, and organizations across the U.S. can protect users by applying a new authentication framework,” Capps wrote via email. “Multi-layered security solutions based on passive biometrics and behavioral analytics make this stolen information useless to cybercriminals, as they identify users based on their behavior instead of data such as names, last names, dates of birth, passwords, addresses, and more.”

A DHS data breach exposed PII of over 250,000 people

A data breach at the U.S. Department of Homeland Security exposed the personally identifiable information of over 250,000 federal government employees, as well as an unspecified number of people connected with DHS investigations.

DHS released a statement Jan. 3, 2018, that confirmed the exposure of “approximately 246,167” federal government employees who worked directly for DHS in 2014. It also disclosed the breach of a database for the Office of Inspector General that contained the personally identifiable information (PII) of any person — not necessarily employed by the federal government — who was associated with OIG investigations from 2002 to 2014. This includes subjects, witnesses and complainants.

In its statement, the department emphasized the DHS data breach was not caused by a cyberattack and referred to it as a “privacy incident.”

“The privacy incident did not stem from a cyber-attack by external actors, and the evidence indicates that affected individual’s personal information was not the primary target of the unauthorized unauthorized [sic] transfer of data,” DHS said.

The DHS data breach was initially found in May 2017 during a separate, ongoing DHS OIG criminal investigation in which it was discovered that a former DHS employee had an unauthorized copy of the department’s case management system.

However, individuals affected by the DHS data breach weren’t notified until Jan. 3, 2018. In its statement, DHS addressed why the notification process took so long.

“The investigation was complex given its close connection to an ongoing criminal investigation,” the department said. “From May through November 2017, DHS conducted a thorough privacy investigation, extensive forensic analysis of the compromised data, an in-depth assessment of the risk to affected individuals, and comprehensive technical evaluations of the data elements exposed. These steps required close collaboration with law enforcement investigating bodies to ensure the investigation was not compromised.”

The DHS employee data breach exposed PII that included names, Social Security numbers, dates of birth, positions, grades and duty stations of DHS employees; the DHS investigative data breach exposed names, Social Security numbers, dates of birth, alien registration numbers, email addresses, phone numbers, addresses and other personal information that was provided to the OIG during investigative interviews with its agents.

DHS is offering free identity protection and credit-monitoring services for 18 months to affected individuals. The department said it has also taken steps to improve its network security going forward, including “placing additional limitations on which individuals have back end IT access to the case management system; implementing additional network controls to better identify unusual access patterns by authorized users; and performing a 360-degree review of DHS OIG’s development practices related to the case management system.”

While the affected government employees were notified directly about the breach, DHS stated, “Due to technological limitations, DHS is unable to provide direct notice to the individuals affected by the Investigative Data.”

DHS urged anyone associated with a DHS OIG investigation between 2002 and 2014 to contact AllClear ID, the Austin, Texas, breach response service retained by DHS to provide credit-monitoring and identity protection services to affected individuals.

In other news:

  • A group of senators has introduced a bill to secure U.S. elections. The Secure Elections Act is a bipartisan bill that aims to provide federal standards for election security. One measure proposed in the bill is to eliminate the use of paperless voting machines, which are regarded by election security experts as the least secure type of voting machine in use today. Paperless voting machines don’t allow for audits, which the proposed legislation also wants to make a standard practice in all elections. The idea is that audits after every election will deter foreign meddling in American democracy like Russia’s interference in the 2016 U.S. presidential election. “An attack on our election systems by a foreign power is a hostile act and should be met with appropriate retaliatory actions, including immediate and severe sanctions,” the bill states. The bill was sponsored by Sen. James Lankford (R-Okla.) and co-sponsored by Sens. Amy Klobuchar (D-Minn.), Lindsey Graham (R-S.C.), Kamala Harris (D-Calif.), Susan Collins (R-Maine) and Martin Heinrich (D-N.M.).
  • Attackers exploited a vulnerability in Google Apps Script to automatically download malware onto a victim’s system through Google Drive. Discovered by researchers at Proofpoint, the vulnerability in the app development platform enabled social-engineering attacks that tricked victims into clicking on malicious links that triggered the malware download onto their computers. The researchers also found the exploit could happen without any user interaction. Google has taken steps to fix the flaw by blocking installable and simple triggers, but the researchers at Proofpoint said there are bigger issues at work. The proof of concept for this exploit “demonstrates the ability of threat actors to use extensible SaaS platforms to deliver malware to unsuspecting victims in even more powerful ways than they have with Microsoft Office macros over the last several years,” the research team said in a blog post. “Moreover, the limited number of defensive tools available to organizations and individuals against this type of threat make it likely that threat actors will attempt to abuse and exploit these platforms more often as we become more adept at protecting against macro-based threats.” The researchers went on to note that, in order to combat this threat, “organizations will need to apply a combination of SaaS application security, end user education, endpoint security, and email gateway security to stay ahead of the curve of this emerging threat.”
  • The United States federal government is nearing its deadline to implement the Domain-based Message Authentication, Reporting and Conformance (DMARC) tool. In October 2017, DHS announced it would mandate the use of DMARC and HTTPS in all departments and agencies that use .gov domains. DHS gave those departments and agencies a 90-day deadline to implement DMARC and HTTPS, which means the Jan. 15, 2018, deadline is fast approaching. According to security company Agari, as of mid-December, 47% of federal government domains were using DMARC, compared to 34% the month before. The mandate also requires federal agencies to adopt the strongest “reject” setting in DMARC within a year. This means emails that fail authentication tests will be less likely to make it to government inboxes — i.e., be rejected. Agari reported a 24% increase in the use of higher “reject” settings in the last month. On the flip side, Agari noted that most agency domains (84%) are still unprotected, with no DMARC policy.
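A DMARC policy is published as a DNS TXT record at _dmarc.<domain>, and the p= tag carries the enforcement level the mandate refers to (none, quarantine or reject). As a rough illustration (the record and reporting mailbox below are invented for the example), this sketch shows how the policy tag can be pulled out of such a record:

```python
def parse_dmarc(record: str) -> dict:
    """Split a DMARC TXT record into its tag/value pairs."""
    tags = {}
    for field in record.split(";"):
        field = field.strip()
        if "=" in field:
            key, value = field.split("=", 1)
            tags[key.strip()] = value.strip()
    return tags

# Hypothetical record for illustration only.
record = "v=DMARC1; p=reject; rua=mailto:dmarc-reports@example.gov; pct=100"
policy = parse_dmarc(record)["p"]  # the enforcement level: "reject"
```

With p=reject, receiving mail servers are asked to refuse outright any message that fails the SPF/DKIM alignment checks, which is the “strongest setting” whose adoption Agari measured.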

Leaked NSA Ragtime files hint at spying on U.S. citizens

Newly exposed information showed that the National Security Agency’s Ragtime intelligence gathering program was bigger than previously thought and may have included operations targeting Americans.

Part of the cache of NSA data left exposed on an unsecured cloud server included files regarding the NSA Ragtime intelligence gathering operations. Before this leak, there were four known variants of Ragtime; the most well-known, Ragtime P, was revealed by Edward Snowden and authorized the bulk collection of mobile phone metadata.

The exposed data was found by Chris Vickery, director of cyber risk research at UpGuard, and the new NSA Ragtime information was first reported by ZDNet. According to ZDNet, the leaked data mentioned 11 different variants of the Ragtime program, including Ragtime USP.

This raised concern because “USP” is a term known in the intelligence community to mean “U.S. person.” Targeting U.S. citizens and permanent residents in surveillance programs is illegal, but as in the case of Ragtime P, the NSA has contended it “incidentally” collected information on Americans as part of operations targeting foreign nationals.

As yet, it is unclear what the NSA Ragtime USP program entailed or what the exposed data repository included.

An UpGuard spokesperson said, “Within the repository was data that mentioned the four known Ragtime programs, including Ragtime P, which is known to target Americans, and seven previously unknown programs, including one called USP. We have no evidence beyond this, as far as I know, about Ragtime.”

NSA Ragtime data collection and storage

Rebecca Herold, CEO of Privacy Professor, said it is possible the NSA targeted Americans, but it could be nothing more than the repository of data “incidentally” collected in other operations.

“While the stated purpose is to capture the communications of foreign nationals, the reality is that individuals who engage, or are brought into a conversation by others, are now subject to having their communications also collected, monitored and analyzed,” Herold told SearchSecurity. “So while there are different versions of Ragtime described, and only one or two that describes U.S. citizens’ and residents’ data being involved, the reality is that, based on the descriptions, all of the versions of Ragtime could easily involve U.S. residents’ and citizens’ data. This incidental collection is a result of how the Ragtime versions are publicly described as being engineered.”

Only entities that have accountability for implementing strong security controls, and establishing effective privacy controls, should be allowed to hold … such large amounts of sensitive and privacy-impacting data.
Rebecca HeroldCEO, Privacy Professor

The NSA Ragtime P metadata collection was ruled illegal by U.S. courts, but intelligence agencies were allowed to keep the data already acquired. Herold said the recent data exposures by government agencies should lead to revisiting that decision, adding that “another problem that has never been addressed through these surveillance programs is data retention.”

“Only entities that have accountability for implementing strong security controls, and establishing effective privacy controls, should be allowed to hold such gigantic repositories that contain such large amounts of sensitive and privacy-impacting data,” Herold said. “This would likely need to be an objective, validated, and non-partisan entity, with ongoing audit oversight. The NSA has not demonstrated any of these accountabilities or capabilities to date, and the majority of government lawmakers have long enabled the NSA’s lack of security and privacy controls.”

DoD exposed data stored in massive AWS buckets

Once again, Department of Defense data was found publicly exposed in cloud storage, but it is unclear how sensitive the data may be.

Chris Vickery, cyber risk analyst at UpGuard, based in Mountain View, Calif., found the exposed data in publicly accessible Amazon Web Services (AWS) S3 buckets. This is the second time Vickery has found exposed Department of Defense (DoD) data on AWS. The previous exposure was blamed on government contractor Booz Allen Hamilton; UpGuard said a now-defunct private-sector government contractor named VendorX appeared to have built this database, though it is unclear whether VendorX was responsible for exposing the data. Vickery also previously found exposed data in AWS buckets from the Republican National Committee, World Wrestling Entertainment, Verizon and Dow Jones & Co.

According to Dan O’Sullivan, cyber resilience analyst at UpGuard, Vickery found three publicly accessible DoD buckets on Sept. 6, 2017.

“The buckets’ AWS subdomain names — ‘centcom-backup,’ ‘centcom-archive’ and ‘pacom-archive’ — provide an immediate indication of the data repositories’ significance,” O’Sullivan wrote in a blog post. “CENTCOM refers to the U.S. Central Command, based in Tampa, Fla. and responsible for U.S. military operations from East Africa to Central Asia, including the Iraq and Afghan Wars. PACOM is the U.S. Pacific Command, headquartered in Aiea, Hawaii and covering East, South and Southeast Asia, as well as Australia and Pacific Oceania.”

UpGuard estimated the total exposed data in the AWS buckets amounted to “at least 1.8 billion posts of scraped internet content over the past eight years.” The exposed data was all scraped from public sources including news sites, comment sections, web forums and social media.

“While a cursory examination of the data reveals loose correlations of some of the scraped data to regional U.S. security concerns, such as with posts concerning Iraqi and Pakistani politics, the apparently benign nature of the vast number of captured global posts, as well as the origination of many of them from within the U.S., raises serious concerns about the extent and legality of known Pentagon surveillance against U.S. citizens,” O’Sullivan wrote. “In addition, it remains unclear why and for what reasons the data was accumulated, presenting the overwhelming likelihood that the majority of posts captured originate from law-abiding civilians across the world.”
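The buckets were exposed in the simplest possible way: their contents could be listed by anyone over plain HTTPS, with no AWS credentials required. The sketch below shows the URL form involved and how an anonymous ListBucketResult response can be parsed; the bucket name comes from the article, but the sample response body and object key are invented for illustration:

```python
import xml.etree.ElementTree as ET

S3_NS = "{http://s3.amazonaws.com/doc/2006-03-01/}"

def bucket_url(name: str) -> str:
    # Virtual-hosted-style URL; an unauthenticated GET here returns a
    # ListBucketResult XML document if the bucket permits anonymous listing.
    return f"https://{name}.s3.amazonaws.com/"

def parse_listing(xml_body: str) -> list:
    """Extract object keys from an S3 ListBucketResult response."""
    root = ET.fromstring(xml_body)
    return [el.text for el in root.iter(S3_NS + "Key")]

# Illustrative response body, not real bucket contents.
sample = (
    '<ListBucketResult xmlns="http://s3.amazonaws.com/doc/2006-03-01/">'
    "<Name>centcom-backup</Name>"
    "<Contents><Key>scraped/post-0001.json</Key></Contents>"
    "</ListBucketResult>"
)
keys = parse_listing(sample)
```

Blocking this class of exposure requires nothing exotic: removing public read permission from the bucket’s access control settings makes the same request return an AccessDenied error instead of a listing.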

Importance of the exposed DoD data

Vickery found references in the exposed data to the U.S. Army “Coral Reef” intelligence analysis program, which is designed “to better understand relationships between persons of interest,” but UpGuard ultimately would not speculate on why the DoD gathered the data.

Ben Johnson, CTO at Obsidian Security, said such a massive data store could be very valuable if processed properly.

“Data often provides more intelligence than initially accessed, so while this information was previously publicly available, adversaries may be able to ascertain various insights they didn’t previously have,” Johnson told SearchSecurity. “What’s more of a problem than the data itself in this case is that this is occurring at all — showcasing that there’s plenty of work to do in safeguarding our information.”

What’s more of a problem than the data itself in this case is that this is occurring at all — showcasing that there’s plenty of work to do in safeguarding our information.
Ben JohnsonCTO at Obsidian Security

Rebecca Herold, president of Privacy Professor, noted that just because the DoD collected public data doesn’t necessarily mean the exposed data includes accurate information.

“Sources of, and reliability for, the information matters greatly. Ease of modifying even a few small details within a large amount of data can completely change the reality of the topic being discussed. Those finding this information need to take great caution to not simply assume the information is all valid and accurate,” Herold told SearchSecurity. “Much of this data could have been manufactured and used for testing, and much of it may have been used to lure attention, as a type of honeypot, and may contain a great amount of false information.”

Herold added that the exposed data had worrying privacy implications.

“Just because the information was publicly available does not mean that it should have been publicly available. Perhaps some of this information also ended up mistakenly being made publicly available because of errors in configurations of storage servers, or of website errors,” Herold said. “When we have organizations purposefully taking actions to collect and inappropriately (though legally in many instances) use, share and sell personal information, and then that information is combined with all these freely available huge repositories of data, it can provide deep insights and revelations for specific groups and individuals that could dramatically harm a wide range of aspects within their lives.”

Deloitte hack compromised sensitive emails, client data

Deloitte, one of the “big four” accounting and consultancy firms, has confirmed it exposed confidential emails and client data in a targeted attack.

Deloitte, which provides high-end cybersecurity consulting services, discovered the hack in March 2017, but attackers may have had access to the company’s systems since October or November of 2016.

According to The Guardian, which broke the story of the Deloitte hack early this week, attackers were able to compromise Deloitte’s email server through an administrator account that wasn’t protected with two-factor authentication. Through this email server, The Guardian reported, the attacker likely had privileged, unrestricted access to all systems, including the Microsoft Azure cloud service that Deloitte uses to store the emails its staff sends and receives. In a statement on the incident, Deloitte confirmed attackers were able to “access data through an email platform” but didn’t provide further details on additional systems or services that may have been affected.
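The missing control here, two-factor authentication, typically means a time-based one-time password (TOTP) checked alongside the administrator’s regular credentials. As a rough illustration of what that second factor computes (not anything specific to Deloitte’s systems), here is a minimal RFC 6238 TOTP sketch; the secret shown is the RFC’s published test key:

```python
import hashlib
import hmac
import struct
import time

def hotp(secret: bytes, counter: int, digits: int = 6) -> str:
    # RFC 4226: HMAC-SHA1 over the 8-byte big-endian counter,
    # then dynamic truncation to a fixed number of decimal digits.
    mac = hmac.new(secret, struct.pack(">Q", counter), hashlib.sha1).digest()
    offset = mac[-1] & 0x0F
    code = (struct.unpack(">I", mac[offset:offset + 4])[0] & 0x7FFFFFFF) % 10 ** digits
    return str(code).zfill(digits)

def totp(secret: bytes, timestamp=None, step: int = 30, digits: int = 6) -> str:
    # RFC 6238: HOTP keyed to the current 30-second time window.
    ts = int(time.time()) if timestamp is None else timestamp
    return hotp(secret, ts // step, digits)

# RFC 6238 test vector: key "12345678901234567890", time 59 -> "94287082"
code = totp(b"12345678901234567890", timestamp=59, digits=8)
```

With such a check in place, a stolen or guessed administrator password alone would not have been enough to reach the email platform, since the attacker would also need the device holding the shared secret.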

Deloitte provides services to major companies across the globe, including banks, multinational corporations and government agencies. The company claimed that “very few clients were impacted” by the breach. According to The Guardian, only six of the organization’s clients have been alerted that their information was compromised in the Deloitte hack. However, the hackers did potentially have access to usernames, passwords, IP addresses, health information and architectural diagrams for businesses.

The Deloitte hack focused primarily on U.S.-based operations and spurred an internal investigation that’s lasted six months so far. The responsible parties have yet to be identified, though, and Deloitte hasn’t released any specific details on how many clients were affected.

In other news

  • A bug in the most recent version of Internet Explorer exposes whatever is entered into the address bar — such as website addresses or searches — to hackers. Security researcher Manuel Caballero disclosed the flaw in a blog post this week. “When a script is executed inside an object-html tag, the location object will get confused and return the main location instead of its own,” Caballero wrote. “To be precise, it will return the text written in the address bar so whatever the user types there will be accessible by the attacker.” This means that whatever a targeted user types into the address bar can be viewed by a malicious actor. Caballero’s proof of concept shows that malicious sites can view information the user assumed was private. He also expressed his concerns about Microsoft’s handling of Internet Explorer. “In my opinion, Microsoft is trying to get rid of IE without saying it. It would be easier, more honest to simply tell users that their older browser is not being serviced like Edge,” he said.
  • The United States has asked China not to enforce its Cybersecurity Law that was passed in November 2016 and went into effect in June this year. In a document submitted to the World Trade Organization, the U.S. said “China’s measures would disrupt, deter, and in many cases, prohibit cross-border transfers of information that are routine in the ordinary course of business.” The Cybersecurity Law states that any “network operators” in China, including any local or international firms that collect data, must store all user data within China. The U.S. argued in the document that “such a broad definition” of network operators “could have a negative impact on a wide range [of] foreign companies.” It also raised concerns that “the measures, which pertain to ‘important data’ and ‘personal information,’ would severely restrict cross-border transfers unless a broad set of burdensome conditions are met.” The U.S. noted some other concerns in the document and requested that China “refrain from issuing or implementing final measures until such concerns are addressed.”
  • Oracle released out-of-band patches for the latest Apache Struts 2 vulnerability, tracked as CVE-2017-9805, a month before its scheduled quarterly Critical Patch Update. In its blog post announcing the availability of the patches, Oracle noted that a previous Apache Struts 2 vulnerability, left unpatched, was implicated in the “significant security incident” suffered by Equifax earlier this month. The patches were made available by the Apache Foundation for the popular web development framework on Sept. 5, but vendors like Oracle using the open source framework need to apply those patches to their own source code. “Oracle strongly recommends that customers apply the fixes contained in this Security Alert as soon as possible,” wrote Eric Maurice, director of security assurance at Oracle. “Furthermore, Oracle reminds customers that they should keep up with security releases and should have applied the July 2017 Critical Patch Update (the most recent Critical Patch Update release).”