
Russian intelligence officers indicted for DNC hack

The Department of Justice announced Friday the indictment of 12 members of Russia’s GRU intelligence agency in relation to the 2016 breaches of the Democratic National Committee and Hillary Clinton’s presidential campaign.

The grand jury indictment, which is part of Special Counsel Robert Mueller’s investigation into Russian interference with the 2016 presidential election, claimed the 12 intelligence officers were engaged in a “sustained effort” to hack into the Democratic National Committee (DNC), the Democratic Congressional Campaign Committee (DCCC) and the Clinton campaign. The DNC hack led to confidential emails becoming public via WikiLeaks, which negatively impacted the Clinton campaign and Democratic Party.

The grand jury indictment alleged the Russian intelligence officers, operating under the online personas “DCLeaks” and “Guccifer 2.0,” leaked information through another entity known as “Organization 1.” The indictment does not mention WikiLeaks by name.

The Justice Department claimed that in 2016, members of Unit 26165 in the Russian government’s Main Intelligence Directorate (GRU) began spearphishing campaign officials and volunteers for Clinton’s presidential campaign; intelligence officers were able to steal usernames and passwords and use those credentials to obtain confidential emails and compromise other systems. The threat actors used similar techniques in the DNC hack as well as the breach of the DCCC’s network.

In addition, the Justice Department claimed Unit 26165, with members of the GRU’s Unit 74455, conspired to release the stolen emails and data in order to influence the election. According to the Department of Justice, Unit 74455 also “conspired to hack into the computers of state boards of elections, secretaries of state, and US companies that supplied software and other technology related to the administration of elections to steal voter data stored on those computers.”

The indictment accused the following individuals of being part of Unit 26165 and Unit 74455, and engaging in the DNC hack and other malicious activity: Viktor Borisovich Netyksho, Boris Alekseyevich Antonov, Dmitriy Sergeyevich Badin, Ivan Sergeyevich Yermakov, Aleksey Viktorovich Lukashev, Sergey Aleksandrovich Morgachev, Nikolay Yuryevich Kozachek, Pavel Vyacheslavovich Yershov, Artem Andreyevich Malyshev, Aleksandr Vladimirovich Osadchuk, Aleksey Aleksandrovich Potemkin and Anatoliy Sergeyevich Kovalev.

The 12 GRU officers are accused of 11 criminal counts, including criminal conspiracy against the United States “through cyber operations by the GRU that involved the staged release of stolen documents for the purpose of interfering with the 2016 presidential election”; aggravated identity theft; conspiracy to launder money; and criminal conspiracy for attempting to hack into certain state boards of elections, secretaries of state, and vendors of U.S. election equipment and software.

The Justice Department emphasized there is “no allegation in the indictment that the charged conduct altered the vote count or changed the outcome of the 2016 election,” and no allegation that any American was a knowing participant in the alleged criminal activity.

DHS, SecureLogix develop TDoS attack defense

The U.S. Department of Homeland Security has partnered with security firm SecureLogix to develop technology to defend against telephony denial-of-service attacks, which remain a significant threat to emergency call centers, banks, schools and hospitals.

The DHS Science and Technology (S&T) Directorate said this week the office and SecureLogix were making “rapid progress” in developing defenses against call spoofing and robocalls — two techniques used by criminals in launching telephony denial-of-service (TDoS) attacks to extort money. Ultimately, the S&T’s goal is to “shift the advantage from TDoS attackers to network administrators.”

To that end, S&T and SecureLogix, based in San Antonio, are developing two TDoS attack defenses. First is a mechanism for identifying the voice recording used in call spoofing, followed by a means to separate legitimate emergency calls from robocalls.

“Several corporations, including many banks and DHS components, have expressed interest in this technology, and SecureLogix will release it into the market in the coming months,” William Bryan, interim undersecretary for S&T at DHS, said in a statement.

In 2017, S&T gave SecureLogix a $100,000 research award to develop anti-call-spoofing technology. The company was one of a dozen small tech firms that received similar amounts from S&T to create a variety of security applications.

Filtering out TDoS attack calls

SecureLogix’s technology analyzes and assigns a threat score to each incoming call in real time. Calls with a high score are either terminated or redirected to a lower-priority queue or a third-party call management service.
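That description amounts to a score-then-route pipeline. Below is a minimal sketch of that flow; the signals, weights and thresholds are hypothetical placeholders, not SecureLogix's actual model.

```python
# Illustrative score-then-route pipeline for incoming calls. The signals,
# weights and thresholds are hypothetical, not SecureLogix's actual model.
from dataclasses import dataclass

@dataclass
class Call:
    caller_id: str
    spoof_suspect: bool     # e.g., matches a known spoofed-recording fingerprint
    calls_last_minute: int  # burst volume seen from this caller ID

def threat_score(call: Call) -> float:
    """Combine simple signals into a 0-100 threat score."""
    score = 60.0 if call.spoof_suspect else 0.0
    score += min(call.calls_last_minute * 5, 40)  # capped volume component
    return score

def route(call: Call) -> str:
    """Terminate, divert or deliver the call based on its score."""
    score = threat_score(call)
    if score >= 80:
        return "terminate"
    if score >= 50:
        return "low-priority-queue"  # or hand off to a third-party service
    return "deliver"

print(route(Call("+15550100", spoof_suspect=True, calls_last_minute=12)))  # terminate
```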

SecureLogix built its prototype on existing voice security technologies, so it can be deployed in complex voice networks, according to S&T. It also contains a business rules management system and a machine learning engine “that can be extended easily, with limited software modifications.”

Over the last year, SecureLogix deployed the prototype within a customer facility, a cloud environment and a service provider network. The vendor also worked with a 911 emergency call center and large financial institutions.

In March 2013, a large-scale TDoS attack highlighted the threat against the telephone systems of public-sector agencies. An alert issued by DHS and the FBI said extortionists had launched dozens of attacks against the administrative telephone lines of air ambulance and ambulance organizations, hospitals and financial institutions.

Today, the need for TDoS protection has grown beyond on-premises systems to the cloud, where an increasing number of companies and call centers are signing up for unified communications as a service. In 2017, nearly half of organizations surveyed by Nemertes Research were using or planned to use cloud-based UC.

Accused CIA leaker charged with stealing government property

The Department of Justice has formally charged the suspected CIA leaker with stealing government property and more in connection with the theft and transmission of national defense information.

The accused CIA leaker, Joshua Adam Schulte, has been in the custody of law enforcement since August 2017, when he was charged with possessing child pornography; the FBI reportedly thought it had enough evidence to charge him with stealing and leaking the Vault 7 files to WikiLeaks as early as January. Government prosecutors said in mid-May that a new indictment was set to be filed, and that superseding indictment was filed on Monday, June 18, by the U.S. Attorney’s Office for the Southern District of New York.

The new indictment lists 13 charges against Schulte, including charges of illegally gathering and transmitting national defense information, theft of government property, unauthorized access of a computer to obtain information from a government agency and obstruction of justice, in addition to three charges related to child pornography.

Manhattan U.S. Attorney Geoffrey S. Berman wrote in a public statement that the accused CIA leaker, Schulte, was a former employee of the CIA and “allegedly used his access at the agency to transmit classified material to an outside organization.”

“We and our law enforcement partners are committed to protecting national security information and ensuring that those trusted to handle it honor their important responsibilities,” Berman wrote. “Unlawful disclosure of classified intelligence can pose a grave threat to our national security, potentially endangering the safety of Americans.”

The Vault 7 data provided to WikiLeaks by a CIA leaker included close to 9,000 documents, including hacking tools and zero-day exploits for iOS, Android, Windows and more. The CIA has never admitted that the Vault 7 data was its own and the indictment itself does not refer to the stolen data being from the CIA.

However, the DOJ’s press release stated: “On March 7, 2017, Organization-1 released on the Internet classified national defense material belonging to the CIA (the “Classified Information”). In 2016, SCHULTE, who was then employed by the CIA, stole the Classified Information from a computer network at the CIA and later transmitted it to Organization-1. SCHULTE also intentionally caused damage without authorization to a CIA computer system by granting himself unauthorized access to the system, deleting records of his activities, and denying others access to the system. SCHULTE subsequently made material false statements to FBI agents concerning his conduct at the CIA.”

FBI fights business email compromise with global crackdown

The United States Department of Justice this week announced the arrests of 74 individuals alleged to have committed fraud by participating in business-email-compromise scams.

The arrests are the result of an international enforcement effort, coordinated by the FBI, known as Operation Wire Wire, which was designed to crack down on email-account-compromise schemes targeting individuals and businesses of all sizes.

Business email compromise (BEC) is a growing problem, accounting for the highest reported losses, according to the FBI’s “2017 Internet Crime Report.” Criminal organizations use social engineering to identify employees who are authorized to make financial transactions, and then send fraudulent emails from company executives or foreign suppliers requesting wire transfers of funds.

Some schemes are directed at individuals in human resources or other departments in an effort to collect personally identifiable information, such as employee tax records. Others target individual victims, especially those involved in real estate transactions and the elderly.

According to the Department of Justice, Operation Wire Wire began in January, with U.S. federal agencies working alongside international law enforcement to find and prosecute alleged fraudsters. The six-month coordinated effort involved the U.S. Department of Homeland Security, the U.S. Department of the Treasury and the U.S. Postal Inspection Service, and it resulted in 42 arrests in the United States, 29 in Nigeria and three in Canada, Mauritius and Poland. Law enforcement recovered about $14 million in fraudulent wire transfers during the operation and seized close to $2.4 million.

‘Nigerian princes’ turn to BEC

The techniques and tactics of Nigerian criminal organizations have become more sophisticated, according to Agari Data Inc. The email security company captured and analyzed the contents of 78 email accounts associated with 10 criminal organizations — nine in Nigeria — and reported increased BEC activities against North American companies and individuals between 2016 and 2018.

The research involved 59,692 unique messages in email communications originating from 2009 to 2017. According to the findings, business email compromise represented the largest attack vector for email fraud at 24%, as many of these criminal groups migrated to BEC attacks starting in 2016. Previously, these groups had focused predominantly on “romance” fraud schemes.

Business email compromise often overlaps or has similarities with cyberfraud schemes involving romance, lotteries, employment opportunities, vehicle sales and rental scams. In some cases, money mules “hired” through romance schemes or fraudulent employment opportunities may not be aware of the BEC scams. Mules receive the ill-gotten funds stateside and transfer the money to difficult-to-trace offshore accounts set up by criminals.

Since January, up to $1 million in assets has been seized domestically, and 15 alleged money mules have been identified by FBI task forces and charged “for their role in defrauding victims.”

BEC schemes are hard to detect because they do not rely on victims downloading malicious email attachments or clicking on fake URLs. Instead, this type of cyberfraud relies on identity deception (82% of attacks, according to Agari), email spoofing or compromised email accounts accessed via malware or credential theft. Researchers found that 3.97% of intended targets who responded to the initial emails used in business email compromise became victims.

SS7 vulnerabilities enable breach of major cellular provider

The U.S. Department of Homeland Security warned of an exploit of the Signaling System 7 protocol that may have targeted American cellphone users.

The Washington Post reported that DHS notified Sen. Ron Wyden (D-Ore.) last week that malicious actors “may have exploited” global cellular networks “to target the communications of American citizens.” The letter has not been made public, but The Washington Post obtained a copy of it and reported that it described surveillance systems that exploit Signaling System 7 (SS7) vulnerabilities. According to the report, the exploit enables intelligence agencies and criminal groups to spy on targets using nothing but their cellphone number.

SS7 is the international telecommunications standard used since the 1970s by telecommunications providers to exchange call routing information in order to set up phone connections. Cellphone providers use SS7 to enable users to send and receive calls as they move from network to network anywhere in the world. The protocol has been criticized by analysts and experts for years because of its vulnerabilities and because it enables spying and data interception.

In a different letter to Ajit Pai, chairman of the Federal Communications Commission, Wyden referenced an “SS7 breach” at a major wireless carrier and criticized the FCC for its inaction regarding SS7 vulnerabilities.

“Although the security failures of SS7 have long been known to the FCC, the agency has failed to address the ongoing threat to national security and to the 95% of Americans who have wireless service,” Wyden wrote.

He explained the SS7 vulnerabilities enable attackers to intercept people’s calls and texts, as well as hack into phones to steal financial information or get location data.

“In a prior letter to me, you dismissed my request for the FCC to use its regulatory authority to force the wireless industry to address the SS7 vulnerabilities,” Wyden wrote to Pai. “You cited the work of the [Communications Security, Reliability and Interoperability Council] as evidence that the FCC is addressing the threat. But neither CSRIC nor the FCC have taken meaningful action to protect hundreds of millions of Americans from potential surveillance by hackers and foreign governments.”

In the letter, Wyden included a call to action for Pai to use the FCC’s “regulatory authority” to address the security issues with SS7 and to disclose information about SS7-related breaches to Wyden by July 9, 2018.

In other news:

  • The U.S. government ban on using Kaspersky Lab products was upheld this week, and the security company’s lawsuits were dismissed. U.S. District Judge Colleen Kollar-Kotelly dismissed two lawsuits filed by Kaspersky Lab in response to Binding Operational Directive 17-01 and the National Defense Authorization Act (NDAA), both of which banned the company’s products from use in the federal government. Kaspersky argued the ban was unconstitutional and caused undue harm to the company, but Kollar-Kotelly dismissed the argument and said while there may be “adverse consequences” for Kaspersky, the ban is not unconstitutional. Kaspersky Lab has said it will file an appeal of the ruling.
  • The U.S. House of Representatives advanced a bill that would require law enforcement to get a warrant before collecting data from email providers. The Email Privacy Act was added as an amendment to the NDAA, which is the annual budget for the Department of Defense. The bill passed the House 351-66 and will now move to the Senate for approval. The amendment was authored by Rep. Kevin Yoder (R-Kan.) and is the latest version of the 2016 Email Privacy Act that received unanimous support in the House. If the NDAA passes with this amendment included, it will provide warrant protections to all email, chats and online messages that law enforcement might want or need for investigations. The Electronic Frontier Foundation has been a proponent of email privacy in law, saying, “The emails in your inbox should have the same privacy protections as the papers in your desk.”
  • The private equity investment firm Thoma Bravo is acquiring a majority share in the security company LogRhythm. LogRhythm offers its users a security information and event management platform that also has user and entity behavior analytics features. The company has been in business for 15 years and has more than 2,500 customers worldwide. “LogRhythm believes it has found an ideal partner in Thoma Bravo,” said LogRhythm’s president and CEO, Andy Grolnick, in a statement. “As we seek to take LogRhythm to the next level and extend our position as the market’s preeminent NextGen SIEM vendor, we feel Thoma Bravo’s cybersecurity domain expertise and track record of helping companies drive growth and innovation will make this a powerful and productive relationship.” The deal is expected to close later in 2018. Thoma Bravo owns the certificate authority company DigiCert, which recently purchased Symantec’s CA operations, and has previously invested in other cybersecurity companies, including SonicWall, SailPoint, Hyland Security, Deltek, Blue Coat Systems, Imprivata, Bomgar, Barracuda Networks, Compuware and SolarWinds.

Feds issue new alert on North Korean hacking campaigns

The FBI and the Department of Homeland Security released an alert on Tuesday regarding malware campaigns connected to a North Korean hacking group known as Hidden Cobra.

The alert, which includes indicators of compromise (IOCs) such as IP addresses, attributes two malware families to the North Korean government by way of Hidden Cobra: a remote access tool called Joanap and a worm known as Brambul, which spreads via Windows’ Server Message Block (SMB) protocol. Both malware families were first identified by Symantec in 2015 and were observed targeting South Korean organizations. Other cybersecurity vendors later attributed the two malware campaigns to the nation-state hacking group Hidden Cobra, also known as Lazarus Group.

However, Tuesday’s alert, which was issued by US-CERT, marks the first time U.S. authorities publicly attributed the malware families and their activity to North Korean hacking operations.

“FBI has high confidence that HIDDEN COBRA actors are using the IP addresses — listed in this report’s IOC files — to maintain a presence on victims’ networks and enable network exploitation,” US-CERT said. “DHS and FBI are distributing these IP addresses and other IOCs to enable network defense and reduce exposure to any North Korean government malicious cyber activity.”
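In practice, defenders consume such IOC lists by screening connection logs against the published addresses. A minimal sketch follows; the file names are placeholders, and logs are assumed to be whitespace-delimited with the source IP in the first field.

```python
# Minimal sketch: flag log entries whose source IP appears in a published
# IOC list like the one attached to the US-CERT alert. "iocs.txt" and
# "conn.log" are placeholder file names.
def load_iocs(path: str) -> set[str]:
    with open(path) as f:
        return {line.strip() for line in f if line.strip()}

def flag_hits(log_path: str, iocs: set[str]) -> list[str]:
    hits = []
    with open(log_path) as f:
        for line in f:
            fields = line.split()
            if fields and fields[0] in iocs:  # source IP assumed in first field
                hits.append(line.rstrip())
    return hits

for entry in flag_hits("conn.log", load_iocs("iocs.txt")):
    print("possible HIDDEN COBRA contact:", entry)
```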

The alert also claimed that, “according to reporting of trusted third parties,” Joanap and Brambul have likely been used by the North Korean hacking group since at least 2009 to target organizations in various vertical industries across the globe. The FBI and DHS didn’t identify those trusted parties, but the alert cited a 2016 report, titled “Operation Blockbuster Destructive Malware Report,” from security analytics firm Novetta, which detailed malicious activity conducted by the Lazarus Group.

DHS’ National Cybersecurity and Communications Integration Center conducted an analysis of the two malware families, and the U.S. government discovered 87 network nodes that had been compromised by Joanap and were used as infrastructure by Hidden Cobra. According to the US-CERT alert, those network nodes were located in various countries outside the U.S., including China, Brazil, India, Iran and Saudi Arabia.

The FBI and DHS attribution case for Brambul and Joanap represents the latest evidence connecting the North Korean government to high-profile malicious activity, including the 2014 breach of Sony Pictures. Last December, the White House publicly attributed the WannaCry ransomware attack to the North Korean government; prior to the U.S. government’s accusation, several cybersecurity vendors had also connected the WannaCry source code, which also exploited the SMB protocol, with the Brambul malware.

The US-CERT alert also follows tense, back-and-forth negotiations between President Donald Trump and North Korean leader Kim Jong Un regarding a U.S.-North Korea summit. Last week, Trump announced the U.S. was withdrawing from the summit, but talks have reportedly resumed.

A DHS data breach exposed PII of over 250,000 people

A data breach at the U.S. Department of Homeland Security exposed the personally identifiable information of over 250,000 federal government employees, as well as an unspecified number of people connected with DHS investigations.

DHS released a statement Jan. 3, 2018, that confirmed the exposure of the personal data of “approximately 246,167” federal government employees who worked directly for DHS in 2014. It also disclosed the breach of a database for the Office of Inspector General that contained the personally identifiable information (PII) of any person — not necessarily employed by the federal government — who was associated with OIG investigations from 2002 to 2014. This includes subjects, witnesses and complainants.

In its statement, the department emphasized the DHS data breach was not caused by a cyberattack and referred to it as a “privacy incident.”

“The privacy incident did not stem from a cyber-attack by external actors, and the evidence indicates that affected individual’s personal information was not the primary target of the unauthorized unauthorized [sic] transfer of data,” DHS said.

The DHS data breach was initially found in May 2017 during a separate, ongoing DHS OIG criminal investigation in which it was discovered that a former DHS employee had an unauthorized copy of the department’s case management system.

However, individuals affected by the DHS data breach weren’t notified until Jan. 3, 2018. In its statement, DHS addressed why the notification process took so long.

“The investigation was complex given its close connection to an ongoing criminal investigation,” the department said. “From May through November 2017, DHS conducted a thorough privacy investigation, extensive forensic analysis of the compromised data, an in-depth assessment of the risk to affected individuals, and comprehensive technical evaluations of the data elements exposed. These steps required close collaboration with law enforcement investigating bodies to ensure the investigation was not compromised.”

The DHS employee data breach exposed PII that included names, Social Security numbers, dates of birth, positions, grades and duty stations of DHS employees; the DHS investigative data breach exposed names, Social Security numbers, dates of birth, alien registration numbers, email addresses, phone numbers, addresses and other personal information that was provided to the OIG during investigative interviews with its agents.

DHS is offering free identity protection and credit-monitoring services for 18 months to affected individuals. The department said it has also taken steps to improve its network security going forward, including “placing additional limitations on which individuals have back end IT access to the case management system; implementing additional network controls to better identify unusual access patterns by authorized users; and performing a 360-degree review of DHS OIG’s development practices related to the case management system.”

While the affected government employees were notified directly about the breach, DHS stated, “Due to technological limitations, DHS is unable to provide direct notice to the individuals affected by the Investigative Data.”

DHS urged anyone associated with a DHS OIG investigation between 2002 and 2014 to contact AllClear ID, the Austin, Texas, breach response service retained by DHS to provide credit-monitoring and identity protection services to affected individuals.

In other news:

  • A group of senators has introduced a bill to secure U.S. elections. The Secure Elections Act is a bipartisan bill that aims to provide federal standards for election security. One measure proposed in the bill is to eliminate the use of paperless voting machines, which are regarded by election security experts as the least secure type of voting machines in use today. Paperless voting machines don’t allow for audits, which the proposed legislation also wants to make a standard practice in all elections. The idea is that audits after every election will deter foreign meddling in American democracy like Russia’s interference in the 2016 U.S. presidential election. “An attack on our election systems by a foreign power is a hostile act and should be met with appropriate retaliatory actions, including immediate and severe sanctions,” the bill states. The bill was sponsored by Sen. James Lankford (R-Okla.) and co-sponsored by Sens. Amy Klobuchar (D-Minn.), Lindsey Graham (R-S.C.), Kamala Harris (D-Calif.), Susan Collins (R-Maine) and Martin Heinrich (D-N.M.).
  • Attackers exploited a vulnerability in Google Apps Script to automatically download malware onto a victim’s system through Google Drive. Discovered by researchers at Proofpoint, the vulnerability in the app development platform enabled social-engineering attacks that tricked victims into clicking on malicious links that triggered the malware download on their computers. The researchers also found the exploit could happen without any user interaction. Google has taken steps to fix the flaw by blocking installable and simple triggers, but the researchers at Proofpoint said there are bigger issues at work. The proof of concept for this exploit “demonstrates the ability of threat actors to use extensible SaaS platforms to deliver malware to unsuspecting victims in even more powerful ways than they have with Microsoft Office macros over the last several years,” the research team said in a blog post. “Moreover, the limited number of defensive tools available to organizations and individuals against this type of threat make it likely that threat actors will attempt to abuse and exploit these platforms more often as we become more adept at protecting against macro-based threats.” The researchers went on to note that, in order to combat this threat, “organizations will need to apply a combination of SaaS application security, end user education, endpoint security, and email gateway security to stay ahead of the curve of this emerging threat.”
  • The United States federal government is nearing its deadline to implement the Domain-based Message Authentication, Reporting and Conformance (DMARC) tool. In October 2017, the DHS announced it would mandate the use of DMARC and HTTPS in all departments and agencies that use .gov domains. DHS gave those departments and agencies a 90-day deadline to implement DMARC and HTTPS, which means the Jan. 15, 2018, deadline is fast approaching. According to security company Agari, as of mid-December, 47% of federal government domains were using DMARC, compared to 34% the month before. The mandate also requires federal agencies to adopt the strongest DMARC setting, “reject,” within a year. This means emails that fail authentication tests will be less likely to make it to government inboxes — i.e., they will be rejected. Agari reported a 24% increase in the use of higher “reject” settings in the last month. On the flip side, Agari noted that most agency domains (84%) are still unprotected, with no DMARC policy. (A minimal DMARC lookup sketch follows this list.)
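For administrators checking where a domain stands against the mandate, a DMARC policy is simply a TXT record published at _dmarc.<domain>. A minimal lookup sketch, assuming the third-party dnspython package:

```python
# Minimal sketch: look up a domain's DMARC policy, published as a TXT
# record at _dmarc.<domain>. Assumes the third-party dnspython package.
import dns.resolver  # pip install dnspython

def dmarc_policy(domain: str) -> str | None:
    """Return the domain's DMARC policy tag (none/quarantine/reject), or None."""
    try:
        answers = dns.resolver.resolve(f"_dmarc.{domain}", "TXT")
    except (dns.resolver.NXDOMAIN, dns.resolver.NoAnswer):
        return None  # no DMARC record published
    for rdata in answers:
        record = b"".join(rdata.strings).decode()
        if record.startswith("v=DMARC1"):
            for tag in record.split(";"):
                key, _, value = tag.strip().partition("=")
                if key == "p":
                    return value
    return None

print(dmarc_policy("dhs.gov"))  # "reject" once the strictest setting is adopted
```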

DoD exposed data stored in massive AWS buckets

Once again, Department of Defense data was found publicly exposed in cloud storage, but it is unclear how sensitive the data may be.

Chris Vickery, cyber risk analyst at UpGuard, based in Mountain View, Calif., found the exposed data in publicly accessible Amazon Web Services (AWS) S3 buckets. This is the second time Vickery found exposed data from the Department of Defense (DoD) on AWS. The previous exposure was blamed on government contractor Booz Allen Hamilton; UpGuard said a now defunct private-sector government contractor named VendorX appeared to be responsible for building this database. However, it is unclear if VendorX was responsible for exposing the data. Vickery also previously found exposed data in AWS buckets from the Republican National Committee, World Wrestling Entertainment, Verizon and Dow Jones & Co.

According to Dan O’Sullivan, cyber resilience analyst at UpGuard, Vickery found three publicly accessible DoD buckets on Sept. 6, 2017.

“The buckets’ AWS subdomain names — ‘centcom-backup,’ ‘centcom-archive’ and ‘pacom-archive’ — provide an immediate indication of the data repositories’ significance,” O’Sullivan wrote in a blog post. “CENTCOM refers to the U.S. Central Command, based in Tampa, Fla. and responsible for U.S. military operations from East Africa to Central Asia, including the Iraq and Afghan Wars. PACOM is the U.S. Pacific Command, headquartered in Aiea, Hawaii and covering East, South and Southeast Asia, as well as Australia and Pacific Oceania.”

UpGuard estimated the total exposed data in the AWS buckets amounted to “at least 1.8 billion posts of scraped internet content over the past eight years.” The exposed data was all scraped from public sources including news sites, comment sections, web forums and social media.

“While a cursory examination of the data reveals loose correlations of some of the scraped data to regional U.S. security concerns, such as with posts concerning Iraqi and Pakistani politics, the apparently benign nature of the vast number of captured global posts, as well as the origination of many of them from within the U.S., raises serious concerns about the extent and legality of known Pentagon surveillance against U.S. citizens,” O’Sullivan wrote. “In addition, it remains unclear why and for what reasons the data was accumulated, presenting the overwhelming likelihood that the majority of posts captured originate from law-abiding civilians across the world.”

Importance of the exposed DoD data

Vickery found references in the exposed data to the U.S. Army “Coral Reef” intelligence analysis program, which is designed “to better understand relationships between persons of interest,” but UpGuard ultimately would not speculate on why the DoD gathered the data.

Ben Johnson, CTO at Obsidian Security, said such a massive data store could be very valuable if processed properly.

“Data often provides more intelligence than initially accessed, so while this information was previously publicly available, adversaries may be able to ascertain various insights they didn’t previously have,” Johnson told SearchSecurity. “What’s more of a problem than the data itself in this case is that this is occurring at all — showcasing that there’s plenty of work to do in safeguarding our information.”


Rebecca Herold, president of Privacy Professor, noted that just because the DoD collected public data doesn’t necessarily mean the exposed data includes accurate information.

“Sources of, and reliability for, the information matters greatly. Ease of modifying even a few small details within a large amount of data can completely change the reality of the topic being discussed. Those finding this information need to take great caution to not simply assume the information is all valid and accurate,” Herold told SearchSecurity. “Much of this data could have been manufactured and used for testing, and much of it may have been used to lure attention, as a type of honeypot, and may contain a great amount of false information.”

Herold added that the exposed data had worrying privacy implications.

“Just because the information was publicly available does not mean that it should have been publicly available. Perhaps some of this information also ended up mistakenly being made publicly available because of errors in configurations of storage servers, or of website errors,” Herold said. “When we have organizations purposefully taking actions to collect and inappropriately (though legally in many instances) use, share and sell personal information, and then that information is combined with all these freely available huge repositories of data, it can provide deep insights and revelations for specific groups and individuals that could dramatically harm a wide range of aspects within their lives.”

How to bring Azure costs down to earth

The migration of virtual machines to the cloud sounds great — until your IT department is hit with a huge bill.

For every minute a VM runs and every byte it uses, Microsoft adds charges to the monthly tab. How do you manage Azure costs? The formula is relatively simple — admins should understand the approximate price tag before workloads move to Azure and right-size VMs to reduce wasteful expenses.
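The arithmetic behind that price tag is straightforward: runtime multiplied by an hourly rate, plus storage. A toy sketch; all rates are illustrative placeholders, not current Azure prices.

```python
# Toy monthly-cost estimate: hours run times an hourly rate, plus storage.
# All rates are illustrative placeholders, not current Azure prices.
HOURLY_RATE_USD = {"A0": 0.02, "D2s_v3": 0.10, "B4ms": 0.17}  # assumed rates
STORAGE_RATE_USD_PER_GB = 0.05                                # assumed rate

def monthly_cost(size: str, hours: float, disk_gb: float) -> float:
    return HOURLY_RATE_USD[size] * hours + STORAGE_RATE_USD_PER_GB * disk_gb

# A dev/test VM stopped outside business hours (~176 hours/month) vs. the
# same size left running 24x7 (~730 hours/month).
print(f"dev/test D2s_v3:  ${monthly_cost('D2s_v3', 176, 128):.2f}")
print(f"always-on D2s_v3: ${monthly_cost('D2s_v3', 730, 128):.2f}")
```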

Find the right Azure region

The first step is to select the proper Azure region. Each region has different resources, capabilities and services; these facets, along with the region’s location relative to the business, produce the cost per region. Not every region is available — availability depends on the organization’s location or subscription. For example, users in the United States cannot use Australian data centers without an Australian billing address.

A move to a less expensive Azure region makes a noticeable difference when it involves several dozen servers. However, a migration to a different Azure region affects the end-user experience with increased latency if applications move farther from users and customers. Admins can use Microsoft’s Azure latency test site to understand network performance per region.
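Choosing a region then becomes a simple filter: keep only the regions under a latency ceiling, and pick the cheapest of those. A sketch with made-up prices and latencies:

```python
# Sketch: pick the cheapest region whose latency stays under a target.
# Prices and latencies are made-up illustrative numbers.
regions = {
    # region: (relative price index, measured latency in ms)
    "eastus":         (1.00, 25),
    "westus2":        (0.97, 70),
    "southcentralus": (0.95, 45),
    "westeurope":     (1.05, 110),
}

MAX_LATENCY_MS = 60
eligible = {name: price for name, (price, latency) in regions.items()
            if latency <= MAX_LATENCY_MS}
print(min(eligible, key=eligible.get))  # southcentralus: cheapest under the ceiling
```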

Don’t make one-size-fits-all VMs

To further reduce Azure costs, align VMs to the proper performance level. For example, differentiate between production and dev/test environments, and build VMs accordingly. Dev/test VMs don’t usually need the production specifications as they rarely require high availability. Reduce the resources — and their associated costs — for dev/test VMs so they get only what they need.

Look at infrastructure as a service (IaaS) servers

In the web-based GUI wizard admins use to create servers, Azure presents the high-performance VMs as the default. Click on “View All” in the top right-hand corner of the dialog to reveal the range of server sizes. A0 is small and costs significantly less than Microsoft’s suggested options, which makes it ideal for experimentation.

Figure 1: The A0 server size is the smallest and least expensive option.

A0 is also oversubscribed, which means CPU performance varies based on other workloads in the node. The lower tiers also do not support load balancing and have other limitations, but the VMs in those levels make for ideal inexpensive test machines.

Disk choice is another way for admins to limit Azure costs. An IaaS VM can be built with one of two options: hard disk drives or solid-state drives (SSDs). Standard disks, with speeds up to 500 IOPS depending on the configuration, are good enough for most workloads. If speed is not a concern, avoid the more expensive SSD choices.

Aside from IaaS, there are other options that many users are unaware of or fail to understand.

Implement services as a service

Some administrators new to the cloud see it as pure IaaS where everything needs to run on its own VM. This is an option — but an expensive one.


Instead, think of a SQL Server deployment and all the associated costs for compute, storage and licensing. Why deal with the price and deployment headaches when SQL Server is available as a service? It’s cheaper — a Standard_B4ms VM (four cores, 16 GB of RAM) with SQL Server Standard costs about $383 a month, while an Azure setup for multiple databases costs $224 a month on a standard tier. Plus, SQL as a service saves the administrator from the patch and update process.
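Using the article's own figures, the saving is easy to quantify:

```python
# Worked comparison using the article's figures: a self-managed VM running
# SQL Server vs. the database as a service.
vm_with_sql = 383  # USD/month, Standard_B4ms VM plus SQL Server Standard
sql_service = 224  # USD/month, standard-tier database service

saving = vm_with_sql - sql_service
print(f"monthly saving: ${saving}")       # $159
print(f"yearly saving:  ${saving * 12}")  # $1,908 per database server
```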

Check your company’s security requirements to see whether they allow the use of database servers in the cloud. Because these databases run on a shared resource with potentially hundreds of other companies, an exploit or misconfiguration could leak data outside the organization.

Analyze the cost of cloud resources

Admins must understand business requirements and know what costs they bring before a move to the cloud. On-premises compute has inefficiencies and sprawl that add expenses, but the lack of a monthly bill for most environments lets those costs fly under the radar.

By the same token, it’s vital to know the cloud environment’s requirements and the expenses for applications and infrastructure. Use Microsoft’s Azure calculator to work out the potential price tag.

Bundle resources for easier management

Admins should tap into resource groups to further control Azure costs. This feature collects a service’s resources, such as the VM, database and other assets, into a single unit. Once the business no longer needs the service, the admins remove the resource group. This avoids a common housekeeping problem where the IT staff misses an item and the charges for it show up in the next bill.
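A sketch of that cleanup step, assuming the azure-identity and azure-mgmt-resource Python packages; the subscription ID and group name are placeholders.

```python
# Sketch: tear down a retired service by deleting its whole resource group.
# Assumes the azure-identity and azure-mgmt-resource packages; the
# subscription ID and group name are placeholders.
from azure.identity import DefaultAzureCredential
from azure.mgmt.resource import ResourceManagementClient

client = ResourceManagementClient(DefaultAzureCredential(), "<subscription-id>")

# Review what the group still contains before removing it.
for res in client.resources.list_by_resource_group("retired-service-rg"):
    print(res.name, res.type)

# One operation removes the VM, database and every other asset in the group.
client.resource_groups.begin_delete("retired-service-rg").result()
```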

Efficient code makes a difference

In an on-premises scenario, admins overcome inefficient code with additional resources. In the cloud, where every item has a cost per transaction or per second, better programming lowers expenses.

For example, an inexperienced database programmer who builds an additional temporary database costs the company more money each time a new one spins up in the cloud. As this inefficient practice multiplies with each deployed instance, so does the cost. A better programmer with a more thorough understanding of SQL avoids this waste and builds code that takes less time to run.
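A toy illustration of why that matters once per-use charges apply; the cost figure is an assumption for illustration only.

```python
# Toy illustration: per-use charges make the wasteful pattern directly more
# expensive. The cost figure is an assumption for illustration only.
COST_PER_TEMP_DB_USD = 0.50  # assumed charge per temporary database created
JOB_COUNT = 100

wasteful = JOB_COUNT * COST_PER_TEMP_DB_USD  # fresh temp database per job
frugal = 1 * COST_PER_TEMP_DB_USD            # one scratch database, reused

print(f"wasteful: ${wasteful:.2f}, frugal: ${frugal:.2f}")  # $50.00 vs. $0.50
```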

Good programmers require higher salaries, but for a company that uses the cloud to scale out, that expense is worth it. The business saves more in the long run because lower resource utilization — thanks to better code — results in a smaller bill from Microsoft.


DHS cyberinsurance research could improve security

The Department of Homeland Security has undertaken a long-term cyberinsurance study to determine if insurance can help improve cybersecurity overall, but experts said that will depend on the data gathered.

The DHS began researching cyberinsurance in 2014 by gathering breach data into its Cyber Incident Data and Analysis Repository (CIDAR). DHS uses CIDAR to collect cyber incident data across 16 categories, including the type, severity and timeline of an incident, the apparent goal of the attacker, contributing causes, specific control failures, assets compromised, detection and mitigation techniques, and the cost of the attack.
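A sketch of what one CIDAR-style record might look like, using only the fields named above; this is illustrative, not DHS's actual schema.

```python
# Sketch of a cyber incident record along the lines of CIDAR's categories,
# using only the fields named above; illustrative, not DHS's actual schema.
from dataclasses import dataclass

@dataclass
class CyberIncident:
    incident_type: str              # e.g., "ransomware"
    severity: str                   # e.g., "high"
    timeline: str                   # detection-to-containment summary
    attacker_goal: str              # the apparent goal of the attacker
    contributing_causes: list[str]
    control_failures: list[str]     # specific controls that failed
    assets_compromised: list[str]
    detection_techniques: list[str]
    mitigation_techniques: list[str]
    cost_usd: float                 # estimated cost of the attack
```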

According to the DHS, it hoped to “promote greater understanding about the financial and operational impacts of cyber events.”

“Optimally, such a repository could enable a novel information sharing capability among the federal government, enterprise risk owners, and insurers that increases shared awareness about current and historical cyber risk conditions and helps identify longer-term cyber risk trends,” the DHS wrote in a report about the value proposition of CIDAR. “This information sharing approach could help not only enhance existing cyber risk mitigation strategies but also improve and expand upon existing cybersecurity insurance offerings.”

The full cyberinsurance study by the DHS could take 10 to 15 years to complete, but Matt Shabat, strategist and performance manager in the DHS Office of Cybersecurity and Communications, told TechRepublic that he hopes there can be short-term improvements to cybersecurity with analysis of the data as it is gathered.

Shabat said he hopes the added context gathered by CIDAR will improve the usefulness of its data compared to other threat intelligence sharing platforms. Experts said this was especially important because as Ken Spinner, vice president of global field engineering at Varonis, told SearchSecurity, “A data repository is only as good as the data within it, and its success will likely depend on how useful and thorough the data is.”

“Sector-based Information Sharing and Analysis Centers were implemented over a decade ago, so creating a centralized cyber incident data repository for the purpose of sharing intelligence across sectors is a logical next step and a commendable endeavor,” Spinner added. “A data repository could have greater use beyond its original intent by helping researchers find patterns in security incidents and criminal tactics.”

Philip Lieberman, president of Lieberman Software, a cybersecurity company headquartered in Los Angeles, said speed was the key to threat intel sharing.

“The DHS study on cyberinsurance is a tough program to implement because of missing federal laws and protocols to provide safe harbor to companies that share intrusion information,” Lieberman told SearchSecurity. “The data will be of little use in helping others unless threat dissemination is done within hours of an active breach.”


Scott Petry, co-founder and CEO of Authentic8, a secure cloud-based browser company headquartered in Mountain View, Calif., said the 16 data elements used by the DHS could provide “a pretty comprehensive overview of exploits and responses, if a significant number of organizations were to contribute to CIDAR.”

“The value of the data would be in the volume and its accuracy. Neither feel like short term benefits, but there’s no question that understanding more about breaches can help prevent similar events,” Petry told SearchSecurity. “But many organizations may be reluctant to share meaningful data because of the difficulty in anonymizing it and the potential for their disclosure to be used against them. It goes against their nature for organizations to share detailed breach information.”

The DHS appears to understand these concerns and outlined potential ways to overcome the “perceived obstacles” to enterprises sharing attack data with CIDAR. Experts noted, however, that many of the DHS suggestions may not be as effective as desired, because they tend to boil down to working together with organizations rather than offering innovative solutions to these longstanding issues.

DHS did not respond to requests for comment at the time of this post.

Using cyberinsurance to improve security

Still, experts said if the DHS can gather quality data, the cyberinsurance study could help enterprises to improve security.

Spinner said cyberinsurance is a valid risk mitigation tool.

“Counterintuitively, having a cyberinsurance policy can foster a culture of security. Think of it this way: When it comes to auto insurance, safer drivers who opt for the latest safety features on their vehicles can receive a discount,” Spinner said. “Similarly, organizations that follow best practices and take appropriate steps to safeguard the data on their networks can be rewarded with a lower rate quote.”

Lieberman said the efficacy of cyberinsurance on security is limited because the “industry is in its infancy with both insurer and insured being not entirely clear as to what constitutes due and ordinary care of IT systems to keep them free of intruders.”

“Cyberinsurance does make sense if there are clear definitions of minimal security requirements that can be objectively tested and verified. To date, no such clear definitions nor tests exist,” Lieberman said. “DHS would do the best for companies and taxpayers by assisting the administration and [the] legislative branch in drafting clear guidelines with both practices and tests that would provide safe harbor for companies that adopt their processes.”

Petry said the best way for cyberinsurance to help improve security would be to require “an organization to meet certain security standards before writing the policy and by creating an ongoing compliance requirement.”

“It’s a big market, and insurers are certainly making money, but that doesn’t mean it’s a mature market. Many organizations require their vendors to carry cyberinsurance, which will continue to fuel that growth, but the insurers aren’t taking reasonable steps to understand the exposure of the organizations they’re underwriting. When I get health insurance, they want to know if I’m a smoker and what my blood pressure is. Cyberinsurance doesn’t carry any of the same real-world assessments of ‘the patient.'”

Spinner said the arrangement between the cybersecurity industry and cyberinsurance is “very much still a work in progress.”

“The cybersecurity market is evolving rapidly, to some extent it is still in the experimental phase in that providers are continuing to learn what approach works best, just as companies are trying to figure out just how much insurance is adequate,” Spinner said. “It’s a moving target and we’ll continue to see the industry and policies evolve. The industry needs to work towards a standard for assessing risk so they can accurately determine rates.”