
I7 6700k GTX 970

I keep changing my mind about whether I want to sell this or not, but here we go again.


Collection only from Lichfield

It's a PC Specialist gaming PC. The clear window on the side panel is cracked, but it is still firmly in place.

I7 6700k
GTX 970
ASUS Z170-P
Corsair CS650 650w psu
16gb DDR4
240 GB SSD
2TB HDD
Windows 10

Price and currency: 595
Delivery: Goods must be exchanged in person
Payment method: Bank transfer (BT) or cash
Location:


Smartphone push-to-talk apps poised for enterprise growth

With nothing but their voice, shopkeepers can check whether the backroom has that denim blouse in a size medium, and housekeepers can query which hotel rooms still need tidying. Contractors can see which plumber is closest to a homeowner’s leaky pipe, and co-workers from different countries can speak to one another in their native tongues in real time.

And it has all been made possible by the reinvention of a century-old tool — and toy: the walkie-talkie.

Startups and legacy phone companies alike are looking to remake the role of voice in the workplace with push-to-talk (PTT) technologies that require nothing more than a smartphone or desktop application. Perhaps more critical, PTT vendors are incorporating artificial intelligence and data trackers to ease the workflows of workers who historically have relied only on dedicated radio hardware.

While PTT cellphone technology has existed since the time of flip phones and Nextel, the reliability of 4G and LTE networks has made smartphone push-to-talk apps more attractive in recent years. AT&T, Verizon and Sprint offer PTT products for businesses over their networks, while Motorola Solutions Inc., which dominates the old-fashioned land mobile radio (LMR) market, sells a PTT platform that connects smartphones, desktops, radios and landlines.

Smaller vendors are also winning contracts — with hotels, hospitals, retailers, contractors, developers and recreation departments — by offering PTT apps improved by bots, location services, and the ability to integrate with business applications and collaboration tools. These platforms run over the top of cellular networks and Wi-Fi, connecting users over unlimited distances.

Orion Labs, based in San Francisco, sells small wearable devices that function as walkie-talkies by syncing with a smartphone app. Last week, the company upgraded its service, so workers using those tools can speak with colleagues equipped with only a smartphone or desktop app. Orion competes with vendors such as Voxer and Zello, which in mid-January extended a free-trial offering of its ZelloWork platform.

“The most important thing to understand is that this type of voice communication, walkie-talkies, really haven’t evolved very much in the last 20 years,” said Jesse Robbins, the co-founder and CEO of Orion Labs. The market, he said, is “long overdue for reinvention.”

Screenshot of Orion Labs' smartphone push-to-talk app

Cellular push-to-talk market poised for enterprise growth

In the United States, the PTT-over-cellular market will include 5.6 million business users by 2019, up from 4.2 million business users in 2016, according to projections by VDC Research, based in Natick, Mass.

First responders still depend on LMRs for their reliability. But other government agencies and industries with mobile workforces are increasingly adopting cellular PTT technologies to improve workflows and bring more workers into the PTT fold, said David Krebs, executive vice president of enterprise mobility and connected devices at VDC Research.

“In many ways, it is allowing organizations with significant investments in LMR to augment that user population,” Krebs said. “So that you can combine workers who have their LMR radios to other people who may be communicating with them.”

Companies and public agencies value being able to more closely integrate business applications with their PTT hardware, a functionality that isn’t possible with LMRs, Krebs said. Others choose smartphone push-to-talk apps to avoid the maintenance costs associated with aging LMR hardware.

Smartphone push-to-talk apps facilitate more than just conversations

Location services could be vital to inspiring more companies to adopt PTT smartphone apps, said Rob Arnold, an analyst with Frost & Sullivan, a global research and consulting firm headquartered in Santa Clara, Calif. The artificial intelligence tools included in most offerings are also a draw, he said.

PTT smartphone and desktop apps typically pinpoint the location of users on an interactive map, a tool that could allow managers to monitor and dispatch workers from a command center easily. Smart Walkie Talkie, based in Singapore, has software that automatically shifts airport crews into and out of PTT groups based on flight plans.
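
Smart Walkie Talkie hasn't published how that regrouping logic is implemented; purely as an illustration, the behavior it describes can be sketched in a few lines of Python (all names and data here are hypothetical):

```python
from dataclasses import dataclass, field

@dataclass
class PttChannel:
    """A push-to-talk group tied to one flight (hypothetical model)."""
    flight: str
    members: set[str] = field(default_factory=set)

def reassign_crews(channels: dict[str, PttChannel],
                   roster: dict[str, str]) -> None:
    """Move each worker into the channel for their currently
    assigned flight and out of any other channel."""
    for worker, flight in roster.items():
        for ch in channels.values():
            ch.members.discard(worker)
        channels.setdefault(flight, PttChannel(flight)).members.add(worker)

# Example: ground crew regroups as the day's flight plan changes.
channels: dict[str, PttChannel] = {}
reassign_crews(channels, {"alice": "SQ318", "bob": "SQ318", "chen": "TR452"})
reassign_crews(channels, {"alice": "TR452", "bob": "SQ318", "chen": "TR452"})
print({f: sorted(c.members) for f, c in channels.items()})
# {'SQ318': ['bob'], 'TR452': ['alice', 'chen']}
```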

The bots built into Orion Labs' smartphone push-to-talk app and wearable device can translate languages among users in a PTT group and tap into inventory and other databases. The platform also integrates with the cloud-based collaboration tool Slack, allowing text-to-voice and voice-to-text communication among teams.
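
Orion's integration is proprietary, but the voice-to-text half of that pattern is straightforward to sketch against Slack's standard incoming-webhook API; the webhook URL here is a placeholder, and the transcript would come from a speech-to-text engine in a real pipeline:

```python
import json
import urllib.request

SLACK_WEBHOOK = "https://hooks.slack.com/services/XXX/YYY/ZZZ"  # placeholder

def post_transcript_to_slack(speaker: str, transcript: str) -> None:
    """Relay a transcribed PTT message to a Slack channel via an
    incoming webhook (standard Slack webhook JSON payload)."""
    payload = {"text": f"*{speaker}* (via PTT): {transcript}"}
    req = urllib.request.Request(
        SLACK_WEBHOOK,
        data=json.dumps(payload).encode("utf-8"),
        headers={"Content-Type": "application/json"},
    )
    urllib.request.urlopen(req)

# In a real pipeline, a speech-to-text service would produce the transcript.
post_transcript_to_slack("Room 214 housekeeping", "Room ready for inspection")
```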

“The industries that we sell to, they have a large portion of workers who do not have a desk at work,” said Zhou Wenhan, the founder and CEO of Smart Walkie Talkie, which manufactures smartphones designed for PTT, selling primarily to markets in Southeast Asia. “So, it’s the kind of workers that are currently still invisible to most IT companies.

“I think the potential is basically reaching a market that traditionally doesn’t have much technology,” Wenhan said.

Still, some analysts are skeptical industries that never used PTT in the past will adopt the smartphone model in the future.

“There are lots of other applications that are competing with this for the carpeted office worker,” Arnold said. “The guys that traditionally haven’t had these devices in their hands probably aren’t going to get them just because they can load them onto their own smartphone.”

5G networks advance in U.S. with expanded trials

In recent telecom news, the subject is 5G networks almost all the time, whether that means fixed broadband or mobile 5G service trials. Telecom operators and equipment vendors have almost daily updates about 5G trials and rollout plans. AT&T and Verizon have recently announced more specifics about their paths to 5G services in the U.S.

Beyond next-generation wireless, public Wi-Fi networks will get shored-up security with the release of the new WPA3 standard later this year. And on the subject of security, Verizon recently acquired a threat detection startup that uses machine learning to detect compromised equipment within an organization.

Here’s a closer look at the details.

Operators move forward on 5G networks

Verizon plans to launch 5G fixed wireless service in three to five cities later this year, but the launch is only “one slice” of its broad 5G and overall network plans, Verizon CTO Hans Vestberg said at a recent investor relations event. 

The economics of Verizon's fixed broadband 5G service look good because the company is planning to move from operating seven vertical networks to one horizontal network with a unified core and fiber transport. Vestberg said Verizon will deploy an intelligent edge network able to serve a particular customer whether that customer is on Verizon's FiOS fiber service or its wireless LTE network. That will further improve the economics of Verizon's 5G fixed broadband services because most of the 5G network assets will be shared, Vestberg said.

The network evolution at Verizon will take years to complete, but it will be a major part of how Verizon reaches its target of saving $10 billion over the next four years, Vestberg added.

Looking toward 5G mobile services, AT&T plans to launch what it describes as mobile 5G service this year in 12 U.S. cities by using small cells deployed closer to the ground than the tower-top radios that support LTE. According to RCRWireless, AT&T's first round of mobile 5G will use millimeter wave spectrum (between 30 GHz and 300 GHz), which offers higher capacity than low-band spectrum but doesn't propagate over long distances, so the radios need to be closer together than in LTE deployments. AT&T's VP of network architecture, Hank Kafka, said millimeter wave radios can be placed on telephone poles, building rooftops or towers, but at a lower height than a macrocell because of the propagation characteristics. AT&T hasn't specified which of the 23 cities slated to receive its 5G Evolution infrastructure — described as a foundation for AT&T's evolution to full 5G while 5G standards are being finalized — will get the mobile services this year. Kafka said the rollouts will require significant zoning and permit negotiations.
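
The propagation gap is easy to quantify with the standard free-space path loss formula; real millimeter wave deployments fare even worse, since rain and building blockage add further loss:

```python
import math

def fspl_db(distance_km: float, freq_mhz: float) -> float:
    """Free-space path loss in dB: 20*log10(d_km) + 20*log10(f_MHz) + 32.44."""
    return 20 * math.log10(distance_km) + 20 * math.log10(freq_mhz) + 32.44

d = 1.0  # one kilometer
low_band = fspl_db(d, 700.0)     # 700 MHz low-band LTE spectrum
mmwave = fspl_db(d, 28_000.0)    # 28 GHz millimeter wave
print(f"700 MHz: {low_band:.1f} dB, 28 GHz: {mmwave:.1f} dB, "
      f"delta: {mmwave - low_band:.1f} dB")
# 28 GHz loses ~32 dB more over the same path, so cells must be far denser.
```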

Wi-Fi security upgrade incoming in 2018

Wi-Fi security is getting a long-awaited upgrade later this year. The Wi-Fi Alliance recently announced plans for WPA3, a new security standard that will replace WPA2, the nearly two-decade-old security protocol built into almost every wireless device.

According to ZDNet, the move to WPA3 will make open Wi-Fi networks found in places like airports and coffee shops safer by applying individualized data encryption that will scramble the connection between each device and the router. The security will also block an attacker after excessive failed password guesses.
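
The Wi-Fi Alliance had not yet published the full protocol details; conceptually, though, individualized encryption amounts to each client deriving its own session key with the access point, as in this toy X25519 key-agreement sketch (not the actual WPA3 handshake):

```python
from cryptography.hazmat.primitives.asymmetric.x25519 import X25519PrivateKey
from cryptography.hazmat.primitives.kdf.hkdf import HKDF
from cryptography.hazmat.primitives import hashes

def derive_session_key(my_priv, peer_pub) -> bytes:
    """Each client/AP pair derives its own key, so one client cannot
    decrypt another client's traffic on the same open network."""
    shared = my_priv.exchange(peer_pub)
    return HKDF(algorithm=hashes.SHA256(), length=32,
                salt=None, info=b"toy-wifi-session").derive(shared)

ap = X25519PrivateKey.generate()
client_a = X25519PrivateKey.generate()
client_b = X25519PrivateKey.generate()

key_a = derive_session_key(client_a, ap.public_key())
key_b = derive_session_key(client_b, ap.public_key())
assert derive_session_key(ap, client_a.public_key()) == key_a  # AP agrees
assert key_a != key_b  # every device gets a distinct session key
```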

Verizon acquires autonomous threat detection startup

Verizon recently acquired Niddel, an autonomous threat detection company that uses machine learning to detect compromised or infected devices inside an organization. The acquisition price was not disclosed. Founded in 2014, the company's primary product, Niddel Magnet, is a subscription-based automated service that reduces the need for organizations to hire qualified security analysts to deal with compromised machines.

According to TechCrunch (a publication owned by Verizon), Niddel uses a variety of information from more than 50 internal and external sources to track security threats that could affect machines in customer organizations.
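
Niddel's models are proprietary, but the general shape of the approach, a classifier scoring per-device telemetry, can be sketched with scikit-learn; the features and training data below are entirely invented:

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

# Hypothetical per-device features: [dns_requests/hr, unique_dest_ips/hr,
# bytes_out_mb/hr, connections_to_known_bad_asns]
X_train = np.array([
    [120,  15,  4, 0],   # normal workstation
    [ 90,  10,  2, 0],   # normal workstation
    [950, 400, 80, 3],   # beaconing / exfiltration pattern
    [700, 350, 60, 2],   # beaconing / exfiltration pattern
])
y_train = np.array([0, 0, 1, 1])  # 1 = likely compromised

clf = RandomForestClassifier(n_estimators=100, random_state=0)
clf.fit(X_train, y_train)

# Score a new device's last hour of telemetry.
device = np.array([[800, 380, 70, 1]])
print("P(compromised) =", clf.predict_proba(device)[0][1])
```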

“Using machine learning to improve information accuracy significantly reduces false positives and significantly improves our detection and response capabilities,” Alexander Schlager, Verizon’s executive director of security services, said in a statement. Verizon has said it will look to incorporate Niddel’s technology into Verizon products and services in the coming months.

Salesforce databases remain Oracle, for now

Speculation has grown over whether Salesforce plans to move off of Oracle databases. That may or may not be the case, according to two analysts.

In 2013, Salesforce CEO Marc Benioff and Oracle CEO Larry Ellison held a joint phone call lauding the nine-year licensing agreement under which Salesforce would use Oracle's database to host its core products. Many analysts were on the call, but no questions were allowed, according to John Rymer, a Forrester Research vice president and principal analyst who was on the call. Instead, the analysts were instant messaging among themselves, speculating about what the partnership might mean for the coming years.

“A bunch of us were chatting in the background, saying Benioff hung up the phone, walked down the hall and said to his tech people, ‘You have nine years to get me off Oracle,'” Rymer recalled.

Salesforce has consistently needed to outsource its databases and infrastructure, providing its competitors — namely, Oracle — with a substantial licensing fee and a brand-name customer. In early January, CNBC reported that Salesforce and Amazon had made “significant progress” moving away from Oracle technology. While the report was poorly sourced and “mixes up things,” according to Rymer, he did agree with the sentiment that Salesforce is certainly working to rid itself of its reliance on one of its main competitors.

Marc Benioff, CEO, Salesforce

“[Salesforce] wants control of their own tech stack and to the extent they can get what they need from a lower-cost option or an open source option,” Rymer said. “I’m quite confident they’re working on that.”

Whether or not Salesforce is working on a proprietary relational database depends on who you ask. Founders of both Oracle and Salesforce have scoffed at the notion that Salesforce is leaving its database.

Larry Ellison, CEO, Oracle

“Salesforce isn’t moving off of Oracle,” Ellison said during Oracle’s December earnings call. “Our competitors, who they have no reason to like us very much, continue to invest in and run their entire business on Oracle … Salesforce runs entirely on Oracle. I mean, go ahead. You tell me who’s moving off of Oracle.”

That earnings call was a little more than two weeks before the public reports trickled out about Salesforce and Amazon working to move off of Oracle. The reports stated that Amazon has been building its own infrastructure, called Redshift, since 2000, and Salesforce is building out infrastructure with the code name Sayonara — Japanese for “goodbye.”

Denis Pombriant, founder and managing principal at Beagle Research, said he believes building Salesforce databases would be a waste of resources and would run against the direction in which Salesforce is currently moving.

“The best investment is in a disruptive innovation, and building a database is not that,” Pombriant said. “A disruptive innovation is adding analytics, or building out a better mobile product or innovating around Trailhead — all things Salesforce has done and poured decent money into — and those will provide returns. But building a database to use internally and maybe try to sell in the marketplace does the opposite. It’s a drain of resources.”

Salesforce’s new partnerships

Asked about speculation that it is working toward leaving Oracle, the company said only that “Salesforce does not comment on rumors.”

Regardless of Salesforce’s stance on Oracle, the San Francisco-based company has partnered with several other infrastructure leaders over the past year, with an Amazon Web Services (AWS) partnership stemming from 2017 and a newer partnership with Google Cloud announced at Dreamforce in November. That, combined with Salesforce’s influx of startup technology through its string of purchases in recent years, signals that Salesforce is hoping to modernize its infrastructure, according to Rymer.

“Startups tend to not base their technology on Oracle databases,” Rymer said. “Since 2013, Salesforce has made a bunch of acquisitions, launched a lot of products, and a lot of those newer products are running on AWS.”

Considerations with or without Oracle

Whether a potential switch to Salesforce databases affects its customers depends on how seamless the data migration is.

“If [building out Salesforce databases] impacts customers, then Salesforce has failed,” Rymer said. “Salesforce customers don’t see the Oracle database. So, any movement has to be seamless.”

Pombriant, however, who asked Salesforce co-founder Parker Harris if Salesforce is moving on from Oracle, said it is more likely Salesforce is building capabilities to work with Oracle’s database.

If [building out Salesforce databases] impacts customers, then Salesforce has failed.
John Rymer, vice president and principal analyst, Forrester

“Parker Harris told me directly that won’t be [Salesforce’s] direction,” Pombriant said. He added that Harris said Salesforce is working on technologies that add capabilities around the edges — and it’s those projects that could be the Sayonara products to which the public reports refer. “It’s quite possible — I know that Oracle and Salesforce admit they have a relationship, and Oracle developers work with Salesforce to better understand what each is doing.”

Building out proprietary Salesforce databases — while possible — just doesn’t seem like a feasible option to Pombriant.

“I’m not skeptical that [Salesforce] can pull it off; I’m skeptical they intend to,” Pombriant said. “There’s the old joke that if you give a million monkeys typewriters, in a million years, someone will write a Shakespearian play — great. You can say the same thing about building databases. It’s a bad business move and doesn’t produce tangible, bankable results.”

Celebrate the Season of Giving with Xbox

This holiday season is all about giving. Whether you’re giving time to loved ones, giving gifts to family and friends, or giving your stomach more food than it can handle, Xbox wants to make it easy to play together and give together. From December 21 to January 4, you can join Xbox Game Pass and Xbox Live Gold for $1 each! Also, you can share your love of gaming by gifting an Xbox Game Pass membership.

Not only are we giving our fans great deals, we have a great opportunity for the Xbox community to give back to kids across the world. Xbox is partnering with GameChanger to make a difference for children in hospitals. For each Xbox Game Pass membership purchased or gifted from December 21 to January 4, Xbox will donate $10 of Xbox Game Pass to hospitals around the world through GameChanger.

Here at Xbox, we believe in accessibility, diversity, and inclusion. We also believe that our passionate and awesome community of Xbox fans loves to make a difference in people’s lives. That is why we are incredibly excited and humbled to team up with GameChanger this holiday to positively impact the lives of children facing life-threatening illnesses. Over the past several years, Microsoft has partnered with GameChanger and other organizations to place hundreds of thousands of Xbox consoles in hospitals around the world, with the goal of providing entertainment to hospitalized children. Now we want to give our great community a chance to get involved as well.

Play 100+ Games for $1

Get your first month of Xbox Game Pass for $1 and spend your holiday with unlimited access to over 100 Xbox One and Xbox 360 games on the Xbox One family of devices. With new games added every month, there is always a new adventure waiting for you. All purchases benefit Season of Giving with Xbox. You must be signed in to get this offer (not available for existing members).

Go Gold for $1

Get your first month of Xbox Live Gold for $1! Go Gold for exclusive savings and join the best community of gamers on the most advanced multiplayer network. All purchases benefit Season of Giving with Xbox. You must be signed in to get this offer (not available for existing members).

Already own Xbox Game Pass or Xbox Live Gold?

Here’s how you can still get involved with the cause! Give the gift of 100+ games this holiday season by gifting Xbox Game Pass at full price. You can also extend your own Xbox Game Pass or Xbox Live Gold membership at full price.

For every Xbox Game Pass or Xbox Live Gold membership you buy or gift, Xbox will donate $10 of Xbox Game Pass to hospitals around the world through Season of Giving with Xbox.

Let’s play together & give together to bring joy to hospitalized children around the world. We hope we can make it a bit easier to make a difference this holiday season. Thank you for reading and have a wonderful end to the year.

Radiology AI and deep learning take over RSNA 2017

CHICAGO — As the medical imaging world debates whether machines are supplanting humans, veteran radiology AI thinker Curtis Langlotz, M.D., offered what is becoming a widely held view of the profession’s technology future.

“To the question, will AI replace radiologists, I say the answer is no. But radiologists who do AI will replace radiologists who don’t,” Langlotz, professor of radiology and biomedical informatics at Stanford University School of Medicine, said to a packed hall at the RSNA 2017 conference.

The setting was a scientific panel during the 103rd Scientific Assembly and Annual Meeting of the Radiological Society of North America, held at the McCormick Place conference center.

RSNA show vigorous in its second century

RSNA, with more than 54,000 members from around the world, annually stages the biggest healthcare conference and exposition on the continent. This year, the event attracted some 50,000 attendees, nearly half of them medical imaging professionals, and 667 exhibitors — mostly vendors.

In addition to artificial intelligence and various forms of machine learning, RSNA 2017 was more deeply immersed than ever before in value-based imaging, the pursuit of quality over volume, as the U.S. healthcare system moves in that direction.

The RSNA 2017 exposition floor at the McCormick Place conference center in Chicago.

Deconstructing PACS

Also as strong as ever were picture archiving and communications systems (PACS) and vendor-neutral archive (VNA) technologies and systems for storing and viewing complex medical images, including the increasingly popular strategy of “deconstructing PACS” — stitching together parts of PACS from various vendors.

But radiology AI and deep learning — a subset of machine learning that uses advanced statistical techniques to enable computers to improve at tasks with experience — were probably the hottest topics at RSNA 2017.

Indeed, Langlotz’s session — and dozens of other panels on AI, deep learning and machine learning in radiology and other imaging-intensive specialties — drew overflow crowds.

Radiology AI excitement and reality

To the question, will AI replace radiologists, I say the answer is no. But radiologists who do AI will replace radiologists who don’t.
Curtis Langlotz, M.D., professor of radiology and biomedical informatics at Stanford University School of Medicine

“We’re definitely right in the eye of the storm of the hype cycle,” Rasu Shrestha, M.D., chief innovation officer at University of Pittsburgh Medical Center, told SearchHealthIT on the busy “technical exhibition,” or show, floor. “Having said that, that hype is being driven by an immense amount of hope. Could AI and machine learning solve for the complexities of healthcare?”

Langlotz acknowledged that radiology AI has already been through a number of hype-bust cycles in recent decades, but his work and that of colleagues at the Mayo Clinic and The Ohio State University, among others, shows that AI and machine learning have made dramatic progress.

Luciano Prevedello, M.D., division chief for medical imaging informatics at The Ohio State University Wexner Medical Center, said at the same deep learning session that “from 2014 to 2015 is when the algorithms started surpassing the human ability to classify” medical image data.

Experts say AI can aid imaging now

The radiology AI and deep learning experts said the software technologies, which require supercomputer-level computing power, can help radiologists and other imaging professionals on a practical basis.

For example, today, AI and deep learning can help physicians more efficiently produce images, improve quality of images, triage and classify images, serve in computer-aided detection of medical problems, and perform automated report drafting, Langlotz said.

As for value-based imaging, one radiology IT expert, Jim Whitfill, M.D., chief medical officer at Innovation Care Partners, a physician-led accountable care organization in Scottsdale, Ariz., said radiologists have opportunities to benefit financially from value-based care if they take on financial risk as ACOs do.

Value-based care and imaging not going away

During a panel on ACOs and value-based care, Whitfill noted that despite recent moves by the administration of President Donald Trump to trim several value-based care programs, federal healthcare officials are still behind the healthcare reimbursement approach, which Whitfill said has firm supporters.

“It’s absolutely critical that radiologists bring their talent around leadership, information technology and the larger healthcare system to bear as organizations begin to make this shift” toward value-based care, Whitfill said.

In an interview, Whitfill said one of the biggest technological advances in medical imaging that will help in the move toward value-based care is enterprise imaging.

“Historically we’ve been very focused on radiology in the PACS system,” Whitfill said. “But now, organizations are not only adding cardiology images, but also ophthalmology images, dermatology images and others, so we’re seeing a revolution in terms of the imaging platforms moving all these images into one place.”

Cloud-hosted apps catching on to meet user demand

With a variety of available services, now’s a good time for IT administrators to consider whether cloud-hosted apps are a good option.

Offerings such as Citrix XenApp Essentials and Amazon Web Services (AWS) AppStream allow IT to stream desktop applications from the cloud to users’ endpoints. Workflow and automation provider IndependenceIT also has a new offering, AppServices, based on Google Cloud Platform. Organizations adopt these types of services to get benefits around centralized management, scalability and more.

More organizations are considering cloud-hosted apps, because IT needs to become a service provider to meet the growing application demands of both external customers and internal users, said Agatha Poon, research director at 451 Research.

“You get requirements from different teams, and all want to have quicker ways to get applications, quicker ways to deploy services,” Poon said. “So, then, you need some sort of mechanism … to support that.”

What application hosting services offer

Application streaming services are an alternative to on-premises application virtualization, in which organizations host applications in their own data centers.

XenApp Essentials and AppStream place an application in the cloud and let IT admins assign a group of users to it. But just delivering applications through the cloud is not enough; an app hosting service should also provide a way to manage the app publishing lifecycle. Otherwise, IT is left to connect data assets to the app and to set controls for where users are allowed to move the data.

Some app streaming services require organizations to use another set of tools for those management tasks. For example, in the case of AWS, IT must manually configure storage using the Amazon Simple Storage Service (S3) and connect it back to AppStream if it wants additional storage.
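
As a rough sketch of what that manual step involves, here is a minimal boto3 example that provisions such a bucket; the bucket name is a placeholder, and attaching it to an AppStream image remains a separate, largely manual task:

```python
import boto3

s3 = boto3.client("s3", region_name="us-east-1")

# Create a bucket to hold application data for streamed sessions.
s3.create_bucket(Bucket="example-appstream-user-data")  # placeholder name

# Lock the bucket down; streamed apps reach it via IAM roles, not public access.
s3.put_public_access_block(
    Bucket="example-appstream-user-data",
    PublicAccessBlockConfiguration={
        "BlockPublicAcls": True,
        "IgnorePublicAcls": True,
        "BlockPublicPolicy": True,
        "RestrictPublicBuckets": True,
    },
)
```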

Swizznet, a hosting provider for accounting applications, adopted IndependenceIT's AppServices in September 2016 to deliver apps internally and to customers. The company moved away from XenApp Essentials because IndependenceIT provided more native management capabilities, said Mike Callan, CEO of Swizznet, based in Seattle.

We wanted the ability to automatically scale and spin additional servers.
Mike Callan, CEO, Swizznet

“We wanted the ability to automatically scale and spin additional servers, where we could essentially have that capability automated instead of paying engineers to do that,” Callan said.

Citrix’s business problems over the past few years were also a factor in making the switch, Callan said.

“Citrix is, unfortunately, just a company more or less in disarray, so they haven’t been able to keep up with the value proposition that they once had,” he added.

Application streaming services can also help organizations deliver apps that they don’t have the resources to host on-premises. Cornell University has used Amazon AppStream 2.0 since early 2017 and took advantage of the new GPU-powered features that aim to reduce the cost of delivering graphics-intensive apps.

These features have opened up more kinds of software that Cornell can deliver to students, said Marty Sullivan, a DevOps cloud engineer at the university in Ithaca, N.Y. Software such as ANSYS, Dassault Systemes Solidworks, and Autodesk AutoCAD and Inventor help students and faculty run simulations and design mechanical parts, but they only perform well when a GPU is available.

“[Departments] will be able to deliver these specialized pieces of software without having to redevelop them for another platform,” Sullivan said.

The different services within Google Cloud Platform

Cloud market pushes app hosting forward

When it comes to cloud infrastructure services, Google ranked third behind AWS and Microsoft in Gartner's 2017 Magic Quadrant. But Google Cloud Platform made the deployment of AppServices easy, Callan said. He was able to go through the auto-deployment quick-start guide and set it up himself in just a couple of days.

The increasing reliance on cloud services and the rise of workers using multiple devices to get their jobs done are driving the app streaming trend. Providing company-approved cloud-hosted apps for such employees makes deployment and management easier. IT admins don’t have to physically load any apps on the machines, nor do the employees with the machines need to be present for IT to keep tabs on the usage and security of those apps.

New VEP Charter promises vulnerability transparency

The White House wants more transparency in how federal agencies determine whether or not to disclose software vulnerabilities, but there are early questions regarding how it might work.

The Vulnerabilities Equities Process (VEP) was designed to organize how federal agencies would review vulnerabilities and decide if a flaw should be kept secret for use in intelligence or law enforcement operations or disclosed to vendors. The new VEP Charter announced by Rob Joyce, special assistant to the President and cybersecurity coordinator for the National Security Council, aims to ensure the government conducts “the VEP in a manner that can withstand a high degree of scrutiny and oversight from the citizens it serves.”

“I believe that conducting this risk/benefit analysis is a vital responsibility of the Federal Government,” Joyce wrote in a public statement. “Although I don’t believe withholding all vulnerabilities for operations is a responsible position, we see many nations choose it. I also know of no nation that has chosen to disclose every vulnerability it discovers.”

Joyce laid out the “key tenets” of the new VEP Charter, including increased transparency and an annual report, improved standardization of the process regarding the interests of various stakeholders and increased accountability.

“We make it clear that departments and agencies with protective missions participate in VEP discussions, as well as other departments and agencies that have broader equities, like the Department of State and the Department of Commerce. We also clarify what categories of vulnerabilities are submitted to the process and ensure that any decision not to disclose a vulnerability will be reevaluated regularly,” Joyce wrote. “There are still important reasons to keep many of the specific vulnerabilities evaluated in the process classified, but we will release an annual report that provides metrics about the process to further inform the public about the VEP and its outcomes.”

Questions about the VEP Charter

The VEP has previously been criticized by experts for being optional rather than codified into law, and the new VEP Charter neither makes the process a requirement nor acknowledges the PATCH Act, a bill proposed in Congress that would enforce a framework for using the VEP.

Heather West, senior policy manager and Americas principal at Mozilla, noted in a blog post that “many of the goals of the PATCH Act [are] covered in this process release, [but] our overarching goal in codifying the VEP in law to ensure compliance and permanence cannot be met by unilateral executive action.”

Early readings of the VEP Charter have revealed what some consider a conflict of interest, in that the NSA is designated as the VEP Executive Secretariat with the responsibility to “facilitate information flow, discussions, determinations, documentation, and recordkeeping for the process.”

However, the VEP Charter also states that any flaw found in NSA-certified equipment or systems should be “reported to NSA as soon as practical. NSA will assume responsibility for this vulnerability and submit it formally through the VEP Executive Secretariat.”

Additionally, some have taken issue with the following clause in the VEP Charter: “The [U.S. government’s] decision to disclose or restrict vulnerability information could be subject to restrictions by foreign or private sector partners of the USG, such as Non-Disclosure Agreements, Memoranda of Understanding, or other agreements that constrain USG options for disclosing vulnerability information.”

Edward Snowden said on Twitter that this could be considered an “enormous loophole permitting digital arms brokers to exempt critical flaws in U.S. infrastructure from disclosure” by using an NDA.

Following Equifax breach, CEO doesn’t know if data is encrypted

Equifax’s interim CEO said during a congressional hearing that he doesn’t know whether or not the company now encrypts customer data.

Equifax alerted the public in September 2017 to a massive data breach that exposed the personal and financial information — including names, birthdays, credit card numbers and Social Security numbers — of approximately 145 million customers in the United States to hackers. Following the Equifax breach, the former CEO Richard Smith and the current interim CEO Paulino do Rego Barros Jr. were called to testify before the Committee on Commerce, Science, and Transportation this week for a hearing titled “Protecting Consumers in the Era of Major Data Breaches.”

During the hearing, Sen. Cory Gardner (R-Colo.) questioned Smith and Barros about Equifax’s use of — or lack of — encryption for customer data at rest. Smith confirmed that the company was not encrypting data at the time of the Equifax breach, and Gardner questioned whether or not that was intentional.

“Was the fact that [customer] data remained unencrypted at rest the result of an oversight, or was that a decision that was made to manage that data unencrypted at rest?” Gardner asked Smith.

Smith pointed out that encryption at rest is just one method of security, but eventually confirmed that a decision was made to leave customer data unencrypted at rest.

“So, a decision was made to leave it unencrypted at rest?” Gardner pushed.

“Correct,” Smith responded.

Gardner moved on to Barros and asked whether he has implemented encryption for data at rest since he took over the position on Sept. 26.

Barros began to answer by saying that Equifax has done a “top-down review” of its security, but Gardner interrupted, saying it was a yes or no question. Barros stumbled again and said it was being reviewed as part of the response process and Gardner pushed again.

“Yes or no, does the data remain unencrypted at rest?”

“I don’t know at this stage,” Barros responded.

Gardner appeared stunned by Barros’ answer and pointed out that a lack of encryption was essentially what caused this massive Equifax breach. Smith attempted to make the situation better.

“Senator, if I may. It’s my understanding that the entire environment [in] which this criminal attack occurred is much different; it’s a more modern environment with multiple layers of security that did not exist before. Encryption is only one of those layers of security,” Smith said.

Also testifying at the hearing was a panel of experts in security and privacy, as well as the former CEO of Yahoo Inc., which revealed in September 2017 that its data breach in 2013 affected 3 billion user accounts.

Gardner then turned to Todd Wilkinson, president and CEO of Entrust Datacard and a member of the panel, and asked whether it is irresponsible not to encrypt customer data at rest. Wilkinson pointed out that industry standards such as PCI DSS require retailers and others to encrypt precisely the kind of information that Equifax did not.
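
Field-level encryption at rest is not exotic. A minimal sketch using the Python cryptography library's Fernet recipe follows; key management, which is the hard part in practice, is elided, and the sample value is a well-known dummy SSN:

```python
from cryptography.fernet import Fernet

# In production the key lives in an HSM or KMS, never beside the data.
key = Fernet.generate_key()
f = Fernet(key)

ssn_plaintext = b"078-05-1120"           # well-known dummy SSN value
ssn_at_rest = f.encrypt(ssn_plaintext)   # what the database actually stores

assert f.decrypt(ssn_at_rest) == ssn_plaintext
print(ssn_at_rest[:20], b"...")          # ciphertext, useless without the key
```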

Equifax still faces over 240 class action suits following the data breach, including lawsuits from multiple classes of consumers, as well as shareholders and financial institutions that claim to be affected by the breach.

In other news

  • A group of researchers from the University of Florida has discovered several vulnerabilities in IEEE’s P1735 cryptography standard. P1735 is supposed to encrypt intellectual property used in chip design so it can’t be reverse-engineered and used without payment. Animesh Chhotaray, Adib Nahiyan, Domenic Forte, Mark Tehranipoor and Thomas Shrimpton took a closer look at the P1735 standard in their paper, “Standardizing Bad Cryptographic Practice.” “We find a surprising number of cryptographic mistakes in the standard,” the researchers said. “In the most egregious cases, these mistakes enable attack vectors that allow us to recover the entire underlying plaintext [intellectual property] IP.” Some of the flaws found in the standard enable hackers to decrypt the IP protected by P1735 and alter it to inject hidden malware. “Some of these attack vectors are well-known, e.g. padding-oracle attacks,” the research group said. “Others are new, and are made possible by the need to support the typical uses of the underlying IP; in particular, the need for commercial system-on-chip (SoC) tools to synthesize multiple pieces of IP into a fully specified chip design and to provide syntax errors.” (A toy sketch of a padding oracle follows this list.)
  • Security experts have found a faster, more affordable way to exploit chips from Infineon Technologies. The flaw, known as Return of Coppersmith’s Attack, or ROCA, is in Infineon’s key generation library for RSA encryption and could enable attackers to steal the keys of vulnerable devices. ROCA was first made public in October 2017 by researchers from the Czech Republic, the U.K. and Italy. However, this week, two other security researchers, Daniel J. Bernstein and Tanja Lange, published a blog post showing a faster, cheaper way to exploit the flaw, which was originally dismissed as too difficult and too expensive to register as a major threat. The affected devices include Gemalto’s IDPrime .NET smart cards and Estonia’s national ID cards. The primary concern is that the flaw could enable voter fraud, because Estonia’s citizens use their ID cards to vote. Previously, it was thought that hacking just one ID card would cost $80,000, but the new research shows a way to do it for $20,000.
  • It’s now known that a previously disclosed kill switch in the code of Intel’s Management Engine (ME) can be exploited via a USB port. The Intel ME is an embedded subsystem on most Intel chips manufactured since 2008; it functions as its own processor, separate from the main CPU and operating system of the device. It was previously assumed that the Intel ME was secured against attacks, but in August 2017, Positive Technologies disclosed its finding that there is a way to disable the Intel ME. Now, it’s been revealed that recent versions of Intel ME expose Joint Test Action Group (JTAG) debugging interfaces that can be reached through USB ports. JTAG gives hackers access to the code running on the chip, and thus to the firmware. This means Intel ME is less secure than previously thought, because access to the firmware can expose any number of exploitable security vulnerabilities.
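
The first bullet above mentions padding-oracle attacks against P1735. As a toy illustration of the generic weakness (not the P1735 scheme itself), the sketch below builds a CBC decryptor that leaks whether padding is valid; that single bit, queried repeatedly, is all a padding-oracle attack needs to recover plaintext byte by byte:

```python
import os
from cryptography.hazmat.primitives.ciphers import Cipher, algorithms, modes
from cryptography.hazmat.primitives import padding

KEY, IV = os.urandom(16), os.urandom(16)

def encrypt(plaintext: bytes) -> bytes:
    """AES-CBC with PKCS7 padding, as a stand-in for an IP encryptor."""
    padder = padding.PKCS7(128).padder()
    padded = padder.update(plaintext) + padder.finalize()
    enc = Cipher(algorithms.AES(KEY), modes.CBC(IV)).encryptor()
    return enc.update(padded) + enc.finalize()

def oracle(ciphertext: bytes) -> bool:
    """Returns True iff padding is valid -- exactly the one bit of
    leakage a padding-oracle attack exploits."""
    dec = Cipher(algorithms.AES(KEY), modes.CBC(IV)).decryptor()
    padded = dec.update(ciphertext) + dec.finalize()
    try:
        unpadder = padding.PKCS7(128).unpadder()
        unpadder.update(padded) + unpadder.finalize()
        return True
    except ValueError:
        return False

ct = bytearray(encrypt(b"secret IP netlist"))
print(oracle(bytes(ct)))   # True: padding intact
ct[-1] ^= 0x01             # tamper with the last ciphertext byte
print(oracle(bytes(ct)))   # almost certainly False: padding broke
```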