
Jake Braun discusses the Voting Village at DEF CON

Election security continues to be a hot topic, as the 2018 midterm elections draw closer. So, the Voting Village at DEF CON 26 in Las Vegas wanted to re-create and test every aspect of an election.

Jake Braun, CEO of Cambridge Global Advisors, based in Arlington, Va., and one of the main organizers of the DEF CON Voting Village, discussed the pushback the event has received and how he hopes the event can expand in the future.

What were the major differences between what the Voting Village had this year compared to last year?

Jake Braun: The main difference is it’s way bigger. And we’ve got, end to end, the voting infrastructure. We’ve got voter registration, a list of voters in the state of Ohio that are in a cyber range that’s basically like a county clerk’s network. Cook County, Illinois, their head guy advised us on how to make it realistic [and] make it like his network. We had that, but we didn’t have the list of voters last year.

That’s the back end of the voter process with the voter infrastructure process. And then we’ve got machines. We’ve got some new machines and accessories and all this stuff.

Then, on the other end, we’ve got the websites. This is the last piece of the election infrastructure that announces the results. And so, obviously, we’ve got the kids hacking the mock websites.

What prompted you to make hacking the mock websites an event for the kids in R00tz Asylum?

Braun: It was funny. I was at [RSA Conference], and we’ve been talking for a long time about, how do we represent this vulnerability in a way that’s not a waste of time? Because the guys down in the [Voting Village], hacking websites is not interesting to them. They’ve been doing it for 20 years, or they’ve known how to do it for 20 years. But this is the most vulnerable part of the infrastructure, because it’s [just] a website. You can cause real havoc.

I mean, the Russians — when they hacked the Ukrainian website and changed it to show their candidate won, and the Ukrainians took it down, fortunately, they took it down before anything happened. But then, Russian TV started announcing their candidate won. Can you imagine if, in November 2020, the Florida and Ohio websites are down, and Wolf Blitzer is sitting there on CNN saying, ‘Well, you know, we don’t really know who won, because the Florida and Ohio websites are down,’ and then RT — Russian Television — starts announcing that their preferred candidate won? It would be chaos.

Anyway, I was talking through this with some people at [RSA Conference], and I was talking about how it would be so uninteresting to do it in the real village or in the main village. And the guy [I was talking to said], ‘Oh, right. Yeah. It’s like child’s play for them.’

I was like, ‘Exactly, it’s child’s play. Great idea. We’ll give it to R00tz.’ And so, I called up Nico [Sell], and she was like, ‘I love it. I’m in.’ And then, the guys who built it were the Capture the Packet guys, who are some of the best security people on the planet. I mean, Brian Markus does security for … Aerojet Rocketdyne, one of the top rocket manufacturers in the world. He sells to [Department of Defense], [Department of Homeland Security] and the Australian government. So, I mean, he is more competent than any election official we have.

The first person to get in was an 11-year-old girl, and she got in in 10 minutes. Totally took over the website, changed the results and everything else.

How did it go with the Ohio voter registration database?

Braun: The Secretaries of State Association criticized us, [saying], ‘Oh, you’re making it too easy. It’s not realistic,’ which is ridiculous. In fact, we’re protecting the voter registration database with this Israeli military technology, and no one has been able to get in yet. So, it’s actually probably the best protected list of voters in the country right now.

Have you been able to update the other machines being used in the Voting Village?

Braun: Well, a lot of it is old, but it’s still in use. The only thing that’s not in use is the WinVote, but everything else that we have in there is in use today. Unlike other stuff, they don’t get automatic updates on their software. So, that’s the same stuff that people are voting on today.

Have the vendors been helpful at all in providing more updated software or anything?

Braun: No. And, of course, the biggest one sent out a letter in advance to DEF CON again this year saying, ‘It’s not realistic and it’s unfair, because they have full access to the machines.’

Do people think these machines are kept in Fort Knox? I mean, they are in a warehouse or, in some places, in small counties, they are in a closet somewhere — literally. And, by the way, Rob Joyce, the cyber czar for the Trump administration who’s now back at NSA [National Security Agency], in his talk [this year at DEF CON, he basically said], if you don’t think that our adversaries are doing exactly this all year so that they know how to get into these machines, you’re insane.

The thing is that we actually are playing by the rules. We don’t steal machines. We only get them if people donate them to us, or if we can buy them legally somehow. The Russians don’t play by the rules. They’ll just go get them however they want. They’ll steal them or bribe people or whatever.

They could also just as easily do what you do and just get them secondhand.

Braun: Right. They’re probably doing that, too.

Is there any way to test these machines in a way that would be acceptable to the manufacturers and U.S. government?

Braun: The unfortunate thing is that, to our knowledge, the Voting Village is still the only public third-party inspection — or whatever you want to call it — of voting infrastructure.


The vendors and others will get pen testing done periodically for themselves, but that’s not public. All these things are done, and they’re under [nondisclosure agreement]. Their customers don’t know what vulnerabilities they found and so on and so forth.

So, the unfortunate thing is that the only time this is done publicly by a third party is when it’s done by us. And that’s once a year for two and a half days. This should be going on all year with all the equipment, the most updated stuff and everything else. And, of course, it’s not.

Have you been in contact with the National Institute of Standards and Technology, as they are in the process of writing new voting machine guidelines?

Braun: Yes. This is why DEF CON is so great, because everybody is here. I was just talking to them yesterday, and they were like, ‘Hey, can you get us the report as soon as humanly possible? Because we want to take it into consideration as we are putting together our guidelines.’ And they said they used our report last year, as well.

How have the election machines fared against the Voting Village hackers this year?

Braun: Right, of course, they were able to get into everything. Of course, they’re finding all these new vulnerabilities and all this stuff. 

The greatest thing that I think came out of last year was that the state of Virginia wound up decommissioning the machine that [the hackers] got into in two minutes remotely. They decommissioned that and got rid of the machine altogether. And it was the only state that still had it. And so, after DEF CON, they had this emergency thing to get rid of it before the elections in 2017.

What’s the plan for the Voting Village moving forward?

Braun: We’ll do the report like we did last year. Out of all the guidelines that have come out since 2016 on how to secure election infrastructure, none of them talk about how to better secure your reporting websites or, since they are kind of impossible to secure, what operating procedures you should have in place in case they get hacked.

So, we’re going to include that in the report this year. And that will be a big addition to the overall guidelines that have come out since 2016.

And then, next year, I think, it’s really just all about, what else can we get our hands on? Because that will be the last time that any of our findings will be able to be implemented before 2020, which is, I think, when the big threat is.

A DEF CON spokesperson said that most of the local officials that responded and are attending have been from Democratic majority counties. Why do you think that is?

Braun: That’s true, although [Neal Kelley, chief of elections and registrar of voters for] Orange County, attended. Orange County is pretty Republican, and he is a Republican.

But I think it winds up being this functionally odd thing where urban areas are generally Democratic, but because they are big, they have a bigger tax base. So then, the people who run them have more money to do security and hire security people. So, they kind of necessarily know more about this stuff.

Whereas if you’re in Allamakee County, Iowa, with 10,000 people, the county auditor who runs the elections there, that guy or gal — I don’t know who it is — but they are both the IT and the election official and the security person and the whatever. You’re just not going to get the specialized stuff, you know what I mean?

Do you have any plans to try to boost attendance from smaller counties that might not be able to afford sending somebody here or plans on how to get information to them?

Braun: Well, that’s why we do the report. This year, we did a mailing of 6,600 pieces of mail to all 6,600 election officials in the country and two emails and 3,500 live phone calls. So, we’re going to keep doing that.
 
And that’s the other thing: We just got so much more engagement from local officials. We had a handful come last year. We had several dozen come this year. None of them were public last year. This year, we had a panel of them speaking, including DHS [Department of Homeland Security].

So, that’s a big difference. Despite the stupid letter that the Secretaries of State Association sent out, a lot of these state and local folks are embracing this.

And it’s not like we think we have all the answers. But you would think if you were in their position and with how cash-strapped they are and everything, that they would say, ‘Well, these guys might have some answers. And if somebody’s got some answers, I would love to go find out about those answers.’

Challenges of blockchain muddle understanding of the technology

I have read many articles and generalized argle-bargle on the topic of blockchain and cryptocurrencies, and a couple of things stand out: Nobody has a great definition for either, and the two are often so thoroughly conflated that most attempts at cogent definitions are pointless.

Nobody seems to agree on what a blockchain is — or isn’t — except in some loose, arm-flapping way. That lack of understanding represents one of the most significant challenges of blockchain. Cryptocurrencies are mostly understood in that “I’ll know it when I see it” way, where we agree on a vague idea without understanding the core of the idea. If you ask the average person about blockchain or cryptocurrencies, and to the extent that she is aware of either, the answer you’ll probably get is simple: bitcoin.

Conceptually, of course, the idea of a blockchain is like the idea of one of its main components: cryptography. Cryptography is understood as a monolithic thing in only the most abstracted macro sense possible, where different types of cryptography — among them symmetrical, asymmetrical or public key — are all implemented in vastly different manners.

Blockchains are the same: The bitcoin blockchain underpinning the popular currency bearing its name is not the same as the Ethereum blockchain upon which the cryptocurrency Ether sits. And this is where not having good definitions of what we’re talking about hurts the larger conversation around how this technology can add value outside of currency applications.

What are the challenges of blockchain in an evolving marketplace?

At its core, a blockchain is a distributed system of recording and storing transaction records. Think of it as a superdatabase — one where each participant maintains, calculates and updates new entries. More importantly, nodes work together to verify the information is truthful, thus providing security and a permanent audit trail.
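
To make that idea concrete, here is a minimal sketch in Python of the core mechanism: each new entry stores a hash of the entry before it, so any participant holding a copy can recompute the chain and spot a record that was quietly altered after the fact. The field names and the toy verification step are illustrative assumptions, not any particular blockchain's format; real systems add consensus among nodes on top of this.

```python
import hashlib
import json
import time


def entry_hash(entry: dict) -> str:
    """Hash the canonical JSON form of a ledger entry."""
    payload = json.dumps(entry, sort_keys=True).encode()
    return hashlib.sha256(payload).hexdigest()


class Ledger:
    """Toy append-only ledger: every record links to the hash of the one before it."""

    def __init__(self):
        self.entries = [{"index": 0, "timestamp": 0, "data": "genesis", "prev_hash": ""}]

    def append(self, data) -> dict:
        entry = {
            "index": len(self.entries),
            "timestamp": time.time(),
            "data": data,
            "prev_hash": entry_hash(self.entries[-1]),
        }
        self.entries.append(entry)
        return entry

    def verify(self) -> bool:
        """Any participant can rerun this check against its own copy."""
        return all(
            curr["prev_hash"] == entry_hash(prev)
            for prev, curr in zip(self.entries, self.entries[1:])
        )


ledger = Ledger()
ledger.append({"from": "alice", "to": "bob", "amount": 5})
ledger.append({"from": "bob", "to": "carol", "amount": 2})
print(ledger.verify())   # True: copies that agree on these hashes agree on history

ledger.entries[1]["data"]["amount"] = 500   # quiet, after-the-fact tampering
print(ledger.verify())   # False: the altered entry no longer matches the recorded hash
```

That same recomputation step is what would expose the tampering scenarios discussed further down: an edit made after the fact breaks every hash that follows it.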

How blockchain works

So, with that concept in mind, we can broaden our understanding of the technology. Ethereum wasn’t designed as a one-trick pony to run a monetary application like its bitcoin cousin; rather, it’s more of an application platform, using virtual tokens in the place of cash. Instead of simply trading currency, one might trade cat videos, for instance.

And if those cat videos were traded on a blockchain platform, everyone would be able to verify who first introduced a particular video into the system and each person who modified that video or reintroduced a bad copy. You could have the most distributed and verifiable cat video platform ever created. Luckily, there are many more use cases for this technology than just a cat video distribution platform.

In the world of medical records — or, for argument’s sake, network configuration changes — we see a unique opportunity to improve many aspects of the use, transfer and security of those records.

What if we could guarantee the records introduced to the system are authentic? Note that I didn’t say accurate, because you can easily authenticate bad data when it’s introduced to a system. But let’s say we introduce our records, and they are accurate.

We go to a new doctor who would normally need paperwork from us to authorize the retrieval of those records, and transferring the records often takes longer than would be ideal. Waiting for the requesting doctor to send the forms to the records’ holder and get a response back can take significant time. And even if the systems are the same, with easy access afforded to the requisite records, those records could have been tampered with or have errors and incomplete data.

The same thing could happen with a blockchain-based system, but there would be a distributed record of the tampering — something that would be all but impossible to hide. In this way, your records could be made available to everyone with a reason to see them, with each view, change and movement recorded in a permanent and tamper-resistant system.

There are challenges even beyond the implementation of our hypothetical configuration system on top of blockchain. Disparate systems would have to be combined into a ubiquitous and fairly homogeneous platform. There would have to be standards applied to the introduction of data in the first place: Bad data in; bad data out. But, in this case, it would potentially become a permanent fixture. There are also challenges in the blockchain implementation itself — the applications on top of it notwithstanding.

And those challenges are substantial.

Adding more nodes and records makes the ledger more complex


One of the challenges of blockchain is in its very nature: the distributed ledger. Because every endpoint has to have a copy of the entire blockchain, and that blockchain is constantly growing as more things are added, the system gets slower and takes up more space. If the same sort of system were implemented for medical records, you can see how it would become untenable very quickly.
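
As a rough, back-of-the-envelope illustration of that growth, consider the sketch below; every figure in it is a made-up assumption, not a measurement from any real deployment.

```python
# Hypothetical sizing for a fully replicated ledger; every number here is an assumption.
records_per_day = 500_000        # assumed transaction volume
bytes_per_record = 1_500         # assumed entry size, including hashes and metadata
nodes = 200                      # assumed participants each holding a full copy

yearly_growth_gb = records_per_day * bytes_per_record * 365 / 1e9
total_storage_gb = yearly_growth_gb * nodes   # the whole network pays for full replication

print(f"Each copy grows by roughly {yearly_growth_gb:.0f} GB per year")
print(f"Across all full copies, that's about {total_storage_gb:.0f} GB of new storage per year")
```

Even with these modest assumptions, every participant ends up shouldering hundreds of gigabytes of growth per year, which is why full replication gets painful as nodes and records multiply.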

Each blockchain implementation is different, but derivative, so this is a problem that is likely fixable. But it has to be accounted for in the beginning. Different implementations have already begun to solve this inherent weakness.

That brings us to another issue: Changing blockchains after the fact is not an easy task. Imagine a time when our network configuration data sits on a system in which a significant bug is found.

How does that system get patched? How does the blockchain adapt? And how do the requisite changes affect the integrity of the records’ data sitting on the system? These are difficult problems to solve, and they’re even harder to anticipate upfront. Most of the major blockchain implementations have gone through some amount of retrograde “shoulda-woulda-coulda,” and it’s likely we’ll fail to anticipate every possible problem in the initial rollout of any system.

Networking and the challenges of blockchain: Can they be overcome?

Blockchain technology, as it applies to networking, is very much a work in progress. GlobalData analyst Mike Fratto rejected blockchain technology, saying the ledger is “untested, unproven and overly complex, making it unsuitable for networking.”

While I disagree with Fratto’s assessment in a broad sense, I have to agree that today’s applicability of the technology to networking is just not there. Where I disagree, however, is in the assertion that there is no need for it, or that it cannot, or should not, ever be applied to the problem of network management.

I was prepared for the inevitable conclusion that there are no viable production-ready blockchain implementations out in the wild. Had I come to that conclusion, however, I would have been almost entirely incorrect. I have talked with several large companies that are either developing or actively using blockchain technologies of one type or another — most seem to be based on the Linux Foundation’s Hyperledger platform, in close orbit with IBM. Most of the applications I was able to get information on are related to supply chain security in one way or another.

Tracking the ingredients used in a product from field to store shelf is one popular example. Securing critical manufacturing parts from creation through shipping and onto final build is another. These use cases are not from hyperbolic tech startups or boutique manufacturers; they are from large, established, blue-chip and Fortune 500 companies not given to flights of fancy in their supply chain. As these installations become more widespread, I imagine we will start to see more published case studies, leading to more installations. For now, however, a lot of these remain in the shadows, happily ensconced behind nondisclosure agreements.

The hype today may be all around the various cryptocurrencies that exist in the market, from the bitcoins and Ethers of the world to the nascent and opaque world of boutique vanity coins. The real excitement and potential lies not in the coins, however, but in the application of the underlying technology — including overcoming the challenges of blockchain — to everyday IT challenges.

Edge computing helps IT execs in managing large data sets

Data was a hot topic at the “Building the Intelligent Enterprise” panel session at the recent MIT Sloan CIO Symposium in Cambridge, Mass. As panelists discussed, changing market trends, increased digitization and the tremendous growth in data usage are demanding a paradigm shift from traditional, centralized enterprise models to decentralized, edge computing models.

All the data required for an intelligent enterprise has to be collected and processed somehow and somewhere — sometimes in real time — which presents a challenge for companies.

Here, four IT practitioners break down best practices and architectures for managing large data sets and how they’re taking advantage of edge computing. This was in response to a question posed by moderator Ryan Mallory, senior vice president of global solutions enablement at data center provider Equinix.

Here is Mallory’s question to the panel: Having an intelligent enterprise means dealing with a lot of data. Can you provide some best practices for managing large data sets?

Alston Ghafourifar
CEO and co-founder of AI communication company Entefy Inc.

“I think it really depends on the use cases. We live in a multimodal world. Almost everything we do deals with multiple modalities of information. There are lots of different types of information, all being streamed to the same central areas. You can think about it almost like data lake intelligence.


“The hardest part of something like this is actually getting yourself ready for the fact that you actually don’t know what information you’re going to need in order to predict what you want to predict. In some cases, you don’t even necessarily know what you want to predict. You just know you want it to be cheaper, faster, safer — serve some cost function at the very end.

“So, what we tend to do is design the infrastructure to pool as much diverse information as possible to a centralized core and then understand when it finds something that predicts something else — and there’s a lot of techniques upon which to do that.

“But when the system is looking through this massively unstructured information, the moment it gets to something where it says, ‘Oh, I think this is reliable, since I’m getting this over and over again,’ it’ll take that and automatically pull it out and put it into production at the edge, because the edge is processing the application of information. [The edge] is processing enterprise information in transit, almost like a bus. It doesn’t have the benefit of you cleaning it properly, or of you knowing exactly what you’re looking for.


“Making that transaction and that transition automatic and intelligent is what takes an enterprise further. [An enterprise] could have petabytes of information, but could be bottlenecked in their learning by the 50 or 100 data scientists looking at it. Now, it could say, ‘I’m going to create the computing power of 5,000 data scientists to [do] that job for me,’ and just automatically push it out. It’s almost like a different type of cloud orchestration.”
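
The pattern Ghafourifar describes — pool diverse data centrally, notice a signal that predicts reliably, then push a small rule or model out to the edge — can be sketched in a few lines. Everything below (the signal names, the fault relationship, the correlation threshold) is invented purely for illustration and is not Entefy's implementation.

```python
import random
import statistics

random.seed(0)


def correlation(xs, ys):
    """Plain Pearson correlation: enough to spot a candidate predictor in pooled data."""
    mx, my = statistics.mean(xs), statistics.mean(ys)
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)


# Central core: pool readings from many sources and look for something that
# reliably predicts the outcome we care about (here, a fault flag).
pooled = [{"vibration": random.gauss(5, 1), "temp": random.gauss(70, 5)} for _ in range(1000)]
for row in pooled:
    row["fault"] = 1 if row["vibration"] > 5.5 else 0   # hidden relationship to be discovered

vib = [r["vibration"] for r in pooled]
fault = [r["fault"] for r in pooled]

edge_rule = None
if abs(correlation(vib, fault)) > 0.6:                  # seen "over and over again"
    edge_rule = {"signal": "vibration", "threshold": 5.5}
    print("Promoting rule to the edge:", edge_rule)


def edge_check(reading, rule):
    """Edge node: apply the tiny promoted rule to data in transit, no data lake needed."""
    return reading[rule["signal"]] > rule["threshold"]


print(edge_check({"vibration": 6.8, "temp": 71.0}, edge_rule))   # True -> flag in real time
```

The heavy discovery work stays in the centralized core; the edge only runs the small, already-validated rule against data as it passes through.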

Stephen Taylor
Global head of analytics, reporting, integration and software engineering at oil and natural gas exploration company Devon Energy


“Let me build on that and say the one thing that we’re starting to do is use more of what the industry calls a Lambda architecture, where we’re both streaming data and storing it. It’s having something that’s pulling data out of your stream to store it in that long-term data store.

“What we’re doing in areas like northwest Texas or the panhandle of Oklahoma, where you have extremely limited communication capability, is we’re caching that data locally and streaming the events that you’re detecting back over the network. So, you’re only streaming a very small subset of the data back, caching the data locally and physically moving that data to locations, up to the cloud and doing that big processing, and then sending the small processing models back to the edge.

“One of the things I think you have to do, though, is understand that — to [Ghafourifar’s] point — you don’t know what you don’t know yet. And you don’t even know what questions you’re going to get yet, and you don’t know what business problems you’re going to have to solve yet. The more you can do to capture all of the data — so then when you do your data science work, you have it all — the better. But differentiate what you need for processing versus what you need for storage and for data science work. Those are two different workloads.”
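
Taylor's approach maps to a pattern that is easy to sketch: cache every raw reading locally at the site, run cheap event detection at the edge, and send only the detected events over the constrained link. The sensor fields, threshold and cache below are stand-ins invented for illustration, not Devon Energy's actual setup.

```python
import json
import time
from collections import deque


class EdgeSite:
    """Caches raw readings on site; forwards only detected events over the slow link."""

    def __init__(self, pressure_limit=995.0, cache_size=1_000_000):
        self.pressure_limit = pressure_limit
        self.local_cache = deque(maxlen=cache_size)   # stand-in for on-site storage
        self.outbound = []                            # the small subset that crosses the network

    def ingest(self, reading: dict):
        self.local_cache.append(reading)              # keep everything for later bulk transfer
        if reading["pressure"] > self.pressure_limit: # cheap event detection at the edge
            event = {"well": reading["well"], "ts": reading["ts"],
                     "pressure": reading["pressure"], "type": "overpressure"}
            self.outbound.append(json.dumps(event))


site = EdgeSite()
for i in range(10_000):
    site.ingest({"well": "TX-014", "ts": time.time(), "pressure": 900.0 + (i % 100)})

print(len(site.local_cache), "raw readings cached on site")
print(len(site.outbound), "events streamed back over the network")
```

The bulk processing then happens once the cached history is physically moved up to the cloud, with only the small resulting models shipped back to the site, as Taylor describes.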

Michael Woods
Vice president of information technology at engineering and construction firm CDM Smith


“I can use construction as a primary example. We have large streams of data that we want to analyze as part of the construction process, because we want to know what’s happening in real time. We might have remotely operated vehicles driving or flying around doing LIDAR or radar activity, monitoring things from a visualization standpoint, etc., and that data maybe is getting streamed somewhere and kept. But, in real time, we just want to know what’s changing and when it’s changing — like weather patterns and other things going on. We want to analyze all that in real time.

“Now, at the end of the project, that’s when the data scientists might say, ‘We want to improve our construction process. So, what can we do with that data to help us determine what will make our next construction projects be more successful, take less time and be more cost-effective?'”

Hugh Owen
Senior vice president of product marketing at business intelligence software provider MicroStrategy


“In terms of [managing large data sets], we try and push down as much of the processing into the Hadoop data structure — into the database — as possible. So, we’re always pulling as small an amount of data back as possible, rather than push as much data as possible to the edge, which ties into some of the points we’ve already made.

“I think you should always try to optimize and reduce the amount of information that comes back. For us, we’re doing that because we want the response to come back faster.”
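
Pushing work down rather than pulling data back is easy to illustrate. The sketch below uses an in-memory SQLite table as a stand-in for the Hadoop or warehouse layer Owen mentions; the table and numbers are invented, and the point is only the difference in how many rows cross the network.

```python
import sqlite3

conn = sqlite3.connect(":memory:")   # stand-in for the remote Hadoop/warehouse layer
conn.execute("CREATE TABLE sales (region TEXT, amount REAL)")
conn.executemany(
    "INSERT INTO sales VALUES (?, ?)",
    [("east", 120.0), ("east", 80.0), ("west", 200.0), ("west", 50.0)] * 1000,
)

# Without pushdown: pull every row back to the client, then aggregate locally.
rows = conn.execute("SELECT region, amount FROM sales").fetchall()   # 4,000 rows over the wire
totals = {}
for region, amount in rows:
    totals[region] = totals.get(region, 0.0) + amount

# With pushdown: the database does the aggregation; only two rows come back.
pushed = conn.execute("SELECT region, SUM(amount) FROM sales GROUP BY region").fetchall()

print(len(rows), "rows moved without pushdown")
print(len(pushed), "rows moved with pushdown:", pushed)
```

The same query produces the same totals either way; pushing the aggregation into the database simply means the response comes back faster because almost nothing has to move.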