Tag Archives: Engineer

A roundup of the Cisco certification changes in 2020

As network engineer skills become increasingly generalized, Cisco aims to match its certifications to the skills network engineers need in their daily work.

Announced at Cisco Live 2019, the new Cisco certification changes rolled out on Feb. 24, 2020. Experts have touted the relevance of the material, the breadth of topics the certifications now cover and the potential benefits for network engineers. With more focus on automation and software skills and less on infrequently used coding languages, Cisco aims to bring its certification tracks into the new decade.

The Cisco Certified Network Associate (CCNA), Cisco Certified Network Professional (CCNP) and Cisco Certified Internetwork Expert (CCIE) certifications all expanded the breadth of topics covered, yet shrank in the number of tracks offered. Cisco also introduced new DevNet certifications among the other Cisco certification changes.

How did existing Cisco certifications change?

Cisco’s standard certification tracks — CCNA, CCNP and CCIE — all added new material that aims to be more relevant to current job roles and help advance the careers of network engineers. In addition to new material, the certifications also include fewer track options than before.

Cisco Certified Network Associate. CCNA is an entry-level certification for network engineers early in their careers. Formerly, Cisco issued the Cisco Certified Entry Networking Technician (CCENT) certification, which was the step before CCNA. After CCENT, CCNA offered different certifications for various career tracks, including CCNA Routing and Switching and CCNA Collaboration.

Now, CCENT is gone, and the recent Cisco certification changes transformed the CCNA from 10 separate tracks into a single unified exam, apart from the CCNA CyberOps track. Cisco author Wendell Odom said most topics in the new CCNA exam come from the former CCNA Routing and Switching track, with about one-third of the material being new.

A CCNA certification isn’t a prerequisite for higher certifications, yet it provides fundamental networking skills that network engineers require for current job roles.

Cisco Certified Network Professional. CCNP is an intermediate-level certification and a step up from CCNA. Similar to the CCNA changes, Cisco consolidated the CCNP certification tracks, although less drastically than with CCNA. Cisco cut CCNP from eight tracks to five, which, like the CCNA changes, reflect broader industry shifts and bring more relevant material to Cisco's certifications.

According to Cisco, the new CCNP tracks — which are also the new CCIE tracks — are the following:

  1. Enterprise
  2. Security
  3. Service Provider
  4. Collaboration
  5. Data Center

While these are the five core exams a network engineer can take, a CCNP certification also requires passing a concentration exam within the chosen core topic. A person who passes only the core exam receives a Cisco Certified Specialist certification in that topic area.

Network engineers can take several core or concentration exams and receive a Cisco Certified Specialist certification upon passing, which can prove to employers the engineer has those specific skills.

Authors Brad Edgeworth and Jason Gooley said these changes didn't remove much material, but they added breadth to the knowledge and skills network engineers should have in their careers.

Cisco Certified Internetwork Expert. CCIE is an expert-level certification and a step up from CCNP. The CCIE and CCNP tracks fall under the same umbrellas and shrank to the aforementioned five tracks. To become CCIE-certified, network engineers must take and pass one core exam — Enterprise, Security, etc. — and that topic's corresponding lab.

Formerly, CCIE exams focused more on highly advanced skills and less on critical knowledge in areas such as network design. After the Cisco certification changes, the CCIE exams now include more practical knowledge for advanced network engineers.

The recent Cisco certification changes aim to sharpen relevant network engineer skills, including management and automation capabilities.

What are the new Cisco certifications?

In Cisco’s new DevNet track, the company added three certifications that reflect the certification pyramid for standard Cisco certifications. The DevNet certifications are the following:

  1. Cisco Certified DevNet Associate
  2. Cisco Certified DevNet Specialist
  3. Cisco Certified DevNet Professional

The DevNet tracks encompass network automation, software and programmability skills that Cisco certifications previously lacked and that the industry has deemed increasingly important.

While DevNet lacks a CCIE-equivalent track, the requirements for a DevNet certification reflect those of its equivalent in Cisco’s standard certifications. For example, a person must pass one core and one concentration exam to receive a Cisco Certified DevNet Professional certification.

The DevNet track’s goal is to give network engineers a certification path for skills the industry says they need and help them adapt to newer, advanced technologies — such as network automation — that employers increasingly seek out. And, as the industry continues to change, so will Cisco’s certifications.


AWS leak exposes passwords, private keys on GitHub

An Amazon Web Services engineer uploaded sensitive data to a public GitHub repository that included customer credentials and private encryption keys.

Cybersecurity vendor UpGuard earlier this month found the exposed GitHub repository within 30 minutes of its creation. UpGuard analysts discovered the AWS leak, which was slightly less than 1 GB and contained log files and resource templates that included hostnames for “likely” AWS customers.

“Of greater concern, however, were the many credentials found in the repository,” UpGuard said in its report Thursday. “Several documents contained access keys for various cloud services. There were multiple AWS key pairs, including one named ‘rootkey.csv,’ suggesting it provided root access to the user’s AWS account.”

The AWS leak also contained a file for an unnamed insurance company that included keys for email and messaging providers, as well as other files containing authentication tokens and API keys for third-party providers. UpGuard’s report did not specify how many AWS customers were affected by the leak.

UpGuard said GitHub’s token scanning feature, which is opt-in, could have detected and automatically revoked some of the exposed credentials in the repository, but it’s unclear how quickly detection would have occurred. The vendor also said the token scanning tool would not have been able to revoke exposed passwords or private keys.
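To make the detection side concrete, here is a minimal sketch of the kind of pattern scan a tool like GitHub's token scanning or UpGuard's detection engine performs. It is not either vendor's implementation; the regular expressions and paths are illustrative assumptions. It simply walks a local copy of a repository and flags strings shaped like AWS access key IDs or secret keys.

```python
# Minimal sketch of credential-pattern scanning over a local repository copy.
# This is NOT GitHub's or UpGuard's implementation; the patterns and paths
# are illustrative assumptions.
import re
from pathlib import Path

PATTERNS = {
    # AWS access key IDs have a well-known prefix and fixed length.
    "aws_access_key_id": re.compile(r"\b(?:AKIA|ASIA)[0-9A-Z]{16}\b"),
    # Candidate secret keys: 40 characters from the base64-like alphabet.
    # Deliberately broad, so it will produce false positives.
    "possible_aws_secret_key": re.compile(r"\b[A-Za-z0-9/+=]{40}\b"),
}

def scan_repo(root="."):
    """Yield (path, pattern_name, matched_text) for each suspicious string."""
    for path in Path(root).rglob("*"):
        if not path.is_file() or ".git" in path.parts:
            continue
        try:
            text = path.read_text(errors="ignore")
        except OSError:
            continue
        for name, pattern in PATTERNS.items():
            for match in pattern.finditer(text):
                yield path, name, match.group(0)

if __name__ == "__main__":
    for path, name, matched in scan_repo():
        print(f"{path}: {name}: {matched}")
```

Real scanners add provider-specific validation on top of pattern matching, which is one reason a service can automatically revoke certain token types it recognizes but cannot do the same for arbitrary passwords or private keys.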

The documents in the AWS leak also bore the hallmarks of an AWS engineer, and some of the documents included the owner’s name. UpGuard said it found a LinkedIn profile for an AWS engineer that matched the owner’s exact full name, and the role matched the types of data found in the repository; as a result, the vendor said it was confident the owner was an AWS engineer.

While it’s unclear why the engineer uploaded such sensitive material to a public GitHub repository, UpGuard said there was “no evidence that the user acted maliciously or that any personal data for end users was affected, in part because it was detected by UpGuard and remediated by AWS so quickly.”

UpGuard said at approximately 11 a.m. on Jan. 13, its data leak detection engine identified that potentially sensitive information had been uploaded to the GitHub repository half an hour earlier. UpGuard analysts reviewed the documents and determined the sensitive nature of the data, as well as the identity of the likely owner. An analyst contacted AWS' security team at 1:18 p.m. about the leak, and by 4 p.m. public access to the repository had been removed. SearchSecurity contacted AWS for comment, but at press time the company had not responded.


Using Azure and AI to Explore the JFK Files

This post is by Corom Thompson, Principal Software Engineer at Microsoft.

On November 22nd, 1963, the President of the United States, John F. Kennedy, was assassinated. He was shot by a lone gunman, Lee Harvey Oswald, as his motorcade drove through the streets of Dallas. The assassination has been the subject of so much controversy that, 25 years ago, an act of Congress mandated that all documents related to the assassination be released this year. The first batch of released files has more than 6,000 documents totaling 34,000 pages, and the last drop of files contains at least twice as many documents.

We’re all curious to know what’s inside them, but it would take decades to read through these files. We approached the problem by using Azure Search and Cognitive Services to extract knowledge from this deluge of documents: a continuous process that ingests the raw documents and enriches them into structured information you can explore.
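As a rough illustration of the "enrich and index" step (not the demo's actual code), the sketch below pushes a batch of enriched documents into an Azure Search index through the service's REST API. The service name, index name, API version and field names are assumptions made for the example.

```python
# Minimal sketch of uploading enriched documents to an Azure Search index.
# The service name, index name, API version and field names are illustrative
# assumptions, not the schema used by the JFK Files demo.
import requests

SERVICE = "my-search-service"      # assumed Azure Search service name
INDEX = "jfk-documents"            # assumed index name
API_KEY = "<admin-api-key>"        # admin key from the Azure portal
API_VERSION = "2017-11-11"         # assumed REST API version

def upload_documents(docs):
    """Upload a batch of documents with the 'upload' indexing action."""
    url = (f"https://{SERVICE}.search.windows.net"
           f"/indexes/{INDEX}/docs/index?api-version={API_VERSION}")
    actions = []
    for doc in docs:
        action = {"@search.action": "upload"}  # merge the action into each document
        action.update(doc)
        actions.append(action)
    response = requests.post(
        url,
        headers={"Content-Type": "application/json", "api-key": API_KEY},
        json={"value": actions},
    )
    response.raise_for_status()
    return response.json()

# Example: one OCR'd page enriched with extracted entities (hypothetical fields).
print(upload_documents([
    {"id": "page-0001",
     "text": "OCR text of the scanned page",
     "entities": ["Oswald", "CIA"]},
]))
```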

Today, at the Microsoft Connect(); 2017 event, we created the demo web site* shown in Figure 1 below – a web application that uses the AzSearch.js library and is designed to give you interesting insights into this vast trove of information.


Figure 1 – JFK Files web application for exploring the released files

On the left you can see that the documents are broken down by the entities that were extracted from them. Already we know these documents are related to JFK, the CIA, and the FBI. Leveraging several Cognitive Services, including optical character recognition (OCR), Computer Vision, and custom entity linking, we were able to annotate all the documents to create a searchable tag index.
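To show what one step of that annotation looks like in isolation, here is a minimal sketch of running OCR on a single scanned page through the Computer Vision OCR REST endpoint. The region, API version, key and image URL are placeholder assumptions, and the demo ran this at scale inside the Azure Search enrichment pipeline rather than one call at a time.

```python
# Minimal sketch of OCR on one scanned page with the Computer Vision OCR
# REST endpoint. The region, API version, key and image URL are illustrative
# placeholders, not the values used by the JFK Files pipeline.
import requests

REGION = "westus"                                 # assumed service region
KEY = "<computer-vision-subscription-key>"        # key from the Azure portal
PAGE_URL = "https://example.com/jfk-page-1.png"   # hypothetical scanned page

endpoint = f"https://{REGION}.api.cognitive.microsoft.com/vision/v2.0/ocr"
response = requests.post(
    endpoint,
    params={"language": "en", "detectOrientation": "true"},
    headers={"Ocp-Apim-Subscription-Key": KEY},
    json={"url": PAGE_URL},
)
response.raise_for_status()

# The OCR result is nested as regions -> lines -> words; flatten it to text.
words = [
    word["text"]
    for region in response.json().get("regions", [])
    for line in region["lines"]
    for word in line["words"]
]
print(" ".join(words))
```

The extracted text is what gets fed to the entity-linking and indexing steps described above, which is how terms such as "Oswald" and "CIA" end up as searchable tags.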

We were also able to create a visual map of these linked entities to demonstrate the relationships between the different tags and data. Below, in Figure 2, is the visualization of what happened when we searched this index for “Oswald”.


Figure 2 – Visualization of the entity linked mapping of tags for the search term “Oswald”

Through further investigation, we even found that the Entity Linking Cognitive Service had annotated the term "Nosenko" with a connection to Wikipedia. We quickly realized that the Nosenko identified in the documents was actually a KGB defector interrogated by the CIA, and that these files are audio tapes of the actual interrogation. It would have taken years to figure out these connections, but we were instead able to do this in minutes thanks to the power of Azure Search and Cognitive Services.

Another fun fact we learned is that the government was actually using SQL Server and a secured architecture to manage these documents in 1997, as seen in the architecture diagram in Figure 3 below.


Figure 3 – Architecture diagram from 1997 indicating SQL Server was used to manage these documents

We have created an architecture diagram of our own to demonstrate how this new AI-powered approach is orchestrating the data and pulling insights from it – see Figure 4 below.

This is the updated architecture we used to apply the latest and greatest Azure-powered developer tools to create these insightful web apps. Figure 4 displays this architecture using the same style from 54 years ago.


Figure 4 – Updated architecture of Azure Search and Cognitive Services

We’ll be making this code available soon, along with tutorials of how we built the solution – stay tuned for more updates and links on this blog.

Meanwhile, you can navigate through the online version of our application* and draw your own insights!

Corom

* Try typing a keyword into the Search bar up at the top of the demo site, to get started, e.g. “Oswald”.

Guinness World Record for running a marathon in a sari

A Microsoft engineer has made it into the Guinness Book of World Records – for running the fastest marathon dressed in a sari.

Jayanthi Sampathkumar competed in a 42-kilometer event in Hyderabad in traditional Indian attire on 20 August this year and crossed the finish line in four hours, 57 minutes and 44 seconds. She was officially awarded a Guinness World Record this week. 

Sampathkumar is a Principal Engineering Manager at Microsoft in Hyderabad. When she’s not working on the Bing Knowledge Graph, she follows her passion for collecting saris painstakingly made by artisans on handlooms.

She got the idea to compete in a sari after reading about a man who set a record for the fastest marathon wearing a business suit. “If he could do it in a business suit, perhaps I could achieve this feat wearing a sari,” she says. “I do not want women to have any limitations in their head when it comes to wearing a sari. A sari is a piece of clothing. Women can continue to be traditional, but that should not stop them from achieving their goals.”

A sari is a wide strip of cotton cloth that is wrapped around the body. Usually saris are around six yards long, but Sampathkumar chose a nine-yard one that gave her more freedom to move. Her garment was made by Project ReWeave, a traditional crafts group that is supported by Microsoft Philanthropies. Set up in April 2016, it focuses on reviving the handloom-weaving ecosystem across various clusters in India with the help of technology.



Undercapitalization is the disease, developer burnout the symptom

Imagine a DevOps engineer named Pat. You’re the vice president of engineering, and Pat has been a superstar in your organization for years. She’s always pleasant in standups. Any criticism she makes is positive and supportive. She’s always reliable when on call, and Pat makes few mistakes.

Then, something changes. She becomes snippy in standups. It’s taking her longer to answer emails. Last month, she altered a deployment script that caused the Amazon bill to jump. You sense something is wrong, so you go to her boss.

Pat’s boss reports having a similar experience. Pat, who used to be the poster child for an exemplary DevOps engineer, is dramatically regressing. You’re both mystified.

Something is obviously amiss. You ask to see her work schedule over the last year and the tickets assigned to her. In addition, you take a trip to HR and request the budget history of the group Pat works in, as well as the head count history.

As you review the reports, certain facts pop out. First, Pat has not had a vacation in the last year. Also, her last raise was only 3% due to company revenue issues. Half of Pat’s past work tickets involved issues related to the new automated container-provisioning framework the company implemented last year.

Pat is burnt out. Now, conventional wisdom has a way to prevent burnout: Just give employees enough time to rest, refresh and acquire the skills necessary to do the work required of them. This is the route organizations typically take to address developer burnout. And this is the flaw: Organizations are addressing the symptoms.

The disease is undercapitalization.

Allow me to elaborate.

No capital? No profit

Capital is anything that enhances a person’s or organization’s ability to perform economically useful work. Capital can take the form of money, time, machinery, information or real estate, for example. Businesses require capital in order to make goods and provide services. The mistake many businesses make is to not have enough capital on hand to meet objectives. This is particularly true of startups. I’ve experienced this personally.

Earlier in my life, I wanted to be in the restaurant business. I had the necessary expertise. So, I saved some money and found some investors to pitch in to cover the startup costs and projected operating expenses for a year.

However, my business plan had a serious flaw: I overestimated revenue growth. I thought my cash flow would start to cover expenses within three months of operation. Turns out I was wrong. I was not getting the number of customers needed within the time frame required.

I started to run out of money. I fell behind paying my bills. I had to cut back on staff. I found myself working seven days a week to make up for the staff I had to lay off.

Eventually, the business closed its doors. I was a mess physically and emotionally. Upon reflection, I came to realize I had just run out of time. My customer base was growing, and the business was becoming more efficient. The shortcoming was I didn’t have the capital — in this case, time — to meet my objective.

Let’s go back to Pat and her burnout.

Pat had not had a vacation in a year. She had been given a small raise and was working with technology new to her and the organization. How did this come about?

Pat had not had a vacation in a year because the department is short-staffed. She had been given a pittance of a raise because the company couldn’t afford more. And she is struggling with new technology because the company needed to implement automated provisioning in order to meet the growth requirements necessary to stay competitive.

To put it succinctly:

No vacation = not enough staff = undercapitalization

Small raise = not enough money = undercapitalization

Struggling with new technology = not enough time = undercapitalization

The business does not have the capital required to meet its objective. And, thus, burnout sets in.

So, how does a company avoid burnout?

The answer is to make sure it meets its capital requirements.

This is easier said than done. Most companies think they have enough capital. Not surprisingly, most companies are overly optimistic, particularly small to medium-sized tech companies that have growing DevOps departments.

These companies get the value of DevOps, but underestimate the capital requirements necessary for success. Many follow the lean startup mentality — fewer employees using more automation, while getting more back massages at their desks and free food at the snack bar.

Providing automation, back massages and free food is not necessarily the best way to ensure adequate capitalization. Having adequate capital on hand is a continuous activity that requires ongoing, dedicated attention. Just look at AT&T.

AT&T executives understood from the company's inception that it was engaged in a capital-intensive business. Its leadership kept raising capital. In the beginning, the capital was needed to lay landlines. By the 1960s, the capital was used to put telecommunication satellites into space. Today, with the acquisition of DirecTV, the company is moving into on-demand video streaming. The company has a voracious appetite for capital, and it's become quite good at acquiring it.

Logz.io’s 2017 DevOps Pulse survey found that 70% of its respondents could see themselves burning out.

This is the lesson to be learned. Burnout, in general, and developer burnout, in particular, can be traced back to undercapitalization. Undercapitalization is rarely a temporary condition. Rather, it results from a business failing to plan from the start to ensure its capital needs are always met. This means making sure there is enough money, time and staff to meet the demands at hand. Doing more with less rarely works for a long period of time. Eventually, a company will pay the price. One of the first signs is employee burnout.

We in DevOps know there is no way automation will make bad code better once it’s out the door. You need to get a new version out as soon as possible. The same is true of adequate capitalization. Once the symptoms set in, the only way to beat the disease is to release a new version of the business, with plans to continuously meet the business’s demand for the capital required to satisfy its objectives.

Hortonworks extends IaaS offering on Azure with Cloudbreak

This blog post is co-authored by Peter Darvasi, Engineer, Hortonworks.

We are excited to announce the availability of Cloudbreak for Hortonworks Data Platform on Azure Marketplace. Hortonworks Data Platform (HDP) is an enterprise-ready, open source Apache Hadoop distribution. With Cloudbreak, you can easily provision, configure, and scale HDP clusters in Azure. Cloudbreak is designed for the following use cases:

  • Create clusters which you can fully control and customize to best fit your workload
  • Create on-demand clusters to run specific workloads, with data persisted in Azure Blob Storage or Azure Data Lake Store
  • Create, manage, and scale your clusters intuitively using Cloudbreak UI, or automate with Cloudbreak Shell or API
  • Automatically configure Kerberos and Apache Knox to secure your cluster

When you deploy Cloudbreak, it installs a “controller” VM which runs the Cloudbreak application. You can use the controller to launch and manage clusters. The following diagram illustrates the high-level architecture of Cloudbreak and HDP on Azure:


Cloudbreak lets you manage all your HDP clusters from a central location. You can configure your clusters with all the controls that Azure and HDP have to offer, and you can automate and repeat your deployments with:

  • Infrastructure templates for specifying compute, storage, and network resources in the cloud
  • Ambari blueprints for configuring Hadoop workloads (an illustrative blueprint sketch follows this list)
  • Custom scripts that you can run before or after cluster creation
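To give a feel for what an Ambari blueprint contains, here is a minimal, illustrative example with one master and one worker host group, written out from Python. The blueprint name, stack version and component lists are assumptions, not a recommended production layout.

```python
# Minimal, illustrative Ambari blueprint for a two host-group HDP cluster.
# The blueprint name, stack version and component lists are assumptions,
# not a recommended production layout.
import json

blueprint = {
    "Blueprints": {
        "blueprint_name": "hdp-small",   # hypothetical name
        "stack_name": "HDP",
        "stack_version": "2.6",          # assumed HDP version
    },
    "host_groups": [
        {
            "name": "master",
            "cardinality": "1",
            "components": [
                {"name": "NAMENODE"},
                {"name": "RESOURCEMANAGER"},
                {"name": "ZOOKEEPER_SERVER"},
            ],
        },
        {
            "name": "worker",
            "cardinality": "3",
            "components": [
                {"name": "DATANODE"},
                {"name": "NODEMANAGER"},
            ],
        },
    ],
}

# Cloudbreak accepts blueprints like this through its UI, Shell, or API.
with open("hdp-small-blueprint.json", "w") as f:
    json.dump(blueprint, f, indent=2)
```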

In addition, Cloudbreak on Azure features the following unique capabilities:

  • Easily install Cloudbreak by following a UI wizard on Azure Marketplace
  • Choose among Azure Blob Storage, Azure Data Lake Store and Managed Disks attached to the cluster nodes to persist your data
  • Follow a simple Cloudbreak wizard to automate the creation of an Azure Active Directory Service Principal for Cloudbreak to manage your Azure resources
  • Enable high availability with Azure Availability Set
  • Deploy clusters in new or existing Azure VNet

Getting started

  • Go to Azure Marketplace and follow the wizard to install Cloudbreak. 
  • Once the deployment has succeeded, retrieve the public DNS name for the Cloudbreak VM.


  • Open the public DNS name over HTTPS, and you will see a browser warning. This is because, by default, no certificate is set for this HTTPS site. You can still continue to the Cloudbreak web UI and follow the wizard to provision clusters. We recommend that you set up a valid certificate and disable public IP in a production environment (see the sketch below).
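As a small illustration of that first connection, the sketch below reaches the Cloudbreak UI over HTTPS while the default self-signed certificate is still in place. The DNS name is a placeholder, and disabling verification mirrors clicking through the browser warning; it should not be carried into production.

```python
# Minimal sketch of reaching the Cloudbreak UI while the controller still
# uses its default self-signed certificate. The DNS name is a placeholder;
# replace verify=False with a valid certificate in production.
import requests
import urllib3

CLOUDBREAK_DNS = "mycloudbreak.westeurope.cloudapp.azure.com"  # placeholder

# Suppress the "insecure request" warning that verify=False triggers.
urllib3.disable_warnings(urllib3.exceptions.InsecureRequestWarning)

response = requests.get(f"https://{CLOUDBREAK_DNS}/", verify=False)
print(response.status_code)
```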

Additional resources