Tag Archives: project

Microsoft Project

Choosing to use Microsoft Project as your team’s dedicated project management app makes sense only when a number of stars align. First, you really must have a certified project manager on board to drive the software. Second, time has to be on your side and your certified project manager can’t be rushed to learn to use the tool. Third, your team should already be a Microsoft house, or it should be willing to become one. Fourth, the number of projects your team manages and their level of complexity should be quite high. If your organization meets these criteria, Microsoft Project may prove to be an invaluable tool. If not, you’re better served by another option, and there are many.

Similar Products

If you’ve read this far and realized that Microsoft Project isn’t right for your team, I recommend three other options. For small businesses, Zoho Projects and Teamwork Projects are the PCMag Editors’ Choices. Both are reasonably priced and very easy to learn to use, even if you’re not a project management master yet. The other tool that earns the Editors’ Choice is LiquidPlanner, a high-end tool that’s ideal for larger teams managing not just projects but also people and other resources.

A Few Caveats

Microsoft Project takes a long time to learn to use and even longer to master. I am writing this review from the point of view of someone who has not mastered it (not even close) but who has experimented with it for some weeks and asked questions of Microsoft representatives to learn more. My point of view includes comparison testing with dozens of other project management apps, from lightweight ones designed for small businesses to enterprise-grade options.

Because Microsoft Project is something of a bear, I would recommend complementing my article with user reviews by people who have worked with the tool extensively and can provide different insights into how it holds up in the long term.

Pricing and Plans

There are two ways to buy Microsoft Project. You can add it to an Office 365 subscription or you can buy a standalone version for on-premises deployment. The options get confusing, so let me go through them piece by piece.

Office add-on. When you add Microsoft Project to an Office subscription, you get the cloud-based version of the app. There are three pricing levels for this type of purchase: Project Online Professional, Premium, and Essentials.

Project Online Professional costs $30 per person per month. With this level of service, each person gets to use the Microsoft Project desktop app on up to five computers for project management only, not portfolio management. Even though it’s a desktop app, it still runs in the cloud (i.e., it requires an internet connection to use). Access via web browsers is also included.

Project Online Premium costs $55 per person per month. It offers everything in the Professional account, plus portfolio management tools. It comes with advanced analytics and resource management features that you don’t get in the Professional account.

The third level, Essentials, is not a tier of service so much as a role type you can choose for team members who have fairly limited responsibilities in the app. It costs $7 per person per month. You must have a Professional or Premium membership first to use the Essentials option. Essentials users can only access Microsoft Project via a web browser or mobile device, where they can update task statuses, work with timesheets, share documents, and communicate with colleagues. They don’t get desktop apps or other functionality.

Standalone on-premises deployment. If you don’t want to use the cloud-hosted version of Microsoft Project, you can host it locally, and there are three options for how to do it.

One is Project Standard, which costs $589.99 charged as a one-time flat fee. With this version, you get one piece of software installed locally on one computer, and only one person can use it. It’s old-school software in the sense that it doesn’t have any collaboration features. You get project management tools, but nothing for resource management.

The next option is Project Professional for $1,159.99. Each license is good for only one computer. It has everything in Project Standard, plus the ability to collaborate via Skype for Business, resource-management tools, timesheets, and the option to sync with Project Online and Project Server.

Project Server, the last option, is a version of Microsoft Project that enterprises can get with SharePoint 2016. I could go into detail about how to get SharePoint 2016 and the three tiers of enterprise service for Office involved, but I’ll assume that if this option is of interest to you, you already have a support person at Microsoft you can ask for more information.

Comparison Prices

If we use the $30 and $55 per person per month prices for Project Online Professional and Premium as our basis for comparison, which are the tiers of service most likely to be relevant if you’re reading this article, then Microsoft’s prices are on the high end for small to medium businesses.

TeamGantt is a good place to start for comparison. It offers service ranging from a Free account to an Advanced membership that costs $14.95 per person per month. It’s a web-based tool that includes collaboration and is much easier to learn to use than Project.

A comparable plan with Zoho Projects costs a flat rate of $50 per month, regardless of how many people use it. Teamwork Projects offers a similar flat monthly rate ($69 per month for as many team members as you need), as does Proofhub ($150 per month).

If we turn to more high-end tools, LiquidPlanner starts at $599.40 per year for a small business account of up to five people. That price is based on a rate of $9.99 per person per month, but this particular plan is only sold in a five-seat pack. LiquidPlanner’s most popular plan, Professional, is better for medium to large businesses. It works out to $45 per person per month, with a ten-person minimum. Like Microsoft Project, LiquidPlanner takes time to master, in part because it offers so many tools for both project management and resource management.

Other project management platforms that are suitable for larger organizations include Clarizen (from $45 per person per month), Celoxis ($25 per person per month, with a five-person minimum), and Workfront (about $30 per person per month, depending on setup).
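
To make the math concrete, here’s a quick back-of-the-envelope comparison in Python, using the rates quoted above and a hypothetical ten-person team (the team size is mine, purely for illustration):

```python
# Back-of-the-envelope annual cost comparison using the rates quoted in
# this article. TEAM_SIZE is a hypothetical ten-person team.
TEAM_SIZE = 10

per_person_monthly = {
    "Project Online Professional": 30.00,
    "Project Online Premium": 55.00,
    "TeamGantt Advanced": 14.95,
    "LiquidPlanner Professional": 45.00,  # ten-person minimum applies
}

flat_monthly = {
    "Zoho Projects": 50.00,
    "Teamwork Projects": 69.00,
    "Proofhub": 150.00,
}

for name, rate in per_person_monthly.items():
    print(f"{name}: ${rate * TEAM_SIZE * 12:,.2f} per year for the team")

for name, fee in flat_monthly.items():
    print(f"{name}: ${fee * 12:,.2f} per year for the team")
```

At ten seats, the flat-rate plans undercut the per-seat plans by a wide margin: Zoho Projects works out to $600 per year for the whole team, versus $3,600 for Project Online Professional.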

Getting Started

I can’t stress enough the fact that Microsoft Project is meant to be used by experienced, or more precisely trained, project managers. It’s not designed for learning on the fly. It doesn’t come with clear tutorials for getting started. It assumes familiarity with both big concepts and fine details of project management. If you’re thinking you might use this software but you (or the lead person who will be using the app) don’t know what a burndown report is, I would seriously advise you to consider a different tool.
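
If the term is new to you, a burndown report simply charts remaining work against time. A minimal sketch of the arithmetic, with invented numbers:

```python
from datetime import date, timedelta

# A burndown report charts remaining work against time. The sprint
# size and daily progress figures below are invented.
total_hours = 50
completed_per_day = [8, 6, 0, 10, 7]  # nothing got finished on day 3
start = date(2018, 2, 5)

remaining = total_hours
for day, done in enumerate(completed_per_day):
    remaining -= done
    print(f"{start + timedelta(days=day)}: {remaining} hours remaining")
```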

The app itself looks a lot like Excel. It has the same familiar tabbed ribbon interface seen in other Microsoft Office apps. The spreadsheet portion of the app holds all the data related to tasks or resources. To the right of the cells is a Gantt chart reflecting the schedule as you build it.

Microsoft Project supports all the typical things you’d want to do in a project management app. For every task, you can enter a lot of detail, such as a description, notes, start date, task duration, and so forth. Recurring events are supported, as are dependencies, custom fields, and baselines for tracking actual progress versus planned progress.
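
To give a feel for what the tool is tracking, here’s a stripped-down, hypothetical sketch of such a task record in Python; the field names are illustrative, not Microsoft Project’s actual schema:

```python
from dataclasses import dataclass, field
from datetime import date, timedelta

@dataclass
class Task:
    """Stripped-down stand-in for a project management task record."""
    name: str
    start: date
    duration_days: int
    percent_complete: int = 0        # progress shown on the Gantt bar
    notes: str = ""
    depends_on: list = field(default_factory=list)  # dependency links
    baseline_finish: date = None     # planned finish, for baseline tracking

    @property
    def finish(self) -> date:
        return self.start + timedelta(days=self.duration_days)

    def slippage_days(self) -> int:
        """Actual versus planned finish: the point of keeping a baseline."""
        if self.baseline_finish is None:
            return 0
        return (self.finish - self.baseline_finish).days

design = Task("Design", date(2018, 3, 1), 5, baseline_finish=date(2018, 3, 5))
build = Task("Build", start=design.finish, duration_days=10, depends_on=[design])
print(build.finish, design.slippage_days())  # 2018-03-16, 1 day late
```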

The bars in the Gantt chart are interactive, so as you adjust them, the information in the cells updates as well. When a task is in progress, you can indicate the percent that it’s done by sliding a smaller line inside its associated spanner bar toward the right.

In addition to the Gantt chart view, Microsoft Project offers calendar and diagram views. The calendar view is self-explanatory, while the diagram view is similar to the Gantt view, except it contains additional details about each task. If you follow a timeline better when there’s some sense of a narrative behind it, the diagram view could be useful.

As mentioned, the first time you use the app, there isn’t much coaching on how to get started. Some apps provide interactive on-screen tutorials. Others start you out in a sample project. Still others point you early to a channel of help videos for getting started. Microsoft Project has none of that. In fact, the little that Project does provide may merely add to your confusion, such as this little nugget of information that I saw on day one:

“To be clear, Project Online is NOT a web-based version of Project Professional. Project Online is an entirely separate service that offers full portfolio and project management tools on the web. It includes Project Web App, and can, depending on your subscription, also include Project Online Desktop Client, which is a subscription version of Project Professional.”

Even after having gone through all the pricing and plan options in detail, those words still make my head spin.

Features and Details

Microsoft Project is powerful when it comes to the more detailed aspects of project management, such as resource management, reports, and timesheets. Powerful doesn’t mean easy or simple, of course.

In Microsoft Project, with the tiers of service that include resource management, you can manage work (which includes both generic people and specific people, as well as other “work” related resources), materials, and costs. You can do a lot with these elements if you have the time and the inclination.

For example, you can add detail to materials resources, such as a unit of measure, and if you want to get really detailed, you can enter costs for materials. What if the cost of a material changes over time? In Microsoft Project, an additional detail panel allows you to track and account for changes in cost over time.

With work resources, I mentioned you can track specific people or generalized people. Depending on the work you’re tracking, you may need to assign general human resources, such as a “front-end programmer” or “QA tester,” rather than a specific person. It all depends on what you’re managing and how.

Reports are highly customizable, although, like the rest of the app, they take time to learn how to use. Some of the more rudimentary features are neat and surprisingly simple to use, however. You can generate a report by navigating to the report section and selecting what data you want to appear in different modules on the page. Using a field selection box on the right, you can make the topmost element the project, and below it you might add a table showing how much of each phase of the project is already complete, and so forth.

All the elements you add to the report are stylized, and they don’t automatically adjust to accommodate one another. For example, if text from one element runs long, it can crash into another. Other minor visual elements often need finessing, too. You can end up wasting a lot of time resizing boxes and nudging elements left and right to make it look decent, which probably isn’t what you’re getting paid to do. That’s a designer’s job, really.

That said, styling the reports in this way has a purpose. Once you finish with all the adjustments, the final product looks ready to export to a presentation directly (in PowerPoint, no doubt), so you can go from generating reports to sharing them without many additional steps.

Within the timesheets section, for those versions of the app that include it, team members can fill out time sheets for whatever duration you need, such as weekly or monthly. They can report not only time spent on tasks related to projects, but also what type of work it was, such as research and development or fulfillment. Another option lets people add time to their time sheets for tasks that aren’t specifically related to a project. For example, if Julia drives to meet with a client, the team might want to record that time and bill for it, even though the travel doesn’t appear as a task on a project.
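
Conceptually, a timesheet entry in such a system boils down to something like the following sketch; the fields are my own illustration, not Project’s data model:

```python
from dataclasses import dataclass

@dataclass
class TimeEntry:
    """Illustrative timesheet line item."""
    person: str
    hours: float
    work_type: str        # e.g., "research and development", "fulfillment"
    project: str = None   # None marks non-project time
    billable: bool = True

entries = [
    TimeEntry("Julia", 6.0, "development", project="Website Redesign"),
    TimeEntry("Julia", 1.5, "client travel"),  # billable, but tied to no task
]

print(sum(e.hours for e in entries if e.billable))  # 7.5 billable hours
```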

Room for Improvement

I’ve already alluded to the fact that Microsoft Project could offer more assistance in helping people get started with it and learn to use it.

Additionally, Project is weak when it comes to in-app communication. The problem is that Microsoft is a kingdom, and within its realm it already has plenty of tools for communicating. You can fire off an email with Outlook, or schedule a meeting in Calendar, or pop into Microsoft Teams for chat, or Yammer for conversations, or Skype for video calls, and so forth. But sometimes, when you’re working on a project, you just want to @ message someone or ping them in a chat and ask a question without breaking the context of your work by navigating to another app. Seeing as these tools already exist, why duplicate them in Project? (Some might refer to Microsoft as having an “ecosystem” rather than kingdom. An ecosystem can’t help but be what it is, but a kingdom chooses its boundaries.)

Indeed, traveling around the kingdom annoyed me to no end while I was testing Microsoft Project. A desire to share information might result in the app whisking me away to Outlook. A need to update something about a meeting scheduled in my project could leave my computer loading a new tab for Calendar without my consent. Many times, I wanted the ability to adjust all the details related to my project from within the project management app, not somewhere else.

While Microsoft has plenty of its own apps that work with Project, many organizations rely on tools that come from somewhere else, Salesforce being a prime example. Project does not integrate with many other tools. It’s not supported by Zapier either, which is an online tool that can sometimes connect apps and services that don’t natively talk to one another. If you’re hoping to loop your project management application into other online services that your team already uses, whether Slack or Trello or Salesforce, then Microsoft Project is not a good tool to choose.

A Powerful Tool Within Its Realm

While powerful and thorough in many respects, Microsoft Project fits only very specific companies. More and more, this is the case with many Microsoft apps. Your team needs to already be invested in Microsoft products for Project to make sense. It also works best for medium to large organizations, not small ones. Plus, you need a qualified and experienced project manager on the team to be the person driving the app.

If Microsoft Project isn’t an ideal candidate for your project management needs, I suggest small outfits look into Zoho Projects and Teamwork Projects, whereas larger organizations managing many more projects and resources take a dive into LiquidPlanner. All three earned the PCMag Editors’ Choice.

SDS, HCI and CDP are key to dream enterprise storage system

What better way to start a new year than to plan and build something brand new? My pet project would be an enterprise storage system. (OK, I really had my eye on that do-it-yourself Ferrari Portofino kit, but this is a storage column after all.)

Unfortunately, my extremely limited engineering abilities and all-thumbs hands mean I won’t be able to cobble my dream enterprise storage system together myself. My imagination doesn’t have those limitations, however, so I’ll describe it here. I want my new storage system to do everything and to do it well.

Some of the newer storage techs that have gotten a foot in the data center door over the past few years share a common theme: Storage has been too hard — hard to buy, hard to set up and configure, hard to use and hard to maintain. Easy, the new industry byword, might be a difficult concept for some storage pros to wrap their heads around. However, today, LUNs are shunned and provisioning is no longer a three-day turnaround, but rather a menu pick for end users.

HCI to the max

Nowadays, if you can’t call your storage hyper-converged or software-defined, it’s probably not really storage — at least not in the 21st-century sense. Given that, my custom-built enterprise storage system would be built around a hyper-converged infrastructure architecture, but with a couple of variations on the HCI theme.

My system would integrate the key components that make HCI, well, HCI — storage, servers and networking. But it would integrate software-defined storage management so closely with software-defined networking and software-defined servers that everything could be throttled up and down and configured and reconfigured on the fly, so storage performance and capacity could be rejiggered as needed. Every component — CPU, memory, network interface card, network, whatever — could be manipulated. That kind of endless flexibility would allow my system to morph into whatever was necessary at any given moment — to make sophisticated decisions like what data to tier, when to tier it and what to tier it to. It would be one big software-defined enchilada.
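
To illustrate the kind of tiering decision I have in mind, here’s a toy policy in Python; the tier names and thresholds are invented:

```python
import time

# Toy tiering policy: place data on media based on recency and frequency
# of access. Tier names and thresholds are invented.
def choose_tier(last_access_ts: float, accesses_per_day: float) -> str:
    age_days = (time.time() - last_access_ts) / 86400
    if age_days < 1 or accesses_per_day > 100:
        return "fast-flash"       # hot data: really fast flash
    if age_days < 30 or accesses_per_day > 1:
        return "capacity-flash"   # warm data: fairly fast flash
    return "hdd"                  # cold data: pokey, cheap and commodious

print(choose_tier(time.time() - 3600, 250))       # fast-flash
print(choose_tier(time.time() - 90 * 86400, 0))   # hdd
```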

My HCI system will also let servers outside the architecture access its storage resources. That way legacy gear can be part of the new world order as well. And the enterprise storage system will allow you to carve out storage regions that provide custom media configurations based on need. This means the system would accommodate all kinds of media: really fast flash; fairly fast flash; and pokey, cheap and commodious hard drives.

Cloud access would be built in — of course! — to allow access to multiple cloud providers for live, backup or archive data. This would be enabled by a file system that works like an automatic transmission, shifting transparently among protocols — block, NFS, SMB or object — as required by the apps accessing data. Seriously, users shouldn’t have to go under the hood of the storage machine to get that kind of protocol flexibility. It should just happen.
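
Conceptually, that protocol-shifting file system is a single backing store presented through several protocol faces. A deliberately simplified sketch (real NFS/SMB/object gateways are far more involved):

```python
# Deliberately simplified: one backing store, three protocol "faces."
class UnifiedVolume:
    def __init__(self):
        self._objects = {}

    def put_object(self, key: str, data: bytes):        # object-style access
        self._objects[key] = data

    def read_file(self, path: str) -> bytes:            # file-style (NFS/SMB)
        return self._objects[path.lstrip("/")]

    def read_block(self, lba: int, size: int = 512) -> bytes:  # block-style
        blob = b"".join(self._objects.values())
        return blob[lba * size:(lba + 1) * size]

vol = UnifiedVolume()
vol.put_object("reports/q4.txt", b"same bytes, three protocols")
print(vol.read_file("/reports/q4.txt"))
```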

Built-in backup

I’d also add continuous data protection-based backup, which has been languishing too long on the storage sidelines. It’s time for ongoing data protection that backs up data automatically to other storage systems, the cloud or Venus, wherever it makes the most sense and recovery is easiest, just as long as you don’t have to do anything but point the system in the right direction. It really doesn’t matter how it gets done, whether via native software or third-party apps, but we definitely want to do away with proprietary formats, so we can recover data with other tools or simple copy commands.
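
The mechanics of CDP are simple at heart: journal every write with a timestamp, then replay the journal to any moment. A toy sketch, with invented data:

```python
import json

# Toy continuous-data-protection journal: every write is appended with a
# timestamp, so the system can reconstruct any point in time. Stored as
# plain JSON lines, i.e., no proprietary format, recoverable with simple
# tools or copy commands.
journal = []

def cdp_write(key: str, value: str, ts: float):
    journal.append({"ts": ts, "key": key, "value": value})

def restore_as_of(ts: float) -> dict:
    """Replay the journal up to a chosen moment."""
    state = {}
    for entry in journal:
        if entry["ts"] <= ts:
            state[entry["key"]] = entry["value"]
    return state

cdp_write("config", "v1", ts=1.0)
cdp_write("config", "v2-oops", ts=2.0)
print(restore_as_of(1.5))       # {'config': 'v1'} -- the pre-mistake state
print(json.dumps(journal[0]))   # readable without any special tooling
```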

Encryption — in flight and at rest — will be the default. And if I can’t incorporate full-fledged security tools, the system will at least alert users when anything doesn’t quite look kosher, such as off-hours access, sudden activity or unexpected access to secondary data.

Storage, manage thyself

This dream enterprise storage system would also support mega-metadata. It’s an amped-up level of metadata that enables data to be smarter than us, or at least more on the ball than we typically are. So the data knows what to do with itself, what level of protection it needs based on its sensitivity or usefulness, how and when it should be tiered, who can read it and copy it, and when it should self-destruct by pressing its own delete button.
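
A rough sketch of what such self-describing data might look like; every field and threshold here is invented for illustration:

```python
from dataclasses import dataclass
from datetime import datetime, timedelta

@dataclass
class SmartObject:
    """Data that carries its own policy. Every field here is invented."""
    payload: bytes
    sensitivity: str                  # "public" | "internal" | "secret"
    created: datetime
    readers: tuple = ()
    retention: timedelta = timedelta(days=365)

    def required_protection(self) -> str:
        return {"secret": "3 copies + offsite", "internal": "2 copies"}.get(
            self.sensitivity, "1 copy")

    def should_self_destruct(self, now: datetime) -> bool:
        return now - self.created > self.retention

obj = SmartObject(b"...", "secret", datetime(2017, 1, 1), readers=("alice",))
print(obj.required_protection())                        # 3 copies + offsite
print(obj.should_self_destruct(datetime(2018, 2, 1)))   # True: past retention
```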

And, of course, all of this should have endless scalability to grow to those elusive n numbers of nodes or whatever — storage, servers, network, you name it. Your starter kit might be the size of a Cracker Jack box, but it should be able to expand to Google-ish dimensions using easy, in-place upgrades to ensure you’re always running on the latest technology.

Truth or fiction?

A lot of the stuff described here is actually available, so it’s not all pipe dream stuff. The problem is you’d be hard pressed to find a single product that has it all. The first vendor that gets there will corner the storage market for sure. OK, maybe not the whole market; maybe just me.

For Sale – 512GB M2 SSD / 32GB SODIMM RAM

Selling the following – neither has been used, as they were bought for a project that I never actually got around to starting. (I’m unable to find any of the receipts, unfortunately.)

512GB SanDisk X400, M.2 (22×80) SSD, SATA III 6Gb/s, 1.5mm, Read 540MB/s, Write 520MB/s, 93.5k/75k IOPS – £125

32GB (2x16GB) Corsair DDR4 SO-DIMM Vengeance Performance, PC4-21300 (2666), Non-ECC Unbuffered, CAS 18-19-19-39, 1.2V – £300

Delivery is included in the price.

Also – 4GB Hynix DDR3-1866 PC3-14900E Dual Channel ECC Unbuffered – £20

Price and currency: £125 / £250
Delivery: Delivery cost is included within my country
Payment method: B.T. / PPG
Location: Cheltenham
Advertised elsewhere?: Not advertised elsewhere
Prefer goods collected?: I have no preference

______________________________________________________
This message is automatically inserted in all classifieds forum threads.
By replying to this thread you agree to abide by the trading rules detailed here.
Please be advised, all buyers and sellers should satisfy themselves that the other party is genuine by providing the following via private conversation to each other after negotiations are complete and prior to dispatching goods and making payment:

  • Landline telephone number. Make a call to check out the area code and number are correct, too
  • Name and address including postcode
  • Valid e-mail address

DO NOT proceed with a deal until you are completely satisfied with all details being correct. It’s in your best interest to check out these details yourself.

Microsoft announces private preview, partnerships for AI-powered health bot project

Today, we’re pleased to announce the private preview of a new AI-powered project from Microsoft’s Healthcare NExT initiative, which is designed to enable our healthcare partners to easily create intelligent and compliant healthcare virtual assistants and chatbots. These bots are powered by cognitive services and enriched with authoritative medical content, allowing our partners to empower their customers with self-service access to health information, with the goal of improving outcomes and reducing costs. So, if you’re using a health bot built by one of our partners as part of our project, you can interact in a personal way, typing or talking in natural language and receiving information to help answer your health-related questions.

Our partners, including Aurora Health Care, with 15 hospitals, over 150 clinics and 70 pharmacies throughout eastern Wisconsin and northern Illinois, Premera Blue Cross, the largest health plan in the Pacific Northwest, and UPMC, one of the largest integrated health care delivery networks in the United States, are working with us to build out bots that address a wide range of healthcare-specific questions and use cases. For instance, insurers can build bots that give their customers an easy way to look up the status of a claim and ask questions about benefits and services. Providers, meanwhile, can build bots that triage patient issues with a symptom checker, help patients find appropriate care, and look up nearby doctors.

At Aurora Health Care, patients can use the “Aurora Digital Concierge” to determine what type of care they might need and when they might need it. Patients interact with the bot in natural language – answering a set of questions about their symptoms – and then the bot suggests what could be possible causes and what type of doctor they might want to see and when. They can also schedule an appointment with just a few clicks. This is an example of how AI can have direct impact on people’s everyday lives, helping patients find the most relevant care and helping doctors focus on the highest-priority cases.

“Aurora Health Care is focused on delivering a seamless experience for our consumers and the health bot allows us to introduce technology to make that happen. The use of AI allows us to leverage technology to meet consumers where they are; online, mobile, chat, text, and to help them navigate the complexity of healthcare,” said Jamey Shiels, Vice President Digital Experience, Aurora Health Care.

At Microsoft, we believe there is an enormous opportunity to use intelligent bots to make healthcare more efficient and accessible to every individual and organization. Our goal is to amplify human ingenuity with intelligent technology, and we’re doing that in healthcare by infusing AI into solutions that can help patients, providers, and payers. 

We are incubating the health bot project as part of Healthcare NExT, a new initiative at Microsoft to dramatically transform healthcare by deeply integrating greenfield research and health technology product development, in partnership with the healthcare industry’s leading players. Through these collaborations, our goal is to enable a new wave of innovation and impact in healthcare using Microsoft’s deep AI expertise and global-scale cloud.

Today, for instance, it can be particularly difficult for our healthcare partners to build bots that address the stringent compliance and regulatory requirements of the healthcare industry, and to integrate complex medical vocabularies. Our health bot project is designed to make this simple by providing an easy to use visual editor tool that partners can use to build and extend their bots, an array of healthcare-specific configuration options, out-of-the-box symptom checker content, as well as easy integration with partner systems and with our set of cognitive services.
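
To picture the kind of custom scenario a partner might build behind the visual editor, here’s a purely hypothetical Python sketch of a single bot turn; none of these function names come from Microsoft’s actual SDK:

```python
# Purely hypothetical sketch of one bot turn: route an utterance either
# to built-in symptom-checker content or to a partner system such as a
# claims lookup. None of these names come from Microsoft's actual SDK.
def handle_turn(utterance: str, claims_lookup, symptom_checker) -> str:
    text = utterance.lower()
    if "claim" in text:
        return claims_lookup(text)             # partner-system integration
    if any(w in text for w in ("pain", "fever", "cough")):
        return symptom_checker(text)           # built-in triage content
    return "I can help with claims or symptoms. What do you need?"

print(handle_turn(
    "I have a fever and a cough",
    claims_lookup=lambda t: "Checking your claim status...",
    symptom_checker=lambda t: "Let's go through your symptoms...",
))
```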

We are introducing a private preview program that will allow new partners to participate in the project; partners will be able to sign up on our website. The program includes built-in Health Insurance Portability and Accountability Act (HIPAA) compliance – a prerequisite for usage by any covered entity. It also includes access to the visual editor tools that partners can easily use to customize and extend bot scenarios, documentation and code samples published on Microsoft Docs, and pre-built integration with the Health Navigator symptom checker.

We’re extremely excited for the potential of our project to help people get better care and navigate the healthcare process more efficiently. Following the private preview, we will have more information to share for general availability.

For more information on the health bot project, please visit: https://www.microsoft.com/en-us/research/project/health-bot/


Apache Impala gets top-level status as open source Hadoop tool

Apache Impala SQL-on-Hadoop software this week gained the status of Top-Level Project within the Apache Software Foundation. That is a key step in the data engine’s progress, according to the open source standards group.

The Impala massively parallel processing (MPP) query engine is one of several — the list includes Hive, Spark SQL, Drill, HAWQ, Presto and others — that strive to bring SQL-style interactivity to distributed big data applications. In effect putting SQL on Hadoop, it was originated at Hadoop distribution provider Cloudera, which has contributed considerable resources to the Apache Impala effort.

Companies such as Caterpillar, Cox Automotive and the New York Stock Exchange have used Impala, which employs a data architecture that separates query processing from storage management.
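
As a taste of that SQL-style interactivity, here’s a minimal query sketch using the third-party impyla Python client against a hypothetical cluster; host, table and column names are placeholders:

```python
# Minimal query sketch using the third-party impyla client against a
# hypothetical Impala daemon on its default port. Host, table and
# column names are placeholders.
from impala.dbapi import connect

conn = connect(host="impala-host.example.com", port=21050)
cur = conn.cursor()
cur.execute("""
    SELECT region, COUNT(*) AS orders
    FROM sales.orders        -- the data itself lives in HDFS/Kudu/S3
    GROUP BY region
    ORDER BY orders DESC
    LIMIT 10
""")
for region, orders in cur.fetchall():
    print(region, orders)
```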

That separation of processing and storage was a deliberate part of the software’s initial conception, according to Marcel Kornacker, who created the Impala technology as a software engineer at Cloudera, based in Palo Alto, Calif., where he worked until last June.

Cloudera originally released Impala under an Apache license in 2012. Impala formally entered the Apache Incubator process in December 2015. That year, Impala was opened up to community contributions, and it has had four releases since then. Google, Oracle, MapR and Intel are among the vendors that have developed integrations with Impala.

“Graduation from Incubator to Top-Level Project status is recognition of the strong team behind the Impala project,” Kornacker said. “Along the way, Impala’s stability has increased. The community made that possible.”

Cluster at scale

“With Impala, you can perform analytics on a cluster at scale,” said Brock Nolan, chief architect and co-founder of phData, a Minneapolis-based managed data service and consulting firm that works specifically with the Cloudera big data platform. Queries run more quickly, he said, and data scientists don’t have to take those analytical jobs home with them.

He said Apache Impala benefited from efficient integration with other Hadoop ecosystem components. A useful integration he noted in particular is with Kudu. This column-oriented data store is also part of the Hadoop ecosystem, also an Apache project, also originated at Cloudera and also named after a species of antelope.

Nolan said he has found Kudu, together with Impala, useful in dealing with fast-moving internet-of-things data. Like much of the data that is fodder for big data tools these days, unstructured, nonrelational data is a big part of what is being gathered, but SQL support is still vital for analytics on that data.
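
A sketch of that Impala-plus-Kudu pattern, again with illustrative names and assuming a Kudu-enabled cluster:

```python
# Assumes a Kudu-enabled Impala cluster; table and column names are
# illustrative. Kudu supports row-level upserts, which is what makes
# the combination work for constantly arriving sensor readings.
from impala.dbapi import connect

cur = connect(host="impala-host.example.com", port=21050).cursor()
cur.execute("""
    CREATE TABLE IF NOT EXISTS sensor_readings (
        device_id STRING,
        ts        TIMESTAMP,
        reading   DOUBLE,
        PRIMARY KEY (device_id, ts)
    )
    PARTITION BY HASH (device_id) PARTITIONS 16
    STORED AS KUDU
""")
cur.execute("UPSERT INTO sensor_readings VALUES ('dev-42', now(), 98.6)")
```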

“Graduating to being a Top-Level Project has really been about the ability to develop a community of contributors,” he said. “When something becomes a Top-Level Project, it means there is traction, and it is mature, that there are multiple companies that are using it and contributing, not just customers of Cloudera.”

Sting like a butterfly, query like an Impala

Software like Apache Impala moves the Hadoop ecosystem, originally based on the MapReduce processing scheme, deeper into the realm of real-time processing, according to Mike Matchett, an analyst and consultant at Taneja Group in Hopkinton, Mass.

“When you look at MapReduce, the original Hive and the like, you see a batch approach to analytics. They are not designed as interactive tools,” he said. Meanwhile, he continued, data managers want to deliver interactivity to their audience in the corporation.

Impala, like other emerging tools, is intended to bring SQL capabilities, widely supported in organizations, to distributed data processing. These tools may find different fits within big data pipelines.

“We see Impala getting broad adoption,” said Nolan, who also employs Spark SQL for different use cases as part of phData’s practice.

“Spark SQL is really good at ETL [extract, transform, load],” Nolan said. “At the same time, Impala is our go-to SQL on Hadoop for presenting results to data scientists and business analysts. We use both, but Impala is what broadens the user base of the data.”
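
That division of labor might look roughly like this PySpark sketch, with placeholder paths and schema:

```python
# Sketch of the Spark-for-ETL, Impala-for-serving split. Paths and
# schema are placeholders.
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("etl-sketch").getOrCreate()

raw = spark.read.json("hdfs:///landing/events/")       # messy source data
clean = (raw.filter(raw.event_type.isNotNull())
            .withColumnRenamed("event_type", "type"))

# Write Parquet into the warehouse; after a REFRESH in Impala, analysts
# get their low-latency, high-concurrency queries from the result.
clean.write.mode("overwrite").parquet("hdfs:///warehouse/events/")
```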

For his part, project founder Kornacker said the design of Apache Impala puts special emphasis on low latency and high concurrency, characteristics that have sometimes eluded Hadoop-style applications when they go into actual production, and armies of users start to ask questions of their data trove.

Making the grade

Impala advances are part of a broader move, one that sees much of the innovation in data processing tools centered on open source, rather than proprietary tooling. That move has also seen “Hadoop” data processing gain a wider definition.

“These days, Hadoop is a collection of things,” Nolan said. “Two years ago, it meant something specific. But because of its open architecture, it is different now.”

Such open architecture has led to the sudden arrival of a host of data options. It allows users to, in Nolan’s words, “bring in different layers of software now to enable new functions.”

Software such as Impala drives forward the notion that data integration these days has more aspects, according to Rick Sherman, founder of consultancy Athena IT Solutions in Maynard, Mass. Sherman, who also teaches big data analytics at Northeastern University, counts Impala, as well as Hive, among the tools his students employ as part of their education.

“They have to learn that data integration isn’t just relational,” he said. “Today, there are different use cases for Hadoop, for NoSQL or for relational and columnar processing. Figuring out where the best uses of these tools are — that is what you have to learn to do.”

Microsoft expansion is a welcome boost to Puget Sound region

Microsoft’s breathtaking expansion project affirms the Puget Sound region’s capacity for continued growth and reinvention.

The company announced Wednesday a multibillion-dollar expansion that includes 18 new buildings, fields and parking on its main Redmond campus east of Highway 520.

This should not be taken for granted.

Cities, states and nations are desperate for this sort of economic development, especially by a leading company in a clean industry. Heads of state plead for Microsoft to open a single office and cannot imagine the company building a small city’s worth.

Microsoft will add the capacity for 8,000 additional employees on its campus, although much of the new space may be filled by people now working in offices that Microsoft currently leases in Bellevue.

The next challenge for the Eastside and the rest of the Puget Sound region will be to nurture other companies to grow into the next giants and continue attracting other companies.

Comparisons are inevitable with Amazon’s building spree in Seattle.

One way to look at it is to marvel at how both Microsoft and Amazon continue to innovate and sustain their growth by creating new, multibillion-dollar businesses.

Microsoft’s construction project is a physical manifestation of this process.

As it continues its transition into a provider of online services, Microsoft is building more open and expansive offices. They’ll replace the formerly futuristic, X-shaped buildings where it produced software that dominated the personal-computer era.

It’s also enlightening to look at Amazon’s decision to open a second headquarters elsewhere.

That happened in part because of concerns about the business climate in Seattle proper, including the city’s poor job of planning for growth that was clearly coming when Amazon submitted its development proposals.

Microsoft also brought disruption and community transformation, particularly when it surged in the 1990s.

Eastside officials learned from this experience. Microsoft’s more recent expansions, over the last decade, were more easily absorbed because of thoughtful planning and regional coordination. That includes road, highway and transit expansion concurrent with growth and careful placement of new housing density within urban hubs.

Such efforts are now paying off. Redmond has infrastructure and housing capacity in place or in process for all the potential growth Microsoft’s new project will generate. Construction of the 18 new buildings is expected to finish around the same time a light-rail station is scheduled to open at its campus in 2023.

The region is blessed to have such companies. They create opportunity and employment for residents and newcomers attracted by their dynamism and success creating world-changing products.

With good infrastructure and gleaming campuses on both sides of Lake Washington, municipal boundaries become less important and all meld into the Greater Seattle region.

New ONAP architecture provides network automation platform

Eight months after its inception, the Open Network Automation Platform project released its first code, dubbed Amsterdam. The ONAP architecture is a combination of two former open source projects — AT&T’s Enhanced Control, Orchestration, Management and Policy and the Open-Orchestrator project.

ONAP’s November release targets carriers and service providers by creating a platform for network automation. It includes two blueprints — one for Voice over Long Term Evolution and one governing virtual customer premises equipment. Additionally, Amsterdam focuses on automating the service lifecycle management for virtual network functions (VNFs), said Arpit Joshipura, general manager of networking and orchestration at The Linux Foundation, which hosts the ONAP project.

The ONAP architecture includes three main components: design time, run time and the managed environment. Users can package VNFs according to their individual requirements, but Amsterdam also offers a VNF software developer kit (SDK) to incorporate third-party VNFs, Joshipura said.

Once services are live, the code — a combination of existing Enhanced Control, Orchestration, Management and Policy, or ECOMP, and Open-O with new software — can manage physical and virtual network functions, hypervisors, operating systems and cloud environments. The ONAP architecture integrates with existing operational and billing support systems through an external API framework.

VNF automation is a key component, Joshipura said.

“The network is constantly collecting data, analytics, events, security, scalability — all the things relevant to closed-loop automation — and then it feeds it [the data] back to the service lifecycle management,” he said. “If a VNF needs more VMs [virtual machines] or more memory or a change in priority or quality of service, all that is automatically done — no human touch required.”
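
Conceptually, that closed loop reduces to telemetry feeding a policy that acts on the VNF. A toy sketch, with invented thresholds and interfaces:

```python
# Toy closed loop: telemetry from a running VNF feeds a policy that
# scales it with no human touch. Thresholds and the VNF interface are
# invented for illustration.
class DemoVNF:
    def scale_out(self, vms): print(f"scaling out by {vms} VM(s)")
    def add_memory(self, gb): print(f"adding {gb} GB of memory")
    def raise_qos_priority(self): print("raising QoS priority")

def closed_loop_step(vnf, metrics: dict):
    if metrics["cpu_util"] > 0.85:
        vnf.scale_out(vms=1)          # the VNF needs more VMs
    elif metrics["mem_util"] > 0.90:
        vnf.add_memory(gb=4)          # ...or more memory
    elif metrics["dropped_pkts"] > 1000:
        vnf.raise_qos_priority()      # ...or a change in quality of service

closed_loop_step(DemoVNF(),
                 {"cpu_util": 0.92, "mem_util": 0.40, "dropped_pkts": 0})
```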

Because ONAP is a collection of individual open source projects, some industry observers and potential users expressed doubts about how easy it would be to put Amsterdam to use — particularly since AT&T was originally the main ECOMP contributor. But Joshipura said ONAP reworked the code to reduce the complexity and make Amsterdam usable for the majority of users, not just specific contributors.

“Originally, yes, it was complex because it was a set of two monolithic codes. One was Open-O and the other was ECOMP,” he said. “Then, what we did was we decoupled and modularized it and we removed all the open source components. We refactored a lot of code when we added new code.”

The result is a modular platform — not a product, he said — that has many parts doing several different things. This modularity means carriers and service providers can pick and choose from the Amsterdam code or use the platform as a whole.

ONAP’s next release — Beijing, expected in 2018 — will focus on support for enterprise workloads, including 5G and internet of things (IoT).

MEF releases 3.0 framework aimed at automation, orchestration

MEF has released a new framework governing how service providers deploy network automation and orchestration.

MEF 3.0 Transformational Global Services Framework is the latest effort by the organization to move beyond its carrier Ethernet roots. MEF is shifting its focus toward creating a foundation that service provider members can use as they move toward cloud-native environments.

MEF 3.0 is developed around four main components: standardized and orchestrated services, open lifecycle service orchestration (LSO) APIs, certification programs and community collaboration.

With the new framework, MEF is defining network services, like wavelength, IP and security, to help service providers move to cloud environments and network automation, according to Pascal Menezes, CTO at MEF, based in Los Angeles.

“A service is defined like a language that everybody can understand, whether it be a subscriber ordering it or a provider implementing it. They all agree on that language,” he said. “But how they actually implement it and what technology they use is independent and was never really defined in any specs. It defines SLA objectives, performance objectives and different classes of performances, but it doesn’t tell you how to implement.”
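
To illustrate Menezes’ point, a service order under such a framework might look something like this sketch; the field names are invented, not MEF’s actual schema:

```python
# A service order expressed as a shared "language": SLA and performance
# objectives only, with no implementation details. Field names are
# invented, not MEF's actual schema.
service_order = {
    "service": "ethernet-line",
    "endpoints": ["site-london", "site-frankfurt"],
    "bandwidth_mbps": 500,
    "sla": {
        "availability_pct": 99.95,
        "max_latency_ms": 20,
        "performance_class": "high",
    },
    # Nothing here says *how* to build it: MPLS, an optical wavelength
    # or an SD-WAN overlay are all valid implementations of this order.
}

print(service_order["sla"])
```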

MEF has previously worked on orchestrating connectivity services, like wavelength and IP, and intends to deliver that work early next year, Menezes said. MEF has started developing SD-WAN orchestration standards, as well, citing its role as a bridge between connectivity layer services and other services, like security as a service and application performance, he added.

These services are automated and orchestrated via MEF LSO APIs. MEF released two LSO APIs earlier this year and will continue to develop more within MEF’s LSO reference orchestration framework. The certification programs will correlate with upcoming releases and are subscription-based, he said.

The fourth MEF 3.0 component involves what MEF calls community collaboration. This involves open source contributions, an enterprise advisory council, hackathons and partnerships with other industry groups. MEF and ONAP, for example, announced earlier this year they are working together to standardize automation and orchestration for service provider networks.

In a separate announcement this week, MEF said it plans to combine its efforts to determine how cloud applications connect with networks with work conducted by the International Multimedia Telecommunications Consortium (IMTC) to define application performance. According to Menezes, MEF will integrate existing IMTC work into its new applications committee and will take over any active projects as part of the MEF 3.0 framework. 

“IMTC has been focused on real-time media application performance and interoperability. It made a lot of sense to bring that work into MEF,” Menezes said.