Former Veeam CEO looks back, company looks ahead

As Veeam leadership vows to stay the course with its strategy following an unexpected executive shake-up, its former co-CEO Peter McKay said he aims to lead a software company again by the end of 2019. But for now, McKay said he is ready for some downtime.

Last week, co-founder Andrei Baronov became the new Veeam CEO. McKay left the data protection company after two and a half years as an executive, including 18 months as president and co-CEO.

“The experience was awesome,” McKay said in an interview. “I’m really going to miss the team.”

Dan Thompson, research director at 451 Research, called McKay’s departure from Veeam “a shock.”

“He was a really solid leader,” Thompson said.

McKay reflects on Veeam work, next steps

McKay joined the company as COO and president in July 2016, following nearly three years in executive roles at VMware. Previously, he was president and CEO at Desktone, which was acquired by VMware in 2013. He had shared Veeam CEO duties with Baronov since May 2017.

Veeam, which is based in Switzerland, grew from a $400 million company to one with nearly $1 billion in annual revenue during McKay’s tenure. He said Veeam was already successful when he started and joining the organization was a “great opportunity for me to come in and scale it.”

Headshot of former Veeam co-CEO Peter McKay

In the last two and a half years, Veeam grew its enterprise base considerably, added several high-profile partnerships and acquired AWS backup company N2WS.

McKay seemed disappointed that the company hadn’t quite hit that long-discussed billion-dollar goal before he left.

“The timing was less than desirable, but there were a lot of factors that went into it,” he said of his departure.

McKay listed the nonstop nature of the job, wear and tear of the work, and the time away from family as major reasons for leaving. He said there were a few other factors, but declined to elaborate. When asked if it was entirely his decision to leave, McKay declined to comment. He said it was mutually agreed that it was the right time to exit.

Ratmir Timashev, Veeam’s other founder and now executive vice president of worldwide sales and marketing, said he was surprised by McKay’s departure.

“That was Peter McKay’s decision, driven by a desire to pursue some other opportunities,” Timashev said last week when the Veeam CEO news became public. He said he couldn’t comment further.

McKay said a breach two months ago did not play into his decision to leave and the issue was resolved to his satisfaction. Veeam said human error caused a database with 4.5 million unique email addresses to be accessible to third parties for two weeks.

Timashev also said the issue has been resolved and McKay’s departure does not have to do with the breach. Though Veeam does not believe the breach resulted in any damage, the company plans to provide more follow-up information soon, as a third party performed an analysis, Timashev said.

McKay also said he didn’t have a problem with sharing the Veeam CEO duties. As co-CEO, McKay oversaw Veeam’s “go-to-market,” finance and human resources functions, while Baronov led research and development and product management.

McKay did acknowledge that a co-CEO setup can become tough for an organization, given the risk of it effectively splitting into two companies. In the short term, though, McKay said it wasn’t a challenge and he got along with Baronov.

“I probably wouldn’t look for another co-CEO position, but we made it work at Veeam,” McKay said.

He said he’ll miss the people at Veeam the most, noting that he has worked with some of them for close to 20 years. More than 100 people responded with congratulations and well wishes to his LinkedIn post last week discussing the end of his tenure.

“In one sense, I’m humbled, I’m proud,” McKay said. “In another sense, I’m sad.”

McKay said he is looking forward to relaxing for a while. The time off so far included a trip to the World Series victory parade for the Boston Red Sox last week. He said he will spend more time at his children’s sporting events and with family in general, work around the house and enjoy leisure travel. He also plans to devote more effort working on boards, including the board of email marketing calendar company Coherent Path.

McKay said he does not have a new job lined up, and will begin his job search around the beginning of 2019. He said he doesn’t know the exact field yet, but it will be in software.  

McKay has given himself until sometime between the second and fourth quarters of 2019 to be back in a CEO role.

“I still have a ways to go,” McKay said. “I’m not retiring, that’s for sure.”

Veeam moves forward with ‘same plan’

Timashev said he and William Largent, a former Veeam CEO who is now executive vice president of operations, will split McKay’s duties. Timashev was Veeam’s CEO from 2006 until 2016, and has remained active in executive roles since then.

Thompson, of 451 Research, said he doesn’t anticipate major strategy changes following the Veeam CEO switch. Both founders and Largent have served as CEO and have all played major roles in shaping Veeam.

Thompson said Veeam may want to get back to its roots and focus more resources on the IT professional.

“What has made Veeam popular is the software works amazing and IT pros love it,” he said, noting that Veeam’s recent marketing of “Intelligent Data Management for the Hyper-Available Enterprise” is geared more toward leadership.

Edwin Yuen, senior analyst at Enterprise Strategy Group, said Veeam has done a good job moving beyond its initial focus on protecting virtual machines. Now it provides more comprehensive data protection, including support for physical machines and public clouds.

“I think that’s absolutely critical for them to go forward,” Yuen said.

Veeam’s biggest challenge may be maintaining its growth rate as it approaches $1 billion in revenue. In October, Veeam said the third quarter was its 41st consecutive quarter of double-digit growth, claiming bookings increased by more than 20% year over year. Veeam’s cloud business has seen a huge surge, growing in the third quarter by 26% year over year, while enterprise business increased close to 25%.

Veeam claims 320,000 customers, 59,000 channel partners, more than 20,000 cloud and service providers, and about 3,600 employees.

The strategy and tactics are not changing.
Ratmir Timashev, co-founder, Veeam

“They’ve always been on track,” Yuen said. “They’ve been growing in a good direction.”

Yuen also noted the stability of Veeam’s leadership team, including founders Baronov and Timashev. Baronov had no comment at this time.

Timashev said the company has a strong roadmap, including an update to its flagship Veeam Availability Suite scheduled for January. Veeam also said Monday that Daniel Fried rejoined the company as general manager and senior vice president of Europe, the Middle East and Africa (EMEA) and will oversee the strategic and operational direction of the area. Olivier Robinne, formerly senior vice president of EMEA sales, has left the company after seven years to pursue new opportunities, according to Veeam.

In a highly competitive market, Timashev cited Dell EMC, IBM, Commvault and Veritas among Veeam’s top rivals. He said Veeam also occasionally competes for deals with startups Cohesity and Rubrik.

Although Veeam’s largest competitors offer an integrated appliance option for their software, Timashev insists Veeam will never produce its own hardware. He said it will continue to partner with hardware vendors on appliances, pointing to deals with Hewlett Packard Enterprise, Lenovo and Cisco.

He also said the company will remain private “for now.”

“The strategy and tactics are not changing,” Timashev said. “We’re going with the same plan.”

For Sale – Complete RGB Gaming PC 7700K, GTX1080

Hi guys,

New to AVF selling, but a long-time member of the members market of a well-known computer parts retailer. I’m flying off to a ski season on 5 November, so I’ve decided to sell my pride and joy to fund ski gear!

I built this computer myself from individual parts in September 2017 and so it is about 13 months old (a few components e.g. SSD and PSU are older as they came from a previous build).

The complete specs are:

Intel i7 7700K
Corsair Hydro H100i GTX 240mm
Asus Maximus IX Formula
Team Group Night Hawk RGB 16GB (2x8GB) DDR4 PC4-25600C16 3200MHz
Asus Strix GTX1080
Superflower Leadex 1000W Platinum
Phanteks Enthoo Luxe Glass Midi Tower Case
(w/ additional Phanteks Multicolor Magnetic RGB LED Strip which I have placed around the outside of the window)
Crucial M500 250GB SSD
Samsung F1 1TB HDD
DVD-RW Optical Drive

The case, GPU, RAM and motherboard are all RGB and sync with Asus’s software, so any effects are perfectly synchronized and it looks gorgeous.

The CPU has had a mild overclock but never pushed significantly.
I have a lot (but not all) of the boxes for the individual components should you wish to sell/change any parts.

I would never post this so payment on collection/meet-up only. Willing to drive up to 1hr from my post code of KT21 2LW (I will require a small deposit of £20 for driving just to avoid any time wasting).

Price and currency: 1000GBP
Delivery: Goods must be exchanged in person
Payment method: Cash, Paypal (F&F) or Bank Transfer
Location: Ashtead, Surrey
Advertised elsewhere?: Advertised elsewhere
Prefer goods collected?: I prefer the goods to be collected

This message is automatically inserted in all classifieds forum threads.
By replying to this thread you agree to abide by the trading rules detailed here.
Please be advised, all buyers and sellers should satisfy themselves that the other party is genuine by providing the following via private conversation to each other after negotiations are complete and prior to dispatching goods and making payment:

  • Landline telephone number. Make a call to check out the area code and number are correct, too
  • Name and address including postcode
  • Valid e-mail address

DO NOT proceed with a deal until you are completely satisfied with all details being correct. It’s in your best interest to check out these details yourself.

XAML Islands – A deep dive – Part 2 – Windows Developer Blog

Welcome to the second post of our XAML Islands deep-dive adventure! In the first blog post, we talked a little bit about the history of this amazing feature, how the XAML Islands infrastructure works and how to use it, and how you can leverage binding in your Island controls.
In this second post, we’ll take a quick look at how to use the wrapper NuGet packages and how to host your custom controls inside Win32 apps.

Creating custom wrappers around UWP controls can be a cumbersome task, and you probably don’t want to do that. For simple things such as Buttons, it should be fine, but the moment you want to wrap complex controls, it can take a long time. To make things a little bit less complicated, some of our most requested controls are already wrapped for you! The current iteration brings you the InkCanvas, the InkToolbar, the MapControl and the MediaPlayerElement. So now, if your WPF app is running on a Windows 10 machine, you can have the amazing and easy-to-use UWP InkCanvas with an InkToolbar inside your WPF App (or WinForms)! You could even use the InkRecognizer to detect shapes, letters and numbers based on the strokes of that InkCanvas.
How much code does it take to integrate with the InkCanvas? Not much at all!
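As a rough sketch of what that integration can look like (assuming the wrapped controls from the Microsoft.Toolkit.Wpf.UI.Controls NuGet package; the window class name and layout here are made up for illustration):

```xml
<Window x:Class="InkDemo.MainWindow"
        xmlns="http://schemas.microsoft.com/winfx/2006/xaml/presentation"
        xmlns:x="http://schemas.microsoft.com/winfx/2006/xaml"
        xmlns:controls="clr-namespace:Microsoft.Toolkit.Wpf.UI.Controls;assembly=Microsoft.Toolkit.Wpf.UI.Controls"
        Title="Ink Demo" Height="450" Width="800">
    <Grid>
        <Grid.RowDefinitions>
            <RowDefinition Height="Auto" />
            <RowDefinition Height="*" />
        </Grid.RowDefinitions>
        <!-- The only lines beyond the Grid definition: the wrapped UWP controls. -->
        <controls:InkToolbar Grid.Row="0" TargetInkCanvas="{x:Reference inkCanvas}" />
        <controls:InkCanvas x:Name="inkCanvas" Grid.Row="1" />
    </Grid>
</Window>
```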

Most of it is just the Grid definition; in fact, we added very few lines of code (only two). And that gives your users an amazing experience, enabled by XAML Islands and the new UWP controls.

Everything I explained so far is for platform controls, but what if you want to wrap your own custom UWP UserControl and load it using WindowsXamlHost? Would it work? Yes! XAML controls, when instantiated in the context of an Island, handle resources in a very smart way, meaning that the ms-appx protocol just works, even if you are not running your Win32 process inside a packaged APPX. The root of the ms-appx protocol maps to your executable’s path.
As of right now, you can’t just create a UWP library and reference it in your WPF or WinForms project, so the whole process of using a custom control is manual. When you develop a UWP app (in C#, for example), you are compiling against a UWP flavor of the .NET Core framework, not the full .NET Framework. For your custom control to work in a WPF or WinForms app based on the full .NET Framework, you must recompile the artifacts of the UWP library with the full .NET Framework toolset, by copying them into your WPF/WinForms project. There is very good documentation that describes all the necessary steps. Remember that your WPF/WinForms project does not, by default, target any specific Windows 10 version, so you need to manually add references to some WinMD and DLL files. Again, this is all covered in Enhance your desktop application for Windows 10, which describes how to use Windows 10 APIs in your Desktop Bridge Win32 app. By referencing the WinMDs and DLLs, you will also be able to build these compilation artifacts from the UWP library in the WPF/WinForms project (full .NET Framework).
NOTE: There is a whole different process for native code (C++/WinRT), which I’m not going to get into in this blog post.
You also can’t build these artifacts as-is. You need to tell the build system to disable type-information reflection and x:Bind diagnostics, because the generated code won’t otherwise be compatible with the .NET Framework. You can make it work by adding these properties to your UWP library project:
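Reconstructed from the description above (the property names are as documented for XAML Islands; treat them as an assumption if your tooling version differs):

```xml
<PropertyGroup>
  <!-- Disable generated code that the full .NET Framework can't compile. -->
  <EnableTypeInfoReflection>false</EnableTypeInfoReflection>
  <EnableXBindDiagnostics>false</EnableXBindDiagnostics>
</PropertyGroup>
```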


Now, you could just manually copy the required files into the WPF/WinForms project, but then you would have multiple copies of them. You can automate that process with a post-build step, as the documentation does. Done that way, though, it will not work if you try to pack your app into an APPX, because the files will not get copied. To improve on that, I created a custom MSBuild snippet that does it for you. The advantage of the MSBuild snippet is that it adds the C# files, as well as the compilation outputs from the library, all in the right place. All you have to do is copy the script and it will just work.
NOTE: Keep in mind that this will be handled by Visual Studio in the future, so you’ll have to remove whichever workaround you chose when that happens.
This is the snippet:
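A hedged sketch of such a snippet, reconstructed from the description that follows (the IslandLibrary path and the exact file globs are illustrative assumptions, not the author’s script):

```xml
<PropertyGroup>
  <!-- Assumed location of the UWP library project; adjust to your solution. -->
  <IslandLibrary>..\MyIslandLibrary\</IslandLibrary>
</PropertyGroup>
<ItemGroup>
  <!-- Code-behind plus the XAML-generated partial classes (.g.cs / .g.i.cs). -->
  <Compile Include="$(IslandLibrary)**\*.xaml.cs" />
  <Compile Include="$(IslandLibrary)obj\**\*.g.cs;$(IslandLibrary)obj\**\*.g.i.cs" />

  <!-- Content files and compiled XAML (.xbf), copied so ms-appx:/// resolves. -->
  <Content Include="$(IslandLibrary)**\*.png;$(IslandLibrary)**\*.jpg;$(IslandLibrary)obj\**\*.xbf">
    <Link>%(RecursiveDir)%(Filename)%(Extension)</Link>
    <CopyToOutputDirectory>PreserveNewest</CopyToOutputDirectory>
  </Content>
</ItemGroup>
```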




This MSBuild snippet copies files, based on the IslandLibrary property path, into the project where it resides. The IslandLibraryCompile item includes:

  • All the .xaml.cs files. That enables you to reuse the code-behind of your custom controls.
  • All the generated .g.i.cs and .g.cs files. Everything you do under the “x:” prefix is actually generated code, and these files are where that generated code ends up. They contain the partial classes that hold the fields for all the x:Name elements in the corresponding XAML files, along with the code that connects those fields to their actual instances. They also reference the .xaml file that is loaded whenever the InitializeComponent method is called, usually at the beginning of the control’s constructor. You can treat these files as a black box; it is interesting to understand what is inside them, but not strictly necessary to know how they work.

The IslandLibraryContent item includes:

  • All the content files of your project. This copies the files your project needs at runtime, such as PNGs and JPGs, into the right folders so ms-appx:/// will “just work”™. There are better ways of doing this, but it covers the most common scenarios.
  • All the generated .xbf files. XBF stands for XAML Binary Format, a compiled version of your .xaml files that loads much faster than the XAML files (no XML parsing, for example). Even though the .g.i.cs files might look like they load the .xaml files, the XAML infrastructure always tries to load the .xbf files first, for performance, and falls back to the .xaml files only if it can’t find them. This MSBuild script does not copy the .xaml files, since they bring no advantage over the .xbf files.

To make sure that your developer experience is optimal, you also have to add a solution-level project dependency from the WPF/WinForms project to the UWP library project. That way, whenever you change any of the UWP library’s files, you can just build the WPF/WinForms project and the newest artifacts will already be in place, because the projects compile in the correct order. All these steps will go away in a future version of Visual Studio, when the tooling gets updated. These steps are described in the documentation.
With these files included in the project’s build infrastructure and with the build dependency added, your WindowsXamlHost should work just fine if you set its InitialTypeName to your custom control’s fully qualified name. You can check out the sample project here.
With this MSBuild snippet, even apps packaged with the “Windows Application Packaging Project” template should work. If you want to know more, check out this blog post.

Again, this release is a preview, so nothing you see here is production-ready, and several areas are still being worked on. To name a few:

Wrapped Controls properly responding to changes in DPI and scale.
Accessibility tools that work seamlessly across the application and hosted controls.
Inline inking, @Places, and @People for input controls.

For a complete list, check the docs.

The version just released is not the final stable version; it is a preview. We’re still actively working on improving XAML Islands. We would love for you to test out the product and provide feedback on UserVoice, but we are not currently recommending this for production use.

Hunt down zombie VMs to trim Windows Server licensing costs

Microsoft’s change from a per-CPU server OS licensing model to a per-core approach has caused many organizations to look for ways to reduce Windows Server licensing costs. Finding and eliminating zombie VMs is one way to accomplish this goal.

Advances in virtualization technology make it quick and easy to scale workloads out horizontally, but this ability has a downside: It can lead to server sprawl. Many application owners see virtual servers as an almost-unlimited resource, so they create VMs quickly and forget them just as fast. Left unchecked, this can lead to numerous zombie servers wandering through a virtual infrastructure.

Numerous virtual servers lead to real costs

Even the smallest physical server requires an organization to spend a minimum of $6,155 for a Windows Server Datacenter edition license, based on Microsoft’s 16-core minimum per server. For a quad-processor server with a total of 96 cores, the cost jumps to more than $36,000. That’s before Software Assurance gets tacked on.
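The arithmetic above can be sketched as follows (the list price is the figure quoted in this article; pricing in 16-core increments is an assumption about how the minimums compose, so check Microsoft's current licensing terms before budgeting):

```python
import math

# Quoted list price for a 16-core Windows Server Datacenter license.
PRICE_PER_16_CORES = 6155

def datacenter_license_cost(total_cores: int) -> int:
    """Rough per-core licensing cost, assuming a 16-core minimum per
    server and pricing sold in 16-core increments."""
    licensed_cores = max(total_cores, 16)        # 16-core floor per server
    packs = math.ceil(licensed_cores / 16)       # number of 16-core licenses
    return packs * PRICE_PER_16_CORES

print(datacenter_license_cost(16))  # smallest server: 6155
print(datacenter_license_cost(96))  # 4 processors x 24 cores: 36930
```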

This sticker shock of the new pricing model hits companies considering a migration, and it causes administrators to take a closer look at the workloads in their inventory in an effort to find ways to cut down on Windows Server licensing costs.

A zombie server that isn’t doing much doesn’t present a huge drain on resources, because hypervisors excel at allocating resources to VMs that need it. However, these zombie servers increase the VM density per host and drive the need for additional hosts, which leads to higher Windows Server licensing costs.

Finding the VM is just the first step


The best way to deal with a zombie virtual server is to remove it. It’s not a complex process to dispatch these VMs. Finding them is where the difficulties lie, because it’s possible to inadvertently remove a VM that’s still necessary.

As part of the investigative work, IT admins need to identify the zombie VM, then verify it is needed. This two-step process is critical to avoid removing or stopping production workloads.

Admins can detect zombie workloads by name or by performance; the easiest is by name. Temporary servers are often spun up without much advance planning and, consequently, do not match the company’s server-naming scheme. The hard part is determining whether the workload has a valid purpose, so IT admins should not disconnect or remove something just because its label looks off.
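As an illustration of the name-based check (the naming pattern and the VM names here are hypothetical; a real inventory would come from the hypervisor’s management API):

```python
import re

# Hypothetical corporate scheme: role-site-number, e.g. "web-lon-01".
NAMING_SCHEME = re.compile(r"^(web|db|app|dc)-[a-z]{3}-\d{2}$")

def flag_suspect_vms(vm_names):
    """Return names that break the scheme. These are candidates for
    investigation, not proof of a zombie: verify ownership before
    disconnecting or removing anything."""
    return [name for name in vm_names if not NAMING_SCHEME.match(name)]

inventory = ["web-lon-01", "db-nyc-02", "New Virtual Machine", "test-jim"]
print(flag_suspect_vms(inventory))  # ['New Virtual Machine', 'test-jim']
```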

This is where the monitoring aspect comes into play. If the admin finds a server that appears to be dormant, with no CPU activity or network I/O over the course of several days, then it might be a zombie VM. Instead of deleting it, disconnect it from the network. This is an ideal way to see if anyone notices without causing much harm.

If someone asks about the VM, then IT can reconnect the server easily. When the organization finds VMs that don’t use the proper naming scheme, find out what the server does and who owns it to ensure they follow best practices the next time.

VM CPU usage
Administrators can use the Hyper-V Manager to see which VMs might be zombie servers by seeing how much CPU usage goes to each VM.

If no one asks about the server after about a week, then shut it down. Keep it in this shutdown state for several days. If there are no complaints, remove it from the virtual inventory and place it in cold or low-tier storage for about a month before deleting it. This procedure offers a smart and safe way to consolidate workloads on the virtual hosts.

Removing zombies has other benefits

The verification process is the risky part when it comes to zombie workloads. Pulling a VM out of production can trigger a disruption, which is always a concern. There are risks when removing zombie workloads, but the payoff can be huge when the organization can cut back on Windows Server licensing costs.

In addition to the financial savings, removing zombie servers also spares IT admins unnecessary maintenance. Admins responsible for patching servers have probably been updating these zombie workloads, along with all of the other VMs, to avoid exposure to an east-west attack. That time and effort drains the IT staff’s resources.

1 Week to Skype-a-Thon 2018: Join us to celebrate global learning and open your students’ hearts and minds |

If you want to give your students something they can’t wait to go home and share around the dinner table, something that keeps them interested and curious for days, participate in Skype-a-Thon. And if you want to give your students something they’ll never forget from their time with you as their educator, participate in Skype-a-Thon. It’s just a week away, and it’s an event you don’t want to miss!

Many of the participating educators in over 90 countries tell us this is their favorite activity of the year, because it engages their students deeply in learning and opens their hearts and minds to the world. Just be ready for the added enthusiasm in your classroom!

One of those participating educators is Todd Flory (Kansas, USA), who says: “We are excited to connect with experts and virtual field trips that will make our academic standards come to life. Our students learn so much more when they can ask questions and do authentic research. We also love connecting with other classrooms around the world, as it helps our students celebrate diversity and learn that our differences are what makes us special and stronger as a global society.”

A classroom of students talk to another classroom via a Skype window projected onto a screen.


Another educator from Nigeria, Olukemi Olurinola, says: “Participating in Skype-a-Thon is always the highlight of our school year! We open our classrooms to the world and connect with experts and other classrooms to learn about different cultures and environments and help them model compassion for one another. The best way to learn about the world is learning with the world.”

Seated students facing a screen in their classroom as they converse with a remote educator via Skype.


We expect nearly 500,000 children and guest experts around the world to connect via Skype on November 13th and 14th. They’ll share stories about their culture and their environments, play games, take virtual field trips and, most importantly, discover that we are more similar than different around the world.

Participating classrooms and Skype in the Classroom partners will travel an estimated 14 million virtual miles from over 90 countries through these live Skype experiences. And with every 400 virtual miles traveled, participating classrooms will help support up to 35,000 children in need with an education in WE Villages. That’s one of the great things about this event: while it opens the hearts and minds of students, it also opens the potential for many more children to receive an education.

It’s not too late to register. You can find a classroom or guest expert to connect with and many resources and activity plans to integrate Skype-a-Thon into your curriculum.

We hope you’ll join this global learning community on their 48-hour journey around the world to leave an impact on the next generation of global citizens. Please share all the fun with @SkypeClassroom and #skypeathon and #MicrosoftEDU.

IBM business partners mull benefits, risks of Red Hat buyout

IBM business partners have begun recalibrating strategies in the wake of the vendor’s announcement that it would acquire open source software vendor Red Hat.

IBM, which plans to purchase Red Hat for $34 billion, sparked a wildfire of questions this week concerning the fate of Red Hat’s roadmap and commitment to open source culture under Big Blue. While IBM stated that Red Hat would operate independently within its hybrid cloud business unit, and retain its multi-cloud alliances with providers such as AWS, Microsoft and Google, overlaps in the vendors’ portfolios and the nuances around integrating the companies have yet to be fleshed out. Despite the uncertainties, IBM business partners revealed they are optimistic and recognize the major boost the buyout would give Big Blue in the hybrid cloud market.

“IBM did some analysis, and they believe 80% of … the enterprise workloads have not moved to any type of cloud platform for many reasons,” said Charles Fullwood, senior director of software sales engineering at Force 3, a solution provider and IBM and Red Hat partner based in Crofton, Md. “IBM also believes that the hybrid cloud market is about a $1 trillion market by the year 2020” and the Linux operating system “is the dominant platform across the entire cloud marketplace.”

Todd Matters, co-founder and chief architect of RackWare, a cloud migration platform provider based in Fremont, Calif., agreed. “IBM’s growth has stalled in recent years, but this acquisition could help jump-start the company’s growth by giving IBM a chunk of the burgeoning hybrid cloud market, driven by enterprises that are switching to multi-cloud and hybrid cloud strategies,” he said in an email.

Broaden opportunities for IBM business partners

Fullwood said the IBM-Red Hat combo presents a “tremendous opportunity” in the federal space, which Force 3 targets, and beyond.

“The appeal of Red Hat is very broad,” he said, adding that the acquisition would benefit Force 3’s federal cloud migration offering, dubbed ‘Bridge to the Cloud,’ due in part to customer interest in the OpenShift container application platform.

Tim Beerman, CTO of Ensono, a hybrid IT provider and IBM and Red Hat partner headquartered in Downers Grove, Ill., said he expects IBM will make significant investment to speed up Red Hat’s research and development and technology roadmaps.

“We have large relationships with both [vendors] across the entire hybrid IT spectrum. I think where IBM takes the Red Hat capabilities and brings some of those capabilities into other IBM services, that will be interesting,” Beerman said. “I am really interested to see the investment IBM is going to presumably pour into Red Hat to move IBM’s hybrid story faster but also to move Red Hat’s capabilities further along quickly, too, so we can leverage those.”

Maintaining Red Hat’s independence

If the Red Hat acquisition is completed, IBM business partners acknowledged the risk of IBM compromising Red Hat’s open source ethos.

I don’t see major risk for IBM other than ensuring that they don’t hinder what Red Hat has brought to them.
Tim Beerman, CTO, Ensono

Fullwood said he doesn’t expect IBM to interfere with Red Hat’s culture or strategy. Citing previous acquisitions of Lotus, Rational and Tivoli, Fullwood noted that while the vendor typically integrates acquisitions into “the IBM machine,” assimilating Red Hat like previous buyouts would “be a big culture shock to Red Hat employees” and could create problems.

“If the two companies collaborate … and try to work together, it can be a very powerful acquisition,” he said.

Beerman, meanwhile, equated the buyout to EMC’s acquisition of VMware. “It is like a great acquisition, but VMware still had its own independence. They still had a lot of their partnerships [with EMC’s competitors].” He predicted IBM will take a similar approach: integrate seamlessly with Red Hat’s products while maintaining the company’s vendor-agnostic approach to the market.

“I don’t see major risk for IBM other than ensuring that they don’t hinder what Red Hat has brought to them,” Beerman said.

For Sale – Acer Predator XB271H Gaming Monitor 170hz G-sync 1ms

Amazing 27″ monitor for sale.

This thing is an absolute beast, however I’m upgrading to 4k and need to sell.

It’s really hard for me to part with this monitor as it’s been the central part of my build.

Collection only or if you pay for courier I will get it delivered.

Price and currency: 400
Delivery: Delivery cost is not included
Payment method: Bank Transfer or Paypal Gift
Location: Nottingham
Advertised elsewhere?: Not advertised elsewhere
Prefer goods collected?: I prefer the goods to be collected
