
Windows troubleshooting tools to improve VM performance

Whether virtualized workloads stay on premises or move to the cloud, support for those VMs remains in the data center with the administrator.

When virtualized workloads don’t perform as expected, admins need to roll up their sleeves and break out the Windows troubleshooting tools. Windows has always had some level of built-in diagnostic ability, but it only goes so deep.

Admins need to stay on top of ways to analyze ailing VMs, but they also need to find ways to trim deployments to control resource use and costs for cloud workloads.

VM Fleet adds stress to your storage

VM Fleet tests the performance of your storage infrastructure by simulating virtual workloads. VM Fleet uses PowerShell to create a collection of VMs and run a stress test against the allocated storage.

This process verifies that your storage meets expectations before you deploy VMs to production. VM Fleet doesn’t help troubleshoot issues, but it confirms existing performance specifications before you ramp up your infrastructure. After the VMs are in place, you can use VM Fleet to run controlled tests of storage auto-tiering and other technologies designed to adjust workloads during increased storage I/O.
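VM Fleet itself ships as a set of PowerShell scripts, so the following is only a rough, single-host Python analogue of the idea it implements: spawn several synthetic I/O workers in parallel against the storage under test and collect their results. The worker count, target path and DiskSpd flags are illustrative assumptions, not VM Fleet’s actual defaults, and the sketch assumes Microsoft’s diskspd.exe is on the PATH.

```python
# Simplified, single-host analogue of what VM Fleet orchestrates across many VMs:
# run several synthetic I/O workers in parallel against the storage under test.
# Assumes Microsoft's DiskSpd (diskspd.exe) is on PATH; worker count, file sizes
# and DiskSpd flags below are illustrative, not VM Fleet's actual defaults.
import subprocess
from concurrent.futures import ThreadPoolExecutor

WORKERS = 4          # number of parallel stress workers (VM Fleet uses one per VM)
TARGET = r"C:\ClusterStorage\Volume1\stress"  # hypothetical path on the storage under test

def run_worker(i: int) -> str:
    target_file = rf"{TARGET}\worker{i}.dat"
    cmd = [
        "diskspd.exe",
        "-c4G",    # create a 4 GB test file
        "-d60",    # run for 60 seconds
        "-r",      # random I/O
        "-w30",    # 30% writes, 70% reads
        "-t2",     # 2 threads per file
        "-o16",    # 16 outstanding I/Os per thread
        "-b8K",    # 8 KB block size
        "-Sh",     # disable software and hardware caching
        "-L",      # collect latency statistics
        target_file,
    ]
    result = subprocess.run(cmd, capture_output=True, text=True)
    return f"worker {i}:\n{result.stdout}"

if __name__ == "__main__":
    with ThreadPoolExecutor(max_workers=WORKERS) as pool:
        for report in pool.map(run_worker, range(WORKERS)):
            print(report)
```

In a real VM Fleet run, each worker is a VM on the cluster rather than a local process, which is what makes the test representative of virtualized workloads.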


Sysinternals utilities offer deeper insights

Two Windows troubleshooting tools from the Microsoft Sysinternals collection, Process Explorer and Process Monitor, should be staples for any Windows admin.

Process Explorer gives you in-depth detail, including the dynamic link libraries and memory-mapped files loaded by a process. Process Explorer also lets you dig deep to uncover issues rather than throwing more resources at an application and, thus, masking the underlying problem.

Process Explorer
Process Explorer gives administrators the kind of technical deep dive into Windows processes that Task Manager can’t provide.

Process Monitor captures real-time data on process activity, registry changes and file system changes on Windows systems. It also provides detailed information on process trees.

Administrators can use Process Monitor’s search and filtering functions to focus on particular events that occur over a longer period of time.

VMMap and RAMMap detail the memory landscape

Another Sysinternals tool, VMMap, shows what types of virtual memory are assigned to a process and its committed memory, which is the virtual memory reserved by the operating system. The tool presents a visual breakdown of where allocated memory is used.

VMMap measurements
VMMap shows how the operating system maps physical memory and uses the virtual address space, which helps administrators analyze how applications work with memory resources.

VMMap doesn’t check the hypervisor layer, but it does detail the virtual memory the OS provides. Combined with other tools that view the hypervisor, VMMap gives a complete picture of an application’s memory usage.

Another tool, RAMMap, is similar to VMMap, but it works at the operating system level rather than the process level. Administrators can use both tools to get a complete picture of how applications obtain and use memory.

BgInfo puts pertinent information on display

BgInfo is a small Sysinternals utility that displays selected system information on the desktop, such as the machine name, IP address, patch version and storage information.

While it’s not difficult to find these settings, making them more visible helps when you log into multiple VMs in a short amount of time. It also helps you avoid installing software on the wrong VM, or even rebooting the wrong one.

The top Exchange and Office 365 tutorials of 2017

Even in the era of Slack and Skype, email remains the communication linchpin for business. But where companies host email is changing.

In July 2017, Microsoft said, for the first time, its cloud-based Office 365 collaboration platform brought in more revenue than traditional Office licensing. In October 2017, Microsoft said it had 120 million commercial subscribers using its cloud service.

This trend toward the cloud is reflected by the heavy presence of Office 365 tutorials in this compilation of the most popular tips of 2017 on SearchExchange. More businesses are interested in moving from a legacy on-premises server system to the cloud — or at least a new version of Exchange.

The following top-rated Office 365 tutorials range from why a business would use an Office 365 hybrid setup to why a backup policy is essential in Office 365.

5. Don’t wait to make an Office 365 backup policy

Microsoft does not have a built-in backup offering for Office 365, so admins have to create a policy to make sure the business doesn’t lose its data.

Admins should work down a checklist to ensure email is protected if problems arise:

  • Create specific plans for retention and archives.
  • See if there are regulations for data retention.
  • Test backup procedures with third-party Office 365 backup providers, such as Veeam and Backupify.
  • Add alerts for Office 365 backups.

4. What it takes to convert distribution groups into Office 365 Groups

Before the business moves from its on-premises email system to Office 365, admins must look at what’s involved to turn distribution groups into Office 365 Groups. The latter is a collaborative service that gives access to shared resources, such as a mailbox, calendar, document library, team site and planner.

Microsoft provides conversion scripts to ease the switch, but they might not work in every instance. Many of our Office 365 tutorials cover these types of migration issues. This tip explains some of the other obstacles administrators encounter with Office 365 Groups and ways around them.

3. Considerations before a switch to Office 365

While Office 365 has the perk of lifting some work off IT’s shoulders, it does have some downsides. A move to the cloud means the business will lose some control over the service. For example, if Office 365 goes down, there isn’t much an admin can do if it’s a problem on Microsoft’s end.

Businesses also need to keep a careful eye on exactly what they need from licensing, or they could end up paying far more than they should. And while it’s tempting to immediately adopt every new feature that rolls out of Redmond, Wash., the organization should plan training for both end users and the IT department to be sure the company gets the most out of the platform.

2. When a hybrid deployment is the right choice

A clean break from a legacy on-premises version of Exchange Server to the cloud sounds ideal, but it’s not always possible due to regulations and technical issues. In those instances, a hybrid deployment can offer some benefits of the cloud, while some mailboxes remain in the data center. Many of our Office 365 tutorials assist businesses that require a hybrid model to contend with certain requirements, such as the need to keep certain applications on premises.

1. A closer look at Exchange 2016 hardware

While Microsoft gives hardware requirements for Exchange Server 2016, its guidelines don’t always mesh with reality. For example, Microsoft says companies can install Exchange Server 2016 on a 30 GB system partition. But to support the OS and updates, businesses need at least 100 GB for the system partition.

A change from an older version of Exchange to Exchange 2016 might ease the burden on the storage system, but increase demands on the CPU. This tip explains some of the adjustments that might be required before an upgrade.

Oracle Universal Credits another shot directed at AWS

Oracle continues to do everything it can to compete with Amazon Web Services, but the question remains whether IT pros will take the bait.

This week, the company introduced Oracle Universal Credits for cloud consumption to allow customers under one contract to spend on a pay-as-you-go, monthly or yearly basis. Oracle claims its service-level agreement (SLA) will guarantee Oracle databases can run on Oracle Cloud for 50% less than on Amazon Web Services (AWS). Oracle Universal Credits can be used for infrastructure as a service and platform as a service (PaaS) across Oracle Cloud services, such as Oracle Cloud and Oracle Cloud at Customer. Customers are allowed to switch across services at any time.

Oracle also introduced a “Bring Your Own License” program for customers to use their own existing licenses for PaaS.

Oracle Universal Credits is something that piques the interest of at least one current Oracle Cloud customer.

“Universal Credits are a great program for budgeting and controlling spend, while still providing great flexibility,” said Nikunj Mehta, founder and CEO of Falkonry Inc., a Sunnyvale, Calif., startup that provides artificial intelligence for operational intelligence. “Oracle’s move toward pricing innovation parallels T-Mobile’s business model disruption and has the real potential of shaking up the industry.”

But it won’t be easy for Oracle to sustain, analysts said.

“The SLA guaranteeing that Oracle databases are cheaper than AWS by 50% is a big commitment,” said Jean Atelsek, analyst with 451 Research. “As AWS continues to cut its prices, it’ll effectively squeeze Oracle to cut its pricing.”

Few would argue that Oracle licensing methods could use some clarification. In February of this year, Oracle effectively doubled its licensing requirements for customers that run its software on other cloud platforms, such as AWS and Azure.

While Oracle continues to make strategic moves aimed at AWS, the licensing model could also be a way for the company to bring its on-premises and cloud businesses closer together and ease the transition from legacy products to the cloud.

“It’s about time Oracle figured out those of us with on-premises licenses were not going to just abandon our perpetual license and jump to the cloud,” said Brian Peasland, an Oracle database administrator and TechTarget contributor, about Oracle Universal Credits. “Oracle licensing is not cheap, and it is a long-term investment. It’s nice to know we can leverage that investment to help move to the cloud.” 

Oracle autonomous database details emerge

The company has also provided more details on its autonomous database, which also aims to lower overall cloud costs. The automated database will be based on machine learning, and Oracle said it guarantees 99.995% uptime, which amounts to less than 30 minutes of planned downtime per year.
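For context, the arithmetic behind that figure is simple: 99.995% availability leaves 0.005% of the year for downtime. A quick sketch:

```python
# Quick sanity check of the 99.995% uptime figure: allowed downtime per year.
MINUTES_PER_YEAR = 365.25 * 24 * 60   # ~525,960 minutes
availability = 0.99995
allowed_downtime = (1 - availability) * MINUTES_PER_YEAR
print(f"Allowed downtime at 99.995%: {allowed_downtime:.1f} minutes per year")
# -> roughly 26.3 minutes, i.e. under the 30 minutes Oracle cites
```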

Automated operations for databases are an important part of Oracle’s effort to become a full-fledged cloud provider, both for customers looking to move work to the cloud and for providers such as Oracle that must efficiently take over day-to-day administration work from customers.

Success on the cloud is crucial to Oracle. It is, in the estimation of IDC analyst Carl Olofson, an issue of “survival.”

“The big picture for Oracle is to lead the customer to the cloud,” Olofson said, while cautioning that the move to cloud for most organizations is “still in its early days.”

The biggest challenge for Oracle going forward with its new licensing scheme is to convince customers it truly is simplifying how they pay.

“[Oracle] is notorious for complex licensing structures. … A lot of customers [and ex-customers] are very wary when it comes to Oracle licensing,” said Bob Sheldon, an analyst and TechTarget contributor. “Oracle could have an uphill battle convincing them that the company can be trusted.”

A notable omission from Oracle’s announcements this week involves hybrid deployments, said Sheldon, who noted that the new Oracle licensing structure could, on paper, be a “boon to hybrid implementations.”

At next month’s Oracle OpenWorld event, the company is expected to provide details on Oracle 18, named for 2018, rather than Oracle 12.2.0.3. As recently reported, Oracle will change the cadence of its release cycles and retire a version-numbering scheme that had become a bit creaky.

In July, Oracle added its SaaS offering to its Oracle Cloud at Customer portfolio. Built on Oracle Database Exadata Cloud, Oracle Integration Cloud, Identity Services and more, the package is seen as a way to prepare Oracle customers for the move to the cloud.

SearchOracle Senior News Writer Jack Vaughn also contributed to this report.

Microsoft’s self-soaring sailplane improves IoT, digital assistants

A machine learning project to build an autonomous sailplane that remains aloft on thermal currents is impressive enough. But the work conducted by Microsoft researchers Andrey Kolobov and Iain Guilliard will also improve the decision making and trustworthiness of IoT devices, personal assistants and autonomous cars.

The weight and space constraints that the sailplane’s airframe imposes on computational resources make the work relevant to many new developments in ubiquitous computing. The autonomous sailplane is controlled by a battery-powered 160MHz Arm Cortex M4 with 256KB of RAM and 60KB of flash, which monitors the sensors, runs the autopilot and drives the servo motors. To this, the researchers added a machine learning model that continuously learns how to ride thermal currents autonomously.


In these early days of platforms like digital assistants, IoT and autonomous vehicles, there are hundreds of open problems that will be distilled into a handful of scientific questions that must be answered before products matching the popular visions can be built. When those scientific questions start to emerge, the platform’s future becomes predictable, maybe not to an exact month, but within a year or three.

IoT following the same path as Web 2.0

The Web 2.0 platform followed a similar course. First talked about in 1999, it developed enough interest for O’Reilly Media and MediaLive to host the first Web 2.0 conference in 2004. But it was not until later in the decade that companies such as Salesforce and Google implemented Web 2.0. This was a decade-long evolution: first a vision, then a collection of open problems distilled into scientific questions that university and industrial researchers answered.

As a technology shifts from research to development, product developers find in researchers’ work the answers to the scientific questions that enable them to build products. On a Web 2.0 timescale, digital assistants, IoT and autonomous vehicles are much closer to 2004 than to the later part of that decade, when enough research had been translated into development that products could be built at scale.

Goals of Microsoft’s sailplane project

At an all-hands meeting, inspired by a 2013 story in the Economist about autonomous sailplanes, the team of Microsoft researchers set the goals of this project to answer two scientific questions: how to build trusted AI systems, and how to architect systems with AI and machine learning as a fundamental design principle. Kolobov said:

“The state of the art in AI development is not at the level where AI agents can reliably act fully autonomously, which is why we do not see many AI systems acting in the physical world with full autonomy. MSR is trying to build AI systems that are robust and can be trusted to act fully on their own, performing better than humans. The implications of this research apply to personal assistants, autonomous cars and IoT.

“We wanted to gain experience in designing systems where AI and machine learning are first-class citizens so we do not have to fundamentally modify the architecture of the systems post hoc.”

Kolobov and Guilliard are part of a multidisciplinary team with complementary skills. Kolobov has applied AI and machine learning to commercial products such as Windows and Bing. Guilliard, after more than a decade working on control systems for the Airbus A350 and the A-4 Skyhawk, is a computer science Ph.D. candidate at the Australian National University and an intern at Microsoft. An odd pairing, perhaps, unless the question they are trying to answer is understood.

There are many machine learning problems that do not have ground truth: an accurate, labeled data set for classification and training. Machine learning models are programmed (trained) with data, not lines of code; the models are taught to predict a correct answer from that data. If a model is taught to recognize cats, labeled images of cats are the ground truth. This method, called supervised learning, has two stages: training, typically on beefy GPUs, in which the model learns from images of cats and not-cats; and inference, in which it predicts the right answer, cat or not cat.
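As a rough illustration of those two stages, here is a minimal, self-contained sketch that “trains” a toy nearest-centroid classifier on labeled feature vectors (the ground truth) and then runs inference on new examples. The 2-D vectors stand in for image features; a real cat detector would train a neural network on GPUs, but the train-then-infer split is the same.

```python
# Minimal sketch of the two supervised-learning stages described above:
# training on labeled examples (the "ground truth"), then inference on new data.
import numpy as np

# Training stage: labeled examples (ground truth).
X_train = np.array([[0.9, 0.8], [0.8, 0.9], [0.1, 0.2], [0.2, 0.1]])
y_train = np.array(["cat", "cat", "not cat", "not cat"])

# "Train" a nearest-centroid model: remember the mean feature vector per class.
centroids = {label: X_train[y_train == label].mean(axis=0)
             for label in np.unique(y_train)}

def predict(x):
    # Inference stage: pick the class whose centroid is closest to the new example.
    return min(centroids, key=lambda label: np.linalg.norm(x - centroids[label]))

print(predict(np.array([0.85, 0.75])))  # -> "cat"
print(predict(np.array([0.15, 0.25])))  # -> "not cat"
```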

Because the physical world with which the machine learning model interacts is unpredictable, ground truth cannot easily be simulated with computers, and labeled training data does not exist. How can every defensive maneuver of an autonomous vehicle reacting to another vehicle be predicted? Or every response of an elder care robot to a patient in distress?

These unstable systems cannot be predicted; the models need to learn by interacting with chaotic, unstable systems. But highways full of vehicles and elderly patients in distress are unsuitable places for an AI to learn to operate reliably and safely. A sailplane, however, is a suitable one.

How the sailplane works

Ground truth for an autonomous sailplane is limited because it is impossible to measure turbulent conditions: where a rising thermal column of air begins and ends, and what is happening inside it. A laptop on the ground performs high-level planning for the sailplane, using data from flights by manned sailplanes along with the local terrain and wind conditions to predict thermals; those predictions are sent to the autonomous sailplane via telemetry.

The sailplane uses an onboard Bayesian reinforcement learning algorithm to make decisions from the observations it gets from its sensors. Bayesian reinforcement learning was chosen because of the model’s ability to plan its actions so that it learns and exploits knowledge in an optimal manner. The approach also makes it easier to understand which decisions the agent is making and why.
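The article does not publish the researchers’ algorithm, so the following is only a minimal sketch of the Bayesian idea under simple assumptions: the agent keeps a Gaussian belief about the lift each candidate region provides, updates that belief from noisy variometer readings, and chooses where to circle by sampling from its beliefs so that uncertainty itself drives exploration. The regions, lift values and noise levels are hypothetical.

```python
# Minimal sketch of the Bayesian idea described above (not the researchers' actual
# algorithm): keep a Gaussian belief over each candidate region's lift (m/s),
# update it from noisy variometer readings, and pick regions by sampling from the
# beliefs so that uncertainty itself drives exploration (Thompson sampling).
import random

class LiftBelief:
    def __init__(self, prior_mean=0.0, prior_var=4.0, sensor_var=1.0):
        self.mean, self.var = prior_mean, prior_var   # belief about true lift
        self.sensor_var = sensor_var                  # variometer noise

    def update(self, measured_lift):
        # Standard Gaussian conjugate update (Kalman-style, scalar case).
        k = self.var / (self.var + self.sensor_var)
        self.mean += k * (measured_lift - self.mean)
        self.var *= (1 - k)

    def sample(self):
        return random.gauss(self.mean, self.var ** 0.5)

# Two hypothetical regions the planner could circle in.
beliefs = {"region_A": LiftBelief(), "region_B": LiftBelief()}
true_lift = {"region_A": 1.5, "region_B": 0.3}       # unknown to the agent

for step in range(50):
    # Choose the region whose sampled lift is highest (explore vs. exploit).
    choice = max(beliefs, key=lambda r: beliefs[r].sample())
    reading = random.gauss(true_lift[choice], 1.0)    # noisy variometer reading
    beliefs[choice].update(reading)

for region, b in beliefs.items():
    print(f"{region}: estimated lift {b.mean:.2f} m/s (var {b.var:.3f})")
```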

The model uses Monte Carlo tree search to choose which detected areas of lift to exploit to gain altitude, then instructs the autopilot, whose servo motors adjust the elevators, the horizontal control surfaces on the tail that make an airplane climb or descend, to keep the sailplane soaring. Monte Carlo tree search has been applied to win games such as Go, in Google’s AlphaGo project, and poker. It gradually builds up a partial game tree of moves, then uses strategies that balance exploring new decision branches against exploiting the most promising ones.
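To make the explore-versus-exploit mechanics concrete, here is a minimal Monte Carlo tree search sketch on a toy soaring problem: the state is which lift region the glider is over plus its spare altitude, and the actions are to circle in place or move to a neighboring region. The regions, lift values and costs are invented for illustration; the real planner works over autopilot commands, not this toy model.

```python
# Minimal Monte Carlo tree search sketch on a toy soaring problem (illustrative only).
# MCTS builds a partial tree, balancing exploration of untried branches against
# exploitation of promising ones via the UCB1 rule.
import math, random

LIFT = [0.2, 1.5, 0.5]          # average lift (m/s) in three hypothetical regions
ACTIONS = ["circle", "left", "right"]
HORIZON = 6                      # planning depth in decision steps

def step(state, action):
    region, altitude = state
    if action == "circle":
        altitude += random.gauss(LIFT[region], 0.3) - 0.7   # lift minus sink
    else:
        region = max(0, region - 1) if action == "left" else min(len(LIFT) - 1, region + 1)
        altitude -= 1.0                                     # transit costs altitude
    return (region, altitude)

def rollout(state, depth):
    # Random playout to the horizon; reward is the final altitude.
    while depth < HORIZON:
        state = step(state, random.choice(ACTIONS))
        depth += 1
    return state[1]

class Node:
    def __init__(self):
        self.visits = 0
        self.children = {}       # action -> [child Node, total value, visits]

def ucb1(node, action, c=1.4):
    _, total, visits = node.children[action]
    return total / visits + c * math.sqrt(math.log(node.visits + 1) / visits)

def search(node, state, depth):
    if depth == HORIZON:
        return state[1]
    untried = [a for a in ACTIONS if a not in node.children]
    if untried:                  # expansion plus random rollout
        action = random.choice(untried)
        value = rollout(step(state, action), depth + 1)
        node.children[action] = [Node(), value, 1]
    else:                        # selection by UCB1, then recurse
        action = max(node.children, key=lambda a: ucb1(node, a))
        child, total, visits = node.children[action]
        value = search(child, step(state, action), depth + 1)
        node.children[action] = [child, total + value, visits + 1]
    node.visits += 1
    return value

root, start = Node(), (0, 5.0)   # start over region 0 with 5 m of spare altitude
for _ in range(2000):
    search(root, start, 0)
best = max(root.children, key=lambda a: root.children[a][2])   # most-visited action
print("best first action:", best)
```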

Running Bayesian reinforcement learning algorithms on a sailplane poses significant challenges compared to Go and poker. Remember the computational and battery constraints? The sailplane is controlled by the real-time, open-source ArduPilot software running on open-source Pixhawk Arm Cortex M4 hardware. Execution of the Bayesian reinforcement learning algorithm is interleaved with the real-time ArduPilot loop in short intervals of less than 100ms, so that control of the sailplane’s sensors and servo motors is maintained and crashes are avoided.
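The interleaving itself can be sketched as an “anytime” planning loop: give the planner a hard time budget well under 100ms each control cycle, and hand whatever its current best answer is to the autopilot when the budget expires. The budget, planner and autopilot below are stand-ins, not the actual ArduPilot integration.

```python
# Sketch of the interleaving idea described above (assumptions, not the actual
# ArduPilot integration): each control cycle, the planner gets a hard time budget
# of well under 100 ms; whatever its best answer is when the budget expires is
# handed to the (stand-in) autopilot, so the real-time loop is never starved.
import time, random

PLANNING_BUDGET_S = 0.05        # 50 ms of planning per cycle (illustrative)

def plan_one_iteration(best_so_far):
    # Stand-in for one MCTS/Bayesian-update iteration; returns an updated estimate.
    candidate = random.uniform(-1, 1)
    return max(best_so_far, candidate, key=abs)

def autopilot_apply(command):
    # Stand-in for sending an elevator/bank command to the autopilot.
    print(f"autopilot command: {command:+.3f}")

for cycle in range(5):          # five control cycles
    deadline = time.monotonic() + PLANNING_BUDGET_S
    best = 0.0
    while time.monotonic() < deadline:     # anytime planning: stop when budget is spent
        best = plan_one_iteration(best)
    autopilot_apply(best)       # control work always runs on schedule
```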

The pairing of Kolobov and Guilliard is a careful match of a machine learning expert with an aeronautical control systems domain expert who is also a computer science Ph.D. candidate. Their pairing isn’t the only clever combination, though. The search for deeper understanding was combined with off-the-shelf sailplane airframes, sensors, servo motors, open-source hardware and an open-source autopilot, so that Kolobov and Guilliard could get right to implementing and tuning the Bayesian reinforcement learning and quickly iterate their designs, improving their results.

During these pioneering days of digital assistants, IoT and autonomous vehicles, product developers will find some of the most relevant answers to scientific questions, answers they can translate into their own products, in fields such as sailplanes that might at first appear orthogonal and unrelated.

It may take another five to 10 years for digital assistants, IoT and autonomous vehicles to become as reliable as humans. Microsoft Research is working on one of the scientific questions that must first be answered before these pioneering product visions reach this point.


Endpoint security threats force Windows to adapt

LAS VEGAS — Enterprise applications and data are increasingly moving to the cloud, but the endpoint remains the biggest security risk.

Ransomware, spear phishing and other emerging endpoint threats often fly under the radar of traditional security tools. And as they grow more sophisticated, they can trick even the most vigilant and well-educated user into clicking a malicious link or opening a malware-laden attachment.

In response to these endpoint security threats, Microsoft in Windows 10 has embraced the concept of micro-virtualization, which isolates applications and other system processes from each other. That way, if one process falls victim to an attack, it doesn’t affect the rest of the PC or the corporate network at large.

Microsoft also partners with Bromium, which developed micro-virtualization, to extend the technology’s capabilities further into Windows. In an interview at VMworld, Bromium co-founders Ian Pratt and Simon Crosby discuss that partnership and explain how organizations can protect themselves against emerging endpoint security threats. 

Is the hype around ransomware real?

Ian Pratt: The whole point of ransomware is that it announces its presence and demands money. If you think about it, it’s the easiest kind of thing to detect.

The malware which tries to be stealthy — hides in your machine, steals your intellectual property or credit card data or patient records — typically those kinds of attacks have far more cost to the organization.

It’s really kind of odd that so much of the behaviors are being driven around ransomware. It’s drawing attention away from bigger risks.

What are the major challenges your customers are facing?

Pratt: Windows is their biggest challenge, not because Windows is worse from a security point of view, but because it’s most attacked. That’s where most organizations’ intellectual property lives.

“Blaming users … is ridiculous.”
Ian Pratt, president, Bromium

It’s an impossible problem trying to secure Windows and all the applications. They’re just way too big of an attack surface. [Windows is] pushing 150 million lines of code, much of it written in the 1980s, when security was not what people focused on.

Simon Crosby: Out there on PCs, [organizations are] still doing arcane, silly stuff. A huge amount of the challenge is on legacy PCs.

What have been the effects of your partnership with Microsoft?

Crosby: The core capabilities of micro-virtualization are being adopted into Hyper-V, both on the Windows 10 client but also Windows Server. On the client side, in Windows 10, if you are running an enterprise license and you’re on the right hardware, then a couple of key Windows services move out of the operating system and into micro VMs. In particular, there is a service that manages locally maintained passwords and their hashes on the host. The goal there is to make the Windows kernel and progressively more and more applications protected and distrusted from each other.

How important is it to educate users about phishing and ransomware, compared to addressing these endpoint threats from a technical perspective?

Pratt: Blaming users, or hoping users will spot this stuff, is ridiculous. Some of the spear phishing attacks we’ve seen have been so well-crafted. We saw one, and the domain was a misspelling of Bromium. But if you looked at it, [you wouldn’t immediately notice]. You need to make it so that the user can click with confidence.

How can organizations find the right balance between security and user productivity?

Crosby: Why did [organizations] get more and more permissive on iPhones? Because they were actually pretty good with security. We see a lot of overly reactive stuff. ‘Let’s close everything down.’ That just isn’t the way forward, because ultimately users have to be productive and they’ll find a way around, and that’ll be a security loophole and the bad guy will find a way in again.

For Sale – ASUS UL30A Notebook spares or repair

Working laptop.

One USB port not working; the other two still work.

Battery doesn’t hold a charge.

Body plastic is broken or has small pieces missing in some places.

Perfect and cheap for a repair project.

Can also sell separate parts, like the screen, keyboard, CPU, RAM… just ask.

Comes with charger, box and manuals.

ASUS UL30A Notebook
Core 2 Duo (SU7300) 1.3GHz
3GB RAM
SSD 120GB
13.3 inch TFT
LAN WLAN
Windows 7

Price and currency: 75
Delivery: Delivery cost is included within my country
Payment method: BT, PPG
Location: ARMAGH
Advertised elsewhere?: Advertised elsewhere
Prefer goods collected?: I have no preference
