Category Archives: Microsoft Windows

Microsoft.Jet.OLEDB.4.0 provider is not registered on the local machine

I’ve come across this error a couple of times in the past few weeks when migrating old ASP.NET websites to new web servers, so I’m popping it into the blog as an aide-memoire for myself and in case it’s useful for others.

The error message below (“Server Error in … Application”, “The Microsoft.Jet.OLEDB.4.0 provider is not registered on the local machine”) pops up when trying to open a page which uses the database (in this case a Microsoft Access DB).

The fix is to enable 32-bit applications for the relevant application pool using Internet Information Services (IIS) Manager. The Jet drivers were never released in 64-bit form, so a 64-bit worker process can’t load them, and by default IIS 8.5 (Server 2012 R2) runs its application pools with 32-bit applications disabled.

  1. Open IIS Manager.
  2. Navigate to the Application Pools node underneath the web server.
  3. Select the app pool in question. If in doubt, look at the “Applications” column- if only one pool has any applications in it then that’s the one you want 🙂
  4. In the Actions pane on the right-hand side, click “Advanced Settings”.
  5. In the “Advanced Settings” dialog set the value of “Enable 32-Bit Applications” to True and click OK.
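
If you have several servers or pools to fix, the same setting can be flipped from PowerShell. This is a minimal sketch using the WebAdministration module that ships with the IIS management tools- the pool name "MyAppPool" is a placeholder for your own:

#Load the IIS administration module (included with the IIS management tools)
Import-Module WebAdministration
#Set "Enable 32-Bit Applications" to True for the chosen pool ("MyAppPool" is a placeholder)
Set-ItemProperty -Path "IIS:\AppPools\MyAppPool" -Name "enable32BitAppOnWin64" -Value $true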

OneDrive, Placeholders, and shared PCs.

OneDrive now with Files On Demand

At their annual Build conference, Microsoft announced that OneDrive is getting a new feature called “Files On Demand”- basically a replacement for the placeholders feature that was present in Windows 8.1’s OneDrive client. The official Office blog goes into more detail about the new features, and there’s a detailed write-up by Paul Thurrott which also covers the history of OneDrive placeholders, but I’d like to discuss the advantages for the education vertical- in particular student PC labs.

Microsoft kindly offer OneDrive to university students for basically nothing, so it sounds like an ideal replacement for traditional on-premises network file shares. Rather than the IT department struggling to provide 50 or 100 GB of space per student from their budget, they could just point students at the 1 TB of storage Microsoft provides for free.

Sync Good

A sink. Not this kind of sync.

With a single regular user and enough local hard disk space, a sync client without placeholders is fine. All the user’s files are synced to the local disk and available instantly whenever they are required. The selective sync in the current Windows 10 client helps on devices with smaller disks, but is still only really beneficial on a PC with a single regular user.

Sync Bad

On students’ personal devices this works great- we’re back at a 1 user:1 device ratio. However, in a student PC lab environment there are potentially hundreds of desktops, and each of tens of thousands of users could log into any one at any time- a x,000:1 user:device ratio. Students don’t want to log in to a machine at the start of a class and then wait whilst half a terabyte of data they don’t need syncs before the document they do need appears. Additionally, IT don’t want to have to tidy up all this synced data after every user logs off.

Student Computer Labs

It’s technically possible (although it can be a little fiddly depending on your infrastructure) to map your OneDrive to a drive letter using WebDAV and then access it as you would a traditional “Home” drive, but this is unsupported by Microsoft. There are third-party solutions that will map the drive- basically providing a front end and a support contract around this- but they’re often costly and may require infrastructure changes.
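
For illustration only, here’s a hedged sketch of that unsupported approach- it assumes a hypothetical Office 365 tenancy “contoso” with a user “jane”, and requires the WebClient service to be running plus an already-authenticated session:

#Hypothetical example- substitute your own tenant and username
#Requires the WebClient service and an authenticated Office 365 session
net use O: "https://contoso-my.sharepoint.com/personal/jane_contoso_ac_uk/Documents" /persistent:yes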

Placeholders FTW

Placeholders (or “Files On Demand”) are the ideal solution here. The student sits down at the shared lab machine and all their files are listed. They open the file of their choice and there’s an invisible, seamless download in the background. When they save the file it’s synced back to the cloud. The user is happy as they no longer have to wait for all their files to sync before they can work, and can take advantage of the large capacity (and the sharing facilities too). IT are happy because they don’t have to fund (and support, and maintain) as much storage.

I know many IT Professionals working in Higher Education will be looking forward to this release in the autumn.

100,000 good reasons to virtualise those apps

A few years ago the IT department I work for set about revolutionising its software delivery methods using application virtualisation. As a university we were faced with the challenge of providing all applications to staff and students on any PC- fighting the situation where “Package A is only available in the Engineering labs, Package B is only available in the Library” and so on.
The legacy way of dealing with this was to provide ever-more-massive disk images, but this meant desktop computers required larger and larger hard disks and faced longer and longer rebuild times. Added to this was the problem that the wide application portfolio would go out of date quicker and quicker, as individual apps couldn’t be updated independently of that master image.
Using systems such as SCCM Software Centre to deploy apps on request to a slimmer Gold image was great in the traditional office setting where each computer has one user and can be set up with their apps. But imagine a student turning up to a lesson in a computer lab and having to wait 30 minutes or more whilst the package they need for that lesson is installed. An hour later a different student on a different course sits down at the same PC, needs different software, and so also has to wait. Doing a full-fat install in this manner doesn’t work when you have hundreds (or potentially thousands) of different users of a single PC.

Application Virtualisation

Along came application virtualisation- for us in the form of Numecent’s Application Jukebox product provided via Software2 (other suppliers and products are available). This meant that a Windows desktop could be quickly deployed, as the image only contained a core of applications (antivirus, Office, etc.). Additional applications could then be streamed directly from the central repository on demand- with all but the biggest packages the user could be up and running within seconds, rather than waiting for a whole application to be downloaded and installed. Applications could be updated individually and automatically deployed without the need for a rebuild operation. Additionally, just like a traditionally installed application, the virtualised apps work offline- once an app has streamed, a laptop can be unplugged, taken home, and the user can continue to work.

Building the Portal

However, we did find a limitation. We wanted to take this a step beyond the computer-lab environment: we wanted to offer a portal whereby staff could install software on demand (without the security risk of giving them admin rights) and, even better, students could install apps on their personal devices without collecting a CD from the helpdesk. Unfortunately the user-facing web interface which was included with AppJ at the time was very limited- basically just an unsortable, unsearchable list of applications.
A team of us set about building a custom portal- creating a new, friendlier user interface and (after a bit of experimentation and reverse engineering) putting together the back end to use the Jukebox platform to deploy the selected apps.

Screenshot from the Application Portal

Midway through 2013 this new front end went live, and even though it was a soft launch an immediate following developed. Previously, students wanting to BYOD had to visit the helpdesk where they could sign for and collect a CD of an application. They then had to install it manually themselves- child’s play for a computing student, but possibly a bit more of a challenge for someone studying a less technological discipline. Now anyone could log in to the website from their home PC, click on their app of choice, and start using it seconds later. By the end of the year we had reached ten thousand applications deployed from the portal, and had some cake to celebrate.

Today

Fast forward to 2017 and the Application Jukebox platform has developed into CloudPaging, and thankfully customers no longer have to build their own front end thanks to Software2’s S2Hub product.
Our custom portal is still running and we’ve passed a hundred thousand applications installed from the site. That’s potentially 100,000 CD-Rs saved, and presumably quite a few external CD drives that would have been needed to install them on modern laptops. To put that into context, that many CDs weigh about the same as a Stegosaurus!

The Stegosaurus. A legacy measurement of weight.

The portal has developed over time and now also includes links to Mac and Linux installers, and VDI connections to run Windows apps on other platforms- I discussed integrating RemoteApp into a portal on this blog back in 2014- and we’ve even linked managed SCCM deployments into the same front end.

Conclusion

In our environment, with a massive app portfolio supporting subjects from Astrophysics to Zoology and an historically sizable appetite for BYOD, application virtualisation has been incredibly useful in answering the needs of a diverse user base, both on the corporate managed platform and on the constantly changing variety of personal devices.
As a supplier of services we now serve our customers better, and from an internal management perspective we have not only a much better understanding of the applications in use out there, but also better, finer control over licences.
If you are stuck deploying workstations with a legacy Gold image which is getting unwieldy, I’d recommend having a look at application virtualisation.

Checking Encryption Status of Remote Windows Computers

Using the manage-bde command you can check the BitLocker encryption status of both the local Windows computer and remote devices on the local area network. For example, to check the encryption status of the C: drive on the computer “WS12345”, the following command could be used:

manage-bde -status -computername WS12345 C:

and the results might look something like this:

BitLocker Drive Encryption: Configuration Tool version 10.0.14393
Copyright (C) 2013 Microsoft Corporation. All rights reserved.

Computer Name: WS12345

Volume C: [OSDisk]
[OS Volume]

Size:                 237.99 GB
BitLocker Version:    2.0
Conversion Status:    Fully Encrypted
Percentage Encrypted: 100.0%
Encryption Method:    AES 256 with Diffuser
Protection Status:    Protection On
Lock Status:          Unlocked
Identification Field: None
Key Protectors:
    Numerical Password
    TPM

Expanding on this, we could wrap some PowerShell around the command to read in a list of hostnames from a text file and report on the encryption status of each.

Firstly we need to format the output of manage-bde to show only the value of the “Conversion Status” field. PowerShell’s string manipulation comes in handy here- we can locate the “Conversion Status” line, check that it is present (if the computer is not on the network, or access is denied, the manage-bde command will not return a status), and then trim the line back so we are left with just the value of the field. For example:

#Check the Encryption Status of the C: drive, filter to the Conversion Status line
$EncryptionStatus=(manage-bde -status -computername "$hostname" C: | Where-Object {$_ -match 'Conversion Status'})
#Check a status was returned.
if ($EncryptionStatus)
{
  #Status was returned, tidy up the formatting to leave just the value
  $EncryptionStatus=$EncryptionStatus.Split(":")[1].Trim()
}
else
{
  #Status was not returned. Explain why in the output
  $EncryptionStatus="Not Found On Network (or access denied)"
}

Once this is working, it’s just a case of reading in the text file using the Get-Content cmdlet and outputting a result for each computer. The full code (Get-EncryptionStatus.ps1) I used is available for downloading and/or improving on GitHub here- https://github.com/isjwuk/get-encryptionstatus
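
As a rough illustration of how those pieces fit together (the real script is in the repository above- the "hostnames.txt" filename is just an assumption for this sketch):

#Read a list of hostnames, one per line, and report the BitLocker status of each
foreach ($hostname in (Get-Content -Path ".\hostnames.txt"))
{
  #Check the Encryption Status of the C: drive, filter to the Conversion Status line
  $EncryptionStatus=(manage-bde -status -computername "$hostname" C: | Where-Object {$_ -match 'Conversion Status'})
  if ($EncryptionStatus)
  {
    $EncryptionStatus=$EncryptionStatus.Split(":")[1].Trim()
  }
  else
  {
    $EncryptionStatus="Not Found On Network (or access denied)"
  }
  #Emit an object per computer so the results can be piped to Export-Csv and the like
  [PSCustomObject]@{ComputerName=$hostname; ConversionStatus=$EncryptionStatus}
}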

The Home Lab

Automated Deployment in the HomeLab- Part 1

I’m commencing a project with my HomeLab- I’m going to build a system whereby I can produce custom mini-lab environments by means of a script. There are off-the-shelf solutions to do this (see AutoLab as an example) but by building this myself I get something tailored exactly to my needs (and available resources) and hopefully learn something along the way- which is what the HomeLab is all about, really. This is the first post in what should develop into a series showing how I work through the process of creating my automation system.

The Aims

a.k.a. what I want to achieve

  • The ability to run a script to deploy a predefined lab environment. For example, running “Build-Project-Lab-One.ps1” makes three Windows Server VMs, connected on a private switch, with one running the AD/DNS/DHCP roles, one acting as a gateway, and one ready for whatever experiment I throw at it
  • The ability to quickly and easily modify a copy of that script to produce a lab with a different configuration. Then I can have a script that builds me a WDS platform, or another that produces a SCOM test environment. I can use this library to quickly rebuild, or build a copy of, any of my environments within the HomeLab
  • This script should also create a second script for decommissioning/destroying the lab environment when I’ve finished.
  • Whilst perhaps not meeting full “production” standards, the scripts should be at least in a state whereby I can post them online and not have to hide in a cave for the next decade whilst they get laughed at.

The Resources

a.k.a. what I have to play with

  • One Intel NUC host running vSphere ESXi 6, providing some compute, memory, and storage
  • One VMUG Advantage Subscription complete with VMware EVAL Experience licensing- this provides VMware vCenter amongst other things.
  • One Microsoft DreamSpark Subscription and Microsoft Evaluation Licensing (see “Microsoft Licensing” on the Open Homelab Project for details on how to get these)
  • Me with my knowledge of Windows, vSphere, PowerShell, PowerCLI, and how to Google for stuff.
  • The community who not only kindly put content up on the internet for me to Google for but also are there for me to tweet, slack, and (shock, horror) talk to when I encounter problems or lose direction.

The Plan

a.k.a. How I’m hoping to achieve those aims with those resources.

To do all this I’m starting out by preparing a vSphere template of Windows Server 2012 R2. I can deploy this- with customisations- using PowerCLI to form the building blocks of the lab environment. Once I have Windows VMs deployed I need to be able to configure them- this is where PowerShell remoting comes in handy, letting me deploy roles and features and do some basic configuration. I’ll put together a PowerShell function to do all that, which can then be re-used in the script to deploy multiple VMs with different configurations. For example:

CreateVM "Server1" $TemplateName $CustomizationSpec "Web-Server"
CreateVM "Server2" $TemplateName $CustomizationSpec "WDS"

I’ll use PowerCLI to deploy a private network within the hypervisor and connect the VMs to it. This method will also be used to configure the connections to the gateway- one NIC pointing at the private switch and one at the internet-facing vSwitch already in place.
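
In PowerCLI that looks something like the sketch below- "LabPrivate" and "LabPrivate-PG" are placeholder names for the new vSwitch and port group:

#Create a standard vSwitch with no physical uplinks- i.e. a private, internal-only network
$LabSwitch = New-VirtualSwitch -VMHost (Get-VMHost) -Name "LabPrivate"
#Add a port group for the lab VMs to connect to
$LabPortGroup = New-VirtualPortGroup -VirtualSwitch $LabSwitch -Name "LabPrivate-PG"
#Move a VM network adapter onto the private network
Get-NetworkAdapter -VM "Server1" | Set-NetworkAdapter -Portgroup $LabPortGroup -Confirm:$false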

Some more in-depth PowerShell (possibly also arranged into reusable functions) will handle the in-depth configuration of the roles. For example, when the script completes I want Active Directory to be up and running, the gateway providing an internet connection to the VMs, and the VMs getting IP addresses from the lab DHCP server and joined to the domain. Basically I want to be able to run the script, make a brew, and come back to find a fully configured system ready to go.
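
To give a flavour of that role configuration, here’s a hedged sketch of promoting the first server to a domain controller- the "lab.local" domain name and the password are placeholders, and in reality the password would come from somewhere more sensible than the script itself:

#Install the AD DS role and promote the VM to the first DC in a new forest
#"lab.local" and the password below are placeholders for this sketch
Invoke-Command -ComputerName "Server1" -ScriptBlock {
  Install-WindowsFeature -Name AD-Domain-Services -IncludeManagementTools
  Install-ADDSForest -DomainName "lab.local" -SafeModeAdministratorPassword (ConvertTo-SecureString "Placeholder-Pa55!" -AsPlainText -Force) -InstallDns -Force
}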

Coming Soon- Part 2, full of scripting goodness.