Category Archives: Opinion

Virtual vs In-person Conferences

In the current pandemic situation (April 2020) a lot of events, both small and large, have had to close their doors and move from in-person to virtual online environments. There’s been a lot of chatter about this on the interwebs, with some people favouring the style of conferencing we have been forced into adopting.

From my perspective I find it hard to see how online meetings can match up to the in-person show. The part of the event where you’re sat quietly listening to a speaker, raising your hand with a question, or asking at the end, is similar between the two. Viewing from home you have a more comfortable chair but, on the flip side, you have to buy your own drinks and snacks. Ultimately, though, you are just watching an online webinar, and the moment the session ends you step out of that breakout straight back into your home life.

Distance-learning like this is great, but it’s just one component of what makes the traditional tech conference such a worthwhile experience. It’s that time when you’re not sat down listening to a presentation or trying out a lab that can really make the difference.

Discussions happen with random people on the show floor, in a queue, at the bar in the evenings, or even at the airport. The social component, even for an introvert, should not be underestimated. I’ve made some great friends, gained unexpected knowledge, and understood things from different viewpoints thanks to tech conferences. It’s also one of the few ways of breaking out of the “bubble” of IT in my organisation and seeing what people do in similar functions in the wider world.


Even at the big events I’ve attended – VMworld, Cisco Live, Microsoft TechEd – I’ve gone in knowing few, or even zero, people but always come away with new contacts, experiences, and friends. I don’t get any of that from the breakout sessions; it all comes from those bits in-between.

Getting out of the office (or, these days, the home office) provides an important sense of separation that’s difficult to replicate without travelling to a conference (even if it’s just down the road). Without that separation it’s hard to avoid being distracted by the day-to-day and to concentrate on learning.

I’d love to be proven wrong. If someone can figure out how to answer this puzzle of doing the bits between and after the sessions well in an online environment I’d be overjoyed, but I’m still waiting for that to happen. Perhaps the London VMUG next week might surprise me.

Hyper-Converged Cynicism

Or “How I’ve come to love my vSAN Ready Nodes”

I’ll admit it, some years ago I was very cynical about Hyper-Converged Infrastructure (HCI). Outside of VDI workloads I couldn’t see how it would fit in my environment – and this was all down to the scaling model.

With the building-block architecture of HCI, storage, compute, and memory all expand in a linear fashion. Adding an extra host to the cluster to expand the storage capacity also increases the available memory and CPU in the pool of resources. But my workloads were varied: one day we might get a new storage-intensive application; the next week it might be one that was memory-intensive. I was used to expanding the storage independently through a SAN and the compute/memory side through the servers, and didn’t want to be either running up against a capacity wall or purchasing unnecessary compute just to cater for storage demands.

This opinion changed when my own HCI journey started in 2017 with the purchase of a VMware vSAN cluster built on Dell Ready Nodes. Whilst I’ll be writing about that particular technology here, the principles apply to other HCI infrastructures.

If the problem of HCI is scaling, the solution is scale. These imbalances in load and growth even out once a number of VMs are on the system – and this scale doesn’t have to be massive. Even from the four-host starting point of a vSAN cluster, I found that when the time came to install node 5, the demands on storage and memory roughly matched the relevant capacities of the new node.

The original hosts need to be sized correctly, but unless you’re starting in a totally greenfield environment you will have existing hosts and storage to interrogate to establish a baseline of current usage. Use these figures, allow appropriate headroom for growth, and then add a bit more (particularly for the storage) to prevent the new infrastructure from running near capacity. Remember you are trading a certain level of efficiency for resilience – the cluster needs to be able to withstand at least one host loss and still have plenty of capacity for manoeuvre.
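As a rough illustration of that sizing exercise, here’s a minimal Python sketch (all figures hypothetical) of the arithmetic: reserve one host’s worth of capacity for failure tolerance, then keep a headroom fraction free on top. It deliberately ignores vSAN-specific overheads such as FTT replication and slack space.

```python
def usable_capacity(hosts: int, per_host_tb: float, headroom: float = 0.25) -> float:
    """Capacity available for workloads after reserving one host (N+1)
    and keeping a growth-headroom fraction of the remainder free."""
    surviving_tb = (hosts - 1) * per_host_tb  # cluster must survive one host loss
    return surviving_tb * (1 - headroom)      # leave room for future growth

baseline_need_tb = 30.0                       # measured from the existing estate
budget_tb = usable_capacity(hosts=4, per_host_tb=20.0)
print(f"Usable: {budget_tb:.1f} TB vs baseline need: {baseline_need_tb} TB")
# Usable: 45.0 TB vs baseline need: 30.0 TB -> comfortable margin for growth
```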

If you are going down the vSAN route, I can thoroughly recommend the ReadyNode option. Knowing that hardware would arrive and just work with the software-defined storage layer, without spending hours digging through the Hardware Compatibility Lists, was a great time saver. We’re also confident that we can turn round to our vendors and say “this didn’t work” without being told “it’s because you’ve got disk controller chipset X and that’s not compatible with driver Y on version Z”. There’s a reason I named this blog “IT Should Just Work”.

DellEMC vSAN ReadyNode

When expanding the cluster, I consider it best practice to add hosts as similar in configuration as possible to the originals. Larger nodes (for example, because storage/memory/CPU is now cheaper/bigger/faster) can create a performance imbalance in the cluster: a process running on host A might get access to a 2.2GHz CPU, while the same process on host B with a 3GHz CPU will finish sooner, so performance becomes inconsistent. Also worth considering is what happens when a host fails, or is taken into maintenance mode for patching. If that host is larger than its compatriots then (without very careful planning and capacity management) there might not be sufficient capacity on the remaining hosts to keep the workloads running smoothly.

It is possible in vSAN to add “storage-only” nodes, reducing the memory and possibly going single-socket (this saves on your license cost too!) and then using DRS rules to keep VMs off the host. Likewise “compute-only” nodes are possible, where the host doesn’t contribute any storage to the cluster. Whilst there are probably specific use-cases for both these types of nodes, the vast majority of the time I believe them to be best avoided. Without very careful consideration of workloads and operational practices these could easily land you in hot water.
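For those rare cases where a storage-only node does fit, the “keep VMs off the host” part can be scripted. Below is a hedged pyVmomi sketch – the cluster, host, and credential names are all hypothetical – that creates a VM group, a host group containing the storage-only node, and a “should not run on” VM/Host rule. Treat it as an outline under those assumptions, not production code.

```python
import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

# Unverified SSL context for lab use only
si = SmartConnect(host="vcenter.example.com", user="admin@vsphere.local",
                  pwd="***", sslContext=ssl._create_unverified_context())
content = si.RetrieveContent()

# Locate the cluster (first match by name; error handling omitted)
view = content.viewManager.CreateContainerView(
    content.rootFolder, [vim.ClusterComputeResource], True)
cluster = next(c for c in view.view if c.name == "vsan-cluster")
view.Destroy()

vms = list(cluster.resourcePool.vm)  # VMs in the cluster's root resource pool
storage_node = next(h for h in cluster.host if h.name == "esx-storage-only-01")

spec = vim.cluster.ConfigSpecEx(
    groupSpec=[
        vim.cluster.GroupSpec(operation="add", info=vim.cluster.VmGroup(
            name="general-vms", vm=vms)),
        vim.cluster.GroupSpec(operation="add", info=vim.cluster.HostGroup(
            name="storage-only-nodes", host=[storage_node])),
    ],
    rulesSpec=[vim.cluster.RuleSpec(operation="add", info=vim.cluster.VmHostRuleInfo(
        name="keep-off-storage-node", enabled=True, mandatory=False,
        vmGroupName="general-vms", antiAffineHostGroupName="storage-only-nodes"))],
)
cluster.ReconfigureComputeResource_Task(spec=spec, modify=True)
Disconnect(si)
```

A “should” (non-mandatory) rule is deliberate here: DRS can still violate it during a host failure rather than leaving VMs down.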

So, I’m a convert. Two years down the line, HCI is the on-premises infrastructure I’d recommend to anyone who asks. And those clouds gathering on the horizon? Well, if you migrate to VMware Cloud on AWS then you’re going to be running vSAN HCI there too!

Rise of the Full Stack Vendors

In a recent Datanauts podcast Chris Wahl discussed Azure and Azure Stack with fellow Rubrikan Mike Nelson and Microsoft’s Jeffrey Snover (if you haven’t already, you can check out the podcast for yourself – Datanauts #148). Jeffrey made some interesting observations about the changes in alignment of some of the major IT vendors over time (this discussion runs from 25 to 29 minutes into the podcast).

He detailed how the big players (DEC, IBM, etc.) had started with a “vertical” alignment by building their own chips, boards, operating systems, and applications. This was followed by a dis-integration where the industry shifted to a “horizontal” alignment – chips from Intel/Motorola, operating systems from Microsoft/Sun, and applications and services coming from a wide range of vendors. He goes on to posit that cloud vendors are turning the industry back towards a vertical alignment, and gives the example of how Microsoft are designing their own hardware (FPGAs, NICs, servers, the new “Brainwave” chip to accelerate AI, etc.) right through to software – all to create the Azure Cloud.

This idea got me thinking about how this is happening elsewhere in the industry, and what the future might hold.

This realignment can be seen across the major IT manufacturers. In recent years Dell – traditionally just a client and server PC vendor – has formed Dell Technologies, picking up tech such as Force10’s networking, EMC’s storage, and VMware’s hypervisor. This puts them in that vertical alignment of controlling their own enterprise stack from the client device, through the network, to the server hardware and the hypervisor sat on it. In an on-premises setup Dell can provide the infrastructure from the end of the user’s fingers to the start of the operating system or container.

Amazon have started from the other direction – AWS as a cloud provider owning their own chipsets, servers, storage, and networking. They own the datacentre end of their customers today, but how long is it before we see the successors to the Kindle Fire devices and Alexa-connected displays being pushed as the end-user device of choice? Everything between the user and the application would then be in their single vertical.

We see similar activity from Google. Their cloud platform stretches down to their Android and ChromeOS operating systems, the Chrome browser, and even into hardware. Although (similarly to Amazon) the endpoint devices are today largely aimed at the consumer market, as the commoditisation of IT continues there’s nothing stopping this leaking into the enterprise.

However, these vertical orientations are not to the exclusion of horizontal partnerships, and we’ve seen a lot more of those over recent years. For example, VMware partnering with AWS, IBM, Microsoft, and Google for cloud provision, Dell EMC powering the on-premises Microsoft Azure Stack, or IBM providing their software on Azure.

So will this continue, and what does the distant future hold? Looking far into the tech future is always guesswork, but if I had to bet I’d suggest that this alignment model will eventually swing back, as these sorts of things always seem to go in cycles. The verticalisation (new word?) will carry on for the next few years, but over time customers will demand more choice and (in the enterprise at least) less of the perceived risk of “vendor lock-in”. Eventually this leads to a tipping point: fragmentation of the stack and a turn back towards the horizontal alignment we are moving away from today.

Thanks Datanauts for the inspiration behind this, and #Blogtober2018 for convincing me to do more long-form opinion posts.

Happy 18th

October 2018 marks my 18-year anniversary of working in Higher Education IT (so yes, about as long as this year’s Freshers have been alive). It’s been a long ride, and things have changed dramatically from technology, personal, and industry perspectives in that time. In this post I’ll be discussing a few of those differences, so gather round and imagine me sat in a rocking chair holding a pipe and talking about the olden times.

October 2000 was a time of change in technology: the perils of the millennium bug were nearly ten months behind us, Napster had gone legal, the last major release on LaserDisc hit the shelves, Sony released the PlayStation 2, and Amazon was best known for selling books online.

I arrived fresh-faced at the University department, and one of my first tasks in the new role was to order some parts for my new computer. There was little budget for IT and we scraped things together from what was around. If memory serves, I ordered a motherboard, memory, and an AMD K6 processor, and coupled these with an existing beige case, power supply, 14″ CRT monitor, and an old hard disk from the recycling pile.

These days we order laptops and desktops from (insert major manufacturer here) and my office desk has a 15″ 8th-gen i7 hooked up to a pair of 29″ widescreen displays. As well as the advances in technology, this is one of the most apparent signs of the professionalisation (and some might say commercialisation) of IT within Higher Education. There’s less scrabbling to recycle outdated components and squeeze assets for decades, and a lot more focus on allowing IT to spend its time fixing and improving things.

Behind the scenes, the server infrastructure consisted of tower cases on a desk in the corner of my office (a sneaky way for a junior employee to get an office to themselves); there was a small UPS on the floor under the table, and the entire lot ran off a single wall outlet. Windows NT 4 was the platform of choice, upgraded about a year later to Windows 2000 and Active Directory. Fast forward and we saw the proliferation of rackmount servers and disk arrays in purpose-built datacentres. Then came virtualisation, with VMware Server and then ESX providing the opportunity to run multiple servers on one piece of tin. These days we’re putting some of these servers “out in the cloud” on the other end of an internet connection, something we wouldn’t have considered 18 years ago.

The network joining all these things together has changed as well. Gone are the days of 10Base2, crimping BNC connectors on cables we’d threaded through the suspended ceilings, and troubleshooting T-pieces and terminators.

CentreCOM 3012SL Hub (photo shared on Instagram by Chris Bradshaw, @startmenu)

Today Gigabit Ethernet to the desktop is the norm, the datacentres run on fibre and 10G copper, and you can sit outside by the campus lake and get a WiFi connection.

As with the network, storage capacity has increased dramatically. On my first day in the office I had a 15 MB quota on my network home drive. In addition to storing all my personal files and settings, this also had to hold my POP mailbox, which I accessed with Eudora. Jump to 2018 and I’m working at a University where staff get a 1TB OneDrive account and a separate 100GB for their email.

Personally, whilst staying in the HE sector, I’ve developed from “generic IT support bod #7” into a more senior role while keeping myself technical. I still retain some of that generalist approach, but my day-to-day work has become much more focused, particularly around virtualisation, servers, and automation.

In conclusion, as everywhere else, technology has moved on dramatically in the past 18 years. Network, storage, and compute have all grown incredibly, and this has allowed us to do things we wouldn’t have considered back in 2000. As well as that, though, I believe the UK Higher Education industry has also changed, and its IT departments have worked hard to adapt. We now take on many more of the processes and technologies you’d expect from our colleagues in more commercial backgrounds, in a bid to provide a modern, up-to-date IT environment for the teaching and research activities of Universities in the current era.

As I finish writing this post, someone has just brought in a laptop from 1992 which they’ve just decided is no longer required. Please ignore the text above about how things have changed.


IT in Higher Education

After over 15 years working in IT within the H.E. vertical I’ve spoken publicly a few times about our corner of the tech industry, with talks at VMworld in 2016 and a recent TechUG meeting, and chairing a roundtable at a UK VMUG UserCon. This post covers some highlights of the content from those sessions; it contains themes that I’ve seen myself at various institutions and that have struck a common chord in discussions with colleagues from other Universities.

The HE IT Environment

TechUG Talk November 2017

There are 17,000 IT professionals* working in the UK Higher Education industry, spread across 160 Universities the length and breadth of the nation. That’s a sizable number, and it doesn’t include those working in IT within Schools and Further Education Colleges. These staff support some amazing research and teaching, and have the opportunity to work with some really awesome people and kit in a wide variety of disciplines.

How many IT departments in other environments can support racing teams, particle accelerators, gene sequencers, dance studios, silver-service restaurants, sports centres, and farms, whilst looking after residential internet customers, Nobel prize-winners, rocket scientists, and brain surgeons, all in a normal day? Dealing with the cutting edge presents unique challenges – for example, in most environments the team looking after the wireless LAN doesn’t have to worry about the people in the office next door experimenting with next-gen wireless tech in the same airspace. As well as the cutting edge, there’s also IT supporting the more generic activities, most of which are found in any large enterprise organisation: there is still the need for a projector in the boardroom, a website for marketing, the EPOS in the coffee shop, payroll systems, and so on.

State of the Art vs State of the Ark

Probably the most obvious challenge to someone dropped into the HE environment is the age range of supported equipment. There’s plenty of the latest and greatest – if you look around the vendors at any tech conference I’d be surprised if any of them didn’t have product in at least one University. But alongside this there’s usually a plethora of kit that’s perhaps past its best-before date but has to be kept running. This is partly down to the traditional grant-based funding model, where “services” are funded once but then expected to keep running forever.

Thankfully server virtualisation came along and helped keep some of those old operating systems running when the hardware they relied on died, and the advances in software-defined networking have provided the opportunity to secure manufacturer-unsupported workloads and protect the rest of the infrastructure.


In higher education (and education in general) the employee headcount is much smaller than student numbers – UK Higher-Ed has about 400,000 staff and roughly 2.2 million students. Compared to a normal corporate environment there is a high turnover of these users, because in addition to the regular comings and goings of employees, roughly a quarter of the “headcount” leaves every year as students graduate. This leads to obvious potential difficulties in handling services such as user accounts – a challenge most Universities addressed some time ago with automation and integration with payroll and student record systems.
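To make that concrete, here’s a minimal Python sketch of the reconciliation step such automation performs – the feed format, field names, and the disable_account helper are all hypothetical: diff the directory against the student-records feed and disable any account with no active record behind it.

```python
import csv
from datetime import date

def load_active_students(feed_path: str) -> set[str]:
    """Read a student-records export; keep only students still enrolled."""
    with open(feed_path, newline="") as f:
        return {row["username"] for row in csv.DictReader(f)
                if date.fromisoformat(row["end_date"]) >= date.today()}

def accounts_to_disable(directory_accounts: set[str], feed_path: str) -> set[str]:
    """Directory accounts with no matching active student record."""
    return directory_accounts - load_active_students(feed_path)

# Example wiring (hypothetical helpers into AD/LDAP tooling):
# for user in accounts_to_disable(ldap_student_accounts, "student_feed.csv"):
#     disable_account(user)
```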

It also presents some problems with software licensing: if site-licensed software is priced on the number of actual users on a site rather than the number of staff, this can get quite costly. Most establishments also operate student computer labs – essentially a large-scale hot-desking environment. If a software license is per-seat (and not on a flexible concurrent basis) then licensing enough seats for students to use the software in any lab (rather than being timetabled into just one lab for that application) can run up similarly high fees.
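A toy illustration of that arithmetic, with entirely made-up prices: even at a much higher unit cost, concurrent licensing can come out far cheaper when peak simultaneous use is a small fraction of the installed seats.

```python
lab_seats = 500              # every lab PC the app could be launched on
peak_concurrent = 60         # most copies ever running at once
per_seat_price = 100         # hypothetical cost per installed seat
concurrent_price = 250       # hypothetical cost per concurrent licence

print("per-seat licensing:  ", lab_seats * per_seat_price)           # 50000
print("concurrent licensing:", peak_concurrent * concurrent_price)   # 15000
```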


One of the more bizarre things that a newcomer to the world of Higher Ed will come across is the issue of ownership. Often a Researcher can leave to join another institution and take their in-progress grants with them. This can mean that hardware and data can sometimes leave the company when staff do, and on the flip-side unexpected computer equipment and large amounts of data can arrive with new starters. Imagine in a more traditional corporate setting a developer or salesperson leaving and not only taking their Macbook with them, but also all the code or customer data they had been working on.

It’s an unusual situation, and one that IT departments in Higher Education need to deal with on a regular basis. They need to ensure that they have sufficient storage capacity that, if terabytes of data arrive unexpectedly tomorrow, they can be safely stored – which requires a flexible infrastructure. They also need to ensure that software licenses and hardware assets that are owned by the company, and not part of any mobile grant, are retained. VDI and application virtualisation technologies can help with the software ownership, and a rigorous asset management system and process is required to keep track of physical devices.


Staff arriving with computers from their previous employer is only one part of the “Bring Your Own Device” experience. BYOD is, and always has been, the norm at Universities for both students and staff. Thousands of students arrive each year with their own devices, and staff with personal budgets and requirements sometimes choose what to buy themselves. I’ve joked before that in Higher Education IT we were “doing” BYOD before we knew it was a thing.

But BYOD is not just about personal devices; it extends to the server environment as well, with staff and research students running servers in cupboards or under their desks. The “UDDC” (Under Desk DataCentre) can be commonplace. Add to this the “Bring Your Own Storage” problem everyone in the tech industry sees following the proliferation of large, cheap, portable USB disks, and IT has a real challenge on its hands to provide the security and resilience that the institution, the business, requires.

Again VDI and App Virtualisation can help to deliver and maintain the software on the plethora of endpoint devices. For the server side, P2V for Under Desk DataCentres is an option. IT can easily show the benefits of a proper server environment and the ability to provide scaling and resilience that’s just not possible with one of these foot-warming server deployments.


I’ve touched on application virtualisation (and written in more depth on the subject), and there’s a lot of software used in Higher Education, a noticeable proportion of which presents a challenge to deploy and manage in an enterprise environment. IT are dealing with thousands of devices, but the individual researcher just wants to download an app and get on with their job.

In Higher Ed (and research in general) there are a lot of little applications out there that another researcher has popped on the web (possibly back in 1994). Today’s academics just want to download and use them, often with the expectation that everything will just work. However, accompanying the download there are often no installation instructions, or instructions that remind you of the cover of a Led Zeppelin album – there are so many steps. If anyone reading this ever finds themselves writing a manual, don’t presume that just because someone has a Nobel Prize in Quantum Chemistry they are adept at editing the Windows Registry.

There are also a lot of scientific applications just not designed to work in an enterprise environment. IT try to live in a world where users don’t share a login, and don’t require full administrator rights on their local workstation just to use it. It’s not just the freeware downloads that fall foul of these expectations – similar issues can often be found in expensive commercial research applications.

To aid this, IT can invest in deployment methods: packaging through platforms such as SCCM, virtualising the package (using ThinApp, XenApp, App-V, Cloudpaging, etc.), or presenting the app through a virtual desktop infrastructure. Each minimises the number of times an awkward installation process needs to be repeated, and potentially allows some flexibility in the end-user device. User Environment Management plugs in here too, letting users escalate permissions without blanket-issuing admin rights across the estate.


So, to summarise, the big difference between a University environment and a traditional corporate one is the great variety of disciplines and activities, almost all of which require some form of IT. IT has become more and more central to almost every workplace over the past few decades, and Higher Education institutions – themselves large enterprises – have at the same time adopted more and more of the practices and processes of the commercial sector. The IT departments at Universities today face many challenges common to their corporate counterparts, in addition to some unique to the sector. Thankfully modern technology is helping IT Pros rise to these challenges.


*HESA (Higher Education Statistics Agency) report for 2014/15 shows 16,900 staff categorised as “Information Technology Technicians” or “Information Technology and Telecommunications Professionals”