Filed Under (e-Learning, Reviews & Opinions) by David Wiles on 02-03-2012

Moodle and Blackboard are both popular online Learning Management Systems (LMSs) with which the Faculty of Health Sciences can develop complete online courses that can include multimedia content.

How do the two compare, and what benefits are unique to each course delivery system? Let’s explore some of the benefits of Moodle and Blackboard.

Firstly, let’s clear the deck and note what Moodle and Blackboard are.

Moodle is an Open Source Learning Management System that is provided freely and can be run on many operating systems. According to the Moodle website it is “free to download, change, share, improve, and customize to whatever you want it to be”. Therefore, any lecturer can use it to build or supplement a course.

Blackboard, on the other hand, is a proprietary Learning Management System, and its use is typically limited to institutions like the university, which pay a sizeable fee each year under a licence agreement for its use. Each and every student at the university pays a small amount every year towards the licensing.

Moodle is definitely the gawky teenager here. It is constantly in a state of development and improvement; there’s no waiting for a company to fix a bug or improve the program. Being open source, each and every user has a unique opportunity to contribute to the development of the product.

The new features of Moodle mostly centre around increased usability. These include easier navigation, improved user profiles, community hub publishing and downloading, a new interface for messaging, and a feature that allows teachers to check student work for plagiarism. Text formats will also allow plug-ins for embedding photos and videos in text (but Blackboard allows for this too).

A major improvement over previous releases is that anyone can set up a community hub, which is a public or private directory of courses. Another notable feature is that Moodle now allows teachers to search all public community hubs and download courses to use as templates for building their own courses. Also, teachers can now see when a student completes a certain activity or task and can also see reports on a student’s progress in a course.

Many small-scale open source platforms require that users support the product themselves, getting their “hands dirty” tweaking and improving the hard way – of course using the open source community as their primary resource. However, Moodle has an advantage: it has become so popular that a small industry has evolved around it, providing a wide range of support and services. Two of the most popular support and hosting services are Moodlerooms and Remote-Learner.

Blackboard Learn is Blackboard’s newest and most innovative upgrade to its flagship package.

Improvements in its uses for higher education include course wikis (Moodle improved theirs as well), blogs and journals that stimulate conversation and reflection on a course, and group tools that make group collaboration and communication easier than in the previous version. Its most notable feature is its Web 2.0 interface, which makes it easy for educators to navigate when adding content to an online course and for students to navigate when accessing course content.

Blackboard Learn now incorporates Blackboard Connect (of course at an additional cost), which alerts students to deadlines, due dates and academic priorities within a course. The new release also allows educators to more easily incorporate videos and photos directly into text for a more complete learning experience.  Finally, Blackboard features Blackboard Mobile Learn (also at an additional cost – and why am I not surprised), which lets students connect to their online courses using various handheld devices, such as the iPhone or iPad.

So, what are the biggest differences?

Features & Functions: Both of these tools have a lot of different functionality available, either natively, or through add-on types of functionality. If different functions are going to be the deciding factor in selecting one of these versus the other, you will really need to drill in and compare and decide for yourselves which features and functions will make the difference for the Faculty.

Cost: This is clearly different. As an open source product, Moodle is simply less expensive. Blackboard is sort of the “Rolls Royce” of today’s LMSs, and there are users of the product who would tell you that if you want the best LMS money can buy, you should make the financial commitment to Blackboard. On the other hand, if you want a premier product for a much lower cost, Moodle is really the way to go. Another thing to be aware of is that Blackboard builds substantial annual increases into its pricing model, since it is continually procuring and integrating additional products into its offerings, with the intent of adding value for its users.

Product/vendor model: As indicated above, Moodle and Blackboard are very different products with very different vendor models. One is open source, and there are many support and service vendors to choose from, while the other is proprietary and there is just the one company to work with. How that impacts your decision is up to you and your institution to determine.

Filed Under (e-Learning) by David Wiles on 11-05-2010

It seems that I am not a very good tutorial video student. I like audio clips and MP3s, but I find video clips rather difficult to concentrate on. Many people enjoy watching lively videos or hearing human voices over reading text; I, however, dislike learning with video tutorials. I prefer having something written down in a manual that I can refer to whenever I feel like it. I can’t pull out a video in the car, and I don’t want to take training lessons with me on the go.

I want something I can read. Think over. Highlight. Skim and scan. Jot notes. Come back to tomorrow. Revisit next week. Print out and take with me.

I can’t do that with a video. I have to focus and listen hard to make sure I’ve captured everything. I lose track of the pointer or action on the screen. I have to concentrate and follow along. I can’t get distracted. I can’t set it down and come back later, unless I’ve bookmarked the video or noted the URL somewhere.

People don’t have one standard when it comes to learning. Humans typically belong to one of three learning type groups: tactile, auditory, or visual. So while you may love your video training courses, you can’t be sure that your associate learns well using the same media.

Here’s an example of different learning types: if I were learning to be a wildlife reserve guide, I could use a map on the wall outlining all the paths and intersections in the reserve, ask for landmarks I’d spot at various points, travel each path once, and never get lost.

A fellow trainee guide, however, might be the complete opposite. She might not need a map. She wouldn’t ask for landmarks. She would ask for a spoken description and would listen intently. She’d repeat the information back to make sure she’d gotten it all, and that would be it – she wouldn’t get lost.

Don’t Make them Listen to Learn

The current trend towards teaching videos is a good one, because it lets auditory learners get the information they want in the medium they like best. The problem is that in leaning towards videos, people are leaving the visual learners behind.

No more articles, no more posts. No more text. It’s all about video – and that’s… not very smart.

If you’re choosing to move away from text into the video world, you’re neglecting a third of your potential customers. All the visual learners would be happy to give you money for your expertise, if only they could get the information you have to sell them in a format that works best for their learning preferences.

So help them out. Offer them downloads.

I don’t mean offer a PDF checklist to go with your one-hour video. I don’t mean giving them a link to the video’s slideshow cue cards. I mean giving them transcripts, hard copies of every word spoken in that video you recorded.

When I get a PDF transcript option on a site, I’m relieved. I’m thankful. I’m grateful. I love the person for remembering that people who have my learning type and people who prefer old-fashioned text-only-please really do exist.

We’re not all into the coolness of video. Really. Especially when it comes to learning.

Speech-to-Text is Dead Easy, Folks

Here’s the long and short of it: If you’re going to do a video, it’s a really nice thing to offer your audience a PDF transcript, too. Really. Even if you don’t think PDFs are cool. Even if you hate text. Even if you’re the trendiest person on earth.

There are people who don’t want to listen to learn. Those people want what you’re offering. So forget being cool. Forget being trendy. Sure, stay on top of technology, but don’t forget that people learn in different ways.

It’s not hard to create a PDF transcript. Get a copy of Dragon NaturallySpeaking. Turn on your video and play it back into your microphone. The speech-to-text software gives you a pretty good rough draft of your transcript. Edit it, and post that PDF up for people to download.
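
If you don’t have Dragon at hand, the same rough-draft approach can be scripted. Here is a minimal sketch using Python and the free SpeechRecognition package – my choice of tool, not a recommendation from anyone else – assuming you have already extracted the video’s audio to a WAV file (the file names are made up):

```python
# Produce a rough-draft transcript from a video's audio track.
# Assumes: the audio has been extracted to WAV (e.g. with ffmpeg) and
# the third-party package is installed (pip install SpeechRecognition).
import speech_recognition as sr

recognizer = sr.Recognizer()

# Load the extracted audio file.
with sr.AudioFile("tutorial_audio.wav") as source:
    audio = recognizer.record(source)  # read the entire file

# Send the audio to a speech-to-text engine; recognize_google() uses
# Google's free web API and needs an internet connection. Very long
# recordings should be split into chunks first.
draft = recognizer.recognize_google(audio)

# Save the rough draft for hand-editing before publishing as a PDF.
with open("tutorial_transcript.txt", "w") as f:
    f.write(draft)
print("Rough transcript saved - now edit it by hand.")
```

The draft will need the same hand-editing a Dragon dictation does, but it gets you most of the way there.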

Or, hire a transcription service. Many transcriptionists can turn your video or audio into a nice text document in damned good time. Heck, even we can transcribe audio and video for you. (Yes, there’s a difference between transcription and learning through listening. The two are not the same.)

Call me old-fashioned if you’d like. Go ahead and laugh that I’m behind the times. Poke fun that I’m resisting change or shunning technology. But just keep this in mind:

If you’re excluding your students, you’re not preparing them to be good medical practitioners!

Filed Under (e-Learning) by David Wiles on 06-10-2009

There are many groups of scholars and academics who have attempted to define the concept of Visual Literacy, but as with any group of individuals there is little general consensus so far. This is certainly due to the fact that those representing the different disciplines and archetypes each want to interpret Visual Literacy in a way that reflects and supports their own contribution or way of thinking. The tragic result is a theoretical concept with seemingly little practical value, one that cannot be used productively until an agreed definition is established.

It should be self-evident that if a concept does not have a broadly accepted framework, if the theory behind it is confusing, and if it is a matter of continuing controversy with every individual trying to force their will on the collective, then the only reasonable way to cope with it is to abandon it. Nevertheless, with the exception of a very few cases of minor importance, no serious attempt has ever been made towards discarding Visual Literacy altogether.

According to Wikipedia, “visual literacy” is defined as: “the ability to interpret, negotiate, and make meaning from information presented in the form of an image. Visual literacy is based on the idea that pictures can be “read” and that meaning can be communicated through a process of reading.”

Pay Visual-Literacy.org a visit and see what they are attempting…

Professor Lancaster’s audacious prediction of a paperless society by the end of the twentieth century is examined from multiple perspectives. Rationales for the prognostication, textual and contextual; reception by the profession; and impact on the literature of library and information science are reviewed. Bibliometric data is introduced in support of the extensive citation links to Lancaster’s core writings. The accuracy of Lancaster’s prediction and the leavening insights of the collateral literature are considered.

DEFINING THE EXPERIENCE

Sometimes we call upon fiction to explain and to make us wiser. The transition from print dominance to paperless ascendancy was one of many important historical shifts. The change from scroll to codex and the introduction of moveable type were also hugely significant innovations. Linking them all together, Thomas Wharton tells us that:

Within every book there lies concealed a book of nothing. Don’t you sense it when you read a page brimming with words? The vast gulf of emptiness beneath the frail net of letters. The ghostliness of the letters themselves. Giving a semblance of life to things and people who are really nothing. Nothing at all. No, it was the reading that mattered, I eventually understood, not whether the pages were blank or printed. The Mohammedans say that an hour of reading is one stolen from paradise. (Wharton, 2002, pp. 75-76)

A COMPELLING FUTURE

Professor F. W. Lancaster’s protean legacy, still unfolding, encompasses four decades of excellent teaching, superb scholarship, and professional leadership. This essay will focus on his justly famous predictions about the paperless society and the future direction of libraries and the librarians who manage them. Although this aspect of his scholarship represents only one facet of his many contributions, it is perhaps the most often cited, invoked, and debated. It has been exactly three decades since Professor Lancaster launched his own library Sputnik, namely his transformative volume entitled Toward Paperless Information Systems (Lancaster, 1978a).

Generously acknowledging such information pioneers as Vannevar Bush, J. C. R. Licklider, and John G. Kemeny, Lancaster then lays the foundation for his own vision of an information-driven, paperless society. And it was a blueprint nurtured by his prior employment with the Saul Herner Company, the National Library of Medicine, Westat Research, and the Central Intelligence Agency. Propelling Lancaster’s futuristic information model was a pervasive concern with the proliferation of the scholarly literature, the cost of producing journals, and the increasing expenses for libraries to acquire and process journals. Lancaster’s scenario for an electronic information system for the year 2000 revolved around what he referred to as the "library in a desk" (Lancaster, 1978a, p. 3).

Scholars and students would have access to major digital files composed of bibliographic information and full-text documents. Scholarly journals, for example, would be composed, edited, distributed, and accessed through his proposed online system. Further, this unified online system would (1) facilitate rapid and effective person-to-person and group-to-group communication; (2) maintain indexes to ongoing research to make these highly accessible; (3) make the archival literature of sciences as accessible as possible; (4) provide facilities to aid the scientist in building and exploiting his own information files; and (5) provide rapid and convenient access to the facilities of one or more information analysis centers.

Lancaster was especially concerned with the proactive distribution of information through a selective dissemination of information (SDI) mechanism. And what of the libraries and librarians that will preside over the coming digital juggernaut? In Toward Paperless Information Systems, Lancaster assumes a possible withering of libraries, but also redefined roles for librarians in the "libraries without walls" suggested so presciently by Robert S. Taylor in 1975. Lancaster closes Toward Paperless Information Systems with a vigorous reaffirmation of the coming paperless society and an almost solemn warning that if we do not plan for its arrival we may be overwhelmed by the ensuing chaos:

Continued here…

Filed Under (e-Learning) by David Wiles on 03-07-2009

The annual “PC Skills” assessment, which all 1st-year health science students complete during orientation, has provided lecturers and training personnel at Tygerberg with concrete data to measure the computer literacy levels of the new students.

GERGA, in association with its partners, is looking at improving the current assessment environment, and “testing via simulation” has been suggested as a possible way forward.

In theory an assessment environment that simulates common software applications is an ideal and logical method to investigate a student’s abilities with various computer applications.

Assessments that utilize only one brand or model of software would be severely limited, as there is no single standard for common software applications such as word processors, spreadsheets, or databases. However, if cleverly designed and presented, an assessment can test the broader principles and processes (the flowchart) that apply to all applications serving a particular need (like word processing).

Bytes People in Johannesburg have provided a link to a new simulation-based assessment that they are building using the Questionmark Perception software and Adobe Captivate. This course content is being developed for the University of Pretoria.

Flash Simulation

Login details:

Username: Test

Password: password

We are all very excited about what is possible with Questionmark Perception for creating simulations. Take a look…

David Wiles

Filed Under (e-Learning) by David Wiles on 18-06-2009

One of the most confusing aspects of eLearning is that nobody knows what it is. Did you know that the “e” does not stand for “electronic”?

The “e” in eLearning would be better defined as Evolving, or Everywhere, or Enhanced, or Extended.

Based on one reported survey from a very respected eLearning company, there are many people who have the wrong definition of eLearning.

The survey asked 259 training managers at Fortune 500 firms what tools they use to create e-learning content. The top choice was PowerPoint with 66% of responses, followed by Microsoft Word with 63%, Macromedia’s Dreamweaver with 61% and Flash with 47%. (Respondents could choose more than one.)

  • Just taking a Word document or PowerPoint presentation and doing a “Save as HTML” does NOT mean you have created eLearning.
  • Just taking a “talking head” presentation and presenting it using a web conference is not eLearning.
eLearning can be defined as …
  • A learning environment supported by continuously evolving, collaborative processes focused on increasing individual and organizational performance.
  • Effective eLearning thrives at the nexus of web usability, communication, relationship, document, and Knowledge Management tools.

I like this definition of Knowledge Management.

Knowledge Management is about using information strategically to achieve one’s core business objectives.

Knowledge Management is the organizational activity of creating the social environment and technical infrastructure so that knowledge can be accessed, shared and created.
Robert K. Logan

| eLearning IS | eLearning is NOT |
| --- | --- |
| Non-linear – Learners determine how, what and when they access information. | Linear – Learners must move through the presentation in a predetermined sequence. |
| A dynamic process – Transformed, personalized and customized on demand in response to learner and environmental variables; available on demand and just in time. | A static event – Learning is not an event that only happens when scheduled training occurs; it happens continuously. |
| Learner controlled – Learners control their own interaction with the content and presentation, with opportunities for reflection and application. | Instructor controlled – The instructor determines sequence, content, media and timing. Long simulations, animations or Flash presentations are instructor controlled, as are synchronous meetings. |
| Reusable objects – Content of any media, chunked down to the most granular meaningful level so that combinations of objects can be assembled and dynamically presented for different environments and functional needs. | Learning, knowledge or information objects – By focusing the use of an object on only one environment, you remove reusability. Web-standard, enterprise-level portal and CMS platforms should be used. |
| Informal – Recognizes that at least 70% of learning occurs outside lectures: in class interaction, through collaboration, in situational communities. | Formal – Assumes learning only occurs within formal training presentations. Training is not the same as learning. |
| Platform independent – Can be transformed for use in a variety of standard formats (XML, HTML, DHTML, PDA, etc.) in a variety of environments, both formal and informal. | Bound to standards – AICC (Aviation Industry CBT Committee) and SCORM (Sharable Content Object Reference Model, US Department of Defense). Why use these limiting standards from extremely different, strongly hierarchical environments? |
| Knowledge Management – Rich, flexible tools chosen to create, collect and distribute information, on demand and contextually, to learners, intra- and extra-organizationally. | An LMS or LCMS – Manages the administrative and content aspects of training and usually supports a linear presentation of materials; used to track learners, not the value of the learning processes. |
| Communities of interest – Collaborative, self-selecting and self-organizing groups of individuals who share the same interests. | CoPs (Communities of Practice), functional or departmental groups – Limited by type of function, title or expertise. |
| RAD (Rapid Application Development) – An iterative, incremental design process. Define, design and refine processes are integrated and parallel. Continuously refining prototypes allows improvements to be integrated and tested with each iteration, and each iteration offers an opportunity to increase the penetration and acceptance of the learning support processes. | ISD – A linear approach to needs analysis, design and evaluation. Errors are geometrically compounded from wrong audience analysis, invalid sample audiences, skewed survey results and a wrong focus on weaknesses. Validity and usability issues are not discovered until training is delivered – by then it is too late to correct, adjust or change because of the sunk resource investment in the deliverables. |
| Multi-channel – Learner <-> Learner, Content <-> Learner, Expert <-> Learner, Expert <-> Content, Expert <-> Expert. | Single channel – Trainer to attendee. |

When we design a website (essentially for providing information) or a so-called “Help” page, we should always keep the following user habits in mind. It might just help you provide a better learning experience and make your content more “intuitive” and manageable:

  • When visitors to your website process the information on a computer screen, their focus and flow of eye movement (so-called “browsing”) will follow the same path as when they process information on a printed page. For example, in English and most European languages reading is done from the upper left to the lower right, scanning text left to right, top to bottom. (Visitors reading Middle-Eastern languages will scan right to left, top to bottom.)
  • When using an application or user interface, and visitors/users have to execute an action – fill in a form, activate a choice or click a button – their focus is most easily drawn to objects such as buttons, menus and text fields. They are looking for visual clues that will lead them to complete the required task.
  • Once this required action is completed, or the user’s focus has been directed to an object lower in their flow of focus (often called “downstream”), it is relatively difficult to redirect the focus back above that object (“upstream”). Users will tend to continue their focus across and down from the object. Simply put, if you are asking a user to do something that is visually in the middle of the page, they will look for feedback, or the next action, to the right of or below the initial object. For example, a “login successful” message should be placed below or to the right of the login form, not above it (see the sketch after this list).
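
To make the “downstream” principle concrete, here is a minimal sketch of my own using Python’s built-in Tkinter toolkit (the form and widget names are invented for illustration). The success message is deliberately placed below and to the right of the Login button, where the user’s focus naturally travels next:

```python
# Minimal login form illustrating "downstream" feedback placement.
# Uses only Python's built-in Tkinter toolkit.
import tkinter as tk

def attempt_login():
    # Feedback appears BELOW and to the RIGHT of the button the user
    # just clicked - downstream in their flow of focus.
    status.config(text="Login successful", fg="green")

root = tk.Tk()
root.title("Downstream feedback demo")

tk.Label(root, text="Username:").grid(row=0, column=0, sticky="e")
tk.Entry(root).grid(row=0, column=1)

tk.Label(root, text="Password:").grid(row=1, column=0, sticky="e")
tk.Entry(root, show="*").grid(row=1, column=1)

tk.Button(root, text="Login", command=attempt_login).grid(row=2, column=1, sticky="w")

# The status label sits in the row below the button - never above the form.
status = tk.Label(root, text="")
status.grid(row=3, column=1, sticky="w")

root.mainloop()
```

Move the status label to row 0 and you will feel how unnatural the “upstream” placement is.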

This behaviour has certain implications for content designers:

  • Do not introduce concept information before the visitor is busy with activities that require these concepts (In other words, don’t explain the requirement for typing in a password twice for confirmation for security reasons on a page before the visitor is actually required to type in the password twice.)
  • Do not explain rules or constraints before a visitor encounters such rules or constraints. (A good example of this would be a long list of what is allowed and what is not allowed in a password before the user has to type in the password.)

The key concept in both statements above is “before”. Although both statements seem to go against the grain of good instructional design, we must consider the reality of user behaviour within an interactive medium. Given a choice between reading and doing, users prefer to be doing. If you warn students about problem areas before they encounter them in computer lab exercises, they will still make the very mistakes you warned them against, and only when you help them rectify those mistakes will they remember that you warned them beforehand. That is the principle, and those are the harsh facts: until someone experiences or recognizes a problem, they cannot process information pertaining to that problem. Most of what we do in instructional design has to do with giving answers to people who don’t yet have a problem.

Mike Hughes
UX Matters
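
To see the “before” principle in code, here is a companion sketch – again my own illustration in Tkinter, not Mike Hughes’ example. The password rules stay hidden until the password field actually receives focus, so the constraint is explained at the moment the user encounters it, not a page earlier:

```python
# Show password rules only when the user reaches the password field.
import tkinter as tk

root = tk.Tk()
root.title("Just-in-time help demo")

tk.Label(root, text="Password:").grid(row=0, column=0, sticky="e")
password = tk.Entry(root, show="*")
password.grid(row=0, column=1)

# The hint starts empty: no rules are shown before they are needed.
hint = tk.Label(root, text="", fg="grey")
hint.grid(row=1, column=1, sticky="w")

def show_rules(event):
    # The user has encountered the constraint - NOW explain it.
    hint.config(text="At least 8 characters, one digit, one capital letter")

password.bind("<FocusIn>", show_rules)

root.mainloop()
```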

I attended an intensive course on presentation skills last year, and during the course I picked up on common mistakes that we all make when engaged in public speaking and giving lectures. I failed on every one of them, but I am learning.

There are plenty of ways in which you can lose your audience and ruin the impression you leave on them, even if that audience is a group of health science students. You will want to avoid these eight common mistakes that lecturers and public speakers make:

  • Too much seriousness. Public speakers don’t need to be serious to be taken seriously. If you are overly reserved, you can look wooden, stiff and uncaring. A smile goes a long way. Show that you can take a joke or handle pressure with graciousness and warmth.
  • Weak speaking skills. In a media-saturated world, people know a good speaker when they hear one. Speaking with a flat or monotone voice, inappropriate volume or poor diction will not be tolerated. Whether you’re speaking one-on-one or to a crowd, pay attention to how you speak, not just to what you say.
  • Lack of clarity. Of course, what you say is important, too. By speaking with clarity of thought and message, you’ll convey an image of effectiveness that a teacher who rambles or speaks disjointedly does not convey. If the message is unclear and non-specific, your listeners will tune out and assume you don’t know what you’re talking about. In many cases the old adage “those who can, do; those who can’t, teach” applies. An expert in the field cannot necessarily teach others his or her skill!
  • Self-absorption. By overusing the words "I," "me" and "my," you will isolate yourself and fail to engage your audience. Even if you’re speaking about your idea, your vision and your responsibility, keep in mind that your job as a teacher is much bigger than you.
  • Lack of interest. Think back to when you were in school. Which teachers captured your attention? The energetic teachers who seemed to love their job or those who lectured dispassionately? Energy, interest and passion for your work are incomparable assets. Are you genuinely interested in what you are saying and doing?
  • Obvious discomfort. It’s painful to watch a leader who is awkward in conversation or uncomfortable before a crowd. If you are tentative or uncomfortable in your role, people begin to doubt your ability to be an effective leader – especially in difficult situations.
  • Inconsistency. Over time, your image becomes tied to your larger reputation. If you have a reliable pattern of behavior – one that’s reflected in what you do and how you do it – your image as a lecturer or public speaker will be seen as genuine. Inconsistencies, however, create an image of a person who’s flaky, insincere or dishonest.
  • Defensiveness. When a teacher is on the defensive, confidence and assurance are undermined. Being unwilling to consider other views, giving a knee-jerk defense of your decision or being incapable of seeking and hearing feedback all weaken your image as a capable, effective lecturer.

A good way to jump-start a change in your image is to see yourself the way others see you. Ask a coworker, boss or direct report to give you feedback on how you come across to those around you. Above all – be teachable. The ancient Greek word for “unteachable” is the same word that gives us “heretic”!

David Wiles

Filed Under (e-Learning, Tips) by David Wiles on 13-05-2009

I have created a quick online tutorial to help you get to grips with the improved “Out-of-Office” messaging function in Outlook and the university Webmail. The tutorial can be found here – right at the top of the list.

The new Exchange 2007 has significantly improved the Out-Of-Office capabilities of both Outlook (2003 upwards) and Webmail.

Now, instead of a single “all-or-nothing” message, you get the following:

  • The ability to tailor separate messages to internal and external users. Now, if you have information that you don’t want the whole world to know, you can place it in your internal out-of-office message so only your co-workers can see it.
  • The ability to schedule the time period covered by the out-of-office message. Have you ever tried to get a jump on things by setting up your out-of-office message a couple of hours before you leave, only to have out-of-office messages sent to people while you were still at your desk? Now you can decide when the messages will start and when they will stop.
  • The ability to have an out-of-office response sent only to those who are in your contacts. One common complaint regarding out-of-office messages lies with members of mailing lists. When people go on vacation and a message is sent to the list, list members often get an out-of-office reply from each person on the list who has one set. If you limit responses to just those in your contact list, you can avoid this. (For those who would rather script these settings than click through them, a sketch follows this list.)
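
All three features are exposed through Exchange Web Services, so they can be scripted as well as clicked. Here is a rough sketch using the third-party Python package exchangelib – my choice of tool, not part of the tutorial – with a placeholder account, password and dates:

```python
# Scheduled Out-of-Office with separate internal/external replies,
# external replies going only to known contacts.
# Assumes: pip install exchangelib; all account details below are
# placeholders, not real university settings.
from exchangelib import Account, Credentials, EWSDateTime, EWSTimeZone, OofSettings

tz = EWSTimeZone("Africa/Johannesburg")
creds = Credentials("username@sun.ac.za", "password")  # placeholder login
account = Account("username@sun.ac.za", credentials=creds, autodiscover=True)

account.oof_settings = OofSettings(
    state=OofSettings.SCHEDULED,       # active only between start and end
    external_audience="Known",         # external replies to contacts only
    start=EWSDateTime(2012, 12, 14, 17, 0, tzinfo=tz),
    end=EWSDateTime(2013, 1, 7, 8, 0, tzinfo=tz),
    internal_reply="I am on leave; for urgent matters contact my department.",
    external_reply="I am out of the office and will reply on my return.",
)
```
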
Filed Under (e-Learning, Tips) by David Wiles on 02-05-2009

Most of us sit with a computer on our desks with an operating system like Windows XP or Vista installed. The CPU (Central Processing Unit, or “brain”) of the computer handles most of the tasks you throw at it without any problem. If your computer is two years old or younger, then most likely you have enough computing power at your disposal to do more than just send e-mail and type out a document or two. A lot of that processing power is going to waste!

Virtualisation is a relatively new “buzz-word” that is surfacing today, and most people – who are not some sort of computer geek – will not know or understand what it means. This article aims to explain what the concept means and what potential it might have for you.

Virtualisation is the process of running one operating system inside another. For example, you can have a perfectly normal Windows Vista operating system installed, and install Windows XP inside that operating system so that it runs in a window. The Windows Vista OS (Operating System) isn’t affected; you don’t need to reboot to switch between them, and you get all sorts of extra features for the virtual Windows, such as the ability to pause it to preserve its virtual RAM contents exactly as you left it.

The first attempts at PC virtualisation were essentially very clever software – lots of programmers worked together to create a virtual machine, complete with virtual CPU, virtual RAM and so on. The operating system being virtualised (usually known as the “guest”, to contrast it with the main operating system, known as the “host”) didn’t realize it was being virtualised at all – all the instructions it ran were actually intercepted by the software and translated onto real hardware.

Modern CPUs have built-in support for virtualisation, making it very easy to do “out of the box”. The end result of all this technology is that a virtualised OS in a virtual machine should be able to run at about 95% of the speed of the same OS on real hardware.
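
As a concrete illustration, here is how a guest might be created and started from a script. This sketch drives VirtualBox’s VBoxManage command-line tool from Python – VirtualBox is my example of a virtualisation product, not one prescribed by this article, and the VM name is made up:

```python
# Create and start a Windows XP guest on any host OS by driving
# VirtualBox's VBoxManage command-line tool.
# Assumes VirtualBox is installed and VBoxManage is on the PATH.
import subprocess

def run(*args):
    # Run one VBoxManage command and fail loudly if it errors.
    subprocess.run(["VBoxManage", *args], check=True)

# Register a new virtual machine for the guest OS.
run("createvm", "--name", "xp-guest", "--ostype", "WindowsXP", "--register")

# Give the guest some virtual RAM, carved out of the host's real RAM.
run("modifyvm", "xp-guest", "--memory", "512")

# Boot the guest in its own window, alongside the host OS.
run("startvm", "xp-guest")
```

A real guest would still need a virtual hard disk and installation media attached, but the point stands: to the host, the guest is just another program running in a window.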

“This all sounds rather nice, but what can this do for me?” you might be thinking.

Firstly, let us look briefly at server virtualisation. A server is a large and powerful computer that provides a “service” to “clients” that are connected to it. These services might include storage space (a network drive), shared access to programs, e-mail, managing printing and so on. If this server breaks, then the service that it provides is no longer available to its clients. If the server becomes old or outdated, then its performance will slow down. If the server is replaced, then the software running on it will have to be built up and configured again – all of which takes time and resources.

Servers are expensive. With virtualisation, a single large and powerful server could replace a number of smaller machines, and running costs could be dramatically reduced. Virtualisation is hardware-independent, so moving a virtual machine from one physical machine to another is easy.

With virtualisation you can lower the number of physical servers, and fewer physical servers mean lower hardware maintenance costs.

You can consolidate your servers. One physical machine can consolidate a collection of services and functions so physical space can be utilized more efficiently in places like your network room.

By having each application within its own “virtual server” you can prevent one application from impacting another application when upgrades or changes are made.

You can develop a standard virtual server build that can be easily duplicated which will speed up server deployment.

You can deploy multiple operating system technologies on a single hardware platform (e.g. Windows Server 2003, Linux, Windows 2000, etc.).

Secondly, software virtualisation, also known as application virtualisation, is a broad term describing software that allows portability, manageability and compatibility of applications by encapsulating them from the underlying operating system on which they are executed.

A fully virtualized application is not installed in the traditional sense, although it is still executed as if it is. The application is fooled into believing that it is directly interfacing with the original operating system and all the resources managed by it, when in reality it is not. Application virtualization differs from operating system virtualization in that in the latter case, the whole operating system is virtualized rather than only specific applications.

For one thing, virtualising an application means that you can switch it on and off as needed. It allows you to customise your operating environment and run only the software or applications that you need.

We use virtualisation in GERGA extensively. Software suites like Microsoft Office 2003 and its modern equivalent Office 2007 can run on the same physical workstation, independent of each other. If students need to do a project using Excel 2007 then they can switch on the Office 2007 application suite as required. Often compatibility is a problem – older programs do not work with newer versions of applications. Virtualisation can address this problem. Virtualise the older program and let it interface with the compatible application…

Virtualisation can also reduce the cost of software licensing. Instead of buying a software licence for each and every physical computer you have, fewer licences are purchased, and the software is virtualised and run on demand. The logic behind this is that if you have 100 computers, not all of them will need to run an application at the same time. For example, if at most 20 of your 100 workstations ever run the application concurrently, 20 licences can cover the whole fleet – a saving of 80%. Virtualising the application and only having it run concurrently on a few workstations at a time means a huge saving in licensing, and in most cases the legal requirements of the software licence are still satisfied.

I have discussed only a couple of benefits that virtualisation might provide for a large institution. So far I have only scratched the surface of how the Faculty of Health Sciences can benefit from this exciting technology.

David Wiles