In previous blog posts I covered specific ways of getting the best out of CELCAT, and in particular how to be ready to use Automation. In my last entry, I mentioned that unrealistic expectations of Automation are a common source of problems, and today I thought I’d focus on that. The truth, however, is that such unrealistic expectations flow from not following what I like to call the ‘Golden Rule’ of using timetabling software - a rule that is far broader than just Automation.

“The timetable and data in the system must reflect reality.”

Simple, right? It should go without saying, but I find that it’s often not the case, leading to a variety of problems and challenges. It really is just another way to express the software principle GIGO – “Garbage In, Garbage Out”. To find out whether your timetable reflects reality, ask yourself: “If I pick any random event on my timetable for a specific week, day, time and room, and I go to that room at that time, how will the students, staff, room layout, room fixtures, room size, type of event, duration of the event, content of the delivery, attendance monitoring and booked equipment match up to what I have against it in CELCAT?” Apart from unplanned staff or student absences, the answer should be a 100% match. If it isn’t, you should identify the reasons and put measures in place to improve the situation as much as possible.
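If you want to make that spot-check systematic rather than occasional, it can even be scripted. Here is a minimal sketch of the idea; the Event fields and the observe_room callback are hypothetical stand-ins for whatever your audit process records on the day, not CELCAT’s actual schema or API.

```python
import random
from dataclasses import dataclass

@dataclass
class Event:
    # Hypothetical fields mirroring the attributes in the question above
    event_id: str
    room: str
    staff: tuple
    student_group: str
    layout: str
    equipment: tuple

def audit_sample(events, observe_room, sample_size=20):
    """Spot-check random events against what is actually found in the room."""
    mismatches = []
    for event in random.sample(events, min(sample_size, len(events))):
        observed = observe_room(event)  # dict of attributes recorded on the day
        for attr in ("room", "staff", "student_group", "layout", "equipment"):
            if observed.get(attr) != getattr(event, attr):
                mismatches.append((event.event_id, attr))
    return mismatches
```

Even a sample of twenty events per term, checked this way, gives you a rough mismatch rate to track over time.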

Now, there are limits to how achievable this is, which depend on a variety of factors, and sometimes the effort of maintaining a 100% realistic timetable simply outweighs the benefits. This should not be the case across most, or even large parts, of your timetable. For example, consider an initial joint lecture followed immediately by five concurrent workshops in five rooms, where students split themselves into five groups based on what they feel like doing on the day. It’s hardly realistic to expect someone to capture the memberships of the five groups and enter them in CELCAT in the gap between the joint session ending and the workshops starting. The best you can do is create one event with five rooms and an attached note explaining what will happen, and then either make the event exempt from attendance monitoring or put in place a procedure to take and capture manual registers. However, you should limit such approximations and exceptions to the absolute minimum, and ideally have a formal process, naming convention, categorization or note to mark such events as special. That way, you can conceptually split your timetable into ‘realistic’ and ‘approximate’ parts, and use that split to measure and compare feedback and satisfaction. If one department consistently fails to provide realistic data, and you mark its events as such in the system, you can compare metrics such as student feedback or NSS results across departments - and use the comparison to justify additional resources or policy changes.
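As a rough illustration of how that ‘realistic’/‘approximate’ split could feed a comparable metric, here is a small sketch that assumes each event record carries a department and a category flag; the field names and the ‘APPROX’ value are purely illustrative, not CELCAT conventions.

```python
from collections import defaultdict

def approximation_rate(events):
    """Per-department share of events flagged 'APPROX' rather than realistic."""
    totals, approx = defaultdict(int), defaultdict(int)
    for e in events:
        totals[e["department"]] += 1
        if e["category"] == "APPROX":
            approx[e["department"]] += 1
    return {dept: approx[dept] / totals[dept] for dept in totals}

events = [
    {"department": "History", "category": "APPROX"},
    {"department": "History", "category": "REALISTIC"},
    {"department": "Physics", "category": "REALISTIC"},
]
print(approximation_rate(events))  # {'History': 0.5, 'Physics': 0.0}
```

A figure like this, set alongside feedback scores per department, is exactly the kind of evidence that supports a case for more resources or a policy change.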

For the rest, you should aim at all times to make every single aspect of the events and resources in CELCAT reflect what is going to happen in reality. If you do this, there should be no problems or surprises - CELCAT clash checking will identify all possible double-bookings and problems in advance and allow you to resolve them. Every student will have a seat, and every event will be in a room with the required fixtures and equipment. The system will also never mislead anyone: students and staff will see all their events (and only their events) on personal timetables, and clash checks and other error reports will reflect real problems that timetablers need to resolve, without false errors mixed in that someone must keep track of outside the system. Utilisation statistics will also be true, and can then be used to drive and support resource planning.
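To make the utilisation point concrete: a common way to calculate room utilisation is frequency (hours booked out of hours available) multiplied by occupancy (seats filled while the room is in use) - and both inputs are only meaningful if the bookings and attendance in the system match reality. A minimal sketch with made-up numbers:

```python
def room_utilisation(booked_hours, available_hours, attendance_hours, capacity):
    """Utilisation = frequency (hours in use / hours available)
                   * occupancy (seats filled / seats provided while in use)."""
    frequency = booked_hours / available_hours
    occupancy = attendance_hours / (booked_hours * capacity)
    return frequency * occupancy

# A 40-seat room booked for 25 of 45 available hours in a week,
# with 700 student-hours of actual attendance:
print(f"{room_utilisation(25, 45, 700, 40):.0%}")  # 39%
```

If the bookings include ghost events, or the group sizes are aspirational, the figure this produces is fiction - and any space planning built on it inherits the fiction.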

As just one example, a common problem I encounter with clients is that multiple staff members are added to an event’s staff field, which the system interprets (correctly, because that is what the field is for) as meaning that all of those people must be present to deliver the event. The results are entirely predictable: if the same people are also linked to other events happening at the same time (a very common situation), the clash reports are flooded with false-positive double-bookings that have to be ignored, which in turn makes it difficult to spot the true clashes that cannot be ignored. At the same time, staff will see events on their timetables that they don’t have to attend, students will see the wrong staff on their timetables, and reports on staff working hours will be incorrect. The Time Adviser functionality is also severely restricted, as suggested alternative slots will exclude any slot where one of the listed staff members is shown as busy. All of this leads to a breakdown of trust in the system, and forces timetablers to work out in their heads what the CELCAT tools were designed to handle.
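To see why the flood of false positives follows directly from the data, here is an illustrative sketch of the mechanics. It assumes a typical clash check that flags any pair of overlapping events sharing a staff member; the event structure is hypothetical, not CELCAT’s own.

```python
from itertools import combinations

def staff_clashes(events):
    """Flag every pair of overlapping events that share a staff member."""
    clashes = []
    for a, b in combinations(events, 2):
        overlap = a["start"] < b["end"] and b["start"] < a["end"]
        shared = set(a["staff"]) & set(b["staff"])
        if overlap and shared:
            clashes.append((a["id"], b["id"], shared))
    return clashes

events = [
    {"id": "SEM1", "start": 9, "end": 10, "staff": {"Jones", "Smith"}},
    {"id": "LEC2", "start": 9, "end": 11, "staff": {"Smith"}},
]   # Smith was padded onto SEM1 but isn't actually delivering it

print(staff_clashes(events))  # [('SEM1', 'LEC2', {'Smith'})]
```

Remove Smith from SEM1 (since he isn’t actually delivering it) and the false positive disappears - which is exactly what keeping the staff field realistic achieves.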

How does this relate to expectations of Automation? Well, an Automation implementation often brings the temptation to use the opportunity to try to get what you’ve always wanted but could never achieve. If the data collection and communication are not handled well, it’s very easy to end up with hundreds (or thousands) of requests in which staff have entered their desired requirements instead of realistic ones, with no indication that this is the case.

Consider the following example. If, for the past several years, you’ve only ever been able to provide one-hour seminars for all your modules while staff would have preferred two-hour seminars, and you then open up data collection with vague or unrealistic promises about the abilities of automated timetabling, staff could jump at the opportunity and enter two-hour seminars as an absolute requirement for all their modules. Now, unless you’ve been doing an absolutely terrible job with timetabling (of course you haven’t!), there’s probably a good reason you’ve never been able to give everyone two hours - it’s physically impossible. Automation is a powerful tool, but it can’t perform miracles, and running it against a series of impossible requests that do not reflect what is achievable will lead to problems that take a great deal of time and effort to unpick. Data collection should therefore be sensitive to this. While there’s great benefit in capturing desired requirements for the purposes of modelling and testing, you should also capture a baseline that you know from experience to be physically achievable. Otherwise you risk being lost in a sea of uncertain requests and requirements that simply cannot be used to create a timetable, with no way of knowing which ones are absolute requirements and which can safely be ignored or treated as optional.
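To show why ‘physically impossible’ can be meant literally, here is a back-of-envelope feasibility check; every number in it is made up for illustration:

```python
seminar_rooms = 20
teaching_hours_per_week = 40                         # e.g. 09:00-17:00, Mon-Fri
available = seminar_rooms * teaching_hours_per_week  # 800 room-hours per week

seminar_groups = 500
for length in (1, 2):
    required = seminar_groups * length
    verdict = "feasible" if required <= available else "IMPOSSIBLE"
    print(f"{length}-hour seminars: {required} of {available} room-hours -> {verdict}")
```

Capturing both the one-hour baseline and the two-hour aspiration lets you run exactly this kind of sanity check before asking Automation to attempt the impossible.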