tech-ed collisions

Bringing the cloud into your LMS

The Web can be a wonderful place, filled with an inestimable number of great tools and content with tremendous teaching and learning potential. At the same time, though, there is a great deal of material on it that is completely inappropriate to have accessible from a learning environment.

As we move towards digital curricula and more integrated online learning environments, and adapt and adopt technologies for use in education, many schools and other educational organisations are implementing Learning Management Systems (LMSs). These have been around for quite some time and are mature, very sophisticated applications; however, they are far from ubiquitous. This post is for those who use LMSs and other online learning environments, and those who understand them.

LMS vendors, no matter how good they are and how fast they can roll out new functionality, cannot keep up with the pace of the Web. New content and services are being introduced at an astounding rate. There will always be someone who can do something better, or something new that is really useful and has great applicability in the classroom. The best of the vendors recognise this and have for some time allowed plugins, widgets, blocks and so on to be integrated into their environments. The trouble is that they all had their own unique way of doing it, so if you were a small tool/service provider, in order to get your tool into their LMSs you would have to write a custom interface for each LMS you wanted to integrate with (a difficult and costly exercise for small providers).

Enter IMS LTI. The IMS Global Learning Consortium (IMS) "is a global, nonprofit, member association that provides leadership in shaping and growing the learning and educational technology industries through collaborative support of standards, innovation, best practice and recognition of superior learning impact." IMS has quite a number of technical specifications to support the use of technology within an educational context; however, three of those specifications form the core of their 'Digital Learning Services'. These are:

I will endeavour to look at the broader Digital Learning Services in more detail in a later post, but for the moment I am interested in exploring Learning Tools Interoperability (LTI). LTI allows you to 'launch' an external tool from within (typically) an LMS. There is a great overview of LTI here by Dr. Chuck which is well worth a look if you want a much better explanation than I can offer.

LTI comes in two flavours (three if Basic LTI Simple Outcomes gets the promotion it deserves!).  These are:

  1. Basic Learning Tools Interoperability
  2. Learning Tools Interoperability

Essentially, Basic LTI allows an LMS to 'launch' an external tool, while full LTI also allows the tool to return data to the LMS.

In LTI terminology, an LMS is known as a Tool Consumer (TC) and the external tool as a Tool Provider (TP). Tool Providers can be all manner of interesting Web 2.0 style services, content and so on, making for great teaching and learning opportunities. A Tool Consumer is not restricted to being an LMS either - an LMS is simply one 'context' for a TC. It could also be a portal or any other type of Web environment that may be used in the delivery of learning.

Security is supported via the use of OAuth. Using OAuth, teachers/tool providers are able to ensure that only authorised users (e.g. students) are able to launch and use the tool.
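To make the launch-plus-OAuth idea concrete, here is a minimal sketch of how a Tool Consumer might sign a Basic LTI launch as an OAuth 1.0 (HMAC-SHA1) signed form POST. The launch URL, key, secret and field values are illustrative placeholders, not from any real deployment, and a production TC would use a tested OAuth library rather than hand-rolling the signature.

```python
import base64
import hashlib
import hmac
import time
import uuid
from urllib.parse import quote


def sign_launch(url, params, consumer_key, consumer_secret):
    """Return the launch form fields with an oauth_signature added."""
    oauth = {
        "oauth_consumer_key": consumer_key,
        "oauth_nonce": uuid.uuid4().hex,
        "oauth_signature_method": "HMAC-SHA1",
        "oauth_timestamp": str(int(time.time())),
        "oauth_version": "1.0",
    }
    all_params = {**params, **oauth}
    # OAuth signature base string: method & encoded URL & encoded,
    # sorted parameter string (the signature itself is excluded).
    param_string = "&".join(
        f"{quote(k, safe='')}={quote(v, safe='')}"
        for k, v in sorted(all_params.items())
    )
    base = "&".join(["POST", quote(url, safe=""), quote(param_string, safe="")])
    # Signing key is consumer_secret + "&"; there is no token secret in LTI.
    key = f"{quote(consumer_secret, safe='')}&"
    digest = hmac.new(key.encode(), base.encode(), hashlib.sha1).digest()
    all_params["oauth_signature"] = base64.b64encode(digest).decode()
    return all_params  # POST these as form fields to the tool's launch URL


launch = sign_launch(
    "https://tool.example.com/launch",
    {
        "lti_message_type": "basic-lti-launch-request",
        "lti_version": "LTI-1p0",
        "resource_link_id": "lesson-42",
        "user_id": "student-7",
        "roles": "Learner",
    },
    "my-consumer-key",
    "my-shared-secret",
)
```

The Tool Provider recomputes the same signature from the shared secret and rejects the launch if it does not match, which is how the TP knows the request really came from an authorised consumer.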

For teachers this is great - in schools they are often restricted in their access to the Web. You can easily imagine how getting access to great tools and content, and making them available through a safe and secure channel in the LMS, could open up the classroom to some fantastic services from the Web.

We have trialled Basic LTI as both a tool consumer and tool provider and are keen to go through the conformance testing from IMS to get listed as compliant.

I can imagine a whole market opening up for small tool providers, as they now have access to significant markets via LTI-compliant LMSs that are used by education departments and schools worldwide.

The following is a short video highlighting the experience of one such provider:

Cheers,

Jerry

Filed under  //   standards  

Common Cartridge in action - some teachers' perspectives

A great video here provides some perspectives from teachers on the use of Common Cartridge and its application in Moodle. The video was taken at a summer school for the ASPECT project in Europe.

There are some really savvy teachers, and others supporting education, in this project. They really understand what interoperability can achieve and how to adapt content and their teaching/learning environments to meet their needs. I would love to see examples of their work - I am sure they would stand out as stellar examples of leading practice.

Filed under  //   standards  
Posted July 21, 2010

Review of IDEA10: "Learning Futures: Technology Challenges"

I was fortunate enough to attend this year's IDEA conference (IDEA10) last week, and my first impression was 'what a long way it has come over the last few years'. What started out a few years ago as a lab where content and application developers got together to test how they could move learning content from one application to another, along with some presentations on areas of interoperability, has now emerged as a very important conference. As some of the speakers stated, discussions on interoperability and technical standards can cause many an eye to glaze over, but when you look at what they are enabling, and the fantastic outcomes for education that they can and are achieving, you realise just how important this work is.

Day 1 of the conference was the IDEALab Workshop. In the morning we looked at the Schools Interoperability Framework (SIF) and the work that is happening around the country as school education jurisdictions work together to solve common interoperability problems. After lunch we looked at the consultation work Link Affiliates has been doing for DEEWR in supporting the DER. These areas included:

  • 21C Curriculum Content
  • W3C Accessibility guidelines
  • Curriculum description
  • Lesson Plans
  • Content discovery and exchange
  • E-portfolio technologies
  • 21C Learning Environments
Following this we had a detailed session on accessibility and WCAG 2.0 (the Web Content Accessibility Guidelines). A demonstration on accessibility really highlighted for me just how much consideration needs to go into making your web content properly accessible.

The 'Technology in Education Open Forum' began on day 2, and this was a really interesting day. The scene was set in the opening panel session, where a group of educators talked about what they want from technology to support and enhance the work that they are doing. Following this session, a number of panels looked at how they, as infrastructure developers, providers and so on, are working towards providing the types of environments that our educators need. Another panel of educators then responded to the earlier sessions and discussed what was needed so that they could use these environments. Following this was a session which looked at some of the amazing work that is taking place - unfortunately I missed this session as I had some other duties to attend to. Finally, we had the IMS GLC Learning Impact Awards Regional Finalist Showcase. A number of initiatives were showcased and I would love to have seen them all; however, I was representing one of those initiatives and there was no time for me to get to see the others.

On day 3 we had an international perspective from IMS GLC and ADL. These two standards organisations are doing some great work and gave fantastic insights into their work at the leading edge. What came through for me is that standards really do provide a platform for innovation. Finally we had the winners of the Regional Learning Impact Awards, and congratulations must go to Peter Higgs and his team at the Tasmanian Polytechnic and Skills Institute for the work they have done on Mobile Assessment and Online Recognition using QTI solutions. They are very deserving winners.

I am also really pleased to say that we were runners-up to them and took out the "People's Choice Award" for the work we have been doing developing personal and professional development social networking environments using our tool, Fused. A big thank-you to all who participated.

Filed under  //   standards  

a global infrastructure for sharing learning resources

Here's a great little article on working towards creating a 'global infrastructure for sharing learning resources'. The article, on the Creative Commons wiki, discusses what you, as a repository or content owner, might like to think about should you want to make your content discoverable and shareable as open education resources (and why wouldn't you?). Of course, it isn't necessarily limited to OER - it outlines good practice and considerations for other types of resources.

Firstly, you need a consistent way of structuring the information about your resources (title, author, description and so on). This descriptive information, known as metadata (data about data), should be created conforming to one of the many metadata standards. The OER team who put the article together recommend Learning Object Metadata (LOM) or Dublin Core Metadata (DC). LOM is used to describe a particular type of learning resource known as learning objects, while DC is a more general-purpose metadata specification used to describe a wider variety of learning resources.

Once you have described your resources, you need to make them more discoverable. There are a few ways you can do this. The OER folk suggest that you allow your metadata to be 'harvested' (collected by a special computer program and stored in a centralised repository along with metadata from other repositories). This is similar to the way Google and other search engines collect information about your website. There are standards for harvesting too. The Open Archives Initiative Protocol for Metadata Harvesting (OAI-PMH) is perhaps the best-known specification for harvesting, while more generalised Web specifications such as RSS and Atom might also be worth considering in some situations.

Harvesting your data into a centralised repository is one way of making your resources more discoverable. Another is to participate in real-time federated search, whereby a search service makes multiple simultaneous search requests over a number of repositories. The OER article recommends harvesting over federated searching, and there are a number of reasons for this. Firstly, it is more efficient and more scalable, but it also helps enable richer search functionality (it is easier to develop and implement advanced search features in one place than in potentially many). GLOBE, another initiative dedicated to making educational resources more discoverable and shareable, also recommends the approaches and specifications put forward in the OER article. GLOBE does, however, recognise that there are instances where business rules prevent harvesting, and is looking at the SQI specification. A number of GLOBE partners have implemented SQI.
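To give a feel for how simple the harvesting side is in practice, an OAI-PMH harvest is just a sequence of HTTP GET requests. This little sketch builds those request URLs (the repository base URL is a placeholder); the first request asks for Dublin Core ("oai_dc") records, and each subsequent page is fetched with only the resumptionToken the repository returned, as the protocol requires.

```python
from urllib.parse import urlencode


def list_records_url(base_url, resumption_token=None):
    """Build an OAI-PMH ListRecords request URL.

    The first request names a metadataPrefix; follow-up requests carry
    only the resumptionToken (the token is exclusive in OAI-PMH).
    """
    if resumption_token:
        args = {"verb": "ListRecords", "resumptionToken": resumption_token}
    else:
        args = {"verb": "ListRecords", "metadataPrefix": "oai_dc"}
    return f"{base_url}?{urlencode(args)}"


first_page = list_records_url("http://repo.example.edu/oai")
next_page = list_records_url("http://repo.example.edu/oai", "batch-2-token")
```

A harvester simply loops: fetch, parse out the records and the resumptionToken from the XML response, and repeat until no token is returned.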

Filed under  //   standards  

Tweak the tweet: a lesson in standards development

This should be interesting for all those interested in standards development. Standards development for me is all about consensus building (to state the bleeding obvious), but the way consensus (or at least more or less general agreement) is reached on many of our standards can take months or even years. A justification for the standard may be developed, use cases created, and issues examined in intricate detail by experts. There can be 'raging' academic debate over minor nuances in terms and expressions. Email lists play host to (sometimes) furious and boisterous discussions between leading proponents of obscure points of view. Drafts are published, votes are taken, and there may even be a face-to-face meeting in some exotic location. Editors are engaged to produce faultless works of academic excellence, and the standard may eventually be published (with an appropriately important-looking version number) to some obscure website tucked away behind a payment gateway, safely out of reach of its intended audience.
Actually, some standards are even open (we would need to carefully define what 'open' means here) to the 'public' - or at least free to access, if you can find them. Using or implementing them can also be a major challenge. The pursuit of academic excellence and absolute correctness leaves us with documents that are incredibly difficult for us mere mortals.
OK - enough of the satire, but hopefully it illustrates a point or two. Many of our standards do take quite some time to develop, and experts have to be employed to ensure that this often difficult work gets done correctly. Usability is also a concern for some of us who have tried to implement various (though not all) standards and specifications.
Now, getting to the point of this post - crowdsourcing and the 'wisdom of the crowd' is becoming quite well accepted. The ability to get things done very quickly using the power of the Web and the enthusiasm of the crowd is amazing to watch. What has this got to do with standards?
Well... take a look at what is going on over at Project Epic. From their website, Project Epic are:

information scientists, computer scientists and computational linguists at the University of Colorado at Boulder and the University of California, Irvine. We specialize in societal transformation in conjunction with technology use; computer-mediated communication studies; software engineering and architectures; information security; network security and computational linguistics with a deep commitment to understanding the domains for which we design and study.
So what are they up to and what have they done? The ReadWriteWeb blog has this great post on Project Epic's 'Tweak the Tweet' initiative, where they have got together with a bunch of hackers from around the world to create a new hashtag syntax for Twitter for use in emergencies such as that in Haiti. The RWW post shows that the syntax is pretty easy to learn, and gives a few examples:
Every tweet should contain at least one main tag like #need [explain need], #offering or #injured [name]. You can find a full list of main tags here. In addition, tweets can also have data tags like #name [name], #loc [location] or #contact [email, phone etc.]. These tweets can also contain often-used keywords that don't need the hashtag sign, like food, supplies, road, hospital or help. Here are some real-world examples of this new syntax being used in Haiti:

  • #haiti #need security #loc General Hospital PAP #contact @thehatian
  • #haiti #need water #loc Orphanage Foyer de Sion #contact @robinbauer #src @AnnCurry
  • Can you deliver beans rice water to orphanage? #Haiti #Need Food #Contact: @childhopeintl #Loc: Delmas 75, Rue Cassagnol #14, PaP BLESS YOU
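Part of the appeal of a syntax this regular is that a few lines of code can pull the fields out. Here is a rough sketch of a parser for the tags quoted above - the tag list is partial and the parsing deliberately simplified, so treat it as an illustration rather than an official Tweak the Tweet implementation.

```python
import re

# A partial, illustrative set of main and data tags from the syntax.
TAGS = ("need", "offering", "injured", "name", "loc", "contact", "src")

# Match a recognised hashtag plus any trailing ":" or whitespace.
TAG_PATTERN = re.compile(r"#(%s)\b[:\s]*" % "|".join(TAGS), re.IGNORECASE)


def parse_tweet(text):
    """Return a dict mapping each recognised tag to the text that follows it."""
    fields = {}
    # Splitting on the tag pattern yields:
    # [preamble, tag1, value1, tag2, value2, ...]
    parts = TAG_PATTERN.split(text)
    for tag, value in zip(parts[1::2], parts[2::2]):
        fields[tag.lower()] = value.strip()
    return fields


tweet = "#haiti #need water #loc Orphanage Foyer de Sion #contact @robinbauer #src @AnnCurry"
fields = parse_tweet(tweet)
```

Unrecognised hashtags like #haiti simply stay in the surrounding text, which is roughly the behaviour a real consuming application would want for event tags.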
The Project Epic website has more details on the initiative here. This 'standard' of course is not a real one, as it hasn't gone through all the ratification processes of formally recognised standards bodies, but like many other examples it may become a de facto standard - one which the crowd intentionally uses. The Twitter hashtag itself and other elements of Twitter (e.g. DM, @ etc.) are other great examples.
Now, it should be said that we use standards and specifications all the time without even knowing it - I wouldn't even be able to count or identify the number of standards in place to support the writing of this post, for example - but I am not intentionally using all of them. Like many great standards, they are just there, implemented somewhere in the background, enabling me to post something that can be read almost instantly in almost any place in the world. However, in writing this post I have only used two bits of a standard intentionally and overtly (I should say that for this post I am not distinguishing between a standard and a specification) - the rest are behind buttons in my WYSIWYG HTML editor. They were the italic tag and the break tag in HTML.
Back to the Twitter tags - something has to be said about why the crowd intentionally and quite willingly uses those 'standardised' tags: their simplicity and usability.
'Tweak the Tweet' looks like a great initiative and it will be interesting to see how it is adopted. In a disaster situation, it will be interesting to see whether those involved have the presence of mind to remember its syntax when they are sending out their tweets. I guess the real use will be by applications developed to support it. The tags have been designed so that computer applications can easily interpret the tweets, so we need applications that support both the creation of the tweets and the reading of them (I feel an iPhone application coming on).
So now to the really important point of this post - how long has it taken to develop this 'standard'? It has happened 'in the blink of an eye' compared to general standards development. Likewise with Twitter tags in general. The crowd is able to build consensus incredibly quickly when the need arises and, as can be seen, the result is quite good: it is relevant, easy to use and can have amazing adoption rates. The RWW post even mentions the word 'metadata' - amazing! Normally I would expect any standard or specification involving metadata to take an age to develop. There are some important lessons to be learned here for standards developers. While many standards and specifications require enormous amounts of rigour and examination, it is interesting to look at what can be achieved in a very short period of time when the need arises and we have a motivated 'crowd'.

Filed under  //   Twitter   standards  

Live coverage of the SCORM 2.0 workshop

For anyone interested in what may be happening at the SCORM 2.0 workshop, I see Mark Oehlert is covering it using Cover It Live here. Cheers, Jerry

Filed under  //   standards  

Specifications for Repository Federations - Part 1

Our company has been providing search/discovery services for repositories for over 10 years. The first project we were involved with was EDNA (EDucation Network Australia) - Australia's gateway to education resources. Edna, as it is now known, is still providing search services for content that it catalogues, and also for other useful collections of content on the Web via its distributed search.

Distributed search, or federated search as it is often called, has a number of challenges. Searching multiple collections in real time can have performance/scalability problems, and many repositories and collections use different methods for accessing them. Specifications (and adoption of them) help federated search implementers. However, just as there are many different search solutions with different interfaces, there are a number of relevant standards and specifications to select from. The edna distributed search mentioned earlier uses some open source distributed search software (openDSM) that we have developed, which utilises a number of these specifications to access different collections.

Real-time searching of multiple collections provides one way of searching multiple repositories in a single query. Another approach is to 'harvest' information about resources from many repositories into a central repository and to provide a search across that central collection. There are specifications available for this approach, but one in particular is very widely adopted.

When we have a number of repositories to search across (loosely speaking, a federation), it is useful to be able to describe those repositories (what they contain, what protocols/specifications they use, intended audience, metadata profiles etc.) and store that information in some sort of registry. This gives us at least three types of specifications to look at:

  • federated search
  • harvesting
  • registries
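The first of these - fanning a single query out to several repositories at once and merging the results - can be sketched in a few lines. The two "repositories" below are stand-ins returning canned results; a real implementation (such as the openDSM software mentioned above, which I am not reproducing here) would call remote search APIs over different protocols behind the same interface.

```python
from concurrent.futures import ThreadPoolExecutor


def search_repo_a(query):
    # Stand-in for a remote repository search call.
    return [{"title": "Intro to fractions", "source": "repo-a"}]


def search_repo_b(query):
    # Stand-in for another repository using a different protocol.
    return [{"title": "Fraction worksheets", "source": "repo-b"}]


def federated_search(query, searchers):
    """Query every repository concurrently and merge the result lists."""
    with ThreadPoolExecutor() as pool:
        result_lists = list(pool.map(lambda search: search(query), searchers))
    # A real service would also de-duplicate and rank at this point.
    return [hit for results in result_lists for hit in results]


hits = federated_search("fractions", [search_repo_a, search_repo_b])
```

The sketch also shows where the scalability concern comes from: every query costs one round trip per repository, and the slowest repository sets the response time, which is one reason the harvesting approach is often preferred.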
In upcoming posts I will start to consider some of these specifications as they relate to one of the initiatives I participate in, GLOBE, which is a federation of federated searches providing access to education-related repositories. GLOBE uses a number of search and harvesting specifications and is looking at developing registry services. Cheers, Jerry (comfortably back in Firefox after a brief foray into Chrome)


Filed under  //   standards  

SIFA and ADL partnership

The Schools Interoperability Framework Association (SIFA) and Advanced Distributed Learning (ADL) have officially announced a partnership looking at the development and implementation of SCORM in school software applications. SIF is a specification that enables interoperability between the applications you will find in a school system, while ADL's SCORM is a specification for packaging and delivering learning content. The pilot will test a number of use cases around passing content to learning platforms and sharing it. Further information about the partnership is in the press release, which is available here.


Filed under  //   standards  

Interoperability standards for virtual worlds

Virtual worlds are getting quite a bit of attention at work at the moment, and for good reason. Clearly there is huge potential for their use in education, and momentum in this area is really building. Like many, I have experimented a bit with platforms like Second Life but am now wanting to do more. Second Life, while great for some uses, may not meet all requirements for everyone.

I am particularly interested in installing my own virtual world, and there are a number of options. Once installed, the first thing I need to do is start building or populating my virtual worlds, and so far this is not easy for a novice. A lot of effort seems to be required to start creating appealing and useful artefacts for these worlds. What I'd really like to do, once I have created something, is to be able to transport it into another virtual world so that I can get some re-use out of it. Some sort of standards and mechanisms for such transportation would be great here. Interoperability specs, here we come!

So it was with interest that I read this post on the ReadWriteWeb blog. 'Teleporting' sounds a lot more interesting than 'harvesting' (metadata). Sharing and re-using assets from virtual worlds is going to be very important, and I look forward to hearing more about work in this area (perhaps we will see an OpenSocial for virtual worlds). Cheers, Jerry.


Filed under  //   standards   virtual worlds  
Posted July 9, 2008

nice post on ePortfolio standards

From the Learning Futures Eiffel team blog, here's a nice introduction to standards to consider for ePortfolios. It gives a nice summary of the major ePortfolio-specific standards and, importantly, mentions related specifications such as OpenSocial, which I believe those interested in ePortfolios should, at a minimum, start to become familiar with. From the article:

Today, even if few ePortfolio suppliers are engaged in the implementation of existing specifications, those doing it generally do so within the context of a specific community, using what is called application profiles, i.e. an adaptation of a base specification to the particular requirements of this community. This adaptation adds a level of complexity to the issue of interoperability, as different application profiles of the same base specification do not necessarily interoperate...
Our own experience in this area certainly backs this up. Some time ago, we developed an Employability Skills ePortfolio and used the IMS ePortfolio specification to build it. The IMS specification itself is (well) quite comprehensive to say the least, which added some complexity to our work, but in developing a profile specifically for employability skills we in effect lost interoperability with other IMS ePortfolio conformant applications unless they used the same profile as us (highly unlikely). In the past I have discussed the need for simple-to-implement standards and concentrated on specifications such as RSS, Atom and microformats. The Learning Futures article references hResume, an interesting format used by LinkedIn. Compare the definition of that with a heavy-duty specification and see which one you would rather implement.
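To illustrate why microformats like hResume are so light to implement: the data is just ordinary HTML with agreed class names, so a few lines of standard-library parsing can pull it out. The fragment below is an illustrative, incomplete hResume-style snippet (the name and summary are made up), and the parser handles only the two class names it looks for.

```python
from html.parser import HTMLParser

# An illustrative fragment using hResume-style class names,
# not a complete or validated hResume document.
SNIPPET = """
<div class="hresume">
  <span class="fn">Jane Example</span>
  <span class="summary">Teacher and learning technologist</span>
</div>
"""


class HResumeParser(HTMLParser):
    """Collect the text content of elements with recognised class names."""

    def __init__(self):
        super().__init__()
        self.current = None
        self.fields = {}

    def handle_starttag(self, tag, attrs):
        cls = dict(attrs).get("class", "")
        if cls in ("fn", "summary"):
            self.current = cls

    def handle_data(self, data):
        if self.current and data.strip():
            self.fields[self.current] = data.strip()
            self.current = None


parser = HResumeParser()
parser.feed(SNIPPET)
```

Compare that with the tooling needed to consume a full IMS ePortfolio package and the appeal of the lightweight option is obvious.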


Filed under  //   eportfolio   standards