Learnlets


Clark Quinn’s Learnings about Learning

Think like a publisher

2 May 2011 by Clark Leave a Comment

Way back when we were building the adaptive learning system dubbed Intellectricity™, we were counting on a detailed content model that carved up the overall content into discrete elements that could be served up separately to create a unique learning experience.  As I detailed in an article, issues included granularity and tagging vocabulary.  While my principle for the right level of granularity is that each element plays a distinct role in the learning experience, e.g. separating a concept presentation from an example from a practice element, my simpler heuristic is to ask “what would a knowledgeable mentor give to one learner versus another?”  The goal, of course, is to support the future ability to personalize and customize the learning experience.

[Figure: performance ecosystem model]

Back then, we were thinking of it as a content delivery engine, but our constraints required content produced in a particular format, and we were thinking about how we’d get content produced the way we needed.  Today, I still think that producing content in discrete chunks, under a tight model, is a valuable investment of time and energy.  Increasingly, I’m seeing publishers taking a similar view, and as new content formats get developed and delivered (e.g. ebooks, mobile web), more careful attention to content makes sense.

The benefits of more careful articulation of content can go further. In the performance ecosystem model (PDF), the greater integration step is specifically around more tightly integrating systems and processes.  While this includes coupling the disparate systems into a coherent workbench for individuals, it also includes developing content into a model that accounts for different input sources, output needs, and governance.  While this is largely for formal content, it could be community-generated content as well.  The important thing is to stop redundant content development.  Typically, marketing generates requirements, and engineering develops specifications, which then are fed separately to documentation, sales training, customer training, and support, which all generate content anew from the original materials.  Developing into and out of a content model reduces errors and redundancy, and increases flexibility and control.  (And this is not incommensurate with devolving responsibility to individuals.)
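
To make that concrete, here’s a minimal sketch (in Python, with an invented toy schema, not any particular system) of a content model built from discrete, role-tagged elements, so documentation, sales training, and support can assemble their outputs from the same source elements instead of each rewriting them:

```python
from dataclasses import dataclass, field

@dataclass
class ContentElement:
    """One discrete chunk with a single pedagogical role."""
    element_id: str
    role: str                 # e.g. "concept", "example", "practice"
    topic: str                # what the element is about
    body: str                 # the content itself
    tags: set = field(default_factory=set)  # semantic tags for later selection

def assemble(elements, topic, roles):
    """Pull the elements one output needs (a course, a help page, ...)."""
    return [e for e in elements if e.topic == topic and e.role in roles]

# The same source elements feed different outputs, rather than each
# group redeveloping the content from scratch:
library = [
    ContentElement("c1", "concept",  "returns-policy", "Our returns policy is..."),
    ContentElement("e1", "example",  "returns-policy", "For instance, a customer who..."),
    ContentElement("p1", "practice", "returns-policy", "Handle this scenario: ..."),
]
support_article = assemble(library, "returns-policy", {"concept", "example"})
sales_training  = assemble(library, "returns-policy", {"concept", "example", "practice"})
```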

We’re already seeing the ability to create custom recommendations (e.g. Amazon, Netflix), and companies are already creating custom portals (e.g. IBM).  The ability to begin to customize content delivery will be important for customer service, performance support, and slow learning.  Whether driven by rules or analytics (or hybrids), semantic tagging is going to be necessary, and that’s a concomitant requirement of content models.  But the upside potential is huge, and will eventually be a differentiator.
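
As a rough illustration of how semantic tagging underpins rule-driven (let alone analytics-driven) delivery, here’s a sketch with invented names: a single hand-crafted rule that picks the next element for a learner based on tags, role, and what they’ve already seen.

```python
def next_element(elements, learner):
    """A hand-crafted rule: prefer an unseen practice item matching the
    learner's current goal; fall back to an example, then a concept."""
    unseen = [e for e in elements
              if learner["goal"] in e["tags"] and e["id"] not in learner["seen"]]
    for role in ("practice", "example", "concept"):
        for e in unseen:
            if e["role"] == role:
                return e
    return None

elements = [
    {"id": "c1", "role": "concept",  "tags": {"pricing"}, "body": "..."},
    {"id": "p1", "role": "practice", "tags": {"pricing"}, "body": "..."},
]
learner = {"goal": "pricing", "seen": {"c1"}}
print(next_element(elements, learner))  # -> the unseen practice element, p1
```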

Learning functions in organizations need to be moving up the strategic ladder, taking responsibility for more than just formal learning, but also performance support and eCommunity.  Thinking like advanced publishers can and should be about moving beyond the text, and even beyond content, to the experience.  While that could mean custom designs (and in some cases it must, e.g. simulation games), for content curators and providers it also has to be about flexible business models and quality development.  I believe it’s a must for other organizations as well.  I encourage you to think strategically about content development in richer ways: stop the one-off development, and start putting some up-front effort not only into templates, but also into models with tight definitions and labels.

New Horizon Report: Alan Levine – Mindmap

20 April 2011 by Clark 3 Comments

This evening I had the delight to hear Alan Levine present the New Media Consortium’s New Horizon Report for 2011 to the ASTD Mt. Diablo chapter.  As often happens, I mindmapped it.  Their process is interesting, using a Delphi approach to converge on the top topics.

For the near term (< 1 year), he identified the two major technologies as ebooks and mobile devices (with a shoutout for my book: very kind).  For the medium term (2-3 years), he pointed to augmented reality and game-based learning (though only barely touching on deeply immersive simulations, which surprised me).  For the longer term (4-5 years), the two concepts were gesture-based computing and learning analytics.

A very engaging presentation.

[Mind map of Alan Levine’s Horizon Report talk]

Learning Experience Design thru the Macroscope

7 April 2011 by Clark 11 Comments

Our learning experience design is focused, essentially, on achieving one particular learning objective.  At the level of curricular design, we are then looking at sequences of learning objectives that lead to aggregate competencies.  And these are delivered as punctate events.  But with mobile technologies, we have the capability to truly start to deliver what I call ‘slow learning’: delivering small bits of learning over time to really develop an individual.  It’s a more natural map to how we learn; the event model is pretty broken.  Most of our learning comes from outside the learning experience.  But can we do better?

Really, I don’t think we have a handle on designing and delivering a learning experience that is spaced over time, and layered over our real-world activities, to develop individuals in micro bits over a macro period of time rather than macro bits over a micro bit of time (which really doesn’t work).  We have pieces of the puzzle (smaller chunks, content models) and we have the tools (individualized delivery, semantics), but putting them together really hasn’t been done yet.

Conceptually, it’s not hard, I reckon.  You have smaller chunks of content, and a more distributed performance model. You couple that with more self-evaluation, and you design a system that is patiently persistent in assisting people and supporting them along the way.  You’d have to change your content design, and provide mechanisms to recognize external content and real performance contexts as learning experiences.  You’d want to support lots of forms of equivalency, allowing self-evaluation against a rubric to co-exist with mentor evaluation.
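
One way to picture the ‘patiently persistent’ part is a scheduler that drips small chunks out over weeks, widening the gap when a self-evaluation goes well and narrowing it when it doesn’t. A minimal sketch of the idea, with made-up names and intervals rather than a validated spacing algorithm:

```python
from datetime import date, timedelta

def next_review(last_interval_days, self_rating):
    """Widen the gap after a good self-evaluation, shrink it after a poor one.
    self_rating: 1 (struggled) .. 5 (confident)."""
    if self_rating >= 4:
        interval = max(1, last_interval_days * 2)
    elif self_rating == 3:
        interval = last_interval_days
    else:
        interval = max(1, last_interval_days // 2)
    return interval, date.today() + timedelta(days=interval)

interval, due = next_review(last_interval_days=4, self_rating=5)
print(interval, due)  # the next small chunk is due roughly 8 days out
```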

There are some consequences, of course.  You’d have to trust the learner, they’d have to understand the value proposition, and it’s a changed model that all parties would have to accommodate.  On the other hand, putting trust and value into a learning arrangement somehow feels important (and refreshingly different :).  The upside potential is quite big: learning that sticks, learners who feel invested in, and better organizational outcomes.  It’s really trying to build a system that is more mentor-like than instructor-like.  It’s certainly a worthwhile investigation, and potentially a big opportunity.

The point is that technology is no longer the limit; our imaginations are. Once you take that on board, you can start thinking about what we would really want from a learning experience, and figure out how to deliver it.  We still have to figure out what our design process would look like, what representations we would need to consider, and our associated technology models, but this is doable.  The possibility is now well and truly on the table; anyone want to play?  I’m ready to talk when you are.

Clarity needed around Web 3.0

25 February 2011 by Clark 6 Comments

I like ASTD; they offer a valuable service to the industry in education, including reports, webinars, and very good conferences (despite occasional hiccups, *cough* learning styles *cough*) that I happily speak at and have even served on a program committee for.  They may not be progressive enough for me, but I’m not their target market.  When they come out with books like The New Social Learning, they are to be especially lauded.  And when they make a conceptual mistake, I feel it’s fair, nay a responsibility, to call them on it.  Not to bag them, but to try to achieve a shared understanding and move the industry forward.  And I think they’ve made a mistake that is problematic to ignore.

A recent report of theirs, Better, Smarter, Faster: How Web 3.0 will Transform Learning in High-Performing Organizations, makes a mistake in its extension of a definition of Web 3.0, and I think it’s important to be clear.  Now, I haven’t read the whole report, but they make a point of including their definition in the free Executive Summary (which I *think* you can get, too, even if you’re not a member, but I can’t be sure).  Their definition:

Web 3.0 represents a range of Internet-based services and technologies that include components such as natural language search, forms of artificial intelligence and machine learning, software agents that make recommendations to users, and the application of context to content.

This I almost completely agree with.   The easy examples are Netflix and Amazon recommendations: they don’t know you personally, but they have your purchases or rentals, and they can compare that to a whole bunch of other anonymous folks and create recommendations that can get spookily good.   It’s done by massive analytics, there’s no homunculus hiding behind the screen cobbling these recommendations together, it’s all done by rules and statistics.
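
A toy version of that ‘rules and statistics’ idea, with invented data: recommend the items that most often co-occur with what someone already has, counted across everyone’s (anonymous) histories. Real recommendation engines are far more sophisticated, but this shows there’s no homunculus behind the screen, only counting.

```python
from collections import Counter
from itertools import combinations

def recommend(histories, person, top_n=3):
    """Count how often item pairs co-occur across everyone's histories,
    then suggest the items that co-occur most with this person's items."""
    pair_counts = Counter()
    for items in histories.values():
        for a, b in combinations(sorted(items), 2):
            pair_counts[(a, b)] += 1
            pair_counts[(b, a)] += 1
    scores = Counter()
    for owned in histories[person]:
        for (a, b), n in pair_counts.items():
            if a == owned and b not in histories[person]:
                scores[b] += n
    return [item for item, _ in scores.most_common(top_n)]

histories = {
    "anon1": {"book_a", "book_b", "book_c"},
    "anon2": {"book_a", "book_b"},
    "you":   {"book_a"},
}
print(recommend(histories, "you"))  # -> ['book_b', 'book_c']
```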

I’ve presented before my interpretation of Web 3.0, and it is very much about using smart internet services to deliver, essentially, system-generated content (as opposed to 1.0 producer-generated content and 2.0 user-generated content).  The application of context to content could be a bit ambiguous, however, and I’d mean that to be dynamic application of context to content, rather than pre-designed solutions (which get back to web 1.0).

As such, the first of their three components covers the semantic web, which, had they stopped there, would be fine. However, they bring in two other components. The second:

  • the Mobile Web, which will allow users to experience the web seamlessly as they move from one device to another, and most interaction will take place on mobile devices.

I don’t see how this follows from the definition. The mobile web is really not a fundamental shift.  Mobile may be a fundamental societal shift, but just being able to access the internet from anywhere isn’t really a paradigmatic shift from webs 1.0 and 2.0. Yes, you can access produced content and user-generated content from wherever/whenever, but it’s not going to change the content you see in any meaningful way.

They go on to the third component:

  • The third element is the idea of an immersive Internet, in which virtual worlds, augmented reality, and 3-D environments are the norm.

Again, I don’t see how this follows from their definition.  Virtual worlds start out as producer-generated content, web 1.0. Sims and games are designed and built a priori.  Yes, it’s way cool, technically sophisticated, etc., but it’s not a meaningful change. And, yes, worlds like Second Life let you extend it, turning it into web 2.0, but it’s still not fundamentally new.  We took simulations and games out of advanced technology for the conferences several years ago, back when I served.  This isn’t fundamentally new.

Yes, you can do new stuff on top of mobile web and immersive environments that would qualify, like taking your location and, say, goals and programmatically generating specific content for you, or creating a custom world and outcomes based upon your actions in the world from a model not just of the world, but of you, and others, and… whatever.   But without that, it’s just web 1.0 or 2.0.

And it’d be easy to slough this off and say it doesn’t matter, but ASTD is a voice with a long reach, and we really do need to hold them to a high standard because of their influence.  And we need people to be clear about what’s clever and what’s transformative.  This is not to say my definition is the only one; others have interpretations that differ, but I think the convergent view is that, while Web 3.0 may be more than the semantic web, it isn’t just evolutionary steps on webs 1.0 and 2.0.  I’m willing to be wrong, so if you disagree, let me know.  But I think we have to get this right.

Jane Hart’s Social Learning Handbook

24 February 2011 by Clark 1 Comment

Having previously reviewed Marcia Conner and Tony Bingham’s The New Social Learning, and Jane Bozarth’s Social Media for Trainers, I have now received my copy of Jane Hart’s Social Learning Handbook.  First, I’ll review Jane’s book on its own, and then put it in the context of the other two.  Caveat: I’m mentioned in all three, for sins in my past, so take the suitable precautions.

Jane’s book is very much about making the case for social learning in the workplace, as the first section details.   This is largely as an adjunct to formal learning, rather than focusing on social media for formal learning. Peppered with charts, diagrams, bullet lists, and case studies, this book is really helpful in making sense of the different ways to look at learning.

The first half of the book is aimed at helping folks get their minds around social media, with the arguments, examples, and implementation hints.  While her overarching model does include formal structured learning (FSL), it also covers the other components that complement FSL: accidental and serendipitous learning (ASL), personally directed learning (PDL), group-directed learning (GDL), and intra-organizational learning (IOL).  The point, echoing Harold Jarche’s viewpoint, is that we need to support not just dependent learning, but independent and interdependent learning.  And she’s focused on helping you succeed, with lots of practical advice about problems you might face and steps that might help.

Jane has a unique and valuable talent for looking at things and sorting them out in sensible ways, and that is put to great use here.  Nearly the entire last half of the book is 30 ways to use social media to work and learn smarter, where she goes through tools, hints, and tips on getting started, and more.  Here, her elearning ‘tool of the day’ site has yielded rich benefits for the reader, because she’s up to date on what’s out there, and has lists of sites, tools, and people, with helpful comments.

This is the book for the learning and development group that wants to figure out how to really support the full spectrum of performers, not just the novices, and/or who want to quit subjecting everyone to a course when other tools may make sense.

So, how does this book fit with Jane Bozarth’s Social Media for Trainers, and Conner & Bingham’s The New Social Learning?  Jane B’s book is largely for trainers adding social media to supplement formal learning, whereas Jane H’s book is for those looking to augment formal learning more broadly, so they’re complementary.  Marcia and Tony’s book is really more the higher-level picture and as such is more useful to the manager and executive.  Roughly, I’d sell the benefits to the organization with Marcia & Tony’s book, I’d give Jane B’s book to the trainers and instructional designers who are charged with improving formal learning, and I’d give Jane H’s book to the L&D group overall who are looking to deliver more value to the organization.

They’re all short, paperback, quick and easy reading, and frankly, I reckon you oughta pick all three of them up so you don’t miss a thing.   You’d be hard pressed to get a better introduction and roadmap than from this trio of books.   Let’s tap into this huge opportunity to make things go better and faster.

Quip: limits

21 February 2011 by Clark Leave a Comment

The limits are no longer the technology; the limits are between our ears (ok, and our pocketbooks).

My old surfing buddy Carl Kuck used to say that the only limits are between our ears, and I’ve purloined his phrase for my nefarious purposes.  This connects to Arthur C. Clarke’s observation that “any sufficiently advanced technology is indistinguishable from magic”.  I want to suggest that we now have magic: we can summon up demons (ok, agents) to do our bidding, and peer across distances with crystal balls (or web cams). We really can bring anything, anywhere, anytime. If we can imagine it, we can make it happen, if we can marshal the vision and the resources. The question is, what do we want to do with it?

Really, what we do in most schooling is contrary to what leads to real learning. I believe that technology has given us a chance to go back to real learning and ask “what should we be doing?”.   We look at apprenticeship, and meaningful activity, and scaffolding, and realize that we need to find ways to achieve this.   (Then we look at most schooling and recoil in horror.)

So, let’s stop letting our cognitive architecture limit us (set effects, functional fixedness, premature evaluation), think broadly about what we could be doing, and then figure out how to make it so. I’ll suggest that some components are slow learning, distributed cognition, social interaction, and meta-learning (aka 21st Century skills).  What do you think might be in the picture?

Learning Technologies UK wrap-up

31 January 2011 by Clark 4 Comments

I had the pleasure of speaking at the Learning Technologies ’11 conference, talking on the topic of games.   I’ve already covered Roger Schank‘s keynote, but I want to pick up on a couple of other things. Overall, however, the conference was a success: good thinking (more below), good people, and well organized.

The conference was held on the 3rd floor of the conference hall, while the ground and 1st floors hosted the exposition: the ground floor hosted the learning and skills (think: training) exhibits, while the 1st floor held the learning technology (read: elearning) vendors.  I have to admit I was surprised (not unpleasantly) that things like the reception weren’t held in the exhibit halls.  The conference was also split between learning technologies (day 1) and learning and skills (day 2), so I was somewhat surprised that there weren’t receptions on the respective floors to support the vendors, tho’ having a chance to chat easily with colleagues in a more contained environment was also nice.

I’m not the only one who commented on the difference between the floors: Steve Wheeler wrote a whole post about it, noting that the future was on show above, and the past below.  At a post-conference review session, everyone commented on how the level of discussion was more advanced than expected (and it gave me some ideas of what I’d love to cover if I got the chance again).  I’d heard that Donald Taylor runs a nice conference, and was pleased to see that it more than lived up to the billing.  There was also a very interesting crowd of people I was glad to meet or see again.

In addition to Roger’s great talk on what makes learning work, there were other stellar sessions. The afore-mentioned Steve gave an advanced presentation on the future of technologies that kept me engaged despite a severe bout of jetlag, talking about things you’ve also heard here: semantics, social, and more.  He has a web x.0 model that I want to hear more about, because I wasn’t sure I bought the premise, but I like his thinking very much. There was also a nice session on mobile, with some principles presented and then an interesting case study using iPads under somewhat severe (military) constraints on security.

It was hard to see everything I wanted to, with four tracks. To see Steve, I had to pass up Cathy Moore, whose work I’ve admired, though it was a pleasure to meet her for sure.  I got to see Jane Bozarth, but at the expense of missing my colleague Charles Jennings.  I got to support our associate Paul Simbeck-Hampson, but at the cost of missing David Mallon talking on learning culture, and so on.

Still, a great selection of talks to choose from is a better problem than the reverse.  A great experience, overall, and I can happily recommend the conference.

Continual Learning

20 January 2011 by Clark 4 Comments

A recent request for feedback on new learning technology research areas highlighted areas they thought were important, and a subset naturally struck me:

  • the connection between formal and informal learning: an interest of mine since I first noticed the gap in organizations
  • emotional and motivational aspects of technology-enhanced learning: which was the topic of my first book
  • informal learning: which is a major component of my work as a member of the Internet Time Alliance
  • personalization of learning: which was the focus of a project I led a decade ago and still an area of interest
  • ubiquitous and mobile technology and learning: given that I’ve just written a book about it :)

As academic research agendas are wont to be, this isn’t a surprising list (there were other interesting areas as well), because despite the overlap there’s reason to study each on its own.  But what inspired me was the intersection.

I started thinking about a vision (PDF) I had about 8 years ago now, where your portable mobile device would know where you are and what you are doing, and, coupling that with your learning goals, would opportunistically layer on support for developing you based upon your context.  Think about how you’d learn if you had no limits at all: the ideal would be to have a personal mentor always with you, looking for opportunities to develop you.
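
As a rough sketch of the technology side of that vision (hypothetical fields and a single hand-written rule; the hard parts are really the content, the curricula, and the model of the learner): match the device’s sense of where you are and what you’re doing against your open learning goals, and surface a small, relevant nugget.

```python
def opportunistic_nugget(context, learner_goals, nuggets):
    """Pick a small learning item whose triggers match the current context
    and that serves one of the learner's open goals."""
    for nugget in nuggets:
        if (nugget["goal"] in learner_goals
                and nugget["place"] == context["place"]
                and nugget["activity"] == context["activity"]):
            return nugget
    return None

context = {"place": "client-site", "activity": "pre-meeting"}
goals = {"negotiation", "product-knowledge"}
nuggets = [
    {"goal": "negotiation", "place": "client-site", "activity": "pre-meeting",
     "body": "Two-minute refresher: anchoring your opening offer."},
]
print(opportunistic_nugget(context, goals, nuggets))
```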

The learning benefits are severalfold: it’s customized for you, and it’s focused on your interests.  It also ideally would bridge the gap between formality and informality, as it could be more formal for a new area and then gradually become more informal.  Another way to think of it is as ‘slow learning’ (like ‘slow food’, not like ‘slow learner’), based upon a long-term relationship with (and a long-term interest in) the learner.

The technology capabilities make this possible. What’s still required are the curricula, the content, the rules, and the business model. If nothing else, I think organizations should be thinking about this internally, mobile or not.  It is another way to start thinking about workscapes/performance ecosystems and a broader perspective on learning. Anyone game?

2011 Predictions

1 January 2011 by Clark

For the annual eLearn Mag predictions, this year I wrote:

I think we’ll see some important, but subtle, trends. Deeper uses of technology are going to surface: more data-driven interactions, complemented by both more structured content and more semantics. These trends are precursors to some very interesting nascent capabilities, essentially web 3.0: system-generated content.   I also think we’ll see the further demise of “courses über alles” and the ‘all-singing all-dancing‘ solution, and movement towards performance support and learning facilitation driven via federated capabilities.

I think it’s worth elaborating on what I mean (I was limited to 75 words).

I’ve talked before about web 3.0, and what it takes is fine granularity and deep tagging of content, and some rules about what to present when.  Those rules can be hand-crafted based upon good guesses or existing research, but new opportunities arise from having those rules capitalize on rich data of interactions.  Based both upon some client work and what I heard at the WCET conference, folks are finally waking up to the potential of collecting internet-scale data (e.g. Amazon and Netflix) and mining it as a basis for optimizing interactions.  Taking these steps now has some immediate payoffs in terms of optimizing content development streams and looking anew at what the important interactions are, but the big returns come in creating optimized learning and performance interactions.
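
To ground the contrast between hand-crafted rules and data-driven ones, here’s a toy sketch with an invented logging schema: record every interaction, then compare success rates across content variants so the rules can be tuned from evidence rather than guesses.

```python
from collections import defaultdict

interactions = []  # each entry: {"variant": ..., "succeeded": True/False}

def log_interaction(variant, succeeded):
    interactions.append({"variant": variant, "succeeded": succeeded})

def success_rates():
    """Aggregate logged interactions into per-variant success rates."""
    totals, wins = defaultdict(int), defaultdict(int)
    for i in interactions:
        totals[i["variant"]] += 1
        wins[i["variant"]] += i["succeeded"]
    return {v: wins[v] / totals[v] for v in totals}

log_interaction("worked-example-first", True)
log_interaction("practice-first", False)
log_interaction("practice-first", True)
print(success_rates())  # {'worked-example-first': 1.0, 'practice-first': 0.5}
```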

The second part is a bit of evangelism hoping that more organizations will follow the path foreseen by my Internet Time Alliance colleagues, and move beyond just training to covering informal learning.   I’ve talked before about looking at the bigger picture of learning, because I’m convinced that the coming differentiator will not be optimal execution but continuing innovation.   That takes, in my mind, both an optimized infrastructure and ubiquitous access (c.f. mobile).   It’s more, of course, because it also implies a culture supportive of learning, yet I think this is both an advantage for business competitiveness and a move that meets real human needs, which makes it an ideal as well as real goal.

The eLearn Mag predictions should be out soon, and I strongly encourage you to see what the bevy of prognosticators are proposing for the coming year.  I welcome hearing your thoughts, too!

Engineering both the front- and back-end

11 November 2010 by Clark Leave a Comment

I had the pleasure of meeting Bob Glushko a couple of weeks ago, and finally had a chance to dig into a couple of papers of his (as well as scan his book Document Engineering). He’s definitely one you would call ‘wicked smart’: having built several companies and now having sold one, he’s only hanging around doing cutting-edge information science because he wants to.

The core of what he’s on about is structuring data, as documents, to facilitate the transactions that form the basis of services. He focuses on the term ‘document’ rather than ‘data’ to help emphasize the variety of forms in which they manifest, the human component, and most of all the nature of combining data to facilitate business interactions. At the heart is something I’ve been excited about, what I call content models, but he takes it much further to support a more generic and comprehensive capability.

He makes a useful distinction between ‘front-end’ and ‘back-end’ services to help highlight the need to take the total service-delivery system into account. The front end provides the customer-facing experience, while the back end ensures efficiency and scalability. It can be difficult to reconcile these two, and yet both are necessary.
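
To make the front-end/back-end point concrete, here’s my own minimal sketch in Python (not Glushko’s formalism; the order schema is invented): one canonical, structured document, with a back-end function that validates and processes it and a front-end function that renders the same document for people.

```python
order = {  # one canonical, structured document
    "order_id": "A-1001",
    "customer": "Acme Ltd",
    "lines": [{"sku": "WIDGET-9", "qty": 3, "unit_price": 12.50}],
}

def process(doc):
    """Back end: validation and totals -- the efficiency/scalability side."""
    assert doc["order_id"] and doc["lines"], "incomplete document"
    return sum(line["qty"] * line["unit_price"] for line in doc["lines"])

def render(doc):
    """Front end: the customer-facing experience, built on the same document."""
    total = process(doc)
    return f"Order {doc['order_id']} for {doc['customer']}: {total:.2f} total"

print(render(order))  # Order A-1001 for Acme Ltd: 37.50 total
```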

This is important in learning experience design as well. Having served on either side, on both, and as the mediator between them, I know the tension that can result when the caring designer crosses swords with the focused developer.

I have talked before about the potential of web 3.0, system-generated content, and that’s what this approach really enables. Yes, the efficiency and effectiveness gains are enough to justify this approach in your learning experience system design, but the potential for smart, adaptive experiences is the new opportunity.

If you’re building more than just content, but also delivery systems and business engines, you owe it to yourself to get into Document Engineering. If you’re going further (and you should), you really need to get into the whole services and information science area.

There are exciting advances in technologies, going beyond just XML to learning-focused structures on top, with solid conceptual engineering behind them, that are the key to the next generation of learning systems (and, of course, more).
