Learnlets
Clark Quinn's Learnings about Learning
(The Official Quinnovation blog)

16 August 2017

3 E’s of Learning: why Engagement

Clark @ 8:07 AM

When you’re creating learning experiences, you want to worry about the outcomes, but there’s more to it than that.  I think there are 3 major components for learning as a practical matter, and I lump these under the E’s: Effectiveness, Efficiency, & Engagement. The latter may be more of a stretch, but I’ll make the case.

When you typically talk about learning, you talk about two goals: retention over time, and transfer to all appropriate (and no inappropriate) situations.  That’s learning effectiveness: it’s about ensuring that you achieve the outcomes you need.  To test retention and transfer, you have to measure more than performance at the end of the learning experience (unless your definition of the experience naturally includes this later feedback), and certainly more than just asking learners if they thought it was valuable.  You have to see whether the learning has persisted, and is being used as needed.

However, you don’t have unlimited resources to do this; you need to balance your investment in creating the experience against the impact on the individual and/or organization.  That’s efficiency. The investment should be rewarded with a return that’s a multiple of the cost.  This is just good business.

Let’s be clear: investing without evaluating the impact is an act of faith that isn’t scrutable.  Similarly, achieving the outcome at an inappropriate expense isn’t sustainable.  Ultimately, you need to achieve reasonable changes to behavior under a viable expenditure.

A few of us have noticed problems sufficient to advocate quality in what we do.  While things may be trending upward (fingers crossed), I think there’s still a ways to go when we’re still hearing about ‘rapid’ elearning instead of ‘outcomes’.  And I’ve argued that the necessary changes produce a cost differential that is marginal, yet yield outcomes that are more than marginal.   There’s an obvious case for effectiveness and efficiency.

But why engagement? Is that necessary? People tout it as desirable. To be fair, most of the time they’re talking about design aesthetics, media embellishment, and even ‘gamification‘ instead of intrinsic engagement.  And I will maintain that there’s a lot more possible. There’s an open question, however: is it worth it?

My answer is yes. Tapping into intrinsic interest has several upsides that are worth the effort.  The good news is that you likely don’t need to achieve a situation where people are willing to pay money to attend your learning. Instead, you have the resources on hand to make this happen.

So, if you make your learning – and here in particular I mean your introductions, examples, and practice – engaging, you’re addressing motivation, anxiety, and potentially optimizing the learning experience.

  • If your introduction helps learners connect to their own desires to be an agent of good, you’re increasing the likelihood that they’ll persist and that the learning will ‘stick’.
  • If your examples are stories that illustrate situations the learner recognizes as important, and unpack the thinking that led to success, you’re increasing their comprehension and their knowledge.
  • Most importantly, if your practice tasks are situated in contexts that are meaningful to learners because they’re both real and important, you’ll be developing their skills in ways closest to how they’ll perform.  And if the challenge in the progression of tasks is right, you’ll also advance them at the optimal pace (and increase engagement).

Engagement is a fine-tuning, and learners’ opinions of the experience aren’t the most important thing.  Instead, the improvement in learning outcomes is the rationale.  It takes some understanding and practice to get systematically good at doing this. Further, you can learn to make learning engaging; it’s an acquired capability.

So, is your learning engaging intrinsic interest, and making the learning persist? It’s an approach that affects effectiveness in a big way and efficiency in a small way. And that’s the way you want to go, right? Engage!

15 August 2017

Innovative Work Spaces

Clark @ 8:09 AM

I recently read that Apple’s new office plan is receiving bad press. This surprises me, given that Apple usually has a handle on the latest ideas.  Yet, upon investigation, it’s clear that they appear not to be particularly innovative in their approach to work spaces.  Here’s why.

The report I saw says that Apple is intending to use an open office plan. This is where all the tables are out in the open, or at best there are cubicles. The perceived benefit is open communication.  And this is plausible when folks like Stan McChrystal in Team of Teams are arguing for ‘radical transparency’.  The thought is that everyone will know what’s going on, and it will streamline communication. Coupled with delegation, this should yield innovation, at the expense of some efficiency.

However, research hasn’t backed that up. Open office plans can even drive folks away, as Apple is hearing. When you want to engage with your colleagues and stay on top of what they’re doing, the open plan is good.  However, the lack of privacy means folks can’t focus when they’re doing heavy mental work. While it sounds good in theory, it doesn’t work in practice.

When I was keynoting at the Learning@Work conference in Sydney back in 2015, a major topic was flexible work spaces. The concept here is to have a mix of office types: some open plan, some private offices, some small conference rooms. The view is that you take the type of space you need when you need it. Nothing’s fixed, so you travel with your laptop from place to place, but you can have the type of environment you need: time alone, time with colleagues, time collaborating. And this was being touted on both principled and practical grounds, with positive outcomes.

(Note that in McChrystal’s view, you need to break down silos. He would strategically embed a person from one area with others, and have representatives engaged across all activities.  So even in an open space you’d want people mixed up, but most folks still tend to seat groups together, which undermines the principle.)

As Jay Cross let us know in his landmark Informal Learning, even the design of workspaces can facilitate innovation. Jay cited practices like having informal spaces to converse, and putting the mail room and coffee room together to facilitate casual conversation.  Where you work matters as well as how, and open plan has upsides but also downsides that can be mitigated.

Innovation is about culture, practices, beliefs, and technology.  Putting it all together in a practical approach takes time and knowledge: figuring out where to start, and how to scale.  As Sutton and Rao tell us, it’s a ground war, but the benefits are not just desirable but increasingly necessary. Innovation is the key to transcending survival to thrival. Are you ready to (Qu)innovate?

9 August 2017

Simulations versus games

Clark @ 8:04 AM

At the recent Realities 360 conference, I saw some confusion about the difference between a simulation and a game. And while I made some important distinctions in my book on the topic, I realize it’s time to revisit them. So here I’m talking about some conceptual discriminations that I think are important.

Simulations

As I’ve mentioned, simulations are models of the world. They capture certain relationships we believe to be true about the world. (For that matter, they can represent worlds that aren’t real, as is certainly the case in games.) They don’t (can’t) capture all of the world, only a segment we feel is important to model. We tend to validate these models by testing whether they behave like the real world.  You can also think of a simulation as being in a ‘state’ (a set of values of its variables), and moving to other states according to rules.  Frequently, we include some variability in these models, just as is reflected in the real world. Similarly, these simulations can model considerable complexity.

Such simulations are built out of sets of variables that represent the state of the world, and rules that represent the relationships present. There are several ways things change. Some variables are changed by rules that act on the basis of time (while countdown timer = on, countdown = countdown - 1). Variables can also interact (if countdown = 0: if 1 g adamantium and 1 g dilithium, temperature = temperature + 1000, adamantium = adamantium - 1 g, dilithium = dilithium - 1 g).  Other changes are based upon learner actions (if learner flips the switch, countdown timer = on).
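To make that concrete, here’s a minimal sketch in Python of the countdown example above: state as a set of variables, plus time-based, interaction, and learner-action rules. The names and numbers are just my illustration for this post, not any real system.

```python
# A minimal sketch of the countdown simulation described above.
# State is just a set of variables; rules move it from state to state.
state = {
    "countdown_on": False,
    "countdown": 10,
    "adamantium_g": 5,
    "dilithium_g": 5,
    "temperature": 20,
}

def tick(state):
    """One time step: apply the time-based and interaction rules."""
    if state["countdown_on"] and state["countdown"] > 0:
        state["countdown"] -= 1                      # time-based rule
    if state["countdown"] == 0:
        if state["adamantium_g"] >= 1 and state["dilithium_g"] >= 1:
            state["temperature"] += 1000             # interaction rule
            state["adamantium_g"] -= 1
            state["dilithium_g"] -= 1

def flip_switch(state):
    """A learner action: directly changes the state."""
    state["countdown_on"] = True

flip_switch(state)
for _ in range(12):
    tick(state)
print(state)  # the model has evolved according to the rules
```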

Note that you may already have a simulation. In business, there may already exist a model of particular processes, particularly for proprietary systems.

From a learning point of view, simulations allow motivated and self-effective learners to explore the relationships they need to understand. However, we can’t always assume motivated and self-effective learners. So we need some additional work to turn a simulation into a learning experience.

Scenarios

One effective way to leverage simulations is to choose an initial state (or ‘space of states’: a start point with some variation), and a state (or set of states) that constitutes ‘win’. We also typically have states that represent ‘fail’.  We choose those states so that the learner can’t get to ‘win’ without understanding the necessary relationships.   The learner can try and fail until they discover those relationships.  These start and goal states serve as scaffolding for the learning process.  I call these simulations with start and stop states ‘scenarios’.
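Building on the sketch above, wrapping the model in a scenario is mostly a matter of adding start states and win/fail tests. Again, this is hypothetical; the particular conditions here are mine, chosen for illustration, not a recipe.

```python
import copy
import random

# A hypothetical 'space of states': the start point with some variation.
START_STATES = [
    {"countdown_on": False, "countdown": 10, "adamantium_g": 5,
     "dilithium_g": 5, "temperature": 20},
    {"countdown_on": False, "countdown": 8, "adamantium_g": 3,
     "dilithium_g": 4, "temperature": 25},
]

def is_win(state):
    # Chosen so the learner can't get here without understanding the rules.
    return state["temperature"] >= 1000

def is_fail(state):
    # Illustrative: out of fuel without reaching the goal temperature.
    return state["dilithium_g"] == 0 and not is_win(state)

def run_scenario(actions):
    """Apply a learner's sequence of actions to a random start state."""
    state = copy.deepcopy(random.choice(START_STATES))
    for act in actions:
        act(state)   # e.g., flip_switch from the sketch above, or a no-op 'wait'
        tick(state)
        if is_win(state):
            return "win"
        if is_fail(state):
            return "fail"
    return "incomplete"  # the learner can try again until it clicks
```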

This is somewhat complicated by the existence of ‘branching scenarios’. These have initial and goal states and learner actions, but they are not represented by variables and rules. The relationships in branching scenarios are implicit in the links instead of explicit in the variables and rules. And they’re easier to build!  Still, they don’t have the variability that is typically possible in a simulation. There’s an inflection point (qualitative, not quantitative) where the complexity of controlling the branches makes it more sensible to model the world as a simulation rather than track all the branches.
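For contrast, here’s an equally hypothetical branching scenario: the same sort of experience, but the relationships live in the links between nodes rather than in variables and rules. Node names and text are invented.

```python
# A branching scenario: the relationships are implicit in the links.
branching = {
    "start": {
        "text": "The countdown is armed. What do you do?",
        "choices": {"flip the switch": "running", "walk away": "fail"},
    },
    "running": {
        "text": "The timer hits zero. Load the reactants?",
        "choices": {"load both elements": "win", "load nothing": "fail"},
    },
    "win": {"text": "The reactor roars to life. Well done!", "choices": {}},
    "fail": {"text": "Nothing happens. Try again.", "choices": {}},
}

def play(node="start"):
    """Walk the links until reaching a node with no choices."""
    while branching[node]["choices"]:
        print(branching[node]["text"])
        for option in branching[node]["choices"]:
            print(" -", option)
        pick = input("> ")
        node = branching[node]["choices"].get(pick, node)  # ignore typos
    print(branching[node]["text"])
```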

Games

The problem here is that too often people will build a simulation and call it a game. I once reviewed a journal submission about a ‘game’ where the authors admitted that players thought it was boring. Sorry, then it’s not a game!  The difference between a simulation and a game is a subjective experience of engagement on the part of the player.

So how do you get from a simulation to a game?  It’s about tuning: adjusting the frequency of events, and their consequences, so that the challenge falls into the zone between boring and frustrating. Now, for learning, you can’t change the fundamental relationships you’re modeling, but you can adjust things like how quickly events occur, and the importance of being correct. And it takes testing and refinement. Will Wright, a game designers’ game designer, once proposed that tuning is nine-tenths of the work!  Now, that’s for a commercial game, but it gives you an idea.
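As a sketch of what tuning touches (and deliberately not the model itself), the knobs might look something like this. Every value here is a placeholder you’d adjust through play-testing, not a recommendation.

```python
# Tuning knobs live outside the model: pacing and stakes, not relationships.
TUNING = {
    "seconds_per_tick": 1.5,   # how quickly events occur
    "error_penalty": 100,      # the importance of being correct
    "hint_after_fails": 3,     # when scaffolding kicks in
}

def score_attempt(base_score, errors, tuning=TUNING):
    """Harsher penalties raise challenge; soften them if testers get frustrated."""
    return base_score - errors * tuning["error_penalty"]
```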

You can also use gamification (scores to add competition), but, please, only after you first expend the effort to make the game intrinsically interesting. Tap into why they should care about the experience, and bake that in.

Is it worth it to actually expend effort to make the experience engaging?  I believe the answer is yes. Perhaps not to the level of a game people will pay $60 to play, but some effort to manifest the innate meaningfulness is worth it. Games minimize the time to competency because they optimize the challenge.  You will have sticks as well as carrots, so you don’t need million-dollar budgets, but do tune until your learners have an engaging and effective experience.

So, does this help? What questions do you still have?

8 August 2017

L&D Tuneup

Clark @ 8:00 AM

In my youth, owing to my father’s tutelage and my desire for wheels, I learned how to work on cars. While not the master he was, I could rebuild a carburetor, gap points and spark plugs, and adjust the timing. In short, I could do a tuneup on the car.  And I think that’s what Learning & Development (L&D) needs: a tuneup.

Cars have changed, and my mechanic skills are no longer relevant. What used to be done mechanically – adjusting to altitude, adapting through the stages of the engine warming up, and handling acceleration requests – are now done electronically. The air-fuel mixture and the spark advance are under the control of the fuel injection and electronic ignition systems (respectively) now.  With numerous sensors, we can optimize fuel efficiency and performance.

And that’s the thing: L&D is too often still operating on the old, mechanical model. We hold the view of a hierarchical model where a few plan, prepare, and train folks to execute. We stick with face-to-face training or maybe elearning, putting everything in the head, when science shows that we often function better with information in the world, or even in other people’s heads!  And this old approach no longer works.

As has been noted broadly and frequently, the world is changing faster, and the pressure is on organizations to adapt more quickly. With widely disparate paths pointing in the same direction, it’s easy to see that there’s something fundamental going on. In short, we need to move, as Jon Husband puts it, from hierarchy to wirearchy.  We need agility: experimentation, review, and reflection, iteratively and collectively. And in that move, there’s a central role for L&D.

The move may not be imminent, but it is unavoidable. Even staid and secure organizations are facing the consequences of increasing rates of change and new technology innovations. AI, networks, 3D printing: all have ramifications. Even traditional government agencies are facing change. Yet this is all about people and learning.

As Harold Jarche tells us, work is learning and learning is the work. That means learning is moving from the classroom to the workplace and on the go. L&D needs a modern workplace learning approach, as Jane Hart lets us know. This new model is one where L&D moves from fount of knowledge to learning facilitator (or advisor, as she terms it).  People need to develop communication and collaboration skills, and those won’t come from classes, but from coaching and more.

And, to return to the metaphor, I view this as an L&D tuneup. It’s not about throwing out what you’re doing (unless that’s the fastest path ;), but instead augmenting it. Shifts don’t happen overnight, but instead it means taking on some internal changes, and then working that outwards with stakeholders, reengineering the organizational relationships. It’s a journey, not an event. But like with a tuneup, it’s about figuring out what your new model should be, and then adjusting until you achieve it. It’s over a more extended period of time, but it’s still a tuning operation. You have to work through the stages to a new revolutionary way of working. So, are you ready for a tuneup?

3 August 2017

My policies

Clark @ 8:04 AM

Like most of you, I get a lot of requests for a lot of things. Too many, really. So I’ve had to put in policies to be able to cope.  I like to provide a response (I feel it’s important to communicate the underlying rationale), so I have stock blurbs that I cut and paste (with an occasional edit for a specific context).  I don’t want to repeat them here, but instead I want to be clear about why certain types of actions are going to get certain types of response. Consider this a public service announcement.

So, I get a lot of requests to link on LinkedIn, and I’m happy to, with a caveat. First, you should have some clear relationship to learning technology. Or be willing to explain why you want to link. I use LinkedIn for business connections, so I’m linked to lots of people I don’t even know, but they’re in our field.

I ask those not in learntech why they want to link. Some do respond, and often have a real reason (shifting to this field, or a title that masks a real role), and I’m glad I asked.  Other times it’s the ‘Nigerian Prince’ or equivalent, and those get reported. Recently, it’s new folks who claim they just want to connect to someone with experience. Er, no.  Read this blog, instead. I also have a special message for those in learntech with biz dev/sales/etc. roles: I’ll link, but if they pitch me, they’ll get summarily unlinked (and I do).

And I likely won’t link to you on Facebook.  That’s personal. Friends and family. Try LinkedIn instead.

I get lots of emails, particularly from elearning or tech development firms, offering to have a conversation about their services.  I’m sorry, but don’t you realize, with all the time I’ve been in the field, that I have ‘goto’ partners? And I don’t do biz dev, developing contracts and outsourcing production. As Donald H Taylor so aptly puts it, you haven’t established a sufficient relationship to justify offering me anything.

Then, I get email with announcements of new moves and the like.  Apparently with an expectation that I’ll blog it.  WTH?  Somehow, people think this blog is for PR.  No: as it says quite clearly at the top of the page, this is for my learnings about learning.  I let them know that I pay attention to what comes through my social media channels, not what comes unsolicited.  I also ask what list they got my name from, so I can squelch it. And sometimes they have!

I used to get a lot of offers to either receive or write blog posts. (This had died down, but has resurfaced recently.)  For marketing links, obviously. I don’t want your posts; see the above: my learnings!  And I won’t write for you for free. Hey, that’s a service.  See below.

And I get calls from folks offering me a place at their event.  They’re pretty easy to detect: they ask whether I’d like access to a specific audience…  I have learned to quickly ask if it’s pay to play.  It always is, and I have to explain that that’s not how I market myself.  Maybe I’m wrong, but I see that working for big firms with trained sales folks, not me. I already have my marketing channels. And I speak and write as a service!

I similarly get a lot of emails that let me know about a new product and invite me to view it and give my opinion.  NO!  First, I could spend my whole day on these. Second, and more importantly, my opinion is valuable!  It’s the basis of 35+ years of work at the cutting edge of learning and technology. And you want it for free?  As if.  Let’s talk about a real evaluation, as an engagement.  I’ve done that, and can do it for you.

As I’ve explained many times, my principles are simple: I talk ideas for free; I help someone personally for drinks/dinner; if someone’s making a quid, I get a cut.  And everyone seems fine with that, once I explain it. I occasionally get taken advantage of, but I try to make it only once for each way (fool me…).   But the number of people who seem to think that I should speak/write/consult for free continues to boggle my mind.  Exposure?  I think you’re overvaluing your platform.

Look, I think there’s sufficient evidence that I’m very good at what I do. If you want to refine your learning design processes, take your L&D strategy into the 21st century, and generally align what you do with how we think, work, and learn, let’s talk.  Let’s see if there’s a viable benefit to you that’s a fair return for me. Lots of folks have found that to be the case.  I’ll even offer the first conversation free, but let’s make sure there’s a clear two-way relationship on the table and explore it.  Fair enough?

2 August 2017

Ethics and AI

Clark @ 8:03 AM

I had the opportunity to attend a special event pondering the ethical issues that surround Artificial Intelligence (AI).  Hosted by the Institute for the Future, we gathered in groups beforehand to generate questions that were used in a subsequent session. Vint Cerf, co-developer of the TCP/IP protocol that enabled the internet, currently at Google, responded to the questions.  Quite the heady experience!

The questions were quite varied. Our group looked at Values and Responsibilities. I asked whether that was for the developers or the AI itself. Our conclusion was that it had to be the developers first. We also considered what else has been done in technology ethics (e.g. diseases, nuclear weapons), and what is unique to AI.  A respondent mentioned an EU initiative to register all internet AIs; I didn’t have the chance to ask about policing and consequences.  Those strike me as concomitant issues!

One of the unique areas was ‘agency’, the ability for an AI to act.  This led to a discussion of the need for oversight of AI decisions. However, I suggested that human overseers would fatigue if the AI was mostly right. So we pondered: could an AI monitor another AI?  I also thought that there’s evidence that consciousness is emergent, and so we’d need to keep the AIs from communicating. It was pointed out that the genie is already out of the bottle, with chatbots online. Vint suggested that our brain is layered pattern-matchers, so maybe consciousness is just the topmost layer.

One recourse is transparency, but it needs to be rigorous. Blockchain’s distributed transparency could be a model. Of course, one of the problems is that we can’t even explain our own cognition in all instances (we make stories that don’t always correlate with the evidence of what we do). And with machine learning, we may be making stories about what the system is using to analyze behaviors and make decisions, but it may not correlate.

Similarly, machine learning is very dependent on the training set. If we don’t pick the right inputs, we might miss factors that would be important in making decisions.  Even if we have the right inputs, if we don’t have a good training set of good and bad outcomes, we get biased decisions. It’s been said that what people are good at is crossing silos, whereas machines tend to be good in narrow domains. This is another argument for oversight.

The notion of agency also brought up the issue of decisions.  Vint inquired why we were so lazy in making decisions. He argued that we’re making systems we no longer understand!  I didn’t get the chance to answer that decision-making is cognitively taxing; as a consequence, we often work to avoid it.  Moreover, some of us are interested in X, and so are willing to invest the effort to learn it, while others are interested in Y. So it may not be reasonable to expect everyone to invest in every decision.  Also, our lives keep getting more complex: when I grew up, you just had phone and TV; now you need to worry about internet, and cable, and mobile carriers, and smart homes, and…  So it’s not hard to see why we want to abdicate responsibility when we can!  But when can we, and when do we need to be careful?

Of course, one of the issues is AI taking jobs.  Cerf stated that innovation takes jobs, and generates jobs as well. However, the problem is that those who lose the jobs aren’t necessarily capable of taking the new ones.  Which brought up an increasing need for learning to learn as the key ability for people. Which I support, of course.

The overall problem is that there isn’t central agreement on what ethics a system should embody, even if we could build it in.  We currently have different cultures with different values. Could we find agreement when some might have different views of what, say, acceptable surveillance would be? Is there some core set of values required for a society to ‘get along’?  Even that might vary by society.

At the end, there were two takeaways.  For one, the question is whether AI can help us help ourselves!  And the recommendation is that we should continue to reflect and share our thoughts. This is my contribution.

1 August 2017

Realities 360 Reflections

Clark @ 8:08 AM

So, one of the two things I did last week was attend the eLearning Guild‘s Realities 360 conference.  Ostensibly about Augmented Reality (AR) and Virtual Reality (VR), it ended up being much more about VR. Which isn’t a bad thing; it’s probably as much a comment on the state of the industry as anything.  However, there were some interesting learnings for me, and I thought I’d share them.

First, I had a very strong visceral exposure to VR. While I’ve played with Cardboard on the iPhone (you can find a collection of resources for Cardboard here), it’s not quite the same as a full VR experience.  The conference provided a chance to try out apps for the HTC Vive, Sony PlayStation VR, and the Oculus.  On the Vive, I tried a game where you shot arrows at attackers.  It was quite fun, but mostly developed some motor skills. On the Oculus, I flew an X-Wing fighter through an asteroid field, escorting a ship and shooting enemy TIE fighters.  Again, fun, but mostly about training my motor skills in this environment.

It was another one, on the Vive I think, that gave me a real experience.  In it, you’re floating around the International Space Station. It was very cool to see the station and experience the immersion of 3D, but it was also very uncomfortable.  Partly because I was trying to fly around (instead of using the handholds), my viewpoint would pass through the bulkhead doors, and the positioning gave the visual cues that my chest was going through the metal edge.  This was extremely disturbing to me!  As I couldn’t control it well, I was doing this continually, and I didn’t like it. Partly it was the control, but it was also the total immersion. And that was impressive!

There are empirical results that demonstrate better learning outcomes for VR, and certainly I can see that, particularly for tasks that are inherently 3D. There’s also another key result, as was highlighted in the first keynote: that VR is an ‘empathy machine’. There have been uses for things like understanding the world according to a schizophrenic, and a credit card call center helping employees understand the lives of card users.

On principle, such environments should support near transfer when designed to closely mimic the actual performance environment. (Think: flight or medical simulators.)  And the tools are getting better. There’s an app that lets you take photos of a place to put into Cardboard, and game engines (Unity and/or Unreal) will now let you import AutoCAD models.  There was also a special camera that could sense the distances in a space and automatically generate a model of it.  The point being that it’s getting easier and easier to generate VR environments.

That, I think, is what’s holding AR back.  You can fairly easily use it for marker- or location-based information, but actually annotating the world visually is still challenging.  I still think AR is of more interest (maybe just to me), because I see it eventually creating the possibility of seeing the causes and factors behind the world, allowing us to understand it better.  I could argue that VR is just extending sims from flat screen to surround, but then I think about the space station, and… I’m still pondering that. Is it revolutionary or just evolutionary?

One session talked about trying to help folks figure out when VR and AR make sense, and this intrigued me. It reminded me that I had tried to characterize the affordances of virtual worlds, and I reckon it’s time to take a stab at doing the same for VR and AR.  I believed then that I was able to predict when virtual worlds would continue to find value, and I think results have borne that out.  So the intent is to try to get on top of when VR and AR make sense.  Stay tuned!

27 July 2017

Barry Downes #Realities360 Keynote Mindmap

Clark @ 9:59 AM

Barry Downes talked about the future of the VR market, with an interesting exploration of the Immersive platform. Taking us through the Apollo 11 product, he showed what went into it and its emotional impact. He showed a video that talked (somewhat simplistically) about how VR environments could be used for learning. (There is great potential, but it’s not about content.) He finished with an interesting quote about how VR would be able to incorporate any further media. A second part of the quote said: “Kids will think it’s funny [we] used to stare at glowing rectangles hoping to suspend disbelief.”

VR Keynote

26 July 2017

Maxwell Planck #Realities360 Keynote Mindmap

Clark @ 9:59 AM

Maxwell Planck opened the eLearning Guild’s Realities 360 conference with a thoughtful and thought-provoking talk on VR. Reflecting on his experience in the industry, he described the transition from storytelling to where he thinks we should go: social adventure. (I want to call it “adventure together”. :). A nice start to the event.

Maxwell Planck Keynote Mindmap

25 July 2017

What is the Future of Work?

Clark @ 8:07 AM

Just what is the Future of Work about? Is it about new technology, or is it about how we work with people?  We’re seeing amazing new technologies: collaboration platforms, analytics, and deep learning. We’re also hearing about new work practices such as teams, working (or reflecting) out loud, and more.  Which is it? And/or how do they relate?

It’s very clear technology is changing the way we work. We now work digitally, communicating and collaborating.  But there are more fundamental transitions happening. We’re integrating data across silos, and mining that data for new insights. We can consolidate platforms into single digital environments, facilitating the work.  And we’re getting smart systems that do things our brains quite literally can’t, whether complex calculations or reliable rote execution at scale. Plus, we have technology-augmented design and prototyping tools that are shortening the time to develop and test ideas. It’s a whole new world.

Similarly, we’re seeing a growing understanding of the work practices that lead to new outcomes. We’re finding that people work better when we create environments that are psychologically safe, when we tap into diversity, when we are open to new ideas, and when we have time for reflection. We find that working in teams, sharing and annotating our work, and developing learning and personal knowledge mastery skills all contribute. And we even have new practices, such as agile and design thinking, that bring us closer to the actual problem.  In short, we’re aligning practices more closely with how we think, work, and learn.

Thus, either could be seen as ‘the Future of Work’.  Which is it?  Is there a reconciliation?  There’s a useful way to think about it that answers the question.  What if we do either without the other?

If we use the new technologies in old ways, we’ll get incremental improvements.  Command and control, silos, and transaction-based management can be supported, and even improved, but will still limit the possibilities. We can track things more closely.  But we’re not going to be fundamentally transformative.

On the other hand, if we change the work practices, creating an environment where trust allows both safety and accountability, we can get improvements whether we use technology or not. People have the capability to work together using old technology.  You won’t get the benefits of some of the improvements, but you’ll get a fundamentally different level of engagement and outcomes than with an old approach.

Together, of course, is where we really want to be. Technology can have a transformative amplification on those practices. Together, as they say, the whole is greater than the sum of the parts.

I’ve argued that using new technologies like virtual reality and adaptive learning only makes sense after you first implement good design (otherwise you’re putting lipstick on a pig, as the saying goes).  The same is true here. Implementing radical new technologies on top of old practices that don’t reflect what we know about people is a recipe for stagnation.  Thus, to me, the Future of Work starts with practices that align with how we think, work, and learn, augmented with technology, not the other way around.  Does that make sense to you?
