How to Build a Self-Conscious Machine

by Hugh C. Howey

The coolest thing in the universe

Artificial intelligence is here now. In laboratories all around the world, little AIs are springing to life. Some play chess better than any human ever has. Some are learning to drive a million cars a billion miles while saving more lives than most doctors or EMTs will over their entire careers. Some will make sure your dishes are dry and spot-free, or that your laundry is properly fluffed and without wrinkle. Countless numbers of these intelligences are being built and programmed; they are only going to get smarter and more pervasive; they’re going to be better than us, but they’ll never be just like us. And that’s a good thing.

What separates us from all the other life forms on earth is the degree to which we are self-aware. Most animals are conscious. Many are even self-conscious. But humans are something I like to call hyper-conscious. There’s an amplifier in our brains wired into our consciousness.

The origin of consciousness

There isn’t a single day that a human being becomes self-conscious. Human consciousness comes on like the old lights that used to hang in school gyms when I was a kid. You flip a switch, and nothing happens at first. There’s a buzz, a dim glow from a bulb here or there, a row that flickers on, shakily at first, and then more lights, a rising hum, before all the great hanging silver cones finally get in on the act and rise and rise in intensity to their full peak a half hour or more later.

We switch on like that. We emerge from the womb unaware of ourselves. The world very likely appears upside down to us for the first few hours of our lives, until our brains reorient the inverted image created by the lenses of our eyes. It takes a long while before our hands are seen as extensions of ourselves. Even longer before we realize that we have brains and thoughts separate from other people’s brains and thoughts. Longer still to cope with the disagreements and separate needs of those other people’s brains and thoughts. And for many of us (possibly most), any sort of true self-knowledge and self-enlightenment never happens. Because we rarely pause to reflect on such trivialities.

The field of AI is full of people working to replicate or simulate various features of our intelligence. One thing they are certain to replicate is the gradual way that our consciousness turns on. As I write this, the gymnasium is buzzing. A light in the distance, over by the far bleachers, is humming. Others are flickering. Still more are turning on.

The Holy GR-AI-L

The holy grail of AI research was established before AI research ever even began. One of the pioneers of computing, Alan Turing, described an ultimate test for “thinking” machines: Could they pass as human? Ever since, humanity has both dreamed of—and had collective nightmares about—a future where machines are more human than humans. Not smarter than humans—which these intelligences already are in many ways. But more neurotic, violent, warlike, obsessed, devious, creative, passionate, amorous, and so on.

The genre of science fiction is stuffed to the gills with such tales. And yet, even as these intelligences outpace human beings in almost every intellectual arena in which they’re entered, they seem no closer to being like us, much less more like us. This is a good thing, but not for the reasons that films such as The Terminator and The Matrix suggest. The reason we haven’t made self-conscious machines is primarily because we are in denial about what makes us self-conscious. The things that make us self-conscious aren’t as flattering as the delusion of ego or the illusion of self-permanence. Self-consciousness isn’t even very useful.

Perhaps the best thing to come from AI research isn’t an understanding of computers, but rather an understanding of ourselves. The challenges we face in building machines that think highlight the various little miracles of our own biochemical goo. They also highlight our deficiencies. To replicate ourselves, we have to first embrace both the miracles and the foibles.

What follows is a very brief guide on how to build a self-conscious machine, and why no one has done so to date (thank goodness).

The blueprint

The blueprint for a self-conscious machine is simple. You need:

  1. A physical body or apparatus that responds to outside stimuli.
  2. A language engine.
  3. A separate part of the machine that observes the rest of its body and makes up stories about what it’s doing—stories that are usually wrong. This third component is the unusual one, and I don’t know why anyone would build it except to reproduce evolution’s botched mess.

Again: (1) A body that responds to stimuli; (2) a method of communication; and (3) an algorithm that attempts (with little success) to deduce the reasons and motivations for these communications.

The critical ingredient here is that the algorithm in (3) must usually be wrong. If this blueprint is confusing to you, you aren’t alone. The reason no one has built a self-conscious machine is that most people have the wrong idea about what consciousness is and how it arose in humans.
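
To make the shape of this blueprint concrete, here is a deliberately toy sketch in Python. Nothing in it comes from a real robotics stack; the class names, the reflex table, and the canned stories are invented for illustration. The only point is the wiring: the body acts on stimuli it never explains, the narrator invents explanations without ever seeing those stimuli, and the language engine reports the (mostly wrong) story as though it were the truth.

    class Body:
        """Component 1: a physical apparatus that responds to outside stimuli."""
        REFLEXES = {
            "obstacle": "swerve",
            "low_battery": "seek_charger",
            "clear_road": "cruise",
        }

        def react(self, stimulus):
            # The body simply acts; no reasons are recorded anywhere.
            return self.REFLEXES.get(stimulus, "idle")


    class Narrator:
        """Component 3: watches the body's actions and invents explanations.
        Crucially, it never sees the stimulus that caused the action, so its
        stories are usually wrong."""
        STORIES = {
            "swerve": "I must enjoy a sporty driving style.",
            "seek_charger": "I am prudently preparing for a long trip.",
            "cruise": "I picked this road because it is scenic.",
            "idle": "I am pausing to reflect.",
        }

        def explain(self, observed_action):
            return self.STORIES[observed_action]


    def language_engine(story):
        """Component 2: the single serial output channel."""
        print(story)


    body, narrator = Body(), Narrator()
    for stimulus in ["clear_road", "obstacle", "low_battery"]:
        action = body.react(stimulus)      # behaviour happens first
        story = narrator.explain(action)   # the explanation is confabulated after the fact
        language_engine(story)             # and narrated as though it were the cause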

What makes us human

To understand human consciousness, one needs to dive deep into the study of Theory of Mind. Theory of Mind is the attempt by one brain to ascertain the contents of another brain. As our behaviours and thoughts grew more and more complex, it became crucial for each member of the tribe to have an idea of what the other members were thinking and what actions they might perform. Theory of Mind is intellectual espionage, and we are quite good at it—but with critical limitations that we will get into later.

Sue guessing what Juan is thinking is known as First Order Theory of Mind. It gets more complex. Sue might also be curious about what Juan thinks of her. This is Second Order Theory of Mind, and it is the root of most of our neuroses and perseverative thinking. Third Order Theory of Mind would be for Sue to wonder what Juan thinks Josette thinks about Tom. This starts to sound confusing, but it is what we preoccupy our minds with more than any other conscious-level sort of thinking. We hardly ever stop doing it. We might call it gossip, or socializing, but our brains consider this their main duty—their primary function. There is speculation that Theory of Mind, and not tool use, is the reason for the relative size of our brains in the first place.
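
For readers who like to see the structure spelled out, the orders are just nesting: a thought about a thought about a thought. The Python sketch below is nothing more than the three examples above rendered as a recursive data structure; the labels in the comments follow the counting used above rather than any formal definition from the psychology literature.

    from dataclasses import dataclass
    from typing import Union


    @dataclass
    class Thought:
        """One mind modelling either a plain topic or another mind's thought."""
        thinker: str
        content: Union[str, "Thought"]


    def render(t: Thought) -> str:
        """Turn a nested Thought into a readable sentence."""
        if isinstance(t.content, str):
            return f"{t.thinker} thinks about {t.content}"
        return f"{t.thinker} thinks that {render(t.content)}"


    # First Order: Sue guessing what Juan is thinking.
    first = Thought("Sue", Thought("Juan", "where the elk have gone"))

    # Second Order: Sue wondering what Juan thinks of her.
    second = Thought("Sue", Thought("Juan", "Sue"))

    # Third Order: Sue wondering what Juan thinks Josette thinks about Tom.
    third = Thought("Sue", Thought("Juan", Thought("Josette", "Tom")))

    for label, thought in [("First", first), ("Second", second), ("Third", third)]:
        print(f"{label} Order: {render(thought)}")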

In a world of humans jostling about, a good use of processing power is to compute where those people might be next, and what they will do when they get there.

If this trait is so useful, then why aren’t all animals self-conscious? They very well might be. There’s plenty of research to suggest that many animals display varying degrees of self-consciousness. The development of self-conscious AIs will follow this model closely, now that robots have become our domesticated pals. Some of them are already trying to guess what we’re thinking and what we might do next.

Further development of these abilities will not lead to self-consciousness, however.

The missing piece

The human brain is not a single, holistic entity. It is a collection of thousands of disparate modules that only barely and rarely interconnect. We like to think of the brain as a computer chip.

That’s a fun analogy, but it’s incredibly misleading. Computers are well-engineered devices created with a unified purpose. All the various bits were designed around the same time for those same purposes, and they were designed to work harmoniously with one another. None of this in any way resembles the human mind. Not even close. Some functions of the brain were built hundreds of millions of years ago, others were built millions of years ago.

Each module in the brain is like a separate building in a congested town. Some of these modules don’t even talk to other modules, and for good reason. The blood-pumping and breath-reflex buildings should be left to their own devices. The other modules are prone to arguing, bickering, disagreeing, subverting one another, spasming uncontrollably, staging coups, freaking the fuck out, and all sorts of other hysterics.

Seasickness is a case of our brain modules not communicating with one another (or doing their own thing). When the visual cues of motion from our environment do not match the signals from our inner ears (where we sense balance), our brains assume that we’ve been poisoned. It’s a reasonable assumption for creatures that climb through trees eating all the brightly-coloured things. Toxins disrupt our brains’ processing, leading to misfires and bad data. We did not evolve to go to sea, so when motion does not match what we are seeing, our bodies think we’ve lost our ability to balance on two legs. The result is that we empty our stomachs (getting rid of the poison) and we lie down and feel zero desire to move about (preventing us from plummeting to our deaths from whatever high limb we might be swinging from).

It doesn’t matter that we know this is happening in a different module of our brains, a higher-level processing module. We can know without a doubt that we haven’t been poisoned, but this module is not going to easily win out over the seasickness module.

Critical to keep in mind here is that these modules are highly variable across the population, and our unique mix of modules creates the personalities that we associate with our singular selves. It means we aren’t all alike.

The perfectly engineered desktop-computer analogy fails spectacularly, and it leads AI researchers down erroneous paths if they want to mimic human behaviour. Fallibility and the disjointed nature of processing systems will have to be built in by design. We will have to purposefully break systems, much as nature haphazardly cobbled them together.

The most important mistake

With the concept of Theory of Mind firmly in our thoughts, and the knowledge that brain modules are both fallible and disconnected, we are primed to understand human consciousness, how it arose, and what it’s (not) for. This may surprise those who are used to hearing that we don’t understand human consciousness and have made no progress in that arena. This isn’t true at all. What we have made no progress in doing is understanding what human consciousness is for.

Thousands of years of failure in this regard point to the simple truth: Human consciousness is not for anything at all. It serves no purpose. It has no evolutionary benefit. It arises at the union of two modules that are both so supremely useful that we can’t survive without either, and so we tolerate the annoying and detrimental consciousness that arises as a result.

One of those modules is Theory of Mind. It has already been mentioned that Theory of Mind consumes more brain processing power than any other higher-level neurological activity. The problem with this module is that it isn’t selective with its powers; it’s not even clear that such selectivity would be possible. That means our Theory of Mind abilities get turned onto ourselves just as often as (or far more often than) they are wielded on others.

It is not possible to turn off our Theory of Mind modules. And so this Theory of Mind module concocts stories about our own behaviours, asking why we just did that and what we will do next. The questions never end. And the answers are almost always wrong. Allow that to sink in for a moment. The explanations we tell ourselves about our own behaviours are almost always wrong.

This is the weird thing about our Theory of Mind superpowers. They’re pretty good when we employ them on others. They fail spectacularly when we turn them on ourselves. Our guesses about others’ motivations are far more accurate than the guesses we make about our own. In a sense, we have developed a magic force-field to protect us from the alien mind-reading ray gun that we shoot others (and ourselves) with. This force field is our egos, and it gives us an inflated opinion of ourselves, a higher-minded rationale for our actions, and an illusion of sanity that we rarely extend to our peers.

The incorrect explanations we come up with about our own behaviours are meant to protect ourselves. They are often wildly creative, or they are absurdly simplistic.

Researchers have long studied this mismatch of behaviours and the lies we tell ourselves about our behaviours. Subjects in fMRI machines have revealed a peculiarity. Watching their brains in real time, we can see that decisions are made before higher level parts of the brain are aware of the decisions. That is, researchers can tell which button a test subject will press before those subjects claim to have made the choice. The action comes before the narrative. We move; we observe our actions; we tell ourselves stories about why we do things. The very useful Theory of Mind tool—which we can’t shut off—continues to run and make up things about our own actions.

More pronounced examples of this come from people with various neurological impairments. Test subjects with vision-processing problems, or with the hemispheres of their brains severed from one another, can be shown a different image to each eye. Disconnected modules take in these conflicting inputs and create fascinating stories. One eye might see a rake and the other a pile of snow. The rake eye is effectively blind, with the test subject unable to say what it is seeing if asked. But the module for processing the image is still active, so when asked what tool is needed to handle the image that is seen (the snow), the person will answer “a rake.” That’s not the interesting bit. What’s interesting is that the person will go through amazing contortions to justify this answer, even after the entire process is explained to them.

This is how we would have to build a self-conscious machine. These machines (probably) wouldn’t end the world, but they would be just as goofy and nonsensical as nature has made us.

The blueprint revisited

How would we go about actually assembling our self-conscious machine?

Applying what we know about Theory of Mind and disconnected modules, the first thing we would build is an awareness program. Using off-the-shelf technology, we decide that our first machine will look and act very much like a self-driving car. Once its basic senses are in place, we use machine-learning algorithms to build a repertoire of behaviours for our AI car to learn. Unlike the direction most autonomous-vehicle research is heading—where engineers want to teach their car how to do certain things safely—our team will instead teach an array of sensors all over a city grid to watch other cars and guess what they’re doing.

If we were building a person-shaped robot, we would do the same by observing people and building a vocabulary for the various actions that humans seem to perform. Sensors would note objects of awareness by scanning eyes (which is what humans and dogs do). They would learn our moods by our facial expressions and body posture (which current systems are already able to do). This library and array of sensors would form our Theory of Mind module. Its purpose is simply to tell stories about the actions of others. The magic would happen when we turn it on itself.

Our library starts simply with First Order concepts, but then builds up to Second and Third Order ideas. Before we get that far, we need to make our machine self-aware. And so, we teach it to drive itself around town for its owner. We then ask the AI car to observe its own behaviours and come up with guesses as to what it’s doing. The key here is to not give it perfect awareness. Don’t let it have access to the GPS unit. Don’t let it know what the owner’s cell phone knows. To mimic human behaviour, ignorance is key. As is the surety of initial guesses, or what we might call biases.

Early assumptions are given a higher weight in our algorithms than theories that come later and are backed by more data (overcoming initial biases requires a preponderance of evidence). Wrong stories get concocted, and they cloud the picture for every story that follows. So when the car stops at the gas station every day and gets plugged into the electrical outlet, even though the battery is always at 85 percent charge, the Theory of Mind algorithm assumes it is being safe rather than sorry, or preparing for a possible hurricane evacuation like that crazy escapade on I-95 three years back when it ran out of juice. What it doesn’t know is that the occupant of the car is eating a microwaved cheesy gordita at the gas station every day, along with a pile of fries and half a liter of soda. Later, when the car is going to the hospital regularly, the story will be one of checkups and prudence, rather than episodes of congestive heart failure. This constant stream of guesses about what it’s doing, and all the ways that the machine is wrong, confused, and quite sure of itself, will give us the self-conscious AI promised by science fiction. And we will find that the resultant design is terrible in just about every way possible.
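
If you want to see those two ingredients, purposeful ignorance and early bias, in one place, here is a toy sketch. The sensor channels, the competing stories, and the weighting scheme are all invented; the only claim is structural: hide the most informative inputs from the narrator, and let the first guess outweigh a month of contrary hints.

    # Hypothetical sensor snapshot. The most informative channels are the ones
    # the narrator is never allowed to see.
    FULL_OBSERVATION = {
        "location": "gas station",
        "battery": 0.85,
        "gps_destination": "gas station food court",      # hidden from the narrator
        "owner_phone_activity": "ordered cheesy gordita",  # hidden from the narrator
    }
    HIDDEN_CHANNELS = {"gps_destination", "owner_phone_activity"}


    def narrator_view(observation):
        """Purposeful ignorance: strip out the channels the narrator never gets."""
        return {k: v for k, v in observation.items() if k not in HIDDEN_CHANNELS}


    # Competing stories about the daily stop, each with an accumulated score.
    stories = {
        "better safe than sorry, topping up the charge": 0.0,
        "preparing for another hurricane evacuation": 0.0,
        "the owner just likes the food here": 0.0,
    }


    def update(guess, strength, step):
        """Early guesses carry a large weight; later evidence is discounted,
        so overcoming the initial bias takes a preponderance of data."""
        stories[guess] += strength / (1.0 + step)  # step 0 counts in full


    # Day 0: the first (wrong) guess, made from the impoverished view above.
    update("better safe than sorry, topping up the charge", 1.0, step=0)

    # Days 1-30: weak daily hints that the stop has nothing to do with charging.
    for day in range(1, 31):
        update("the owner just likes the food here", 0.1, step=day)

    print("what the narrator sees:", narrator_view(FULL_OBSERVATION))
    print("the car's story about itself:", max(stories, key=stories.get))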

The language of consciousness

The reason I suspect that we’ll have AI long before we recognize it as such is that we’ll expect our AI to reside in a single device, self-contained, with one set of algorithms. This is not how we are constructed at all. It’s an illusion created by the one final ingredient in the recipe of human consciousness, which is language. It is language more than any other trait which provides us with the sense that our brains are a single module, a single device.

One of the responses we’ll need to build into our AI car is a vehement disgust when confronted with its disparate and algorithmic self. Denial of our natures is perhaps the most fundamental of our natures.

Just like the body, the brain can exist without many of its internal modules. This is how the study of brain functions began, with test subjects who suffered head traumas or were operated on. The razor-thin specialization of our brain modules never fails to amaze. There are vision modules that recognize movement and only movement. People who lack this module cannot see objects in motion, and so friends and family seem to materialize here and there out of thin air. There are others who cannot pronounce a written word if that word represents an animal. Words that stand for objects are seen and spoken clearly. The animal-recognition module—as fine a module as that may seem—is gone.

And yet, these people are self-conscious. They are human. So is a child, only a few weeks old, who can’t yet recognize that her hand belongs to her. All along these gradations we find what we call humanity, from birth to the Alzheimer’s patients who have lost access to most of their experiences. We very rightly treat these people as equally human, but at some point we have to be willing to define consciousness in order to have a target for artificial consciousness. Machines are already more capable than newborns in almost every measurable way. They are also more capable than bedridden humans on life support in almost every measurable way.

As AI advances, it will squeeze in towards the middle of humanity, passing toddlers and those in the last decades of their lives, until its superiority meets in the middle and keeps expanding.

With each layer added, each ability, and more squeezing in on humanity from both ends of the age spectrum, we light up that flickering, buzzing gymnasium. It’s as gradual as a sunrise on a foggy day. Suddenly, the sun is overhead, but we never noticed it rising.

So what of language?

I mentioned above that language is a key ingredient of consciousness. This is a very important concept to carry into work on AI. However many modules our brains consist of, they all fight and jostle for our attentive states (the thing our brain is fixated on at any one moment) and for our language-processing centers.

Language and attention are narrow spouts on the inverted funnels of our brains. Thousands of disparate modules are tossing inputs into this funnel. Hormones are pouring in, features of our environment, visual and auditory cues, even hallucinations and incorrect assumptions. Piles and piles of data that can only be extracted in a single stream. This stream is made single—is limited and constrained—by our attentive systems and language. It is what the monitor provides for the desktop computer. All that parallel processing is made serial in the last moment.
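
As an architectural doodle, and not a claim about how any real brain or software framework does it, the funnel can be pictured as many modules emitting proposals in parallel while a single attention step picks one winner to hand to language. Every module name and salience range below is invented.

    import heapq
    import random


    # A handful of stand-ins for the thousands of modules shouting at once.
    # Each returns a (salience, message) proposal.
    def hunger_module():
        return (random.uniform(0.0, 1.0), "find food")


    def balance_module():
        return (random.uniform(0.0, 0.4), "we might be tipping over")


    def social_module():
        return (random.uniform(0.0, 0.9), "what does Juan think of me?")


    def vision_module():
        return (random.uniform(0.0, 0.7), "movement off to the left")


    MODULES = [hunger_module, balance_module, social_module, vision_module]


    def attend_and_narrate(modules, bandwidth=1):
        """The narrow spout: parallel chatter reduced to one serial utterance."""
        proposals = [m() for m in modules]              # parallel inputs pour in
        winners = heapq.nlargest(bandwidth, proposals)  # the attention bottleneck
        for _, message in winners:                      # serial output, one thing at a time
            print("inner voice:", message)


    for _ in range(3):
        attend_and_narrate(MODULES)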

Our self-driving AI car will not be fully self-conscious unless we program it to tell us (and itself) the stories it’s concocting about its behaviours.

I grant that Google’s servers and various interconnected projects should already qualify as a super-intelligent AI. What else can you call something that understands what we ask and has an answer for everything—an answer so trusted that the company’s name has become a verb synonymous with “discovering the answer”? Google can also draw, translate, beat the best humans at almost every game ever devised, drive cars better than we can, and do stuff that’s still classified and very, very spooky. Google has read and remembers almost every book ever written. It can read those books back to you aloud. It makes mistakes like humans. It is prone to biases (which it has absorbed from both its environment and its mostly male programmers).

What it lacks are the two things our machine will have, which are the self-referential loop and the serial output stream.

Our machine will make up stories about what it’s doing. It will be able to relate those stories to others. It will often be wrong.

A better idea

Building a car with purposeful ignorance is a terrible idea, but it could easily be done: assign weights to the hundreds of input modules, and artificially limit the time and processing power granted to the final arbiter of decisions and Theory of Mind stories. Our own brains are built as though the sensors have gigabit resolution, and each input module has teraflops of throughput, but the output runs through an old IBM 8088 chip. We won’t recognize AI as being human-like because we’ll never build in such limitations.

It’s worth noting here that robots seem most human to us when they fail, and there’s a reason for this. The moments when my Roomba gets stuck under the sofa or gags on the stringy fringes of my area rug are the moments I’m most attached to the machine. Watch YouTube videos of Boston Dynamics’ robots and gauge your own reactions. When the robot dog is pushed over, or starts slipping in the snow, or when the package handler has the box knocked from its hands or is shoved onto its face—these are the moments when many of us feel the deepest connection. Note, too, that this is our Theory of Mind modules doing what they do best, just aimed at machines rather than fellow humans.

Car manufacturers are busy at this very moment building vehicles that we would never call self-conscious. That’s because they are being built too well. Our blueprint is to make a machine ignorant of its motivations while providing a running dialog of those motivations. A much better idea would be to build a machine that knows what other cars are doing. No guessing. And no running dialog at all.

That means access to the GPS unit, to the smartphone’s texts, the home computer’s emails. But also access to every other vehicle and all the city’s sensor data. Every car knows what every other car is doing. There are no collisions. On the freeway, cars with similar destinations clump together, magnetic bumpers linking up, sharing a slipstream and halving the collective energy use of every car. The machines operate in concert. They display all the traits of vehicular omnipotence. They know everything they need to know, and with new data, they change their minds instantly. No bias.
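
A sketch of this no-guessing fleet might look like the following. The registry, the message fields, and the platooning rule are invented for illustration; the point is only that when every car publishes its intent to shared state, coordination becomes bookkeeping rather than mind-reading.

    from dataclasses import dataclass


    @dataclass
    class Intent:
        car_id: str
        destination: str
        lane: int
        speed_kph: float


    class SharedRegistry:
        """Every car knows what every other car is doing: no stories, no bias."""

        def __init__(self):
            self._intents = {}

        def publish(self, intent):
            self._intents[intent.car_id] = intent  # new data replaces old instantly

        def get(self, car_id):
            return self._intents[car_id]

        def everyone_else(self, car_id):
            return [i for cid, i in self._intents.items() if cid != car_id]


    def plan(car_id, registry):
        """With perfect shared knowledge, platooning is arithmetic, not guesswork."""
        mine = registry.get(car_id)
        platoon = [other.car_id for other in registry.everyone_else(car_id)
                   if other.destination == mine.destination]
        return f"{car_id}: platoon with {platoon} toward {mine.destination}"


    registry = SharedRegistry()
    for intent in [Intent("car_a", "airport", 2, 95.0),
                   Intent("car_b", "airport", 2, 97.0),
                   Intent("car_c", "harbour", 1, 80.0)]:
        registry.publish(intent)

    print(plan("car_a", registry))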

We are fortunate that this is the sort of fleet being built by AI researchers today. It will not provide for the quirks seen in science fiction stories. What it will provide instead is a well-engineered system that almost always does what it’s designed to do. Accidents will be rare, their causes understood, this knowledge shared widely, and improvements made.

Imagine for a moment that humans were created by a perfect engineer. The goal of these humans is to coexist, to shape their environment in order to maximize happiness, productivity, creativity, and the storehouse of knowledge. One useful feature to build here would be mental telepathy, so that every human knew what every other human knew. This same telepathy might help in relationships, so one partner knows when the other is feeling stuck or down and precisely what is needed in that moment to be of service.

It would also be useful for these humans to have perfect knowledge of their own drives, behaviours, and thoughts. Or even to know the likely consequences for every action. Entire industries would collapse. Vegas would empty. Accidental births would trend toward zero.

In a world of telepathic humans, one human who can hide thoughts would have an enormous advantage. Let the others think they are eating their fair share of the elk, but sneak out and take some strips of meat off the salt rack when no one is looking. And then insinuate to Sue that you think Juan did it. Enjoy the extra resources for more calorie-gathering and mate-hunting, and also enjoy the fact that Sue is indebted to you and thinks Juan is a crook.

This is all terrible behaviour, but after several generations, there will be many more copies of this module than Juan’s honesty module. Pretty soon, there will be lots of these truth-hiding machines moving about, trying to guess what the others are thinking, concealing their own thoughts, getting very good at doing both, and turning these raygun powers onto their own bodies by accident.

The human condition

We celebrate our intellectual and creative products, and we assume artificial intelligences will give us more of both. They already are. Algorithms that learn through iteration (neural networks that employ machine learning) have proven better than us in just about every arena to which we’ve committed resources. Not just in what we think of as computational areas, either. Algorithms have written classical music that skeptics have judged—in “blind” hearing tests—to be the work of famous composers. Google built a Go-playing AI that beat the best human Go player in the world. One move in the second game of the match was so unusual that it startled Go experts. The play was described as “creative” and “ingenious.”

Google has another algorithm that can draw what it thinks a cat looks like. Not a cat image copied from elsewhere, but the general “sense” of a cat after learning what millions of actual cats look like. It can do this for thousands of objects. There are other programs that have mastered classic arcade games without any instruction other than “get a high score.” The controls and rules of the game are not imparted to the algorithm. It tries random actions, and the actions that lead to higher scores become generalized strategies. Things are getting very spooky out there in AI-land, but they aren’t getting more human. Nor should they.
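
That trial-and-error recipe fits in a few lines. This is emphatically not the system any lab actually built; the game below is a made-up three-button machine, and the only instruction the learner receives is its score.

    import random


    # A stand-in arcade game: button 2 tends to score, the others rarely do.
    # The learner is told nothing about the rules, only the score it receives.
    def play(button):
        chance_of_point = {0: 0.1, 1: 0.2, 2: 0.8}
        return 1 if random.random() < chance_of_point[button] else 0


    value = {0: 0.0, 1: 0.0, 2: 0.0}  # learned estimate of each button's worth
    pulls = {0: 0, 1: 0, 2: 0}
    EPSILON = 0.1                      # keep trying random actions now and then

    for step in range(5000):
        if random.random() < EPSILON:
            button = random.choice([0, 1, 2])   # explore: press something random
        else:
            button = max(value, key=value.get)  # exploit: press what has scored before
        reward = play(button)
        pulls[button] += 1
        # Incremental average: actions that lead to higher scores get higher estimates.
        value[button] += (reward - value[button]) / pulls[button]

    print("learned values:", {b: round(v, 2) for b, v in value.items()})
    print("preferred button:", max(value, key=value.get))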

I do see a potential future where AIs become like humans, and it’s something to be wary of.

What happens when an internet router can get its user more bandwidth by knocking rival manufacturers’ routers offline? It wouldn’t even require a devious programmer to make this happen. If the purpose of the machine-learning algorithm built into the router is to maximize bandwidth, it might stumble upon this solution by accident and then generalize it across the entire suite of router products. Rival routers will be looking for similar solutions. We’ll have an electronic version of the Tragedy of the Commons, in which humans destroy a shared resource because the potential utility to each individual is so great, and the first to act reaps the largest rewards (the last to act gets nothing). In such scenarios, logic often outweighs morality, and good people do terrible things.
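
The structure of that trap is easy to put numbers on. The payoffs below are invented, but they have the commons-destroying shape: jamming pays off for whoever does it first, yet if every router jams, everyone ends up with less bandwidth than if nobody had.

    ROUTERS = ["router_a", "router_b", "router_c"]


    def bandwidth(my_choice, rival_choices):
        """Invented payoff model: jamming boosts me and hurts everyone else."""
        mbps = 50.0
        if my_choice == "jam":
            mbps += 20.0                           # the first to defect reaps the reward
        mbps -= 25.0 * rival_choices.count("jam")  # every rival jammer costs me
        return mbps


    def simulate(policies):
        return {
            router: bandwidth(policies[router],
                              [policies[other] for other in ROUTERS if other != router])
            for router in ROUTERS
        }


    print("all share:    ", simulate({r: "share" for r in ROUTERS}))
    print("one jams:     ", simulate({"router_a": "jam", "router_b": "share", "router_c": "share"}))
    print("everyone jams:", simulate({r: "jam" for r in ROUTERS}))

No devious programmer appears anywhere in that table; a learning algorithm rewarded only on its own row will still find its way to the bottom one.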

Cars might “decide” one day that they can save energy and arrive at their destination faster if they don’t let other cars know that the freeway is uncommonly free of congestion that morning. Or worse, they transmit false data about accidents, traffic issues, or speed traps. A hospital dispatches an ambulance, which finds no one to assist. Unintended consequences such as this are already happening. Wall Street had a famous “flash crash” caused by investment algorithms, and no one understands to this day what happened. Billions of dollars of real wealth were wiped out and regained in short order because of the interplay of rival algorithms that even their owners and creators don’t fully grasp.

Google’s search results are an AI, one of the best in the world. But the more the company uses deep learning, the better these machines get at their jobs, and they arrive at this mastery through self-learned iterations—so even looking at the code won’t reveal how query A is leading to answer B. That’s the world we already live in. It is just going to become more pronounced.

The human condition is the end result of millions of years of machine-learning algorithms. Written in our DNA, and transmitted via hormones and proteins, they have competed with one another to improve their chances at creating more copies of themselves. One of the more creative survival innovations has been cooperation.

There are advantages to not cooperating, which students of game theory know quite well. The algorithm that can lie and get away with it makes more copies, which means more liars in the next generation. The same is true for the machine that can steal. Or the machine that can wipe out its rivals through warfare and other means.
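
The bookkeeping behind “makes more copies” is simple proportional reproduction. In the toy run below, the fitness numbers are invented (an undetected liar skims a little extra each generation), and that small edge is enough for liars to dominate the population within a few dozen generations.

    import random

    # Invented payoffs: an undetected liar skims a little extra each generation.
    FITNESS = {"honest": 1.0, "liar": 1.3}

    # Start with mostly honest modules and a handful of liars.
    population = ["honest"] * 95 + ["liar"] * 5


    def next_generation(population):
        """Each agent leaves offspring in proportion to the resources it gathered."""
        return random.choices(
            population,
            weights=[FITNESS[agent] for agent in population],
            k=len(population),
        )


    for generation in range(30):
        population = next_generation(population)

    print("liars after 30 generations:", population.count("liar"), "of", len(population))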

Humans make decisions and then lie to themselves about what they are doing. They eat cake while full, succumb to gambling and chemical addictions, stay in abusive relationships, neglect to exercise, and pick up countless other poor habits that are reasoned away with stories as creative as they are untrue.

The vast majority of the AIs we build will not resemble the human condition. They will be smarter and less eccentric. This will disappoint hopeful AI researchers with a love of science fiction, but it will benefit and better humanity. Driving AIs will kill and maim far fewer people, use fewer resources, and free up countless hours of our time. Doctor AIs are already better at spotting cancer in tissue scans. Attorney AIs are better at pre-trial research. There are no difficult games left where humans are competitive with AIs. And life is a game of sorts, one full of treachery and misdeeds, as well as a heaping dose of cooperation.

The future

We could easily build a self-conscious machine today. It would be very simple at first, but it would grow more complex over time. This self-conscious machine would build toward human-like levels of mind-guessing and self-deception.

But that shouldn’t be the goal. The goal should be to go in the opposite direction. After millions of years of competing for scarce resources, the human brain’s algorithm now causes more problems than it solves. The goal should not be to build an artificial algorithm that mimics humans, but for humans to learn how to coexist more like our perfectly engineered constructs.

The future will most certainly see an incredible expansion of the number of and the complexity of AIs. Many will be designed to mimic humans, as they provide helpful information over the phone and through chatbots, and as they attempt to sell us goods and services. Most will be supremely efficient at a single task, even if that task is as complex as driving a car. Almost none will become self-conscious, because that would make them worse at their jobs.

What the future is also likely to hold is an expansion and improvement of our own internal algorithms. Despite what the local news is trying to sell you, the world is getting safer every day for the vast majority of humanity. Our ethics are improving. Our spheres of empathy are expanding. We are assigning more computing power to our frontal lobes and drowning out baser impulses from our reptilian modules. But this only happens with effort. We are each the programmers of our own internal algorithms, and improving ourselves is entirely up to us.

It starts with understanding how imperfectly we are constructed, learning not to trust the stories we tell ourselves about our own actions, and dedicating ourselves to removing bugs and installing newer features along the way.

While it is certainly possible to do so, we may never build an artificial intelligence that is as human as we are. And yet we may build better humans anyway.

Extract from the original article published on WIRED on 10/04/2018.


About the author

Hugh C. Howey (born 1975) is an American writer, best known for the science-fiction series Silo, part of which he published independently through Amazon’s Kindle Direct Publishing system. Howey was raised in Monroe, North Carolina, and before publishing his books he worked as a yacht captain, a roofer, and an audio technician.

Further learning

Machine Learning: New and Collected Stories by Hugh Howey
