
‘I was shocked it was so easy’: meet the professor who says facial recognition can tell if you’re gay

Psychologist Michal Kosinski says artificial intelligence can detect your sexuality and politics just by looking at your face. What if he's right?

Vladimir Putin was not in attendance, but his loyal lieutenants were. On 14 July last year, the Russian prime minister, Dmitry Medvedev, and several members of his cabinet convened in an office building on the outskirts of Moscow. On to the stage stepped a boyish-looking psychologist, Michal Kosinski, who had been flown from the city centre by helicopter to share his research. "There was Lavrov, in the first row," he recalls several months later, referring to Russia's foreign minister. "You know, a guy who starts wars and takes over countries." Kosinski, a 36-year-old assistant professor of organisational behaviour at Stanford University, was flattered that the Russian cabinet would gather to listen to him talk. "Those guys strike me as one of the most competent and well-informed groups," he tells me. "They did their homework. They read my stuff."

Kosinski's "stuff" includes groundbreaking research into technology, mass persuasion and artificial intelligence (AI) – research that inspired the creation of the political consultancy Cambridge Analytica. Five years ago, while a graduate student at Cambridge University, he showed how even benign activity on Facebook could reveal personality traits – a discovery that was later exploited by the data-analytics firm that helped put Donald Trump in the White House.

That would be enough to make Kosinski interesting to the Russian cabinet. But his audience would also have been intrigued by his work on the use of AI to detect psychological traits. Weeks after his trip to Moscow, Kosinski published a controversial paper in which he showed how face-analysing algorithms could distinguish between photographs of gay and straight people. As well as sexuality, he believes this technology could be used to detect emotions, IQ and even a predisposition to commit certain crimes. Kosinski has also used algorithms to distinguish between the faces of Republicans and Democrats, in an unpublished experiment he says was successful – although he admits the results can change "depending on whether I include beards or not".

How did this 36-year-old academic, who has yet to write a book, attract the attention of the Russian cabinet? Over our several meetings in California and London, Kosinski styles himself as a taboo-busting thinker, someone who is prepared to delve into difficult territory concerning artificial intelligence and surveillance that other academics won't. "I can be upset about us losing privacy," he says. "But it won't change the fact that we already lost our privacy, and there's no going back without destroying this civilisation."

The aim of his research, Kosinski says, is to highlight the dangers. Yet he is strikingly enthusiastic about some of the technologies he claims to be warning us about, talking excitedly about cameras that could detect people who are lost, anxious, trafficked or potentially dangerous. "You could imagine having those diagnostic tools monitoring public spaces for potential threats to themselves or to others," he tells me. "There are different privacy issues with each of those approaches, but it can literally save lives."

"Progress always makes people uncomfortable," Kosinski adds. "Always has. Probably, when the first monkeys stopped hanging from the trees and started walking on the savannah, the monkeys in the trees were like, 'This is outrageous! It makes us uncomfortable.' It's the same with any new technology."


Kosinski has analysed thousands of people's faces, but never run his own image through his personality-detecting models, so we cannot know what traits are indicated by his pale-grey eyes or the dimple in his chin. I ask him to describe his own personality. He says he's a conscientious, extroverted and probably emotional person with an IQ that is "perhaps slightly above average". He adds: "And I'm disagreeable." What made him that way? "If you trust personality science, it seems that, to a large extent, you're born this way."

His friends, on the other hand, describe Kosinski as a brilliant, provocative and irrepressible data scientist who has an insatiable (some say naive) desire to push the boundaries of his research. "Michal is like a small boy with a hammer," one of his academic friends tells me. "Suddenly everything looks like a nail."

Born in 1982 in Warsaw, Kosinski inherited his aptitude for coding from his parents, both of whom trained as software engineers. Kosinski and his brother and sister had a computer at home, "potentially much earlier than western people of the same age". By the late 1990s, as Poland's post-Soviet economy was opening up, Kosinski was hiring his schoolmates to work for his own IT company. This business helped fund him through university, and in 2008 he enrolled in a PhD programme at Cambridge, where he was affiliated with the Psychometrics Centre, a facility specialising in measuring psychological traits.

It was around that time that he met David Stillwell, another graduate student, who had built a personality quiz and shared it with friends on Facebook. The app quickly went viral, as hundreds and then thousands of people took the survey to discover their scores according to the Big Five metrics: openness, conscientiousness, extraversion, agreeableness and neuroticism. When users completed the myPersonality tests, some of which also measured IQ and wellbeing, they were given an option to donate their results to academic research.

Kosinski came on board, using his digital skills to clean, anonymise and sort the data, and then make it available to other academics. By 2012, more than 6 million people had taken the tests with about 40% donating their data, creating the largest dataset of its kind.

From Cesare Lombroso's criminal taxonomy: a habitual thief and a murderer. Photographs: Alamy

In May, New Scientist magazine revealed that the dataset's username and password had been accidentally left on GitHub, a commonly used code-sharing website. For four years, anyone – not just authorised researchers – could have accessed the data. Before the magazine's investigation, Kosinski had admitted to me that there were risks to their liberal approach. "We anonymised the data, and we made scientists sign a guarantee that they will not use it for any commercial reasons," he had said. "But you just can't really guarantee that this will not happen." Much of the Facebook data, he added, was de-anonymisable. In the wake of the New Scientist story, Stillwell closed down the myPersonality project. Kosinski sent me a link to the announcement, complaining: "Twitter warriors and sensation-seeking writers made David shut down the myPersonality project."

During the time the myPersonality data was accessible, about 280 researchers used it to publish more than 100 academic papers. The most talked-about was a 2013 study co-authored by Kosinski, Stillwell and another researcher that explored the relationship between Facebook Likes and the psychological and demographic traits of 58,000 people. Some of the results were intuitive: the best predictors of introversion, for example, were Likes for pages such as "Video Games" and "Voltaire". Other findings were more perplexing: among the best predictors of high IQ were Likes on the Facebook pages for "Thunderstorms" and "Morgan Freeman's Voice". People who Liked pages for "iPod" and "Gorillaz" were likely to be dissatisfied with life.

If an algorithm was fed with sufficient data about Facebook Likes, Kosinski and his colleagues found, it could make more accurate personality-based predictions than assessments made by real-life friends. In other research, Kosinski and others showed how Facebook data could be turned into what they described as "an effective approach to digital mass persuasion".
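At its core, this kind of Likes-to-traits prediction is a regularised linear model over a sparse user-by-page matrix. Here is a minimal sketch of that idea using entirely synthetic data – the real study used survey-measured Big Five scores and millions of real Likes, not this toy target:

```python
import numpy as np
from sklearn.linear_model import Ridge
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Synthetic data: 1,000 users x 500 pages; 1 means the user Liked the page.
likes = rng.integers(0, 2, size=(1000, 500)).astype(float)

# Toy "extraversion" score driven by a handful of pages, standing in for
# the survey-measured personality traits used in the actual research.
true_weights = np.zeros(500)
true_weights[:20] = rng.normal(0.0, 1.0, 20)
extraversion = likes @ true_weights + rng.normal(0.0, 1.0, 1000)

X_train, X_test, y_train, y_test = train_test_split(
    likes, extraversion, test_size=0.2, random_state=0)

# Regularised linear regression: a standard workhorse for this task.
model = Ridge(alpha=1.0).fit(X_train, y_train)
r = np.corrcoef(model.predict(X_test), y_test)[0, 1]
print(f"predicted-vs-actual correlation: {r:.2f}")
```

Even this simple linear model recovers most of the signal once enough informative pages exist, which is why seemingly innocuous Likes can be so revealing.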

Their research came to the attention of the SCL Group, the parent company of Cambridge Analytica. In 2014, SCL tried to enlist Stillwell and Kosinski, offering to buy the myPersonality data and their predictive models. When negotiations broke down, it relied on the help of another academic in Cambridge's psychology department – Aleksandr Kogan, an assistant professor. Using his own Facebook personality quiz, and paying users (with SCL money) to take the tests, Kogan collected data on 320,000 Americans. Exploiting a loophole that allowed developers to harvest data belonging to the friends of Facebook app users (without their knowledge or consent), Kogan was able to hoover up additional data on as many as 87 million people.

Cambridge Analytica whistleblower Christopher Wylie, who says the company tried to replicate Kosinski's work for "psychological warfare". Photograph: Getty Images

Christopher Wylie, the whistleblower who lifted the lid on Cambridge Analytica's operations earlier this year, has described how the company set out to replicate the work done by Kosinski and his colleagues, and to turn it into an instrument of "psychological warfare". "This is not my fault," Kosinski told reporters from the Swiss publication Das Magazin, which was the first to make the connection between his work and Cambridge Analytica. "I did not build the bomb. I only showed that it exists."

Cambridge Analytica always denied using Facebook-based psychographic targeting during the Trump campaign, but the scandal over its data harvesting forced the company to close. The saga also proved highly damaging to Facebook, whose headquarters are less than four miles from Kosinski's base at Stanford's business school in Silicon Valley. The first time I enter his office, I ask him about a painting beside his computer, depicting a protester armed with a Facebook logo in a holster instead of a gun. "People think I'm anti-Facebook," Kosinski says. "But I think that, generally, it is just a wonderful technology."

Still, he is disappointed in the Facebook CEO, Mark Zuckerberg, who, when he testified before the US Congress in April, said he was trying to find out whether there was "something bad going on at Cambridge University". Facebook, Kosinski says, was well aware of his research. He shows me emails he exchanged with employees in 2011, in which they disclosed they were using analysis of linguistic data to infer personality traits. In 2012, the same employees filed a patent showing how personality characteristics could be gleaned from Facebook messages and status updates.

Kosinski seems unperturbed by the furore over Cambridge Analytica, which he feels has unfairly maligned psychometric micro-targeting in politics. "There are negative aspects to it, but overall this is a great technology and great for democracy," he says. "If you can target political messages to fit people's interests, dreams, personality, you make those messages more relevant, which makes voters more engaged – and more engaged voters are great for democracy." But you can also, I say, use those same techniques to discourage your opponent's voters from turning out, which is bad for democracy. "Then every politician in the US is doing this," Kosinski replies, with a shrug. "Whenever you target the voters of your opponent, this is a voter-suppression activity."

Kosinski's wider complaint about the Cambridge Analytica fallout, he says, is that it has created an illusion that governments can protect data and shore up their citizens' privacy. "It is a lost war," he says. "We should focus on organising our society in such a way as to make sure that the post-privacy era is a habitable and nice place to live."


Kosinski says he never set out to prove that AI could predict a person's sexuality. He describes it as a chance discovery, something he "stumbled upon". The lightbulb moment came as he was sifting through Facebook profiles for another project and started to notice what he thought were patterns in people's faces. "It suddenly struck me," he says, "introverts and extroverts have completely different faces. I was like, 'Wow, maybe there's something there.'"

Physiognomy, the practice of determining a person's character from their face, has a history that stretches back to ancient Greece. But its heyday came in the 19th century, when the Italian anthropologist Cesare Lombroso published his famous taxonomy, which declared that "nearly all criminals" have "jug ears, thick hair, thin beards, pronounced sinuses, protruding chins, and broad cheekbones". The analysis was rooted in a deeply racist school of thought that held that criminals resembled "savages and apes", although Lombroso presented his findings with the precision of a forensic scientist. Thieves were notable for their "small wandering eyes", rapists for their "swollen lips and eyelids", while murderers had a nose that was "often hawklike and always large".

Lombroso's remains are still on display in a museum in Turin, beside the skulls of the hundreds of criminals he spent decades examining. Where Lombroso used calipers and craniographs, Kosinski has been using neural networks to find patterns in photos scraped from the internet.

Kosinski's research dismisses physiognomy as a mix of superstition and racism disguised as science – but then argues it created a taboo around studying, or even discussing, the links between facial features and character. There is growing evidence, he insists, that links between faces and psychology exist, even if they are invisible to the human eye; now, with advances in machine learning, such links can be perceived. "We didn't have algorithms 50 years ago that could spot patterns," he says. "We only had human judges."

In a paper published last year, Kosinski and a Stanford computer scientist, Yilun Wang, reported that a machine-learning system was able to distinguish between photos of gay and straight people with a high degree of accuracy. They used 35,326 photographs from dating websites and what Kosinski describes as "off-the-shelf" facial-recognition software.

Presented with two pictures – one of a gay person, the other straight – the algorithm could distinguish between them in 81% of cases involving images of men and 74% of cases involving photographs of women. Human judges, by contrast, were able to identify the straight and gay people in 61% and 54% of cases, respectively. When the algorithm was shown five facial images per person in the pair, its accuracy increased to 91% for men and 83% for women. "I was just shocked to discover that it is so easy for an algorithm to distinguish between gay and straight people," Kosinski tells me. "I didn't see why that would be possible."
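Those headline percentages are pairwise accuracies: given one photo from each group, how often the model ranks the pair correctly. That metric is equivalent to the area under the ROC curve (AUC), which can be sketched with synthetic classifier scores – the numbers below are illustrative, not the paper's data:

```python
import numpy as np

rng = np.random.default_rng(1)

# Illustrative classifier scores for two groups; the real study fed
# facial-recognition features into a logistic-regression classifier.
scores_a = rng.normal(1.0, 1.0, 500)  # group the classifier targets
scores_b = rng.normal(0.0, 1.0, 500)  # the other group

# Pairwise accuracy: the fraction of cross-group pairs the classifier
# ranks correctly, counting ties as half. This equals the ROC AUC.
wins = (scores_a[:, None] > scores_b[None, :]).mean()
ties = (scores_a[:, None] == scores_b[None, :]).mean()
auc = wins + 0.5 * ties
print(f"pairwise accuracy (AUC): {auc:.2f}")
```

Note that a pairwise accuracy of 81% is a much weaker claim than correctly labelling 81% of individuals in the wild, where the two groups are far from equally sized – a distinction much of the coverage missed.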

"I did not build the bomb. I only showed it exists." Photograph: Jason Henry for the Guardian

Neither did many other people, and there was an immediate backlash when the research – dubbed "AI gaydar" – was previewed in the Economist magazine. Two of America's most prominent LGBTQ organisations demanded that Stanford distance itself from what they called its professor's "dangerous" and "flawed" research. Kosinski received a deluge of emails, many from people who told him they were confused about their sexuality and hoped he would run their photo through his algorithm. (He declined.) There was also anger that Kosinski had conducted research on a technology that could be used to persecute gay people in countries such as Iran and Saudi Arabia, where homosexuality is punishable by death.

Kosinski says his critics missed the point. "This is the inherent paradox of warning people against potentially dangerous technology," he says. "I stumbled upon those results, and I was actually close to putting them in a drawer and not publishing, because I had a very good life without this paper being out. But then a colleague asked me if I would be able to look myself in the mirror if, one day, a company or a government deployed a similar technique to hurt people." It would, he says, have been morally wrong to bury his findings.

One vocal critic of that defence is the Princeton professor Alexander Todorov, who has conducted some of the most widely cited research into faces and psychology. He argues that Kosinski's methods are deeply flawed: the patterns picked up by algorithms comparing thousands of photographs may have little to do with facial characteristics. In a mocking critique posted online, Todorov and two AI researchers at Google argued that Kosinski's algorithm could have been responding to patterns in people's makeup, beards or glasses, even the angle they held the camera at. Self-posted photos on dating websites, Todorov points out, project a number of non-facial clues.

Kosinski acknowledges that his machine-learning system detects unrelated signals, but is adamant the software also distinguishes between facial structures. His findings are consistent with the prenatal hormone theory of sexual orientation, he says, which argues that the levels of androgens foetuses are exposed to in the womb help determine whether people are straight or gay. The same androgens, Kosinski argues, could also result in gender-atypical facial morphology. Thus, he writes in his paper, "gay men are predicted to have smaller jaws and chins, slimmer eyebrows, longer noses and larger foreheads… The opposite should be true for lesbians."

This is where Kosinski's work strays into biological determinism. While he does not deny the influence of social and environmental factors on our personalities, he plays them down. At times, what he says seems eerily reminiscent of Lombroso, who was critical of the idea that criminals had free will: they should be pitied rather than punished, the Italian argued, because – like monkeys, cats and cuckoos – they were programmed to do harm.

"I don't believe in guilt, because I don't believe in free will," Kosinski tells me, explaining that a person's thoughts and behaviour "are fully biological, because they originate in the biological computer that you have in your head". On another occasion he tells me, "If you basically accept that we're just computers, then computers are not guilty of crime. Computers can malfunction. But then you shouldn't blame them for it." The professor adds: "Very much like: you don't, generally, blame dogs for misbehaving."

Todorov believes Kosinski's research is "incredibly ethically questionable", as it could lend a veneer of credibility to governments that might want to use such technologies. He points to a paper that appeared online two years ago, in which Chinese AI researchers claimed they had trained a face-recognition algorithm to predict with 90% accuracy whether someone was a convicted criminal. The research, which used Chinese government identity photographs of hundreds of male criminals, was not peer-reviewed, and was torn to shreds by Todorov, who warned that developments in artificial intelligence and machine learning have enabled "scientific racism to enter a new era".

Kosinski has a different take. "The fact that the results were completely invalid and unfounded doesn't mean that what they propose is also wrong," he says. "I can't see why you would not be able to predict the propensity to commit a crime from someone's face. We know, for instance, that testosterone levels are linked to the propensity to commit crime, and they're also linked with facial features – and this is just one link. There are thousands or millions of others that we are unaware of, that computers could very easily detect."

Would he ever undertake similar research? Kosinski hesitates, saying that crime is an overly blunt label. It would be more sensible, he says, to look at whether we can detect traits or predispositions that are potentially dangerous to an individual or society – like aggressive behaviour. He adds: "I think someone has to do it… Because if this is a risky technology, then governments and corporations are clearly already using it."


But when I press Kosinski for examples of how psychology-detecting AI is being used by governments, he repeatedly falls back on an obscure Israeli startup, Faception. The company provides software that scans passports, visas and social-media profiles, before spitting out scores that categorise people according to several personality types. On its website, Faception lists eight such classifiers, including "White-Collar Offender", "High IQ", "Paedophile" and "Terrorist". Kosinski describes the company as "dodgy" – a case study in why researchers who care about privacy should alert the public to the risks of AI. "Check what Faception are doing and what clients they have," he tells me during an animated debate over the ethics of his research.

I call Faception's chief executive, Shai Gilboa, who used to work in Israeli military intelligence. He tells me the company has contracts working on homeland security and public safety in Asia, the Middle East and Europe. To my surprise, he then tells me about a research collaboration he conducted two years ago. "When you look in the academia market you're looking for the best researchers, who have very good databases and vast experience," he says. "So this is the reason we approached Professor Kosinski."

But when I put this connection to Kosinski, he plays it down: he claims to have met Faception to discuss the ethics of facial-recognition technologies. "They came [to Stanford] because they realised what they are doing has potentially huge negative implications, and huge risks." Later, he concedes there was more to it. He met them maybe three times in Silicon Valley, and was offered equity in the company in exchange for becoming an adviser (he says he declined).

Kosinski denies having collaborated on research, but admits Faception gave him access to its facial-recognition software. He experimented with Facebook photos in the myPersonality dataset, he says, to determine how effective the Faception software was at detecting personality traits. He then suggested Gilboa talk to Stillwell about purchasing the myPersonality data. (Stillwell, Kosinski says, declined.)

He bristles at my suggestion that these conversations seem ethically dubious. "I will do a lot of this," he says. "A lot of startup people come here and they don't offer you any money, but they say, 'Look, we have this project, can you advise us?'" Turning down such a request would have made him "an arrogant prick".

He gives a similar explanation for his trip to Moscow, which he says was arranged by Sberbank Corporate University as an educational day for Russian government officials. The university is a subsidiary of Sberbank, a state-owned bank sanctioned by the EU; its chief executive, Russia's former minister for economic development, is close to Putin. What was the purpose of the trip? "I didn't really understand the context," says Kosinski. "They put me on a helicopter, flew me to a place, I came on the stage. On the helicopter I was given a briefing about who was going to be in the room. Then I gave a talk, and we talked about how AI is changing society. And then they sent me off."

The last time I see Kosinski, we meet in London. He becomes prickly when I press him on Russia, pointing to its dire record on gay rights. Did he talk about using facial-recognition technology to detect sexuality? Yes, he says – but this talk was no different from other presentations in which he discussed the same research. (A couple of days later, Kosinski tells me he has checked his slides; in fact, he says, he didn't tell the Russians about his "AI gaydar".)

Who else was in the audience, aside from Medvedev and Lavrov? Kosinski doesn't know. Is it possible he was talking to a room full of Russian intelligence operatives? "That's correct," he says. "But I think that people who work for the surveillance state, more than anyone, deserve to know that what they are doing is creating real risk." He tells me he is no fan of Russia, and stresses there was no discussion of spying or influencing elections. "As an academic, you have a duty to try to counter bad ideas and spread good ideas," he says, adding that he would talk to "the most despicable dictator out there".

I ask Kosinski if anyone has tried to recruit him as an intelligence asset. He hedges. "Do you think that if an intelligence agency approaches you they say, 'Hi, I'm the CIA'?" he replies. "No, they say, 'Hi, I'm a startup, and I'm interested in your work – would you be an adviser?' That definitely happened in the UK. When I was at Cambridge, I had a minder." He tells me about a British defence expert he suspected worked for the intelligence services, who took a keen interest in his research, inviting him to seminars attended by officials in military uniforms.

In one of our final conversations, Kosinski tells me he shouldn't have talked about his visit to Moscow, because his hosts asked him not to. "It would not be elegant" to mention it in the Guardian, he says, and besides, it is "an irrelevant fact". I point out that he already left a fairly big clue on Facebook, where he posted an image of himself onboard a helicopter with the caption: "Taking off to give a talk for Prime Minister Medvedev." He later changed his privacy settings: the photo was no longer public, but for friends only.



Your Instagram #Dogs and #Cats Are Training Facebook’s AI

Using a social network like Facebook is a two-way street, part-shrouded in shadow. The benefits of sharing banter and photos with friends and family—for free—are obvious and immediate. So are the financial rewards for Facebook; but you don’t get to see all of the company’s uses for your data.

An artificial intelligence experiment of unprecedented scale disclosed by Facebook Wednesday offers a glimpse of one such use case. It shows how our social lives provide troves of valuable data for training machine-learning algorithms. It’s a resource that could help Facebook compete with Google, Amazon, and other tech giants with their own AI ambitions.

Facebook researchers describe using 3.5 billion public Instagram photos—carrying 17,000 hashtags appended by users—to train algorithms to categorize images for themselves. It provided a way to sidestep having to pay humans to label photos for such projects. The cache of Instagram photos is more than 10 times the size of a giant training set for image algorithms disclosed by Google last July.

Having so many images for training helped Facebook’s team set a new record on a test that challenges software to assign photos to 1,000 categories including cat, car wheel, and Christmas stocking. Facebook says that algorithms trained on 1 billion Instagram images correctly identified 85.4 percent of photos on the test, known as ImageNet; the previous best was 83.1 percent, set by Google earlier this year.

Image-recognition algorithms used on real-world problems are generally trained for narrower tasks, allowing greater accuracy; ImageNet is used by researchers as a measure of a machine learning system’s potential. Using a common trick called transfer learning, Facebook could fine-tune its Instagram-derived algorithms for specific tasks. The method involves using a large dataset to imbue a computer vision system with some basic visual sense, then training versions for different tasks using smaller and more specific datasets.
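The transfer-learning recipe described above – pre-train on a huge dataset, then retrain a small task-specific head on frozen features – can be sketched as follows. Here a fixed random projection stands in for the pretrained backbone, and the images and labels are synthetic; a real pipeline would use a convolutional network trained on the hashtag corpus:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(2)

# Stand-in for a pretrained backbone: a fixed projection mapping raw
# "pixels" to a feature embedding. It is never updated during fine-tuning.
W_backbone = rng.normal(size=(256, 64))

def extract_features(images):
    # Frozen feature extractor: no training happens here.
    return np.maximum(images @ W_backbone, 0.0)  # ReLU

# Small task-specific dataset: 200 synthetic "images" in two classes.
images = rng.normal(size=(200, 256))
labels = (images[:, :10].sum(axis=1) > 0).astype(int)

# The fine-tuning step: fit only a new linear head on the frozen features.
head = LogisticRegression(max_iter=1000)
head.fit(extract_features(images), labels)
acc = head.score(extract_features(images), labels)
print(f"training accuracy of new head: {acc:.2f}")
```

Because the backbone stays frozen, only the small head is trained, which is what makes adapting a billion-image model to a narrow task with a modest labelled dataset practical.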

As you would guess, Instagram hashtags skew towards certain subjects, such as #dogs, #cats, and #sunsets. Thanks to transfer learning, they could still help the company with grittier problems. CEO Mark Zuckerberg told Congress this month that AI would help his company improve its ability to remove violent or extremist content. The company already uses image algorithms that look for nudity and violence in images and video.


Manohar Paluri, who leads Facebook’s applied computer vision group, says machine-vision models pre-trained on Instagram data could become useful on all kinds of problems. “We have a universal visual model that can be used and re-tuned for various efforts within the company,” says Paluri. Possible applications include enhancing Facebook’s systems that prompt people to reminisce over old photos, describe images to the visually impaired, and identify objectionable or illegal content, he says. (If you don’t want your Instagram snaps to be part of that, Facebook says you can withdraw your photos from its research projects by setting your Instagram account to private.)

Facebook’s project also illustrates how companies need to spend heavily on computers and power bills to compete in AI. Computer-vision systems trained from Instagram data could tag images in seconds, says Paluri. But training algorithms on the full 3.5 billion Instagram photos occupied 336 high-powered graphics processors, spread across 42 servers, for more than three weeks solid.

That might sound like a long time. Reza Zadeh, CEO of computer vision startup Matroid and an adjunct professor at Stanford, says it in fact demonstrates how nimble a well-resourced company with top-tier researchers can be, and how the scale of AI experiments has grown. Just last summer, it took Google two months to train software on a set of 300 million photos, in experiments using many fewer graphics processors.

High-powered chips designed for machine learning are becoming more widely available, but few companies have access to so much data or so much processing power. With top machine-learning researchers expensive to hire, the more quickly they can run their experiments, the more productive they can be. “When companies are competing, that’s a big edge,” Zadeh says.

Desire to keep that edge, and the ambition revealed by the scale of its Instagram experiments, help explain why Facebook recently said it is planning to design its own chips for machine learning—following in the footsteps of Google and others.

Still, progress in AI requires more than just data and computers. Zadeh says he was surprised to see that the Instagram-trained algorithm didn’t lead to better performance on a test that challenges software to locate objects within images. That suggests existing machine learning software needs to be redesigned to take full advantage of giant photo collections, he says. Being able to locate objects in images is important for applications such as autonomous vehicles and augmented reality, where software needs to locate objects in the world.

Paluri is under no illusions about the limitations of Facebook’s big experiment. Image algorithms can excel at narrowly focused tasks, and training with billions of images can help. But machines don’t yet display a general ability to understand the visual world like humans do. Making progress on that will require some fundamentally new ideas. “We are not going to solve any of these problems just by pushing brute force scale,” Paluri says. “We need new techniques.”



Facebook: LinkedIn, Twitter, Google also collect and share your data

Facebook: We're not the only one who collects your data for ad targeting. Image: Mashable

Facebook has been firmly under the spotlight about how it uses people’s data, but the platform wants to make one thing clear: It’s not the only company doing so.

On Monday, the social network posted an explainer on how it collects your data through its services even when you’re not on Facebook.

The post details the impact of Facebook’s social plugins, which allow users to like or share content on another site, as well as its login tool — when you use Facebook credentials to sign into websites or apps. Other services unpacked include Facebook Analytics, as well as its own advertising and measurement tools. 

As Facebook reiterates, if you’ve used a similar service from other internet companies, they’ve been collecting your data too.

“Twitter, Pinterest and LinkedIn all have similar Like and Share buttons to help people share things on their services. Google has a popular analytics service. And Amazon, Google and Twitter all offer login features. These companies — and many others — also offer advertising services,” the post reads.

“In fact, most websites and apps send the same information to multiple companies each time you visit them.”

Facebook and other companies essentially collect this data to better target their ads — simply, the more an advertiser knows about you, the better. 

Through Facebook’s social plugins and login function, the platform collects information about what website you’ve visited, plus your browser and device information. 
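As a rough illustration of the mechanism, here is a minimal sketch of the request headers a browser attaches when it fetches an embedded plugin from a third-party domain. The function and cookie name are hypothetical, not Facebook’s actual protocol, but Referer, User-Agent, and Cookie are the standard HTTP channels by which the visited page, browser details, and account identity travel to whoever hosts the embed.

```python
# Hypothetical sketch of what a third-party embed (a like button or a
# login widget) can observe about a visit. The helper is illustrative;
# it is not Facebook's real endpoint or API.

def plugin_request_headers(page_url, user_agent, session_cookie=None):
    """Build the headers a browser would send when loading an embedded
    plugin from another company's domain while you browse a site."""
    headers = {
        "Referer": page_url,        # the page you are visiting
        "User-Agent": user_agent,   # browser and device information
    }
    if session_cookie:
        # A logged-in session cookie ties the visit to your account.
        headers["Cookie"] = f"session={session_cookie}"
    return headers

headers = plugin_request_headers(
    "https://example-news-site.com/article",
    "Mozilla/5.0 (X11; Linux x86_64)",
    session_cookie="abc123",
)
```

Note that the page URL and browser fingerprint flow to the plugin’s host on every load, logged in or not; the session cookie is what upgrades that anonymous signal into a profile tied to a named account.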

Facebook Analytics tells website owners which of their visitors use Facebook, along with aggregated demographic data like age and gender. And Facebook’s Audience Network serves the platform’s ads on other websites and apps, reaching you even when you’re not on Facebook itself.

If you check out your Facebook ad preferences, you’ll see how that data builds up a profile of your interests, drawn both from your Facebook activity and from sites you’ve visited. Apparently I’m into cats and sailing.


Google has a similar dashboard, and you can find out what Twitter believes you’re interested in based on your activity.

You can turn off personalisation with a click of a button on Twitter, LinkedIn and on Google’s ad settings dashboard, while on Facebook you can opt out of ads based on your use of websites and apps. 

Unless you like being targeted with ads about cats and sailing, of course.


Mark Zuckerberg’s Testimony Birthed an Oddly Promising Memepocalypse

The internet loves to tear things to shreds. So to absolutely no one's surprise, it jumped on Mark Zuckerberg's congressional testimony—a serious event in which the CEO of one of the world's richest companies is answering to the federal government for mistakes like user-data breaches and enabling Russian interference in the 2016 presidential election—like a choice steak. Instead of a fork and knife, though, it used its own pointy tools: memes.

Memes used to be about cats and Chuck Norris. But now, not only is a dry, two-day, multi-hour Congressional grilling session considered meme fodder, it's a veritable treasure trove of repeatable phrases and exploitable images. It wasn't entirely Social Network jokes, but those did abound.

No one was safe: not Zuckerberg, and certainly not the octogenarian Senators doing their best to understand the Facebook. A question about websites' business models has become iconic. Pro-Trump online personalities Diamond and Silk have been catapulted onto the national stage. It's been a time to reflect on just how strange our world has gotten. So we gathered up the wildest Zuckerberg testimony memes the internet has to offer. And no, we still can't really believe this is happening either. And we know one wide-eyed, besuited Harvard alum who probably feels the same way.

What the Zuck

This isn’t Mark Zuckerberg’s first go-round in the meme machine. Zuck Memes were already an established format with a dedicated subreddit. They’re almost too easy: Take one of Zuck’s stilted public Facebook posts, and re-caption it with the awkwardness dialed all the way up. But while some of this week's memes were similarly dada genius

… many others were pure schadenfreude:

Senators and Censorship

Zuckerberg wasn't the only one to have his foibles under the internet's microscope. Did you really think aspiring roastmasters would pass up the chance to troll Senator Ted Cruz, who they allege is the Zodiac Killer?

Or to take some swipes at stodgy, not-quite-tech-savvy Senators?

By far the best baby-boomer blunder, though, was Senator Orrin Hatch's: he wondered how Facebook could "sustain a business model in which users don't pay for your service?" Zuckerberg's response, "Senator, we run ads," quickly became a headline and, of course, a meme.

(Because the meme-to-merch pipeline now flows faster than ever, there is of course a T-shirt.)

Maybe the oddest part of the Zuckerberg testimony's digital carnival was the rise of Lynette Hardaway and Rochelle Richardson, better known as pro-Trump Facebook personalities Diamond and Silk. Their outspoken support of President Trump has earned them over a million followers (including Trump himself), but they recently found the social media platform limiting their posts' spread. When they asked the company why, they say they received this message: “The Policy team has came to the conclusion that your content and your brand has been determined unsafe to the community. This decision is final and it is not appeal-able in any way.” Which would have been that, except that Ted Cruz cited the pair as an example of Facebook's tendency to censor conservative commentators.

Zuckerberg denied censorship, but acknowledged the concern and said Diamond and Silk were victims of an "enforcement error" he was already working to correct. Still, that's plenty of ammunition for a new wave of memes. Especially since there were visual aids.

But somehow, despite grilling him like a flank steak, the internet came to feel for Zuck. Or at least, to find a way to see him as a metaphor for the absurdity of our digital lives:

As well as the tension between the tech world and aging lawmakers trying to bring order to a situation they don't seem to fully understand.

So while we might shake our heads at this absolutely bananas state of affairs in internet culture, it's probably better to think of it this way: to understand this week's hottest memes, you had to tune in to hours of Senators and a globally influential CEO talking through the finer points of the attention economy and internet security policy. That may not be the same thing as high voter turnout, but it's still pretty encouraging. Today, memes; tomorrow, perhaps, genuine civic engagement!


Accidental Facebook post about a toddler becomes an unexpected hit

Not Ramona, but we wish it were.

UPDATE: Oct. 3, 2017, 12:55 p.m. UTC: NPR published a story Tuesday stating that Ramona is, in fact, not a cat; she is a baby. But she does have a cat.

Here’s hoping Ramona and her feline friend become NPR mainstays in the coming weeks.


Original story:

Accidentally posting personal stuff onto a work page is every social media manager’s worst nightmare.

This time, though, after the last few days, it came as a moment of relief. On Monday, an NPR staffer’s accidental Facebook post went viral, amassing more than 20,000 reactions at the time of writing. It concerned none other than a cat named Ramona, presumably belonging to the author of the post.

“Ramona is given new toy: Smiles, examines for 20 seconds, discards,” reads the original post. “Ramona gets a hug: Acquiesces momentarily, squirms to be put down.”

Yes, even the mistake reads like something from NPR.

The post was edited shortly afterward with an apology for the error, indicating that it was intended for a personal account.

But it was too late, and people had caught wind of Ramona the cat. They wanted more.

People were also calling on NPR to ensure the poster in question wouldn’t be disciplined for their minor error.

After all, what’s more certain in life than mistakes and misbehaving cats?
