"The biggest psychological experiment ever is being conducted, and we're all taking part in it: every day, a billion people are tested online. Which ingenious tricks and digital techniques ensure that we fill our online shopping carts to the brim, stay on websites as long as possible, or vote for a particular candidate?
The bankruptcies of department stores and shoe shops clearly show that our buying behaviour is rapidly shifting to the Internet. An entirely new field has arisen: that of 'user experience' architects and 'online persuasion officers'. How do these digital data dealers use, manipulate and abuse our user experience? Not just when it comes to buying things, but also with regard to our free time and political preferences.
Aren't companies, running millions of tests at a time, miles ahead of science and government in this respect? Now the creators of these digital seduction techniques, former Google employees among them, are themselves arguing for the introduction of an ethical code. What does it mean when the conductors of experiments are themselves asking for their power and possibilities to be restricted?"
The Guardian is running an article about a 'mysterious' big-data analytics company called Cambridge Analytica and its ties to SCL Group, a 25-year-old military psyops company in the UK later backed by "secretive hedge fund billionaire" Robert Mercer. In the article, a former employee calls it "this dark, dystopian data company that gave the world Trump."
Mercer, who has a background in computer science, is alleged to be at the centre of a multimillion-dollar propaganda network.
"Facebook was the source of the psychological insights that enabled Cambridge Analytica to target individuals. It was also the mechanism that enabled them to be delivered on a large scale. The company also (perfectly legally) bought consumer datasets -- on everything from magazine subscriptions to airline travel -- and uniquely it appended these with the psych data to voter files... Finding "persuadable" voters is key for any campaign, and with its treasure trove of data, Cambridge Analytica could target people high in neuroticism, for example, with images of immigrants "swamping" the country. The key is finding emotional triggers for each individual voter. Cambridge Analytica worked on campaigns in several key states for a Republican political action committee. Its key objective, according to a memo the Observer has seen, was "voter disengagement" and "to persuade Democrat voters to stay at home"... In the U.S., the government is bound by strict laws about what data it can collect on individuals. But for private companies, anything goes."
"'What's on your mind?' It's the friendly Facebook question which lets you share how you're feeling. It's also the question that unlocks the details of your life and helps turn your thoughts into profits.
Facebook has the ability to track much of your browsing history, even when you're not logged on, and even if you aren't a member of the social network at all. This is one of the methods used to deliver targeted advertising and 'news' to your Facebook feed. This is why you are unlikely to see anything that challenges your world view. This feedback loop is fuelling the rise and power of 'fake news'. "We're seeing news that's tailored ever more tightly towards those kinds of things that people will click on, and will share, rather than things that perhaps are necessarily good for them", says one media analyst. This information grants huge power to those with access to it. Republican Party strategist Patrick Ruffini says, "What it does give us is much greater level of certainty and granularity and precision down to the individual voter, down to the individual precinct about how things are going to go". As a result, former Facebook journalist Adam Schrader thinks that there is "a legitimate argument to this that Facebook influenced the election, the United States Election results."
"Children refusing to put down their phones is a common flashpoint in many homes, with a third of British children aged 12 to 15 admitting they do not have a good balance between screen time and other activities.
But in the US, the problem has become so severe for some families that children as young as 13 are being treated for digital technology addiction.
One ‘smartphone rehab’ centre near Seattle has started offering residential “intensive recovery programs” for teenagers who have trouble controlling their use of electronic devices.
The Restart Life Centre says parents have been asking it to offer courses of treatment to their children for more than eight years.
Hilarie Cash, the Centre’s founder, told Sky News smartphones, tablets and other mobile devices can be so stimulating and entertaining that they “override all those natural instincts that children actually have for movement and exploration and social interaction”.
Child psychotherapist Julie Lynn Evans, who has worked with hospitals, schools and families for 25 years, said her workload has significantly increased since the use of smartphones became widespread among young people.
“It’s a simplistic view, but I think it is the ubiquity of broadband and smartphones that has changed the pace and the power and the drama of mental illness in young people,” she told The Telegraph.
A ComRes poll of more than 1,000 parents of children aged under 18, published in September 2015, found 47 per cent of parents said they thought their children spent too much time in front of screens, with 43 per cent saying this amounts to an emotional dependency."
... or a dream come true for those in power. And those in power are the same entities pushing IoT technologies.
A little background reading about JTRIG from the Snowden documents is helpful. It's the modern-day equivalent of the Zersetzung, the special unit of the Stasi that was used to attack, repress and sabotage political opponents: a power greatly expanded in a society driven by the IoT.
Full article from Daily Dot:
"In 2014, security guru Bruce Schneier said, “Surveillance is the business model of the Internet. We build systems that spy on people in exchange for services. Corporations call it marketing.” The abstract and novel nature of these services tends to obscure our true relationship to companies like Facebook or Google. As the old saying goes, if you don’t pay for a product, you are the product.
But what happens when the Internet stops being just “that fiddly thing with a mouse” and becomes “the real world”? Surveillance becomes the business model of everything, as more and more companies look to turn the world into a collection of data points.
If we truly understood the bargain we were making when we give up our data for free or discounted services, would we still sign on the dotted line (or agree to the Terms and Conditions)? Would we still accept constant monitoring of our driving habits in exchange for potential insurance breaks, or allow our energy consumption to be uploaded into the cloud in exchange for “smart data” about it?
Nowhere is our ignorance of the trade-offs greater, or the consequences more worrisome, than in our madcap rush to connect every toaster, fridge, car, and medical device to the Internet.
Welcome to the Internet of Things, what Schneier calls “the World Size Web,” already growing around you as we speak, which creates such a complete picture of our lives that Dr. Richard Tynan of Privacy International calls them “doppelgängers”—mirror images of ourselves built on constantly updated data. These doppelgängers live in the cloud, where they can easily be interrogated by intelligence agencies. Nicholas Weaver, a security researcher at University of California, Berkeley, points out that “Under the FISA Amendments Act 702 (aka PRISM), the NSA can directly ask Google for any data collected on a valid foreign intelligence target through Google’s Nest service, including a Nest Cam.” And that’s just one legal way of questioning your digital doppelgänger; we’ve all heard enough stories about hacked cloud storage to be wary of trusting our entire lives to it.
But with the IoT, the potential goes beyond simple espionage, into outright sabotage. Imagine an enemy that can remotely disable the brakes in your car, or (even more subtly) give you food poisoning by hacking your fridge. That’s a new kind of power. “The surveillance, the interference, the manipulation … the full life cycle is the ultimate nightmare,” says Tynan.
The professional spies agree that the IoT changes the game. “‘Transformational’ is an overused word, but I do believe it properly applies to these technologies,” then CIA Director David Petraeus told a 2012 summit organized by the agency’s venture capital firm, In-Q-Tel, “particularly to their effect on clandestine tradecraft,” according to Wired.
Clandestine tradecraft is not about watching, but about interfering. Take, for example, the Joint Threat Research Intelligence Group (JTRIG), the dirty tricks division of GCHQ, the British intelligence agency. As the Snowden documents reveal, JTRIG wants to create “Cyber Magicians” who can “make something happen in the real…world,” including ruining business deals, intimidating activists, and sexual entrapment (“honeypots”). The documents show that JTRIG operatives will ignore international law to achieve their goals, which are not about fighting terrorism, but, in fact, targeting individuals who have not been charged with or convicted of any crime.
The Internet of Things “is a JTRIG wet dream,” says security researcher Rob Graham. But you don’t have to be a spy to take advantage of the IoT. Thanks to widespread security vulnerabilities in most IoT devices, almost anyone can take advantage of it. That means cops, spies, gangsters, anyone with the motivation and resources—but probably bored teenagers as well. “I can take any competent computer person and take them from zero to Junior Hacker 101 in a weekend,” says security researcher Dan Tentler. The security of most IoT devices—including home IoT, but also smart cities, power plants, gas pipelines, self-driving cars, and medical devices—is laughably bad. “The barrier to entry is not very tall,” he says, “especially when what’s being released to consumers is so trivial to get into.”
That makes the IoT vulnerable—our society vulnerable—to any criminal with a weekend to spend learning how to hack. “When we talk about vulnerabilities in computers…people are using a lot of rhetoric in the abstract,” says Privacy International’s Tynan. “What we really mean is, vulnerable to somebody. That somebody you’re vulnerable to is the real question.”
“They’re the ones with the power over you,” he added. That means intelligence agencies, sure, but really anyone with the time and motivation to learn how to hack. And, as Joshua Corman of I Am the Cavalry, a concerned group of security researchers, once put it, “There are as many motivations to hacking as there are motivations in the human condition. Hacking is a form of power.”
The authorities want that power; entities like JTRIG, the NSA, the FBI and the DOJ want to be able to not just surveil but also to disrupt, to sabotage, to interfere. Right now the Bureau wants to force Apple to create the ability to deliver backdoored software updates to iPhones, allowing law enforcement access to locally stored, encrypted data. Chris Soghoian, a technologist at the ACLU, tweeted, “If DOJ get what they want in this Apple case, imagine the surveillance assistance they’ll be able to force from Internet of Things companies.”
“The notion that there are legal checks and balances in place is a fiction,” Tynan says. “We need to rely more on technology to increase the hurdles required. For the likes of JTRIG to take the massive resources of the U.K. state and focus them on destroying certain individuals, potentially under flimsy pretenses—I just can’t understand the mentality of these people.”
Defending ourselves in this new, insecure world is difficult, perhaps impossible. “If you go on the Internet, it’s a free-for-all,” Tentler says. “Despite the fact that we have these three-letter agencies, they’re not here to help us; they’re not our friends. When the NSA and GCHQ learn from the bad guys and use those techniques on us, we should be worried.”
If the Internet is a free-for-all, and with the Internet of Things we’re putting the entire world on the Internet, what does that make us?
“Fish in a barrel?”
"...An instrument no bigger than an inhaler lodges a needle into the back of Benigeri’s arm. Woo removes his hand to reveal a white plate sitting just above the implant. Benigeri smiles.
Read more at https://www.businessinsider.com/san-francisco-biohacking-continuous-glucose-monitors-2017-1#To0MjhyLcHHBIhcD.99
"On March 7, the US awoke to a fresh cache of internal CIA documents posted on WikiLeaks. They detail the spy organization’s playbook for cracking digital communications.
Snowden’s NSA revelations sent shockwaves around the world. Despite WikiLeaks’ best efforts at theatrics—distributing an encrypted folder and tweeting the password “SplinterItIntoAThousandPiecesAndScatterItIntoTheWinds”—the Vault 7 leak has elicited little more than a shrug from the media and the public, even if the spooks are seriously worried. Maybe it’s because we already assume the government can listen to everything."
We train the machine so well, and its use is so ubiquitous, that it can become invisible: Google is making CAPTCHAs invisible using "a combination of machine learning and advanced risk analysis that adapts to new and emerging threats," Ars Technica reports.
"The old reCAPTCHA system was pretty easy -- just a simple "I'm not a robot" checkbox would get people through your sign-up page. The new version is even simpler, and it doesn't use a challenge or checkbox. It works invisibly in the background, somehow, to identify bots from humans. [...] When sites switch over to the invisible CAPTCHA system, most users won't see CAPTCHAs at all, not even the "I'm not a robot" checkbox. If you are flagged as "suspicious" by the system, then it will display the usual challenges.
reCAPTCHA was bought by Google in 2009 and was used to put unsuspecting website users to work for Google. Some CAPTCHA systems create arbitrary problems for users to solve, but older reCAPTCHA challenges actually used problems Google's computers needed to solve but couldn't. Google digitizes millions of books, but sometimes the OCR (optical character recognition) software can't recognize a word, so that word is sent into the reCAPTCHA system for solving by humans. If you've ever solved a reCAPTCHA that looks like a set of numbers, those were from Google's camera-covered Street View cars, which whizz down the streets and identify house numbers. If the OCR software couldn't figure out a house number, that number was made into a CAPTCHA for solving by humans. The grid of pictures that would ask you to "select all the cats" was used to train computer image recognition algorithms."
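The control-word scheme described above can be sketched in a few lines. This is a hypothetical simplification for illustration, not Google's actual implementation: a word with a known answer gates the human check, while readings of the unknown (OCR-failed) word are collected as votes until enough verified humans agree.

```python
from collections import Counter

class Recaptcha:
    """Toy two-word CAPTCHA: one known 'control' word verifies the human,
    one unknown (OCR-failed) word is transcribed by the crowd."""

    def __init__(self, agreement_threshold=3):
        self.threshold = agreement_threshold
        self.votes = {}      # unknown-word id -> Counter of transcriptions
        self.resolved = {}   # unknown-word id -> consensus transcription

    def submit(self, control_answer, control_expected, unknown_id, unknown_answer):
        """Return True if the control word was answered correctly; if so,
        record the user's reading of the unknown word as one vote."""
        if control_answer.strip().lower() != control_expected.lower():
            return False  # failed the known word: treated as a bot
        counter = self.votes.setdefault(unknown_id, Counter())
        counter[unknown_answer.strip().lower()] += 1
        word, count = counter.most_common(1)[0]
        if count >= self.threshold:
            # Enough independent humans agree: the digitization is done.
            self.resolved[unknown_id] = word
        return True
```

The agreement threshold is the interesting design choice: without it, a single user (or bot that slipped past the control word) could poison the digitized text.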
"WikiLeaks has unleashed a treasure trove of data to the internet, exposing information about the CIA's arsenal of hacking tools. Code-named Vault 7, the first data is due to be released in serialized form, starting off with "Year Zero" as part one. A cache of over 8,500 documents and files has been made available via BitTorrent in an encrypted archive. The password to the files is 'SplinterItIntoAThousandPiecesAndScatterItIntoTheWinds'.
The documents reveal that the CIA worked with MI5 in the UK to infect Samsung smart TVs so their microphones could be turned on at will. Investigations were carried out into gaining control of modern cars and trucks, and there is even a specialized division of the CIA focused on accessing, controlling and exploiting iPhones and iPads. These techniques, along with Android zero-days, enable the CIA "to bypass the encryption of WhatsApp, Signal, Telegram, Wiebo, Confide and Cloackman by hacking the 'smart' phones that they run on and collecting audio and message traffic before encryption is applied."
"....As technology becomes smaller, the way we carry it has progressed from luggable, to wearable and now towards devices that reside inside the human body, or insertables. This trend is particularly observable in many medical devices, such as pacemakers that were once large stand-alone devices and are now completely inserted into the body. We are now seeing a similar trajectory with non-medical systems. While people once carried keys to open office doors, these have been mostly replaced with wearable access dongles, worn around the neck or clipped to clothing. Some individuals have voluntarily taken the technology from these dongles and inserted it directly into their body. In this paper we introduce insertables as a new interaction device of choice and provide a definition of insertables, classifying emerging and near future devices as insertables. This paper demonstrates this trajectory towards devices inside the human body, and carves out insertables as a specific subset of devices which are voluntary, non-surgical and removable."
Read more at http://www.firstmonday.dk/ojs/index.php/fm/article/view/6214/5970
"If you pull out your phone to check Twitter while waiting for the light to change, or read e-mails while brushing your teeth, you might be what the American Psychological Association calls a “constant checker.” And chances are, it’s hurting your mental health.
Last week, the APA released a study finding that Americans were experiencing the first statistically significant stress increase in the survey's 10-year history. In January, 57 percent of respondents of all political stripes said the U.S. political climate was a very or somewhat significant source of stress, up from 52 percent who said the same thing in August. On Thursday, the APA released the second part of its findings, "Stress In America: Coping With Change," examining the role technology and social media play in American stress levels.
Social media use has skyrocketed from 7 percent of American adults in 2005 to 65 percent in 2015. For those in the 18-29 age range, the increase is larger, from 12 percent to a remarkable 90 percent. But while an increase in social media usage is hardly surprising, the number of people who just can’t tear themselves away is stark: Nowadays, 43 percent of Americans say they are checking their e-mails, texts, or social media accounts constantly. And their stress levels are paying for it: On a 10-point scale, constant checkers reported an average stress level of 5.3. For the rest of Americans, the average level is a 4.4.
If the first step toward recovery, however, is admitting there is a problem, Americans are on their way. Some 65 percent of respondents said “unplugging” or taking a “digital detox” is important. But alas, knowing you have a problem is not the same as fixing it: Only 28 percent of those Americans say they take their own advice."
"General Electric will put cameras, microphones, and sensors on 3,200 street lights in San Diego this year, marking the first large-scale use of "smart city" tools GE says can help monitor traffic and pinpoint crime, but raising potential privacy concerns.
Based on technology from GE's Current division, Intel and AT&T, the system will use sensing nodes on light poles to locate gunshots, estimate crowd sizes, check vehicle speeds and other tasks, GE and the city said on Wednesday. The city will provide the data to entrepreneurs and students to develop applications.
Companies expect a growing market for such systems as cities seek better data to plan and run their operations. San Diego is a test of "Internet of things" technology that GE Current provides for commercial buildings and industrial sites.
San Diego's city council approved the lighting in December, without discussion of potential privacy issues raised by the surveillance system, and no objections arose during a pilot that began in 2014 in downtown San Diego, Lebron said.
"It's anonymous data with no personal identifiers," she said, adding that the video is not as detailed as security camera footage.
The San Diego Police Department already uses ShotSpotter, a gunfire detection system, to help solve crimes, Lebron said. San Francisco, Chicago and more than 40 other cities use ShotSpotter, according to the company's website.
GE hopes cities will make the data available to businesses. Current's data and open software platform should allow programmers to develop applications, said John Gordon, chief digital officer at GE Current: "Everything from traffic and parking problems to finding the quietest way to walk home and have a cell phone conversation."
"A German government watchdog has ordered parents to “destroy” an internet-connected doll for fear it could be used as a surveillance device. According to a report from BBC News, the German Federal Network Agency said the doll (which contains a microphone and speaker) was equivalent to a “concealed transmitting device” and therefore prohibited under German telecom law.
The doll in question is “My Friend Cayla,” a toy which has already been the target of consumer complaints in the EU and US. In December last year, privacy advocates said the toy recorded kids’ conversations without proper consent, violating the Children’s Online Privacy Protection Act.
Cayla uses a microphone to listen to questions, sending this audio over Wi-Fi to a third-party company (Nuance) that converts it to text. This is then used to search the internet, allowing the doll to answer basic questions, like “What’s a baby kangaroo called?” as well as play games. In addition to privacy concerns over data collection, security researchers found that Cayla can be easily hacked. The doll’s insecure Bluetooth connection can be compromised, letting a third party record audio via the toy, or even speak to children using its voice.
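The doll's question pipeline (microphone audio sent over Wi-Fi to a third party for transcription, then a web lookup) can be sketched as below. Both network steps are stubbed out, since the doll's real endpoints and Nuance's API are not given in the article; the stub names and the canned knowledge base are my own. The point of the shape is the privacy concern: every utterance has to leave the device before the doll can answer.

```python
def speech_to_text_stub(audio_bytes: bytes) -> str:
    # Stand-in for the real cloud transcription step: in the actual doll
    # this is raw audio shipped over Wi-Fi to a third party. Here we just
    # decode canned bytes so the pipeline shape stays visible.
    return audio_bytes.decode("utf-8")

# Stand-in for the internet-search backend the doll queries.
KNOWLEDGE = {
    "what's a baby kangaroo called?": "A baby kangaroo is called a joey.",
}

def answer_question(audio_bytes: bytes) -> str:
    """Full pipeline: audio in -> transcription -> lookup -> answer out."""
    question = speech_to_text_stub(audio_bytes).strip().lower()
    return KNOWLEDGE.get(question, "I don't know that one.")
```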
Although the FTC has not yet taken any action against Cayla or its manufacturer, Genesis Toys, German data and privacy laws are more stringent than those in America. The legacy of the Stasi, the secret police force that set up one of the most invasive mass-surveillance regimes ever in Communist East Germany, has made the country's legislators vigilant against such infringements."
"When you browse online for a new pair of shoes, pick a movie to stream on Netflix or apply for a car loan, an algorithm likely has a say in the outcome.
The complex mathematical formulas are playing a growing role in all walks of life: from detecting skin cancers to suggesting new Facebook friends, deciding who gets a job, how police resources are deployed, who gets insurance at what cost, or who is on a "no fly" list.
Algorithms are being used—experimentally—to write news articles from raw data, while Donald Trump's presidential campaign was helped by behavioral marketers who used an algorithm to locate the highest concentrations of "persuadable voters."
But while such automated tools can inject a measure of objectivity into otherwise subjective decisions, fears are rising over the lack of transparency algorithms can entail, with pressure growing to apply standards of ethics or "accountability."
Data scientist Cathy O'Neil cautions about "blindly trusting" formulas to determine a fair outcome.
"Algorithms are not inherently fair, because the person who builds the model defines success," she said.
O'Neil argues that while some algorithms may be helpful, others can be nefarious. In her 2016 book, "Weapons of Math Destruction," she cites some troubling examples in the United States:
- Public schools in Washington DC in 2010 fired more than 200 teachers—including several well-respected instructors—based on scores in an algorithmic formula which evaluated performance.
- A man diagnosed with bipolar disorder was rejected for employment at seven major retailers after a third-party "personality" test deemed him a high risk based on its algorithmic classification.
- Many jurisdictions are using "predictive policing" to shift resources to likely "hot spots." O'Neil says that depending on how data is fed into the system, this could lead to discovery of more minor crimes and a "feedback loop" which stigmatizes poor communities.
- Some courts rely on computer-ranked formulas to determine jail sentences and parole, which may discriminate against minorities by taking into account "risk" factors such as their neighborhoods and friend or family links to crime.
- In the world of finance, brokers "scrape" data from online and other sources in new ways to make decisions on credit or insurance. This too often amplifies prejudice against the disadvantaged, O'Neil argues.
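The predictive-policing feedback loop above can be made concrete with a toy simulation (my own illustration, not from O'Neil's book): districts with identical true crime rates, where crime is only recorded where patrols are present and next round's patrols follow recorded crime. An initially over-patrolled district stays over-patrolled forever, because the data measures where police looked, not where crime happened.

```python
def simulate(initial_patrols, true_rate=0.1, rounds=5, total_patrols=10.0):
    """Districts with IDENTICAL true crime rates. Crime is only recorded
    where patrols are present; each round, patrols are reallocated in
    proportion to cumulative recorded crime."""
    patrols = list(initial_patrols)
    recorded = [0.0] * len(patrols)
    for _ in range(rounds):
        for i, p in enumerate(patrols):
            recorded[i] += true_rate * p   # recording requires presence
        total = sum(recorded)
        patrols = [total_patrols * r / total for r in recorded]
    return patrols
```

Starting from a 6/4 split, the allocation never converges back to 5/5: the initial bias is self-perpetuating even though both districts are equally criminal.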
Her findings were echoed in a White House report last year warning that algorithmic systems "are not infallible—they rely on the imperfect inputs, logic, probability, and people who design them."
"Researchers at Stanford and Princeton universities have found a way to connect the dots between people’s private online activity and their Twitter accounts—even for people who have never tweeted.
When the team tested the technique on 400 real people who submitted their browsing history, they were able to correctly pick out the volunteers’ Twitter profiles nearly three-quarters of the time.
Here’s how the de-anonymization system works: The researchers figured that a person is more likely to click a link that was shared on social media by a friend—or a friend of a friend—than any other random link on the internet. (Their model controls for the baseline popularity of each website.) With that in mind, and the details of an anonymous person’s browser history in hand, the researchers can compute the probability that any one Twitter user created that browsing history. People’s basic tendency to follow links they come across on Twitter unmasks them—and it usually takes less than a minute.
“You can even be de-anonymized if you just browse and follow people, without actually sharing anything.”
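The scoring idea the researchers describe can be sketched as a toy likelihood ranking (my own simplification, not the Stanford/Princeton code): each candidate user's feed is scored against the anonymous history, with rare links weighted far more heavily than popular ones, mirroring the paper's control for baseline popularity.

```python
import math

def score_candidates(history, feeds, popularity):
    """Rank candidate Twitter users by how well their feed 'explains'
    an anonymous browsing history.

    history:    set of URLs from the anonymous browser history
    feeds:      dict user -> set of URLs shared in that user's network
    popularity: dict URL -> fraction of all users exposed to it
    Returns (user, score) pairs sorted best-first."""
    scores = {}
    for user, feed in feeds.items():
        score = 0.0
        for url in history:
            if url in feed:
                # A rare link appearing in both the history and the feed
                # is strong evidence; a popular link is weak evidence.
                score += -math.log(popularity.get(url, 1.0))
        scores[user] = score
    return sorted(scores.items(), key=lambda kv: kv[1], reverse=True)
```

Even one or two niche links shared only within a small social circle dominate the score, which is why "just browsing" is enough to be unmasked.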