Until the extent of Government surveillance of phone calls, emails, Web postings, searches, social-media preferences, and so forth became widely known, the commercial purpose of gathering all this private data was unclear. After all, the business model of targeting or pestering people based on algorithms of rapidly shifting likes, associations, and Web-browsing habits never made much sense. True, people corralled into a mass market can be pressured into buying things to satisfy psychological needs. Freud’s American nephew Edward Bernays perfected what he called the ‘engineering of consent’ on behalf of both business and government during the first half of the 20th century.
But once the methods of mass persuasion become generally known, as they did in the 1950s and 1960s, they lose their effectiveness. New methods that provided a plausible veneer of free choice had to be developed. This the Internet giants proceeded to do, using a variety of personal data they were privy to as a means of inferring what products and services their users were interested in. Where Bernays used Freudian theory to reveal hidden desires, the Internet giants claimed they too had a hidden pipeline, via the private data they collected, into their customers’ secret desires. And in contrast to traditional mass-market advertising, Internet ads could be ‘served’ specifically to those already known (through those algorithms) to be in the market for what an advertiser was offering. Social media enhance the odds of success by using group membership and friendship-association as a basis for marketing, aware that many consumer purchase decisions are strongly influenced by ‘influentials’ and ‘reference-groups’. These new methods of advertising have been more successful at attracting investment than at generating sales for advertisers.
It took about 30 years for the public to catch on to Bernays’ methods of psychological manipulation. Such methods no longer work once people become aware of them. Similarly, the success of the Internet giants’ business model depends on concealing how they siphon private data and use it to predict and influence purchase decisions. When users block ‘cookies’, for example, or enter fictitious data about themselves into registration forms, the accuracy that advertisers rely on is degraded. As users grow more aware of how to opt out of being data-mined, the business model breaks down. The model fails even if only a small minority of users actually install privacy-protecting software, or intentionally post false identity or location markers (with a proxy server or a VPN). The connection between the ingenious algorithms and actual purchase decisions becomes ever more tenuous with every ad-block, anonymizer, Web-tracking rejection, and so forth. Behind the growing awareness and usage of these protections lies a breakdown of trust, and trust — which the snoops have forgotten — is the essence of any commercial transaction.
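How quickly the accuracy advertisers pay for erodes can be sketched with a toy calculation. The figures below (a baseline hit rate for cleanly tracked users, a near-chance hit rate for users who block or falsify their data, and the opt-out fractions) are illustrative assumptions, not measurements:

```python
# Toy model: how opt-outs and falsified profiles dilute ad-targeting accuracy.
# All numbers are illustrative assumptions, not measured values.

def effective_accuracy(baseline, chance, opt_out_rate):
    """Blend the hit rate on cleanly tracked users with near-chance
    performance on users who block cookies, fake data, or hide behind VPNs."""
    return (1 - opt_out_rate) * baseline + opt_out_rate * chance

for rate in (0.0, 0.05, 0.10, 0.25, 0.50):
    acc = effective_accuracy(baseline=0.40, chance=0.02, opt_out_rate=rate)
    print(f"opt-out rate {rate:4.0%} -> effective targeting accuracy {acc:.1%}")
```

Even this simple dilution understates the problem, since falsified profiles can also contaminate the inferences drawn about the groups and friends linked to them.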
So, without a credible commercial justification, what purpose is served by collecting all that private data? Data-obsessives have persuaded themselves and like-minded investors that it has some inherent value. But the value proposition here rests on a fatally flawed premise: the notion that raw data, shorn of human intelligence or analysis, is valuable in itself.
This seems to be the mindset of one very large customer who needs no economic justification. Governments of all persuasions — communist, capitalist, fundamentalist, or what-have-you — share a common curiosity about the private lives of ‘their’ citizens (and others). Invoking the false dichotomy of privacy versus security, they claim to be preventing more terror attacks through their sweeping surveillance of all communications. Left unsaid is how this protection system works, and indeed whether it works better than alternative methods. ‘Traffic analysis’ has in fact helped uncover some terrorist plots. But the first (1993) World Trade Center bombing, the attacks on the embassies in East Africa and later in Libya, the attack on the USS Cole, and, most spectacularly, the 2001 attacks on the Pentagon and the World Trade Center all eluded advance detection. It is likely that over-reliance on computer-readable data blinded the spy agencies to common-sense clues like flight-school students telling their instructors they weren’t interested in learning how to land airplanes.
Nevertheless, every failure becomes an argument for additional resources, which legislators routinely provide regardless of austerity everywhere else. The Internet giants and phone companies gather vast amounts of data on their customers, assuring them it’s confidential and anonymized. Actually it’s neither: when presented with a secret Court order authorized by a secret law, they hand over all that data to the Government, in many instances allowing direct access to their servers and call-routing equipment. While this has apparently been going on for some time, only recently have data storage and processing capabilities become sufficient to store and analyze all communications. To this is added data from ground and aerial surveillance videos, mobile devices, Web postings, social-media preferences, and other electronic sources. The extraordinary detail visible from 17,500 feet (5,300 meters) up, for example, is achieved with an array of 368 ordinary phone-cameras and a lot of computer processing to weave the images together. The Air Force can review its database, zoom in on an area or point of interest, and play back the video from that spot on that day. All of this requires facilities of unimaginable size simply to store the tsunami of data streaming in every day.
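A back-of-the-envelope calculation gives a sense of the storage tsunami from just one such airborne array. The per-camera resolution, frame rate, and compression figure below are hypothetical round numbers chosen for illustration, not the specifications of any actual system:

```python
# Rough estimate of daily data volume from a wide-area array of phone-cameras.
# Per-sensor figures are hypothetical round numbers, for illustration only.

cameras = 368                  # sensors in the array described above
megapixels_per_camera = 5      # assumed resolution of each ordinary phone-camera
frames_per_second = 10         # assumed capture rate
bytes_per_pixel = 0.5          # assumed size after on-board compression

bytes_per_second = (cameras * megapixels_per_camera * 1e6
                    * frames_per_second * bytes_per_pixel)
bytes_per_day = bytes_per_second * 60 * 60 * 24

print(f"{bytes_per_second / 1e9:.1f} GB per second")                # ~9.2 GB/s
print(f"{bytes_per_day / 1e12:.0f} TB per day, from one platform")  # ~795 TB/day
```

Multiplied across many platforms, many days, and the other sources listed above, the demand for warehouse-scale storage follows directly.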
Just such a facility is already in operation at Bluffdale, Utah, supplementing the expansion of a linked facility at Fort Meade, Maryland.
Costing two billion dollars, it is five times the size of the U.S. Capitol. ‘Flowing through its servers and routers and stored in near-bottomless databases will be all forms of communication, including the complete contents of private emails, cell phone calls, and Google searches, as well as all sorts of personal data trails — parking receipts, travel itineraries, bookstore purchases’. Pictures of the inside are unavailable, but Google’s own facilities, where much of the data originates, are presumably similar. This is what ‘the cloud’, as the term is used to describe remote data storage, actually looks like.
The storage capacity is measured in something called yottabytes. Gigabytes are familiar, and terabyte-sized hard disks are now available in stores for $70. One thousand of those equal a petabyte. Multiply by another thousand and you get an exabyte; a thousand exabytes make a zettabyte; and a thousand zettabytes make one yottabyte, which is a 1 followed by 24 zeros — 1,000,000,000,000,000,000,000,000. Supercomputers can search vast quantities of data, but at some point a human being must determine what to search for, and what the search results mean. Otherwise the results are too numerous or diffuse to make sense of, and their action implications hazy. A Web search returns millions of results, the vast majority of which are unrelated in any meaningful way to what one is looking for. The user usually selects something from the first 10 pages, a tendency that generated the ‘SEO’ (search-engine optimization) industry. Even the vastly more sophisticated search algorithms used by snoops have the same problem — too many results, no time to check them all, and uncertainty about how to make practical use of them. Diverting resources away from human intelligence and intuitive pattern recognition, into tasks with a very low probability of identifying specific threats, interferes with rather than enhances security by dumbing down the common-sense grasp of the ‘big picture’.
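The ladder of prefixes, and the impossibility of simply reading through such a store, can be made concrete in a few lines of arithmetic (the 1 TB-per-second scan rate is an arbitrary assumption for illustration):

```python
# From one terabyte-sized consumer hard disk up the ladder to a yottabyte.
terabyte  = 10**12
petabyte  = terabyte * 1000
exabyte   = petabyte * 1000
zettabyte = exabyte * 1000
yottabyte = zettabyte * 1000   # a 1 followed by 24 zeros

print(f"1 yottabyte = {yottabyte:,} bytes")
print(f"equal to {yottabyte // terabyte:,} one-terabyte hard disks")

# Even a machine scanning an assumed 1 TB every second would need millennia:
seconds = yottabyte / terabyte
years = seconds / (60 * 60 * 24 * 365)
print(f"about {years:,.0f} years to read it all once at 1 TB per second")
```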
– – – – – – – –
Chess champions like Susan Polgar win by recognizing complex patterns on the chessboard that presage advantages or disadvantages. The ability to recognize such patterns is developed by tournament experience and study of thousands of past games.
This does not involve a computational talent or an ability to think many moves ahead, but rather a constantly adapting mental grasp of the ‘big picture’. The rules of chess are embedded in the pattern-recognition ability. Polgar, like other grandmasters, can play winning chess without even looking at the board, so vivid is her mental grasp of the game’s dynamics. But her memory for random arrangements of chess pieces that don’t follow the rules is no better than an average person’s. The part of the brain responsible for pattern recognition is the same one that enables us to recognize faces.
The most complex feats of pattern recognition are executed instantly to differentiate those whom we know from those whom we don’t. Other parts of the brain connect the recognized face with whatever memory and experience we may have of that person. Clearly, the better we are at this — the more finely tuned our discrimination is — the better our chances of survival. Sorting out the right moves ‘in a flash’ from the chaos of irrelevant noise can make the difference between life and death.
It is precisely this ability to see the ‘big picture’ that is lost when processing yottabytes of data. Only a very small fraction of the available information is actionable intelligence. Misinformation, intentional disinformation, and sheer irrelevant noise abound. Whether the task is identifying a terrorist, blocking spam, picking stocks, finding a destination in a strange city, selecting a mate, deciding whether to trust a potential business associate or buy a particular brand, the ability to ignore noise is essential. One way to do this is to rely on family and friends, trusted associates, or recognized authorities. In traditional societies, age-old ancestral patterns are followed out of respect for the past, and because they work pretty well if circumstances don’t change too much.
In modern societies too, public opinion grants enormous deference to anointed authorities even when they are out of their depth. Purported expertise often substitutes for independent analysis. People at the top of large organizations can easily fall prey to intelligence failures due to faulty filtering-out of highly relevant information. The well-known ‘executive bubble’ restricts input to a few trusted sources, excluding meaningful dialogue with the outside world, reducing all thought to an echo-chamber of the like-minded.
Modern societies also resort to modeling to reduce the available information about a problem under consideration. Climate modeling and economic forecasting are two prominent methods in widespread use. Their predictive power is assumed to be based on the superior ability of computers to take into account all the factors relevant to outcomes. By seeking to replicate all of the relevant reality, they merely avoid the critical task of discriminating the significant from the trivial, while concealing the judgments that go into the selection, quantification, and weighting of inputs (often jiggering them to produce a favored result).
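A deliberately crude example shows how much of the ‘forecast’ lives in the weighting rather than in the data. The three indicators and both sets of weights below are invented for illustration; the same inputs, combined under two different but plausible-looking judgments, point in opposite directions:

```python
# Two "forecasts" from the same inputs, differing only in the analyst's weighting.
# Indicators and weights are invented; positive output = growth, negative = contraction.

indicators = {"consumer_confidence": +0.8, "factory_orders": -0.5, "credit_growth": -0.3}

weights_a = {"consumer_confidence": 0.6, "factory_orders": 0.2, "credit_growth": 0.2}  # analyst A's judgment
weights_b = {"consumer_confidence": 0.2, "factory_orders": 0.4, "credit_growth": 0.4}  # analyst B's judgment

def forecast(weights):
    return sum(weights[name] * value for name, value in indicators.items())

print(f"model A predicts {forecast(weights_a):+.2f}")   # +0.32: expansion
print(f"model B predicts {forecast(weights_b):+.2f}")   # -0.16: contraction
```

Real climate and economic models are vastly more elaborate, but the selection and weighting judgments are buried correspondingly deeper.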
Celebrity and branding, and celebrity branding, are other mental shortcuts that ‘work’ by filtering out the thinking required for important decisions. Masses of people rely on trusted brands when they buy soap, cars, or politicians. Hence the value of celebrity endorsements. Academia and government are hardly exempt from brand-reliance, as credentials often confer ersatz credibility well beyond their holders’ specialties.
Pattern recognition is thus a kind of simplification, a selection of fundamental forms that encode the larger whole.
In art as in life, this encoding of fundamental forms bypasses computation and ratiocination. It is simply too time-consuming (and tedious) to sift through all the logical possibilities before drawing a picture or acting on a decision. Intuitive forecasting, acting on a hunch, split-second decisions based on fragmentary information — these are the stuff of life. We compare the instant situation to something like it in our experience, adjust as best we can for different circumstances now, and fly at blinding speed into the canyon of choices. The patterns we recognize and act on have in large part been learned and imprinted by our own education, experiences, memories, and oft-repeated mental associations. Advertisers and publicists spend a great deal of money to implant their brands and styles in our minds, so that we adopt them as if they came from our own innermost desires. With art we reclaim our own minds and hearts, re-discovering a personal style that really fits our experiences and preferences.
The forms and patterns of artwork are as varied as life itself. Our notions of beauty evoke recognition of pleasurable experiences we or our species or our antecedent species have had. The landscape with wind-blown grasses, a flowing river, green trees and hills in the distance, and drifting clouds recalls, perhaps, a treetop view of the savannah. Oceans, even storm-tossed seas, give a sense of our origins, our wandering, our inter-connectedness. The patterns we recognize in these and other natural forms are intimately connected to our well-being and to our very survival.
– – – – – – – – –
Additional information:
Matt Damon, as the math genius in ‘Good Will Hunting’, takes a shot at answering the question ‘Why shouldn’t I work for the NSA?’
Some free software to protect personal privacy:
Block ads: Adblock Plus: https://adblockplus.org/
Remove Local Shared Objects, or Flash cookies: BetterPrivacy: chrome://bprivacy/content/BetterPrivacy.html
Block online tracking: DoNotTrackMe: https://www.abine.com/how-donottrackme-works/
Secure Web browsing: HTTPS Everywhere: https://www.eff.org/https-everywhere
Block tracking scripts: Ghostery: https://www.ghostery.com/
Search w/o being tracked: DuckDuckGo: https://duckduckgo.com/