The NSA Spying Program Value Fits the Serendipity Economy Model

I was listening to Warren Olney’s To the Point on the radio today as they discussed the NSA spying program PRISM (listen here: NSA Spying Program Puts Secretive Court in the Spotlight). One of the questions fit into The Serendipity Economy framework: How does the government determine the value of the program?

No one could offer a valid proof of value. Why? Because the act of collecting the data, although a great example of computerized efficiency, doesn’t produce value unless one employs false metrics, like the number of records stored or the speed of recall. When industrial-age measures result in meaningless metrics, The Serendipity Economy is probably the larger influence.

Here are the six primary assertions associated with The Serendipity Economy with a brief explanation of how each applies to the NSA program.

The process of creation is distinct from value realization. The act of gathering records, no matter how productive, does not result in any direct value. The value arises when analysts identify a pattern, that pattern leads to a suspicion and, when investigated, that suspicion saves lives — and this value is not connected to the productive work of gathering records, but to the analysis of such records, the realization of a threat and the action taken to mitigate the threat — the technology of gathering information does not do any of those things.
Value realization is displaced in time from the act that initiated the value. The data in the NSA program has been gathered for years. It is not clear, because of security restrictions, if anything of value has come out of the program, but it is likely that any large value that could be linked back to PRISM hasn’t yet occurred, because such a value would create a positive story to offset the privacy issues that are front-and-center in current coverage. In The Serendipity Economy, the value of such a program may take years to realize. Contrast this with the likely information technology goals associated with such a project, like hooking into communications databases, collecting and consolidating data, and providing an analytics front-end to the data warehouse. Those goals were likely achieved almost immediately after going live, but that productive work, per the first rule, is distinct from any value, which again may take years to see.
The measure of value requires external validation. Once a pattern is identified, the system cannot determine if it is of value. The value of the pattern must be determined by a process completely separate from the data collection or the pattern recognition process, both of which may be automated. A person must look at the data, examine the conclusions based on the data and determine the value — in this case, the magnitude of the response. Low value, for instance, might result in warrants issued for deeper, perhaps active surveillance or physical searches — high value might result in military or security agency action against a person or group of persons to avert an imminent threat.

It could be argued that an automated pattern recognition system could identify a pattern of value with a high degree of threat probability. That, however, still fits into the model because the analysis is external and separate from the collection and storage of the data. The automated pattern recognition is an example of a productivity increase because it would probably take tens or hundreds of people to examine such data manually. This is a good example of how productivity, as measured in industrial-age terms, complements and co-exists with The Serendipity Economy. In a pure production orientation, however, if data were gathered for five years with no results, such a program would likely be called a failure. Only in The Serendipity Economy, with its recognition of the time displacement of value, does it make sense to continue to examine data over the long periods of time required to sense a positive result.
Value is not fixed and cannot be forecasted. No one associated with PRISM can forecast what its results will be. It may result in the identification and capture or elimination of a low-level threat, or it may avert a major international incident. There is just no way to tell, because no one can forecast what data will go into the system (because future data hasn’t been created yet), what correlations or insights may arise, what actions will be taken, if those actions are timely, etc.
Looking at a network in the present cannot anticipate either its potential for value or any actual value it may produce. If the pool of people from whom the communications metadata is drawn represents those people currently in contact with people from the United States, such a supposition does nothing to say whether that group of people will create an outcome determined to be of value or not. Although it is counter-intuitive to think that what is being looked for may not be found in such a large group of people, it is no more likely that it will be found. If you don’t know what you are looking for, and it hasn’t really existed in the past (or isn’t produced with regularity), any set of people conducting normal, day-to-day activities is just as likely to do something as not to do it, especially when it comes to high-impact events that occur infrequently.
Serendipity may enter at any point in the value web, and it may change the configuration of the value web at any time. And finally, that pool of people being monitored is not static. The value coming from the surveillance program may arise when someone not in the current network (all people in the United States talking to people outside of the United States, for instance) enters that network. That means for years people could monitor metadata about communications and find nothing, but as soon as a new actor enters the network, they bring with them the threat being sought. It may also be that the attributes of those within the network may change, such as someone becoming radicalized, which also perturbs the value of the network.


This post is not intended in any way to defend violations of FISA or PRISM in general, but to suggest that asking about PRISM’s results may not yield a meaningful answer, and it may just be that the meaningful answer has yet to occur. It also may mean that the government cannot share such an answer. Regardless, it is likely that even if the government does possess a meaningful answer about the value generated by the PRISM investment, it took time to deliver that value, and the realization of that value adhered to the attributes outlined in this post.


Daniel W. Rasmus

Daniel W. Rasmus, Founder and Principal Analyst of Serious Insights, is an internationally recognized speaker on the future of work and education. He is the author of several books, including Listening to the Future and Management by Design.
