Turbulences in the Cloud



Times are tough in the net. At a time when we have the most impressive peer communication tools for sharing and collective construction of knowledge, we face the largest global offensive to take our fundamental rights, such as the right to privacy, the management of our information and the ability to interact freely. Welcome to the cloud!

What Is The Cloud?

When we talk about the cloud we're referring to cloud computing, a term that has its origin in the cloud-shaped drawing used for decades in technical diagrams to represent a wide-area network like the Internet. Other terms are typically used with a similar meaning, like "SaaS" (Software as a Service) or "Web 2.0". These terms, like many others created by the market, have different interpretations that vary according to who uses them and what they have to offer, but we can summarize some characteristics they all share: we are talking specifically about computing services delivered through software installed on remote machines. These programs are installed and run on the provider's servers and are accessed over a data network like the Internet from thin clients that don't require logic of their own. The information is stored on the server and maintenance is performed by the provider. Taken to completion, this model turns the personal computer into an empty client that accesses programs and information hosted in the cloud.
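
The split described above can be sketched in a few lines. This is a minimal illustration, not any provider's real system: the "provider" holds both the program logic and the data, and the "thin client" only sends a request and displays the answer. The document name and its content are invented for the example.

```python
# Minimal sketch of the thin-client model: logic and data live on the
# server; the client holds nothing and mediates nothing.
import json
import threading
import urllib.request
from http.server import BaseHTTPRequestHandler, HTTPServer

DOCUMENTS = {"draft": "Times are tough in the net."}  # data held by the provider

class Provider(BaseHTTPRequestHandler):
    def do_GET(self):
        # The provider mediates every access to the user's own information.
        name = self.path.lstrip("/")
        body = json.dumps({"doc": DOCUMENTS.get(name, "")}).encode()
        self.send_response(200)
        self.send_header("Content-Type", "application/json")
        self.end_headers()
        self.wfile.write(body)

    def log_message(self, *args):  # silence request logging
        pass

server = HTTPServer(("127.0.0.1", 0), Provider)  # port 0: pick any free port
threading.Thread(target=server.serve_forever, daemon=True).start()

# The "thin client": no local copy of the program or of the data.
url = f"http://127.0.0.1:{server.server_port}/draft"
with urllib.request.urlopen(url) as resp:
    doc = json.loads(resp.read())["doc"]

server.shutdown()
print(doc)  # the text exists locally only while the provider cooperates
```

If the server refuses the request, or simply disappears, the client is left with nothing, which is exactly the dependency the following sections discuss.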

Services like Blogspot, Facebook, Google (Gmail, Docs, Maps, etc.), Microsoft Windows Live, Linkedin, Salesforce, Twitter and Youtube are examples of services in the cloud in which users lack the freedoms that define free software, so it could be said that these services are a special case of nonfree software; we could refer to them as "nonfree services". This kind of service, besides carrying the restrictions of traditional nonfree software, has the added problem of direct control over the information and the access to it. With nonfree services you don't even have access to the executable binary of the programs, which eliminates the possibility of making copies to run them offline or without the intervention of the provider.

In this kind of service, the provider is involved and omnipresent during the whole operation of the system. Breaking the contract, canceling, or ending payments are no longer options for the users. The problems of the single-provider model are exacerbated when the provider is the middleman in every transaction and can discontinue the service according to its own policies and priorities, or even because of its disappearance from the market. When the service is canceled, for whatever reason, loss of access to the information is an immediate consequence. In many cases getting the information back may be impossible, and we may find that the applications used for processing it are no longer accessible.

To the surprise of most, it's not enough to use free software on our personal computers if our information, and the logic that controls it, resides in a cloud designed under a philosophy that has little to do with protecting our liberty and independence. All the benefits of free software can disappear in the cloud, because this model increases the level of dependency and control beyond that of traditional nonfree software. The free software that we run on our personal computers ends up being little more than a terminal that connects us to programs running on remote servers.

Under this model there's no real technical innovation, and the technology used is the same we already know, so where's the novelty? From the users' point of view it's almost nonexistent. The truth is that the innovation is not for the users but for those who signed up to be part of the enormous business of global information management: those who understood a long time ago that ownership of the computing infrastructure, understood as a means of production, is strategic. It generates economic profit by positioning the service provider as a mandatory middleman, while at the same time serving as an effective tool of social control.

These big multinational companies have the capacity to correlate the users' information that they obtain through their different services. They have the power to know about our relationships, what we search for, what we read, where we are in real time. Never in the history of humanity has anybody had such tracking power over people. History shows that we can't leave this kind of information under the control of multinationals or governments. This kind of information should not exist, at least not without citizen control.

Network Effect

The more users a service has, the bigger the effect the network produces. With each passing day, the users that participate in these networks become more dependent on them, and it becomes harder to leave. The more information we deposit in these services, the harder it is to leave and recover the work invested in them. If our communication is done with them in the middle, leaving means disconnecting from that social group in real life. The basic rule is simple: the more captive, the better.

Privacy problems appear constantly in the cloud. Privacy is not only about our deepest secrets; it also means not being constantly tracked. Every click can leave a trail that is captured, centralized and stored, so it can later be analyzed by algorithms that detect behavioral patterns and then infer how we think or deduce how we'll act.
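
How little analysis it takes to turn a click trail into a profile can be shown with a toy example. Everything here is invented (the user, the pages, the categories); real tracking systems are vastly more sophisticated, but the principle is the same: aggregate the trail, out falls the behavior.

```python
# Toy illustration: a click log, and a "profile" inferred from nothing
# but the clicks themselves.
from collections import Counter

click_trail = [
    {"user": "ana", "page": "/news/politics", "hour": 8},
    {"user": "ana", "page": "/news/politics", "hour": 9},
    {"user": "ana", "page": "/shop/books",    "hour": 21},
    {"user": "ana", "page": "/news/politics", "hour": 8},
]

def profile(trail):
    """Infer dominant interest and habitual hour from a click trail."""
    interests = Counter(entry["page"].split("/")[1] for entry in trail)
    hours = Counter(entry["hour"] for entry in trail)
    return {
        "top_interest": interests.most_common(1)[0][0],
        "usual_hour": hours.most_common(1)[0][0],
    }

print(profile(click_trail))  # {'top_interest': 'news', 'usual_hour': 8}
```

Four clicks are already enough to say what this person reads and when she is online; multiply by years of clicks across correlated services and the tracking power described above follows.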

In many legislations none of this is illegal. Moreover, in most cases the information is willingly handed over by each user after accepting, without reading, lengthy terms of use. Even where data retention is illegal under local legislation, or violates a habeas data guarantee, it would be difficult to verify whether any of it is happening. The concept of legality is always rooted in a location, and the notion of jurisdiction loses meaning in the cloud, where the servers are everywhere and nowhere at once. It's worth noting that those terms of use designate the jurisdiction of the providing company, which is surely not an accessible tribunal for us, at least not without a huge amount of money to pursue a lawsuit.

Captive Cloud or Network of Peers

The nonfree cloud means power concentrated in a few hands. Against this, distributed/federated and peer-to-peer services show that it is possible to do without the big middlemen. Copyright is also part of all this. The present model for distributing cultural goods and information is obsolete, and it needs to lean on nonfree services to maintain control of distribution and, consequently, survive against the new options that technology offers society.

The failed attempts at DRM (Digital Rights/Restrictions Management) showed long ago that mere possession of the hardware, the software and the data leads, with more or less effort, to breaking the chains that prevent the distribution of nonfree cultural goods. The nonfree cloud came to accomplish what DRM couldn't.

Paradoxically, most of the nonfree cloud was built using free software. This happened because the GPL, at least up to its 3rd version, fails in its spirit of preserving the freedom of its users: it allows free software to be modified and then used to provide nonfree services without any obligation to share the derivative work, thus exposing its users to the trap.

The AGPL (Affero General Public License) tries to solve this problem. The AGPL is similar to the GPL, with the added requirement that the source code of the program must also be shared when the software is used to provide network services. The AGPL is only part of the solution, because it protects neither the information nor the privacy of its users. The concept of free software can hardly be adapted to online services, since the freedom of the users of the cloud cannot be guaranteed by four short statements. In this case no license can protect us; only our civic responsibility in managing our information, and the possibility of building, maintaining and disseminating our own federated networks.

Google is an example of a company that builds nonfree services with free software. They are oriented to the free-terminal / nonfree-server model. With this model, besides saving themselves the trouble of building their own operating system to compete against Microsoft, they also get the cooperation of certain people who support "open source": those who still haven't understood that the most important parts of the cloud are on the server side and aren't free. Google doesn't support free software seeking the freedom of its users; rather, it found in free software the base on which to develop its infrastructure, and it frees only those parts that give it a commercial advantage. It's not a coincidence that Google's repository of free projects doesn't allow the inclusion of projects under the AGPL, while projects under the GPL, BSD, Apache and other licenses with the aforementioned problem are allowed.

Right now, a big part of our civilization's cultural legacy is being uploaded to the nonfree cloud and part of it may never be recovered.

There are very recent cases, like what happened to the "NO" section of the Argentine newspaper Página/12. After building a Facebook community, they found their account arbitrarily closed, without prior warning, explanation or any way to appeal, which meant losing access to their information.<ref>La dictadura de Facebook (The Dictatorship of Facebook), Thursday, May 7th, 2009</ref>

We must download this information and place it somewhere safe before it's lost for good; we must return control to the citizens. Now more than ever, we must mark the difference between free as in price and free as in freedom. Services that are free as in price seize our data, take control of our communications, violate our privacy, and make us dependent on their systems. When you use services that are free as in price, the price you pay is very high.

From the problems explained so far, we can deduce that the cloud is not a model that should prosper; but for many reasons, among them purely economic ones, an increase in the usage of shared virtualized servers with better resource management is to be expected. We are also going to see people using certain network services that lead to a drop in the price of access terminals. More security, more availability and a positive environmental impact are other inherent advantages of a thin-client model. The question we ask ourselves as part of society is how we are going to reach a more efficient model without losing essential freedoms along the way.

The model that prevails in computer networks will directly influence the freedom of each of us. The construction of alternatives designed with our needs and interests in mind is vital. We need an architecture that doesn't expose us to control and subjugation. An alternative that respects the freedom of the users must be replicable and distributable as many times as needed, without patents or specifications that prevent it; it must run on free software, and the users must have a way to assert control over their data.

There are services that, due to their characteristics, are difficult or pointless to replicate into multiple instances, as in the case of social networks, big file repositories or directory systems. For these, federated alternatives among peers can be used to achieve a distributed and decentralized network where the nodes operate independently. As systems become more critical every day, they must stay stable and tolerate failures: nodes must be dispersed, backups distributed, services redundant, and information encrypted.
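
The redundancy principle above can be sketched in a few lines. This is a deliberately simplified model, not a real federated protocol: the "nodes" are in-memory dictionaries, a content hash stands in for both addressing and integrity checking, and encryption and networking are left out.

```python
# Sketch of replicated, content-addressed storage: each item is stored on
# several independent nodes, so losing any one node loses no data.
import hashlib

NODES = [{}, {}, {}, {}]  # four independent peers (simulated as dicts)
REPLICAS = 3              # copies kept of each item

def store(data: bytes) -> str:
    key = hashlib.sha256(data).hexdigest()  # content-addressed: key = hash
    for node in NODES[:REPLICAS]:
        node[key] = data
    return key

def fetch(key: str) -> bytes:
    # Any surviving replica is enough, and the hash lets us verify
    # that what we got back wasn't corrupted or tampered with.
    for node in NODES:
        data = node.get(key)
        if data is not None and hashlib.sha256(data).hexdigest() == key:
            return data
    raise KeyError("all replicas lost")

key = store(b"our cultural legacy")
NODES[0].clear()          # one node disappears from the network...
print(fetch(key))         # ...the information survives
```

A real federation would place the replicas on machines run by different organizations in different locations, which is precisely what node dispersion buys: no single failure, seizure or shutdown takes the information with it.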

For the interconnection, we need data networks designed with a topology that allows high-speed connections between peers, not a system that limits the upload bandwidth of end-user connections, as happens now with ADSL and cable modem. Free networks have a major role to play: if we're going to connect to a network within our community, paying for a connection is not justified when a simple antenna or cable can connect us directly and faster.

From the technical point of view there are few limitations. The challenge is making society understand the importance of retaining control of its data. Companies, universities, schools, political parties and other organizations must take care of the safety of their information and of the people who are part of them. Governments play an important part by passing enabling laws, financing initiatives, and providing counsel and support to the organizations that seek computing independence.

Self-management and cooperation: Argentine Users of Free Software (Usuarios de Software Libre de Argentina, USLA)

USLA is a project born in the mid-'90s with the objective of creating a community of national reach for Linux users. At that time it was called "LUGAr" (Linux Users Group Argentina); later, to include all free software and not just Linux, the name was changed to Argentine Users of Free Software (Usuarios de Software Libre de Argentina, USLA). Nowadays USLA directly supports many projects of free culture in general.

One of the goals of USLA is to promote the use of free software and encourage the creation of user groups in different provinces, towns and villages, wherever there are people with the initiative to start them. For the already existing groups, USLA supports their development by integrating and promoting their activities. Among the members of USLA are most of the free software user groups of Argentina, software development projects, and organizations like Gleducar, Vía Libre, PyAR, BuenosAiresLibre and Wikimedia Argentina, among many others. All of these are nonprofit organizations.

Using different collaborative tools, the community stays interconnected to spread news, track the development of joint projects and encourage collaboration among the different groups. USLA provides the infrastructure for organizing events like the Free Software Regional Conference, the Latin American Free Software Installfest (FLISol), etc.

One of the main goals of USLA is to help free software and free culture groups organize themselves using free tools instead of nonfree services. As of 2010, USLA maintains about 220 websites with a wide range of free applications, and 200 mailing lists belonging to the free software and free culture community. The quantity and variety of services available to USLA members, and the quality with which they are delivered, would be impossible to reach if each organization had to maintain its own infrastructure.

The infrastructure of USLA is composed of several servers distributed across three data centers, plus other backup servers. Free software is used for all the services. A wide range of tools is available, for instance: content management systems, wikis, version control systems, among others. Virtualization techniques are used, which allow better resource management and increase the security and stability of the services.

This infrastructure is managed by a core of specialists who come from within the organizations involved. All the services are self-managed and sustained by volunteer work. An important part of the job of USLA members is training newcomers, so the different groups can become independent in the management of their services.

Regarding organization, there is no formal governing structure and no physical location. Since USLA is a national group, there are very few face-to-face meetings; generally all subjects are debated on mailing lists, chat channels and wikis. Services are first implemented to cover specific needs, and once their potential usefulness is evident they are offered to the rest of the community. The idea is that each service should have more than one administrator, to guarantee there's always somebody available for maintenance and support.

Regarding funding, the services are all available at zero price. The infrastructure is built thanks to donations, and hosting in data centers is provided by sponsors who use free software intensively and see supporting USLA as a way to give back what they get from the community.

USLA is an example of a community that decided to gain its independence and freedom in the network by building its own infrastructure. The work done by USLA can be used and replicated without restrictions by other organizations of society.

Glossary

Free Software: We say a piece of software is free when the user has the freedom to use it for any purpose, the freedom to adapt it, the freedom to copy it and the freedom to distribute modified copies.

Nonfree Software: A piece of software is nonfree when it doesn't respect the freedoms that free software provides.

Light/Thin terminal/client: Low-performance personal computers intended for use as terminals of a network. Netbooks are an example of this.

Network effect: Effect by which the usefulness of a system is proportional to the number of users it has.

Peer-to-peer: A network topology where each node can act as both client and server interchangeably.

Federated services: Distributed services without a central server, where each node operates independently and shares information with the rest of the network.

DRM: Systems that prevent or limit access to, or reproduction of, digital texts or media.

Virtualization systems: Technologies that allow running many isolated, simulated computers on a single physical machine.

References

<references />

Credits

© Gabriel Acquistapace (2010). This article is distributed under a Creative Commons Attribution-ShareAlike Argentina license. For more information visit http://creativecommons.org/licenses/by-sa/3.0/deed.es_AR

This article is part of "Argentina Copyleft. La crisis del modelo de derecho de autor y las prácticas para democratizar la cultura" (Argentina Copyleft: The crisis of the copyright model and the practices to democratize culture), a publication in Spanish and German that Vía Libre presented at the 2010 Frankfurt Book Fair.

English translation by Leonardo Gastón De Luca