RIGHT OF WITHDRAWAL AND SALE OF NFT

The Case

Porsche’s recent NFT collection made a lot of noise. The ToS in force at the time of minting contained a clause that granted users a right of withdrawal within 14 days of the release of the collection, whatever the new ‘floor price’ after minting.

What is the right of withdrawal? 

The right of withdrawal, commonly referred to as the ‘right to a rethink’, is one of the most important rights given to the consumer by the Consumer Code. 

The right of withdrawal allows consumers to change their mind about a purchase made outside the seller’s business premises, freeing themselves from the contract, without giving any reason, within 14 days of the purchase. In this case, the consumer may return the goods and obtain a refund of the amount paid.

What is the reference legislation for the right of withdrawal applicable to the sale of NFTs?

In Europe, the matter is regulated by the Consumer Rights Directive 2011/83/EU. Directive 2011/83/EU replaces the Distance Selling Directive (97/7/EC) and the Doorstep Selling Directive (85/577/EEC) by harmonising the rules on contracts between consumers and sellers.

Updated with Directive (EU) 2019/2161, it is a regime applicable to a wide range of contracts concluded between professionals and consumers, in particular sales contracts, service contracts, contracts for online digital content and contracts for the supply of water, gas, electricity and district heating; it covers both contracts concluded in shops and those concluded off-premises (e.g. at the consumer’s home) or at a distance (e.g. online).

The update made by Directive (EU) 2019/2161 extended the scope to contracts under which the professional provides or undertakes to provide digital services or digital content to the consumer, and the consumer provides or undertakes to provide personal data. The legislation establishes, inter alia, a number of information obligations for professionals. In particular, they must, before concluding a contract, provide consumers, in plain and intelligible language, with information such as:

  • the identity and contact details of the professional;
  • the main characteristics of the product; and
  • the applicable terms and conditions, including payment terms, delivery times, performance, duration of the contract and conditions of withdrawal.

Finally, online sellers are required to inform consumers whether they are a professional or a non-professional, advising them that EU consumer protection rules do not apply to contracts concluded with non-professionals. 

Directive 2011/83/EU includes a comprehensive set of provisions on withdrawal, under which, inter alia, consumers may withdraw from distance selling contracts within 14 days of the delivery of the goods or the conclusion of the service contract, with certain exceptions, without any explanation or cost; if the consumer is not made aware of his or her rights, the withdrawal period is extended to 12 months.

Europe is not the only community to have adopted strongly protective rules for the weaker contracting party: many countries, such as the United Kingdom, have adopted legislation that provides the same or very similar protection.

Which companies are obliged to apply the right of withdrawal?

Article 3(4) of Directive 2011/83/EU defines the objective scope of the regulation by referring to ‘any contract’ concluded between a professional and a consumer.

Therefore, even projects based outside the European Union and the United Kingdom may still be subject to EU and UK consumer laws (and to those of other states with similar regulations) when selling goods or services to consumers in those states. This is because the scope of these laws covers any company that offers goods or services to consumers in states providing this protection, regardless of where the company is located.

This means that international companies selling to consumers in, for example, the UK and the EU must comply with UK and EU consumer laws. Failure to comply can result in penalties for the company, including fines and legal action.

Can the right of withdrawal be excluded?

There is a set of specific cases in which the right of withdrawal may be excluded. For the matter at hand, Article 16, point (m), of Directive 2011/83/EU tells us that ‘Member States shall not provide for a right of withdrawal in respect of distance and off-premises contracts relating to […] the supply of digital content on a non-material medium if the performance has begun with the consumer’s prior express consent and his acknowledgement that he would lose his right of withdrawal’. This is a very specific provision that, if interpreted correctly, would allow the professional to avoid heavily negative consequences for the economics of the project while remaining within a perimeter of legal compliance.

Deep dive into cybersec: exploits

In general terms, an exploit is a series of actions executed to derive the most benefit from a pre-existing resource.

In computer security, we can interpret it as a piece of software, a chunk of data, or a sequence of commands that takes advantage of a bug or vulnerability in order to cause unintended or unanticipated behavior in computer software or hardware. This can include data leakage, privilege escalation, arbitrary code execution (often used as part of a zero-day attack), denial-of-service attacks and viruses.

Going even deeper, this term is used to describe the use of low-level software instructions that exceed the intended function or design of a computer program.

Hackers are always on the lookout for vulnerabilities. They use exploits in order to gain personal data, such as credit card numbers, bank account access, social security numbers and all kinds of sensitive information.

The most common vector for an exploit is injection:

  • A SQL injection, where a bad actor (or someone posing as one) injects malicious code into an entry field in order to extract data from a database (see the sketch after this list);
  • An XSS (cross-site scripting) attack, where a bad actor injects malicious bits into a website’s source code in order to extract data from the website’s database or server.
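
To make the injection mechanism concrete, here is a minimal, self-contained Python sketch (the table and records are invented for illustration). The first query is built by pasting user input directly into the SQL string, so a crafted value rewrites the query’s logic; the parameterized version treats the same input as plain data.

```python
import sqlite3

# Toy database with one hypothetical table, purely for demonstration.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, secret TEXT)")
conn.execute("INSERT INTO users VALUES ('alice', 'hunter2')")

malicious = "nobody' OR '1'='1"

# Vulnerable: string concatenation lets the injected OR clause match every row.
leaked = conn.execute(
    f"SELECT secret FROM users WHERE name = '{malicious}'"
).fetchall()
print("vulnerable query leaked:", leaked)     # -> [('hunter2',)]

# Safe: the driver binds the input as a value, so nothing matches.
safe = conn.execute(
    "SELECT secret FROM users WHERE name = ?", (malicious,)
).fetchall()
print("parameterized query returned:", safe)  # -> []
```

This is why database drivers universally recommend bound parameters over building queries by string concatenation.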


Famous exploits

There are many famous exploits and hacks, but some of the most prominent in the public eye are Heartbleed, the Sony PlayStation Network hack, the Target security breach, and EternalBlue (which started the ransomware trend).

Heartbleed is a vulnerability in OpenSSL that was publicly disclosed on April 7, 2014: a bug in OpenSSL’s implementation of the TLS heartbeat extension allowed attackers to steal information from servers without being detected. At the time, over 66% of all web servers globally relied on OpenSSL, including sites like Yahoo!, Facebook, Google, and Amazon.

Sony’s PSN (PlayStation Network) hack happened even earlier: in 2011, a group of hackers stole personal information from 77 million accounts. The hackers were able to do this because they had obtained PSN usernames and passwords from an outside party who had breached Sony’s network earlier that year.

Target’s security breach occurred during the 2013 holiday shopping season and saw data for 40 million payment cards stolen from the company’s systems: in that case, the hackers used malware planted on Target’s point-of-sale terminals to steal card data as it was being entered into the system.

Another famous example is the WannaCry attack, carried out through EternalBlue, a vulnerability discovered by the NSA and kept secret until it was leaked by a group called the Shadow Brokers. EternalBlue is a security exploit that affects Microsoft Windows; a patch had been released shortly before the public leak, but vast numbers of systems remained unpatched. It has been one of the most dangerous exploits in the world, used to power some of the most devastating ransomware campaigns, spreading WannaCry, NotPetya, and Bad Rabbit.

Ransomware is a type of malware that locks up the data on a victim’s computer and demands payment in order to release it. It usually spreads through email attachments, downloads from untrusted sources, or, more generally, through a vulnerability in the system.

Stay safe out there!

As the number of people and devices connected to the internet has increased, so has the number of cyber attacks: cyber criminals are always on the lookout for new ways to exploit vulnerabilities in your system.

To prevent this, it’s important to take measures to make sure you are not vulnerable:

  • always update your software and hardware regularly
  • use strong passwords and change them regularly
  • use two-factor authentication whenever possible (see the sketch after this list for how one-time codes are derived)
  • have a security suite installed that includes antivirus protection and firewall settings that block suspicious or malicious programs from accessing your system
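
As an aside on the two-factor authentication point, here is a minimal sketch of how a time-based one-time password (TOTP, per RFC 6238, the kind of code an authenticator app displays) is derived from a shared secret. The base32 secret below is a well-known demo value, not a real credential.

```python
import base64
import hashlib
import hmac
import struct
import time

def totp(secret_b32: str, interval: int = 30, digits: int = 6) -> str:
    """Derive an RFC 6238 TOTP code from a base32-encoded shared secret."""
    key = base64.b32decode(secret_b32, casefold=True)
    counter = int(time.time() // interval)           # current time step number
    msg = struct.pack(">Q", counter)                 # 8-byte big-endian counter
    digest = hmac.new(key, msg, hashlib.sha1).digest()
    offset = digest[-1] & 0x0F                       # dynamic truncation (RFC 4226)
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % (10 ** digits)).zfill(digits)

print(totp("JBSWY3DPEHPK3PXP"))  # demo secret; prints a 6-digit code
```

Both your device and the server hold the same secret and compute the same code, so the code proves possession of the second factor without the secret ever crossing the network.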

A security suite is especially important for Windows users, but this doesn’t mean that Linux and macOS users should feel safe: as the usage and distribution of Unix-based systems grow in popularity, so does research into possible attack vectors.

Updates are a crucial part of keeping your device protected. They not only stop exploits, but also improve the security of the device and protect it from known attack vectors: most of the most devastating exploitations have been carried out against unpatched systems, even after patches had been distributed.

The internet is a digital world that is both beneficial and harmful. It was created to connect people from all over the world, but it has also created a space for hackers to exploit vulnerabilities, posing a risk to anyone without proper knowledge of the problem. That’s why it is fundamental to learn how to protect your privacy and your data, and to become conscious of online security and online threats.

Knowledge is power; in this case, the power to defend your data from malicious attacks.

Reinforcement Learning (RL): how robots learn from their environment

Reinforcement Learning (RL) has been increasingly applied in recent years in the world of autonomous robotics, especially in the development of what have been called ‘curious robots’, i.e. robots programmed to mimic human curiosity about the external environment.

Indeed, one of the fundamental problems of autonomous robots concerns their ability to autonomously generate strategies to solve a problem, or to autonomously explore an environment. RL makes it possible to improve the robot’s performance in both these areas. Reinforcement learning is one of the three basic paradigms of machine learning, together with supervised learning and unsupervised learning.

In the field of ‘open-ended robotics’, RL is used to allow the robot to explore and learn from an environment even in the absence of an explicit goal. Briefly, RL works in this context as follows: the robot starts to explore a part of the environment with its sensors and actuators (e.g. mechanical arms). As soon as that part of the environment is known beyond a certain threshold, the RL algorithm decreases the reward, i.e. the positive ‘reinforcement’ (hence the name Reinforcement Learning) for exploring it, and forces the robot to explore a new portion. In this way, the robot is driven, autonomously, by a curiosity-like principle.

One of the major advantages of using reinforcement learning in the development of ‘curious robots’ is that it allows these robots to learn from their environment in a more natural way. Traditional programming techniques require engineers to specify every step a robot must perform to complete a task, which can be time-consuming and inefficient, especially if the robot is deployed in unpredictable and changing environments. Reinforcement learning, on the other hand, allows robots to learn autonomously from their environment and develop the best interaction strategies. These techniques can also be used to let the robot discover, in a trial-and-error procedure, the shortest way out of a maze.

In general, RL works very well for exploratory objectives and for interaction with extremely unpredictable environments, where normal programming techniques would certainly fail. The evolution of this approach could lead, in the coming years, to robots capable of exploring vast portions of the environment, for long periods of time, without the need for any human supervision. Such technology has applications in multiple fields, both civil and military.
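
As a toy illustration of the decaying-reward principle described above (the grid, numbers, and names are invented for the example, not taken from any specific robotics system), the following sketch gives an agent an intrinsic reward for visiting cells of a small grid; the reward shrinks as a cell becomes familiar, steadily pushing the agent toward unexplored territory.

```python
import random
from collections import defaultdict

GRID = 5                    # a 5x5 world to explore
MOVES = [(0, 1), (0, -1), (1, 0), (-1, 0)]
visits = defaultdict(int)   # how often each cell has been visited

def intrinsic_reward(cell):
    # Reward shrinks as the visit count grows: familiar areas stop being "interesting".
    return 1.0 / (1.0 + visits.get(cell, 0))

pos = (0, 0)
for step in range(200):
    visits[pos] += 1
    # Candidate neighbouring cells that stay inside the grid.
    candidates = [
        (pos[0] + dx, pos[1] + dy)
        for dx, dy in MOVES
        if 0 <= pos[0] + dx < GRID and 0 <= pos[1] + dy < GRID
    ]
    # Greedily move toward the most "novel" neighbour, breaking ties randomly.
    pos = max(candidates, key=lambda c: (intrinsic_reward(c), random.random()))

print(f"cells explored: {len(visits)} / {GRID * GRID}")
```

A full RL system would combine this intrinsic signal with a learned value function; the sketch isolates only the curiosity mechanism, which is typically enough to make the agent sweep the whole grid instead of idling in one corner.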

Despite these advantages, there are also some potential risks associated with the use of reinforcement learning in curious robots. One of the main concerns is that reinforcement learning algorithms can be difficult to interpret, which makes it complex to understand how a robot makes decisions and to predict how it will behave in a given situation. Furthermore, reinforcement learning algorithms carry the risk that a robot will learn to perform sub-optimal or even harmful actions if the interpretation of environmental feedback is ineffective.

Overall, although there are certainly risks associated with the use of reinforcement learning in robotics, the advantages of this technique can be significant. By enabling robots to learn complex tasks and adapt more easily to new environments, reinforcement learning can help make robots more versatile and efficient. As long as these algorithms are used carefully and with proper supervision, they can be a powerful tool for improving performance and advancing the field of robotics.

ARTIFICIAL INTELLIGENCE: EXPLORING THE INVISIBLE INNOVATION

What is artificial intelligence

Artificial Intelligence is a field of Information Technology (IT) that studies how a software system can be made to act rationally.

The earliest reference to studies of the human brain is placed around the 17th century BC, with the Edwin Smith Surgical Papyrus, demonstrating how humans have been fascinated by gray matter since soon after the start of civilization.
It’s only natural that, with the advent of IT, humans tried to replicate the brain’s workings in a machine.

BRIEF HISTORY OF ARTIFICIAL INTELLIGENCE

Contrary to common belief, Artificial Intelligence is not a new field. The first studies started in the 1930s with ‘thinking machines’, when three major actors laid the groundwork:

  • Norbert Wiener, with his electrical networks (mimicking neurons’ activation);
  • Claude Shannon, describing digital signal processing;
  • Alan Turing, who defined the rules for assessing any problem from a digital point of view.

Those three key principles came together in 1943, when Walter Pitts and Warren McCulloch described the first Neural Network, in which artificial neurons were given the task of resolving simple logic functions.
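
A McCulloch-Pitts-style unit is simple enough to sketch in a few lines of Python: the neuron fires when the weighted sum of its binary inputs reaches a threshold, and with suitable weights and thresholds such units resolve simple logic functions (this is a minimal illustration of the idea, not the authors’ original formalism).

```python
def neuron(inputs, weights, threshold):
    """Fire (output 1) if the weighted sum of binary inputs meets the threshold."""
    return int(sum(i * w for i, w in zip(inputs, weights)) >= threshold)

# Two classic logic functions expressed as threshold neurons.
AND = lambda a, b: neuron([a, b], [1, 1], threshold=2)
OR = lambda a, b: neuron([a, b], [1, 1], threshold=1)

for a in (0, 1):
    for b in (0, 1):
        print(f"AND({a},{b}) = {AND(a, b)}   OR({a},{b}) = {OR(a, b)}")
```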

In the following years, studies continued without a real focus (or a real name) until the Dartmouth Workshop, held in 1956 around a very straightforward proposal: every aspect of learning or any other feature of intelligence can be so precisely described that a machine can be made to simulate it. At that precise moment, the term Artificial Intelligence was born, and research carried on. Public attention and funding rose constantly, except during two periods known as the ‘AI Winters’ (1974-1980 and 1987-1993), which saw, respectively, a major cut of funds by DARPA (1974) and the collapse of the LISP machine market (1987).

THE INVISIBLE COMPANION

Luckily, history has proven that Artificial Intelligence isn’t just vaporware: after those dark times, studies began to prosper again (with some hiccups, for example the 2010 Flash Crash, when a ‘hot potato’ effect unrolled among ‘intelligent agents’ trading E-Mini S&P 500 futures contracts).

Fast forward to today: we can barely notice the presence of Artificial Intelligence, yet it is a very important and integral part of our lives, participating in:

  • Utility supply distribution;
  • Traffic control in major cities;
  • Weather forecasting;
  • Food transportation chains;
  • Logistics;
  • Social media;
  • Habit analysis;
  • Art;
  • and so on.

A survey conducted by Ipsos for the World Economic Forum reported that 60% of respondents think AI will make their lives easier in the next 3 to 5 years, but only 52% think their lives will actually be better.

DATA: A DIGITAL DNA

The reason for the skepticism resides at the very core of AI: the data.

In order to make a system autonomous, it needs to be fed data, which is subsequently organized into training datasets from which the machine can learn.
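
As a minimal illustration of that pipeline (the records and labels below are invented purely for the example), here is how raw data might be organized into training and test splits and handed to a learning algorithm, using scikit-learn:

```python
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier

# Hypothetical records: (hours online per day, purchases per month) -> engagement label.
X = [[1.0, 0], [2.5, 1], [4.0, 3], [6.0, 5], [0.5, 0], [5.5, 4], [3.0, 2], [7.0, 6]]
y = ["low", "low", "high", "high", "low", "high", "low", "high"]

# Organize the raw data into a training dataset and a held-out test set.
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.25, random_state=0
)

# The machine "learns" a decision rule from the training dataset...
model = DecisionTreeClassifier().fit(X_train, y_train)

# ...and applies it to records it has never seen.
print(model.predict(X_test))
```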

While a lot of data for specific applications is gathered by governments, institutions and organizations, personal data can only be collected through applications like social media. Personal data is obviously very dynamic, hence it needs constant collection and updating.

This has raised a lot of concerns about privacy, and while our data is gradually getting more and more protected thanks to regulations (like the GDPR in the EU), it still feels like the Wild West.

While in most cases the collection serves a relatively harmless end goal (like clustering for marketing purposes), the same data could be used to manipulate people (e.g. Cambridge Analytica) or, worse, to control people’s lives.