Rules and Regulations – Terms and Conditions Apply Podcast – Episode 4

If any meaningful change to terms and conditions agreements is to happen, would it require government intervention?

Listen now:

Transcript:

On July 11, 2000, Lars Ulrich, the drummer of the band Metallica, appeared before the United States Senate Judiciary Committee. His testimony began with a story. Earlier that year, Metallica began work on what would eventually become the song “I Disappear,” which was released with the soundtrack for the movie Mission: Impossible 2. However, before the song was officially released, Ulrich and the other band members began getting reports that “I Disappear” was being played on radio stations across the US. While he didn’t recount who among the band’s production team leaked the track in the first place, the band learned that radio stations had acquired it from an online service called Napster.

Napster was launched in June of 1999 as a file sharing service. While other services existed to allow file sharing across the Web, Napster was special because its primary purpose was to share music, specifically mp3 files. Keep in mind that at this point, music distribution was shifting from vinyl and cassette tapes to CDs, and since personal computers with CD-ROM drives were becoming more and more common, mp3 files of individual songs were being extracted from CDs and stored on computers for more convenient listening.

Now, using Napster, people could share these mp3s at no cost with their friends, and thanks to Napster’s peer-to-peer network, with anyone else who had installed the program. The service was almost immediately a success, eventually reaching over 60 million registered users at its height in 2001, but, as you can imagine, it wasn’t without controversy.

And it was this controversy that compelled Lars Ulrich to testify in front of the Senate committee in the first place. Music sharing was spreading like wildfire, with some systems administrators at major universities reporting that Napster file sharing accounted for up to 61% of total network use. As an aside, this led to Napster being banned on many college campuses, which led to protest efforts like savenapster.com at Indiana University (if you’re interested, you can see an archived version of this website via a link in today’s show notes).

Metallica, and likely many other musicians and bands across the world, were upset that their music was being freely shared online. In his testimony, Ulrich summed this up when he said, “It’s like [Napster users] won one of those contests where you get turned loose in a store for five minutes and get to keep everything in your shopping cart. With Napster, though, there’s no time limit and everyone’s a winner except the artist.” In April of 2000, Metallica had filed a lawsuit against Napster, alleging copyright infringement and racketeering.

The case wouldn’t be decided until March of 2001, when a judge ruled that Napster had 72 hours to develop a filter for copyrighted content or face shutting down. Napster followed through, at least on removing Metallica’s songs, but this opened the company up to many more lawsuits, which ultimately resulted in the service shutting down and filing for bankruptcy in 2002.

The rise and fall of Napster illustrates just one way that the law in the US has adjusted to changes in technology. Like all of the companies we have discussed in this show so far, Napster had a Terms and Conditions agreement in which its users agreed to comply with all local and federal laws and not to use the service to “infringe upon the intellectual property rights of others in any way.” In 1998, Congress passed the Digital Millennium Copyright Act, or DMCA, to address copyright concerns related to emerging Web technology. One of the act’s core provisions is that companies or websites hosting user-generated content are not liable for copyright infringement if infringing material is reported and removed in a timely manner. Napster hoped to hide behind these protections, but ultimately to no avail: the company repeatedly refused to remove content, even though it supplied a form for takedown notices on its website.

Courts eventually enforced copyright laws against Napster, bringing the service’s run to an end. So, in the end, the law caught up with a changing technology. As we discussed in the last episode, it seems like the law still has quite a bit of catching up to do when it comes to regulating what can and cannot appear in Terms and Conditions agreements. But this raises an even bigger question. Do these agreements need to be regulated? Will companies eventually self-regulate? Users and governments around the world are now grappling with these questions, and the results have been interesting.

Intro

If we return to Lars Ulrich sitting in front of the Senate committee, he made an interesting and very clever statement near the end of his testimony. He chose to read a section from Napster’s Terms and Conditions. Specifically, he read the clauses in which Napster stated that its trademarks could not be used for any other purpose. He followed this by saying, “Napster itself wants–and surely deserves–copyright and trademark protection. Metallica and other creators of music and intellectual property want, deserve and have a right to that same protection.”

It wasn’t long after the hearing that Napster was forced into obscurity. The name would eventually be purchased by another streaming service, and it’s actually still around today. But the point stands: what Napster put in its own Terms agreement was ultimately used against it in a high-profile case.

Almost 20 years later, segments from Terms & Conditions agreements are still being read in Senate committee meetings, although this time, the stakes are potentially much higher. In Episode 1 of this podcast, we heard clips from Mark Zuckerberg’s Senate Judiciary Committee hearing following the Cambridge Analytica incident. But Congress inviting high-level tech executives to testify about privacy, data collection, election influence, and other issues has been common over the last five years. In addition to Mark Zuckerberg, Facebook COO Sheryl Sandberg, Twitter CEO Jack Dorsey, and Google CEO Sundar Pichai have all appeared in front of Congressional committees, and generally speaking, it hasn’t been because of great things the companies have done for the United States.

According to the ideas of a social contract we discussed previously on the show, it is the job of the United States Congress to represent the will of the people it governs. So, does Congress’s apparent concern in relation to the influence of large tech companies line up with that of the American people? It’s difficult to tell.

In March 2018, privacy startup FigLeaf surveyed over 7,500 users across five countries about common privacy concerns. Among users in the United States, 83% agreed that online privacy is an important concern, but at the same time, 39% of US respondents believed online privacy simply wasn’t possible. FigLeaf conducted a similar study almost a year later, after the Cambridge Analytica incident, and unsurprisingly, the share of users who believed privacy just isn’t possible rose to 68%. In another survey, users of the privacy service Blind were asked whether they took any action following the Cambridge Analytica incident, and 53% of respondents said they changed nothing. Of the remaining respondents, 24% said they had adjusted their privacy settings on Facebook.

So, it’s clear that at least privacy-conscious consumers show concern about data privacy issues, even if they don’t end up changing their behavior as a result. It is likely, then, that Congress’s interest in privacy issues surrounding large tech companies is a fulfillment of the social contract.

For an example of how Congress has acted recently, we return to April of 2018, when, after the Senate testimony heard in Episode 1 of this show, Mark Zuckerberg also testified before the House Energy and Commerce Committee. His reception there wasn’t much better.

[montage of clips from hearing]

All of these are valid concerns considering Facebook’s decisions over the past ten years, and the attention makes sense: members of Congress see the reach and influence of Facebook in their own lives and in the lives of their constituents. No one illustrated this better than Representative Joe Barton of Texas.

[clip from hearing]

In the video that accompanies this clip, Representative Barton turns his mobile phone around to show Mr. Zuckerberg that he was reading the question directly from his Facebook page, and you can see the unmistakable blue website header and comment-thread formatting for the brief seconds his phone is visible to the C-SPAN camera. Anytime a technology has the ability to impact the lives of so many people, it seems natural that regulatory bodies start to take notice.

This should be unsurprising, considering Congress has taken the initiative in the past when disruptive new technologies have emerged. When the ability to record music was first conceived, no one could have predicted that music recordings would move from large gramophone records to small plastic discs and lead to the rise of a file sharing program on a global network of computers. But in 1998, the Digital Millennium Copyright Act was enacted to address concerns about sharing media in an increasingly digital age.

The Internet has also prompted other major changes to United States law, including the Communications Decency Act, or CDA, in 1996 and the Children’s Online Privacy Protection Act in 1998. Both of these laws still have major implications for how online companies operate.

Section 230 of the CDA states, “No provider or user of an interactive computer service shall be treated as the publisher or speaker of any information provided by another information content provider.” This allows online providers like Facebook, Google, and Twitter to avoid liability for content that their users share. And if you’ve ever noticed that you must be 13 years or older to join an online service or app, you can thank the Children’s Online Privacy Protection Act for that requirement.

In September 2018, just five months after Mark Zuckerberg appeared to discuss data privacy, Twitter CEO Jack Dorsey also appeared before the House Energy and Commerce Committee, this time to discuss the monitoring of content on social media. In one interesting exchange, Representative Adam Kinzinger of Illinois asks Dorsey how Twitter’s interpretation of Section 230 aligns with its Terms of Service.

[Jack Dorsey hearing clip]

Dorsey implied that Twitter’s ability to control content on its platform goes back to its Terms of Service, which were written with Section 230 in mind. Users of the service should be aware of these types of clauses. This is what allows Facebook or Twitter to remove content it deems objectionable. Occasionally this occurs because content does not align with the views or community standards of the company, but often content is removed in order to comply with the law.

It stands to reason that if companies already do their best to comply with state and federal law, any new regulation or legislation would likewise force them to adapt. Over and over in his testimony, Mark Zuckerberg highlighted Facebook’s efforts to resolve the issues uncovered by the Cambridge Analytica incident. We discussed some of the changes Facebook made back in Episode 1, but Zuckerberg’s insistence that Facebook has everything under control is what led to Senator Lindsey Graham’s comments about possible regulation. Let’s listen to that exchange again:

[clip of Lindsey Graham and Zuckerberg exchange]

If that sounds familiar, it’s because you heard it in Episode 1. But the question still remains. If Congress did want to act to prevent further abuses or inaction by large online companies, what types of regulation should they propose or enact? What, if anything, has already been done? And what exactly was Senator Graham referring to when he asked Zuckerberg if the Europeans got it right? We’ll discuss all of these questions and more, right after this.

Ad break

While the United States Congress hasn’t taken major action on the issues surrounding data privacy, in 2018, the European Union began enforcement of the most comprehensive legislation on data privacy ever seen: the General Data Protection Regulation, or GDPR. It’s possible you’ve heard of this, and if not, you’ve at least seen its effects. Have you ever wondered why almost every website you visit has a banner offering some sort of information about cookies? This is largely due to the GDPR, since one of its provisions requires clear consent to data collection practices.

The GDPR is a long document. In fact, it’s the longest document we’ve discussed in this podcast, coming in at just over 54,000 words, about 16 times the length of Facebook’s Terms of Service. The legislation introduced many important requirements for companies that handle the data of EU citizens. Compared to the language seen in most Terms agreements, you might find this passage from Recital 39 of the preamble particularly encouraging:

“Any processing of personal data should be lawful and fair. It should be transparent to natural persons that personal data concerning them are collected, used, consulted or otherwise processed and to what extent the personal data are or will be processed. The principle of transparency requires that any information and communication relating to the processing of those personal data be easily accessible and easy to understand, and that clear and plain language be used…”

The introduction to the legislation makes other similar statements as it establishes the purposes of the bill. I encourage you to read the text directly if you want to learn more; I’ve linked to the full regulation in the show notes. But I still want to outline a few of the GDPR’s important requirements, because they’re central to the privacy debate here in the United States.

First, the GDPR establishes that “The protection of natural persons in relation to the processing of personal data is a fundamental right.” This is a strong assertion that sets the tone for the rest of the document. If you skip down to Chapter 3, you’ll find the section on the rights of users. It starts with Article 12 on transparency, which requires data controllers to present information related to data collection in clear, plain language. This has large implications for Terms and Conditions agreements because, as you’ll recall from Kevin Litman-Navarro’s editorial mentioned in Episode 2, the readability of current agreements still has a long way to go.


Articles 13-15 establish a user’s right to access all information that has been collected about them. This includes information on who the data have been shared with, as well as how long the data controller expects to store the data. Users also have the right to request information about how the data are being processed.

Moving to Article 17, EU citizens have the right to be forgotten, that is, a user may request that a data collector or controller permanently delete all information associated with his or her account. There are some stipulations here, but this provision is an important part of the GDPR, and one of the most discussed sections.

In addition to these and other direct protections for users, the GDPR also places more responsibility on data collectors when it comes to data storage and security. This includes Article 25, which requires data minimization, that is, collecting no more data than is necessary to provide the service. Oh, and in the event of a data breach, companies have 72 hours to report it or risk fines from the EU.
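If you're curious what these obligations might look like from a service's side, here is a minimal, hypothetical sketch in Python of the two most-discussed rights: the Articles 13-15 right of access and the Article 17 right to erasure. To be clear, the Flask routes, the in-memory user_store, and the field names are all inventions for illustration, not any real company's API.

from flask import Flask, jsonify

app = Flask(__name__)

# Hypothetical in-memory stand-in for a real user-data store.
user_store = {
    "alice": {
        "profile": {"email": "alice@example.com"},
        "shared_with": ["ad-partner-1"],   # Articles 13-15: recipients of the data
        "retention_days": 365,             # Articles 13-15: expected storage period
    }
}

@app.route("/users/<user_id>/data", methods=["GET"])
def access_data(user_id):
    # Articles 13-15: a user may request everything held about them,
    # including who the data were shared with and how long they'll be kept.
    record = user_store.get(user_id)
    if record is None:
        return jsonify({"error": "unknown user"}), 404
    return jsonify(record)

@app.route("/users/<user_id>/data", methods=["DELETE"])
def erase_data(user_id):
    # Article 17: the right to be forgotten. Permanently delete all
    # information associated with the account (stipulations apply).
    user_store.pop(user_id, None)
    return jsonify({"status": "erased"})

if __name__ == "__main__":
    app.run()

The hard part in practice isn't the endpoint, of course; it's making sure deletion actually reaches every backup, log, and third party the data was shared with.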

There are many other provisions and clauses in the GDPR that are relevant to the topic at hand, so I encourage you to read them, but I want to focus on how this regulation could potentially impact Terms and Conditions agreements. The GDPR establishes clear rules that require data collectors or controllers to provide notice and receive consent from users for any and all data collection. This consent generally comes in the form of a Terms agreement, which users may or may not be aware they are agreeing to. Having a small banner for cookie consent is one thing, but generally, by consenting to cookie use, you’re also consenting to the privacy policy or terms and conditions as a whole, which gets us back to where we started. Uninformed consent.

The idea of clear and plain language is important here, as well. This is why there have been some changes to the layout and readability of many large companies’ Terms of Service agreements. Couple these changes with cookie banners, and you can begin to see how the GDPR has improved the data privacy landscape. In an effort to comply, companies are offering users more ways to control data during their use of a service.

But recall that the GDPR only protects citizens of the EU. Sure, we’ve seen some American companies begin to comply because of their interactions with European users, but these provisions are not required for users who live outside the EU. This has caused some controversy, as companies must now decide how they are going to proceed. How would they comply if a law like this were ever passed in the United States, for example?

If we return to Mark Zuckerberg’s testimony in front of the House Energy and Commerce Committee, he was directly asked about Facebook’s compliance with certain aspects of the GDPR by Representative Gene Green of Texas. First, about the clear and plain language:

[Zuckerberg GDPR clip]

If you access Facebook today, you probably won’t see the exact feature Zuckerberg mentioned here, but if you navigate to facebook.com/your_information, you’ll see an easy-to-read layout that displays all of the information Facebook has about you.

Representative Green then asks Zuckerberg about data portability, to which Zuckerberg responds that Facebook allows users to download all of their data into one file and do with it as they wish. But then Green asks about Article 21:

[Zuckerberg GDPR clip]

You’ll notice that Zuckerberg doesn’t answer the question directly here, but I believe his answer was honest. Article 21 specifically allows users to “object at any time to processing of personal data concerning him or her for such marketing, which includes profiling to the extent that it is related to such direct marketing.” This is essentially Facebook’s main business model: using user data to build profiles that are attractive to advertisers. So, it makes sense that Zuckerberg doesn’t know how Facebook will implement this, because if users objected on a large scale, the core business might be in trouble.

And it may be that companies need to consider how to deal with these types of provisions sooner rather than later. In January of 2020, California will begin enforcement of its Consumer Privacy Act. Signed into law in 2018, the CCPA enacts data privacy rules similar to those outlined in the GDPR, including the right to know what information is being collected and the right to have information deleted. However, in the time since the CCPA was passed, privacy advocacy groups have already called for stronger protections, such as updated transparency rules, increased protections for children, and the creation of an agency responsible for enforcing the rules. 2020 will be an interesting year for the United States legal landscape, and the CCPA will be something to watch closely, as it may encourage other states to follow suit, especially in the absence of sweeping federal reform.

It’s difficult to know what the US government might do in relation to data privacy, but any reform would likely cause a shift in how companies draft their Terms and Conditions and Privacy Policies. But do we really need the government to force companies to change their behavior? In a recent survey, the law firm Bryan Cave Leighton Paisner evaluated a random sample of Fortune 500 companies and discovered that only 13% of the companies had privacy notices that would comply with the CCPA.

Recall that in the exchange between Mark Zuckerberg and Senator Lindsey Graham, Graham asked how his constituents can trust Facebook to self-regulate. A certain degree of self-regulation has occurred in the wake of major privacy scandals. Facebook, for instance, changed its Terms of Service following the Cambridge Analytica incident and updated users’ ability to view all of their data in one place.

Another example is how Google has changed its privacy controls. In December 2018, Google CEO Sundar Pichai appeared before the House Judiciary Committee to testify about Google’s data collection techniques. He was asked specifically about Google’s Terms of Service by Representative Bob Goodlatte of Virginia:

[Clip from Sundar Pichai hearing]

Privacy Checkup has been Google’s answer to critics of just how much data it collects. In fact, before the exchange you just heard, Representative Goodlatte had characterized Google’s data collection like this:

[Clip from Sundar Pichai hearing]

Large technology companies would much rather self-regulate than be subject to what we earlier heard Mark Zuckerberg refer to as “the right kind of regulation.” Of course, in the case of Google and Facebook, we’ve also seen that when regulatory authorities do step in, the consequences are often shrugged off. In July 2018, the EU fined Google $5 billion for abusing the dominance of its Android operating system. Then in March of this year, the EU fined Google another $1.7 billion for abusing its dominance in the online advertising space.

Also this year, the United States Federal Trade Commission made headlines when it announced a $5 billion fine on Facebook for its privacy failures surrounding the Cambridge Analytica incident. This was in addition to a $100 million fine levied by the Securities and Exchange Commission to settle concerns about Facebook’s failure to disclose that third parties were accessing so much user data.

Compared to their yearly profits, however, these fines could be easily absorbed by both Facebook and Google, leading some to suggest that the EU and the FTC didn’t go far enough. The Electronic Privacy Information Center, or EPIC, a privacy advocacy group in the United States, went so far as to file a lawsuit arguing that the FTC needs to reevaluate its settlement with Facebook, largely because the fine was too small, but also because the settlement allowed Facebook to avoid an admission of guilt for mishandling data during multiple privacy scandals.

Monetary consequences aside, data privacy scandals haven’t escaped media attention in the United States. While many of you had probably never heard some of the Congressional hearing clips I’ve shared on this show, plenty of online news articles and videos featuring sound bites from the hearings circulated at the time. And while the American people sometimes have a short memory when it comes to issues surrounding Terms of Service and data privacy, Congress doesn’t.

Earlier this year, Facebook announced its plan to launch a digital currency called Libra. You may have heard about this, as it was a fairly high-profile announcement. To give you a quick background on digital currencies: you’ve probably heard of bitcoin at some point over the past decade. Bitcoin is the largest digital token, or cryptocurrency, in existence. Bitcoins aren’t physical coins; rather, each coin or part of a coin is stored as computer code in what’s called a digital wallet. There are many services that allow you to send digital currencies to other users, exchange them for your preferred fiat currency like the dollar or the euro, or use the coins to purchase goods at stores.

Digital currency advocates will also point out that all transactions involving these currencies must be verified by other users of the system on what’s called a blockchain. Think of a blockchain as a permanent record or ledger where all financial transactions are stored and are publicly viewable. The idea of open and transparent transactions with a digital currency has attracted many other large financial institutions to look into digital currencies of their own, but it was only a matter of time before Facebook decided to step into the space.
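If you want a feel for how a blockchain ledger hangs together, here is a tiny, purely illustrative sketch in Python. It is not how Bitcoin or Libra are actually implemented (there's no mining, networking, or consensus here), but it shows the core idea: each block commits to a hash of the block before it, so tampering with any past transaction breaks every link that follows.

import hashlib
import json
import time

def hash_block(block):
    # Deterministically hash a block's contents.
    return hashlib.sha256(json.dumps(block, sort_keys=True).encode()).hexdigest()

class Ledger:
    def __init__(self):
        # A fixed "genesis" block anchors the chain.
        self.chain = [{"index": 0, "transactions": [], "prev_hash": "0" * 64}]

    def add_block(self, transactions):
        # Each new block stores the hash of the previous block;
        # that link is what makes the history tamper-evident.
        self.chain.append({
            "index": len(self.chain),
            "timestamp": time.time(),
            "transactions": transactions,
            "prev_hash": hash_block(self.chain[-1]),
        })

    def is_valid(self):
        # Recompute every link; editing an old block breaks the chain.
        return all(
            self.chain[i]["prev_hash"] == hash_block(self.chain[i - 1])
            for i in range(1, len(self.chain))
        )

ledger = Ledger()
ledger.add_block([{"from": "alice", "to": "bob", "amount": 5}])
ledger.add_block([{"from": "bob", "to": "carol", "amount": 2}])
print(ledger.is_valid())                             # True
ledger.chain[1]["transactions"][0]["amount"] = 500   # rewrite history
print(ledger.is_valid())                             # False

In a real cryptocurrency, that chain is replicated and verified by thousands of independent participants, which is what makes the public ledger so hard to rewrite.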

Facebook’s stated goals for Libra are to provide its 2 billion worldwide users with a payment network that would allow each user to send money quickly, easily, and as cheaply as possible to any other Facebook user across the world. This could give many users in developing countries access to financial tools that would otherwise be out of reach.

However, skepticism about Facebook’s plan began to quickly drown out any positive spin the company could offer. As you can imagine, when you put Facebook and financial transactions in the same sentence, it’s hard to avoid thinking about advertising and data collection. And this is where Congress comes in. On July 16 of this year, Libra co-creator David Marcus appeared before the Senate Banking Committee, and he was met with arguably a worse reception than Mark Zuckerberg.

[Clips from Libra hearing]

While Facebook maintains that users’ Libra activity will not be subject to its normal terms of service and privacy policy, many are still skeptical. Initially, Facebook had the backing of several major payment and commerce companies, including Visa, Mastercard, eBay, and PayPal, but after just a few months, The New York Times reported that all of these companies had backed out, citing a desire to focus on other strategies, but also doubts about Libra’s ability to satisfy regulatory requirements.

Since we can only imagine the Terms and Conditions that will accompany a Facebook-owned payment processing service, it’s difficult to know if the Libra project will hold up over time, or whether Facebook’s move into the heavily-regulated financial sector will cause them to have second thoughts.

I want to take a moment to summarize what we’ve learned. First, major scandals cause public awareness of data privacy to increase, and this awareness does change behavior for a small number of users. We’ve also seen that Congress has been willing to pass laws in response to changing technology, as evidenced by the DMCA, the CDA, the response to Napster, and others. Additionally, Congress has recently attempted to hold Facebook, Google, and Twitter accountable for their roles in various scandals, including the spread of fake news and election meddling by Russian interests.

Then, there’s the European Union, which recently established the GDPR, causing major shake-ups across the world in the areas of data handling and privacy. Combine all of these factors, and you’ll see that what’s in the fine print of Terms and Conditions agreements and Privacy Policies has the potential to impact users around the world, and more and more regulatory bodies are waking up to this reality. It remains to be seen whether major legislation will arrive at the federal level in the United States, but pending the enforcement of the California Consumer Privacy Act, we may see more states move in the direction of privacy protections.

We’ve seen time after time that consumers are given little or no incentive to read the full agreements before using a service, so it’s encouraging to see some steps in the right direction, at least when it comes to raising awareness among more people.

Regardless of what the government does, it’s ultimately the responsibility of each of us to evaluate whether the services we use are worth the price we pay in privacy. And if you aren’t currently aware of the cost, you’re probably going to have to read the Terms and Conditions. So, if government isn’t the answer, what exactly can be done about changing the nature of boilerplate agreements in the online space? Should we expect companies to self-regulate, and if not, what actionable things can we, as users of online services, do to push for clearer Terms and better data practices? That’s next time on Terms and Conditions Apply.

References and Further Reading:

Lars Ulrich Senate testimony: https://web.archive.org/web/20071129061341/http://judiciary.senate.gov/testimony.cfm?id=195&wit_id=252

Napster on college campuses: https://web.archive.org/web/20111019152028/http://www.isp-planet.com/politics/napster.html

Savenapster.com: https://web.archive.org/web/20131126200705/http://archive.savenapster.com/

Survey results

https://figleaf.com/blog/privacy-guides/the-state-of-privacy/

https://figleaf.com/blog/perspectives/powerful-privacy-protection-steps-governments-are-taking/

https://www.besttechie.com/report-majority-did-not-delete-facebook-or-tighten-their-privacy-settings/

California Consumer Privacy Act: https://www.caprivacy.org/

Bryan Cave Leighton Paisner study: https://www.bclplaw.com/en-US/thought-leadership/ccpa-privacy-faqs-how-many-companies-have-already-updated-their-privacy-notices-for-the-ccpa.html

Full text of the GDPR

https://eur-lex.europa.eu/legal-content/EN/TXT/?uri=CELEX%3A32016R0679

Facebook CEO Mark Zuckerberg Hearing on Data Privacy and Protection, April 10, 2018. https://www.c-span.org/video/?443543-1/facebook-ceo-mark-zuckerberg-testifies-data-protection

Twitter CEO Jack Dorsey “Social Media and Content Monitoring” September 5, 2018 https://www.c-span.org/video/?451103-1/social-media-content-monitoring

Google CEO Sundar Pichai “Google Data Collection” December 11, 2018 https://www.c-span.org/video/?455607-1/google-ceo-sundar-pichai-testifies-data-privacy-bias-concerns

Telecommunications Act of 1996 a.k.a. Communications Decency Act: https://www.congress.gov/bill/104th-congress/senate-bill/652/text?q=%7B%22search%22%3A%5B%22Communications+Decency+Act%22%5D%7D&r=9

Google EU Fines:

https://www.npr.org/2019/03/20/705106450/eu-fines-google-1-7-billion-over-abusive-online-ad-strategies

https://www.cnbc.com/2018/07/10/eu-hits-alphabet-google-with-android-antitrust-fine.html

Facebook FTC fine: https://www.cnet.com/news/facebook-lost-control-of-our-data-now-its-paying-a-record-5-billion-fine/

Facebook SEC fine: https://www.cnet.com/news/facebook-and-the-sec-to-settle-after-probe-into-privacy-practices-risks/

EPIC lawsuit: https://www.theverge.com/2019/7/26/8932023/facebook-ftc-privacy-5-billion-settlement-privacy-group-lawsuit-epic

David Marcus Libra Senate Banking Committee Hearing: https://www.c-span.org/video/?462671-1/lawmakers-question-facebook-official-proposed-digital-currency

Libra partners back out: https://www.nytimes.com/2019/10/11/technology/facebook-libra-partners.html

Music from https://filmmusic.io

“Airport Lounge” by Kevin MacLeod (https://incompetech.com)

License: CC BY (http://creativecommons.org/licenses/by/4.0/)

“Amazing Plan” by Kevin MacLeod (https://incompetech.com)

License: CC BY (http://creativecommons.org/licenses/by/4.0/)

“No Good Layabout” by Kevin MacLeod (https://incompetech.com)

License: CC BY (http://creativecommons.org/licenses/by/4.0/)

“Hard Boiled” by Kevin MacLeod (https://incompetech.com)

License: CC BY (http://creativecommons.org/licenses/by/4.0/)

“Covert Affair” by Kevin MacLeod (https://incompetech.com)

License: CC BY (http://creativecommons.org/licenses/by/4.0/)

“Samba Isobel” by Kevin MacLeod (https://incompetech.com)

License: CC BY (http://creativecommons.org/licenses/by/4.0/)

“Mining by Moonlight” by Kevin MacLeod (https://incompetech.com)

License: CC BY (http://creativecommons.org/licenses/by/4.0/)

“Dreamer” by Kevin MacLeod (https://incompetech.com)

License: CC BY (http://creativecommons.org/licenses/by/4.0/)

Other music:

“Let That Sink In,” composed by Lee Rosevere. https://leerosevere.bandcamp.com/

Used under a Creative Commons Attribution 4.0 International License https://creativecommons.org/licenses/by/4.0/

“Royale,” performed by Josh Lippi & the Overtimers https://joshlippi.bandcamp.com/