Whilst in Arrakis

The Future Will Be Better Tomorrow

The endgame of AI

Human losing a battle against artificial intelligence, generated using SD on my laptop

We have entered the endgame of AI progress. Companies should have responded to DeepMind's AlphaGo/AlphaZero by appointing a Chief AI person, and mostly did not. With the Bing/OpenAI announcement we have passed a barrier in the narrative, and your board should empower a team ASAP with the sole and explicit purpose of disrupting your core business with AI as much as they can. Because it's happening. LLMs are the beginning of the end. There are several more milestones to go (ongoing learning, understanding, robotics), but for a lot of tasks they are already good enough, and more importantly they don't just help with existing processes: they allow users to do things that were not possible before, and that is a recipe to mess up your business – and if you think you are immune, you still can't assume that your own customers are.

AIs are going to be the main interface we will use to interact with the world. If you are concerned about the dominance of Google Search or Microsoft Windows you have seen nothing yet.

Sure, you can regulate or sue left and right, and maybe that will destroy one company, or give you a nice payout for one quarter. But it doesn't matter. As Napster showed for music, the appetite for this tech is high and so are the user benefits, externalities be damned.

When the history of humanity is written centuries from now, the Singularity will be traced back to the invention of the transistor, but only because it allowed us to build AIs.

The joke used to be "it's called AI when it doesn't work, because when it works it's called something else", and no one is making that joke anymore.


Some Dragon’s Lair discoveries, 40 years on

Got sick again this week (yay) so I self-medicated with massive amounts of YouTube, and spent quite a bit of time watching one of my favorite subjects from my childhood: laserdisc games. In the process I unearthed some stuff I did not know about Dragon's Lair:

  • The game was developed by three companies. RDI was trying to develop a console (the Halcyon), but the unit economics were so horrendous that they pivoted into making an arcade game.
  • Don Bluth Productions was recovering from a financial failure. Their first animated feature, The Secret of NIMH, underperformed at the box office (Bluth said that kids were either watching E.T. or playing at the arcade instead of watching his movie), so they explored making video games and partnered with RDI.
  • The third company involved in making Dragon's Lair was the publisher, Cinematronics, which had just gone through a Chapter 11 restructuring the year before. With this background it is a miracle that anything shipped at all.
  • There are a number of unused scenes on the Dragon's Lair disc, which suggest the game originally had some limited form of choice of movement within the castle (branching paths were only added in Space Ace, Cinematronics' following laserdisc game).

Here Dirk moves a log and reveals a trapdoor that leads to an exit, instead of taking the right door like in the normal level.

In the released game an electrified cage surrounds Dirk and one can only move forward. But here it looks like the player can choose between three paths: possibly the lava men level to the left, the chewing door in front, and an unused level to the right.

The cave to the right takes Dirk to the unused level, which features a tentacle monster and some jumping.

EDIT: Another hidden level is present on the disc, featuring three one-eyed fleshy pig orcs (porks?) throwing spears at Dirk. The rough coloring (I think there are no special effects or complex color palettes), plus gameplay resembling a traditional game with very repetitive animation, suggests to me this was an early gameplay test, possibly the first playable level created to show to investors before exploring the more cinematic approach to gameplay that dominates Dragon's Lair.

In the released game Dirk also approaches the potions but is attacked by a green slime monster. The deleted scene suggests that each potion has a use and the player can choose which one to pick: one is neutral, one buffs Dirk (an extra life perhaps?), and one is poured over his sword, adding a flame effect – similar to the endgame sword that kills the dragon, maybe hinting at some sort of proto-inventory system (which was later expanded in Dragon's Lair 2 with hidden objects).

EDIT: A sequel, Dragon's Lair 2: Time Warp, was developed but sat unreleased until 1990 because of the lack of an agreement between Don Bluth Productions and Cinematronics. That one also had a missing level (the Pirate Ship), which was never fully developed and was instead reused in the absolutely awful PC game Dragon's Lair III.

EDIT: I also found that RDI worked on an original prototype of a storytelling console called Shadoan in 1979, which used a roll of printing paper and a keyboard (below). It can be seen at the 4:12 mark of this collection of interviews about the making of DL. The Shadoan evolved from paper into a film projector before morphing into a proto-Halcyon, but the toy industry was not interested. You can see the obvious Dungeons & Dragons influences on the Shadoan, as well as the re-use of the wizard's den in Dragon's Lair.

  • Anyway, I just found it fascinating to discover something new, almost 40 years later, about a game that I thought I knew like the back of my hand, and wanted to share.
  • Given the extreme financial difficulties of the companies involved in the production, I think gameplay was cut down; anything involving choice would also have created an interruption within a scene, which would have been irritating (normally breaks only happen between levels or when Dirk dies, and IIRC it could take ~1 second for the laserdisc player to find the new track to play).
  • I remember reading about the Halcyon in 1986 or so, as RDI kept pitching the idea of a console with a laserdisc player. It wasn't a daft idea per se, and the problem was not even the unit economics of the games, which pretty much required making a small animated movie (at the time outsourcing to ultra-cheap Korean animation studios was starting to become available); it was that the gameplay lacked variety (it could only do what we today call quick time events) and had limited replayability.
  • The price tag for the Halcyon was going to be $2,500 ($6,800 in today's prices); it was going to be voice controlled and built around a Z80 microprocessor. It never saw the light of day, but a few prototype units are known to exist. It would have beaten the PlayStation by 10 years, but I guess the advancement in computing power (meaning the PlayStation was capable of 3D) plus the CD player (which unlike the laserdisc could support digital video and data) is what ultimately won the market.
  • Apparently Cassandra Peterson (Elvira) was an investor and owns one.
  • Side note: RDI went bust and its founder became a realtor; Don Bluth Productions struggled, then relocated its animation studio to Ireland thanks to generous tax breaks, and is partly responsible for bootstrapping the Irish animation industry – which could be why one of the best animation studios, Cartoon Saloon (of Song of the Sea and The Secret of Kells fame), is in Kilkenny. Cinematronics went through multiple acquisitions and now offers gambling apps.
  • I have read that an interactive laserdisc adaptation of The Lord of the Rings was also planned (by a Japanese company IIRC, but I can't find the name), but it's unclear if it ever entered production or if it was just going to reuse parts of the 1978 Bakshi movie.

The next Google search engine will be Generative AI

(explore this prompt on Lexica)

The Old

One of the earliest [1] incarnations of finding stuff on the web was a directory service called Yahoo, which was a glorified yellow pages directory [2] curated by humans. While in the beginning this was a good way for new users to navigate the web, very quickly the rate of growth of content became so steep that human curation was no longer the best way to experience the internet. [3]

In the meantime the first web crawlers (programs that would read a webpage and index its text, which is something that parallelizes very well) started to appear, running on Sun SPARC or DEC Alpha servers. But mostly the impact of the internet was underestimated, and these companies pitched search engines as a use case to sell to corporates, instead of understanding that the web was *the* market. So their product was not designed for the customers who were actually using it. [4]
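(A toy sketch of what I mean by "parallelizes very well" – purely illustrative, not how any of those engines actually worked: each page can be fetched and tokenized independently, and only the merge into the inverted index needs any coordination.)

```python
# Toy crawler/indexer: fetch pages in parallel, build an inverted index.
# Illustrative sketch only; URLs and the tokenizer are placeholders.
import re
import urllib.request
from collections import defaultdict
from concurrent.futures import ThreadPoolExecutor

SEED_URLS = [
    "https://example.com/",
    "https://example.org/",
]

def fetch_and_tokenize(url):
    """Download one page and return (url, set of lowercase words)."""
    with urllib.request.urlopen(url, timeout=10) as resp:
        html = resp.read().decode("utf-8", errors="ignore")
    text = re.sub(r"<[^>]+>", " ", html)          # crude tag stripping
    return url, set(re.findall(r"[a-z0-9]+", text.lower()))

def build_index(urls):
    """Inverted index: word -> set of URLs containing it."""
    index = defaultdict(set)
    with ThreadPoolExecutor(max_workers=8) as pool:   # fetches run in parallel
        for url, words in pool.map(fetch_and_tokenize, urls):
            for word in words:                        # merging is the only serial step
                index[word].add(url)
    return index

if __name__ == "__main__":
    index = build_index(SEED_URLS)
    print(index.get("example", set()))                # naive "search"
```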

Linux started getting better [5], and computer clusters based on it and on custom-built PCs became cheaper, so web crawlers were able to build bigger and bigger indexes and cache more web content. This was the state of the art of search engines. The ~28th search engine to hit the market, Google, was much better than the others. I personally used it because it had the best results for programming and technical stuff (all the others just returned garbage, because they were built for natural language and got confused when you searched for code with weird characters and operators). Google took off, more or less became a global monopolist, and here we still are.

Since then there have been some attempts to challenge Google, with bigger indexes (Cuil, whose index – to their dismay – was mostly porn, I believe), semantic search (True Knowledge and Powerset, later integrated into Microsoft Bing), privacy (DuckDuckGo, which ironically is neither a search engine nor private), and more recently You.com (which I am currently using), which lets users rank results based on their preferences.

But really, Google was the last search engine to enter the market. A market that has grown from a few million users to a few billion, and that defines human culture.

The New

What I am trying to say is that search engines work the way they do not because it's the best way to search, but because it was the best technology available in the late 1990s, and we are stuck in that paradigm. Maybe there is a better way, but change is hard.

Search engine design, both mobile and desktop, is stuck in a local maximum, but the content we consume has changed. A lot of it is in graph form (social networks), data streams (social feeds), video (YouTube and TikTok), ecommerce, authoritative knowledge (Wikipedia), apps, etc. Much of that content has centralized into its own silos, because that turned out to be more efficient.

I think Google cannot be challenged in their core territory. And it's not just my opinion; it's what has happened for 24 years.

[Machine Learning has entered the chat]

I think that the advancements in Generative AI present a point of disruption for multiple industries, but specifically they are a way to break Google’s hegemony on search engines.

Instead of using a big database and searching over it, we should use that database as training data, and generate results with a neural network.

EDIT: one of the reasons why this is a big deal is that trained models are really tiny compared to their training data. Stable Diffusion is ~2 gigabytes, but the training data is ~100 terabytes (with an estimated training cost of ~$600k for 256 A100s running for a total of ~150k GPU-hours), and the consensus is that the size of the model could be reduced by another one or two orders of magnitude, possibly more. So if you consider that the internet is several petabytes, it's not unreasonable that it could be reduced to a 100 gigabyte model, which is somewhat portable (a 10 Gbit/sec fibre connection could soon be ordinary in wealthy urban areas). One can see why you could have your own search engine installed locally and not have to rely on Google. The computing cost of running the model would be negligible, as the burden would be on training. That opens another can of worms: we do not have continuous training yet, so the model would have to be retrained and redownloaded in its entirety every time there is new information, and a great deal of technology still needs to be invented before Google can be replaced. But as others have suggested there might be a hybrid solution in the meantime – to some extent Google already generates answers to some queries directly, and over time they have tried to reduce the traffic to non-Alphabet properties.
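(If you want to sanity-check the back-of-the-envelope numbers above, here they are spelled out; all figures are the rough estimates quoted in the paragraph.)

```python
# Back-of-the-envelope numbers from the paragraph above (all rough estimates).
training_data_bytes = 100e12          # ~100 TB of training data for Stable Diffusion
model_bytes = 2e9                     # ~2 GB trained model
print(f"compression ratio: ~{training_data_bytes / model_bytes:,.0f}x")        # ~50,000x

gpu_hours = 150_000                   # quoted total A100 GPU-hours
training_cost_usd = 600_000           # quoted training cost
print(f"implied price: ~${training_cost_usd / gpu_hours:.0f} per A100-hour")   # ~$4

hypothetical_model_bytes = 100e9      # a speculative 100 GB "whole internet" model
fibre_bits_per_sec = 10e9             # a 10 Gbit/s home fibre connection
seconds = hypothetical_model_bytes * 8 / fibre_bits_per_sec
print(f"download time at 10 Gbit/s: ~{seconds:.0f} seconds")                   # ~80 seconds
```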

Instead of searching for something, opening the first few results, and scanning for the content we want while fighting millions of popups, ads, and weight-loss scams, we should be able to generate the answer we are looking for.

This would also change distribution: installing the entire Google archive on your laptop is not feasible, but an ML model can definitely run on it (again, my money would be on Stability.AI, as this is something DeepMind would be prevented from pursuing [6]).
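(As a toy illustration of "generate the answer" with a locally-run model, this is roughly what it looks like today with the Hugging Face transformers library; gpt2 here is just a tiny stand-in, nowhere near good enough to replace search, and the freshness/retraining problem from the EDIT above still applies.)

```python
# Toy sketch: answer a query by generating text from a locally-run model,
# instead of retrieving and ranking documents. gpt2 is a tiny stand-in;
# a model actually good enough to replace search would be much larger.
from transformers import pipeline

generator = pipeline("text-generation", model="gpt2")

query = "How do I replace a leaking kitchen faucet?"
result = generator(
    f"Question: {query}\nAnswer:",
    max_new_tokens=100,
    do_sample=False,          # greedy decoding, deterministic output
)
print(result[0]["generated_text"])
```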

I have been thinking about this post for a while, and in the meantime one search engine along these lines has already launched (not quite what I am proposing, but it uses language models to generate URLs: https://metaphor.systems/ ). Adept is also working on an AI assistant which is very impressive, even if (IMHO) they have not nailed the use case yet. Lexica is a search engine for prompts, which is also interesting (they start with art, but that doesn't mean they will end there).

This would completely bypass the distribution monopoly and advertising business of incumbents, as training a model is expensive (for now) but running it is not (the marginal cost of a Google search is negligible, but the cost of maintaining their infrastructure is very much not).

And I am not arguing that this should be only about text. We have the ability to generate multimedia content, so in the near term you could generate a synthetic video on how to replace a faucet or how to solve a calculus problem.

Generative AI is very exciting, and as we backport it to old industries we might find that the incumbent is no longer needed.

Notes:

[1] yes, I know there was stuff before Yahoo, but hypergrowth on the internet happened with the web and with allowing commerce on it, and all the other engines (Lycos, InfoSeek etc.) have been dead for a very long time

[2] no idea when (or if?) they stopped producing them, but it was a printed local phone book of all the businesses in the area, organized by trade, and it was the main way to acquire customers in the telephone age. In the early '90s, as the cost of printing kept going down, these things were huge in terms of page count, but within 5 years or so they became ridiculously small, until they disappeared

[3] Yahoo pivoted multiple times and is still around; if you want to experience what it looked like you can check yahoo.co.jp (it does not work if you are in Europe, but you can image search it to see screenshots), which was licensed to SoftBank decades ago and barely updated since

[4] AltaVista was the most interesting example: they briefly controlled the market, but because in their mind the product was meant to index corporate documents inside a private network, they never built any kind of spam protection, very quickly the top results were just spam garbage, and everyone moved to Google. Their demise was lightning fast

[5] I first used Linux in 1995 and, although infuriating at times, it was very promising. By 1998, when Google started, it was quite good even for enterprise production, and most definitely better than Windows for servers, while Unix was fragmented by different vendors making sure their products were as incompatible as possible, which doomed them because developers did not bother

[6] which IMO should be the primary reason why they should do it as a priority, but I guess one is incentivized to forget about the Innovator's Dilemma when being measured on quarterly performance

Metaverse, wtf

After yesterday's MSFT/ATVI deal announcement several people are dunking on Kotick's NBC comments on the metaverse, and on the whole concept in general. Now, leaving aside the allegations against him, which will hopefully be addressed soon, he is somewhat right. The game industry has been at the forefront of several technologies (GPUs, networking, UI design, AI, and I'm sure there are a dozen more), but it remains behind the rest of the tech industry on the one metric that matters: who has root on the computer upon which the stuff runs.

Apart from two console manufacturers (one of which – Sony – is not a serious contender in the software market), the fight is moving to data centers and mobile devices, and the "metaverse" [1] will just be the name of the content running on some infrastructure.

And whoever owns that infra will ask for rent.

Game companies are correctly realizing that they are just M&A targets, and that scale helps offset the seasonality of game releases (a game can take 5-7 years from concept to release, with up to 500 full-time employees doing nothing else).
Incidentally, this is why Epic is kicking up a stink against Apple. Epic doesn't have a hardware strategy (for now?), and if they don't have an open distribution platform, all their profits are going to be taken away by the platform holders. The same goes for Snap making AR glasses, and Meta investing in VR.
I suspect the M&A feeding frenzy will continue this year. The streaming platforms are also moving into this space (Apple Arcade, Netflix), while traditional media and entertainment falls further behind. It's a massive market that is only getting bigger.

There has never been a better time to start a game company.

[1] We use the term metaverse for lack of a better word. Before the internet or the web, we used terms like “information superhighway” which also meant fuck all in terms of clarity.

Idle thoughts on gaming and web3

(a World of Warcraft account – not mine – currently on sale on eBay)

I think the intersection of gaming and crypto/web3 is super interesting, mostly because it's founded on a well-known concept in games: virtual items. Because I get asked about it often, I just wanted to share some idle notes. I am not touching new forms of governance such as DAOs etc., which I think are super interesting, but my thinking is still evolving there. Anyway.

There are going to be many implementations, but for now cryptocurrencies have a lot of drawbacks: they are a scourge on the climate due to their extraordinary energy usage, and they force companies to invest in ever-increasing computational capabilities to resist 51% attacks, not to mention that the technology is immature and the tech stack still evolving – with the added benefit that a mistake can be fatal.

(LPs should perform closer scrutiny on crypto VCs, as most cryptocurrency investments invalidate any ESG stance they might have – I would argue it should void their green credentials and put them in the doghouse.)

However, I believe in practice a lot of game studios might dabble with the occasional item on Ethereum, but they will run their systems in a way that is not fully distributed. And to be fair it doesn't matter much: your database can be as centralized as you want, since the game developer has full control over how the item is represented in game anyway.
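(A minimal sketch of what I mean, with all names made up: at most the chain records who owns a token ID, while the studio's own centralized database decides what that token actually is inside the game – stats, art, whether it can be traded at all.)

```python
# Hypothetical sketch: the on-chain record is just (token_id -> owner);
# what the token *means* in the game lives in the studio's own database,
# fully under the developer's control.
from dataclasses import dataclass

@dataclass
class InGameItem:
    name: str
    attack: int
    tradeable: bool          # the studio can flip this regardless of the chain

# Pretend this came from an Ethereum contract (owner lookup only).
onchain_ownership = {
    1001: "0xAliceWalletAddress",
    1002: "0xBobWalletAddress",
}

# The centralized game database maps the same token IDs to actual gameplay data.
game_db = {
    1001: InGameItem(name="Flaming Sword", attack=42, tradeable=True),
    1002: InGameItem(name="Founder's Cape", attack=0, tradeable=False),
}

def resolve_item(token_id: int):
    """What the player actually gets is whatever the developer's DB says."""
    owner = onchain_ownership.get(token_id)
    item = game_db.get(token_id)
    return owner, item

print(resolve_item(1001))
```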

They will still call it web3, but it probably won’t be crypto or blockchain.

I would also argue that the largest gaming companies were not created just by making a better game, because the incumbents have pretty good teams and execution (try to poach an art director or a senior engineer from Riot/Take-Two as a startup and see what happens), but by changing the economics of making a game (a new platform, new tech, a new business model, or a change in the infrastructure available to customers), which in turn enables a new customer acquisition channel.

As we figure out the tech around web3, the first implementation of this new world should be not technical but legal, specifically in the terms & conditions of a game and the attitude of a developer/publisher.

Game companies for the most part block any kind of secondary market (CS:GO and TF2 being exceptions), and if I were to dip my toes into web3 as a game developer, that’s the one thing I would change first.

I mean, let's look at World of Warcraft, where the re-sale of accounts is banned. Selling services (like having a team carry you through a dungeon above your power or expertise level) for real money is banned, but selling them for virtual gold is not – and gold can be purchased with a credit card. So what exactly is being banned here?? 🤷‍♂️

A black market economy already exists as a result of Blizzard asking users to jump through hoops. And because it has to exist outside the game developer's reach, it's pretty shady (although with less fraud than one might think), but it's still huge because there is a lot of demand.

There are also aggregators of these services, like https://www.wowtcgloot.com, that allow people (like me) to buy items for debilitating amounts of money.

So I would recommend game devs wanting to experiment with web3 to start thinking about virtual property as assets that can be traded, and to consider how that would impact game balance: which items should be bound to an account forever, and which ones could be traded away. Diablo 3 at launch had a real-money trading auction house, and it failed terribly, mostly because people (including myself) wanted to play DIABLO THREE, as in DIABLO TWO JUST BETTER, and instead got something else that was fundamentally broken, because the RMT AH impacted gameplay and made it not fun. Some of the ideas were not bad, though.

The same game if launched today, by another studio, and not called D3, would probably be a huge success and considered a shining example of a play-to-earn game.

Whether to have artificial scarcity is an element that can be considered at a later date. For now, consider the integration of web3 into a game as an Auction House similar to Warcraft's own, one that could be accessed from outside the game and that supports multiple currencies. It would definitely create some unwanted behavior (who hasn't cornered several commodities in the Warcraft AH 😏) and, generally speaking, higher prices – or at least prices that are stable across servers and closer to their optimal level… but it doesn't matter, because as controllers of the in-game economy the developers can always spawn more items and inflate the market, making sure the game remains fun.
I would also argue that scarcity dictated by (virtual) money is not fun and would 100% cause squatting/rent-seeking behavior, so focus on scarcity by skill (and here there is a plethora of options, as gaming skills are multidimensional).

But because everything is fractal, it's not just in-game items as virtual property that could be traded – the account itself should be something that can be sold. I recently stopped playing Warcraft, and my mini-pet collection (top 250 worldwide, thank you very much 😅) and virtual gold were worth a few thousand US dollars – without taking into account the mounts and other rare items.

As a customer benefit, having a facility to sell my account would be great, and the devs would get a cut of it. But there is more: I would love the ability to move a game license from Steam to Epic (think of the game store as a wallet/bank and the game license as currency, with users paying the devs a transfer fee every time they want to move – IMO a win-win). The long-term implications for how to price games are unclear to me at this point, but replicating the rights I have over physical objects like books is something the market has been trying to adopt for decades, despite opposition from everyone (remember eBay banning the trading of EverQuest items?), and it has now passed the tipping point.

Some notes on the Tesla AI demo day

The level of technical detail shown at demo day was really surprising. Tesla built their own infrastructure and chipsets for training, and based on their claims they outperform almost everyone, including a lot of well-funded startups (seriously, an entire class of neuromorphic startups is going to be wiped out by NVIDIA and Tesla based on what I have seen this year).

On computing power alone Tesla's infrastructure would be the 5th largest supercomputer cluster in the world. It really shows the difference between building a tech company (that happens to make stuff with wheels) and other automakers dealing with a decaying portfolio of fossil fuel engines plus an inefficient dealer network.

The chipsets they designed have massive bandwidth, they are using the data from the cars to train future models, and it's really an AI-first company – I mean, it has to be; it's just that the rest of the industry hasn't figured that out yet.

Tesla is unstoppable in terms of how far ahead they are in AI compared to any other carmaker.

They also showed a robot with a bullshit dance routine, and the genius of Musk is making sure it got most of the press coverage (30k articles and counting) – while most journalists couldn't really appreciate the level of sophistication of the computing infrastructure.

However the robot is not entirely bullshit, and the 2020s are the dawn of the age of androids. Boston Dynamics could not find a market in the military, which is going to end up buying drones by the cartload instead (I expect autonomous drone warfare to become a separate branch of the military, or to absorb the air force in the process). So all these robotics companies are going to sell to the consumer and entertainment markets – just today there is an article about Disney also investing in robotics: https://www.nytimes.com/2021/08/19/business/media/disney-parks-robots.html?smid=tw-nytimes&smtyp=cur
(we also have a company in this space at ConceptionX so keep an eye out for demo day in November)

It's unclear whether Tesla can really dominate personal robotics yet – probably not – but this will kick off consumer interest and start a new market that will have a massive impact (and to some extent it's an easier problem than cars, because using AI indoors is much easier than outdoors).

I just wish all companies were investing this hard in future tech

Sunset on Intel

So,

the transition away from the x86 architecture is a discussion I have heard all my life. Even back when I started, the architecture was considered sub-optimal and in need of a rewrite from the ground up, but because demand for DOS/Windows was skyrocketing, the rewrite could not be attempted. And this problem of a shaky foundation remained for literally decades, while Intel lagged behind on die shrinks and performance gains came not from architectural advancements but from more processors, more clock speed, and tricks like speculative execution (which ended up causing unfixable security issues). Basically x86 was running into a performance wall, and they were brute-forcing through it.

The market dominance was their biggest advantage, but also the biggest barrier to developing a leaner alternative.

In new markets, they did develop an alternative. As server computing grew they developed Itanium, which was IA-64 and not x86, but its development was based on old assumptions (or tests, depending on who you ask) – meaning it was optimized for the previous generation of supercomputers, which were mostly about scientific computation. But the server market that was actually growing was cloud computing, where IA-64's performance just was not there, so they ended up releasing an expensive server processor that did not perform better than their previous architecture and that the x86 market didn't want anyway. Itanium was finally killed last year.

Around the mid-2000s, for reasons known only to them, as the market for smartphones was booming like crazy, they decided to sell their mobile chipset division (again, "mobile" here meant "how to squeeze PC performance and power consumption into a phone" rather than developing a new product designed around the constraints of a new device).

Better/leaner/lower-power alternatives were actually being funded (Transmeta being one), but because they targeted an existing market, they all faced the headwinds of developers not wanting to recompile programs for something that would ship 100,000 devices at best. With the release of the iPhone this changed: the ARM architecture got better with every cycle, until reaching the point (and this is MIND BLOWING) of outperforming x86 in emulation mode.

I suspect (and this is pure speculation) that Jim Keller left when he told Intel that the only solution was to ditch x86 and write something else.

The monopoly days of Intel CPUs in desktops are over. Microsoft adjusted to going from ~90% computing market share to ~8% by moving to the cloud, and they have done it extremely successfully. I don't think Intel can pull off the same feat.

The biggest issues of Intel, in my opinion, were:

Not funding external innovation

they did not invest in the competition. Finding a VC willing to back a silicon startup was very hard in the Bay Area. Ironically, it was one of the areas where more funding was available in Cambridge (which is almost never the case)

Not investing in mobile

they decided against investing in the R&D needed to win the iPhone business, because it was too expensive. Around 2005-2010 (IIRC) there were about 750m PCs and 2bn phones. PC shipments were slowing down for the first time in history, and it was clear that all phones would have dual processors within a decade. The era of the PC was not over; it's just that mobile was going to be a bigger market. By then it was also clear that Itanium was not going to be a solution for commercial cloud computing, web hosting, etc.

Getting out of the GPU gaming business

This one was harder to foresee, and (to some extent correctly) they got out of gaming. Alas, it turned out that a lot of problems besides gaming use linear algebra (machine learning, crypto), and it was clear by the early 2000s that some sort of programmable GPU could offload computation and speed up programs massively. At the time programming for GPUs was clunky, so I can understand the skepticism, and even in 2010 it was not obvious how big deep learning was going to be (ironically not even to NVIDIA themselves).

TL;DR: none of this can be reversed now that their monopoly is over. A company like Intel is huge, and they are a key national defense supplier, so they can keep going for decades, but the decline is unstoppable.

A Personal Star Wars story

I never really pestered my kids with Star Wars; just because it was a huge pillar of my childhood doesn't mean that they should care about it. So I was pleasantly surprised this week when my youngest asked me to have a SW marathon, in preparation for Episode IX.

I think his biggest influence in this decision was how much he enjoys watching The Mandalorian.

I had not really watched A New Hope since the 1997 cinema re-release and had certainly never seen it in 4K, and wow, even something I know so intimately gave me powerful goosebumps. Rarely do I get to relive elements of my childhood that stand the test of time.

I just finished the first one, and I'm really glad I did.

Anyway here’s the planned list in order:

IV
Star Wars Canteen (LEGO version, obviously)
Rogue One
Xmas Special (only the animated segment)
V
I
II
III
VI
VII
VIII
Solo

Heroes never die

So,

Was digging through some old folders and found a couple of documents that might be of interest. These are very early documents from our work on EverQuest Next. At the time we wanted to do AI storytelling, but Sony was clear that they were only interested in combat AI and that we should stick to it. These were our proposals; we did other work, with a much bigger scope, after that.

The first document is the initial pitch, written after we were told the scope of the game that is now public (at this point we hadn't seen anything about the game), and it wasn't clear how many features we would have to develop for the final product. As you can see, Magic (the card game) was a huge influence on the prototyping stage.

https://www.dropbox.com/s/jxzgb52360v04dl/EQN%20AI%20Proposal%20-%20RR.pdf?dl=0


The second document was our proposal for a demo of the AI combat system, which is what helped us advance the discussion about our involvement in EQN. It's pretty dry, but page 2 gives you an idea of how we felt about the problem, and that adding combat AI was for us just a small step in a grander vision of an AI-driven game.

https://www.dropbox.com/s/9kb03hot2552osx/Storybricks%20Combat%20AI%20Proposal.docx?dl=0


I have some more documents, but the earliest NDA with SOE I have found was signed after these documents were produced. I have done my due diligence; if there is a disagreement, please produce the documents that show otherwise.

A lot of people contributed to this, mostly Stéphane Bura, Dave Mark, Sam Baker, and Brian Green.

I have several non-EQN related documents about Storybricks, but re-reading them I get really embarrassed by the naïveté I had at the time about game design (the AI was surprisingly spot-on), so I'm not really keen to share them. Others were game pitches for stuff that I or Stéphane might do one day (we built a lot of concepts), so no dice.

The design for the slides was shamelessly ripped off from Tim Cain.

Again, I want to stress how great it is to use Magic to prototype game ideas. Try it.

Good old OnLive

You were the gift that kept on giving. Raised $200m at a valuation of $1.8bn:

July 2011: Cloud-based video games will put pirates out of business

July 2011: OnLive CEO says his pet wireless project has sub-millisecond latency

January 2012: OnLive can turn your iPad into a remote desktop with live-streamed MS Office apps and more

January 2012: Review: The futuristic OnLive Desktop runs Windows apps on the iPad

February 2012: OnLive delivers ridiculously fast web browsing on the iPad

March 2012: OnLive brings its amazing Windows desktop access to Android tablets

October 2012: OnLive’s assets were sold for only $4.8 million

March 2014: OnLive is reborn with IGN founder as chief and two new cloud-based gaming services

April 2015: OnLive will shut down its cloud gaming services and sell assets to Sony

(all these articles were written by one outlet, the only one that would give OnLive extensive coverage and good reviews, despite the absence of customer traction and without ever questioning the company's fundamentals)