Whilst in Arrakis

The Future Will Be Better Tomorrow

The next Google search engine will be Generative AI

(explore this prompt on Lexica)

The Old

One of the earliest [1] incarnations of finding stuff on the web was a directory service called Yahoo, a glorified yellow pages directory [2] that was human curated. While in the beginning this was a good way for new users to navigate the web, the growth of content very quickly became so steep that human curation was no longer the best way to experience the internet. [3]

In the meantime the first web crawlers (programs that read a webpage and index its text, a task that parallelizes very well) started to appear, running on Sun SPARC or DEC Alpha servers. But the impact of the internet was mostly underestimated, and these companies pitched search engines as use cases to sell to corporates instead of understanding that the web was *the* market. So their products were not designed for the customers who were actually using them. [4]
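To see why indexing parallelizes so well, here is a minimal sketch of a crawler's indexing stage: each page can be tokenized independently and the results merged into an inverted index. The toy pages and all names are my own illustrations, not any real crawler's code.

```python
import re
from concurrent.futures import ThreadPoolExecutor

# Toy stand-ins for fetched pages; a real crawler would download these.
PAGES = {
    "http://a.example": "<html><body>search engines index the web</body></html>",
    "http://b.example": "<html><body>the web grows faster than curation</body></html>",
}

def tokenize(html):
    """Strip tags and split into lowercase words."""
    text = re.sub(r"<[^>]+>", " ", html)
    return re.findall(r"[a-z]+", text.lower())

def index_page(item):
    """Index one page in isolation: (word, url) postings for its unique words."""
    url, html = item
    return [(word, url) for word in set(tokenize(html))]

def build_index(pages):
    """Each page is indexed independently, so the work fans out trivially."""
    index = {}
    with ThreadPoolExecutor() as pool:
        for postings in pool.map(index_page, pages.items()):
            for word, url in postings:
                index.setdefault(word, set()).add(url)
    return index

index = build_index(PAGES)
print(sorted(index["web"]))  # → ['http://a.example', 'http://b.example']
```

Because no page depends on any other, adding machines scales the indexing step almost linearly, which is exactly what made cheap clusters so attractive.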

Linux started getting better [5], and computer clusters built from custom PCs became cheaper, so web crawlers were able to maintain ever bigger indexes and cache more web content. This was the state of the art of search engines. The ~28th search engine to hit the market, Google, was much better than the others. I personally used it because it had the best archive for programming and technical material (all the others returned garbage because they expected natural language and became confused when you searched for code with weird characters and operators). Google took off, more or less became a global monopolist, and here we still are.

Since then there have been attempts to challenge Google with bigger indexes (Cuil, whose archive – to their dismay – was mostly porn, I believe), semantic search (True Knowledge and Powerset, later integrated into Microsoft Bing), privacy (DuckDuckGo, which ironically is neither a search engine nor private), and more recently You.com (which I am currently using), which allows users to rank results based on their preferences.

But really Google was the last search engine to enter the market. A market that has grown from a few million users to a few billion ones and that defines human culture.

The New

What I am trying to say is that search engines work the way they do not because it is the best way to search, but because it was the best technology available in the late 1990s, and we are stuck in that paradigm. Maybe there is a better way, but change is hard.

Search engine design, both mobile and desktop, is stuck in a local maximum, but the content we consume has changed. A lot of it is in graph form (social networks), data streams (social feeds), video (YouTube and TikTok), ecommerce, authoritative knowledge (Wikipedia), apps, etc. I think much of it found that local centralization was more efficient.

I think Google cannot be challenged in their core territory. And it's not just my opinion but more of a fact: it's what has happened for 24 years.

[Machine Learning has entered the chat]

I think that the advancements in Generative AI present a point of disruption for multiple industries, but specifically they are a way to break Google’s hegemony on search engines.

Instead of using a big database and searching on it, we need to use that database as training data, and generate results with a neural network. 

EDIT: one of the reasons this is a big deal is that trained models are tiny compared to their training data. Stable Diffusion is ~2 gigabytes but its training data is ~100 terabytes (with an estimated training cost of ~$600k for 256 A100s running for 150k hours), and the consensus is that the size of the model could be reduced by one or two orders of magnitude further still, possibly more. So if you consider that the internet is several petabytes, it's not unreasonable that it could be reduced to a 100 gigabyte model, which is somewhat portable (a 10 Gbit/sec fibre connection may soon be ordinary in wealthy urban areas). One can see, then, how you could have your own search engine installed locally without relying on Google. The computing cost of running the model would be negligible, as the burden would be on training. That opens another can of worms: we do not have continuous training yet, so the model would have to be retrained and redownloaded in its entirety every time there is new information. A great deal of technology needs to be invented before Google can be replaced, but as others have suggested there might be a hybrid solution in the meantime – to some extent Google already tries to generate some query results when asked certain questions, and over time they have tried to reduce the traffic to non-Alphabet properties.
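The numbers above work out on the back of an envelope; a quick check in Python, where the 5 PB stand-in for "several petabytes" is my own assumption:

```python
# Back-of-the-envelope figures from the post (claims, not measurements).
GB = 10**9
TB = 10**12
PB = 10**15

sd_model = 2 * GB            # claimed Stable Diffusion checkpoint size
sd_training_data = 100 * TB  # claimed training data size
compression_ratio = sd_training_data / sd_model
print(f"Stable Diffusion compression ratio: ~{compression_ratio:,.0f}x")  # ~50,000x

web_size = 5 * PB            # "several petabytes" for the web (assumption)
local_model = web_size / compression_ratio
print(f"Implied local model: ~{local_model / GB:.0f} GB")  # ~100 GB

link_bps = 10 * 10**9        # 10 Gbit/s fibre
download_s = (100 * GB * 8) / link_bps
print(f"Downloading 100 GB at 10 Gbit/s: ~{download_s:.0f} seconds")  # ~80 s
```

So at Stable Diffusion's compression ratio alone, before any further shrinkage, a web-scale model already lands in the ~100 GB range and redownloads in minutes on a fast link.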

Instead of searching for something, opening the first few results, and scanning for the content we want – while fighting millions of popups, ads, and weight loss scams – one should be able to generate the answer one is looking for.

This would also change distribution: installing the entire Google archive on your laptop is not feasible, but an ML model can definitely run locally (again my money would be on Stability.AI, as this is something DeepMind would be prevented from pursuing [6]).

I have been thinking about this post for a while, and in the meantime one search engine has already launched (not quite what I am proposing, but it uses language models to generate URLs: https://metaphor.systems/ ). Adept is also working on an AI assistant which is very impressive, even if (IMHO) they have not nailed the use case yet. Lexica is a search engine for prompts, which is also interesting (they start with art, but that doesn't mean they will end there).

This would completely bypass the distribution monopoly and advertising business of incumbents, as training a model is expensive (for now) but running it is not (the marginal cost of a Google search is negligible, but the cost of maintaining their infrastructure is very much not).

And I am not arguing that this should be only about text. We have the ability to generate multimedia content, so in the near term you could generate a synthetic video on how to replace a faucet or how to solve a calculus problem.

Generative AI is very exciting, and as we backport it to old industries we might find that the incumbent is no longer needed.

Notes:

[1] yes I know there was stuff before Yahoo but hypergrowth on the internet happened with the web and allowing commerce on it, and all the other engines (Lycos, InfoSeek etc) have been dead for a very long time

[2] no idea when (or if?) they stopped producing them, but it was a printed local phone book of all the businesses in the area, organized by trade, and it was the main way to acquire customers in the telephone age. In the early '90s, as the cost of printing kept going down, these things were huge in terms of page count, but within 5 years or so they became ridiculously small until they disappeared

[3] Yahoo pivoted multiple times and is still around; if you want to experience what it looked like you can check yahoo.co.jp (it does not work if you are in Europe, but you can image-search it to see screenshots), which was licensed to SoftBank decades ago and barely updated since

[4] AltaVista was the most interesting example: they briefly controlled the market, but because in their mind the product was for indexing corporate documents inside a private network, they never built any kind of spam protection. Very quickly the top results were just spam garbage and everyone moved to Google. Their demise was lightning fast

[5] I first used Linux in 1995 and, although infuriating at times, it was very promising. By 1998, when Google started, it was quite good even for enterprise production, and most definitely better than Windows for servers, while Unix was fragmented by vendors making sure their products were as incompatible as possible, which doomed them as developers did not bother

[6] which IMO should be the primary reason why they do it as a priority, but I guess one is incentivized to forget about the Innovator's Dilemma when being measured on quarterly performance

Advertisement

Metaverse, wtf

After yesterday's MSFT/ATVI deal announcement several people are dunking on Kotick's NBC comments on the metaverse, and on the whole concept in general. Now, leaving aside the allegations against him, which will hopefully be addressed soon, he is somewhat right. The game industry has been at the forefront of several technologies (GPUs, networking, UI design, AI, and I'm sure there are a dozen more), but it remains behind the tech industry on the one metric that matters: who has root on the computer upon which stuff runs.

Apart from two console manufacturers (one of which – Sony – is not a serious contender in the software market), the fight is moving onto data centers, mobile devices and the “metaverse” [1] will be just the name of the content running on some infrastructure.

And whoever owns that infra will ask for rent.

Correctly, game companies are realizing they are targets for M&A, and scale helps offset the seasonality of game releases (a game can take 5-7 years from concept to release, with up to 500 full-time employees doing nothing else).
Incidentally this is why Epic is kicking up a stink against Apple. Epic doesn't have a hardware strategy (for now?), and without an open distribution platform all their profits will be taken by the platform holders. The same goes for Snap making AR glasses, and Meta investing in VR.
I suspect the M&A feeding frenzy will continue this year. The streaming platforms are also moving into this space (Apple Arcade, Netflix), while traditional media and entertainment falls further behind. It's a massive market that is only getting bigger.

There has never been a better time to start a game company.

[1] We use the term metaverse for lack of a better word. Before the internet or the web, we used terms like “information superhighway” which also meant fuck all in terms of clarity.

Idle thoughts on gaming and web3

(a World of Warcraft account – not mine – currently on sale on eBay)

I think the intersection of gaming and crypto/web3 is super interesting, mostly because it's founded on a well known concept in games: virtual items. Because I get asked about it often, I just wanted to share some idle notes. I am not touching new forms of governance such as DAOs, which I think are super interesting but where my thinking is still evolving. Anyway.

There are going to be many implementations, but for now cryptocurrencies have a lot of drawbacks: they are a scourge on the climate due to their extraordinary energy usage, they force companies to invest in ever increasing computational capabilities to resist 51% attacks, and the technology is immature with a still-evolving tech stack – with the added benefit that a mistake can be fatal.

(LPs should apply closer scrutiny to crypto VCs, as most cryptocurrency investments invalidate any ESG stance they might have – and I would argue they should void their green credentials and put them in the doghouse.)

However, I believe in practice a lot of game studios might dabble with the occasional item on Ethereum, but they will run their systems in a way that is not fully distributed. And to be fair it doesn't matter much: your database can be as centralized as you want, since the game developer has full control over how the item is represented in game anyway.

They will still call it web3, but it probably won’t be crypto or blockchain.

I would also argue that the largest gaming companies were not created by just making a better game, because incumbents have pretty good teams and execution (try to poach an art director or a senior engineer from Riot/Take Two as a startup and see what happens), but by changing the economics of making a game (which could be a new platform, using new tech, new business model, or a change in available infrastructure for customers), which in turn enables a new customer acquisition channel.

As we figure out the tech around web3, the first implementation of this new world should be not technical but legal, specifically in the terms & conditions of a game and the attitude of a developer/publisher.

Game companies for the most part block any kind of secondary market (CS:GO and TF2 being exceptions), and if I were to dip my toes into web3 as a game developer, that’s the one thing I would change first.

I mean, let's look at World of Warcraft, where the re-sale of accounts is banned. Selling services (like having a team carry you through a dungeon above your power or expertise level) for real money is banned, but selling them for virtual gold is not – and gold can be purchased with a credit card. So what exactly is being banned here?? 🤷‍♂️

A black market economy already exists as a result of Blizzard asking users to jump through hoops. And because it has to exist outside the game developer's reach it's pretty shady, although with less fraud than one might think; it's still huge because there is a lot of demand.

There are also aggregators of these services, like https://www.wowtcgloot.com, that allow people (like me) to buy items for debilitating amounts of money.

So I would recommend game devs wanting to experiment with web3 to start thinking about virtual property as assets that can be traded, and to consider how that would impact game balance: which items should be bound to an account forever, and which ones could be traded away. Diablo 3 at launch had a real-money trading auction house, but it failed terribly, mostly because people (including myself) wanted to play DIABLO THREE, as in DIABLO TWO JUST BETTER, and instead got something else that was fundamentally broken, because the RMT AH impacted gameplay and rendered it not fun. Some of the ideas were not bad.

The same game if launched today, by another studio, and not called D3, would probably be a huge success and considered a shining example of a play-to-earn game.

Artificial scarcity is an element that can be considered at a later date. For now, think of the integration of web3 into a game as an auction house similar to Warcraft's own, one that could be accessed from outside the game and that would support multiple currencies. It would definitely create some unwanted behavior (who hasn't cornered several commodities in the Warcraft AH 😏) and, generally speaking, higher prices – or at least prices stable across servers and closer to their optimum... but it doesn't matter much, because as the controllers of the in-game economy the developers could always spawn more items and inflate the market, making sure the game remains fun.
I would also argue that scarcity dictated by (virtual) money is not fun and would 100% cause squatting/rent-seeking behavior, so focus on scarcity by skill (and here there are a plethora of options, as gaming skills are multidimensional).

But because everything is fractal, it's not just in-game items that could be traded as virtual property: the account itself should be something that can be sold. I recently stopped playing Warcraft, and my mini-pet collection (top 250 worldwide, thank you very much 😅) and virtual gold were worth a few thousand US dollars – without taking into account the mounts and other rare items.

As a customer, having a facility to sell my account would be a great benefit, and the devs would get a cut of the sale. But there is more: I would love the ability to move a game license from Steam to Epic (think of the game store as a wallet/bank and the game license as currency, with users paying transfer fees to the devs every time they want to move – IMO a win-win). The long-term implications for how to price games are unclear to me at this point, but replicating the rights I have over physical objects like books is something the market has tried to adopt for decades despite opposition from everyone (remember eBay banning trading EverQuest items?), and it has passed the tipping point.
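The wallet/bank analogy can be made concrete with a toy sketch. Everything here – the function names, the record shape, the flat $2 fee – is a hypothetical illustration of the mechanic, not any store's real API:

```python
# Toy model of "store as wallet, license as currency": moving a license
# between stores pays the developer a transfer fee. All names and the
# $2 flat fee are my own assumptions for illustration.

def transfer(license_record, to_store, fee=2.00):
    """Move a license to another store; the user pays the developer a flat fee."""
    previous = license_record["store"]
    license_record["store"] = to_store
    return {"from": previous, "to": to_store, "dev_receives": fee}

lic = {"game": "Example Game", "owner": "me", "store": "Steam"}
receipt = transfer(lic, "Epic")
print(lic["store"], receipt["dev_receives"])  # Epic 2.0
```

The point of the design is that the developer earns on every move, so portability becomes a revenue stream rather than a threat.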

Some notes on the Tesla AI demo day

The level of technical detail shown at demo day was really surprising. Tesla built their own infrastructure and chipsets for training, and based on their claims they outperform almost everyone, including a lot of well funded startups (seriously, an entire class of neuromorphic startups is going to be wiped out by NVIDIA and Tesla, based on what I have seen this year)

On computing power alone Tesla's infrastructure would be the 5th largest supercomputer cluster in the world. It really shows the difference between building a tech company (that happens to make stuff with wheels) and other automakers dealing with a decaying portfolio of fossil fuel engines plus an inefficient dealer network

But the chipsets they designed have massive bandwidth, they are using the data from the cars to train future models, and it's really an AI-first company – I mean, it has to be; the rest of the industry just hasn't figured it out yet

Tesla is unstoppable in terms of how far ahead they are in AI compared to any other carmaker

They also showed a robot with a bullshit dance routine, and the genius of Musk is making sure it got most of the press coverage (30k articles and counting) – while most journalists couldn't really appreciate the level of sophistication of the computing infrastructure

However the robot is not too bullshit, and the 2020s are the dawn of the age of androids. Boston Dynamics could not find a market in the military, which is going to end up buying drones by the cartload instead (I expect autonomous drone warfare to become a separate branch of the military, or to absorb the air force in the process). So all these robotics companies are going to sell for consumer and entertainment uses – just today there is an article about Disney also investing in robotics: https://www.nytimes.com/2021/08/19/business/media/disney-parks-robots.html?smid=tw-nytimes&smtyp=cur
(we also have a company in this space at ConceptionX so keep an eye out for demo day in November)

Unclear whether they can really dominate personal robotics yet (probably not), but this will kick off consumer interest and start a new market with massive impact (and to some extent it's an easier one than cars, because using AI indoors is much easier)

I just wish all companies were investing this hard in future tech

Sunset on Intel

So,

the transition away from the x86 architecture is a discussion I have heard all my life. Even back when I started, x86 was considered sub-optimal and in need of a rewrite from the ground up, but because demand for DOS/Windows was skyrocketing, a rewrite could not be attempted. This problem of a shaky foundation remained for literally decades, while Intel lagged behind in die shrink and performance gains came not from architectural advancements but from more processors, more clock speed, and tricks like speculative execution (which ended up causing unfixable security issues). Basically x86 was running into a performance wall, and they were brute forcing through it.

The market dominance was their biggest advantage, but also the biggest barrier to developing a leaner alternative.

In new markets they did develop an alternative. As server computing grew, they developed Itanium, which was IA-64 and not x86, but its development was based on old assumptions (or tests, depending on who you ask), meaning it was optimized for the previous generation of supercomputers, which were mostly about scientific computation. The server market that was actually growing was cloud computing, where IA-64's performance just was not there, so they ended up releasing an expensive server processor that did not perform better than their previous architecture and that the x86 market didn't want anyway. Itanium was finally killed last year.

Around the mid-2000s, for reasons known only to them, as the market for smartphones was booming like crazy, they decided to sell their mobile chipset division (again, "mobile" meant "how to squeeze PC performance and power consumption into a phone" rather than developing a new product designed around the constraints of a new device).

Better/leaner/lower-power alternatives were actually being funded (Transmeta being one), but because they targeted an existing market, they all faced the headwinds of developers not wanting to recompile programs for something that would ship 100,000 devices at best. With the release of the iPhone this changed: the ARM architecture got better every cycle, until reaching the point (and this is MIND BLOWING) of outperforming x86 in emulation mode.

I suspect (this is a huge speculation) Jim Keller left when he told Intel that the only solution was to ditch x86 and write something else.

The monopoly days of Intel CPUs in desktops are over. Microsoft adjusted, going from ~90% computing market share to ~8%, by moving onto the cloud, and they have done it extremely successfully. I don't think Intel can pull off the same feat.

The biggest issues of Intel, in my opinion, were:

Not funding external innovation.

They did not invest in the competition. Finding a VC backing a silicon startup was very hard in the Bay Area. Ironically it was one of the areas where more funding was available in Cambridge (which is almost never the case).

Not investing in mobile

They decided against investing in the R&D needed to win the iPhone business, because it was too expensive. Around 2005-2010 (IIRC) there were about 750m PCs and 2bn phones. PC shipments were slowing down for the first time in history, and it was clear that within a decade all phones would have a dual processor. The era of the PC was not over; mobile was simply going to be a bigger market. By then it was also clear that Itanium was not going to be a solution for commercial cloud computing, web hosting etc.

Getting out of the GPU gaming business

This was harder to plan for, but (to some extent correctly) they got out of gaming. Alas, it turned out that a lot of problems besides gaming use linear algebra (machine learning, crypto), and it was clear by the early 2000s that some sort of programmable GPU could offload computation and speed up programs massively. In the early 2000s programming for GPUs was clunky, so one can understand the skepticism, and even in 2010 it was not obvious how big deep learning was going to be (ironically not even to NVIDIA themselves)

TL;DR: none of this can be reversed now that their monopoly is over. A company like Intel is huge, and they are a key national defense supplier, so they can keep going for decades, but the decline is unstoppable.

A Personal Star Wars story

I never really pestered my kids with Star Wars; just because it was a huge pillar of my childhood doesn't mean they have to care about it. So I was pleasantly surprised this week when my youngest asked me to have a SW marathon in preparation for Episode IX.

I think his biggest influence in this decision was how much he enjoys watching The Mandalorian.

I had not really watched A New Hope since the 1997 cinema re-release and had certainly never seen it in 4K, and wow, even something I know so intimately gave me powerful goosebumps. Rarely do I get to relive elements of my childhood that stand the test of time.

I just finished the first one, and really glad I did.

Anyway here’s the planned list in order:

IV
Star Wars Canteen (LEGO version, obviously)
Rogue One
Xmas Special (only the animated segment)
V
I
II
III
VI
VII
VIII
Solo

Heroes never die

So,

I was digging through some old folders and found a couple of documents that might be of interest. These are very early documents from our work on EverQuest Next. At the time we wanted to do AI storytelling, but Sony was clear that they were only interested in combat AI and that we should stick to it. These were our proposals; we did other, much bigger-scope work after that.

The first document is the initial pitch after we were told the scope of the game that is now public (at this point we hadn’t seen anything about the game) and it wasn’t clear how many features we would have to develop for the final product. As you can see Magic (the card game) was a huge influence on the prototyping stage.

https://www.dropbox.com/s/jxzgb52360v04dl/EQN%20AI%20Proposal%20-%20RR.pdf?dl=0

 

The second document was our proposal for a demo of the AI combat system, and that was what helped us advance the discussion for our involvement in EQN. It’s pretty dry but page 2 gives you an idea of how we felt about the problem, and that adding AI combat for us was just a small step in a grander vision of an AI-driven game.

https://www.dropbox.com/s/9kb03hot2552osx/Storybricks%20Combat%20AI%20Proposal.docx?dl=0

 

I have some more documents, but the earliest NDA with SOE I have found was signed after these documents were produced. I have done my due diligence; if there is a disagreement, please produce documents that show otherwise.

A lot of people contributed to this, mostly Stéphane Bura, Dave Mark, Sam Baker, and Brian Green.

I have several non-EQN related documents about Storybricks, but re-reading them I get really embarrassed by my naïveté at the time regarding game design (the AI was surprisingly spot-on), so I'm not keen to share them. Others were game pitches of stuff that I or Stéphane might do one day (we built a lot of concepts), so no dice.

The design for the slides was shamelessly ripped off from Tim Cain.

Again, I want to stress how great it is to use Magic to prototype game ideas. Try it.

Good old OnLive

You were the gift that kept on giving. Raised $200m at a valuation of $1.8bn:

July 2011: Cloud-based video games will put pirates out of business

July 2011: OnLive CEO says his pet wireless project has sub-millisecond latency

January 2012: OnLive can turn your iPad into a remote desktop with live-streamed MS Office apps and more

January 2012: Review: The futuristic OnLive Desktop runs Windows apps on the iPad

February 2012: OnLive delivers ridiculously fast web browsing on the iPad

March 2012: OnLive brings its amazing Windows desktop access to Android tablets

October 2012: OnLive’s assets were sold for only $4.8 million

March 2014: OnLive is reborn with IGN founder as chief and two new cloud-based gaming services

April 2015: OnLive will shut down its cloud gaming services and sell assets to Sony

(all these articles were written by one outlet, the only one that would give OnLive extensive coverage and good reviews despite the absence of any customer traction, and without ever questioning the company's fundamentals)

Snapchat is now augmented reality-first

So,


Facebook did the deal with Oculus mostly so that they would own the post-mobile hardware platform. PCs are the old platform and Wintel won; the current generation is dominated by Google Android and Apple iOS, and it's unclear if there is room for a third competitor at scale [1]. Facebook decided against making their own mobile phone and instead invested in what was coming next.

So Facebook acquired Oculus.

Virtual reality was always a stepping stone to augmented reality[2], as today you need to find time to actually use a VR headset. As the chart below shows, attention is a very fought-over currency.

(chart: how daily attention is split across activities)

This is not the case with augmented reality, where you can double-stack minutes: do other stuff and still consume AR content (via a pair of glasses, for example [3]).

Snapchat is rumored to be tossing their hat into the space and going for an AR play, but I would argue that Snapchat already is an augmented reality company. In the same way Google pivoted into an artificial intelligence company around the time of the DeepMind acquisition [4], Snapchat started with self-deleting messages, but their lenses [5] have clearly taken over and are driving the virality for new users. If you take that experience further, then to me the mission of Snapchat becomes clear: it's the lens through which you experience the world. A better, more beautiful, augmented world.


Once you acknowledge that, there is no point in limiting it to a phone. Instead, use lenses to mediate interaction with reality, making a pair of glasses the perfect fit [6], and this is when it gets really interesting.

There are other companies in the AR space, but they don't have Snapchat's passionate user base, or they focus too much on games – which is cool and a big market in itself, but entertainment is tiny compared to, say, communications or commerce. A Snapchat reality lens powered by artificial intelligence and computer vision would be silly, incredible, and widely adopted all at the same time.

It would also be a great developer platform with clear ways to monetize: expanding lenses to recognize objects other than faces (e.g. cans of soda, so that looking at a Coke can triggers some cool animation), or different input lenses (e.g. an audio lens that generates custom clips when something is played, or replaces ambient sound – possibly via noise-cancelling headphones). Like lenses today, it doesn't have to be dominated by advertisements blocking your reality.

Oculus is essentially a game console manufacturer, and it's not going great with hardcore gamers. The product caters to gamers who can spend big 💸💸💸 on a new PC, but it can only be played for a bit before you get a headache. The design is not quite there yet and will realistically need several more iterations. And the locking down of games is pushing consumers and game developers away from it and onto the HTC Vive [7].

Here’s what I believe is going to happen:

Facebook's mission is simple: sharing experiences and content with others. To date Oculus doesn't really fit that mission and possibly never will. Because of this I expect Oculus to be spun off or sold to a game company [8], or potentially to Microsoft, as HoloLens is not a credible product (or even a product) and Oculus could be part of the Xbox unit.

Snapchat, however, is a fit, and also already a strategic threat to Facebook (it's half the size of Instagram). More importantly, its users are all about sharing. So it could fill the vacant AR space left by Oculus. It's well known that Facebook tried to buy them for $3bn and the offer was rejected, eerily similar to when Google tried to buy Facebook for $15bn [9]. Hopefully Facebook will remember that.

Snapchat's self-deleting messages are no longer the main reason people join; it's about AR and messaging (with AR). So expect them to double down on lenses and possibly become a full augmented reality platform with their own hardware. But here's the big difference with other AR vendors: they are all either stuck in enterprise edge cases (Google Glass at Work) or games. Snapchat is different. Snapchat is Minecraft: simple rules that (mostly young) people have appropriated, explored, and built their own language and culture on top of. It has that new Corinthian leather feeling that Facebook used to have. I believe this culture is Snapchat's biggest asset.

This culture will adopt a new "toy" in the form of AR and run with it, while other competing platforms struggle to meet the expectations set by their own bullish hype.

TLDR: Snapchat is an augmented reality-first company, Facebook will divest Oculus and instead try to acquire Snapchat.

 

[1] Even though there could be an opportunity for a Cyanogen, or a new OEM to conquer critical market share by making it easy to install an OS on a phone. Blackberry is probably toast and ready to be absorbed by some large company like HP.

[2] But solving VR means solving a subset of the technical problems, and it can be easier than going AR native.

[3] Even though it could give life to pretty weird scenarios like this "Hyper-Reality" one: https://vimeo.com/166807261. My point, though, is that they want to make it fun for users to engage with brands, and will have the brands pay for it. If users are not engaged they won't use the platform, and then there is no critical mass for ads – therefore users > advertisers, so I'm not expecting weird stuff to happen.

[4] Yes, Google was always an AI company, but not really: it was a giant search engine with ads. The process was already under way, but the DeepMind M&A was a clear demarcation point and right now we are seeing the results. See 'Inside Sundar Pichai's Plan To Put AI Everywhere'.

[5] It’s the video filter that makes the silly faces.

[6] Glasses today and contact lenses in the future.

[7] This is the first time in more than a decade anyone is actually excited about HTC products. Incredible.

[8] I really don't know who could afford this beyond actual console manufacturers. Plus game studios have no idea how to manage hardware. The other alternative is that they keep Oculus and starve the unit of resources until it's small enough that it can be folded into "Facebook VR Games".

[9] IIRC – I can't find the article – I think it was Megan Smith who was tasked with that directly by Larry Page.

Magic mirrors and AI

If you haven't yet, read 'My Bathroom Mirror Is Smarter Than Yours' before this post.


Smart mirrors show that consumers want a multi-sensory experience. Touch and vision are the primary drivers of Human-Computer Interaction. Voice is a novelty that can build on top of them, but asking people to change HCI behavior without a clear benefit is madness.

So far the only beneficiaries of voice are people who are driving, people who can't write, and people with disabilities. Clearly not growing markets.

With voice we can't easily receive complex data structures like lists and charts, and bandwidth is an issue.

 
Reading 250 words per minute
Listening 150 wpm
Speaking 105 wpm (this assumes the AI will understand 100% of the time. Sci-fi)
Typing 33 wpm
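The bandwidth gap in those rates is easy to quantify; a small sketch, where the 500-word answer is my own illustrative assumption:

```python
# Rough words-per-minute rates from the post; the 500-word answer length
# is an assumption for illustration.
rates_wpm = {"reading": 250, "listening": 150, "speaking": 105, "typing": 33}

answer_words = 500
for channel, wpm in rates_wpm.items():
    minutes = answer_words / wpm
    print(f"{channel:>9}: {minutes:.1f} min for a {answer_words}-word answer")
```

Reading delivers the same answer in roughly a quarter of the time typing takes, which is why a glanceable screen like a mirror beats a voice-only interface for anything dense.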

Compare the reception of all the Nest/Echo etc. products with the smart mirror. That guy accidentally discovered potential product-market fit with a Medium post (a world first?)

 
Soon we'll see more smart mirror equipment for sale: DIY kits, APIs. Citymapper just released an update that would make sense on a mirror (it tells you if there is going to be a delay on your route before you ask). In fact, while the Apple Watch is a "personal" device, a lot of its apps might make more sense on a mirror.

I think there is an opportunity to exploit multi-sensory AI that nobody has cracked, and I also believe glass as a material will have an ever increasing role in our lives (Corning mentioned using Gorilla Glass for windscreens, for example, and there are transparent solar cells coming to market).

If you are an entrepreneur looking to build magic mirrors, or a VC looking to fund one (and you should), I'd love to discuss the AI behind it.