The NeverEnding Game: How AI Will Create a New Category of Games (Andreessen Horowitz)
This helps us think about the types of outcomes Wave 2 applications will deliver, and how they’ll differ from Wave 1 outcomes. Below, we offer some examples to bring the comparison to life, though they are by no means comprehensive. We believe LLMs will need to focus on synthesis and analysis (SynthAI) that improves the quality and/or speed of decision-making (recall our B2B diagram above), if not making the actual decision itself.
Not only does that require a ton of upfront investment to reach adequate accuracy without relying too heavily on humans (otherwise, what is the point?), but it also yields diminishing marginal returns on capital invested. Many AI products need to deliver high accuracy even in rare situations, often referred to as “the tail.” While any given situation may be rare on its own, rare situations are plentiful in aggregate. This matters because as instances get rarer, the level of investment needed to handle them can skyrocket.

There are only two on the list, but they drive significant traffic: Civitai (for images) and Hugging Face both rank in the top 10. This is especially impressive because consumers typically visit these sites to download models to run locally, so web traffic likely underestimates actual usage. We relied on web traffic rather than app traffic to “qualify” companies for the list, as most consumer GenAI products have been website-first so far (more on this below!).
This has less to do with making decisions on the relevant data and more to do with collecting and synthesizing it. LLM-powered workflow software for underwriters could drive down underwriting time and cost while increasing accuracy. In the long run, the holy grail is to use AI models for real-time world building. We see a potential future where entire games are no longer rendered, but generated at run-time using neural networks.
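To make the underwriting idea concrete, here is a minimal sketch of the “collect and synthesize” step: structured applicant data is compiled into a single prompt for an LLM to summarize, leaving the final call to the underwriter. All names and fields (`Application`, `build_underwriting_prompt`) are illustrative assumptions, not a real carrier schema or product API.

```python
from dataclasses import dataclass

# Hypothetical applicant record; the fields are illustrative only.
@dataclass
class Application:
    name: str
    age: int
    property_value: int
    claims_history: list

def build_underwriting_prompt(app: Application) -> str:
    """Synthesize the relevant data into one LLM prompt, so the model
    summarizes risk factors rather than making the final decision."""
    claims = "; ".join(app.claims_history) or "none on record"
    return (
        "Summarize the key risk factors for this applicant.\n"
        f"Name: {app.name}\n"
        f"Age: {app.age}\n"
        f"Property value: ${app.property_value:,}\n"
        f"Prior claims: {claims}"
    )
```

The point of the design is that the LLM sees one consolidated view of the file, which is where the time savings come from, while the decision itself stays with the human.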
In the AI era, the same market forces are already emerging at the model layer, with OpenAI leading the pack in a competitive setup similar to the early years of AWS. However, the content generated by AI today is passable largely for repetitive and low-stakes work. For example, generative AI is good for writing short copy for ads or product descriptions; we have seen many B2B applications demonstrate impressive growth in this area. By shifting the paradigm from knowledge excavation to an action orientation, adding arms and legs has the potential to unlock a range of use cases across companies and user types.
Andreessen Horowitz (a16z) on the Architecture of LLMs
A neural language model chatbot that enables open-ended conversation with characters you create. Other hardware options do exist, including Google Tensor Processing Units (TPUs); AMD Instinct GPUs; AWS Inferentia and Trainium chips; and AI accelerators from startups like Cerebras, SambaNova, and Graphcore. Intel, late to the game, is also entering the market with its high-end Habana chips and Ponte Vecchio GPUs. So even though there doesn’t seem to be obvious defensibility endemic to the tech stack (if anything, there appear to be perverse economies of scale), we don’t believe this will hamper the impending market shift. What’s even more compelling than the rapid early growth is its sustained nature and scale beyond the novelty of the product’s initial launch.
- Though generative AI will unlock opportunities across consumer services, it will have an outsized impact in industries that haven’t been penetrated by e-commerce.
- If successful, these systems could combine to create new categories of AI-first games that entertain, engage, and retain players for a very long time.
- Generative AI refers to a subset of artificial intelligence that focuses on creating new data or content based on existing data.
- Users have also fine-tuned a version of OpenAI’s GPT-4 to play D&D with promising results.
- We encourage entrepreneurs interested in this space to focus their efforts on the areas that are still unexplored, such as “Runway for Games”.
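The D&D experiments mentioned above reduce, at their core, to a dungeon-master turn loop: the model receives the running history plus the latest player action, and the history is kept bounded so it fits the context window. This is a sketch under assumptions, not the actual fine-tune users built; `call_llm` is a stand-in for a real chat-completion API and here just echoes a canned line.

```python
# Stand-in for a real chat-completion API call (hypothetical).
def call_llm(messages: list) -> str:
    return f"The DM responds to: {messages[-1]['content']}"

def play_turn(history: list, player_action: str, max_turns: int = 20) -> str:
    """One dungeon-master turn: append the player's action, generate a
    reply, and keep the context window bounded."""
    history.append({"role": "user", "content": player_action})
    reply = call_llm(history)
    history.append({"role": "assistant", "content": reply})
    # Drop the oldest turns once we exceed the limit.
    del history[:-max_turns]
    return reply
```

In a real implementation, `call_llm` would be a network call to a hosted model, and the truncation step would likely summarize old turns rather than discard them.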
Though we think most of the value created by these AI-native companies will accrue to consumers in the form of consumer surplus, we roughly estimate the total opportunity will soon be 5–20x+ larger than it is today. A highly personalized experience is what consumers value most about in-person services: think of tutors, personal trainers, and doctors. Though companies offer some customized features, like recommendation engines, they haven’t yet had the technology to deeply personalize their services.

Most of the amazing AI products we see today are the result of no-less-amazing research, carried out by experts inside large companies and leading universities. Lately, we’ve also seen impressive work from individuals and the open source community taking popular projects in new directions, for example by creating automated agents or porting models onto smaller hardware footprints. Beyond product photos, brands create many types of content that can now be supercharged by AI.
On the other hand, the slippery slope is not a fallacy, it’s an inevitability. They will do this up to and including in ways that are nakedly felony crimes. This cycle in practice can run apparently forever, with the enthusiastic support of authoritarian hall monitors installed throughout our elite power structures. This has been cascading for a decade in social media and with only certain exceptions continues to get more fervent all the time.
Efforts to move model workloads on-device are promising but still relatively early. We’ll likely also need to figure out new norms around parasocial relationships with agents. While early, a few AI-powered gameplay areas we’ve seen that are intriguing include generative agents, personalization, AI storytelling, dynamic worlds, and AI copilots. If successful, these systems could combine to create new categories of AI-first games that entertain, engage, and retain players for a very long time. In 2019, we wrote about why we believe that in many, if not most, cases data moats may be an empty promise.
We are now seeing the next iteration of these chatbot platforms, such as Charisma.ai, Convai.com, or Inworld.ai, meant to power fully rendered 3D characters with emotions and agency, along with tools that let creators give these characters goals. This is important if the characters are going to fit within a game and advance the plot, rather than being pure window dressing. We’re starting to see companies using generative AI to generate audio to complement the work already happening on the graphics side.
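The “characters with goals” idea can be made concrete: a character sheet that compiles a designer-set persona, emotional state, and goals into the system prompt fed to a dialogue model. This is a minimal sketch assuming a generic chat-model interface; the `Character` class and its fields are hypothetical, not the API of Charisma.ai, Convai, or Inworld.

```python
from dataclasses import dataclass, field

@dataclass
class Character:
    """Designer-authored character sheet for an LLM-driven NPC."""
    name: str
    persona: str
    emotion: str = "neutral"
    goals: list = field(default_factory=list)

    def system_prompt(self) -> str:
        # Compile the sheet into a system prompt that steers dialogue
        # toward the character's goals, not just open-ended chat.
        goal_lines = "\n".join(f"- {g}" for g in self.goals)
        return (
            f"You are {self.name}, {self.persona}. "
            f"Current emotion: {self.emotion}.\n"
            f"Goals that should steer the conversation:\n{goal_lines}"
        )
```

Because the goals live in the prompt, the designer can update them as the plot advances, which is what separates a narrative-bearing NPC from window dressing.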
Building a good AI storyteller today requires lots of human rule-setting to create the narrative arcs that define a good story. Memory and coherence are important: a storyteller needs to remember what happened earlier in a story and maintain consistency, both factually and stylistically. Interpretability remains a challenge for many closed-source LLMs, which operate as black boxes, whereas game designers need to understand why a system behaved the way it did to improve the experience.
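One common sketch of the memory-and-coherence problem is to separate established canon, which must never drift, from a rolling window of recent events, and prepend both to every generation call. The class below is an illustrative assumption, not any product's actual architecture.

```python
class StoryMemory:
    """Split story state into durable facts and a recent-events window."""

    def __init__(self, window: int = 5):
        self.facts = {}    # canon that must stay factually consistent
        self.events = []   # recent beats that give local coherence
        self.window = window

    def record(self, event: str, **new_facts: str) -> None:
        # Log the beat and pin down any newly established facts.
        self.events.append(event)
        self.facts.update(new_facts)

    def context(self) -> str:
        # Build the context block prepended to each generation call.
        facts = "\n".join(f"{k}: {v}" for k, v in self.facts.items())
        recent = "\n".join(self.events[-self.window:])
        return f"Established facts:\n{facts}\n\nRecent events:\n{recent}"
```

Old events eventually fall out of the window, but anything promoted to a fact survives, which is one way to keep long stories from contradicting themselves.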
This causes some people to propose, well, in that case, let’s not take the risk, let’s ban AI now before this can happen. Unfortunately, AI is not some esoteric physical material that is hard to come by, like plutonium. It’s the opposite, it’s the easiest material in the world to come by – math and code.
Previously, AI was very specific and purpose-driven, but large language models (LLMs), the backbone of generative AI, have rewritten the script. The “4” in GPT-4 loosely signals how much complexity and scale the model can handle. GPT-3 has 175 billion parameters; OpenAI has not disclosed GPT-4’s parameter count, and widely circulated figures like “170 trillion” are unsubstantiated. Generative AI isn’t just a new craze or a shiny toy; by some estimates it could bring about major macroeconomic effects, including adding $7 trillion to global GDP and lifting productivity growth by 1.5 percentage points over a decade.
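For context on where figures like “175 billion” come from, a common back-of-envelope formula puts a dense transformer's non-embedding parameter count at roughly 12 × layers × d_model², counting ~4d² for attention and ~8d² for the feed-forward block per layer. Plugging in GPT-3's published configuration (96 layers, hidden size 12,288) nearly recovers the reported total; GPT-4's configuration remains undisclosed, so no such check is possible for it.

```python
def approx_transformer_params(n_layers: int, d_model: int) -> int:
    """Rule-of-thumb non-embedding parameter count for a dense transformer:
    ~4*d^2 (attention) + ~8*d^2 (MLP) per layer."""
    return 12 * n_layers * d_model ** 2

# GPT-3's published configuration: 96 layers, hidden size 12288.
gpt3 = approx_transformer_params(96, 12288)
print(f"{gpt3 / 1e9:.0f}B")  # prints "174B", close to the reported 175B
```

The small gap versus 175B comes from terms the rule of thumb ignores, such as embeddings, biases, and layer norms.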