A People-First Perspective on the AI Economy: Prioritizing Human Impact and Innovation

Today marks nine months since the launch of ChatGPT and six weeks since we unveiled our AI Start seed fund. Through discussions with numerous AI founders and C-suite executives (CXOs), I can confidently say we are experiencing an exhilarating period in the tech landscape.

In less than a year, AI investments have become essential for any portfolio, new unicorns are emerging weekly, and the belief that AI will contribute to a stock market surge is gaining traction. A growing number of individuals outside the tech sector are becoming acquainted with new terminology, such as:

- Large language models

- ChatGPT

- Deep-learning algorithms

- Neural networks

- Reasoning engines

- Inference

- Prompt engineering

- Copilots

Leading strategists and thought leaders are sharing insights on how AI will revolutionize business, unlock potential, and enhance human well-being.

While uncertainties remain, and it's wise to stay cautious about the risks of any new technology (“Oppenheimer,” anyone?), a strong conviction fuels my optimism. At Mayfield, we adhere to a “people-first” principle: the founder's visionary ambition elevates the customers of their product and sparks a vibrant community around it. Applied to AI, this people-first philosophy takes on even greater significance. I believe two key dynamics will converge to position AI as a transformative force that enables humans to evolve into what I call Human², or “human squared.”

First, our primary mode of communication with devices will shift to conversational interactions. We have evolved from command lines, GUIs, and mobile applications to engaging in rich, nuanced conversations with computers. This transformation will be amplified by a second dynamic: for the first time, technology will perform cognitive tasks that enhance our own abilities.

Rather than merely automating repetitive tasks, AI will generate novel solutions in ways that resemble human creativity. This collaboration means we can amplify our capabilities with a human-like co-pilot, whether that is a teammate, coach, assistant, or genie. AI x Human = Human². And given the immense potential power of AI, it is critical that we prioritize responsible development.

[Figure: Human → Automate cognitive tasks → Accelerate productivity → Amplify creativity → Superhuman]

We have tailored our people-first framework specifically for AI companies, using it to inform our investment strategies. Today, we are excited to share the five core pillars of this framework, aimed at promoting responsible AI investing:

Mission and Values Matter

Foundational values shape company culture and cannot simply be bolted on as a company grows. We saw this in the missions of three of our most successful portfolio companies of the last decade: Lyft aimed to improve people's lives through better transportation; Poshmark put people at the heart of commerce; HashiCorp built essential infrastructure that enables innovative leaps.

Now, we engage with AI-first founders to gauge their human-centric missions and core values, ensuring alignment in their vision and technology impact.

GenAI Must Be Ingrained

The recent surge in AI has stemmed from groundbreaking contributions from researchers, ethicists, and technologists. We seek founders immersed in this environment, as they are better equipped to create people-first AI businesses.

When engaging with founders, we're on the lookout for:

- A core belief that AI augments human capabilities rather than replaces them—positioning AI as a teammate or co-founder.

- A founding team with experience in generative AI—whether through academia, applied innovation, or a unique entry into the generative AI domain.

- A passion for design and user experience that highlights AI's capabilities in human-computer interactions.

- Solutions powered by generative AI technologies like LLMs, proprietary models, datasets, and conversational interfaces.

- A clear value proposition that focuses on offloading mundane tasks.

Trust and Safety Must Come First

We recognize that AI can have adverse effects, including but not limited to hallucinations, bias, privacy violations, and the misuse of intellectual property. We urge founders to assess their models' trustworthiness and explore comprehensive model evaluation efforts, such as those conducted at Stanford. It’s essential to evaluate trust throughout a model's lifecycle—from development to deployment—while adhering to growing regulations and guidelines regarding responsible AI usage.

Data Privacy is a Fundamental Right

We assert that privacy must be a distinct focus rather than just a facet of trust and safety. Thanks to regulations such as the CCPA and GDPR, companies are progressively implementing robust data controls. As generative AI continues to evolve, ensuring ethical data handling becomes crucial, especially concerning intellectual property.

Governance aspects that companies need to address include data discovery and inventory, detection and classification of sensitive information, and understanding data access and consent parameters. We encourage founders to build proactive safeguards to mitigate risks before issues arise.

Superhuman Impact is Measurable

We believe that people-first AI can elevate society, and we are developing a design framework to assess that potential during founder engagements.

Reflecting on our previous successes—Lyft, Poshmark, and HashiCorp—each elevated their respective communities, fostering growth and dynamism. Founders faced tough choices to remain true to their missions but found fulfillment in empowering others.

As early-stage investors, our mission is to support entrepreneurs and help create iconic companies. We are dedicated to a people-first approach in nurturing generation-defining AI enterprises, aspiring for lasting success and a better world.
