AI in the Classroom: What’s Actually Working Right Now

Stacey Seltzer
April 28, 2025
5-minute read

Originally published by The AI Journal on April 24, 2025

The discourse around AI in education often lurches between panic and hype: Will it replace teachers? Is it the end of thinking? Is it a revolution? But in classrooms like ours — at Hudson Lab School, a project-based K–8 program just outside New York City — the conversation is less dramatic and more iterative. The question isn’t whether we should be using AI. The question is: how do we use it well?

At HLS, we’ve spent the last year treating AI not as a separate curriculum or policy mandate, but as a tool integrated into the daily work of learning. We’ve tested it in capstone projects, used it to support differentiated instruction, and introduced it in teacher workflows. We’ve experimented with a range of tools — ChatGPT, NotebookLM, Runway, Inkwire — and collaborated with entrepreneurs from our studio, Co-Created, to bring emerging AI applications into the school environment.

This article is a field report of sorts: a look at what’s actually working, where the challenges lie, and what we’re learning about the practical role AI can play in a real school.

Prompting as the New Literacy

AI is already in the hands of students. A recent international survey found that 86% of students report using AI tools in their academic work, with nearly a quarter engaging daily. Among American high schoolers, the numbers are even more striking — especially in writing-heavy disciplines like English and social studies.

Yet, the level of fluency with these tools varies widely. Most students know how to ask a chatbot for help. Far fewer know how to interrogate its answers, challenge its assumptions, or build a productive back-and-forth.

At HLS, we’re treating prompting as a new core literacy — a set of metacognitive practices that help students engage with generative systems effectively and responsibly. We’re not teaching (and definitely not allowing) students to use AI to write for them. We’re teaching them to use it as a partner to learn with.

We begin by introducing basic prompt structures in low-stakes contexts — not to produce polished work, but to explore ideas. Students might ask ChatGPT to act as a thought partner while planning their writing assignments, to generate study questions and flashcards based on their own notes, or to offer alternative perspectives on a historical event. Some use it to role-play different scenarios or help them look at multiple sides of an issue. Others prompt it to quiz them on concepts they’ve been struggling with, or to explain the same topic in multiple ways. 
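To make the pattern concrete, here is a minimal sketch in Python of the kind of “quiz me on my notes” scaffold described above. It assumes the OpenAI Python SDK; the prompt wording and model name are illustrative, not the exact scaffolds our students use.

```python
# Minimal sketch of a "quiz me on my notes" prompt scaffold.
# Assumes the openai package (v1+) and an OPENAI_API_KEY in the environment.
from openai import OpenAI

client = OpenAI()

SCAFFOLD = """You are a study partner for a middle-school student.
Using ONLY the notes below, ask one quiz question at a time.
After each answer, explain what was right, what was missing, and why.

Notes:
{notes}"""

def quiz_from_notes(notes: str) -> str:
    """Ask the model to open a quiz session grounded in the student's notes."""
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # illustrative; any chat model works
        messages=[
            {"role": "system", "content": SCAFFOLD.format(notes=notes)},
            {"role": "user", "content": "Quiz me on my notes."},
        ],
    )
    return response.choices[0].message.content

print(quiz_from_notes("The Constitution separates power across three branches..."))
```

The code is trivial on purpose: the leverage is in the constraint, which keeps the model quizzing from the student’s own notes rather than from the open internet.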

Because each student is engaging the model individually, with prompts tailored to their needs, the interaction becomes deeply personalized — a kind of one-on-one tutorial that adjusts in real time to the learner’s questions, interests, and level of understanding. These early interactions aren’t about getting the “right” answer. They’re about developing the habit of thinking with a tool that responds. Over time, students stop seeing AI as a vending machine and start treating it as a dynamic, imperfect collaborator — one that helps them test ideas, surface blind spots, and stretch their thinking.

When these skills are integrated into real projects, the results are both creative and rigorous.

Student Projects: AI as Amplifier and Provocation

Take, for example, the project one of our eighth graders developed as part of his capstone: a working beta version of a service called “back in my day,” which allowed people to converse with individuals from their family tree. The idea emerged from a convergence of personal interests: genealogy, digital memory, and the fact that our school is co-located with a senior living facility. The student wondered: Could you build a system that allowed people to “talk to” deceased relatives by simulating their personalities, speech patterns, and stories?

He started with family documents and oral histories, then used a combination of tools — including ChatGPT for linguistic modeling, ElevenLabs for voice generation, and a custom prompt scaffold we co-developed — to create a beta version of a persona-simulating chatbot. What started as a technical experiment quickly turned into an ethical inquiry: Should we do this? What does it mean to simulate someone’s voice, or story, or opinions? He also explored what implications such a system could have for people who are grieving, and whether it would help or harm its users.
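For readers curious about the mechanics, the heart of such a project is a persona prompt grounded in source documents. The sketch below shows that general pattern under stated assumptions: the helper names and prompt wording are hypothetical, not the student’s actual code, and his ElevenLabs voice step is reduced to a comment.

```python
# Simplified sketch of the persona-scaffold pattern (hypothetical names and
# prompt wording; not the student's actual code). Family documents ground
# the persona, and the prompt constrains the model to those sources.
from openai import OpenAI

client = OpenAI()

def build_persona_prompt(name: str, documents: list[str]) -> str:
    sources = "\n---\n".join(documents)
    return (
        f"You are simulating {name}, based strictly on the family documents "
        "and oral histories below. Stay true to their recorded speech "
        "patterns and stories. If the sources don't cover a topic, say so "
        f"rather than inventing details.\n\nSources:\n{sources}"
    )

def converse(name: str, documents: list[str], question: str) -> str:
    reply = client.chat.completions.create(
        model="gpt-4o",
        messages=[
            {"role": "system", "content": build_persona_prompt(name, documents)},
            {"role": "user", "content": question},
        ],
    )
    text = reply.choices[0].message.content
    # The student then fed `text` to ElevenLabs for voice generation;
    # that step is omitted here, since the integration was his own work.
    return text
```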

This wasn’t a sidebar project. It became a capstone: deeply personal, technically sophisticated, and intellectually provocative. And AI was at the center of it — not doing the thinking, but provoking more of it.

In another example, a sixth grader exploring the U.S. Constitution asked whether AI itself might gain personhood by 2075. Her culminating project was a simulated presidential election featuring AI candidates, designed and animated using Runway. She created original scripts, recorded performance footage, and prompted the tool to render campaign videos. What could have been a speculative gimmick became a lens for discussing democratic values, personhood, and rights — all refracted through the emerging reality of AI’s social presence.

These aren’t hypotheticals or case studies from a lab. These are middle schoolers using real tools to ask real questions about their world — and the one they’re inheriting.

Teachers as AI Practitioners

The shift we’ve seen in our teaching staff over the past year has been just as important as the changes among students — and in some ways more surprising. When we first introduced generative AI in professional development sessions, the response was cautious. Some teachers saw the tools as gimmicks. Others viewed them as a threat to their professional identity, and many simply didn’t see how the tools could be relevant to their day-to-day work.

That changed when we moved from theory to practice. As soon as teachers were given space to experiment — with support, without pressure — attitudes began to shift. They started using generative AI tools not to replace their planning, but to extend it. One teacher used ChatGPT to create differentiated reading materials from a single anchor text, adjusting the prompt to produce versions for different reading levels. Others began using it as a thought partner — brainstorming project ideas, writing prompts, rubrics, and alternate ways to explain tricky concepts. The emphasis wasn’t on perfection; it was on getting started.
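That leveled-reading workflow is easy to picture in code. Here is a rough sketch assuming the OpenAI Python SDK; the prompt is our paraphrase, not the teacher’s exact wording.

```python
# Rough sketch of producing leveled versions of a single anchor text.
from openai import OpenAI

client = OpenAI()

LEVELS = ["2nd-grade", "5th-grade", "8th-grade"]

def leveled_versions(anchor_text: str) -> dict[str, str]:
    """Return one rewrite of the anchor text per target reading level."""
    versions = {}
    for level in LEVELS:
        response = client.chat.completions.create(
            model="gpt-4o-mini",
            messages=[{
                "role": "user",
                "content": (
                    f"Rewrite the passage below at a {level} reading level. "
                    "Keep every key fact and the original sequence of ideas.\n\n"
                    + anchor_text
                ),
            }],
        )
        versions[level] = response.choices[0].message.content
    return versions
```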

NotebookLM received a lot of early attention. Teachers uploaded their weekly notes and used the tool to generate podcast-style audio summaries to accompany classroom newsletters. It was a small experiment, but an impactful one — parents reported actually listening, and it helped deepen the sense of connection between home and school.

We’ve also started piloting Goblins, an AI math tutor developed by an entrepreneur in the Co-Created network, to explore how AI might support individualized instruction in more structured subjects. It’s still early, but we’re already seeing promising signs of how targeted practice and real-time feedback can supplement classroom instruction. Particularly interesting is how well AI adapts to students’ different learning needs and approaches, allowing teachers to be more differentiated and personal in their teaching.

And then there are the quiet surprises. I remember logging into an administrative view on one platform and seeing dozens of lesson plans that had been built out — not because we had mandated anything, but because teachers had simply started using the tools. They weren’t announcing it. They were just doing the work.

Platforms like Inkwire, which support the design of interdisciplinary, project-based units, have also made a noticeable impact. Teachers report spending less time searching for ideas and more time adapting and refining them — because the foundational materials are already generated. The result isn’t generic AI-driven curriculum. It’s curriculum that reflects the creativity of the teacher, accelerated by the scaffolding these tools provide.

What’s made the biggest difference, though, is targeted support on prompting. Not “how to use AI,” but how to ask better questions. How to engage in a productive dialogue. How to refine and reframe. In our sessions, we treat prompts not as commands, but as design tools — ways to push the model, and the teacher’s own thinking, into new territory. When used that way, generative AI becomes not just a productivity booster, but a source of professional inspiration.

What We’re Learning

The value of AI in the classroom, as we’re seeing it, is not about automation or efficiency. It’s about acceleration — of thought, of design, of iteration. When used well, AI tools help both students and teachers move more quickly from idea to prototype, from question to debate, from concept to execution. And, crucially, they help surface new questions that wouldn’t otherwise be asked.

But it only works when the culture supports it. At HLS, we’re fortunate to have a school structure — interdisciplinary, project-based, agile — that allows us to experiment in real time. We also benefit from our work at Co-Created, where we collaborate with entrepreneurs building the next wave of AI-powered tools for learners and educators. That cross-pollination is essential: it keeps our thinking fresh, and it ensures that our practice is informed by the frontier, not just tradition.

Final Thoughts

AI in schools is not a yes/no question. It’s a how/why/when set of questions — and the answers will vary. What’s clear from our experience is that meaningful integration doesn’t start with policy. It starts with practice. With students experimenting, with teachers testing, with school leaders asking, week by week: What worked? What didn’t? What’s next?

That’s how we’re approaching AI at Hudson Lab School — and so far, what’s actually working isn’t the tool itself. It’s the mindset that surrounds it, especially as we know that the AI we’re using today is the worst AI we’ll ever use. 



