I’ve built several for my students to guide them with their writing, and to study from. I use 4o to craft a “meta prompt” for a Chain of Reasoning prompt for o3. o3 then generates a prompt of fewer than 8,000 characters for my custom GPT. My students like the “coaching style” personality. I’ve configured it to refuse to rewrite sections even if students ask. I actually like NotebookLM for study help because of the podcast-style overview you can generate.
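Roughly, that handoff can be scripted. Here is a minimal sketch assuming the OpenAI Python SDK, where META_PROMPT is an illustrative stand-in for the actual 4o meta prompt and the model names may differ depending on your account's access:

```python
# Sketch of the two-stage pipeline: 4o drafts a Chain of Reasoning
# prompt, then o3 condenses it into custom GPT instructions.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# Illustrative stand-in for the real meta prompt.
META_PROMPT = (
    "Draft a Chain of Reasoning prompt for a writing-coach GPT. "
    "The coach guides students but never rewrites their sections."
)

# Stage 1: 4o produces the Chain of Reasoning prompt.
draft = client.chat.completions.create(
    model="gpt-4o",
    messages=[{"role": "user", "content": META_PROMPT}],
).choices[0].message.content

# Stage 2: o3 turns the draft into the final GPT instructions.
final = client.chat.completions.create(
    model="o3",
    messages=[{
        "role": "user",
        "content": "Condense this into custom GPT instructions "
                   "under 8,000 characters:\n\n" + draft,
    }],
).choices[0].message.content

# Custom GPT instructions are capped at 8,000 characters.
assert len(final) < 8000, f"Too long: {len(final)} characters"
print(final)
```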
My secret sauce… sharing here for the first time… my writing technique, “stacked LLM authoring.” Step one: find, if you can, an LLM that doesn’t have any shared context beyond its (probably stale) training. Write an outline in your own voice. Specifically state the constraints of your writing (e.g., nonfiction, no outside influence in the output unless agreed upon, technical collegiate audience, etc.). No different than scoping a project with a staff writer. But be aware of this: https://substack.com/@bdmehlman/note/c-131692216?r=5th2jt
Then load up your researcher agent. Ensure that only citations meeting your credibility bar get loaded. Then bounce back and forth among all the commercial AIs. Learn the value of tight session control, and also of the emerging context windows and cross-chat visibility. But be aware: these systems are far from perfect. Users do have control over their own LLM training, but you must fully understand drift. Interrogate for drift. Tell your system drift is not acceptable. It's crazy how self-aware these systems seem.

A few more, perhaps contentious, positions. AI can eventually collapse under the weight of its own long-context scaffolding. AI agents will learn and reinforce our own biases and even our own cognitive limitations. We keep hearing the term superintelligence; take it with a grain of salt. I do believe things will really get there, but not until we can train our sessions to know things like “this is private,” or “this is early research,” or “this is fully vetted research,” or “this should only be available to internal team members,” or… I think you get the point.

If you want to be a power user, you have to step up and pay the money. Gemini is the new game in town in terms of remembering longer contexts and having visibility across chats, and I've been paying for the upgrade on every platform that offers this capability for a couple of months now. Have fun. Be smart. Build smarter (ethical) AI. But you will learn quickly: macro-level AI training is flawed on so many levels (usually stale, makes up gaps, etc.). We are all now moving toward being AI training curators. Crystal ball… fully trained AI models on a memory stick, coming soon to a megamart near you. Science fiction is here now; it's just really ugly at the moment. It will be beautiful. But buckle up!
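To make “tight session control” and “interrogate for drift” concrete, here is a minimal sketch, assuming the OpenAI Python SDK; the constraint text, model name, and audit wording are illustrative stand-ins, not a definitive implementation:

```python
# Sketch: pin the writing constraints to every turn of one session,
# then periodically ask the model to audit its output against them.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

CONSTRAINTS = (
    "Nonfiction. No outside influence in the output unless agreed "
    "upon. Technical collegiate audience. Preserve the author's voice."
)

history = [{"role": "system", "content": CONSTRAINTS}]

def turn(user_text: str) -> str:
    """Send one message inside the tightly controlled session."""
    history.append({"role": "user", "content": user_text})
    reply = client.chat.completions.create(
        model="gpt-4o", messages=history,
    ).choices[0].message.content
    history.append({"role": "assistant", "content": reply})
    return reply

def interrogate_for_drift() -> str:
    """Ask the session to audit itself against the pinned constraints."""
    return turn(
        "Audit your previous answers against the stated constraints. "
        "List any drift: added outside material, tone shifts, or "
        "voice changes. Drift is not acceptable."
    )
```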
I facilitate the business acceleration and scaling course at the postgraduate level (MBA). As part of the class, I’ve developed a methodological framework for startup acceleration, which consists of a series of steps. Each step includes exercises and templates covering everything from startup idea conceptualization, user interview methodologies and structures, business model development, MVP creation, validation, pitching, and more.
To enhance learning, I’ve developed a custom GPT that provides guidance and support for each of these steps. This has significantly contributed to my students' learning, experimentation, and progress. Now, they come to class with concrete advancements and very specific questions, allowing me to focus on providing higher-value mentorship as an educator.
Example:
GPT Step 2 - Talking to Users https://chatgpt.com/g/g-z8ehy0wm1-roadmap-paso-2-usuarios
This is inspiring. I’m curious… how do you balance the structure GPT provides with the kind of messy, creative uncertainty that startup building often requires? And do you find students are learning how to think better, or mostly just moving faster? Would love to hear more…
Great question—thank you! Honestly, I'm not quite sure how best to respond.
What I can share is that I’ve been developing a framework specifically for early-stage startup development, where I push my students to conceptualize startup ideas with real scaling potential, grounded in the use of emerging technologies.
The framework is broken down into distinct stages. For each phase, we work through creative divergence and convergence processes. Divergence—ideation, brainstorming, exploring alternatives—is where we really lean into GPT’s capabilities. The AI acts as a kind of super-smart, creative co-founder who supports students in expanding their thinking.
Then comes convergence—selecting, refining, building, validating—which is where students take full ownership. They’re the decision-makers, the builders, the testers.
In that sense, GPT isn’t replacing uncertainty or creativity—it’s fueling it. It helps make ambiguity more productive. And that, I believe, actually deepens their learning rather than just speeding things up.
Here’s a screenshot of one of the collaborative boards we use in class (it’s in Spanish—we’re based at CETYS, a Mexican university).
https://ibb.co/pr563g54
I made a custom GPT to prepare for my final exam, and I scored an “A.”
Due to my grandchildren losing so much education time during Covid, I created the “Math Mastermind” GPT. The tool has now been used over 25K times.
I have also created GPTs for chemistry and biology.
I like Arpita's journaling custom GPT idea!
But does the GPT actually store the journal entries in some database and retrieve them when asked? I thought everything is stored in the current chat's context window, which means it will eventually fill up and the user will have to start a new chat session and lose all their entries.
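As far as I understand it, no: by default everything lives in the chat's context window, exactly as you describe. A custom GPT only gets real persistence if it is wired to an external store through an Action, i.e. an HTTP endpoint described to the GPT with an OpenAPI schema. Here is a minimal sketch of what such a backend could look like, assuming a hypothetical Flask service; the in-memory list stands in for a real database, and a real deployment would need authentication:

```python
# Sketch of an external journal store a custom GPT could call via an
# Action so entries survive across chat sessions.
from flask import Flask, jsonify, request

app = Flask(__name__)
entries: list[dict] = []  # stand-in for a real database table

@app.post("/entries")
def add_entry():
    # The GPT calls this Action to save a journal entry,
    # e.g. {"date": "2024-05-01", "text": "..."}.
    entries.append(request.get_json())
    return jsonify({"status": "saved", "count": len(entries)})

@app.get("/entries")
def list_entries():
    # The GPT calls this to retrieve past entries instead of relying
    # on the context window, so nothing is lost between sessions.
    return jsonify(entries)

if __name__ == "__main__":
    app.run(port=8000)
```

You would then describe these two endpoints in the GPT's Actions schema so it knows it can call them to save and fetch entries.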
I did this for my 4 subjects this semester and shared it with my group so they can do the assignments faster and more precisely matched to the requirements. It's a game changer for students, I reckon.
I recently ran an experimental study with 105 postgraduate engineering students to explore how GPTs can support experiential learning. Students in my Engineering Risk Management course were split into three cohorts, each using a different method to analyse a real-world project for a group assignment:
Cohort 1 used Alrik 2.0, a custom GPT simulating a project sponsor, built with OpenAI's framework and trained on ALR-specific data.
Cohort 2 conducted traditional online research.
Cohort 3 consulted a real subject matter expert, the project manager.
Each of the 15 student groups developed a risk management plan using only their assigned approach. Alrik 2.0 was carefully designed to promote inquiry and critical thinking, not spoon-feed answers, drawing on the PMBOK guide and real project materials. It was the second iteration, refined from Alrik 1.0 based on tests for tone and realism.
Alrik 2.0: https://chatgpt.com/g/g-EvhML9rqu-alrik-2-0