**Type**: `$= const tags = dv.current().file.tags || []; tags.includes("#idea") ? "Idea" : tags.includes("#insight") ? "Insight" : "Unclassified"`
**Growth Stage**: `= this.stage`
**Last Tended**: `= dateformat(this.file.mtime, "yyyy-MM-dd")`
**Topics**: #personaldevelopment #selfexperimentation #AI
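As an aside, the inline Dataview query in the **Type** field boils down to a simple tag check - here it is as a plain JavaScript sketch (the `classify` name is illustrative, not part of the Dataview API):

```javascript
// Classify a note by its tags, mirroring the inline Dataview query above.
// `classify` is an illustrative name, not a Dataview API.
function classify(tags) {
  if (tags.includes("#idea")) return "Idea";
  if (tags.includes("#insight")) return "Insight";
  return "Unclassified";
}
```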
---
> [!note] Director's commentary
> #insight Prompted by colleagues to share my own personal approach to learning, practicing and keeping up to date with the latest trends and advancements in AI - here are my sources and process:
# The Science of Learning
It can feel overwhelming keeping up with a trend - even more daunting when you are playing catch-up - and this AI revolution is not slowing down. So where do you start? A nascent, emerging technology in a dynamic environment such as this isn't exactly rich with reliable courses and solid foundations of proven 'how-tos.' Instead, the last technological revolution gifted us social platforms for content creation and conversation, where large tribes of early adopters and the organisations they follow can broadcast and proclaim the way forward on a minute-by-minute basis.
The lessons are out there - but I have found that acquiring this knowledge calls for a more scientific mindset, rather than an academic one.
An academic approach would be to read the established literature and case studies, learn the processes and patterns, and apply the instructions practically to retain the knowledge for repeated application. This is all well and good, but it is simply not sufficient to keep up with the pace of change - by the time you have grounded a set of skills based on your initial learning, the technology and its implications have moved on. While you can layer upon this with the academic approach, it's going to feel like always being in the chasing pack in a marathon. I would encourage this mode as complementary, but secondary, to taking a more agile path - an approach that puts you in the leading pack, with a chance to influence the outcome.
So how is a 'scientific approach' any different? Let's start with **the Scientific Method** - "a systematic approach to investigating natural phenomena, testing ideas, and building knowledge." Importantly, it's a process that is observation-led and ensures that knowledge is self-correcting. The advantages of this mindset are two-fold: i) the need to investigate a phenomenon, experiment and progressively acquire learning is going to better suit the environment we find ourselves in, and ii) (here's the exciting bit!) you don't need to be academically inclined to break into that leading pack.
There are some steps to follow, but even these are flexible - with an experimental inclination, you aren't forced to learn before you do... sometimes you can get right into the thing before making your hypothesis, then circle back to find the knowledge that supports your observations. But if you were to put these into a sort of process, it would be:
1. Observation
2. Question
3. Hypothesis
4. Experiment
5. Analyse (your findings)
6. Draw conclusions
7. Get feedback & repeat
I'll offer an even simpler model: **Experiment <-> Question/Research & Publish**.
It doesn't really matter where you start, but inspiration and motivation play an important role: so to make taking action easier, go with where you feel the least resistance - that could be diving into using an AI/LLM for a variety of tasks, or it could be launching into research after questioning something you read online. I do both, all the time, at the same time. The flywheel of trial, error, correction, questioning and learning rapidly spits out conclusions that can be validated in the moment and applied - for as long as the constraints and limits of the technology remain true. When that changes, run the experiments again; there is probably a new conclusion.
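Purely as an illustration, that flywheel could be sketched as a loop - every name below is a hypothetical placeholder, not a real tool or API:

```javascript
// An illustrative sketch of the Experiment <-> Question/Research & Publish flywheel.
// All functions and property names here are hypothetical placeholders.
function learningFlywheel(hypothesis, runExperiment, maxCycles = 5) {
  const log = [];
  for (let cycle = 1; cycle <= maxCycles; cycle++) {
    const observation = runExperiment(hypothesis); // Experiment
    log.push({ cycle, hypothesis, observation });  // Analyse & document findings
    if (observation.holds) {
      return { conclusion: hypothesis, log };      // Publish a validated conclusion
    }
    hypothesis = observation.revisedHypothesis;    // Question, learn, revise
  }
  return { conclusion: hypothesis, log };          // Constraints changed? Run it again.
}
```

The point of the sketch is the shape: the loop only stops on a conclusion that held under experiment, and a changed environment simply means running it again.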
So what are my personal sources of reading and research? And what tools am I using to conduct these experiments?
# My Sources
I'll group these into two: Individuals and Collectives. Collectives might be platforms, channels or publications, not excluding the individuals I list, and are likely where I came across them. Both groups are subjective choices, personal preferences, valid primarily as "I like reading or listening to what they have to say, and the way they say it" - they are also valid as professionals in their own fields. But here is where the '**Question**' part of the approach is critical: it is ironic that while AI may pose a threat to our long-term ability to think critically, learning in this way insists upon critical thinking (and it is at this point I might lose those who would argue it is then easier to take the academic route). It's always important to question and examine your sources - and so you should mine, too. They are a personal and somewhat narrow view, simply because my primary job isn't to spend every hour gathering from all the possible sources out there; but then science isn't done that way either - it's based on a big enough sample, critically analysed, tested against and peer reviewed.
## Individuals (specific to AI):
- **Kevin Roose** – author of “Futureproof: 9 Rules for Humans in the Age of AI,” technology columnist for The New York Times, and co-host of the podcast “Hard Fork” – delves into the societal impacts of artificial intelligence and automation. His work offers practical guidance on thriving in an AI-driven world, making him a valuable resource for understanding and adapting to technological advancements.
- **Ethan Mollick** – author of “Co-Intelligence: Living and Working with AI,” associate professor at the Wharton School of the University of Pennsylvania, and writer of the “One Useful Thing” newsletter – actively explores the integration of AI in education and business. His work provides practical guidance on leveraging AI for enhanced learning and productivity, making him a valuable resource for understanding and experimenting with AI applications.
- **Adam Grant** – author of “Think Again,” organisational psychologist at the Wharton School of the University of Pennsylvania, and host of the podcast “ReThinking” – explores the evolving role of AI in creativity and human interaction. In his podcast, he discusses topics such as AI’s impact on empathy and creativity, providing valuable insights into the intersection of technology and human behavior.
- **Andrew Huberman** – neuroscientist and tenured professor at Stanford University School of Medicine, and host of the “Huberman Lab” podcast – delves into topics such as brain development, neural plasticity, and the application of AI in neuroscience. His discussions often explore how AI can enhance learning and research methodologies, providing valuable insights into the intersection of technology and neuroscience.
- **Deepak Chopra** – author of “Digital Dharma: How AI Can Elevate Spiritual Intelligence and Personal Well-Being,” founder of The Chopra Foundation, and creator of DigitalDeepak.ai – explores the integration of artificial intelligence into personal growth and spirituality. His work demonstrates how AI can serve as a guide through various levels of human potential, offering innovative approaches to well-being and self-discovery.
- **Mo Gawdat** – author of “Scary Smart: The Future of Artificial Intelligence and How You Can Save Our World,” former Chief Business Officer at Google X, and host of the “Slo Mo” podcast – examines the rapid advancement of AI and its potential societal impacts. His work emphasises the importance of ethical considerations and proactive engagement with AI development, offering insights into how individuals can influence the future trajectory of artificial intelligence.
## Collectives
*Publications:*
- **New Scientist** – A weekly science and technology magazine that covers a broad range of topics, including artificial intelligence, robotics, and emerging technologies. Its articles are concise and often highlight cutting-edge research that has yet to reach mainstream attention.
- **The Long Now Foundation** – Established in 1996, this non-profit organisation promotes long-term thinking and responsibility over the next 10,000 years. By focusing on long-term projects and perspectives, it offers a unique lens to assess the societal and humanitarian implications of technologies like AI, moving beyond immediate trends and hype.
- **The Wall Street Journal** – A leading financial publication that provides in-depth coverage of economic and market reactions to the latest trends and technological developments, including those in artificial intelligence. It offers valuable insights into how AI impacts businesses, markets, and the global economy.
*Podcasts:*
- **The TED AI Show** – Hosted by creative technologist Bilawal Sidhu, this podcast features conversations with experts to explore the future of AI, discussing both its thrilling possibilities and potential challenges.
- **Futureproof** – Hosted by Jonathan McCrea, this podcast delves into the latest advancements in science and technology, with frequent discussions on AI developments and their implications for the future.
- **ReThinking** – Hosted by organisational psychologist Adam Grant, this podcast examines the evolving role of AI in creativity and human interaction, providing insights into the intersection of technology and human behaviour.
- **Huberman Lab** – Hosted by neuroscientist Andrew Huberman, this podcast discusses topics like brain development and neural plasticity, often exploring how AI can enhance learning and research methodologies.
- **Deep Questions with Cal Newport** – Hosted by computer science professor Cal Newport, this podcast addresses the impact of technology on society, including discussions on AI’s role in our lives and strategies for maintaining focus in a digital world.
- **How to Be a Better Human** – Hosted by comedian and writer Chris Duffy, this podcast explores self-improvement topics, occasionally touching on how technology, including AI, influences human behaviour and personal growth.
- **Shop Talk** – Hosted by Chris Coyier and Dave Rupert, this podcast focuses on front-end web design and development, occasionally discussing emerging technologies like AI relevant to web development.
- **High Performance Podcast** – Hosted by Jake Humphrey and Professor Damian Hughes, this podcast centres on personal development and high performance, sometimes featuring discussions on technology’s role in enhancing performance.
- **The Diary of a CEO** – Hosted by entrepreneur Steven Bartlett, this podcast features in-depth conversations with business leaders, occasionally exploring how AI is transforming industries and leadership.
- **A Bit of Optimism** – Hosted by author Simon Sinek, this podcast explores various topics related to optimism and leadership, occasionally touching upon how technology influences society and leadership.
*Organisations:*
- **OpenAI** – A leading AI research organisation known for developing advanced models like GPT-4. OpenAI emphasises the safe and ethical deployment of AI technologies and has collaborated with the U.S. AI Safety Institute to enhance AI safety research.
- **Anthropic** – An AI safety and research company dedicated to creating reliable and interpretable AI systems. Anthropic has partnered with the U.S. AI Safety Institute to advance AI safety research and testing.
- **Groq** – A technology company specialising in developing high-performance computing hardware tailored for AI and machine learning workloads. Groq focuses on delivering efficient and scalable solutions to support AI advancements.
- **United Nations (UN)** – An international organisation addressing global challenges, including the ethical and societal implications of AI. The UN has initiated discussions and advisory bodies to guide the responsible development and deployment of AI technologies.
- **U.S. Government Policy** – The U.S. government actively engages in AI policy through initiatives like the U.S. AI Safety Institute, which collaborates with AI companies to ensure the safe development and deployment of AI systems.
- **EU and UK Policy** – The European Union and the United Kingdom are developing regulatory frameworks to govern AI, focusing on safety, ethics, and innovation. The UK has established the AI Safety Institute to lead in AI safety research and policy development.
# Tools / Technologies
**Favourite Models:**
1. Claude 3.5 Sonnet - Anthropic - My preferred 'project' collaborator, for its canvas UI, its tendency to be a bit more creative, and the quality and accuracy of its code too.
2. GPT-4o - OpenAI - Still the best generalist for low-to-medium order tasks and queries, simply because it gets your intention from a simple one-shot. Also great for writing - its tone feels more 'me' with few prompts.
3. Llama 3.x - Meta - The most generally useful offline open-source model - great for poor connections and travelling.
4. o1 - OpenAI - Good to have a reasoning model on standby for the higher-order stuff - but to be honest I'm still looking for use cases where it makes a material difference.
**Favourite AI Tools:**
**Mobile:** Mirrors my model preference - ChatGPT for queries, Claude for projects or dictation/ideation.
**Research:** Perplexity gets a shoutout here - a source/citation-first approach with a journal-based feel and unique 'feeds' that let you chat directly with sources.
**Quick Tasks:** ChatGPT & 4o.
**Writing:** ChatGPT & my personal GPT, plus a shoutout for Obsidian when paired with AI co-pilot plugins to chat with notes and auto-complete writing.
**Projects:** Claude, for the canvas and the interface for storing docs and cross-referencing multiple sessions in a project space - I keep hitting usage limits though!
**Private Work/Experimentation:** MSTY (what I'd call an AI playground) offers fantastic knowledge and context capabilities, plus split-chat functionality to compare model outputs.
**Professional & Secure:** IBM's Consulting Advantage Platform (no surprise there!) - Secure for sensitive information and variety of models and agentic workflows for productivity.
# Putting it all into practice
So what does this look like in the daily mix of work and personal commitments? Although somewhat chaotic and often flexible, I have formed habits that follow a typical pattern of activity. While it's represented linearly, there are entry points at each stage, which always allow for looping back through other steps:
![[999-VAULTS/_attachments/Discovery & Experimentation Model.svg|1337]]
The final, and important, step in the process is documenting and sharing what you have learned and what is working for you. Peer review, feedback and engaging in discussion might feel uncomfortable and risky - but in a fast-moving, ever-changing environment it is harder to be outright wrong, and always possible to update your point of view. The scientific approach, responding to new data, allows you to move gracefully with the changing world.
# Observations & open questions
**Embracing the Learning Cycle**
Navigating AI’s rapid evolution requires more than just passive learning—it demands an agile, experimental mindset. By adopting a scientific approach, we stay adaptable, continuously refining our understanding through observation, experimentation, and iteration. While this method comes with challenges—potential biases, the need for critical thinking, and the ever-present risk of narrow perspectives—it also offers the flexibility to stay ahead of change rather than chasing it.
Ultimately, learning AI isn’t about having all the answers, it’s about staying curious, questioning assumptions, and being open to new insights. The more we share, test, and refine our knowledge collectively, the better prepared we are to shape AI’s future, rather than simply react to it.
**Open Questions for Reflection:**
- How do we balance rapid experimentation with the need for deeper, more structured learning?
- In what ways can we ensure that our sources and insights remain diverse, rather than reinforcing our existing biases?
- As AI continues to evolve, how do we decide when to double down on learning a specific skill versus when to pivot?
- How can we encourage more people to adopt this agile, scientific approach - especially those who may feel overwhelmed by the pace of change?
- What ethical considerations should we keep in mind as we experiment with AI, particularly in areas where its impact is still uncertain?
---
*This is a living document in my Digital Garden. It grows and evolves with my thinking, represents my personal thoughts and opinions, and is not part of my work at IBM. However, it is part of my desire to contribute to a broader conversation on how we 'get things done' - exploring the impact of tools and techniques aligned to my mission to help individuals and organisations create the settings for sustained growth.*
---
## Growth Log
- 2025-02-06: Initial seed planted
- 2025-02-06: Major revision
- 2025-02-06: Published on https://open.substack.com/pub/chrismoreton/p/ai-discovery-and-experimentation?r=52iryb&utm_campaign=post&utm_medium=web