
Copyright © Schmied Enterprises LLC, 2025.

Misconceptions frequently surround machine learning (ML), often overshadowing established facts. This discussion aims to clarify several of these points.

A primary recommendation is to thoroughly understand the specific industry you intend to serve. Artificial intelligence excels at analyzing existing data and knowledge bases to answer questions, often generating insightful or creative interpretations based on learned patterns. However, current AI systems seldom produce entirely novel discoveries comparable to Nobel Prize-winning breakthroughs. This limitation stems partly from the unique creative capabilities inherent in human cognition and the absence of AI agents capable of independently designing and executing real-world experiments. Human ingenuity remains crucial for certain types of creative problem-solving.

A deep understanding of your industry enables you to recognize the current capabilities and constraints of the workforce and identify opportunities where AI can augment their performance. Furthermore, this knowledge facilitates effective communication regarding the potential benefits of AI implementation with key decision-makers.

Possessing relevant industry knowledge significantly aids in assessing the potential impact of an automation project. When clients articulate their requirements, this expertise allows you to accurately understand their needs and evaluate the feasibility and value of potential AI solutions.

Key considerations involve evaluating whether the proposed AI solution can demonstrably increase revenue. Will it enhance the experience for existing customers or attract new ones? It is also crucial to identify whether other significant bottlenecks - beyond the scope of the potential AI implementation - might impede business growth. For instance, clients' own limitations in time or funding could restrict expansion potential regardless of AI adoption.

Another critical assessment is cost reduction potential. Can the AI initiative lower operational costs, specifically impacting variable or fixed costs as reflected in the contribution margin analysis? While job elimination is a possible outcome, strategic foresight is necessary. Widespread layoffs across an industry could potentially dampen overall market demand, negatively affecting revenues even for companies reducing headcount. Adopting a growth mindset is often more beneficial; AI is frequently applied to automate existing processes, thereby simplifying employees' tasks, increasing efficiency, reducing errors, and improving operational smoothness.
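
To make the contribution margin point concrete, the following minimal Python sketch uses purely hypothetical figures; the function and numbers are illustrative only and not drawn from any real engagement.

```python
# A minimal sketch of how an AI project's cost impact shows up in a
# contribution margin analysis: contribution margin = revenue - variable costs.
def contribution_margin(revenue: float, variable_costs: float) -> float:
    return revenue - variable_costs

revenue = 1_000_000.0             # hypothetical annual revenue
variable_costs_before = 600_000.0 # hypothetical variable costs before automation
variable_costs_after = 520_000.0  # hypothetical: AI automation trims variable costs

before = contribution_margin(revenue, variable_costs_before)
after = contribution_margin(revenue, variable_costs_after)
print(before, after, after - before)  # 400000.0 480000.0 80000.0

# Fixed costs (including the AI system itself) are then subtracted from the
# contribution margin to check whether operating profit actually improves.
```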

Job elimination may not be the most common outcome. Employees often possess valuable domain expertise and an understanding of the business context (an "owner mindset"). When AI handles routine tasks, motivated employees can focus on ensuring the AI systems function correctly and contribute effectively. Furthermore, the time saved can be reinvested into strategic activities that drive future company growth.

In cases where AI does lead to workforce reduction, the impact often falls on roles associated with variable costs. For example, streamlining manufacturing processes might reduce the need for certain production roles. However, this can create opportunities to redeploy employees towards expanding production capacity, minimizing waste, exploring new upstream supply chain options, or contributing to the development and testing of new product lines.

Change initiatives framed positively and with a forward-looking perspective are generally better received, simplifying the implementation process. Clear, rational explanations for such projects tend to gain acceptance more effectively than purely promotional campaigns.

Identifying a business opportunity necessitates a learning phase. A common myth holds that AI requires advanced university-level mathematics; in practice, application relies more on understanding concepts than on deep theoretical expertise, although foundational mathematical principles are involved. Machine learning frequently relies on Graphics Processing Units (GPUs) because they excel at the parallel matrix calculations (a core part of linear algebra) underlying many algorithms, performing them much faster than the CPUs found in typical PCs or Macs. The field evolves rapidly, in both research and implementation practice. For instance, the rise of Large Language Models (LLMs) points to a shift towards architectures incorporating graph theory and natural language processing, complementing earlier methods that relied heavily on the numerical optimization found in fields such as operations research.
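
As a rough illustration of why GPUs matter, the sketch below shows the kind of matrix multiplication that dominates many ML workloads. NumPy runs it on the CPU; GPU frameworks such as PyTorch expose the same operation but execute the independent multiply-adds in parallel. The array names and sizes are illustrative.

```python
import numpy as np

# Two moderately large matrices; multiplying arrays like these is the core
# linear-algebra workload of many machine learning algorithms.
a = np.random.rand(1024, 1024)
b = np.random.rand(1024, 1024)

# On a CPU, NumPy computes this with a handful of cores.
c = np.matmul(a, b)

# GPU frameworks (e.g., PyTorch) offer the same operation, but run the many
# independent multiply-adds in parallel across thousands of GPU cores:
#   c = torch.matmul(a_gpu, b_gpu)   # a_gpu, b_gpu reside in GPU memory
print(c.shape)  # (1024, 1024)
```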

Familiarity with current terminology is essential for effective communication and credibility within the AI field. Technological fields often develop specific vocabularies; using outdated or imprecise terms can hinder collaboration and affect perceptions during evaluations, such as tender processes. For example, understanding the context where 'tensor' is preferred over 'matrix' (as tensors are a generalization often used in deep learning frameworks) demonstrates currency with the field's language.
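
A minimal sketch of that distinction: a matrix is simply a rank-2 tensor, while deep learning frameworks routinely work with higher-rank tensors. The shapes below are illustrative.

```python
import numpy as np

# A matrix is a rank-2 tensor: two axes (rows, columns).
weights = np.zeros((128, 64))

# Deep learning frameworks generalize this to higher ranks, e.g. a batch of
# RGB images laid out as (batch, height, width, channels) -> rank 4.
images = np.zeros((32, 224, 224, 3))

print(weights.ndim, images.ndim)  # 2 4
```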

Key terms frequently encountered in AI consultancy include: augmenting, artificial intelligence (AI), convolutional transformations, corpus, deep learning (DL), fine-tuning, generative artificial intelligence, grounding, Graphics Processing Units (GPUs), hyperparameters, hidden layers, inference, labeling, large language models (LLMs), machine learning (ML), overfitting, Retrieval-Augmented Generation (RAG), supervised learning, tensors, training, training data, underfitting, and unsupervised learning.

It is crucial to recognize that distinctions between related technical terms, such as 'training' versus 'fine-tuning' a model, can represent vastly different scopes of work and correspondingly large variations in project costs, potentially ranging from thousands to millions of dollars.
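
The following PyTorch sketch illustrates the scope difference under simplified assumptions: fine-tuning typically updates only a small added head while the pretrained weights stay frozen, whereas training from scratch updates every parameter on a far larger dataset. The `backbone` module here is a hypothetical stand-in for a real pretrained model.

```python
from torch import nn

# Hypothetical stand-in for a large pretrained backbone.
backbone = nn.Sequential(nn.Linear(512, 512), nn.ReLU(), nn.Linear(512, 512))
head = nn.Linear(512, 10)  # small task-specific layer added for fine-tuning

# Fine-tuning: freeze the pretrained backbone; only the head gets updated.
for param in backbone.parameters():
    param.requires_grad = False

model = nn.Sequential(backbone, head)
trainable = sum(p.numel() for p in model.parameters() if p.requires_grad)
total = sum(p.numel() for p in model.parameters())
print(f"trainable: {trainable} of {total} parameters")

# Training from scratch would update all `total` parameters, typically on a
# much larger dataset; that difference in scope drives the cost difference.
```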

Furthermore, measuring the impact of AI projects requires objective, data-driven methods. Subjective assessments or surveys, while potentially suitable for general interest articles, often lack the rigor expected by C-level executives and financing bodies. Securing investment or project approval typically necessitates thorough risk assessment and analysis that yields actionable, verifiable data.

A recommended approach for implementation is often a phased rollout: deploying the AI system incrementally across the organization, perhaps targeting user groups sequentially (akin to peeling layers of an onion). This allows the productivity impact on each user group and on individual performers to be measured at every stage. Demonstrating value in this way often secures further investment more effectively than proposing a large, single-budget expenditure upfront.
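
One way to keep such measurements objective is a simple statistical comparison between a pilot group and a comparable control group. The sketch below uses a standard two-sample t-test; the productivity metric and figures are hypothetical and purely illustrative.

```python
from scipy import stats

# Hypothetical productivity metric (e.g., tickets resolved per week) for a
# pilot group using the AI tool and a comparable control group without it.
pilot = [34, 41, 38, 45, 39, 42, 37, 44]     # hypothetical figures
control = [31, 33, 29, 35, 32, 30, 34, 28]   # hypothetical figures

t_stat, p_value = stats.ttest_ind(pilot, control)
print(f"t = {t_stat:.2f}, p = {p_value:.4f}")

# A small p-value lets the uplift be presented as verifiable evidence for the
# next rollout phase, rather than relying on subjective surveys.
```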

Consider the potential for saturation effects. As AI systems become widespread, their interactions can lead to complex outcomes. Moreover, unlike controlled experiments in some physical sciences (where methodologies like Randomized Controlled Trials originated), AI systems deployed in human environments face adaptive behavior. Both humans and potentially other intelligent systems can learn the model's behavior, consciously or unconsciously mitigating or exaggerating its effects (reflexivity). This introduces risks analogous to psychosomatic or placebo effects in medicine and can lead to scenarios where users 'game the system'. Such adaptive dynamics could potentially result in unintended consequences or even make the overall project outcome suboptimal for the organization.

Transparently disclosing these potential risks and complexities to client companies is a crucial aspect of responsible consultancy, similar to how potential side effects of medications are disclosed.

A potential action plan for establishing an AI consultancy could encompass the following stages:

1. Develop deep expertise in a specific industry or select an industry where you already possess significant knowledge.

2. Invest in continuous learning through relevant literature and reputable training programs. Foundational knowledge can be gained from books, while specific certifications or vendor training can enhance credibility.

3. Master the relevant AI terminology and begin initial client outreach or market testing.

4. Initially, target individuals or smaller entities who may have immediate, specific needs (e.g., custom chatbot development, AI coaching, model fine-tuning), as adoption might start at a personal or small-scale level.

5. Progress to engaging with larger companies. Businesses often work with multiple vendors for reasons including cost comparison, risk mitigation (avoiding single-vendor dependency), and ensuring operational backups. Frame offerings accordingly.

6. Build a strong brand reputation, which may eventually allow for diversification into areas like offering specialized training or certifications.

7. Consider the role of emerging technologies like robotics. While potentially expensive and subject to supply chain constraints (e.g., advanced motors influenced by geopolitical factors), demonstrations involving robotics can serve as effective marketing tools, generating visibility.

The rationale for establishing new AI consultancies stems from persistent unmet needs and inefficiencies across various sectors. Opportunities exist wherever complex problems remain unsolved or processes are suboptimal - whether it involves easing the burden on professionals like accountants or software engineers facing demanding tasks, or exploring how AI might assist in addressing societal challenges like access to support services. Identifying and addressing these gaps effectively represents the core opportunity for a new AI venture.