Less frightened. More fatigued. That's where many of us reside with AI. Yet I remain in awe of it. Amid the plethora of platitudes promising that AI will reshape industry, intellect, and how we live, it's vital to approach the noise and the hope with a fresh excitement that embraces complexity, one that encourages argument and sustains a healthy dose of skepticism. Operating with a skeptical mindset is liberating and pragmatic; it challenges convention and nourishes a frequently missing sense of sanity, especially if you're restless with endless assumptions and rumor.
We seem to be caught in a "hurry up and wait" holding pattern as we monitor the realities and benefits of AI. We know there's an advertised glowing future: the global AI market is estimated to exceed $454 billion by the end of 2024, larger than the individual GDPs of 180 countries, including Finland, Portugal, and New Zealand.
Conversely, a recent study predicts that by the end of 2025, at least 30% of generative AI projects will be abandoned after the proof-of-concept stage, and another report notes that "by some estimates more than 80% of AI projects fail — twice the rate of IT projects that do not involve AI."
Blossom or boom?
While skepticism and pessimism are often conflated descriptions, they are fundamentally different in approach.
Skepticism involves inquiry, questioning claims, and a desire for evidence; it is typically constructive, carrying a critical focus. Pessimism tends to limit possibility; it includes doubt (and maybe alarm) and often anticipates a negative outcome. It may be seen as an unproductive, unappealing, and unmotivating state or behavior, although if you believe fear sells, it's not going away.
Skepticism, rooted in philosophical inquiry, involves questioning the validity of claims and seeking evidence before accepting them as truth. The Greek word “skepsis” means investigation. For modern-day skeptics, a commitment to AI inquiry serves as an ideal, truth-seeking tool for evaluating risks and benefits, ensuring that innovation is safe, effective, and, yes, responsible.
We have a sound, historical understanding of how critical inquiry has benefited society, despite some very shaky starts:
- Vaccinations faced heavy scrutiny and resistance due to safety and ethical issues, yet ongoing research led to vaccines that have saved millions of lives.
- Credit cards led to concerns about privacy, fraud, and the encouragement of irresponsible spending. The banking industry improved the experience broadly via user-driven testing, updated infrastructure, and healthy competition.
- Television was initially criticized for being a distraction and a potential cause of moral decline. Critics doubted its newsworthiness and educational value, seeing it as a luxury rather than a necessity.
- ATMs faced concerns about machines making errors and people's distrust of technology controlling their money.
- Smartphones were doubted for lacking a keyboard and for limited features and battery life, yet those doubts were alleviated by interface and network improvements, government alliances, and new forms of monetization.
Thankfully, we have evolving, modern protocols that, when used diligently (versus not at all), provide a balanced approach that neither blindly accepts nor outright rejects AI's utility. In addition to frameworks that aid upstream demand-versus-risk decision-making, we have a proven set of tools to evaluate accuracy, check for bias, and ensure ethical use.
To be less resistant, more discerning, and perhaps a hopeful and happy skeptic, consider this sampling of these less visible tools:
| Evaluation method | What it does | Examples | What it's seeking as 'truth' |
| --- | --- | --- | --- |
| Hallucination detection | Identifies factual inaccuracies in AI output | Detecting when an AI incorrectly states historical dates or scientific facts | AI-generated content that is factually accurate |
| Retrieval-augmented generation (RAG) | Combines results from trained models with additional sources to include the most relevant information | An AI assistant using current news articles to answer questions about recent events | Current, contextually relevant information from multiple inputs |
| Precision, recall, F1 scoring | Measures the accuracy and completeness of AI outputs | Evaluating a medical diagnosis AI's ability to correctly identify diseases | Balance between accuracy, completeness, and overall model performance |
| Cross-validation | Tests model performance on different subsets of data | Training a sentiment analysis model on movie reviews and testing it on product reviews | Consistent performance across different datasets, indicating reliability |
| Fairness evaluation | Checks for bias in AI decisions across different groups | Assessing loan approval rates for various ethnic groups in a financial AI | Equitable treatment, free of discriminatory patterns and perpetuated biases |
| A/B testing | Runs experiments comparing a new AI feature against an existing standard | Testing an AI chatbot against human customer service representatives | Validated improvements or changes based on compared performance metrics |
| Anomaly detection checks | Uses statistical models or machine learning algorithms to spot deviations from expected patterns | Flagging unusual financial transactions in fraud detection systems | Consistency and adherence to expected standards, rubrics, and/or protocols |
| Self-consistency checks | Ensures AI responses are internally consistent | Checking that an AI's answers to related questions don't contradict each other | Logical coherence and reliability; results are not erratic or random |
| Data augmentation | Expands training datasets with modified versions of existing data | Enhancing speech recognition models with varied accents and speech patterns | Improved model generalization and robustness |
| Prompt engineering methods | Refines prompts to get the best performance out of AI models like GPT | Structuring questions in a way that yields the most accurate responses | Optimal communication between humans and AI |
| User experience testing | Assesses how end users interact with and perceive AI systems | Testing the usability of an AI-powered virtual assistant | User satisfaction and effective human-AI interaction |
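Several of these checks reduce to simple arithmetic you can verify yourself. As a minimal sketch, here is how precision, recall, and F1 scoring work for a binary classifier; the labels below are toy data invented for illustration, not from any study cited in this article:

```python
def precision_recall_f1(y_true, y_pred):
    """Compute precision, recall, and F1 for binary labels (1 = positive)."""
    # Count true positives, false positives, and false negatives.
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)

    # Precision: of everything flagged positive, how much was right?
    precision = tp / (tp + fp) if (tp + fp) else 0.0
    # Recall: of everything actually positive, how much was found?
    recall = tp / (tp + fn) if (tp + fn) else 0.0
    # F1: harmonic mean, balancing the two.
    f1 = 2 * precision * recall / (precision + recall) if (precision + recall) else 0.0
    return precision, recall, f1

# Toy example: e.g., 1 = "disease present" in a diagnostic setting.
y_true = [1, 1, 1, 0, 0, 1, 0, 0]
y_pred = [1, 0, 1, 0, 1, 1, 0, 0]
p, r, f = precision_recall_f1(y_true, y_pred)  # each evaluates to 0.75 here
```

A model can score high on one metric and poorly on the other (flag everything positive and recall is perfect while precision collapses), which is why the table above frames the "truth" being sought as a balance rather than a single number.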
4 recommendations for staying constructive and skeptical when exploring AI solutions
- Demand transparency: Insist on clear technology explanations with referenceable users or customers. In addition to external vendors and industry/academic contacts, have the same level of expectation setting with internal teams beyond Legal and IT, such as procurement, HR, and sales.
- Encourage people-first, grassroots participation: Many top-down initiatives fail because their goals exclude the impacts on colleagues and perhaps the broader community. Ask first, as non-hierarchical teammates: What is our approach to understanding AI's impact? Resist immediately assigning a task force to list and rank the top five use cases.
- Rigorously track (and embrace?) regulation, safety, ethics, and privacy rulings: The European Union is deploying its AI Act, and states such as California are attempting to pass controversial AI regulation bills; regardless of your position, these regulations will impact your decisions. Regularly evaluate the ethical implications of these AI advancements, prioritizing human and societal impacts over scale, profit, and promotion.
- Validate performance claims: Request evidence and conduct independent testing when possible. Ask about the evaluation methods listed above. This is especially true when working with new ‘AI-first’ companies and vendors.
Skepticism is nourishing. We need methods to move beyond everyday chatter and commotion. Whether you're in malnourished doubt or discerning awe, this is not a zero-sum competition: a cynic or pessimist's gain does not produce an equivalent loss in others' optimism. I am in awe of AI. I believe it will help us win, and our rules for success are grounded in humble judgment.
In a way, albeit with provocation, skepticism is a sexy vulnerability. It’s a discerning choice that should be in every employee manual to ensure new technologies are vetted responsibly without unattractive alarm.
Marc Steven Ramos is chief learning officer at Cornerstone.
Credit: venturebeat.com