The Age of AI is a monthly, nationally representative survey series measuring AI adoption and usage at work, school, and home. This is the second and latest edition in the series.
• 1,500 nationally representative Americans
• AI in the workplace, classroom, and society
• All data from Verasight's verified panel
Learn more about how AI is changing people’s lives.
This section explores how widely AI is being used in daily life.
63% of U.S. adults say they used an AI tool in the last month, and a quarter now use one every day. Writing help, recommendations, and organization are the most common uses.
This section looks at how AI is changing the way people find information online.
4 in 5 adults say they’ve seen AI-generated summaries in their search results, and younger adults are three times more likely than older Americans to skip search entirely and go straight to chatbots.
This section explores where the public stands on oversight.
Most Americans support requiring labels on AI content, safety tests for new models, and even a pause on advanced development. But only 28% are confident the government can actually keep up.
Verasight released its inaugural report on public opinion toward artificial intelligence (AI) in July. Overall, we found a public that is quickly incorporating AI tools into all areas of work, school, and personal life — and which is both optimistic and anxious about our AI future.
In this report, we update our findings with fresh survey data and address new areas of impact. While our first report focused on school life, this report adds questions on government regulation and on how AI is changing the way people use the internet. We also update our AI use typology, finding a slight increase in both frequent use of all tools and general use of niche tools.
Overall, 63% of respondents say they have used AI tools (such as ChatGPT, Google Gemini, and Microsoft Copilot) at least once in the last month. Most of those users are not regular users, though a significant proportion report interacting with AI once (10%) or several times (15%) per day. In total, a quarter of U.S. adults say they use AI tools on a daily basis.
AI usage has increased slightly since our last survey in July 2025 (61% monthly and 23% daily use at that time), though the changes are within the surveys' margins of error.
Figure 1
The small increase in usage is driven by an increase in the share of the population interacting with AI on a more-than-daily basis, and a decrease in the share of the population who say they "never" use tools. These changes are within the margin of error of our last survey.
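As a rough check (our own back-of-the-envelope arithmetic, assuming the two waves are independent samples with the ±3.1% margin of error reported in the methodology section), the margin of error on a change between the two surveys is

\[ \text{MOE}_{\Delta} \approx \sqrt{0.031^2 + 0.031^2} \approx 0.044, \]

so a two-point shift in monthly use (61% to 63%) sits comfortably inside roughly ±4.4 percentage points.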
The most common use cases for AI tools include writing assistance (used by 45% of AI users), getting recommendations (40%), personal organization and fitness tips (32% each), coding assistance (28%), and entertainment (e.g., storytelling, games, music: 28%). Among workers, a third say they use AI chatbots and a third use AI integrations with productivity software.
Leading AI models have achieved very high brand awareness over the last few years, with OpenAI's ChatGPT leading the pack at 86%. A majority of Americans have also heard of the models from other select large tech companies, including Google's Gemini (80%) and Microsoft's Copilot (61%).
Lesser-used assistants and tools, such as the AI-assisted search engine from Perplexity and Meta's Llama, still meaningfully lag the competition. And while many Americans have heard of tools including Grok, Claude, and DeepSeek, few use them.
Figure 2
Most respondents who report using AI tools in the past month use one of the big three models: ChatGPT (43%), Gemini (37%), or Copilot (18%). The chat interfaces of the first two dominate their usage, while Copilot is more popular for its integration with Microsoft's existing suite of office productivity applications (especially its email software, Outlook).
Figure 3
Models from lesser-known competitors all fail to cross 10% adoption, with xAI's Grok model (integrated into the Twitter/X application) topping the pack at 6%. Three percent of the public each report using Claude and Perplexity, specialized tools for coding and search, respectively.
Just 1% of Americans report using Meta's AI model, Llama, in the last 30 days.
The concentration of usage among the top three platforms reflects the importance of marketing and existing user bases. ChatGPT's success as a standalone service contrasts with Copilot's integration strategy and Gemini's leverage of Google Search to drive awareness, but each approach has successfully converted users.
In contrast, smaller players face significant challenges in user acquisition. While roughly one-in-five Americans have heard of Claude, only one-in-thirty report using it regularly. This suggests that existing default service providers enjoy a sort of "incumbency advantage," and that platform integrations play a crucial role in determining adoption.
The 37% of Americans who did not use AI tools in the past month report a variety of reasons for abstention. The most common, cited by around 20% of all Americans, is that they see no need for the assistants. Abstainers report the following reasons:
The prominence of "no need" as the primary barrier suggests that many Americans remain unaware of practical applications for AI in their daily lives. This represents both a challenge and an opportunity for AI companies: better education and demonstration of use cases could significantly expand the user base. AI companies can most easily increase usage by explaining how their tools can be useful to people and by showing them how to use the models more generally.
Consumers' relatively low cost sensitivity (only 5% cite expense as a barrier) suggests that pricing is less of an obstacle than perceived value and trust. Though some AI models have expensive subscription tiers (access to Pro models from OpenAI runs users $200 per month), respondents say they spend just $4.15 per month on AI tools, on average.
We have sorted Americans into one of five buckets based on how frequently they use AI, which tools they use, and (if they don't use AI) their reasons for not doing so.
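For readers who want to see the mechanics, the sketch below illustrates how a rule-based classification of this kind can be implemented. The bucket labels (other than "Early Adopter," which appears in this report), thresholds, and field names are placeholders we have invented for illustration, not Verasight's actual definitions.

```python
# Illustrative only: labels, thresholds, and field names are placeholders,
# not Verasight's actual typology definitions.
from dataclasses import dataclass, field
from typing import Optional, Set

@dataclass
class Respondent:
    uses_ai: bool                                   # used any AI tool in the past month
    daily_user: bool = False                        # uses AI at least once per day
    tools_used: Set[str] = field(default_factory=set)  # e.g. {"ChatGPT", "Claude"}
    non_use_reason: Optional[str] = None            # e.g. "no need", "distrust"

NICHE_TOOLS = {"Claude", "Perplexity", "Grok", "Llama"}  # placeholder list

def classify(r: Respondent) -> str:
    """Assign a respondent to one of five placeholder buckets."""
    if not r.uses_ai:
        # Split non-users by their stated reason for abstaining.
        return "Uninterested" if r.non_use_reason == "no need" else "Holdout"
    if r.daily_user and r.tools_used & NICHE_TOOLS:
        return "Early Adopter"          # frequent use, including niche tools
    if r.daily_user:
        return "Daily User"             # frequent use of mainstream tools only
    return "Casual User"                # monthly but not daily use
```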
The breakdown for each group across both our surveys is visualized below:
Figure 4
Relative to our July report, there has been a modest increase in the share of adults Verasight classifies as Early Adopters, but all month-to-month changes are within our surveys' margins of error.
In July, the Pew Research Center reported results from a study of 900 users' browsing activity (data gathered in spring 2025) showing that the growing use of AI summaries in search results, such as those in Google Search, has decreased the amount of web traffic these services send to websites.
According to Pew, only about one in one hundred users who saw an AI-generated summary of results clicked through to the website where the information originated. Referral traffic from searches where an AI summary appeared was roughly 50% lower than from searches without one.
The ubiquity of AI summaries in search results represents a fundamental shift in how Americans consume online information. This change has significant implications for content creators, publishers, and the broader digital economy, as it potentially reduces website traffic and advertising revenue while concentrating information consumption within search platforms.
Seeking updated data, we asked Verasight's panel of Americans to recall how often they saw AI-generated summaries in their own search results. Roughly 4-in-5 adults said they recalled seeing AI summaries in their results, speaking to the increasing deployment of these models. Respondents reported the following appearance of AI summaries in their search results over the last week:
Some users are skipping search entirely and going straight to AI chatbots for answers. We asked respondents to tell us how often they use each of the following tools "when [they] are looking for information online":
Figure 5
Here, age breakdowns stand out. While just 5% of U.S. seniors say they "always" use AI chatbots such as ChatGPT for their information-gathering tasks, three times as many Americans under 30 (15%) say the same. In total, 63% of young people report using AI tools to answer their search queries at least some of the time.
This generational divide in information-seeking behavior suggests a long-term transformation in how Americans access knowledge. Younger users, having grown up with more interactive technologies, appear more comfortable with conversational AI interfaces compared to traditional keyword-based search. This trend could accelerate as AI tools become more accurate and integrated into daily workflows.
However, respondents report issues trusting the output of these AI models used for search. When asked to rate how accurate they think chatbots' answers are on a scale from 0 to 10, the average respondent gave chatbots a score of just 6.35.
Relatedly, about a third of the public says they are not very or not at all confident that they can distinguish between human-generated and AI-generated content:
Figure 6
Despite these trust issues, the public still finds AI summaries useful. When asked whether they would turn Google's AI summaries feature off (if Google gave them the power to do so), a majority of respondents said they would keep the summaries on for most or all of their searches:
This apparent contradiction — users find AI summaries useful despite trust concerns — highlights the pragmatic approach many Americans take toward AI tools. The convenience and time-saving benefits often outweigh accuracy concerns for routine information needs, and users likely overestimate their ability to identify AI-generated content or factual inaccuracies in reports (while retaining healthy skepticism for high-stakes decisions).
Because of AI systems' potential to spread misinformation and to cannibalize existing businesses and jobs, the public supports several regulations on how these systems are used and developed.
Respondents expressed strong support (more than two-thirds support) for each of the following regulations:
Figure 7
Yet despite support for these regulations, most Americans are not confident in the government's ability to guide the development of AI systems. We asked people, "How confident are you that government regulators can effectively enforce AI rules and keep pace with new technologies?" They gave the following responses:
Americans want regulation but doubt their government's capability to implement it effectively. This skepticism may stem from perceived regulatory failures in previous technology waves, such as social media and data privacy, combined with the rapid pace of AI development that outstrips traditional regulatory timelines.
In our first AI survey, we reported that:
Overall, Americans are navigating AI with experimentation, and caution. The technology’s social contract is actively under negotiation, with policy, education, and workforce development lagging behind consumer curiosity and corporate hype. This survey underscores the need for clear frameworks from governments, businesses, and parents on acceptable uses of AI. The most common feeling about AI is not optimism (40 percent) or excitement (33 percent), but anxiety (53 percent).
Our latest survey reaffirms this level of hesitance in the public. The most common feeling about AI is still anxiety, with 50% of U.S. adults agreeing with the statement "I feel anxious about the rise of AI" and 24% disagreeing.
In comparison, 13% of respondents said they "strongly" and 27% "somewhat" agree (40% total) with the statement "I’m excited about the possibilities AI brings to my life." A similar 39% report feeling confident they can "keep up with the changes driven by AI," while three in ten (30%) adults say they are not confident in their ability to do so.
Figure 8
The persistence of anxiety as the dominant emotion around AI reflects deep-seated concerns about societal disruption, job displacement, and loss of human agency. However, this anxiety coexists with growing excitement and practical adoption, suggesting that Americans are compartmentalizing their broader concerns while embracing specific beneficial applications.
Demographic differences in sentiment are pronounced: younger adults (ages 18-29) are twice as likely to express excitement about AI in their work (40%) as those over 65 (20%). Similarly, college graduates show moderately higher confidence in keeping pace with AI changes (44%) than those with a high school education or less (35%). These gaps suggest that AI's impact may exacerbate existing inequalities unless addressed through targeted education and support programs.
This report reveals a nuanced picture of American attitudes toward artificial intelligence in 2025. While adoption continues to grow steadily, with nearly two-thirds of adults using AI tools monthly, the technology landscape remains dominated by a few major players, with ChatGPT, Gemini, and Copilot capturing the vast majority of users.
Intensity of usage is accelerating, and so is breadth of disruption. The internet, in particular, may be fundamentally altered by a change in how users search for information online.
The scale of these impacts to society is cause for both excitement and anxiety among our respondents. The data suggest AI is becoming embedded in Americans' lives despite ongoing trust and societal concerns. The technology's social contract remains under negotiation, with users pragmatically adopting beneficial applications while maintaining skepticism about broader implications. Our research also shows Americans do not want AI applications replacing creative pursuits. Success for AI companies and policymakers will depend on demonstrating value at work and in digital spaces, building trust with users, and creating governance frameworks that keep pace with technological development.
Given the pace of AI development, and given that the primary reason for non-adoption is a perceived lack of need (cited by 53% of non-users), it is likely we will see wider adoption in the near future. We are at just the dawn of the integration of AI into our work, school, and life.
Explore our full topline and crosstabs available here. Access the July edition here.
Verasight collected data for this survey from July 30 to August 4, 2025. The sample consists of 1,509 United States adults.
The sampling criteria for this survey were:
The selection criteria for the final sample were:
The data are weighted to match the June 2025 Current Population Survey on age, race/ethnicity, sex, income, education, region, and metropolitan status. Verasight also weighted the survey to match the social media use distributions in the Pew Research Center's NPORS benchmarking survey, as well as a running three-year average of NPORS partisanship distributions and population benchmarks of 2024 vote. The margin of sampling error, which accounts for the design effect and is calculated using the classical random sampling formula, is +/- 3.1%.
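A hedged reconstruction of that margin-of-error figure (our own arithmetic, using the standard 95% confidence formula at p = 0.5; the design effect value is implied by the reported margin rather than stated):

\[ \text{MOE} = 1.96\,\sqrt{\mathit{deff}}\,\sqrt{\frac{p(1-p)}{n}} = 1.96\,\sqrt{\mathit{deff}}\,\sqrt{\frac{0.25}{1509}} \approx 0.025\,\sqrt{\mathit{deff}}, \]

which gives about ±2.5% with no design effect (deff = 1); matching the reported ±3.1% implies a design effect of roughly (0.031 / 0.025)² ≈ 1.5.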
All respondents were recruited from the Verasight Community, which is composed of individuals recruited via random address-based sampling, random person-to-person text messaging, and dynamic online targeting. All Verasight community members are verified via multi-step authentication, including providing an SMS response from a mobile phone registered with a major U.S. carrier (e.g., no VOIP or internet phones) as well as within-survey technology, including verifying the absence of non-human responses with technologies such as Google reCAPTCHA v3. Those who exhibit low-quality response behaviors over time, such as straight-lining or speeding, are also removed and prohibited from further participation in the community. Verasight Community members receive points for taking surveys that can be redeemed for Venmo or PayPal payments, gift cards, or charitable donations. Respondents are never routed from one survey to another and receive compensation for every invited survey, so there is never an incentive to respond strategically to survey qualification screener questions.
To further ensure data quality, the Verasight data team implements a number of post-data collection quality assurance procedures, including confirming that all responses correspond with U.S. IP addresses, confirming no duplicate respondents, verifying the absence of non-human responses, and removing any respondents who failed in-survey attention and/or straight-lining checks. The Verasight data team also reviewed open-ended items to ensure no responses contained nonsensical, inappropriate, or non-sequitur text. Respondents that completed the survey in less than 30% of the median completion time were removed.
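As an illustration of the timing check described above (a minimal sketch with invented column names, not Verasight's actual pipeline), removing respondents who finished in under 30% of the median completion time could look like this:

```python
import pandas as pd

def drop_speeders(df: pd.DataFrame, time_col: str = "completion_seconds") -> pd.DataFrame:
    """Remove respondents who completed the survey in less than 30% of the
    median completion time. Column names here are illustrative placeholders."""
    threshold = 0.30 * df[time_col].median()
    return df[df[time_col] >= threshold].copy()

# Toy example; the duplicate check mirrors the "no duplicate respondents" step,
# keyed on an assumed respondent_id column.
responses = pd.DataFrame({
    "respondent_id": [1, 2, 2, 3, 4],
    "completion_seconds": [540, 610, 610, 95, 720],
})
clean = drop_speeders(responses.drop_duplicates(subset="respondent_id"))
```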
Unmeasured error in this or any other survey may exist. Verasight is a member of the American Association for Public Opinion Research Transparency Initiative.
Founded by academic researchers, Verasight enables leading institutions to survey any audience of interest (e.g., engineers, doctors, policy influencers). From academic researchers and media organizations to Fortune 500 companies, Verasight is helping our clients stay ahead of trends in their industry. Learn more about how Verasight can support your research. Contact us at contact@verasight.io.