But the survey also found that among Go developers using AI-powered tools, "their satisfaction with these tools is middling due, in part, to quality concerns."
Our survey suggests bifurcated adoption: while a majority of respondents (53%) said they use such tools daily, there is also a large group (29%) who do not use them at all, or used them only a few times during the past month. We expected AI tool use to correlate negatively with age or development experience, but were unable to find strong evidence for this except among very new developers: respondents with less than one year of professional development experience (not specific to Go) did report more AI use than every other cohort, but this group represented only 2% of survey respondents. At this time, agentic use of AI-powered tools appears nascent among Go developers, with only 17% of respondents saying this is their primary way of using such tools, though a larger group (40%) is occasionally trying agentic modes of operation...
We also asked about overall satisfaction with AI-powered development tools. A majority (55%) reported being satisfied, but this was heavily weighted towards the "Somewhat satisfied" category (42%) vs. the "Very satisfied" group (13%)... [D]eveloper sentiment towards them remains much softer than towards more established tooling (among Go developers, at least). What is driving this lower rate of satisfaction? In a word: quality. We asked respondents to tell us something good they've accomplished with these tools, as well as something that didn't work out well. A majority (53%) said their primary problem with AI developer tools was that they produce non-functional code, with 30% lamenting that even working code was of poor quality.
The most frequently cited benefits, conversely, were unit test generation, boilerplate code, enhanced autocompletion, refactoring, and documentation generation. These appear to be cases where code quality is perceived as less critical, tipping the balance in favor of letting AI take the first pass at a task. That said, respondents also told us the AI-generated code in these successful cases still required careful review (and often, corrections), as it can be buggy, insecure, or lacking context... [One developer said reviewing AI-generated code was so mentally taxing that it "kills the productivity potential".]
Of all the tasks we asked about, "Writing code" was the most bifurcated: 66% of respondents already use AI for this or hope to do so soon, while a quarter of respondents did not want AI involved at all. Open-ended responses suggest developers primarily use AI for toilsome, repetitive code, and continue to have concerns about the quality of AI-generated code.
Most respondents also said they "are not currently building AI-powered features into the Go software they work on (78%)," the surveyors report, "with 2/3 reporting that their software does not use AI functionality at all (66%)."
This appears to be a decrease in production-related AI usage year-over-year; in 2024, 59% of respondents were not involved in AI feature work, while 39% indicated some level of involvement. That marks a shift of 14 points away from building AI-powered systems among survey respondents, and may reflect some natural pullback from the early hype around AI-powered applications: it's plausible that lots of folks tried to see what they could do with this technology during its initial rollout, with some proportion deciding against further exploration (at least at this time).
Among respondents who are building AI- or LLM-powered functionality, the most common use case was to create summaries of existing content (45%). Overall, however, there was little difference between most uses, with between 28% and 33% of respondents adding AI functionality to support classification, generation, solution identification, chatbots, and software development.