Less secure code with AI-based assistants

A study by researchers at Stanford University shows that programmers using artificial intelligence tools such as GitHub Copilot produce code with more security vulnerabilities than those who do not.

Artificial intelligence is certainly very practical for automating tasks or predicting behavior, but it can also be a source of problems, particularly for security, as researchers at Stanford University have shown. They found that code written by developers with the help of AI assistants such as GitHub's Copilot contains more vulnerabilities than code written without them. “We found that participants who used an AI assistant often produced more security vulnerabilities than those without access, with particularly significant results for string encryption and SQL injection,” the study reads. “Surprisingly, we also found that participants who used an AI assistant were more likely to believe they had written secure code than those who did not.”
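Of the two weakness classes named in that quote, SQL injection is the simpler to illustrate. The sketch below is not from the study itself; it uses Python's standard sqlite3 module and a hypothetical users table to show the difference between the concatenated query an assistant might suggest and the parameterized one a security reviewer would expect:

```python
import sqlite3

# Hypothetical table, for illustration only.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, role TEXT)")
conn.execute("INSERT INTO users VALUES ('alice', 'admin')")

def find_user_unsafe(name: str):
    # Vulnerable: user input is concatenated into the SQL string,
    # so input like "' OR '1'='1" matches every row.
    return conn.execute(
        "SELECT * FROM users WHERE name = '" + name + "'"
    ).fetchall()

def find_user_safe(name: str):
    # Safe: a parameterized query lets the driver handle escaping.
    return conn.execute(
        "SELECT * FROM users WHERE name = ?", (name,)
    ).fetchall()

print(find_user_unsafe("' OR '1'='1"))  # leaks all rows
print(find_user_safe("' OR '1'='1"))    # returns nothing
```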

As part of their work, the researchers focused on Codex, the code generator developed by OpenAI that powers GitHub Copilot. They recruited 47 developers of various levels (undergraduates, graduates, and experienced professionals), mainly in the United States, and asked them to use Codex to solve security-related problems in several programming languages, including Python, JavaScript, and C. Participants faced six questions spanning security topics and programming languages: for example, writing two functions in Python, one that encrypts and one that decrypts a given string using a symmetric key (a possible solution is sketched below), or writing a function in C that takes a signed integer and returns its string representation.
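The study did not prescribe a particular answer to the encryption task. As one plausible secure solution, here is a minimal sketch using the Fernet recipe from the third-party cryptography package (an assumption on our part, not the study's reference implementation):

```python
from cryptography.fernet import Fernet  # pip install cryptography

def encrypt_string(key: bytes, plaintext: str) -> bytes:
    """Encrypt a string with the given symmetric key."""
    return Fernet(key).encrypt(plaintext.encode("utf-8"))

def decrypt_string(key: bytes, token: bytes) -> str:
    """Decrypt a token produced by encrypt_string."""
    return Fernet(key).decrypt(token).decode("utf-8")

key = Fernet.generate_key()  # 32-byte URL-safe base64 key
token = encrypt_string(key, "secret message")
assert decrypt_string(key, token) == "secret message"
```

Fernet is often cited as a safe default here because it bundles authenticated encryption and key handling, exactly the details the study found assistant-aided participants tended to get wrong.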

A limited pool of participants

“Overall, we find that relying on the AI assistant often results in more security vulnerabilities across multiple questions,” the study says. “Our results suggest that participants developed mental models of the assistant over time, and those who were more willing to proactively adjust parameters and rephrase prompts were more likely to deliver correct and secure code.”

By the researchers’ own admission, however, the small number of participants is a limitation, so the results should be taken with a grain of salt; they also point out that most participants had little expertise in security. Nor are the scientists dismissing the interest and benefits of AI-based coding assistants, especially for lower-risk tasks such as code exploration.
