Summary
Description: Agreement with security statements - 2024 AI index.jpg
English: The data come from the 2024 Global State of Responsible AI report, and the chart is from Stanford University's 2024 AI Index. Note that, according to the report, the respondents are organizations. The 2024 AI Index states: "The survey inquired about companies' perspectives on risks associated with foundation model developments. A significant majority, 88% of organizations, either agree or strongly agree that those developing foundation models are responsible for mitigating all associated risks (Figure 3.4.6). Furthermore, 86% of respondents either agree or strongly agree that the potential threats posed by generative AI are substantial enough to warrant globally agreed-upon governance."
This work is free and may be used by anyone for any purpose. If you wish to use this content, you do not need to request permission as long as you follow any licensing requirements mentioned on this page.
The Wikimedia Foundation has received an e-mail confirming that the copyright holder has approved publication under the terms mentioned on this page. This correspondence has been reviewed by a Volunteer Response Team (VRT) member and stored in our permission archive. The correspondence is available to trusted volunteers as ticket #2024060510011799.
You are free:
to share – to copy, distribute and transmit the work
to remix – to adapt the work
Under the following conditions:
attribution – You must give appropriate credit, provide a link to the license, and indicate if changes were made. You may do so in any reasonable manner, but not in any way that suggests the licensor endorses you or your use.
share alike – If you remix, transform, or build upon the material, you must distribute your contributions under the same or compatible license as the original.
Uploaded a work by the Stanford Institute for Human-Centered Artificial Intelligence (permission obtained by email from the AI Index research manager) from https://aiindex.stanford.edu/report/#individual-chapters with UploadWizard.