Alaska State Legislature (2023 - 2024), BELTZ 105 (TS Bldg)
01/25/2024 03:30 PM Senate STATE AFFAIRS
Note: The audio and video recordings are distinct records obtained from different sources, so there may be key differences between the two. The audio recordings, captured by the records offices, are the official record of the meeting and have more accurate timestamps.
| Audio | Topic |
|---|---|
| | Start |
| | Presentation: Cloud Generative A.I. (Artificial Intelligence) Overview |
| | Presentation: A.I. and Democracy |
| | Adjourn |
* first hearing in first committee of referral
+ teleconferenced
= bill was previously heard/scheduled
ALASKA STATE LEGISLATURE
SENATE STATE AFFAIRS STANDING COMMITTEE
January 25, 2024
3:33 p.m.
MEMBERS PRESENT
Senator Scott Kawasaki, CHAIR
Senator Matt Claman, Vice CHAIR
Senator Jesse Bjorkman
Senator Bill Wielechowski
Senator Kelly Merrick
MEMBERS ABSENT
All members present
COMMITTEE CALENDAR
PRESENTATION: CLOUD GENERATIVE A.I. (ARTIFICIAL INTELLIGENCE)
OVERVIEW
- HEARD
PRESENTATION: A.I. AND DEMOCRACY
- HEARD
PREVIOUS COMMITTEE ACTION
No previous action to record
WITNESS REGISTER
ADDIE COOKE, A.I. Policy Lead
Google Cloud
Arlington, Virginia
POSITION STATEMENT: Presented on Cloud Generative A.I.
ILANA BELLER, Field Manager
Public Citizen
Richmond, Virginia
POSITION STATEMENT: Presented on A.I. and Democracy.
ACTION NARRATIVE
3:33:08 PM
CHAIR SCOTT KAWASAKI called the Senate State Affairs Standing
Committee meeting to order at 3:33 p.m. Present at the call to
order were Senators Bjorkman, Wielechowski, Merrick, Claman, and
Chair Kawasaki.
^Presentation: Cloud Generative A.I. (Artificial Intelligence)
Overview
PRESENTATION: CLOUD GENERATIVE A.I. (ARTIFICIAL INTELLIGENCE)
OVERVIEW
3:34:07 PM
CHAIR KAWASAKI announced the consideration of a presentation on
Cloud Generative A.I.
3:34:36 PM
CHAIR KAWASAKI introduced Ms. Cooke.
3:35:47 PM
ADDIE COOKE, A.I. Policy Lead, Google Cloud, Arlington,
Virginia, provided an overview of A.I. technology.
3:36:59 PM
MS. COOKE moved to slide 2 and stated that A.I. is a
transformational technology and Google treats it as a serious
revolutionary tool, comparable to the transformational changes
brought by steam power, electricity, smartphones, and the
internet; A.I. will likewise prove a significant shift in our
lives. Governments have the tools but need to determine how to
put them to use.
3:38:34 PM
MS. COOKE moved to slide 4 and said that A.I. is not a new
technology, but it has become more powerful. In 2017, Google put
out a seminal paper on the Transformer, the "T" in ChatGPT.
Google Cloud open-sourced its research to allow the entire world
to benefit from learnings. Google itself benefited from
Bidirectional Encoder Representations from Transformers (BERT),
a transformer that supports the Google search function and has
allowed users to have information readily at their fingertips. A
significant amount of progress has been made since.
Ms. Cooke said an open-source model, AlphaFold, was released in
2021 which looks at different proteins to develop new medicines.
She recollected an article that unveiled the discovery of
thousands of new pharmaceutical formulas that address various
challenges. These models are in the same vein as Google's search
model. The goal is to find patterns across volumes of
information that were previously unavailable.
However, serious considerations must be made when building
revolutionary technologies. In 2017, Google established
responsible A.I. principles. She said that she holds a seat on
one of the committees for responsible A.I. The committee and the
legal team have the opportunity to review every product for
regulatory concerns, alignment, and risks.
3:41:17 PM
MS. COOKE continued that the committee offers support for the
documentation on products before the products are made available
to customers.
3:41:35 PM
MS. COOKE moved to slide 6 on large language models. She said
she used old car manuals to illustrate the capability of large
language models. By entering the information from the car
manuals into a language model, an individual would be able to
ask a question on how to fix a car and receive an answer. The
model could also identify patterns across different cars. The
power of current A.I. is that coders aren't required, so an
individual only needs to be able to read and write to access the
technology.
3:43:16 PM
MS. COOKE moved to slide 7 and said that consumer-facing
products at Google have different considerations regarding risk
management than the models for enterprises and government. A
consumer-facing model is constantly retraining on the data it is
fed, which is why Google has prioritized making an enterprise
version available. It will provide more security for enterprises
and governments.
3:45:09 PM
MS. COOKE continued to slide 8 titled "Google Cloud A.I.
Portfolio." She said Google Cloud added several foundation
models as well as a developer suite of tools. The Enterprise
Search model would have been useful in bill drafting during her
time in the Texas Legislature. This technology mitigates "A.I.
hallucinations," can summarize information, and is more
efficient. The Duet suite of products includes tools that
support a variety of development.
3:48:34 PM
MS. COOKE noted that Google takes data very seriously. Google
does not retrain its models on data in a customer's Google Cloud
tenant, contrary to some competitors' structure.
3:49:44 PM
MS. COOKE moved to slide 10 on GovAI tools. As a former
legislative correspondent, she acknowledged the arduous process
of drafting legislative correspondence. New technology could help facilitate
constituent responses and assist in communication efforts. It
could quickly identify patterns and produce a complete email
using pattern identification. MIT and Stanford performed a study
observing call center agents and found that an entry level call
center agent using an A.I. powered support engine was able to
triage 20 percent more questions. A.I. technology can transform
an entry-level employee to work more efficiently and communicate
quickly, creating expertise on demand.
3:52:21 PM
MS. COOKE moved to slide 11. She said Google Search results are
at times unexpected, so it is important to ensure that the type
of A.I. used is fit for its intended purpose. This is
common-sense risk management, but Google encourages enterprises
and governments to consider intent and read the terms of the
contract as a default, ensuring employees are not retraining the
model on their data or improving its performance on an existing
task by using a different dataset.
3:54:09 PM
MS. COOKE said that Google encourages users to consider cost
controls in the decision-making process.
3:54:20 PM
MS. COOKE moved to slide 12 with examples of GovAI used today.
She said that it is important to determine when A.I. is
appropriate for addressing challenges. Examples include
installing cameras on emergency vehicles to anticipate pothole
triaging and assisting homeowners in obtaining insurance.
She said that there are opportunities to use this technology as
long as risks are managed along the way.
3:56:02 PM
MS. COOKE moved to slide 13 and spoke to five areas of risk
management outlining the following values: privacy,
choice/value, factuality, ease of use, and security at scale.
3:56:39 PM
MS. COOKE moved to slide 16 regarding Google's A.I. principles.
She said that if technology is not in line with principles,
Google takes the appropriate steps to ensure it can be further
developed before releasing it to the public. Premature products
may be held to ensure responsible A.I., such as facial
recognition technology, which required additional time prior to
release. Google Cloud offers an image recognition system but
does not perform facial recognition. It is up to technology
users to use it responsibly. It is a shared responsibility.
3:58:24 PM
MS. COOKE moved to slide 17. She highlighted item 7, relaying
that A.I. should be made available for uses that accord with the
following principles:
• Primary purpose and use
• Nature and uniqueness
• Scale
• Nature of Google's Involvement
3:59:09 PM
MS. COOKE moved to slide 18 showcasing risk management methods.
She said that Google Cloud has risks that are different from
other platforms such as YouTube, Google Search, or Pixel. While
each follows the same A.I. principles, there are disparate risk
management committees. Likewise, the Department of Labor would
have different considerations than the Department of Health and
Human Services.
4:00:24 PM
MS. COOKE moved to slide 19 and said that data needs to be
assessed to ensure it is valid for the model intended and
obtained with permission.
4:01:02 PM
MS. COOKE moved to slide 20 showing the lifecycle of the A.I.
responsibility model:
• Define problem
• Collect and prepare data
• Train model
• Evaluate the model
• Integrate and monitor
4:01:34 PM
MS. COOKE moved to slide 22 and highlighted risk management
takeaways. It is necessary to use judgement-based decisions,
consider how these technologies might cause harm, and understand
transparency requirements, especially in government when
interacting with citizens.
4:02:45 PM
MS. COOKE moved to slide 23 and said that use within government
agencies begins with an idea. A.I. can create, summarize,
discover, and automate, and is applicable through government
agencies. She offered examples of government use cases.
4:03:21 PM
MS. COOKE moved to slide 24, a graphic that lists various
considerations for assessing A.I. projects.
4:03:31 PM
MS. COOKE moved to slide 25 and presented the following list of
organizational readiness considerations.
Organizational Readiness Considerations
• Risk Assessment / Align with organizational
strategy
• Project Governance
• Persona-based training and skills development
• Architecture governance
• Policies and procedures
• Software Development Lifecycle (SDLC) Integration
• Testing (internal and external)
• Reporting (internal and external)
• Regulatory watch
• Incident response
She said that enterprises need to understand the regulatory
landscape when purchasing a new technology. Testing and
reporting are core to ensuring readiness of A.I. before it is
adopted.
4:04:02 PM
MS. COOKE moved to slide 27 and said that cross-functional
teamwork is needed in government to properly manage A.I.
4:05:02 PM
SENATOR WIELECHOWSKI stated that policymakers are wrestling with
regulations and statutes that may be needed to address A.I.
concerns. Eric Schmidt, former Google CEO, said that A.I. posed
existential risk and could cause people to be harmed or killed.
Elon Musk warned of the same thing. Geoffrey Hinton, the
"Godfather of A.I.," warned of A.I. dangers.
He asked if legislation should be passed to address these
concerns.
4:05:55 PM
MS. COOKE replied that key components lacking in A.I.
development are security benchmarks and testing. Google has
started working with the government to undergo testing to ensure
all proper controls are in place to prevent catastrophic harm.
Safety filters are considered in development phases. Currently,
child sexual abuse material (CSAM), terrorist content, and
harmful Chatbot language are filtered out. She noted that
filters can be circumvented, therefore, governments need to
adopt globally consolidated standards. Working with the
government and third parties to understand risks is important to
understanding how the technology works and how it can cause
risks.
^Presentation: A.I. and Democracy
PRESENTATION: A.I. AND DEMOCRACY
4:08:03 PM
CHAIR KAWASAKI announced the consideration of a presentation on
A.I. and Democracy.
4:09:30 PM
ILANA BELLER, Field Manager, Public Citizen, Richmond, Virginia,
presented on A.I. and Democracy. She said that she leads state
artificial intelligence work for Public Citizen. A deepfake is
fabricated or fraudulent content, such as video, audio, or
images, depicting a falsification of another individual's
actions. A video example was provided on slide 2 depicting Joe
Biden speaking about the film "We Bought a Zoo."
4:12:14 PM
MS. BELLER moved to slide 3 and suggested that the application
of deepfakes can be incredibly serious. A couple of months ago,
a major election was held in Slovakia involving a deepfake
recording of one of the candidates. Within 48 hours of the
election, the deepfake recording went viral, accusing that
candidate of wanting to rig the election, and the targeted
candidate did not have a chance to dispel the falsified
statement. It was assumed to have influenced the outcome of the
election.
4:13:31 PM
MS. BELLER moved to slide 4 and said that deepfakes released by
candidates were developed to uplift their own attributes and
attack their opponent. This has been seen in the U.S. A recent
robocall with an A.I. voice resembling President Joe Biden
targeted thousands of New Hampshire voters urging residents not
to vote. Experts say, "2024 is going to be the first true A.I.
election in the U.S." She said the two other reasons regulations
are urgently needed are:
1. Deepfake technology is becoming rapidly more accessible.
Anyone can easily create a deepfake in two minutes.
2. Deepfake technology is improving rapidly in quality.
She stated that in August of 2023, the National Institutes of
Health (NIH) conducted a study which found that 27 to 50
percent of people could not identify a deepfake. Technologists
have said that they soon may have difficulty deciphering reality
from a deepfake.
4:16:30 PM
MS. BELLER said that these technologies are rapidly becoming
more accessible. It is crucial these technologies are regulated as
soon as possible. It is not unrealistic that a deepfake could
swing the outcome of the U.S. election. Someone could
potentially defraud and manipulate voters into voting a certain
way.
She opined that election influence is not only the immediate
concern. There are also larger societal concerns that need to be
considered, involving deepfakes, which contribute to the erosion
of social trust. An increase in deepfake content will make it
challenging for anyone to know what to trust. A bad actor
politician could falsely blame their own wrongdoings caught on
camera on deepfake fabrication. This technology creates a whole
ecosystem of disinformation and everything becomes questionable.
Society has reached this point to some degree, so further
erosion of trust in our democracy should be mitigated.
4:18:49 PM
MS. BELLER moved to slide 5 and spoke to legislative solutions
and referenced five states that have passed legislation to
address deepfake concerns. Twenty-six states have introduced
legislation and 10-11 other states are actively working on
drafting legislation.
4:19:52 PM
MS. BELLER moved to slide 6 and relayed that Public Citizen has
implemented a tracker that tracks passed or introduced
legislation related to deepfakes. She emphasized that there is
no discernible pattern to the groups or individuals advocating
for the legislation.
4:21:19 PM
MS. BELLER moved to slide 7 and spoke to key elements of anti-
fraudulent deepfake legislation.
[Original punctuation provided.]
Key Elements of Anti-Fraudulent Deepfake Legislation
• Prohibit distribution of unlabeled deepfakes
within [90] days of election
o Why disclosure?
• Standard: "Deceptive and fraudulent" = Shows a
person saying or doing something that they did
not say or do.
• Cover all persons not just candidates, parties
and committees
• Establish the requirement for disclosure as
prominent as other text, spoken plainly, etc.
• Establish a right for affected parties to seek
injunction to take down.
• Establish enforcement and penalties
Ms. Beller added that "disclosure" legislation is necessary when
considering first amendment concerns. One of the biggest risks
may arise from social media influencers, who might knowingly put
out fraudulent deepfake content to defraud millions of people.
4:24:48 PM
MS. BELLER moved to slide 8.
[Original punctuation provided.]
Protections
• No liability for broadcasters or platforms that
make reasonable effort to prevent deepfakes, or
that show deepfakes as part of news coverage and
describe as deepfakes
• Exception for satire
• Severability
4:25:46 PM
MS. BELLER concluded that Public Citizen is working with several
legislators to look toward regulations on deepfakes. While it is
a real, dangerous issue, there is a real solution to addressing
it. Forty-one states are currently in the process of taking
legislative action.
4:26:50 PM
SENATOR CLAMAN asked why the push for legislation is on the
state rather than federal level. He suggested that it sounds
like interstate commerce.
4:27:29 PM
MS. BELLER said it would be great if this legislation was passed
on a federal level. Public Citizen supported a bipartisan group
of congresspeople, involving Senator Klobuchar and Senator
Hawley, to draft a bill, but it is unlikely to pass ahead of the
national election. The legislation would only apply to federal
election candidates rather than on the state level. Public
Citizen also put forth a petition, but the Federal Election
Commission (FEC) has been slow to take action, so it is
unlikely laws would be established ahead of the election. States
play an important role regardless of whether federal legislation
is passed.
4:29:12 PM
SENATOR CLAMAN asked whether individual candidates currently
have to bring a cause of action to civil court to stop
fraudulent action or if legislation is needed.
4:29:32 PM
MS. BELLER said legislation is needed because there is no
current legislation stating that putting forth a deepfake is
illegal.
4:29:46 PM
SENATOR WIELECHOWSKI asked Ms. Cooke if technology exists to
identify whether videos are real.
MS. COOKE replied that Google has a watermarking and
verification technology in development called SynthID
(synthetic identification).
4:31:17 PM
MS. COOKE continued that it is important to continue having
conversations about deepfake. The approach must be standardized
on a transnational and global level. Resources put behind
solving the problem are resources well spent. There will always
be a solution, but it requires partnerships and a consensus
across the globe.
4:32:32 PM
SENATOR WIELECHOWSKI asked if the technology is advanced enough
to solve election concerns or if consideration has been given to
legislation requiring synthetic identification filtering.
4:32:59 PM
MS. COOKE said that until an agreement is reached about which
technology is going to be used, it is difficult to create a
mandate, but a market for these technologies remains. Even the
best technology can be subverted by sophisticated actors, so
educating the public on fact checking is crucial.
4:34:11 PM
MS. BELLER said Public Citizen supports watermarking, but it is
not currently possible to prove provenance. To clarify, deepfake
legislation focuses on the circulator rather than the creator.
4:35:22 PM
CHAIR KAWASAKI said that the legislature considered passing
A.I.-focused legislation last December. However, there were
challenges with finding a balance to recognize both the positive
and negative attributes of A.I. technology.
4:36:11 PM
MS. COOKE replied that in the context of security, A.I. must be
fought using A.I. One solution on the market is a security
large-language model. Google Search receives an enormous volume
of hacking attempts; therefore, the company trained a model on
20 years of data to identify threats entering the system. While
users weren't well educated about social media when it was first
released, there is now an opportunity to educate the public on
A.I. technology, "A.I. hallucinations," privacy protection, etc.
When working with an enterprise customer, A.I. is used to scan
data to ensure personally identifiable information (PII) has
been removed before data is accepted.
4:39:33 PM
SENATOR BJORKMAN asked about communications labeling and first
amendment concerns surrounding political advertising. He
wondered if there are additional first amendment concerns that
would prevent a law from standing a court challenge.
4:40:21 PM
MS. COOKE replied that having an exemption for satire is
important for first amendment reasons, but Google has found no
further first amendment concerns preventing legislation from
moving forward.
4:41:10 PM
CHAIR KAWASAKI referenced the deepfake robocall that was
broadcast in New Hampshire. He asked what in this case would
prevent someone from circumventing loopholes under current laws.
4:42:01 PM
MS. COOKE replied that the case would have to play out in court.
She suggested that there may be plausible deniability for audio-
specific deepfake content.
4:43:11 PM
CHAIR KAWASAKI said that the current legislative budget allots
DPS and other agencies a large sum of money to cover the legal
cost of redacting personal information from public information
for the press. He asked for more information about the
redactions of Personal Identifiable Information (PII) using A.I.
4:43:57 PM
MS. COOKE clarified that the question is about technical tools
available to redact responses to state-level FOIA requests in
order to expedite responses to constituents. She offered to
follow up on this use case to find solutions.
4:44:45 PM
SENATOR MERRICK opined that the presentation was interesting.
She asked if law enforcement agencies have expressed concerns
about the fabrication of evidence.
4:45:09 PM
MS. COOKE responded that it has not been expressed as a concern.
4:45:12 PM
CHAIR KAWASAKI thanked the testifiers. The Legislature has been
thinking about ways to navigate A.I., and Senator Shelly Hughes
recently introduced legislation on political deepfakes.
4:46:36 PM
There being no further business to come before the committee,
Chair Kawasaki adjourned the Senate State Affairs Standing
Committee meeting at 4:47 p.m.
| Document Name | Date/Time | Subjects |
|---|---|---|
| AK preso.pdf | SSTA 1/25/2024 3:30:00 PM | A.I. Presentation |
| AI Deep fakes.pdf | SSTA 1/25/2024 3:30:00 PM | AI Deep fake |
| AI Deepfakes - AK Sen State Affairs Cmte.pdf | SSTA 1/25/2024 3:30:00 PM | Additional Deep fake pres |