Legislature (2025 - 2026) BELTZ 105 (TSBldg)
04/29/2025 03:30 PM Senate STATE AFFAIRS
Note: the audio and video recordings are distinct records and are obtained from different sources. As such, there may be key differences between the two. The audio recordings are captured by our records offices as the official record of the meeting and will have more accurate timestamps. Use the icons to switch between them.
| Audio | Topic |
|---|---|
| Start | |
| SB26 | |
| SB102 | |
| SB37 | |
| SB33 | |
| SB2 | |
| Adjourn | |
* first hearing in first committee of referral
+ teleconferenced
= bill was previously heard/scheduled
| += | SB 37 | TELECONFERENCED | |
| *+ | SB 2 | TELECONFERENCED | |
| *+ | SB 33 | TELECONFERENCED | |
| += | SB 26 | TELECONFERENCED | |
| += | SB 102 | TELECONFERENCED | |
SB 2-AI, DEEPFAKES, CYBERSECURITY, DATA XFERS
3:56:38 PM
CHAIR KAWASAKI announced the consideration of SENATE BILL NO. 2
"An Act relating to disclosure of election-related deepfakes;
relating to use of artificial intelligence by state agencies;
and relating to transfer of data about individuals between state
agencies."
3:57:17 PM
SENATOR SHELLEY HUGHES, District M, Alaska State Legislature,
Juneau, Alaska, sponsor of SB 2 said state agencies need to use
AI responsibly, protect Alaskans' data and personal liberties,
and ensure fairness and transparency. She said while AI can help
address workforce and budget challenges by streamlining tasks,
AI use must balance innovation with safeguards against harm. She
shared an experience serving on the National Conference of State
Legislatures Task Force on AI. She stressed the responsibility of
state agencies to apply AI appropriately and transparently
without hindering private sector innovation.
4:00:11 PM
SENATOR HUGHES moved to slide 2, and defined the different types
of A.I.:
[Original punctuation provided.]
Defining A.I.
ARTIFICIAL INTELLIGENCE: falls into two primary
categories:
GENERATIVE: Machine-based system designed to operate
with varying levels of autonomy that may exhibit
adaptiveness after deployment and that, for explicit
or implicit objectives, infers how to generate outputs
from input the system receives.
RULES-BASED: Computational program or algorithm
designed to process information in a logical way that
does not produce inferential output beyond its
original programming and query parameters.
4:00:33 PM
SENATOR HUGHES moved to slide 3, Why Now Why Here, and discussed
the following points:
[Original punctuation provided.]
WHY NOW? A.I. is here. It is evolving at lightning
speed. We cannot stop it. We cannot ignore it.
"A.I. is a tool and in itself is not inherently evil.
Our job is to protect against bad actors and harness
A.I. for good the very best we can."-Senator Shelley
Hughes
WHY HERE? Congress is unlikely to unite on parameters
and best practices anytime soon. State legislatures
are more nimble and ready to mitigate the harm and
bridle the benefits of A.I.
4:01:11 PM
SENATOR HUGHES moved to slide 4, Why this Focus, and discussed
the following points:
[Original punctuation provided.]
1. State Agency Use of A.I.
a) Targeting private sector development and
deployment would stifle innovation and be a fool's
errand for a state with a small population.
b) Setting the parameters for state agency use is
necessary
i. to safeguard the public
ii. to ensure appropriate deployment that will
offer efficiencies and solutions for the
workplace
2. Political Deepfakes
a) No time to waste. Elections occur every year.
b) In general, lack of trust → chaos.
SENATOR HUGHES said when SB 2 was first drafted, it was the only
legislation addressing political deepfakes. Since other
legislation now covers that issue, the committee may want to
remove the political deepfake section and allow it to be handled
separately to ensure proper disclosure and accurate public
information.
4:01:53 PM
SENATOR HUGHES moved to slide 5, A Good Starting Point, and
discussed the following points:
[Original punctuation provided.]
AGREEING ON AI PRINCIPLES
• Differentiate between tool and actor
-Protect against bad actors
-Support innovation for beneficial uses
• Aim for tech neutrality
• Assign human oversight and responsibility
• Maintain transparency
• Avoid harm/injury
• Respect sensitive personal data privacy and
security
• Embrace data hygiene
• Avoid creating/reinforcing unfair bias
• Uphold laws and protect individual rights
4:03:19 PM
EIELIA PRESTON, Staff, Senator Shelley Hughes, Alaska State
Legislature, Juneau, Alaska, co-presented the slideshow for SB 2
and moved to slide 6, What it Does-High Level:
[Original punctuation provided.]
1. Adds disclosure statement requirements for
political deepfake communications.
2. Adds new sections regarding state agency use of
artificial intelligence and individuals' data.
3. Adds section to allow persons who suffers harm to
bring civil action to superior court.
4:04:03 PM
MS. PRESTON moved to slide 7, What it Does-a bit in the weeds:
[Original punctuation provided.]
Requires biennial inventory and report of AI systems
being used by state agencies published on DOA website.
1.Name and vendor of system
2.General capabilities and uses
3.Most recent impact assessment completed date
Requires biennial impact assessments to determine
efficacy and continued use of systems.
4:04:35 PM
MS. PRESTON moved to slide 8, What it Does-a bit in the weeds:
[Original punctuation provided.]
Impact Assessment
1.System efficacy
2.Human oversight
3.Accountability mechanisms
4.Decision appeals process
5.Benefits, liability, and risks to state
6.Effects on liberty, finances, livelihood, and
privacy interests of individuals, including effects
from geolocation data use.
7.Unlawful discrimination or disparate impact on
individual or group
8.Policies and procedures governing process of A.I.
system use for consequential decision-making.
4:05:07 PM
MS. PRESTON moved to slide 9, What it Does-a bit in the weeds:
[Original punctuation provided.]
Requires state agencies to
1.Notify individuals who may be legally or
significantly affected
2.Obtain individual's consent before soliciting or
acquiring sensitive personal data or sharing data with
another state agency*
3.Provide appeals process including manual human
review
4.Inform and acquire consent if AI used in hiring
interview video
5.When outsourced, multi-factor authentication must
secure system and stored data
MS. PRESTON said these matters require transparency, such as the
Department of Public Safety sharing legally required information
with the court system.
4:05:49 PM
SENATOR HUGHES commented that the asterisk on the slides
explains there is an exemption for the Department of Public
Safety.
4:05:55 PM
MS. PRESTON moved to slide 10, What it Does-a bit in the weeds:
[Original punctuation provided.]
Prohibits* state agencies from using.
1.Biometric identification e.g., facial recognition
2.Emotion recognition
3.Cognitive behavioral manipulation of individuals or
groups
4.Social scoring
5.AI systems that use data hosted in hostile nations
*With provisional exceptions for Department of Safety
4:06:24 PM
SENATOR HUGHES recommended an amendment to reference the U.S.
Code for defining foreign adversary nations. She said this would
avoid updates and provide clarity, since views on hostile
nations may differ.
4:06:53 PM
MS. PRESTON moved to slide 11 and showed examples of other
countries with issues from deepfakes during an election.
4:07:17 PM
MS. PRESTON moved to slide 12, and read the following quote:
[Original punctuation provided.]
"The fact-checkers trying to hold the line against
disinformation on social media in Slovakia say their
experience shows AI is already advanced enough to
disrupt elections, while they lack the tools to fight
back." (Morgan Meaker, The Wired, 2023)
4:07:40 PM
SENATOR HUGHES noted that while deepfakes disrupted elections
abroad in 2024, U.S. research found deepfakes spread
misinformation yet did not change outcomes. Still, 52 percent of
Americans struggle to distinguish fact from fiction in election
news, and studies show 25 to 50 percent of deepfakes aim to
mislead. She said growing awareness has helped people spot
fakes, but disclosure, enforcement, penalties, and injunctive
relief remain important parts of the proposal.
4:09:49 PM
CHAIR KAWASAKI opined that 52 percent is a low estimate of the
number of people who struggle to identify misinformation. He
referenced slide 13, stating that with AI filters, everything
would need a content disclosure requirement. He asked for her
views on how disclosure laws should apply to deepfakes.
4:10:57 PM
SENATOR HUGHES reiterated the definition of a deepfake:
It would have to be something that creates something
false that would appear to a reasonable person to
depict a real individual saying or doing something
that did not actually occur and provides a
fundamentally different understanding or impression of
an individual's appearance, conduct, or spoken words.
SENATOR HUGHES replied that AI was also used positively in
the last election, such as translating candidate speeches into
other languages. She wanted to keep SB 2 narrowly focused on
deceptive uses, like making someone appear to say or do
something that never happened.
4:12:11 PM
CHAIR KAWASAKI announced invited testimony on SB 2.
4:13:17 PM
SPENCE PURNELL, Resident Senior Fellow, Technology and
Innovation, R Street Institute, Tampa, Florida, testified by
invitation on SB 2. He agreed that deepfakes are a real problem
and supported a narrow definition to avoid overreach, favoring
disclosure over bans. He stressed government roles beyond
regulation, such as education and awareness. He endorsed SB 2 as
a well-written bill that sets responsible boundaries without
discouraging beneficial AI use. He noted the importance of
careful regulation given the technology's early stage.
4:15:51 PM
CHAIR KAWASAKI stated his belief that disclosure is effective,
though it must be done carefully. He said if everything requires
a disclosure, people may start ignoring them altogether. He
asked for an explanation on how other states have set guidelines
for the use of artificial intelligence, particularly around
disclosure.
4:16:22 PM
MR. PURNELL warned that to avoid liability, many will add
disclosure statements to political communications, which could
lessen the impact. While not a bad outcome, he stressed that the
need is for digital literacy and civic education, enabling
citizens to critically evaluate information. He noted that AI is
just the first of many emerging technologies, and long-term
resilience depends on fostering cultural change and critical
thinking rather than relying solely on policy or technology.
4:19:06 PM
DANIEL CASTRO, Vice President, Information Technology and
Innovation Foundation, Washington, D.C., testified by invitation
on SB 2 and emphasized that generative AI offers significant
benefits while posing risks, particularly with deepfakes in
elections. He highlighted the need for narrowly tailored state
policies that focus on harmful manipulation rather than
legitimate AI use. Key principles include meaningful disclosure,
timely enforcement, accountability for bad actors, and
preserving beneficial uses like translation and accessibility.
He stressed that government use of AI should be transparent and
accountable, and that policies should protect election integrity
without stifling innovation.
4:22:56 PM
CHAIR KAWASAKI shared an example of Alaska's overly broad cell
phone law that unintentionally restricted common screen devices
and had to be corrected the next year. He asked if other states have
similarly overregulated technology and later had to roll back or
amend the laws.
4:24:02 PM
MR. CASTRO answered yes and said some states passed AI laws with
poor definitions that overreached, creating ineffective labeling
requirements. He said over-labeling can dilute trust signals,
and such rules only bind legitimate actors, not foreign bad
actors spreading misinformation. He cautioned against imbalance
and urged for technology-neutral policies focused on deceptive
media in elections rather than AI specifically.
4:26:21 PM
NATE PERSILY, Professor, Stanford Law School, Stanford,
California, testified by invitation on SB 2 and stated that AI
amplifies the abilities of all actors, whether election officials,
candidates, or foreign adversaries, to pursue their goals. While
Americans are especially pessimistic about AI's effect on
democracy, evidence from recent elections shows little actual
use of deepfakes to sway voters. He said the greater danger is
eroding trust in authentic media, as people become better at
spotting falsehoods and worse at recognizing truth. This
distrust could harm democracy more than the deepfakes
themselves. He stated that some states have banned deepfakes,
while many others, including bills like SB 2, are under
consideration and focus on disclosure. Disclosure is viewed as a
modest yet important first step, giving voters tools to
understand what content is AI-generated without overregulating
rapidly evolving technology.
4:31:23 PM
CHAIR KAWASAKI asked how the public can be educated to better
discern truth from misinformation, especially when many people
no longer trust what they see in the news or online and can
easily be misled.
4:33:05 PM
MR. PERSILY responded that social media has replaced
authoritative news sources, creating an environment where
misinformation spreads easily. While empowering users with tools
to identify synthetic content is a step forward, lasting
solutions require building widespread critical thinking skills.
He said, however, that repeatedly warning people not to trust
online content risks leading people to distrust everything, even
information that is accurate.
4:35:37 PM
CHAIR KAWASAKI asked, from a legal perspective, whether penalties
for misinformation or AI misuse can serve as an effective
deterrent.
4:36:03 PM
MR. PERSILY replied that a blanket ban on AI in communications
would be unconstitutional as overly broad under the First
Amendment. However, disclosure requirements are a recognized
constitutional safe harbor. Courts, including in the Citizens
United case, have upheld strong disclosure rules. SB 2 follows
that model, treating failure to disclose AI use, especially when
intended to manipulate images, similarly to other regulatory
contexts where nondisclosure can trigger enforcement.
4:37:55 PM
CHAIR KAWASAKI requested an explanation on whether libel laws
have been used to address cases where AI makes it appear that
someone said something they did not.
4:38:21 PM
MR. PERSILY replied that libel laws have seen limited
application in AI contexts, primarily with non-consensual
intimate imagery, which poses significant risks, especially for
young people. For public figures, libel requires proving actual
malice, making it harder to pursue cases involving AI deepfakes
of officials. He said while libel shows some promise, disclosure
requirements are often a more practical regulatory tool for
election-related AI content.
4:40:16 PM
CHAIR KAWASAKI opened public testimony on SB 2.
4:40:30 PM
MIKE COONS, representing self, Wasilla, Alaska, testified in
support of SB 2. He shared personal challenges adapting to
technology and noted that AI is far more advanced today. He said
SB 2 provides initial protection for responsible government use
of AI. While AI can accelerate information processing and
improve accuracy, the final product must rely on human judgment
and innovation. His concerns included overreliance
on AI, the potential for deepfakes to mislead the public, and
the risk that students may lose critical skills to discern truth
from misinformation. He opined that human oversight and
responsibility are essential to ensure AI supports rather than
undermines decision-making and public trust.
4:43:12 PM
SENATOR HUGHES stated that SB 2 is technology-neutral, covering
AI and other forms of manipulation like Photoshop for deepfakes,
with disclosure required for any altered content. She said state
agencies using AI, especially in consequential decisions
affecting individuals, should follow clear parameters, obtain
consent, and ensure transparency. She said the high costs in the
fiscal note are unnecessary, as responsible AI use should
streamline work rather than require additional staff. She
emphasized that AI use must remain transparent, fair, and
practical, with common-sense guidelines rather than excessive
regulation. Properly implemented, AI offers long-term benefits
and potential savings for state operations.
4:46:57 PM
CHAIR KAWASAKI stated that the fiscal note includes staffing and
resources totaling $5.6 million for operations and $2.5 million
for contractual services. He said the Finance Committee would
need to review SB 2 and then it would continue to Judiciary as
the second committee of referral. He said he would work with the
bill sponsor on accelerating SB 2.
4:47:52 PM
SENATOR GRAY-JACKSON stressed the urgency of addressing AI,
noting it moves too quickly for a task force approach, and
expressed willingness to help reduce the fiscal note to move the
bill forward.
4:48:27 PM
CHAIR KAWASAKI kept public testimony open for SB 2.
4:49:05 PM
CHAIR KAWASAKI held SB 2 in committee.
| Document Name | Date/Time | Subjects |
|---|---|---|
| SB0002A.pdf | SSTA 4/29/2025 3:30:00 PM | SB 2 |
| SB 2 AI Sponsor Statement.pdf | SSTA 4/29/2025 3:30:00 PM | SB 2 |
| SB 2 Sectional Analysis.pdf | SSTA 4/29/2025 3:30:00 PM | SB 2 |
| SB 33 version A.pdf | SSTA 4/29/2025 3:30:00 PM | SB 33 |
| SB 33 Sponsor Statement version A.pdf | SSTA 4/29/2025 3:30:00 PM | SB 33 |
| SB 33 Sectional Analysis version A.pdf | SSTA 4/29/2025 3:30:00 PM | SB 33 |
| 2025 - Testimony of Daniel Castro - AK AI Deepfakes.pdf | SSTA 4/29/2025 3:30:00 PM | SB 2 |
| SB 2 AI Presentation S STA.pdf | SSTA 4/29/2025 3:30:00 PM | SB 2 |