Alaska State Legislature (2023 - 2024)
House Judiciary Committee
03/22/2024 01:00 PM, Gruenberg 120
Topics:
- Start
- Confirmation Hearing(s): Alaska State Commission for Human Rights
- HB 358
- Adjourn
* first hearing in first committee of referral
+ teleconferenced
= bill was previously heard/scheduled
| += | HB 358 | TELECONFERENCED |
| += | HB 107 | TELECONFERENCED |
HB 358-PROHIBIT AI-ALTERED REPRESENTATIONS
1:33:11 PM
CHAIR VANCE announced that the final order of business would be
HOUSE BILL NO. 358, "An Act relating to use of artificial
intelligence to create or alter a representation of the voice or
likeness of an individual."
1:33:36 PM
REPRESENTATIVE CARPENTER moved to adopt the proposed committee
substitute (CS) for HB 358, Version 33-LS1272\U, Walsh, 3/21/24,
as the working document.
REPRESENTATIVE GROH objected for the purpose of discussion.
1:33:57 PM
REPRESENTATIVE GROH expressed concern that Version U had been
distributed on short notice.
1:34:26 PM
The committee took an at-ease from 1:34 p.m. to 1:36 p.m.
1:36:00 PM
REPRESENTATIVE GROH removed his objection.
REPRESENTATIVE CARPENTER objected for the purpose of discussion.
1:37:09 PM
BOB BALLINGER, Staff, Representative Sarah Vance, Alaska State
Legislature, gave an explanation of changes in the proposed
committee substitute (CS) for HB 358, Version U, on behalf of
Representative Vance. Section 1 addresses civil liability for
defamation based on deepfakes and allows for a claim of
defamation per se. Section 2 addresses child pornography that
has been manipulated, created, or modified using artificial
intelligence (AI) to appear to depict a child under 18 years of
age. Section 3 defines "artificial intelligence" as an
automated system that uses data input, human-defined objectives,
and machine learning, natural language processing, or other
computational processing techniques of similar or greater
complexity to make a decision or facilitate human decision
making. Section 4 addresses deepfakes in electioneering
communications and provides that the following disclosure
statement, when included in the communication, may be used as a
defense: "This (image/video/audio) has been manipulated."
Section 4 also
defines "deepfake" as any visual or audio media that is created,
altered, or otherwise manipulated by artificial intelligence,
and defines "information content provider" and "interactive
computer service" with the meanings given in 47 U.S.C. 230.
Section 5 indicates that the law would not be retroactive.
1:47:17 PM
REPRESENTATIVE SUMNER expressed concern about the phrase
"human-defined objectives" in the proposed definition of AI and
suggested striking it from the bill.
CHAIR VANCE pointed out that the definition includes the words
"machine learning" and asked whether that would assuage
Representative Sumner's concern.
REPRESENTATIVE SUMNER stated that machines can be trained by
other AI programs on AI-generated data, thereby completely
removing humans from the process. He maintained his belief that
the definition would not be weakened by removing the phrase
"human-defined objectives."
MR. BALLINGER opined that the definition should include the
phrase "human-defined objectives" for the purpose of the
legislation, which seeks to hold an individual liable, either
criminally or civilly. Nonetheless, he agreed that if the goal
was to generally define "artificial intelligence," it would make
sense to remove it.
1:50:11 PM
REPRESENTATIVE CARPENTER removed his objection. There being no
further objection, Version U was adopted as the working
document.
1:50:43 PM
REPRESENTATIVE CARPENTER considered a hypothetical scenario and
asked whether it would fall under the definition of deepfake.
MR. BALLINGER said he did not know the answer. He explained
that it would be hard to argue that the scenario posed by
Representative Carpenter portrays a fundamentally different
understanding or impression from the unaltered original.
REPRESENTATIVE CARPENTER asked whether superimposing a speech
that was made by a legislator on the House or Senate floor onto
the same legislator in a different location would be considered
a deepfake.
MR. BALLINGER suspected that it probably would not rise to the
level of a deepfake. He noted that the video would have to be
used in an electioneering communication before considering
whether it was a deepfake.
1:55:03 PM
REPRESENTATIVE GRAY considered a scenario in which the meaning
of a video is altered to convey the opposite message and then
used on a political stage. He asked whether that would be
considered a deepfake.
MR. BALLINGER explained that if AI is used to convey a
fundamentally different message, it would fall under the
definition of deepfake.
REPRESENTATIVE GRAY asked whether the bill would be "hurt" by
including this scenario in the language.
MR. BALLINGER stated that it would be a policy call.
1:56:56 PM
DAVE STANCLIFF, Staff, Representative Mike Cronk, Alaska State
Legislature, on behalf of Representative Cronk, prime sponsor of
CSHB 358, Version U, advised that the bill should remain
narrowly focused on deepfakes.
REPRESENTATIVE GRAY questioned the legality of altering a
person's words or image without the use of AI in political
advertising.
1:58:38 PM
IAN WALSH, Attorney, Legislative Legal Services, Legislative
Affairs Agency (LAA), shared his understanding that the creation
of falsely edited media would likely fall under defamation. He
offered to follow up with the requested information as it
pertains to election law. He noted that, as currently written,
the bill applies to AI that modifies material so that it appears
fundamentally different from the original.
1:59:30 PM
REPRESENTATIVE C. JOHNSON shared his understanding that per
Version U, the disclaimer [stating "This (image/video/audio) has
been manipulated"] must be the same text size as the headline.
He expressed concern that [the size] is not workable.
MR. BALLINGER noted that this provision would only apply when a
deepfake is used in electioneering communications.
REPRESENTATIVE C. JOHNSON pointed out that not all deepfakes are
necessarily bad and gave the example of Photoshopping a dog into
a photo to evoke a loving feeling.
MR. BALLINGER contended that inserting a dog would not
fundamentally change the meaning.
2:04:26 PM
REPRESENTATIVE GROH asked whether "electioneering" is defined
elsewhere in statute.
MR. BALLINGER answered yes, "in two parts," indicating that both
"electioneering" and "communication" are defined separately in
statute.
REPRESENTATIVE GROH asked how many people would need to see the
deepfake for it to become actionable.
MR. BALLINGER indicated that it would depend on whether the
material was generated with the intent to alter a person's vote.
He added that if all elements were met and damages were
demonstrated, then the individual could be held liable.
2:07:07 PM
REPRESENTATIVE GROH asked whether the disclosure in a falsified
image should be at least as large as the headline.
MR. BALLINGER said it would be a policy call.
REPRESENTATIVE GROH highlighted the definition of "artificial
intelligence" on page 3, lines 19-22 of Version U. He
referenced Photoshop, ChatGPT, and Grammarly and asked whether
those applications would fit under the proposed definition of
AI.
MR. BALLINGER answered yes, all of those applications are
considered AI.
2:09:47 PM
REPRESENTATIVE GROH asked whether there is a distinction between
child pornography that manipulates the likeness of a real child
versus that which is entirely generated by AI.
MR. BALLINGER shared his understanding that AI-generated child
pornography [that does not depict a real child] is not
prosecutable under current laws.
REPRESENTATIVE GROH questioned whether the intent of the
legislation would be to ask the jury whether the material
depicts an identifiable, actual child.
MR. BALLINGER explained that the jury would need to consider
whether a reasonable person could think that the material
depicts an actual child.
REPRESENTATIVE GROH said he understood the dual purposes of the
legislation, which both involve complicated issues. He stressed
the importance of thinking through the implications of the bill.
2:15:17 PM
REPRESENTATIVE SUMNER suggested that the creation of deepfake
pornography based on the likeness of an adult should be, at a
minimum, defamation per se.
MR. BALLINGER clarified that it would be considered defamation
per se because damages are presumed.
2:16:46 PM
REPRESENTATIVE GRAY asked whether a campaign photo with
artificially whitened teeth would be considered a deepfake and
would require the inclusion of a disclosure statement.
MR. BALLINGER said that would not be considered a deepfake
because it would not [convey a fundamentally different
understanding].
2:18:42 PM
REPRESENTATIVE GRAY asked what would happen if the language on
page 2, line 5, "and the depiction is recognizable as an
identifiable actual child" was deleted.
MR. BALLINGER remarked, "It probably, almost definitely, would
be considered unconstitutional" until it reaches the Supreme
Court.
2:22:01 PM
REPRESENTATIVE SUMNER recalled a previous election in which his
political opponent Photoshopped him into a Planned Parenthood t-
shirt. He asked whether that would be considered defamation per
se under the proposed legislation.
MR. BALLINGER responded, "probably not," because the standard
for political officials is much higher. However, if Version U
were to pass, he suspected that the photo could be considered a
deepfake if it were made to look legitimate.
2:23:32 PM
REPRESENTATIVE CARPENTER shared a hypothetical example and asked
whether it would be considered a deepfake in addition to
defamation.
MR. BALLINGER reiterated that it would depend on whether [the
manipulated material] conveyed a fundamentally different
meaning.
2:25:47 PM
REPRESENTATIVE CARPENTER sought to confirm that if the material
included a disclosure statement, it would not be considered a
deepfake.
MR. BALLINGER confirmed that if the disclosure were used, there
would be "no harm no foul."
REPRESENTATIVE CARPENTER pointed out that years of quoted text
are available to AI, which could be used to create a speech in
video form that manipulates the context and portrays the
candidate in a certain light. He sought to confirm that this
conduct would not be covered by the bill.
MR. BALLINGER maintained that if the quote was used with the
intent to convey a completely different message, that would be
considered [a deepfake].
2:28:03 PM
MR. STANCLIFF asked the committee to identify the most
significant constitutional issues with the bill.
MR. BALLINGER shared his belief that the biggest constitutional
issue surrounds AI-generated child pornography because, under
current jurisprudence, it is victimless.
CHAIR VANCE emphasized the importance of ensuring that any
sweeping changes pertaining to AI would not unintentionally
impact current electioneering practices. Furthermore, she
opined that there needs to be a disclaimer on the use of
deepfakes for consumer protection.
2:33:50 PM
REPRESENTATIVE CARPENTER suggested that the committee take a
step back and consider that child pornography [that is generated
artificially] may not involve the filming of actual humans.
CHAIR VANCE agreed that much of this discussion bleeds over into
First Amendment rights. She shared her belief that [the bill]
should not push the envelope with regard to constitutionality in
an effort to protect Alaskans' rights. She added that the use
of AI would be an ongoing conversation because it is moving at a
much faster pace than the legislature.
[CSHB 358, Version U, was held over.]
| Document Name | Date/Time | Subjects |
|---|---|---|
| Rebecca Carrillo Human Rights App_Redacted.pdf | HJUD 3/22/2024 1:00:00 PM | |
| Rebecca Carrillo Human Rights Resume_Redacted.pdf | HJUD 3/22/2024 1:00:00 PM | |
| HB 358 - Proposed CS v.U.pdf | HJUD 3/22/2024 1:00:00 PM; HJUD 3/25/2024 1:00:00 PM | HB 358 |