Legislature(2025 - 2026)GRUENBERG 120
02/28/2025 01:00 PM House JUDICIARY
Note: the audio
and video
recordings are distinct records and are obtained from different sources. As such there may be key differences between the two. The audio recordings are captured by our records offices as the official record of the meeting and will have more accurate timestamps. Use the icons to switch between them.
| Audio | Topic |
|---|---|
| Start | |
| Presentation(s): Legal and Ethical Implications of Artificial Intelligence | |
| Adjourn |
* first hearing in first committee of referral
+ teleconferenced
= bill was previously heard/scheduled
+ teleconferenced
= bill was previously heard/scheduled
| + | TELECONFERENCED | ||
| + | TELECONFERENCED | ||
ALASKA STATE LEGISLATURE
HOUSE JUDICIARY STANDING COMMITTEE
February 28, 2025
1:01 p.m.
MEMBERS PRESENT
Representative Andrew Gray, Chair
Representative Chuck Kopp, Vice Chair
Representative Ted Eischeid
Representative Genevieve Mina
Representative Mia Costello
Representative Jubilee Underwood
MEMBERS ABSENT
Representative Sarah Vance
COMMITTEE CALENDAR
PRESENTATION(S): LEGAL AND ETHICAL IMPLICATIONS OF ARTIFICIAL
INTELLIGENCE
- HEARD
PREVIOUS COMMITTEE ACTION
No previous action to record
WITNESS REGISTER
GUARAV KHANA, PhD, Senior Manager of Data Science and Digital
Journeys
Cisco Systems;
AI Leadership Instructor, Stanford University
Juneau, Alaska
POSITION STATEMENT: Presented on AI foundations during the
Legal and Ethical Implications of Artificial Intelligence
presentation.
ROSE FELICIANO, Executive Director for Washington and Northwest
TechNet
Juneau, Alaska
POSITION STATEMENT: Presented on AI policy during the Legal and
Ethical Implications of Artificial Intelligence presentation.
ACTION NARRATIVE
1:01:03 PM
CHAIR ANDREW GRAY called the House Judiciary Standing Committee
meeting to order at 1:01 p.m. Representatives Costello, Mina,
Eischeid, and Gray were present at the call to order.
Representatives Underwood and Kopp arrived as the meeting was in
progress.
^PRESENTATION(S): Legal and Ethical Implications of Artificial
Intelligence
PRESENTATION(S): Legal and Ethical Implications of Artificial
Intelligence
1:01:47 PM
CHAIR GRAY announced that the only order of business would be
the Legal and Ethical Implications of Artificial Intelligence
presentation.
1:02:27 PM
GUARAV KHANA, PhD, Senior Manager of Data Science and Digital
Journeys, Cisco Systems; AI Leadership Instructor, Stanford
University, gave a PowerPoint presentation, titled "AI
Foundations," [hard copy included in the committee packet].
Beginning on slide 2, he recalled the historic launch of ChatGPT
on November 30, 2022. On slide 3, he elaborated on the economic
impacts of AI, which is estimated to add between $2.6 trillion
to $4.4 trillion annually. Continuing to slide 4, he explained
that AI is fundamentally good at detecting patterns and
anomalies across large datasets. On slide 5, he likened AI to a
rocket ship with a quote from Andrew Ng. He outlined the
journey of machine learning - or AI - on slide 6, recalling that
it experienced a renaissance in the late 2000s with tasks like
detecting fraudulent credit card transactions. Its objectives
grew more sophisticated with the implementation of object
identification and x-ray analysis in the 2010s, growing to self-
driving cars and text/video generation at present. On slide 7,
he spoke to AI's forecasted contribution to the global economy
of $15.7 trillion by 2030, noting that thus far, AI has always
exceeded projections. Slide 8 highlighted use cases of AI in
state government for things like court filings, traffic
decongestion, and chatbots. Slides 9 through 11 gave examples
of how ChatGPT is being used. On slide 12, he explained the
transformative power of AI. He shared an example of how
generative AI systems make errors to illustrated the importance
of input and phrasing and why it matters. Slide 14 outlined the
latest research and innovation on grounding, alignment, and
safety. Grounding refers to a large language model's (LLM)
ability to provide accurate, reliable, and verifiable answers.
Alignment refers to the ability avoid harmful, biased, or
inappropriate outputs while remaining useful; the ability to
match human expectations and societal/organization norms. The
concept of safety refers to the ability for bad actors to
compromise these systems and output harmful and toxic content.
1:24:24 PM
DR. KHANA, in response to a series of questions, said the
difference between alignment and safety is the ability for the
LLM to sound like one's brand versus defense against outputting
toxic language; guardrails are built into the modes to prevent
safety violations and toxic speech; AI has no concept of
reasoning and instead, gives the illusion of thinking like a
human by predicting words; there is a meaning, depth, and
emotion expressed by humans that is not present in these
systems.
1:35:01 PM
DR. KHANA resumed the presentation on slides 15-17 with examples
of how to "hack" an LLM for a secret password. He explained
that AI can be tricked into revealing something it shouldn't or
generating toxic content. He concluded that these systems are
wonderful and innovative, but their flaws must be acknowledged
and mitigated.
1:38:52 PM
ROSE FELICIANO, Executive Director for Washington and Northwest,
TechNet, gave a presentation on TechNet and AI legislation. She
shared several policy principles that TechNet urges legislatures
to consider when crafting AI related legislation. First, policy
makers should avoid blanket prohibitions on AI, machine
learning, or other forms of automated decision making and
instead reserve any restrictions for specific use cases that
present clear, demonstrated risk of unacceptable harm. Second,
policy makers should leverage existing authorities under state
law that provide substantial anti-discrimination and civil
rights protections and limit new authorities specific to the
operation of AI where existing authorities are inadequate.
Third, policy makers should ensure any requirements on automated
decision tools focus on high risk uses where decisions are based
solely on the automated decisions. They should avoid labeling
entire sectors as inherently high risk and focus on specific
outcomes that involve the loss of life or liberty or have
significant legal effects on people. Fourth, interoperability
is a huge concern for the technology industry. She urged
legislators to rely on established national and international
framework as a guide for developing policy to ensure operability
and avoid patchwork. She encouraged members to allow measures
taken to comply with one law or regulation to satisfy the
requirements for another applicable law or regulation. She
shared that last year, TechNet had tracked 476 AI bills across
the country that largely focused on the creation of AI task
forces, election misinformation, protections against child
sexual abuse material (CSAM), and safeguarding against potential
bias in automated decision tools.
CHAIR GRAY acknowledged that protecting the public from harm may
cost money and may require restrictions on the technology
industry. He emphasized that the committee's goal is public
protection, not the protection of a particular company.
1:48:34 PM
MS. FELICIANO shared an example involving Zillow and the use of
artificial staging for real estate listings and how that might
feel deceitful to buyers. In response to a series of questions
about the use of AI in election material, she said a number of
states have passed AI disclosure laws. TechNet believes that
it's the candidate's responsibility [to represent themselves
honestly], and that this issue is much more important if AI is
being used harmfully or to disparage another candidate, as
opposed to being used to enhance one's physical appearance, for
example.
DR. KHANA, in response to a question from the chair, indicated
that it would be possible to quantify the use of AI in a
particular advertisement or video so that a usage threshold
could be implemented for disclosure, but implementing uniform
standards would be difficult because the variety of instances is
so vast. He suggested that if AI was used to craft something
entirely, a watermark could be used. Further, he shared his
belief that some of these standards would be determined in
court. He reasoned that disclosing a modicum use of AI would be
unreasonable.
MS. FELICIANO added that the state of Washington is considering
a bill that would require watermarking, also referred to as
"content provenance," for content altered by AI.
1:57:41 PM
MS. FELICIANO, in response to a series of questions about data
privacy, reported that 19 states had adopted data privacy laws
and suggested that all states should have one as a starting
point. With regard to health data, she said it must follow
Health Insurance Portability and Accountability Act (HIPAA)
guidelines. In response to a question about collaboration with
law enforcement, she said many AI companies have dedicated staff
whose job it is to look for CSAM and report it to the National
Center for Missing and Exploited Children, per federal law. In
response to a question about the regulation of data centers, she
said it's important that electric utilities and regional
planning companies are anticipating future need to prepare for
environmental and energy impacts.
2:06:36 PM
DR. KHANA agreed that an updated data privacy policy would be a
good starting point. He explained that the heart of a good data
privacy policy includes categorizing data and creating a
taxonomy of confidential versus public data or allowable versus
restricted data, for example. He said there's a way to do data
loss protection as well. Many newer generative AI models are
not that energy intensive and can be built for energy
efficiency. He added that smaller models can outperform larger
models for certain companies if they're trained to do specific
tasks well, versus trained to do everything like ChatGPT. He
opined that there wouldn't be an exponential need for energy
that crashes the grid; however, the need for power has been
underestimated. In response to a series of questions, he
reported that it takes months to build large generative AI
chatbots and days for smaller models, because they are often
derivatives of the larger models; guardrails are not required by
law, which could be mandated for any models that would be used
by the state along with other rules and layers of protection.
2:12:48 PM
MS. FELICIANO offered to follow up on two questions about
whether the Uniform Law Commission had standardized statutory
language and which states Alaska could look to for crafting
policy that imposes guardrails on new generative AI chatbots.
DR. KHANA spoke to the importance of striking a balance between
generalized and overly strict language to avoid the need to
revisit the policy as technology evolves. Be recommended erring
on the side of caution and including more rather than less to be
as comprehensive as possible.
2:17:19 PM
ADJOURNMENT
There being no further business before the committee, the House
Judiciary Standing Committee meeting was adjourned at 2:17 p.m.
| Document Name | Date/Time | Subjects |
|---|---|---|
| 2025-02-28 - House Judiciary Committee - Alaska House of Representatives v1.pdf |
HJUD 2/28/2025 1:00:00 PM |
Artificial Intelligence |