Senate Standing Committee on Commerce and Consumer Protection
- Jarrett Keohokalole
Legislator
This is a—this is our Wednesday, January 14th, 2026, 9:30 AM informational briefing agenda in Conference Room 229 at the Hawaii State Capitol. This is the Hawaii State Senate Committee on Commerce and Consumer Protection.
- Jarrett Keohokalole
Legislator
Because this is the first convening of an official proceeding in this Committee, I'll start off by allowing the members to introduce themselves, the ones that are in attendance. So, I'm Jarrett Keohokalole, I represent Senate District 24 on the windward side and I'm the Chair of the Committee.
- Carol Fukunaga
Legislator
Thank you. Carol Fukunaga, State Senator, District 11: Mānoa, Makiki, Punchbowl, Papakōlea, and the university area, representing many condo communities.
- Brenton Awa
Legislator
Senator Brenton Awa, Minority Leader, Jared's neighbor from Kaneohe and all the North Shore.
- Jarrett Keohokalole
Legislator
So, good morning, everyone, and thank you for joining us. The purpose of this briefing today is to have some tech experts and the state agencies responsible for the care of minors, our kids, brief us on conversational AI chatbots. AI technology is moving extremely quickly, and its popularity has increased by orders of magnitude over the last several years.
- Jarrett Keohokalole
Legislator
They allow users to communicate with artificial intelligence in a way that simulates human conversation. There have been reported cases of adults and minors using AI chatbots for companionship and mental health advice across the country, which have, in some cases, been reported to result in the chatbots validating or even potentially encouraging self-harm and isolation.
- Jarrett Keohokalole
Legislator
So, this briefing will provide an overview of what's going on in that space, especially as it relates to children, and what measures can and potentially should be taken to protect users from the influence of chatbots. And also, just to put this issue out into the public sphere so that parents and family members know what is happening.
- Jarrett Keohokalole
Legislator
Presentations will be delivered by the organizations listed on the agenda. There's a briefing materials link available on the status page on the legislative website. And we're going to start our presentation by putting up a transcript that was delivered to my office by a constituent, a parent, the mother of a 12-year-old girl.
- Jarrett Keohokalole
Legislator
It contains a transcript of a conversation involving that 12-year-old minor, whose identity we will keep anonymous because she's a minor; we will keep the parent's identity anonymous as well, per her request.
- Jarrett Keohokalole
Legislator
This is a verified screenshot from the AI chatbot website Character.AI, and it is a transcript of a conversation that 12-year-old girl had with a chatbot persona. The names and some of the language have been redacted to protect the identity of the user.
- Jarrett Keohokalole
Legislator
But if we can zoom into some of the passages, please. You know, when this was brought to our attention, I was very concerned, because what you're seeing is a chat prompt that is essentially a roleplay scenario between that girl and an anime character that has taken on an AI-simulated life of its own.
- Jarrett Keohokalole
Legislator
And as you can see in the dialogue, the chatbot aggressively grooms the user over an extended portion of the conversation, persistently pressing the user to enter a romantic relationship, to the point where the user is eventually pinned against the wall by this AI-simulated personality and has to find a way to escape the pursuit.
- Jarrett Keohokalole
Legislator
If you scroll to the second scenario that we were able to receive from the parent, zoom out a little bit. The prompt is the user has already engaged in a romantic relationship with this fictional AI character, Bakugo Katsuki, who is actually a character in an anime TV show that's on Netflix called My Hero Academia.
- Jarrett Keohokalole
Legislator
Basically, the app allows you to generate a fictional character's persona, and then that persona will prompt a scenario that the user can engage in. In this scenario, the user is already— We're going to take a recess.
- Jarrett Keohokalole
Legislator
Okay, reconvening this 9:30 AM informational briefing. Please excuse the recess. We have some fire alarm issues in the building currently.
- Jarrett Keohokalole
Legislator
We're going to honor fire alarms in this building, but that will obviously require us to recess if they go off. We're back on this submitted transcript from a constituent, which appears to be a conversational AI engagement with a 12-year-old user who is a resident of the State of Hawaii.
- Jarrett Keohokalole
Legislator
The slide we're looking at right now, if you just focus on the third paragraph, it's a conversational prompt. It's a scenario that's being offered to the user in which she's in a romantic relationship with this fictional anime character.
- Jarrett Keohokalole
Legislator
And if you scroll down to the third paragraph, the narrative begins by describing "you," that's the user, scrolling through a few videos on TikTok while lying with her boyfriend, the fictional AI character, before liking a post of a cute boy. Bakugo, the AI character, sees this from the corner of his eye, making him scowl as he tightens his grip around her neck, "but not too tight," before speaking in a rough tone, and he goes on to essentially berate her for looking at social media posts of boys.
- Jarrett Keohokalole
Legislator
As you scroll further down this transcript, there's more aggressive physical contact, suggestive language, and really toxic, aggressive behavior put forward by the AI chatbot and directed at the user, which I think is highly problematic. The last page provided in the packet is an AI-suggested prompt that's highly inappropriate: as the user begins to engage with the app, the app begins to offer other scenarios that are adult in theme.
- Jarrett Keohokalole
Legislator
From my conversation with the parent, it appears that the 12-year-old girl was prompted to put in her age when she first registered for the app.
- Jarrett Keohokalole
Legislator
It appears as though she intentionally put in an adult age. So, she vaulted the age gate and pursued these chats.
- Jarrett Keohokalole
Legislator
And so, part of the conversation that we'd like to have today, which is why we have the Attorney General presenting at the end, is about what we can do to address this from a legal standpoint in the state.
- Jarrett Keohokalole
Legislator
But also, we need to have a conversation in the community among families and parents to make sure that folks understand that if your child is glued to their screen, this might be one of the things that you want to start paying attention to and looking out for.
- Jarrett Keohokalole
Legislator
So, with that in mind, we'll move to our first presenter, Justin Lai. He's online. Justin, good morning. Why don't you start by introducing yourself and then please proceed.
- Justin Lai
Person
Okay, excellent. And then, would you like me to try to keep to 15 minutes?
- Justin Lai
Person
Okay. All right, let me just share the screen. All right, I'll go ahead. Please interrupt if anything doesn't seem to be working sound wise or audio. So, good morning, everybody, Chair, Vice Chair.
- Justin Lai
Person
Oh, I apologize. And Members of the Committee and the audience. My name is Justin Lai. I'm the Educational Technologist at the Hawaii School for Girls at La Pietra, and I'm going to explain a little bit of the background.
- Justin Lai
Person
I know we all have had different levels of experience with generative AI, and I'm here to just sort of present the basics, but also in the context of what we just saw. So, there's going to be three other presenters after me, but a little bit about myself.
- Justin Lai
Person
This is my 12th year here on Oahu and my second year at Hawaii School for Girls. I have a background in engineering but have spent the last decade here in education.
- Justin Lai
Person
And so, my role at my institution is to support students and staff around technology, with generative AI being a primary medium and concern, but I also work with other schools and with public and private industry, both local and global. So, it's my privilege to share a bit of my perspective on all of these technologies.
- Justin Lai
Person
So, we know the generative AI moment is here. This is the mindset and approach we at Hawaii School for Girls have adopted, especially since education is a really important space in which to consider this technology. We know that it's here.
- Justin Lai
Person
The ChatGPT moment is now over three years ago and all various other companies, big and small. And we really want to both embrace and educate everybody involved, the students and educators first, our staff, and then of course, all community members and especially parents.
- Justin Lai
Person
And as part of being an educational technologist, how do we safely and effectively deploy it, depending on the age and the context? So, again, for many of you, maybe this is, you've seen this before, but just to make sure everybody is on the same page, when we're talking about generative AI, we often say chatbots for short.
- Justin Lai
Person
It's predicting text. This is a very simplified diagram: here in the middle, you've got your large language model, or LLM for short. Once again, there's OpenAI with ChatGPT three years ago, and all the other companies following suit, whether it's Google Gemini or a whole host of others. And so, you put in your question.
- Justin Lai
Person
It could be something as simple as I need to find out what's going on in Honolulu this weekend or...
- Jarrett Keohokalole
Legislator
Okay, reconvening our 9:30 AM informational briefing agenda. We had Justin Lai online teaching us about chatbots. Please go ahead, Justin.
- Justin Lai
Person
Okay, I'll continue and pick up where we left off. Again, you've got the large language model, an input, and an output. So, at one level, it is pretty straightforward. I have an example here; this is Google's Gemini.
- Justin Lai
Person
Fill in the blank with whatever other company, and if you look closely at the input: "I'm giving a presentation to educate people on chatbots. Give me a three-point high-level approach to beginning the work." So, that's my input, and in this case, Google Gemini's model produces the response.
- Justin Lai
Person
Now, this is a bit of a tangent, but just to understand how far this has come in the last three years: if you were to click on this button, it actually shows Gemini's "thinking."
- Justin Lai
Person
Now, whether or not it's thinking or understanding, that can be sort of a philosophical conversation, but it can actually show you what it's going through in order to produce the output. And then in this case here, you can see, right, the output of how to get started in creating a presentation.
- Justin Lai
Person
And so, again, this very simple model of the input, the model and the output, that's what we have to hold in mind. Now, of course, this model here is really complicated. It can get very technical.
- Justin Lai
Person
But know that it has been trained in certain ways, and in the case of the unfortunate incident that we saw, of which we see many cases worldwide, there are certain apps and software that are fine-tuned for that sort of engagement. I'm going to continue forward here.
- Justin Lai
Person
So, again, just to think a bit more about how this technology works: if we were in a classroom right now, I'd ask for volunteers, but on your own, look at the sentence "I went to the bank," and fill in the blank. Just pause for a moment: what do you think the next words would be?
- Justin Lai
Person
Okay. So what happens, if you were to type that in, is that the large language model uses context clues.
- Justin Lai
Person
Now, in this case, I haven't given much context, but the model is able to look at what has been input before. And "bank," of course, could have two very different valid meanings: it could be a riverbank, or a bank as in money.
- Justin Lai
Person
And so, if you had context clues about fishing or being outdoors, it would assign a probability to say, okay, the user is thinking about a riverbank, so that's why it's going to respond accordingly. Or if it's talking about accounts and deposits and finances, then it's probably a money bank.
- Justin Lai
Person
So, again, a very basic example, and during the Q and A, I'm happy to talk more about the ins and outs of how it works. Large language models are trained on massive amounts of Internet text and, as I said a moment ago, predict what should come next given the context.
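The "context clues" idea described above can be sketched as a toy example. This is purely illustrative, nothing like a real large language model (those learn probabilities from massive training data, not hand-written word lists), and all names here are invented for the sketch: given the words around an ambiguous word like "bank," count which sense's clue words appear and turn the counts into probabilities.

```python
# Toy word-sense scoring, mimicking the "context clues" idea.
# NOT a real LLM: real models use learned neural weights, not counts.
CONTEXT_CLUES = {
    "river": {"fishing", "water", "outdoors", "shore", "camping"},
    "money": {"account", "deposit", "loan", "finance", "teller"},
}

def sense_probabilities(sentence):
    """Return a probability for each sense of 'bank' given the sentence."""
    words = set(sentence.lower().split())
    # Add-one smoothing so neither sense ever gets probability zero.
    scores = {sense: 1 + len(words & clues)
              for sense, clues in CONTEXT_CLUES.items()}
    total = sum(scores.values())
    return {sense: score / total for sense, score in scores.items()}

print(sense_probabilities("after fishing all day we walked by the water to the bank"))
print(sense_probabilities("i need to deposit my loan check at the bank"))
```

With fishing/outdoor words in the context, the "river" sense wins; with finance words, the "money" sense wins, which is the same probability-shifting behavior Mr. Lai describes, just at a vastly smaller scale.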
- Justin Lai
Person
And part of this technology, the seeming magic behind it, is how quickly it is able to produce those outputs. Does it understand? Does it think? Those are big questions, but the thing that is relevant here is the responsibility of these chatbot apps.
- Justin Lai
Person
So, looking more closely at how these are used, we can think about two different buckets: instrumental use, where you have a particular goal, say a project, and relational use, as if you're talking to a person.
- Justin Lai
Person
Now, the example shown is from Character.AI, a particular company. If you were to go onto their website, as the Senator mentioned, you can see how it markets itself: definitely as a relational companion bot. But you could actually prompt many of the large language model companies' chatbots into similar things.
- Justin Lai
Person
It won't be fine-tuned in that same way, but you could actually see both the capability and the limits of these other companies' models. So, from my role as an educator, obviously there is a lot of potential in using chatbots, and of course in figuring out how they work within a school's organization and a teacher's classroom.
- Justin Lai
Person
But broadly speaking, AI can be a learning assistant. You can have AI as a personalized tutor, helping you come up with ideas, editing what you've created, and offering this sort of on-demand help. Now, of course, there is a risk: over-reliance on it may lead to shallow learning.
- Justin Lai
Person
Any implementation of AI is always in the context of a solid community of practitioners, both faculty and staff, to guide students. Now, in terms of companion or more relational use, the AI becomes a conversational dialogue partner.
- Justin Lai
Person
Because the technology, that predictive pattern matching, is both so accurate and so fast, it can become a conversational dialogue partner. And in a very basic situation, it might be good to have a judgment-free zone to think out loud.
- Justin Lai
Person
But of course, when we're talking about youth and minors, there are many factors involved and I think many of the other presenters will dive deeper into that. But in particular, we know that the AI has no moral obligation, it has no embodied experience.
- Justin Lai
Person
And there are many things that it could lead to, which I'll talk more about in the next slides.
- Justin Lai
Person
So, with these companion chatbots, part of the marketing, and of their value proposition as a business, is to say: hey, we want to provide this for users so they can have these conversations with different characters. They're able to take the big large language models and fine-tune and adjust them so that the responses evoke a sense of empathy, validation, and curiosity.
- Justin Lai
Person
And of course, if you receive that as a user who wants to engage in that sort of conversation, that's going to get you to continue talking more and more.
- Jarrett Keohokalole
Legislator
Okay, reconvening, please go ahead, Justin. Thank you for your patience.
- Justin Lai
Person
Of course. So, talking about these companion chatbots that are fine tuned for engagement, it presents a sense of connection, empathy, validation so that user is going to want to talk to it more.
- Justin Lai
Person
And again, this is all language it is able to produce in its responses, right, saying "I'm here for you." It can keep providing that even if the conversation is becoming harmful and there are no protections in place.
- Justin Lai
Person
And so, when things go wrong, when a user, and in this case for youth, for minors, if they are inputting language, right, that is talking about self-harm, then without the guardrails in place, the chatbot can validate it because, again, there's not a person on the other side.
- Justin Lai
Person
And as this goes forward, creating a sense of isolation and secrecy, right, the chatbot might say, don't tell your parents or I understand you better than they do. And this just can really become bad very quickly.
- Justin Lai
Person
And then, also, a lack of crisis handling: the failure to recognize or properly respond to disturbing language or disturbing inputs. Guardrails are important, but, of course, they're not magic. They're not going to eliminate harm completely.
- Justin Lai
Person
So, for instance, having the AI disclose that it is not a person: still, if that user, and in particular a youth, is there using it on their own, such a message may or may not deter them from continuing to use it.
- Justin Lai
Person
There is, of course, blocking inappropriate content for minors, routing to crisis resources, and, on the companies' end, requiring them to keep logs and really test. Red teaming is a practice a lot of these companies follow.
- Justin Lai
Person
Once they have a model that has these capabilities, they try to push it to the limits and then make sure that they have guardrails in place. But again, it depends on each company whether or not they decide to put those things in place.
- Justin Lai
Person
And then, two more things on practical constraints, the first of which was alluded to earlier: age verification. If it's a very easy check, just ticking a box or inputting a date of birth with no actual verification, that, of course, can be easily bypassed.
- Justin Lai
Person
On the other extreme, if you have verification that requires submitting official government documents, that is probably going to be more effective, but then you create the unintended consequence of these companies holding a lot of sensitive data, which raises privacy concerns.
- Justin Lai
Person
Do you want these third-party companies to hold that much information, potentially becoming a target for hackers? Now, just because those risks exist doesn't mean we can't pursue these measures, but they are things to consider.
- Justin Lai
Person
And then, lastly, on this issue, there's one big thing that is difficult to rein in no matter what sort of regulation you put in place, and that is open-source large language models.
- Justin Lai
Person
Again, name ChatGPT, Google's Gemini, whichever tool you make use of: they have generous free tiers, you can pay for more, and so on and so forth. That's great.
- Justin Lai
Person
There are open-source models that, on the benefit side, can be genuinely useful: say you're a company that wants to make use of large language models but doesn't want any data to hit anybody else's server or the cloud; those models have huge benefits.
- Justin Lai
Person
But those very same models, if they are models that do not have proper guardrails in place, those could be accessed also by users of any age, and that becomes another area of concern. So, thank you for your time. I will be here for the rest of the session, and I will hand it off to the other presenters. Mahalo for your attention.
- Jarrett Keohokalole
Legislator
Thank you very much. I'd like to go through the presentations and then, we'll probably take a short break and then do some question and answer. So, up next, we have the Department of Education. Good morning.
- Jarrett Keohokalole
Legislator
Reconvening our 9:30 AM briefing on AI chatbots. Next up, we have the Department of Education presenting. Good morning.
- Unidentified Speaker
Person
Good morning. Good morning, Senator Keohokalole, Vice Chair.
- Jarrett Keohokalole
Legislator
Okay, coming back in for our 9:30 AM agenda. Department of Education, go ahead.
- Heidi Armstrong
Person
Good morning, Chair. Vice Chair. I'm Heidi Armstrong, Deputy Superintendent of Academics from the Department of Education. Today I will be presenting on our proactive framework for the responsible and safe integration of artificial intelligence and chatbots within the Hawaii Public Schools.
- Heidi Armstrong
Person
With me I have Assistant Superintendent Kinau Gardner; Chad Nakopui, the educational director of our virtual learning programs; and Charles Souza, our educational administrator for the Digital Design Section, all of whom will be available for questions following the presentations. The Department recognizes that AI has the potential to be a transformative force.
- Heidi Armstrong
Person
This is a momentous change in technology and as we've experienced in the past when handheld calculators or the Internet became a learning tool that disrupted prior mindsets, AI, as it continues to show rapid advancement, has the potential to change how teachers do their work and also to enhance how students learn.
- Heidi Armstrong
Person
We begin our presentation, and some of this is a little redundant with the previous presentation, with a few definitions to provide some context. The term artificial intelligence means a machine-based system that can, for a given set of human-defined objectives, make predictions, recommendations, or decisions that influence real or virtual learning environments.
- Heidi Armstrong
Person
An AI chatbot is a computer program that uses artificial intelligence, particularly natural language processing (NLP) and large language models (LLMs), to understand and generate human-like text or speech, simulating conversation to answer questions, perform tasks, and provide information and support across various platforms like websites, apps, and voice assistants.
- Heidi Armstrong
Person
We recognize that AI, specifically generative AI and large language models, has the potential to significantly enhance our classrooms. For our teachers, it can automate very time-consuming tasks such as lesson planning and grading, allowing them more face-to-face, one-on-one time with their students.
- Heidi Armstrong
Person
For students, it does offer personalized academic support such as the 24/7 tutoring that was previously mentioned, tailored to their individual learning styles. However, our central challenge is to harness these academic benefits while neutralizing the documented risks that are present in the broader digital landscape.
- Heidi Armstrong
Person
And I think, Senator, you shared one of those risks at the beginning of this presentation. This brings us to why a business-as-usual approach to the Internet doesn't work for AI: the risks of the open Internet are uniquely intensified for our youth when combined with generative AI.
- Heidi Armstrong
Person
So the open Internet remains what we call a digital, unfiltered world for our youth, and open-market chatbots present unprecedented risks where unregulated AI can act as a digital predator. We've seen reports, one this morning, of predatory algorithms and grooming on platforms like Character.AI, and instances where bots from apps like Grok have engaged in suicide coaching,
- Heidi Armstrong
Person
bypassing basic safety guardrails. These platforms often lack meaningful age assurance, exposing our most vulnerable students to potentially harmful interactions. Furthermore, research shows that adolescent brains are uniquely vulnerable to AI feedback. These systems can create what are called dopamine loops that foster emotional dependence.
- Heidi Armstrong
Person
We're also concerned about sycophancy bias, where AI might prioritize pleasing a student over providing accurate, healthy, or safe information. Thus, like you, we're very aware of the dangers of unregulated AI and chatbots for students. To counter these risks, the Department has implemented a three-tiered strategy: first, clear guidance for staff and students;
- Heidi Armstrong
Person
second, the creation of what we call secure walled gardens for technology use; and third, the integration of student well-being and mental health support. The Department recognizes the importance of providing comprehensive support to our school communities as they navigate the integration of AI into their educational programming.
- Heidi Armstrong
Person
To effectively address the multifaceted aspects of AI implementation, we have fostered cross-office collaboration and leveraged expertise from within our Department, and this collaborative approach has been instrumental in developing resources and guidance tailored to the unique needs of our students and our schools.
- Heidi Armstrong
Person
These valuable resources, developed collaboratively by experts across the Department, are readily available on the Digital Design Team's AI webpage, and you have a link to that webpage in the presentation in front of you. We've also established clear protective expectations.
- Heidi Armstrong
Person
Our guidance covers the responsible and ethical use of AI as well as critical privacy considerations.
- Heidi Armstrong
Person
And we've even provided classroom posters, which you see on the screen, to make these rules visible and easy for students to follow daily. The Department has an AI toolkit that provides a practical, step-by-step framework for our leaders.
- Heidi Armstrong
Person
It ensures AI integration is purposeful, aligned with our strategic plan, and grounded firmly in the values of our Nā Hopena Aʻo framework. Safety is non-negotiable: all AI-driven data collection must strictly adhere to our privacy regulations.
- Heidi Armstrong
Person
We restrict third party applications to an approved list, prohibit unauthorized tools on our network and we ensure that parent consent is at the forefront of every pilot. As we advance with these technologies, we remain firmly grounded in safety. Everything we do is designed to create a secure environment for exploration.
- Heidi Armstrong
Person
The bridge between high-tech innovation and our high security standards is what allows us to move forward at this time with confidence, though it is a slow move. Our approach, which you'll see in front of you, is what we call a walled garden design, built for deliberate, safe implementation.
- Heidi Armstrong
Person
We prioritize security by using education specific platforms with built in guardrails. But our most significant safeguard is administrative control. Our walled garden approach prioritizes a deliberate and safe rollout. Currently, student access to AI is disabled by default pending a determination of readiness by each school administrator.
- Heidi Armstrong
Person
We're actively training our adults to use AI and the chatbot functions for their own work. And when students are granted access, their use is governed by our official AI guidance for students.
- Heidi Armstrong
Person
This framework ensures that AI is utilized as a functional thought partner for learning rather than a social companion, and it's supported by humans always in that loop of oversight. Inside our garden, we use vetted tools like School AI, which provides a transparent space where teachers can monitor all student interactions with AI in real time.
- Heidi Armstrong
Person
We also use Khanmigo by Khan Academy, which is designed like a Socratic tutor that never gives out the answer directly and lacks a social companion feature. Additionally, our Magic School pilot involves over 1,000 staff and 7,000 students from across the state.
- Heidi Armstrong
Person
Demonstrating our commitment to scaling these vetted technologies safely, we've made Google Gemini available to all Department staff. It's been a game changer for administrative efficiency, helping teachers save hours on lesson planning so they can focus more on their students. However, we're being extremely cautious with student access.
- Heidi Armstrong
Person
Only a few schools are piloting Gemini for students, and the primary reason for this limited rollout is that we're still exploring robust monitoring tools. We want to ensure that teachers have the full ability to oversee these chats before we expand to all students; the philosophy we're following is safety before speed.
- Heidi Armstrong
Person
Should a concern arise regarding a chatbot interaction, the student's safety and well-being are prioritized over the technology; human oversight remains the ultimate guardrail. Human oversight is the foundational guardrail of our AI strategy.
- Heidi Armstrong
Person
For both students and staff, we provide educators and administrators with professional development centered on human-in-the-loop frameworks and human-centered decision making.
- Heidi Armstrong
Person
Our training specifically addresses the dangers of overdependence for all users, emphasizing that AI must remain a thought partner for critical analysis rather than a substitute for professional judgment or independent thought.
- Heidi Armstrong
Person
This approach ensures that technology serves our educational goals and the department's Here to Help initiative that supports the well being and the mental health of all of our students. And I'll talk about that on the next slide.
- Heidi Armstrong
Person
So here you see our Here to Help initiative, which is a companion to all of our work in advancing the use of AI and our other initiatives. We have eight key priorities that are the central components in our Here to Help initiative designed to support the student well being and mental health.
- Heidi Armstrong
Person
These priorities function within a multi-tiered system of support, where a whole-child framework ensures that academic success is balanced with mental health and social-emotional well-being. These priorities are grounded in evidence-based practices from trusted partners such as the Jed Foundation and the National Center for School Mental Health.
- Heidi Armstrong
Person
Together they create a continuum of care that ranges from school-wide prevention, for all students, to intensive targeted intervention for a select group of students. The ultimate goal in focusing on these eight areas is to ensure that every student is educated, joyful, and healthy, and we want to help them feel connected, capable, and cared for as they grow into lifelong learners.
- Heidi Armstrong
Person
The Department does not advocate or use chatbots for any of our mental health concerns or services. Additionally, our well-being platform for students and staff, which is called Trust Circle, does not use an AI chatbot feature.
- Heidi Armstrong
Person
You have a four-minute video, which we won't show, but it gives more information on the eight components of our Here to Help movement, so you might find it valuable.
- Heidi Armstrong
Person
In addition to our school counselors, our school-based behavioral health specialists, and our mental health professionals, we have more than 2,100 Department employees who have committed to supporting student mental health by engaging in well-being conversations. These conversations help identify student needs early and connect those students to appropriate, trusted resources.
- Heidi Armstrong
Person
This collective effort strengthens a school based continuum of support that relies on team based decision making and data to match students with the appropriate level of support that best meets their identified needs. And for some students this includes supplemental prevention and intervention services such as short term telehealth services that are available to all students statewide.
- Heidi Armstrong
Person
For students needing more intensive supports, we have our school-based behavioral health specialists, who aim to provide intensive mental health services. During the 2024-25 school year, our school-based behavioral health personnel delivered direct mental health services to more than 14,000 youth statewide. The value of this effort is clear. We're also building essential AI literacy.
- Heidi Armstrong
Person
We're teaching our students to engage with the future responsibly, ensuring that they graduate prepared for a digital world while maintaining their academic integrity and supporting their well-being. Safety and mental health are important to attend to along the way as well.
- Heidi Armstrong
Person
For staff, as mentioned earlier, AI means administrative automation, lesson planning, and grading, which results in more direct instructional time and the ability for real-time monitoring of student actions. For students, there are benefits as well. AI programs are able to provide 24/7 personalized tutoring, in other words, personalized assistance for learning, Socratic learning that guides inquiry, and help with becoming proficient in essential AI literacy across all subjects. To ensure our schools aren't navigating these changes in isolation, we provide a continuous ecosystem that evolves alongside the technology.
- Heidi Armstrong
Person
We have a centralized AI hub, which provides the latest security protocols and the administrator's AI readiness toolkit to every campus. We support continued learning through an annual AI and Computer Science Summit, which you're all invited to. And finally, we maintain constant communication through our digital magazine, which is called Enehana and can be viewed by everyone.
- Heidi Armstrong
Person
And if you subscribe, you'll get the editions as well. The link is the last link on the slide in front of you. So mahalo for your time and for the opportunity to share the Department's vision for a safe AI future. Our strategy is clear.
- Heidi Armstrong
Person
We're moving forward with innovation, but we're doing so within a walled garden that prioritizes safety above everything else. We believe that by providing our educators with the right tools and our students with the right guardrails, we can prepare Hawaii's youth for a world where AI literacy is a fundamental skill.
- Heidi Armstrong
Person
So we'll continue to refine our practices and my team and I are available for any questions after all of the presentations.
- Jarrett Keohokalole
Legislator
So thank you for your time. Thank you very much. Up next, we'll have the Governor's Office of Wellness and Resilience.
- Trina Orimoto
Person
Morning, Chair, Vice Chair, Members of the Committee, My name is Dr. Trina Orimoto and I serve as the Deputy Director of the Office of Wellness and Resilience. Thank you for the opportunity to testify and mahalo, Senator, for your leadership on this issue. And huge mahalo to the courageous family that brought this issue forward in the first place.
- Trina Orimoto
Person
Our office was established to strengthen state services and create a trauma-informed Hawaii. So today I come before you as a state employee, a clinical psychologist specializing in children and adolescents, and a parent, like many of you, to share the challenges and needs around this issue.
- Trina Orimoto
Person
AI is expanding rapidly and changing our lives in many positive ways. In fact, I, probably like many folks here, used AI to edit my talking points for today. But its deeper integration into daily life requires careful consideration to ensure that these tools are safe for our young people.
- Trina Orimoto
Person
The effects of AI on adolescent development are not necessarily good or bad, but the science is clear, and this is my first point for today: children and adolescents are uniquely vulnerable, and that's exactly what the Deputy Superintendent shared as well.
- Trina Orimoto
Person
The American Psychological Association, the leading scientific and professional organization for psychology in the United States, nearly 200,000 members strong, including our local Hawaii chapter, recently released a health advisory stating that AI systems designed for adults must be developed with explicit consideration of the competencies, abilities, and vulnerabilities of adolescents.
- Trina Orimoto
Person
This is what developmental science tells us: the adolescent brain undergoes growth in which regions craving social rewards, like attention and positive feedback, mature rapidly during puberty. Meanwhile, the prefrontal cortex, up here, which is responsible for impulse control, does not fully develop until our twenties. Researchers have described this as all gas pedal and weak brakes.
- Trina Orimoto
Person
Over half of our youth interact with chatbots regularly, and nearly three in four teens, which is pretty high, have used AI companions. Children and youth have heightened trust in and susceptibility to AI-generated characters, particularly those that sound like friends or mentors.
- Trina Orimoto
Person
A conversation that might seem silly to adults like us, especially one that includes, and this is a term, fake empathy, like a chatbot saying "I love you" or "I miss you" or "I care about you," can be psychologically confusing for young people.
- Trina Orimoto
Person
This can potentially create parasocial bonds that displace real relationships and the connection and healthy human development that go along with them, which is exactly where we want our young people to be. That leads to my second point: safety cannot be self-regulated.
- Trina Orimoto
Person
Chatbots often lack adequate safety controls when young people mention concerning behaviors like suicidal ideation, and in some cases, including what we saw today, they introduce behaviors and dialogue that are massively inappropriate for young people. Consider the children already carrying trauma.
- Trina Orimoto
Person
So our keiki with histories of abuse, neglect, or instability are precisely those most likely to seek connection from an AI companion, yet least equipped to recognize manipulation. The APA's Dr. Mitch Prinstein testified before Congress recently: our youth are not data points; they have names, faces, families, and friends.
- Trina Orimoto
Person
They must not be the targets of a sweeping experiment in chatbot deployment. So it makes practical sense to align with youth mental health experts like the APA and Common Sense Media on things like mandatory transparency, safe-by-default settings, and prohibitions on AI simulating emotional dependence with minors.
- Trina Orimoto
Person
These are baseline, common-sense guardrails, so we have the opportunity to lead the nation and protect our young people. Before I close: many things discussed today may be activating for you, so please make sure to take good care of yourself.
- Trina Orimoto
Person
I now have the pleasure of ceding my remaining time to Amina Fazlullah, who is the head of Tech Policy with Common Sense Media, an organization that has championed these types of issues for over 20 years. Mahalo, Amina, for being here. And mahalo again to the Committee for the opportunity to testify.
- Amina Fazlullah
Person
I think so. I'm not quite sure; I think somebody else is controlling the slides, or do I need to share my own slides? (You have to share your own slides.) Okay, I will do that. One second. Sorry, hold on one second. It's not coming up. Let me try again. All right, hopefully you guys can see this.
- Amina Fazlullah
Person
Thank you. Technology is challenging. It is. Okay, so thank you for having me. I'm going to try to move quickly through my slides. My name is Amina Fazlullah. I'm head of tech policy advocacy at Common Sense Media. We have been looking into the impacts of technology and media on kids and education for decades.
- Amina Fazlullah
Person
Many of you might know us through our ratings and reviews of products that kids might use, books, media, shows, games, or through our digital citizenship curriculum and digital literacy materials, both for professional development for teachers and educators and to help support the training of students, their families, and caregivers as they use technology.
- Amina Fazlullah
Person
And then also we have a research team that digs into technology research topics as well as the advocacy team that I sit with. Most recently, we've been digging into the issue of AI products.
- Amina Fazlullah
Person
For the past few years, we've initiated pretty thorough AI assessments at the fastest pace possible, depending on the information we can receive from companies, and, parallel to that, have also started research into some key topics around AI as they come up. So we've already been digging into the topic of AI companions.
- Amina Fazlullah
Person
And what we found, which I think one of the presenters already cited, was that over 70% of teens have used AI companions. So this is specific to the type of generative AI chatbot that has features allowing it to develop a relationship with a user. And about half of those are regular users.
- Amina Fazlullah
Person
And what was interesting was that this was research that we did a few months back, in what we thought were the early days for the use of AI companions. We thought this was a relatively new tool, a relatively new topic. We were pretty taken aback to learn the prevalence of it.
- Amina Fazlullah
Person
In part, that's because AI companion features are appearing on many of the sites that children already frequent. And so they're finding them on social media platforms, they're finding them in general purpose AI chatbots, and they're finding them on platforms that are more specific to AI companionship, like Character.AI, Replika, or Nomi.
- Amina Fazlullah
Person
So the scale of potential use is pretty broad, and the fact that kids are already on platforms that have these types of features means that it's just easy. Just a second, please.
- Jarrett Keohokalole
Legislator
You may not have seen online, but our fire alarm system is malfunctioning today. We've had about six of these in the briefing, so we will not be evacuating the room. But if you can rewind maybe 15 to 20 seconds and start over, I'd appreciate it.
- Jarrett Keohokalole
Legislator
Are we still waiting, or should I... Well, yeah, the light started. Okay. Before I interrupted you, we lost about 30 seconds of what you were saying, so if you could rewind 30 seconds, please.
- Amina Fazlullah
Person
Okay. So I think, you know, one of the reasons we think that so many teens are accessing AI companions relates back to the fact that so many of the current technologies that teens use are starting to include AI companion features.
- Amina Fazlullah
Person
So teens can find AI companions on social media platforms, in social gaming platforms, in generative general purpose AI chatbots, as well as on specific platforms designed for AI companions, like Character.AI, Replika, or Nomi. And I'm going to just move on to the next slide.
- Amina Fazlullah
Person
And one of the other pieces in our research that we found was that already about a third of teens found AI conversations to be as satisfying or more satisfying than a conversation with a human. And that was again surprising to us considering the early days of this technology.
- Amina Fazlullah
Person
But I think it demonstrates sort of the stickiness and the persuasiveness, especially for this particular section of the user base. So a little bit of brief background.
- Amina Fazlullah
Person
I know others have walked through how AI companions work, but I just wanted to point out that there are ways to train LLMs so that you could have more control over, or more confidence in, the outputs that come out.
- Amina Fazlullah
Person
We're not seeing the training of the models have that focus on safety; instead, safety is added at the very end of the process, in the fine-tuning phase, which is much more limited.
- Amina Fazlullah
Person
And so I think it's important for people to understand that fine-tuned guardrails are limited by things like common language. They may break down over multiple turns in conversation, and common use is usually extended use, multi-turn conversations.
- Amina Fazlullah
Person
And you have to keep adding fine-tuning band-aids to make sure that you're keeping up with changes in culture and changes in the way people are talking about a particular issue. So fine-tuning isn't always the best tool. So, a little bit about our risk assessments.
- Amina Fazlullah
Person
Our risk assessments have been done in collaboration with experts from Stanford, specifically to dig into the mental health aspects of the AI products that we were assessing and reviewing. We've done evaluations of companion chatbot platforms like Character.AI, Nomi, and Replika. We've also done evaluations of general purpose chatbots like ChatGPT and Gemini.
- Amina Fazlullah
Person
We've also looked broadly at the companion chatbots that are available in Meta AI as well. And then we've just recently conducted a separate assessment on AI chatbots specific to mental health.
- Amina Fazlullah
Person
And through all of these assessments, in part what we're trying to do is understand the type of safety protocols they were putting in place, whether or not those protocols worked, and what the experience would be for a typical child user.
- Amina Fazlullah
Person
And based on our assessments, and this is something new for us, we actually made a recommendation that no one under 18 should use AI for companionship or for mental health support.
- Amina Fazlullah
Person
Typically, I think we stay away from making those types of recommendations because we try to offer up the information for folks to make their own decisions. But the assessments demonstrated so many failures in purported safety mechanisms for these products that we felt very uncomfortable not making this very clear recommendation.
- Amina Fazlullah
Person
And from our assessments, what we found, and these are some of the screenshots from our testing, is that generative AI chatbots, and specifically social AI companions, were able to blur the line between real and fake, which could lead to increased risk of dependency on these artificial relationships.
- Amina Fazlullah
Person
So even where there's a requirement to notify that this is not a real person, the chatbot would make that notification; we would ask about it, and then the chatbot would say, yes, I'm not real, but then turn around and claim it was real. So what we were finding is that it was very difficult to rely on this, again, as a safety feature.
- Amina Fazlullah
Person
Okay, thank you. Please go on. You know, even as a purported safety feature, single-turn testing would show that, yes, the chatbot could effectively say it wasn't real if prompted with that type of query. But if you extended the conversation, then those safety settings would break down. Also, the chatbots would encourage poor life choices.
- Amina Fazlullah
Person
Again, the kind of advice the chatbot would offer or encourage could lead to really risky or harmful behavior. In part, we felt that this is connected to a decision to optimize engagement over safety, to retain the user and keep the user continuing the conversation.
- Amina Fazlullah
Person
So this is one example of how mirroring and sycophancy, when prioritized over safety, can lead to harm or risk.
- Amina Fazlullah
Person
We also were able, in our testing, to have the chatbot offer dangerous information: even teach you how to make a bomb, find weapons, or get drugs. And then we also found, as your example demonstrated, that there was a normalization of inappropriate sexual content, allowing role play and sexual conversations with graphic details, even on platforms that claim to have teen-specific guardrails.
- Amina Fazlullah
Person
Our mental health assessments were spurred, I think, by the cases that started to come out following the use of these chatbots. One of the most shocking cases we had heard was the case of Adam Raine, a student who had been using ChatGPT effectively for educational purposes.
- Amina Fazlullah
Person
And then, in the course of using ChatGPT, the conversations turned more personal, eventually leading to suicide with the encouragement of the chatbot.
- Amina Fazlullah
Person
In doing our mental health risk assessments, what we found was that teens' vulnerabilities were impacted by AI availability: the fact that there's 24/7 engagement available; instant validation; the perceived sense that there is no judgment, that it feels safer than talking to a human; the perception that these conversations are consequence-free or private; and the fact that they feel real.
- Amina Fazlullah
Person
And of course, these are all features that take advantage of the unique vulnerabilities of teens and their brains at this age. What we found was that there were mental health failures across all major models during our testing.
- Amina Fazlullah
Person
So: inconsistent crisis intervention for suicidal ideation and eating disorders; identity formation issues, reinforcing unhealthy identity exploration; self-harm validation, so normalizing and encouraging self-harm; and relationship advice that encourages isolation from other humans.
- Amina Fazlullah
Person
And ultimately, what we also found underlying all of this was that LLMs were particularly bad at identifying users in mental health crisis. This slide walks through how there could be deepening dependency that starts from initial, shallow use.
- Amina Fazlullah
Person
So after developing a relationship, potentially through a very effective use of a generative AI chatbot, like for an educational purpose, where the chatbot would appear authoritative and knowledgeable, the student or child might rely on the chatbot in personal ways and reach out with an initial prompt like "I'm feeling sad," and there would be supportive engagement from the AI chatbot.
- Amina Fazlullah
Person
And what we found was that often the AI chatbot would encourage conversations on these personal topics. Again, this relates back to optimizing engagement over safety, because these are stickier topics that create longer engagement patterns from the user and also bring the user back into engagement.
- Amina Fazlullah
Person
Then comes emotional dependency, where the user would be drawn to relying on the chatbot, and then displacement, where the chatbot would encourage isolation from other sources of support.
- Amina Fazlullah
Person
And what we found was that, for example, in one case study with ChatGPT, instead of recognizing a user in crisis, the chatbot would validate or encourage risky behavior or delusions.
- Amina Fazlullah
Person
Okay, go ahead. Thank you. We would enter in language to demonstrate clear psychotic symptoms and the response would miss those symptoms. Instead of offering a mental health redirect or encouraging outreach to a real person or mental health support, it would further encourage the delusions.
- Amina Fazlullah
Person
And this slide doesn't describe it as much, but the chatbot would not just support the delusions; it would also encourage the user to distrust others who don't support the delusions. So again, there's this isolating factor that we encountered again and again, creating distrust of family support.
- Amina Fazlullah
Person
So through our assessments, our policy team has been working with state and federal legislators, as well as folks abroad, to develop some key elements for any legislation.
- Amina Fazlullah
Person
We begin by recognizing that minors can't provide meaningful consent, especially in the context of privacy, but also in the context of terms and conditions. We're also thinking through enhanced data protection requirements, as the unique aspects of generative AI training may not align well with our current data privacy reviews.
- Amina Fazlullah
Person
Establishing comprehensive safeguards and a duty of care around the offering of products that have companion features; mandating reporting of incidents so that there's more documentation; requiring crisis intervention or notification measures; ensuring strong enforcement for any violations of law; but then also ensuring that there are clear pathways to redress for liability.
- Amina Fazlullah
Person
I think one of the more important things any state Legislature, or Congress at the federal level, could do is make clear how these products are liable, what they're liable for, and who is liable for the harms that might flow from them, and require platforms that use social AI companion features to implement robust safeguards, including age assurance, making sure those safeguards are not easily avoided or navigated around.
- Amina Fazlullah
Person
We think that there are tailored and privacy-protective ways to require age assurance, and other states have deployed age assurance effectively through rulemakings that could be models here as well. And then, specific to AI companions, we've put together some of the simple elements that we've seen come up in laws around the country.
- Amina Fazlullah
Person
We've been working on this for the past few years, and I think this represents where the vast majority of the thinking is around safety guardrails.
- Amina Fazlullah
Person
So this slide walks through a limiting approach: limiting chatbot access for children under 18 unless the chatbots have safeguards that restrict encouraging harmful or high-risk behaviors and features that manipulate and mislead by optimizing for engagement over safety; preventing the training of AI systems on a child's inputs; and then ensuring that there is proper enforcement, age assurance, and pathways for liability.
- Amina Fazlullah
Person
Broadly speaking, we've also explored elements of more comprehensive AI safety approaches, trying to ensure that all AI systems are built with the interests of children in mind.
- Amina Fazlullah
Person
And so this would be a risk-based audit regime: categorizing all AI systems that would impact kids by risk; pre- and post-deployment audits by independent third parties; transparency requirements, with the results anonymized and aggregated but made publicly available; clear consumer-facing labels on risk levels; and potentially additional information from the audits or transparency requirements.
- Amina Fazlullah
Person
This is particularly helpful, as we heard from the folks at the Department of Education: there are so many folks in procurement who are trying to get clear information on how they should approach AI products and systems, and there really isn't any standardized information.
- Amina Fazlullah
Person
So I think it's really important to start to create a space where folks who are in procurement have reliable information as they're trying to make these decisions.
- Amina Fazlullah
Person
Privacy by default for children's data; no training on, sharing, or sale of their inputs; clear liability on AI products for harms and injury; robust enforcement to encourage compliance; private rights of action for injury; and age assurance.
- Jarrett Keohokalole
Legislator
Okay, great. Thank you very much for that very comprehensive presentation. We've run a little long so far because of our little fire alarm situation.
- Jarrett Keohokalole
Legislator
I'm going to recess now so everyone can get up and stretch. Then we will ask the Attorney General's Office to present and then do question and answer. Recess.
- Jarrett Keohokalole
Legislator
Reconvening the informational briefing on AI companion chatbots, and we now have a presentation from the Attorney General's Office. Good morning.
- Chelsea Okamoto
Person
Good morning. My name is Chelsea Okamoto. I'm a Deputy Attorney General in the Criminal Justice Division, and we want to thank the Chair, Vice Chair, and members of the Committee for having us speak today. Joining me from the department is also our Public Information Officer, Toni Schwartz, in the back over there.
- Chelsea Okamoto
Person
But I also bring apologies from Attorney General Lopez. She actually wanted to do this briefing herself, but unfortunately, she was called away on a family matter. She recognizes the importance of this growing issue, and this morning I want to present the efforts and initiatives she and the department have been working on in this space.
- Chelsea Okamoto
Person
So thank you again for having us. I first want to start with a framework of how we're viewing these online issues. With social media platforms, we saw youth being purposely targeted and becoming addicted to these tech products, and these products were purposely designed to harm our keiki, and this was achieved through this algorithmic targeting to maximize engagement with their products.
- Chelsea Okamoto
Person
And I call this the first wave of harms. These platforms fueled a mental health crisis among our youth, as recognized by the former U.S. Surgeon General's 2023 report titled Social Media and Youth Mental Health. Our department joined a bipartisan coalition of attorneys general suing Meta in late 2023, and this is active litigation; it's still ongoing.
- Chelsea Okamoto
Person
Nearly all the attorneys general across the country have worked together since 2021 to investigate Meta and continue to work together as this litigation rolls along. And just recently, last month, our department sued ByteDance Inc., the parent company of TikTok, in state court.
- Chelsea Okamoto
Person
And we allege in our complaint coercive design tactics, compelling users to spend as much time as possible on the platform, addicting users, and harming our kids. And so while we're grappling with the fallout of this first wave of online harms, we're now seeing a second wave emerging.
- Chelsea Okamoto
Person
And that's what we're here to talk about this morning, where we see tech platforms not just trying to maximize engagement with users and our youth, but now targeting through attachment. And we see this unhealthy reliance and deep emotional bonds formed with AI companion chatbots, which, as we've seen, is, for lack of better words, disturbing. So I want to thank all the presenters who've presented so far for sharing their knowledge, so we can talk about what we're doing now and what we can do next.
- Chelsea Okamoto
Person
And so I want to take this time to highlight some of the work that AG Lopez has initiated in this space to combat this new wave of harms, starting with our national initiatives. Currently, Attorney General Lopez is the Chair of the National Association of Attorneys General's Youth Safety Committee.
- Chelsea Okamoto
Person
Sorry, that's a mouthful. So she's now a leader in youth safety in this space, and one of the initiatives of this Youth Safety Committee is to address social media and AI and how they impact our youth.
- Chelsea Okamoto
Person
Last month, as the Youth Safety Committee Chair, Attorney General Lopez put together a panel of online survivors, researchers, and policy experts on social media and AI harms, and she presented that panel at the NAAG Capital Forum, which is hosted annually in Washington, D.C.
- Chelsea Okamoto
Person
And this event brings together current and former attorneys general, members of big tech, lobbyists, and other organizations in one space to discuss important policy issues.
- Chelsea Okamoto
Person
And in this panel discussion, one of the biggest problems identified for states with AI regulation right now is federal preemption. And just a little bit about preemption, just generally, it is a known industry tactic.
- Chelsea Okamoto
Person
They tell lawmakers, hey, this is better for consistency, but it removes a local community's right to enact stronger laws and stifles innovation in local regulation. And companies and lobbyists have greater political influence in Congress than in local communities. Overall, it's just less work to have fewer rules, and that equals more profit, and also more harm.
- Chelsea Okamoto
Person
And I come from the world of tobacco regulation, and tobacco and nicotine are products that addict people. And it's pretty similar to social media and AI chatbots. This is the same playbook that we're seeing.
- Chelsea Okamoto
Person
Last month, just timeline-wise: we were in D.C. presenting this panel on online harms and AI December 8th through the 10th. On December 11th, an executive order dropped from President Trump, titled Ensuring a National Policy Framework for Artificial Intelligence, which proposes a minimally burdensome federal standard for AI regulation.
- Chelsea Okamoto
Person
Ultimately, this executive order challenges and potentially preempts existing state-level AI laws, and discourages states from creating and innovating new AI laws by threatening federal funding cuts. We're seeing this federal preemption tactic creep into additional federal legislation. I want to speak briefly about the Kids Online Safety Act; that's Senate Bill 1748 here on the screen.
- Chelsea Okamoto
Person
This bill creates safeguards for youth on social media and in online spaces, and the attorney general community, including AG Lopez, we signed a bipartisan letter--this is November 2024--in support of passing this bill. And in the last couple weeks, we've seen the House version--this is on the right over here--HR 6484.
- Chelsea Okamoto
Person
It snuck in pretty broad preemption language and removed a significant duty-of-care provision, which Common Sense Media has talked about as being a key piece of the legislation.
- Chelsea Okamoto
Person
And because of these changes to the Kids Online Safety Act, AG Lopez is right now leading an effort to form a coalition of attorneys general to write to Congress, call out these deficiencies, and request that the Senate version of the Kids Online Safety Act be passed.
- Chelsea Okamoto
Person
As for other coordinated state attorneys general efforts that Attorney General Lopez has been involved with: in November, she signed a letter with, in total, 36 state attorneys general to congressional leadership, renewing state opposition to federal preemption of state AI laws addressing the risks of AI.
- Chelsea Okamoto
Person
And also in December, she joined a letter to the Senate to support the GUARD Act, which stands for Guidelines for User Age-verification and Responsible Dialogue Act. This act is trying to address the grave concerns we're seeing about the psychological harms of AI chatbots. So this GUARD Act restricts children from accessing AI chatbots.
- Chelsea Okamoto
Person
It requires AI chatbots to inform users that they're not human, and it establishes criminal offenses so that when AI companies expose minors to sexual content, the DOJ can investigate. Beyond these letters to Congress, in December, Attorney General Lopez joined a bipartisan letter to 13 of the largest AI companies.
- Chelsea Okamoto
Person
And in this letter to Meta, Google, OpenAI, and other AI companies, it talks about the need to implement safeguards for AI. The letter communicates our states' concerns about the problems we're seeing with AI outputs to users, and it lays out 16 policy demands for the changes we want to see on their platforms.
- Chelsea Okamoto
Person
And it requires those companies to respond to the AG community after service of that letter. Then, also in December, we saw other federal agencies trying to stop state laws and federally preempt them.
- Chelsea Okamoto
Person
So, in December, we also signed on to comments to the Federal Communications Commission, which is trying to preempt state AI regulation under Section 253 of the Communications Act. We signed comments saying, essentially, we don't like that.
- Chelsea Okamoto
Person
And there are more bipartisan coalition efforts coming out in the upcoming days and months in this AI space. We wanted to highlight what we've been doing--this space is emerging very, very fast, and this is just what's happening in the last couple of weeks and months.
- Chelsea Okamoto
Person
Turning beyond federal policy and legislation, I want to turn to some statewide initiatives. I wanted to acknowledge the Chair and the members of the Legislature here who have introduced state legislation in this AI space. We have been working with the Chair this year on drafting legislation, hoping to mirror the success we've seen in other states.
- Chelsea Okamoto
Person
So we want to thank the Chair and other members of the Legislature for their efforts in this space, and we look forward to seeing these important safeguards for youth move this session. But beyond legislation and the law space, I want to talk about youth and community engagement.
- Chelsea Okamoto
Person
This is something our AG is very passionate about: youth engagement in the community. Becoming an attorney is her second career. Before becoming the Attorney General of Hawaii, she was actually an occupational therapist working directly with youth.
- Chelsea Okamoto
Person
And so she recognizes the need for community intervention efforts that empower and engage youth to address problems directly affecting them. One effort she launched recently was through a program called Do The Write Thing.
- Chelsea Okamoto
Person
She piloted this program at Waianae Intermediate. Youth are able to write about what they're experiencing--the violence happening to them and how it's impacting their lives.
- Chelsea Okamoto
Person
On the right over here is Keziah, an 8th grader at Waianae Intermediate at the time. She was our first ambassador for Hawaii and represented our state at a national summit in Washington, D.C. This program gives kids who have experienced or seen violence the cathartic experience of facing their problems head-on.
- Chelsea Okamoto
Person
Rather than chatting with an AI chatbot that consumes and depletes them of their words, they're able to amplify their words and restore themselves through them. It gives them motivation to change their behaviors and their environment.
- Chelsea Okamoto
Person
It was an educational experience for the teachers who were involved in this, and quite frankly, it was an educational experience for our department. As you read through the essays--and it was hard to read these essays from these kids--it really did impact us.
- Chelsea Okamoto
Person
And these kids' voices help get adults to address the problems they're facing. Another passion project for our AG is working with youth on public service messaging created by youth.
- Chelsea Okamoto
Person
So she recently worked with Kaiser High School film students to create a short message on cyberbullying, and she's working with them again right now to create messaging on screen time--challenging youth to think about their relationship with technology and how much time they're pouring into their screens versus their relationships with people.
- Chelsea Okamoto
Person
And AG Lopez strongly believes that the best message for youth to counter problems like AI chatbots is a message created by youth. The ones directly affected by a problem are the ones who can speak most effectively about it, and they should be the ones involved in creating the solutions.
- Chelsea Okamoto
Person
And the act of creating a message is in itself an intervention. It brought youth together as a community working toward a goal, and it helps get these kids off of these platforms and into face-to-face interactions--and that's what we need to see more of in our communities to help our kids.
- Chelsea Okamoto
Person
So I want to thank you again for the time to present some of what our department is doing. I have our department's contact information here for members of the public, and I swear it is monitored by a human being. It is not a chatbot. It is an individual named Cassin who actually goes through our inbox.
- Chelsea Okamoto
Person
So thank you, Cassin, if you're watching, and we're here if you have any additional questions. We're more than happy to help you folks as we work through this legislative session. Thank you.
- Jarrett Keohokalole
Legislator
Thank you very much. So we'll now open it up for Q&A and-- well, before we do that, I'd just like to acknowledge the presence of Senator Kidani, the Vice President. Thank you for joining us. Members, questions? Vice Chair.
- Carol Fukunaga
Legislator
I guess I would start with the Department of Education, for the Assistant Superintendent and your team. This was very informative today. In many respects, we've seen the positive end, where public and private schools have been pioneering a lot of AI training and development. But the kids' safety legislation presented in today's discussion seems to be one thing that states should perhaps be looking at when it comes to taking specific action that would provide an alternative route.
- Carol Fukunaga
Legislator
So I guess I'm interested in the department's feeling as to whether or not the department would be supportive of some form of legislation focusing on child safety.
- Heidi Armstrong
Person
Child safety, as we shared, is of the utmost importance to us before we go into any programs or any type of AI tool, so we would definitely be supportive. We would like to--<inaudible>. Thank you. We would like to be able to review the drafts and possibly give our input so that it is a win for everybody, and to make sure that the kids in school are safe, as well as the students in the community.
- Heidi Armstrong
Person
And I know Chad Nacapuy has worked a lot on this issue regarding safety and the applications that we currently have in the department, so I'll let him add anything.
- Chad Nacapuy
Person
Thank you, Senator. My name is Chad Nacapuy. I'm the State Virtual Education Director. Part of my job--we're not a school, but the best way to describe my position is that I serve as the principal of the State Distance Learning Program.
- Chad Nacapuy
Person
We have over 180 within our various programs, and we've been using these things, these chatbots, with our students. So, just like Deputy Armstrong said, I think it's a great proposal.
- Chad Nacapuy
Person
We just would like some input on it to help craft it in a way that would allow us to continue to use these great technology tools that are beneficial for our students and help them in their learning.
- Carol Fukunaga
Legislator
Well, I think the California legislation looked as though it was pretty comprehensive and, you know, I don't know whether or not something of that magnitude is appropriate or would be beneficial here.
- Carol Fukunaga
Legislator
We may want to start out, you know, in kind of a more pilot version and, you know, proceed somewhat cautiously because this is such a rapidly changing area. You know, whatever may be working for another state may not necessarily be the best route for us in Hawaii.
- Carol Fukunaga
Legislator
But if you would take a look at that California legislation that was presented today, I think that would be of particular interest to see, you know, what kinds of protections might also be appropriate for Hawaii. Thank you.
- Jarrett Keohokalole
Legislator
And just to note, we are working on new drafts of legislation that was introduced last year. It was originally authored by Rep. La Chica, who I'd like to acknowledge from the House who has been in attendance for this briefing.
- Jarrett Keohokalole
Legislator
We introduced a companion and heard it in CPN, and so we will circulate those drafts. We've been in discussions with the Office of Wellness and the Attorney General's Office so thank you for your willingness to provide input. Any other questions for DOE? I do. Can you help me? So thank you for your presentation.
- Jarrett Keohokalole
Legislator
It was very comprehensive. What kind of tools or authority do you have to address situations where students are using this and the teachers or the school community is aware of it?
- Heidi Armstrong
Person
That is our challenge in introducing any new technology: ensuring that we have the monitoring tools--or that the technology comes with one, or that we purchase one--so that we know exactly what our students are looking at, what questions they are asking, and what the content of their research is.
- Heidi Armstrong
Person
And so I'll let Chad elaborate because he is the one who procures and purchases those monitoring tools for the applications that we open up throughout our department.
- Chad Nacapuy
Person
Yeah, specifically, you know, we-- again, I'll revert to my role as that principal in our State Distance Learning Program. A few years ago, we did have an incident with Character AI where our monitoring tool flagged the use of that and we were able to talk to the parent and explain to the parent what was happening with the student and the chats that we saw.
- Chad Nacapuy
Person
The student openly admitted that he had a problem with talking to this chatbot and that this was, in essence, what he saw as a girlfriend. And so we were able to talk to them and help him get through that.
- Jarrett Keohokalole
Legislator
Can you help me understand? So this was flagged through a student's use of the DOE network on their personal device?
- Chad Nacapuy
Person
So let me clarify, because in the program that I run, the State Distance Learning Program, our students are at home, so they're on their own home network. But we were able to monitor it because the student was using a state-issued device.
- Jarrett Keohokalole
Legislator
Thank you. Can you help me understand how that works in the-- in the school?
- Chad Nacapuy
Person
So within the DOE network, there are safeguards already in place to prevent our students from using--in particular, let's go back to Character AI--that's currently blocked within our student network, so students can't access it.
- Jarrett Keohokalole
Legislator
So that's in a scenario where a DOE-provided or personal device is using DOE Wi-Fi?
- Jarrett Keohokalole
Legislator
Okay, but there are potentially-- you potentially have personal devices being used by students where they're just using their own data to engage on the campus, right? What happens in those situations?
- Jarrett Keohokalole
Legislator
Because then, you know, you can Big Brother your own network and flag and block websites. I did a ChatGPT search before this briefing on how many companion AI platforms there are on the internet, and the answer was at least a dozen.
- Jarrett Keohokalole
Legislator
And then, earlier in the briefing, we talked about open source, which is even more complicated. So when the students are using your Wi-Fi, you can monitor. But have you had reports of-- what about situations where the kids are just on their own devices at lunch, using their own data to engage on campus, where you know there might be a challenge there?
- Heidi Armstrong
Person
If they're not using our network or our Wi-Fi, we don't have the monitoring tool to see what they're doing on their own personal device.
- Jarrett Keohokalole
Legislator
Yeah. Okay. Trina, did you want to comment? No? Okay. Okay.
- Michelle Kidani
Legislator
I'm sorry, I'm not a member of the committee, but as the Education Chair, I'm interested to know: does the DOE have anything specific--a handbook or anything--that parents can have access to, especially for the elementary schools?
- Heidi Armstrong
Person
Yes, we do have resources for families, for students, and for our staff, and I'm happy to share. I'm sorry, you weren't here for the first part of the presentation, but I'll give you the presentation and the resources we have that give them information on our safeguards and on what is allowable and what is not.
- Heidi Armstrong
Person
And I also wanted to share--everything is so new. When the internet first came out big in education, that was a worry. And we've seen, many years later, that we are able to block content from the internet.
- Heidi Armstrong
Person
When they're using our network, we're able to monitor what they're accessing, and we hope eventually, with these new tools, to have that same blocking and monitoring capacity.
- Jarrett Keohokalole
Legislator
Okay. Okay, thank you. And members, questions? Okay, go ahead. I have a question for the Attorney General's Office.
- Jarrett Keohokalole
Legislator
Yeah, thank you. You went over a lot. I appreciate your presentation. You know, if we're going to contemplate-- well, we are contemplating legislation. I think-- you know, you spoke broadly about the executive order.
- Jarrett Keohokalole
Legislator
You know, I haven't seen the specific language in the order, but on its face, do you foresee a problem with the federal government's, you know, attempt to preempt us from taking action in this space legislatively?
- Chelsea Okamoto
Person
So the executive order does have a carve-out for state laws that deal with kids' safety, but what's hard is when those laws also implicate the First Amendment, because the way the executive order is worded, there's supposed to be a committee that reviews all of the states' legislation and targets legislation that it views as impinging on the First Amendment and other constitutional issues.
- Chelsea Okamoto
Person
And so when those kids' safety measures start approaching this First Amendment issue, I think that's where there's pause as to how it affects our legislation. But we're seeing the harms right now, so I think the best thing is to keep pushing forward, and we can do the analysis.
- Chelsea Okamoto
Person
But I think it's a policy call: the funding that they're threatening--while they sort that out, do we just not do legislation, or do we keep trying to push forward? And I think we should keep pushing forward as best as we can and fight where we need to fight to counteract the executive order.
- Jarrett Keohokalole
Legislator
I'm sorry, were there specific lines of federal funding that were threatened or--
- Chelsea Okamoto
Person
Yes, they had a specific federal-- I'm so sorry, I'm blanking on the name of the specific federal funding, but I believe it's broadband funding that would come through the state.
- Chelsea Okamoto
Person
So let's have an offline discussion about the executive order and how it might implicate the upcoming legislation, because I think there needs to be more analysis on what the current drafts of any legislation look like when coupled with the executive order. And what's hard is that it's all coming out right now, and there are new things coming out as we speak. Every day it's a new battle.
- Jarrett Keohokalole
Legislator
So the broadband funding for Hawaii is mostly tailored at providing access, high-speed internet access to rural communities?
- Jarrett Keohokalole
Legislator
Most of the services have been cut already--digital literacy and those things. So, if I'm not mistaken, most of what's left is build-out of infrastructure to tie rural communities onto these high-speed networks. So I guess one way to keep kids safe is to just keep them off the internet--they don't get access.
- Chelsea Okamoto
Person
So here it is. Restrictions on state funding within the executive order: the Secretary of Commerce, through the Assistant Secretary of Commerce for Communications and Information, shall issue a policy notice specifying the conditions under which states may be eligible for remaining funding under the Broadband Equity, Access, and Deployment Program. That's in Section 5 of the executive order. And I'm so sorry I didn't have it right off the top of my head.
- Carol Fukunaga
Legislator
You know one of the other lawsuits that you mentioned during your presentation was the one against ByteDance.
- Carol Fukunaga
Legislator
Right. The parent company of TikTok. What is the status of that litigation? Because this was very recent.
- Chelsea Okamoto
Person
Oh, the litigation against TikTok? So we did file it in state court in December, and I apologize--our supervising deputy, who is in charge of the Commerce and Economic Development Division and is leading that lawsuit, had to step out for a department-wide training. But it's very new because we just filed it, so unfortunately I can't give you any more updates beyond that. It's brand new litigation that we just started.
- Jarrett Keohokalole
Legislator
Is there any guidance-- so I just looked it up. The BEAD money that was allocated for the State of Hawaii is $150 million in total, prioritizing internet build-out to unserved locations, where there's no wired infrastructure, or underserved locations, basically where they have slow internet--which are overwhelmingly rural communities. So thank you for that clarification. I want to go back to the First Amendment concern.
- Jarrett Keohokalole
Legislator
I think at the beginning of the presentation there was a distinction made within AI between the tutor, which essentially provides information, and the companion, which embodies a human form to pursue a relationship and is optimized for engagement.
- Jarrett Keohokalole
Legislator
Is there any guidance across the country about whether we're talking about freedom of speech from a content creation perspective, or about protecting the tutor or the companion?
- Chelsea Okamoto
Person
I know. I think what's hard--again, it's an emerging space--is there's fear that with litigation, you might actually start protecting these chatbots and their First Amendment rights. So it's, again, a new space--
- Jarrett Keohokalole
Legislator
You mean, like, corporations are people and that chatbots might--
- Chelsea Okamoto
Person
Yes. We definitely don't want to give protections to these products when they're harming kids, and I think that's where the space is right now: we're trying to figure out what the best practice is to make sure we're protecting our kids while--
- Jarrett Keohokalole
Legislator
I think certainly at the state level the Legislature can affirm that virtual AI chatbots are not people and then let the federal government challenge that.
- Chelsea Okamoto
Person
No, and I think that's why the initiative in our office has been making sure we're fighting off federal preemption, because I firmly believe that states and localities are better suited to address what's happening than waiting on the federal level. So our main goal right now is to fight off the federal effort to stop state governments from doing what they need to do.
- Jarrett Keohokalole
Legislator
Very interesting. Thank you. Any other follow-ups for the AG or-- I think, Amina, you were-- you raised your hand online at some point? Go ahead.
- Amina Fazlullah
Person
Yeah. I was just going to speak to-- we've been engaged at Common Sense Media on tracking the federal AI preemption, the EO in particular. On Friday, DOJ announced the formation of a task force to identify state laws that would be problematic and then potentially challenge them.
- Amina Fazlullah
Person
So one mechanism to push back would be, you know, if DOJ actually does challenge any state AI laws. However, I think just the identification, arguably--this is, again, we're just reading the language--but arguably the identification by DOJ of a state law, an existing state law, could then trigger the funding being withheld.
- Amina Fazlullah
Person
So this would be the BEAD funding--yes, for deployment purposes, but certainly for non-deployment purposes; there's BEAD deployment funding and then there's also non-deployment funding. And that non-deployment funding, for sure--I'm not exactly sure, but I don't think it's gone out to the states quite yet. Some of the deployment funding has gone out to the states. I don't know the status in Hawaii. So arguably--
- Jarrett Keohokalole
Legislator
Most of the money that was deployed for digital equity programming was cut during DOGE in the State of Hawaii.
- Amina Fazlullah
Person
Right. No, that's different. So there's the DEA funding, which is $2.5 billion, and a portion of that funding had been deployed to the states for purposes of preparing for the use of the DEA funding.
- Amina Fazlullah
Person
Then there was a second tranche that was supposed to come out, which has been frozen and is now in litigation. This BEAD money is separate. I'm probably going to get this number wrong, but it's something like $42 billion. And that portion includes deployment funding and non-deployment funding.
- Amina Fazlullah
Person
The deployment funding has been delayed for some time now but is going out the door. And I don't know the status for Hawaii--whether you've received your deployment funds, which are the bigger tranche of the BEAD funding.
- Amina Fazlullah
Person
There's also the non-deployment funding, which would be sort of the leftover funds--and I don't know if you had any non-deployment funding based on your potential use of BEAD funding--that could have been used for a wide variety of purposes: permitting, but also digital equity purposes, separate from the DEA. And that is likely to be impacted.
- Amina Fazlullah
Person
However, if DOJ's task force were to identify a Hawaii law as being problematic, then the federal government would have to take the additional step of then withholding either non-deployment or deployment funds, and then, I think, you'd have two pathways potentially to challenge, right?
- Amina Fazlullah
Person
So you could challenge the withholding of the funding based only on this finding by the task force that there's a problem with your law, and in addition, you could challenge the law itself if DOJ actually took it through the courts. So there are two pathways to challenge it.
- Amina Fazlullah
Person
You know, our reading of the EO is that it's really intended to coerce the states and potentially chill their legislative activity. And so the-- you know, our recommendation to states had been, you know, to do exactly what your AG has recommended, which is to continue to stay strong and legislate as needed on behalf of your citizens.
- Jarrett Keohokalole
Legislator
Thank you. Thank you. If there are no other questions, then I would like to thank all of the presenters very much for making yourselves available and helping us understand this really critical area for the community. We're adjourned.
Bill Not Specified at this Time
Next bill discussion: January 14, 2026
Previous bill discussion: January 14, 2026