Technology User Protections
With Great Power Comes Great Responsibility
Over the last two decades, technology has advanced faster than at any point in human history, but our laws have failed to keep pace. Social media has reshaped how we see ourselves, smartphones are constant companions, and artificial intelligence is poised to disrupt the job market on a scale not seen since the Industrial Revolution.
We need clear, modern rules that put people first, protect mental health, and give users more control over how technology affects their lives.
Transparency for Edited and AI-Generated Content
Image editing, filters, and AI-generated content have created unrealistic and manufactured versions of reality. When people — especially young people — compare themselves to curated and altered images without knowing it, they are set up to believe something is wrong with them. This has contributed to rising rates of body dysmorphia, eating disorders, and declining mental health.
I support clear disclosure requirements for altered and AI-generated images so users know what they are seeing.
All published images and videos should include a disclosure identifying whether they were created by AI or digitally altered. There should be three distinct categories:
Minor Alterations: The content (people, locations, or subjects) remains unaltered; only the background, lighting, or environment has been lightly edited from the original.
Major Alterations: The content itself (people, locations, or subjects) has been edited or altered.
Generated Content: The content (people, locations, or subjects) has been generated by artificial intelligence and may not be reflective of reality.
Transparency should be the default, not the exception. And platforms that do not enforce these disclosure requirements should be heavily fined.
AI Ban for Government Entities, Government Figures, and Elected Officials
Elected officials should be banned from sharing or propagating AI-generated content in any official capacity, and doing so should be grounds for fines or impeachment. Promoting AI-generated content from a public position within the government directly and purposefully misleads the public.
Platform Design Protections for Users
Many digital platforms rely on dark patterns: design choices that manipulate behavior, encourage excessive use, and reduce user control. While some protections exist for children, adults deserve meaningful safeguards as well.
I support platform design standards that reduce manipulation and prevent technology addiction:
- Auto-play disabled by default to prevent endless video consumption
- Infinite scrolling turned off by default, with feeds limited to discrete pages of content that users actively choose to load rather than being passively fed
- No automatic feed refreshes when users navigate away from an app
- An option to disable short-form video content
Honest Feeds and User Control
Social media platforms should prioritize user choice over algorithmic manipulation.
- By default, platforms must provide a feed consisting of content the user has opted in to see (their connections, follows, and subscriptions)
- Users must be able to customize how their feed serves them content, whether chronological order, the updates they engage with most, or tailored entertainment content
- Platforms should not force-feed algorithmic content by default
Increased Protections for Minors
Children deserve stronger safeguards in an online world that was never designed with their well-being in mind.
Raise the minimum age for social media accounts to fourteen, with clear enforcement standards and meaningful penalties for platforms that fail to comply.
Require social media platforms that already offer accounts to teenagers (ages 14-18) to increase restrictions on the content served to them and the format in which it is delivered. This allows teens to stay connected with friends while also protecting them from predatory practices.
Responsible AI Platform Standards
Artificial intelligence platforms must be designed with guardrails, especially when accessed by young users.
- Require public AI platforms to offer a clearly defined educational mode, distinct from standard use
- Educational mode should limit response length, avoid certain categories of content, and prioritize learning, safety, and age-appropriate engagement
Protecting Individuals from AI-Generated Sexual Exploitation
Technology must not be allowed to weaponize people's likenesses.
- Establish a clear federal offense for creating or distributing sexually explicit content modeled after an individual without their consent, including AI-generated or digitally altered material
This protection should apply regardless of whether the individual is a public figure or private citizen.