Technology User Protections

With Great Power Comes Great Responsibility

Over the last two decades, technology has advanced faster than at any point in human history, but our laws have failed to keep pace. Social media has reshaped how we see ourselves, smartphones are constant companions, and artificial intelligence is poised to disrupt the job market on a scale not seen since the Industrial Revolution.

We need clear, modern rules that put people first, protect mental health, and give users more control over how technology affects their lives.

Transparency for Edited and AI-Generated Content

Image editing, filters, and AI-generated content have created unrealistic, manufactured versions of reality. When people, especially young people, compare themselves to curated and altered images without knowing it, they are set up to believe something is wrong with them. This has contributed to rising rates of body dysmorphia and eating disorders and to declining mental health.

I support clear disclosure requirements for altered and AI-generated images so users know what they are seeing.

All published images and videos should include a disclosure identifying whether they were created by AI or altered with editing software. There should be three distinct categories:

Minor Alterations: The content itself (people, locations, or subjects) remains unaltered; only the background, lighting, or environment has been altered in minor ways from the original.

Major Alterations: The content (people, locations, or subjects) has been edited or altered.

Generated Content: The content (people, locations, or subjects) has been generated by artificial intelligence and may not reflect reality.

Transparency should be the default, not the exception. And platforms that do not enforce these disclosure requirements should be heavily fined.

AI Ban for Government Entities, Government Figures, and Elected Officials

Government entities, government figures, and elected officials should be banned from sharing or propagating AI-generated content in any official capacity. Doing so should be grounds for fines or, in the case of elected officials, impeachment. Promoting AI-generated content from a public position within the government directly and purposefully misleads the public.

Platform Design Protections for Users

Many digital platforms rely on dark patterns: design choices that manipulate behavior, encourage excessive use, and reduce user control. While some protections exist for children, adults deserve meaningful safeguards as well.

I support platform design standards that reduce manipulation and prevent technology addiction:

Honest Feeds and User Control

Social media platforms should prioritize user choice over algorithmic manipulation, for example by offering chronological or user-configured feeds as an alternative to engagement-driven ranking.

Increased Protections for Minors

Children deserve stronger safeguards in an online world that was never designed with their well-being in mind.

Raise the minimum age for social media accounts to fourteen, with clear enforcement standards and meaningful penalties for platforms that fail to comply.

Require social media platforms that already offer accounts to teenagers (ages 14 to 18) to increase restrictions on the content served to them and the formats in which it is delivered. This allows teenagers to stay connected with friends while protecting them from predatory practices.

Responsible AI Platform Standards

Artificial intelligence platforms must be designed with guardrails, especially when accessed by young users.

Protecting Individuals from AI-Generated Sexual Exploitation

Technology must not be allowed to weaponize people's likenesses. Creating or sharing AI-generated sexual content that depicts a real person without their consent should be prohibited and penalized.

This protection should apply regardless of whether the individual is a public figure or private citizen.