Sen. Lummis Unveils RISE Act, Sets New Transparency Rules For AI

  • Senator Cynthia Lummis has introduced the RISE Act of 2025 to clarify legal responsibilities for professionals using artificial intelligence (AI) tools.
  • The bill requires AI developers to provide detailed “model cards” describing system data, uses, and limitations, but does not force open sourcing of AI models.
  • Professionals such as doctors, lawyers, engineers, and financial advisors will remain legally liable for advice provided using AI systems.
  • AI developers can reduce their civil liability by publicly releasing these model cards, but must update documentation frequently and explain any withheld information.
  • Protection for AI developers will not apply in cases of recklessness, fraud, or misuse outside defined professional settings.

Senator Cynthia Lummis (R-WY) has introduced the Responsible Innovation and Safe Expertise (RISE) Act of 2025. The bill aims to set clear liability rules for professionals who use artificial intelligence (AI) in fields such as medicine, law, engineering, and finance.

Under the proposed law, professionals, including physicians, attorneys, and financial advisors, would remain legally responsible for any advice or services they provide, regardless of whether AI systems contributed to the decision. The bill would require AI developers to publish “model cards” that detail each system’s training data, intended uses, limitations, performance metrics, and possible failure points. These documents must be updated within 30 days whenever the AI model is changed or new issues are discovered.

According to a press release, Lummis stated: “Wyoming values both innovation and accountability; the RISE Act creates predictable standards that encourage safer AI development while preserving professional autonomy.” She added, “This legislation doesn’t create blanket immunity for AI.” The “model card” requirement, however, stops short of forcing developers to disclose a model’s full source code.

The bill also withholds immunity from AI developers in cases of recklessness, willful misconduct, or fraud, or when developers misrepresent their models or use them outside their intended professional context. Any omissions in the model cards must be justified in writing, and only trade-secret information that does not affect safety can be redacted.

The legislation arrives as experts warn about risks stemming from closed-source AI models, which offer little insight into how their decisions are made. In a previous interview, Simon Kim, CEO of Korean venture firm Hashed, warned: “OpenAI is not open, and it is controlled by very few people, so it’s quite dangerous. Making this type of [closed source] foundational model is similar to making a ‘god’, but we don’t know how it works,” he told CoinDesk.

The RISE Act aims to balance innovation with professional responsibility, requiring transparency from AI developers while allowing them to withhold trade secrets that do not affect user safety. The bill is currently under consideration and has not yet become law. The full text of the RISE Act is available online.
