@OpenAI “AGI should benefit all of humanity… We are seeking teams from across the world to develop proof-of-concepts for a democratic process that could answer questions about what rules AI systems should follow.”
Significant new paper with contributors spanning ARC, Anthropic, DeepMind, Cambridge University, OpenAI, and more that proposes an approach for evaluating frontier AI models for extreme risks. A non-exhaustive list of these extreme risks appears in the paper: arxiv.org/pdf/2305.15324…
AI may be the most consequential technology advance of our lifetime. Today we announced a 5-point blueprint for Governing AI. It addresses current and emerging issues, brings the public and private sectors together, and ensures this tool serves all of society. blogs.microsoft.com/on-the-issues/…
Remarkable consensus at the @UNTechEnvoy #GlobalDigitalCompact deep dive on #AI:
• Urgency of the situation
• Need to bridge the global divide
• Need for transparency, accountability, fairness & protection of privacy
• Need for global coordination in governance of AI
New DeepMind blog discussing ‘An early warning system for novel AI risks’.
“must expand the evaluation portfolio to include the possibility of extreme risks from general-purpose AI models that have strong skills in manipulation, deception, cyber-offense” deepmind.com/blog/an-early-…
With the proliferation of AI-generated text, AI detectors are now being used to flag cheating, plagiarism, and misinformation. But a Stanford study reveals a serious problem: these detectors are unreliable and biased against non-native English writers. stanford.io/3MwAFLr