I’m co-organizing a two-day conference aimed at broadening the concerns of AI ethics to include both nonhuman animals and potential conscious machines. Join us at Princeton, either in person or virtually, on October 6 & 7 to explore these neglected aspects of AI ethics. pic.twitter.com/M98OvQXnNq
Inequalities in access to technology can start during device development. We need to think about this challenge as soon as we dream up the technology. This will give us the best chance for justice in research. @royalsociety pic.twitter.com/yz9TywkuE9
Commercial casualties of medical devices, beyond just DBS. If you don’t care about them, you should: if we don’t act, this will touch you or someone you know. Discussions @royalsociety starting today are critical for this and the next generation. pic.twitter.com/DZW18SyFOl
To effectively regulate AI, policymakers must first understand it. Last month, we organized a 3-day boot camp to help congressional staffers think critically about this emerging technology. stanford.io/45LUQxm pic.twitter.com/w6ntFhLP7t
Outside of integration into products, there are three major metaphors for using AI directly: chat, copilot & talk to my documents. These approaches feel very narrow and limit what AI can usefully do.
Other modes that incorporate group coordination may be far more effective for work.
Certainly formative, but an exciting use of mindLAMP to predict response to TMS for depression. The area under the curve for correct classification of TMS response ranged from 0.59 (passive data alone) to 0.911 (passive and active data combined): formative.jmir.org/2023/1/e40197