The AI Now Institute published its 2019 Report, which explores the risks and harms of artificial intelligence (AI), as well as the movements demanding a halt to the development and use of dangerous AI. One of the report's main findings is that community groups, workers, journalists, and researchers, rather than corporate AI ethics statements and policies, have been primarily responsible for pressuring tech companies and governments to set guardrails on the use of AI. And while efforts to regulate AI systems are underway, they are being outpaced by government adoption of AI systems for surveillance and control. Among the key recommendations, AI Now suggests that regulators should ban the use of facial recognition and similar technologies in important decisions that affect people's lives and access to opportunities. The AI industry needs to make significant structural changes to address systemic racism, misogyny, and lack of diversity in AI applications. Moreover, states should develop expanded biometric privacy laws that regulate both public and private actors, while legislators should regulate the integration of public and private surveillance infrastructures. Finally, algorithmic impact assessments should account for AI's impact on climate, health, and geographical displacement.