Changes the covered model threshold to $100M in training costs, but does little else. Still strategically ambiguous - by training a large, expensive model in CA, you expose yourself to criminal liability to the state for signing a statement that hinges on an ambiguous definition of critical harm (a forward-looking statement covering unbounded time and application scope). Still restricts open source and economic growth - it will be hard to develop or use large open-source models in CA because 1047 requires developers to prevent downstream modifications of those models.
Still a politically controlled agency with unilateral power to set standards (and without term limits). We do have some clarity now on governance - all five positions on the board must be approved in one way or another by the CA legislature. In particular, if one person or party controls the legislature, they control the FMD and therefore, indirectly, the entire AI industry in California (via the power of Senate confirmation or direct legislative appointments to its governing board).
As such, many of the past impact analyses still apply - 1047 will slow scientific development in CA by heavily restricting access to the best models. This will likely cause developer flight from California, leading to substantial long-term harm to CA revenue and health outcomes. It promotes governance by politicians rather than governance by scientists, a significant step down from what we had before. There are better ways to actually address safety - for example, regulating use, safe-harbor laws for adversarial inspection of models, or laws requiring peer review or common evals for models deployed at wide scale.