In one of the discussion groups I follow, someone asked about the possibility of using artificial intelligence to write emergency plans. After my initial reaction of “Are you serious?” I realized it is a reasonable question, given all the hype surrounding AI. Emergency managers are always seeking ways to improve and are quick to embrace new technologies. So, are we missing a bet here?
Let me raise two caveats. The first is that I am by no means an expert in artificial intelligence; my thoughts on the matter are more philosophical than technical. The second is that the field is incredibly dynamic, changing almost daily in terms of technology as well as legal and economic factors. Whatever I write today could be out of date by the time you read this article.
Part of the problem with AI is the name itself. Despite conjuring science fiction images of the end of society at the hands of intelligent robots, AI is not intelligent; it is imitative. It can ingest vast amounts of data very quickly, recognize patterns, and predict associations based on those patterns. This makes it well suited to tasks like speech recognition and language translation. However, I get a bit skeptical when its proponents claim AI is capable of decision-making. There is a big difference between the decision-making capacity required to sort boxes in a warehouse and that required to assign resources in a crisis. Since those decisions are based on the input data, I am reminded of the old programmer’s warning: “Garbage in, garbage out.”
Part of this skepticism is based on the overwhelming hype surrounding the use of AI to essentially replicate things that are already in use. Both Google and Bing have incorporated AI into their search functions, with sometimes hilarious results. Unfortunately, some of those results have also been potentially dangerous, such as incorrect medical advice. AI recognizes patterns; it does not analyze or understand context.
Which brings us to its use in emergency planning. Over the years, I have reviewed numerous plans at multiple levels of government and in private sector organizations. After a while, they all begin to look much the same. The reason is that we have pushed hard to standardize our emergency plans through doctrinal documents such as Comprehensive Preparedness Guide (CPG) 101: Developing and Maintaining Emergency Operations Plans. This is not a bad thing in itself. Standardization allows for a common approach to operations and the integration of supporting elements. However, we have pushed beyond standardizing concepts to prescribing far too much minutiae in initiatives such as the National Qualification System. The end result is that many plans are now written to conform to doctrine rather than for the convenience of the user.
However, between all this doctrinal information and the hundreds of existing plans, there is a large pool of material that would probably be sufficient for ChatGPT or one of the other AI systems to develop an emergency plan for a given jurisdiction that would conform to all existing requirements. The question is, “Why would you want to?”
Several years ago, I was part of a team helping a major city with its evacuation planning. My job was to review their evacuation plan. It was a thing of beauty: the concept was solid, all the necessary elements were addressed, and all supporting elements were integrated into the plan. However, it didn’t take much digging to find out that the plan had been written in a week by a single individual and that nothing in it had ever been coordinated with the supporting agencies or with the host jurisdictions receiving the evacuees. The plan was useless.
Similarly, I once bid unsuccessfully on a multi-jurisdictional project to assist with emergency plan development. The budget was tight, so I was careful to propose only the minimum process necessary to formulate the plans. When I asked for suggestions on how to improve future proposals, I was told that it looked like I wanted them to do a lot of work. Apparently, they were looking for a fiction writer, not a planning consultant.
My point here is that it is not the writing of the document that is important but the process of developing the plan, something any emergency manager worth his or her salt understands. I’m not sure AI offers much beyond the many templates and plan-writing programs already available online or the many samples your colleagues are willing to share for free. If AI floats your boat, give it a try. Just don’t expect it to make decisions for you, and don’t expect your plan to be user-friendly. I suspect that the time you save in writing will be eaten up in proofreading and correcting contextual issues. Never forget that it is the process of planning that is important, not the end result.