
The Development of Government Data Disaster Recovery Plan in the U.S.

According to the National Oceanic and Atmospheric Administration, the U.S. experienced 18 weather/climate disaster events in 2021 with losses exceeding $1 billion each. Among the many impacts of these disasters is data loss and corruption, which also occur as a result of hardware failures, human error and cybercrime.

Thankfully, the federal government has taken steps to help agencies prepare for system failures from natural, criminal or unintentional events. Federal agencies can turn to frameworks like the National Institute of Standards and Technology’s Contingency Planning Guide for Federal Information Systems and the Federal Emergency Management Agency’s National Disaster Recovery Framework to guide their planning for overcoming the effects of natural and man-made disasters.

While current programs and existing frameworks show government leading the charge, agencies at federal, state and local levels can go further in ensuring the successful deployment of disaster recovery (DR) plans.

An offensive stance on data protection and management is essential to DR planning. Before disaster strikes, agencies should closely document every piece of software and hardware. This audit should be continual, and IT managers should ensure they have technical support details for each item. That means knowing how many copies of data exist and where they are stored, which applications and systems need to be recovered, and the contact information for each application owner.
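The audit described above is easiest to keep continual when it lives in a machine-readable inventory rather than a document. A minimal sketch in Python follows; every field name here is illustrative, not a standard schema, and the sample record is invented for demonstration:

```python
from dataclasses import dataclass, field

@dataclass
class AssetRecord:
    """One entry in the hardware/software audit (all fields illustrative)."""
    name: str                # e.g. "payroll-db-01"
    category: str            # "hardware" or "software"
    support_contact: str     # technical support details for this item
    owner_contact: str       # application owner to reach during recovery
    data_copies: list = field(default_factory=list)  # where each copy lives
    must_recover: bool = True  # is this item in scope for the DR plan?

inventory = [
    AssetRecord(
        name="payroll-db-01",
        category="hardware",
        support_contact="vendor-support@example.com",
        owner_contact="payroll-team@example.gov",
        data_copies=["on-prem SAN", "offsite tape", "cloud replica"],
    )
]

# Answers the audit questions directly: how many copies exist, and where?
print(len(inventory[0].data_copies), inventory[0].data_copies)
```

Keeping the inventory as structured data also lets the same records drive the automated testing and dashboards discussed later in the article.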

In addition to these strategies, agencies can take advantage of disaster recovery as a service (DRaaS), an outsourced, cloud-based model that can be useful for agencies lacking the expertise and resources to build and test an effective disaster recovery plan on their own. DRaaS offers greater ease of use and accessibility for technical staff from any location while delivering meaningful savings on storage and software.

Agencies should also leverage automation, which enables a more proactive approach to mitigating data loss. By automatically testing, documenting and executing disaster recovery plans, IT leaders can eliminate the manual, time-consuming and repetitive processes that hinder effective DR planning.

Automation can easily scale from single applications to entire sites. To maintain visibility in automated environments, tools such as dashboards give IT teams an at-a-glance view of target environment states and alert them to potential recovery time objective (RTO) and recovery point objective (RPO) violations before they impact recovery.
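At its core, the RPO check a dashboard performs reduces to comparing the time elapsed since the last successful replication against the stated objective. A minimal sketch of that comparison follows; the environment names, field names and thresholds are assumptions for illustration, not any specific product's API:

```python
from datetime import datetime, timedelta, timezone

def rpo_violations(environments, now=None):
    """Flag environments whose last successful replication is older than their RPO.

    `environments` is a list of dicts with illustrative keys:
    name, rpo (timedelta), last_replication (timezone-aware datetime).
    Returns (name, overage) pairs showing how far past the objective each is.
    """
    now = now or datetime.now(timezone.utc)
    alerts = []
    for env in environments:
        age = now - env["last_replication"]
        if age > env["rpo"]:
            alerts.append((env["name"], age - env["rpo"]))
    return alerts

now = datetime(2021, 6, 1, 12, 0, tzinfo=timezone.utc)
envs = [
    {"name": "case-mgmt", "rpo": timedelta(hours=1),
     "last_replication": now - timedelta(minutes=30)},  # within objective
    {"name": "records-db", "rpo": timedelta(hours=1),
     "last_replication": now - timedelta(hours=3)},     # 2 hours past RPO
]
print(rpo_violations(envs, now=now))  # flags only "records-db"
```

Running a check like this on a schedule, and surfacing the result on a dashboard, is what turns an RPO from a document value into an alert that fires before recovery is impacted.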

These approaches are crucial, but merely having a disaster recovery plan isn’t enough. Regular, full-scale testing is a key element of a rigorous disaster recovery plan, especially for multisite environments. Agencies should implement and automate test schedules, verifying orchestration plans with isolated and low-impact testing of VM backups, replicas, applications and storage snapshots.
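An automated test schedule of the kind described can be as simple as iterating over recovery jobs in an isolated environment and recording pass/fail results. The sketch below uses injected stand-in functions for restore and verification, since the real tooling (VM restore, health checks, checksum validation) varies by agency; the job names and helpers are hypothetical:

```python
def run_dr_test(jobs, restore, verify):
    """Run isolated, low-impact tests of each backup/replica and report results.

    `restore` and `verify` are injected callables standing in for real tooling,
    e.g. restoring a VM backup to a sandbox and checking application health.
    """
    results = {}
    for job in jobs:
        try:
            artifact = restore(job)          # bring the backup up in isolation
            results[job] = verify(artifact)  # health check / checksum passes?
        except Exception:
            results[job] = False             # any failure flags the plan for review
    return results

# Stub tooling so the sketch runs end to end (purely illustrative).
def fake_restore(job):
    if job == "broken-replica":
        raise RuntimeError("restore failed")
    return f"{job}-sandbox"

def fake_verify(artifact):
    return artifact.endswith("-sandbox")

print(run_dr_test(["vm-backup", "storage-snapshot", "broken-replica"],
                  fake_restore, fake_verify))
```

The point of the design is that the loop itself never touches production: everything happens against sandboxed restores, which is what makes regular, full-scale testing low-impact enough to automate on a schedule.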

While many of these DR plans involve solutions implemented by technical teams, everyone in the agency should play a part in the DR plan. IT leaders should assign specific strategic and tactical roles to members of the organization and make sure those roles are well communicated, including a specified plan of action for each stakeholder in the event of a disaster.

Government IT managers who adopt an offensive strategy and execute core DR principles will reduce risk, save time and restore confidence for stress-free recovery. As natural disasters grow more frequent and the cost of recovery rises exponentially, DR planning will be vital to the continuity of government agencies at every level.

