Abstract

Urban management is one of the most prominent problems in modern social governance. As cities grow in scale, the flows of industry and population become larger and larger, and cities take on more and more risks and emergencies. To minimize losses, prevention in advance is essential, and early drills are often the best way to prepare for emergencies. This paper uses data mining technology to design a smart city emergency management system that provides early warning of emergencies and supports rapid emergency response. To this end, the paper proposes the EIM-DS algorithm, an optimized data mining algorithm based on high-utility itemset mining. It improves the threshold setting and the data organization method, which greatly raises the execution efficiency of the algorithm. In addition, an intelligent emergency management system is designed based on the optimized algorithm. The system focuses on analyzing enterprise data, emergency data, and emergency command data; combining the three makes it possible to assess the order of the entire city. The experimental results show that the computation time of the proposed algorithm is about 50% lower than that of other algorithms across different datasets, and black-box and white-box tests show that the designed system runs smoothly.

1. Introduction

Urban modernization is an inevitable trend of contemporary urban development, and China's urbanization process has entered a critical period; promoting its healthy and in-depth development is an important task for China. As urbanization continues to deepen, the overall appearance of cities has changed dramatically compared with the past, but many sustainable development problems have also emerged, such as comprehensive land use, overall urban planning and construction, urban management, and energy supply shortages. How to keep urban management and operational efficiency abreast of the times has therefore become an important issue. In the face of these challenges, the existing urban management model has reached its limits, and new approaches must be sought to meet the needs of urban development, reallocate urban resources, and continuously improve the level of urban management.

The main innovations of this paper are as follows: an efficient data mining algorithm (EIM-DS) based on an improved dataset structure is proposed. Around this structure, cyclic TWU pruning, tree construction, and compressed storage are applied before the search process; during the search, extension sets and local TWU pruning are introduced, and a back-to-front calculation method is presented to quickly mine all high-utility itemsets.

Major disasters such as volcanic eruptions and malaria outbreaks often inflict huge damage on cities. Many scholars therefore begin with the study of urban disasters, hoping to find the best methods for urban emergency management. Based on the pressure-state-response (PSR) model, Wang and Wu conducted a longitudinal time-series evaluation of Xuhui District, Shanghai, across the pressure, state, and response layers; building on an understanding of the city's current situation and its trends over the years, they also combined the evaluation with a city evaluation system [1]. Chaves et al. compared crowdsourcing tools designed for urban planning and urban emergency management and proposed five dimensions of crowdsourcing quality that can be used to optimize both processes [2]. Kyne applied the definition of pesticides in American law to urban management research, drawing an analogy between the pesticidal model and urban disasters [3]. It can be seen that although many scholars have proposed suitable models, these models can often evaluate only one kind of disaster.

Meanwhile, technological progress in many fields has promoted the development of smart city applications, thereby improving the way of life in modern cities. The concept of the smart city is well known, and smart city platforms are applied to urban emergency management. Paganelli et al. proposed a framework that enables developers to model smart things as web resources, expose IoT data through RESTful application programming interfaces (APIs), and develop applications on top of them [4]. Daniel et al. presented a blur-based approach that dynamically configures how vision sensors work in terms of perception, encoding, and transmission modes, leveraging different types of reference parameters [5]. Gasco-Hernandez drew on an exploratory study of ICT-enabled governance models in EU cities, conducted from October 2009 to February 2011 by the Information Society Unit of the Institute for Prospective Technological Studies (IPTS) of the European Commission's Joint Research Centre, which aimed to deepen the understanding of the interaction between ICT and governance processes at the city level [6]. Bates considered incorporating people's practices and repurposing existing IoT in understanding and designing smart cities; he examined ethics and naturalization in co-creating smart campuses and stressed the importance of challenging the existing rhetoric around energy waste in smart city and smart building research [7]. Ghosh and Gosavi put forward a well-designed semi-Markov model to capture the stochastic dynamics of post-earthquake events, which is used to quantify the risks people face and to estimate recovery times [8]. It can be seen that research on urban emergency management should focus on the analysis and processing of daily data, yet very few scholars have studied this.
Therefore, it is necessary to use data mining technology to accumulate and analyze the data generated in urban management and thereby establish research on urban emergency management.

3. Urban Intelligent Emergency Management

3.1. Digital City

Digital city management refers to the use of technologies including computer networks, wireless communication, and other means to establish a relatively comprehensive information management platform. By making full use of information resources, the digital city adopts a combination of different management methods to accurately locate the urban management space and management objects [9]. In the process of city management, a new management system with "supervision" and "command" as its core is adopted: supervision and management are carried out separately, corresponding responsibilities are clarified, and a new urban management process is created [10]. During supervision, internal and external evaluations are combined to comprehensively assess all aspects of urban management, making urban management more accurate and efficient and the pace of management more unified. The digital city application is shown in Figure 1.

3.2. Cloud Computing and Data Mining

With the growth of data processing volume, traditional processing methods can no longer meet user needs, and distributed and parallel processing methods are more advantageous. Although cloud computing technology is relatively mature in terms of search engines, there are still many problems in many fields, for example, how to ensure data security in cloud databases, how to determine what form of algorithms can be implemented on cloud computing platforms and so on. Cloud computing is regarded as the best way to solve the data processing and analysis of the Internet of Things. In theory, it can also solve problems, such as mass data mining and management and content search, but some problems are still being explored.

The architecture of cloud computing is divided into three layers: The first layer is the resource allocation layer, which receives the request sent by the user and decides whether to send the request to the next layer. This layer ensures that the task is received and executed under the premise of computing resources, otherwise the task is not executed. This layer needs to manage not only user requests, but also computing risk and resource allocation. In order to achieve this goal, it is necessary to evaluate the service request. The evaluation system is the core content of the first layer, and decides whether to execute the request according to the value of the evaluation. The second layer is the virtual machine management layer, which is responsible for controlling all virtual machines and managing the allocation and recycling of virtual machines. When a user starts executing a task, the scheduling mechanism is responsible for dynamically allocating computing resources to each local database, so that each local database corresponds to one or more virtual machines or virtual threads, and is responsible for handling load balancing. When the amount of tasks decreases, the second layer is responsible for resource recovery and puts it into the resource pool for the next call [11, 12]. This scheduling mechanism enables the users to use the cloud computing platform transparently, and users do not need to care about how the background handles resource allocation and process scheduling. The third layer is the physical layer, which is the actual server where the data is stored.
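As a toy illustration of the first layer's admission decision described above, the following sketch admits a request only when enough computing resources remain, then hands the task off for resource allocation. All class names, thresholds, and data are hypothetical, not an actual cloud scheduler:

```python
from dataclasses import dataclass

@dataclass
class Request:
    user: str
    cpus_needed: int

class ResourceAllocationLayer:
    """Toy first layer: evaluate a request, then execute or reject it."""

    def __init__(self, total_cpus):
        self.free_cpus = total_cpus

    def evaluate(self, req):
        # Admission decision: forward to the VM layer only if resources suffice.
        return req.cpus_needed <= self.free_cpus

    def submit(self, req):
        if not self.evaluate(req):
            return "rejected"
        self.free_cpus -= req.cpus_needed  # hand off to the VM management layer
        return "forwarded"

layer = ResourceAllocationLayer(total_cpus=8)
print(layer.submit(Request("u1", 6)))  # forwarded; 2 CPUs remain
print(layer.submit(Request("u2", 4)))  # rejected: only 2 CPUs remain
```

The second layer would then map each forwarded task to virtual machines and return the CPUs to the pool when the task finishes, which is omitted here.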

3.3. EIM-DS Algorithm
3.3.1. Basic Definition of High-Utility Itemset Mining

High-utility itemset mining is a research direction of pattern mining, a subfield of data mining. The goal of pattern mining is to find patterns in a given database that are novel, difficult to observe directly, and useful. The purpose of high-utility itemset mining is to find all high-utility itemsets (HUIs) in a transaction dataset.

A dataset is a collection of transactions. Each transaction has a unique identifier, recorded as TID (Transaction ID). Each transaction contains multiple (item, internal utility value) tuples, such as the data in Table 1. Each item has two weights: an internal utility value (such as quantity) and an external utility value (such as profit). The set of all distinct items in dataset D is I = {i_1, i_2, ..., i_L}.

Table 1 presents a transaction dataset with 5 transactions, and the external utility values of the items are given in Table 2 [13].

Any item i in a transaction T_d has an internal utility value q(i, T_d) and an external utility value p(i). The utility value of item i in T_d is written as

u(i, T_d) = q(i, T_d) × p(i)

Since an item i may appear in multiple transactions, its utility value over the transaction dataset D is denoted as [14]

u(i, D) = Σ_{T_d ∈ g(i)} u(i, T_d)

where g(i) is the set of transactions containing i. For example, the utility value of an item in a single transaction is obtained by multiplying its internal utility value from Table 1 by its external utility value from Table 2.

The utility value of itemset X in transaction T_d is recorded as

u(X, T_d) = Σ_{i ∈ X} u(i, T_d)

The utility value of itemset X on the entire transaction database is denoted as

u(X) = Σ_{T_d ∈ g(X)} u(X, T_d)

where g(X) is the set of transactions containing X.

If the utility value u(X) of an itemset X is no lower than an artificially set threshold minutil, then X is considered a high-utility itemset; otherwise, it is considered a low-utility itemset. The purpose of a high-utility itemset algorithm is to find all high-utility itemsets [15].
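As a worked illustration of these definitions, the following sketch computes the utility of an itemset in one transaction and over the whole dataset, then tests it against a threshold. The items, quantities, and profits are invented for illustration and do not reproduce Tables 1 and 2:

```python
# Hypothetical transaction dataset: each transaction maps items to internal
# utility values (quantities); external_utility maps items to unit profits.
transactions = {
    "T1": {"a": 2, "b": 1, "d": 3},
    "T2": {"a": 1, "c": 4},
    "T3": {"b": 2, "c": 1, "d": 1},
}
external_utility = {"a": 5, "b": 3, "c": 2, "d": 1}

def item_utility(item, tid):
    # u(i, T_d): internal utility (quantity) times external utility (profit)
    return transactions[tid][item] * external_utility[item]

def itemset_utility_in_transaction(itemset, tid):
    # u(X, T_d): sum of the item utilities of X inside one transaction
    return sum(item_utility(i, tid) for i in itemset)

def itemset_utility(itemset):
    # u(X): sum over g(X), the transactions that contain every item of X
    g_X = [tid for tid, t in transactions.items() if set(itemset) <= set(t)]
    return sum(itemset_utility_in_transaction(itemset, tid) for tid in g_X)

minutil = 15  # artificially set threshold
X = ["a", "d"]
print(itemset_utility(X), itemset_utility(X) >= minutil)  # 13 False
```

Here only T1 contains both a and d, so u({a, d}) = 2×5 + 3×1 = 13, below the threshold, and {a, d} is a low-utility itemset.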

The utility value of a transaction T_d is recorded as TU(T_d), and the formula is

TU(T_d) = Σ_{i ∈ T_d} u(i, T_d)

The transaction-weighted utility value of an itemset X is the sum of the utility values of all transactions including X, which is denoted as TWU(X), and the formula is [16]

TWU(X) = Σ_{T_d ∈ g(X)} TU(T_d)

Assuming that X is an itemset in transaction T_d, the sum of the utilities of the items after itemset X in T_d is called the remaining utility value, denoted as ru(X, T_d), and the formula is [17]

ru(X, T_d) = Σ_{i ∈ T_d, i after X} u(i, T_d)

For example, if itemset X = {a, c} appears in a transaction whose items are ordered as {a, b, c, d, e}, then the remaining items in that transaction are {d, e}.
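The transaction utility, transaction-weighted utility, and remaining utility can be sketched the same way. Items are stored in a fixed processing order within each transaction; all values are invented toy data:

```python
# Hypothetical dataset: TID -> ordered list of (item, quantity) pairs.
transactions = {
    "T1": [("a", 2), ("b", 1), ("d", 3)],
    "T2": [("a", 1), ("c", 4)],
    "T3": [("b", 2), ("c", 1), ("d", 1)],
}
profit = {"a": 5, "b": 3, "c": 2, "d": 1}

def u(item, qty):
    return qty * profit[item]

def TU(tid):
    # transaction utility: sum of all item utilities in one transaction
    return sum(u(i, q) for i, q in transactions[tid])

def TWU(itemset):
    # transaction-weighted utility: sum of TU over transactions containing X
    return sum(TU(tid) for tid, t in transactions.items()
               if set(itemset) <= {i for i, _ in t})

def ru(itemset, tid):
    # remaining utility: utilities of the items positioned after X
    items = transactions[tid]
    last = max(idx for idx, (i, _) in enumerate(items) if i in itemset)
    return sum(u(i, q) for i, q in items[last + 1:])

print(TU("T1"), TWU(["a"]), ru(["a"], "T1"))  # 16 29 6
```

For T1, TU = 10 + 3 + 3 = 16; TWU({a}) adds the TUs of T1 and T2; ru({a}, T1) sums the items after a, namely b and d.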

3.3.2. Improved High-Efficiency Data Mining Algorithm

Firstly, the superset information of each node is recorded by constructing a tree, and the constructed tree is stored by means of compression. Secondly, two strategies, sub-node pruning and local TWU pruning, are used in the search process to further narrow the search space. In the calculation process, the execution time of the algorithm is reduced by using the order of the dataset and avoiding double calculation, so that all HUIs can be mined efficiently. In addition, a high-efficiency data mining algorithm (T-EIM-DS) based on multithreading improved dataset structure is proposed to further reduce the execution time of the algorithm.

Each transaction in the original transaction dataset consists of multiple items and their utility values. To improve the efficiency of database use, this paper puts forward a new dataset structure (ItemsData, ITD) to reconstruct the database. The new dataset is organized by item: each item is stored with the identifiers of the transactions containing it and the item's utility value under each of those transactions. The reconstructed transaction database is shown in Table 3. The reconstructed database has two advantages. The first is fast indexing: all the data of a required item can be found in O(1) time, whereas the original transaction dataset requires a scan of the entire dataset. The second is that the utility value of a two-item set can be computed quickly: since the transaction identifiers of each item are kept in order in the improved dataset, the utility value of two items can be obtained by a single merge scan over the two ordered lists.
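A minimal sketch of the ITD reconstruction and of the ordered-merge computation of a two-item utility value might look as follows. The data and names are illustrative, not the paper's implementation; utilities are stored directly per (item, transaction) pair:

```python
# Hypothetical original transactions: TID -> list of (item, utility) pairs.
transactions = {
    1: [("a", 10), ("b", 3), ("d", 3)],
    2: [("a", 5), ("c", 8)],
    3: [("b", 6), ("c", 2), ("d", 1)],
}

# Rebuild into the ItemsData (ITD) structure: item -> ordered list of
# (TID, utility) pairs, giving direct access to all data of one item.
ITD = {}
for tid in sorted(transactions):
    for item, util in transactions[tid]:
        ITD.setdefault(item, []).append((tid, util))

def pair_utility(x, y):
    # Because both TID lists are sorted, one merge scan computes the utility
    # of the 2-itemset {x, y} in time linear in the two list lengths.
    lx, ly = ITD[x], ITD[y]
    i = j = total = 0
    while i < len(lx) and j < len(ly):
        if lx[i][0] == ly[j][0]:       # same transaction contains both items
            total += lx[i][1] + ly[j][1]
            i += 1
            j += 1
        elif lx[i][0] < ly[j][0]:
            i += 1
        else:
            j += 1
    return total

print(ITD["a"], pair_utility("a", "c"))  # [(1, 10), (2, 5)] 13
```

Only transaction 2 contains both a and c, so the pair utility is 5 + 8 = 13, found without scanning the whole dataset.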

Without any pruning, all possible solutions form a set enumeration tree with a solution space of 2^L, where L is the number of distinct items in the dataset. Constructing a tree by traversing the dataset can effectively reduce the search space, as in the FP-Growth algorithm. The ordering of items also affects the size of the constructed tree, and currently the best ordering is based on TWU [18]; the items in the example table are therefore sorted by their TWU values. This paper introduces the concept of constructing a tree based on the set enumeration tree; the construction process is shown in Figure 2. Because the memory required to store the constructed tree is too large, a compressed storage method is proposed. It merges the child nodes that share the same parent node, records them as the expansion set of the item, and determines the expansion set of an itemset by intersecting the expansion sets of all nodes in the itemset. Tree construction and compressed storage can not only effectively reduce the memory consumption of the constructed tree, but also efficiently obtain the expansion set of the current item by intersection while retaining the child-node information of each node [19, 20].

If X is an itemset and z is an item in the extension set of X, the utility value of the extension of X by z and its subtree is denoted as su(X, z), defined as

su(X, z) = Σ_{T_d ∈ g(X ∪ {z})} [ u(X, T_d) + u(z, T_d) + ru(z, T_d) ]

Assuming itemset X, if su(X, z) < minutil, then the utility values of X ∪ {z} and all of its extended sets are less than the threshold.

Assuming itemset X and a transaction T_d belonging to g(X), the local utility value of X in T_d is denoted as lu(X, T_d), defined as

lu(X, T_d) = u(X, T_d) + ru(X, T_d)

Assuming the itemset is X and z is one of its extended set, the local TWU utility value of item z is written as

ltwu(X, z) = Σ_{T_d ∈ g(X ∪ {z})} lu(X, T_d)

Assuming the itemset is X, for any item z in its extension set, if

ltwu(X, z) < minutil

then

u(Y) < minutil

also holds for every extension Y of X that contains z. Item z is therefore removed from the expansion sets of the extensions of X.

Assuming itemset X, the itemset obtained by pruning with the utility values of the extended set of X is recorded as primary(X), defined as

primary(X) = { z ∈ E(X) | su(X, z) ≥ minutil }

The itemset obtained by pruning with the local TWU utility values of X is recorded as secondary(X), defined as

secondary(X) = { z ∈ E(X) | ltwu(X, z) ≥ minutil }

primary(X) is used to determine which items in the extended set can still lead to high-utility itemsets in the subtree, and secondary(X) is used to intersect with the EOI of the extended set to determine the range of the EOI of the extended set. The algorithm flowchart is shown in Figure 3.
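The two pruning checks can be sketched on toy data as follows. This is an illustrative reading that follows the EFIM-style subtree-utility and local-utility bounds; the exact formulas of EIM-DS may differ, and all names and values here are hypothetical:

```python
# Hypothetical dataset: TID -> ordered list of (item, utility) pairs.
transactions = {
    1: [("a", 10), ("b", 3), ("d", 3)],
    2: [("a", 5), ("c", 8), ("d", 2)],
    3: [("b", 6), ("c", 2), ("d", 1)],
}

def tx_with(itemset):
    # g(X): identifiers of the transactions containing every item of X
    return [t for t, items in transactions.items()
            if set(itemset) <= {i for i, _ in items}]

def u_in(itemset, tid):
    return sum(q for i, q in transactions[tid] if i in itemset)

def ru_after(item, tid):
    # utilities of the items positioned after `item` in the transaction
    items = transactions[tid]
    pos = next(i for i, (name, _) in enumerate(items) if name == item)
    return sum(q for _, q in items[pos + 1:])

def su(X, z):
    # subtree-utility bound for extending X with item z
    return sum(u_in(X, t) + u_in([z], t) + ru_after(z, t)
               for t in tx_with(X + [z]))

def ltwu(X, z):
    # local bound for any extension of X that contains z
    return sum(u_in(X, t) + ru_after(X[-1], t) for t in tx_with(X + [z]))

minutil = 20
X = ["a"]
ext = ["b", "c", "d"]                                  # candidate extensions of X
primary = [z for z in ext if su(X, z) >= minutil]      # subtrees worth exploring
secondary = [z for z in ext if ltwu(X, z) >= minutil]  # items kept in extension sets
print(primary, secondary)  # ['d'] ['d']
```

With this threshold, b and c are pruned from both sets, so neither their subtrees nor any extension containing them needs to be visited.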

4. Design of Urban Intelligent Emergency Management System

4.1. SSH Framework
4.1.1. Spring

Spring is the most widely used framework in the Java language. It is designed to simplify the complexity of enterprise application development and to build the entire application in a unified and efficient way. Spring does not compete with existing technical frameworks but combines them to help people develop applications faster. Spring has two core functions: IoC (Inversion of Control) and AOP (Aspect-Oriented Programming).

The implementation of Spring AOP is depicted in Figure 4. Spring AOP generates a Spring proxy class by inheriting from the target object or implementing the target object's interface. When the target object needs to be accessed, the Spring proxy intercepts the request and executes the corresponding method, which mainly comprises the defined pre-section, the target object's own method, and the post-section. Finally, the proxy returns the processing result to the caller. The proxy is implemented either by inheritance (extends) or by interface implementation (implements), corresponding to the two dynamic proxy implementations, JDK Proxy and CGLib Proxy, respectively.
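Spring AOP itself is a Java framework; purely to illustrate the interception flow described above (pre-section, target method, post-section, return to caller), here is a language-neutral proxy sketch in which the advice functions and class names are hypothetical:

```python
class Target:
    """The target object with the business method to be intercepted."""
    def handle(self, request):
        return f"handled:{request}"

class Proxy:
    """Intercepts calls to the target: pre-advice, target method, post-advice."""
    def __init__(self, target, before, after):
        self._target, self._before, self._after = target, before, after

    def handle(self, request):
        self._before(request)                  # pre-section
        result = self._target.handle(request)  # the target object's own method
        self._after(result)                    # post-section
        return result                          # result goes back to the caller

log = []
proxy = Proxy(Target(),
              before=lambda r: log.append(f"pre:{r}"),
              after=lambda r: log.append(f"post:{r}"))
print(proxy.handle("req"))  # handled:req
```

The caller interacts only with the proxy; the target object stays unaware of the cross-cutting logic, which mirrors how a Spring proxy wraps a bean.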

4.1.2. Spring MVC

MVC divides code into three layers: model, view, and controller. The model is an abstraction of real things; object-oriented design is applied when designing the model, and real things are abstracted into application objects. The role of the model is to encapsulate the state and operations of the application object and to respond to the controller's access to it. The view is the front-end code of the application [21, 22]; after the browser parses it and fills in the model data, it becomes the interface the user sees. The role of the view is to provide an interactive interface, present the parsed model data, and pass the user's operations to the controller. The controller is the link between the model and the view: it translates the user's actions in the view into specific application business, and the application business returns the final result to the appropriate view by querying, processing, and analyzing the model data [23].
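The division of roles can be sketched with a toy model-view-controller, where all class and method names are hypothetical:

```python
class Model:
    """Abstracts the application object's state and operations."""
    def __init__(self):
        self.items = []
    def add(self, item):
        self.items.append(item)

class View:
    """Renders model data into what the user sees."""
    def render(self, model):
        return "items: " + ", ".join(model.items)

class Controller:
    """Translates a user action into business logic on the model."""
    def __init__(self, model, view):
        self.model, self.view = model, view
    def user_adds(self, item):
        self.model.add(item)                 # update the model ...
        return self.view.render(self.model)  # ... and return the refreshed view

c = Controller(Model(), View())
print(c.user_adds("alpha"))  # items: alpha
```

The view never touches business logic and the model never renders itself, so each layer can change independently, which is the point of the pattern.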

Spring MVC is a sub-module of the Spring framework, which is specially used to develop enterprise-level web applications and is a specific implementation of the MVC design pattern. Figure 5 shows the role diagram of Spring MVC.

4.1.3. Hibernate

During actual project development, a data persistence layer is usually added between the business logic code and the database to simplify access to the database. With a data persistence layer, the conversion between Java objects and the database can be done transparently, regardless of database connections, concurrency, transactions, and so on.

Hibernate provides a lightweight encapsulation of JDBC. Through pre-configured mapping files, Java objects are linked with database tables, so that developers do not need to write SQL statements and can manipulate the database and complete various query operations directly through Java objects. Therefore, Hibernate is also called a fully automatic Object-Relational Mapping (ORM) framework.

Figure 6 shows how Hibernate works. The connection between the application and Hibernate is established through the Persistent Object (the concrete implementation of a class in the application: a single-threaded, short-lived object connected to a Session at a given time). Properties and Mapping are Hibernate configuration files; Hibernate performs object-relational mapping based on them and converts the code developers write against the Hibernate API into SQL statements. It then interacts with the database system through its own Session (implemented as a wrapper around the JDBC connection).

4.2. Requirement Exploration

Natural disasters, accidents, public health events, and social security events are classed as public emergencies. Every year, public emergencies cause more than 1 million casualties and hundreds of billions of yuan in economic losses. Therefore, it is necessary to anticipate and control urban disasters in advance.

4.2.1. Analysis of the Role of the System

According to the business requirements, the business functions the system needs to implement have been roughly defined. This chapter analyzes the roles of the system from the user's point of view and further explores user requirements to clarify the functions the system must implement.

System administrator: responsible for managing the entire system and holding all system permissions. The system administrator mainly divides the permissions of the system, including assigning menu permissions, control permissions, and data permissions to other users [24], and can also reset user passwords and perform other operations.

Second-level administrator: each enterprise can have one second-level administrator, who holds some permissions assigned by the system administrator and is responsible for further assigning permissions to individual users, for example, granting authority to enterprise information management personnel and to emergency plan management personnel.

Emergency commander: the commander at the time of an accident, who directs rescue operations after receiving an accident report. Emergency commanders are mainly responsible for determining accident levels, allocating emergency rescue teams, and allocating emergency rescue materials.

Enterprise users: responsible for managing each piece of data in the enterprise and performing operations such as adding, modifying, and deleting data. The division of specific data management duties is carried out by the second-level administrator.

4.2.2. Functional Requirements of the System

The enterprise data management module includes three parts: enterprise basic information management, enterprise employee information management, and enterprise equipment information management. Enterprise users are responsible for entering, modifying, and deleting enterprise basic information, enterprise employee information, and enterprise equipment information. Supervisors use the enterprise set-and-check function to gain access to the various enterprise data.

Emergency data management includes five parts: emergency plan management, emergency drill management, emergency material management, policy and regulation management, and expert information management. An emergency plan is the disposal plan for an accident occurring in the enterprise, and an emergency drill is a regular drill carried out according to the emergency plan. Emergency plans, emergency drills, and emergency materials are managed by enterprise personnel. Policies and regulations include documents issued by the state, which are the basis for enterprises to formulate emergency plans. Expert information management covers experts from all walks of life, who provide professional advice and assist in emergency command in the event of an accident.

The emergency command management module includes a series of processes for SMS template management and emergency command. SMS templates are some short messages formulated in advance to quickly notify emergency commanders and emergency rescuers in the event of an accident. Emergency command includes several functions of accident registration, accident grade determination, distribution of rescue groups, distribution of rescue materials, and post-processing.

4.2.3. Non-functional Requirements of the System

The system server environment is given in Table 4. The system's non-functional requirements mainly comprise the following three points: (1) External environment security: emergency measures in case of hardware failure and network failure should be fully considered to ensure the uninterrupted operation of the system. (2) Internal security: while ensuring external security, the system must also ensure data security and prevent leakage of confidential information through user authorization verification, data encryption, and so on. (3) Fault tolerance: when illegal data are generated by user mis-operation or other causes, the system shall have a fault tolerance mechanism able to isolate the illegal data or automatically repair and correct them.

5. Realization of Urban Intelligent Emergency Management System

5.1. Overall System Architecture

The purpose of the emergency management system designed in this paper is to provide a subdivided information service platform for society and enterprises, and to use computer technology to assist users in their work. It is a typical Web application program. The system adopts a three-layer architecture in B/S mode, which divides the whole software into presentation layer, application layer, and data layer.

The presentation layer is located at the top layer and provides an interactive interface for the user in the browser. The operation performed by the user will be sent to the application layer through the browser in the form of HTTP, and the application layer will return the result to the browser after processing. The application layer is in the middle, which is the main part of software system development and realizes the specific logic of the business. Generally, user requests involve access to data, so the application layer also needs to access the database. The specific logic of the business is closely linked with database operations. The data layer is located at the bottom layer and is the foundation of the entire application system. The data layer refers to the database and provides data storage and query functions. When the database receives the SQL statement from the application layer, it will perform query cache, generate parse tree, preprocess, generate execution plan, query storage engine, and finally get the query result.

The advantages of the B/S architecture are as follows. (1) Simple to use: there is no need to install and maintain software, and there are almost no hardware requirements for the client. Using only a browser, users can obtain the various services provided by the server via the HTTP protocol. At the same time, in the three-tier architecture only the presentation layer is shown to the user; the application layer and the data layer are completely transparent, so the system is very simple to use. (2) Cross-platform: the traditional C/S architecture is closely tied to the computer's operating system, and when the operating system changes, the developed software becomes useless, whereas the B/S architecture can access the service as long as a browser is available on the computer. Figure 7 shows the overall architecture of the system. (3) Easy maintenance and subsequent expansion: because of the hierarchical design, each layer realizes only the functions of that layer, and interaction between layers occurs through calls to the corresponding modules of adjacent layers, so the layers are only weakly dependent on one another. This makes it easy for developers to maintain, modify, and extend the code later, and changes do not require excessive work caused by overly coupled code.

5.2. System Database Design

According to standard design methods, the development of a database application system can be divided into six stages: requirement analysis, conceptual structure design, logical structure design, physical structure design, database implementation, and database operation and maintenance. Because the overall requirement analysis and the selection of an appropriate database were completed in the previous stage, this section mainly discusses the conceptual, logical, and physical structure design of the database.

5.2.1. Enterprise Data Management

Enterprise basic information management includes three functions: adding, deleting, modifying, and checking enterprise basic information; setting and checking enterprises; and marking enterprise geographic information. The addition, deletion, modification, and checking of basic enterprise information are managed by enterprise users. Each enterprise corresponds to one piece of basic information, including the company's registration number, industry nature, legal representative, geographic location, and other details, which is convenient for supervisors to query. The enterprise set-and-check function is used by supervisors, who through it obtain query permission for the employee and equipment information of the corresponding enterprise. The enterprise geographic information labeling module uses a GIS map: by placing a marker on the map, the longitude and latitude coordinates of the enterprise can be obtained.

Enterprise employee information management includes two functions: addition, deletion, modification and checking of enterprise employee information and employee training record management. The enterprise employee information includes the employee’s name, contact information, position, qualification certificate and other information, and each employee corresponds to multiple employee training records.

Enterprise equipment information management includes three functions: adding, deleting, modifying, and checking enterprise equipment information, and managing the maintenance records and replacement records of enterprise equipment. The equipment information covers specifications, model, factory number, owning enterprise, date of production, and other details. In addition to the basic functions, it also allows equipment maintenance records to be viewed by equipment.

5.2.2. Emergency Data Management

Emergency data management includes five parts: emergency material management, emergency plan management, emergency drill management, expert information management, and laws and regulations management. (1) Emergency material management is responsible for collecting the emergency material reserves of each company, including material name, model, storage location, manager, and other information; in the event of an accident, supervisors can allocate the companies' materials. (2) Emergency plan management is responsible for collecting information on each company's emergency plans, including the release time of the plan, the basis for its preparation, document attachments, and the plan type; the preparation basis of each emergency plan must be linked to laws and regulations. (3) Emergency drills are drills carried out according to the emergency plan. Emergency drill management is responsible for collecting the details of each drill, including the corresponding emergency plan type, drill location, drill unit, on-site drill pictures, drill evaluation report, and other information. (4) Expert information and laws and regulations are managed by supervisors. Expert information management collects expert information from all walks of life in Jiexiu City and notifies the relevant experts for emergency command when an accident occurs. Laws and regulations management includes documents issued by the state for the formulation of emergency plans.

5.2.3. Emergency Command and Management

The emergency command and management module is a series of procedures for handling accidents when they occur. Command and management of on-site accidents are realized through functions such as accident registration, accident reporting, SMS sending, emergency rescue team allocation, emergency rescue material allocation, and post-processing. In addition to the emergency command process, emergency command management also includes two functions: SMS template management and emergency rescue team management. SMS templates allow quick message notification when an accident occurs: system users can directly use a preset template without re-editing, which speeds up notification. The emergency rescue team is the basic unit of emergency rescue in the event of an accident, responsible for emergency rescue, medical rescue, environmental monitoring, technical support, and other work. Each emergency rescue team includes multiple members.

5.3. System Test

In actual use of the disaster prevention system, the data are flexible and changeable, and usage scenarios are particularly complicated. Therefore, this paper compares and analyzes the application of the proposed algorithm in the system. To evaluate the performance of the algorithm, this paper selects HUI-Miner, d2HUP, FHM, and UFH as comparison algorithms. The data and source code used in the experiments are from SPMF (https://www.philippe-fournierviger.com/spmf/). The characteristics of the six datasets used in the experiments are given in Table 5. Among them, chess, connect, and PUMSB are dense datasets; BMS, foodmart, and retail are sparse datasets.
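As background for the experiments, SPMF distributes high-utility datasets as text lines of the form "items : transaction utility : per-item utilities". A minimal sketch of parsing such lines and computing each item's transaction-weighted utility (TWU), the quantity the pruning in this paper relies on, might look as follows; the helper names are our own.

```python
def parse_spmf(lines):
    """Parse SPMF high-utility lines: 'i1 i2 ... : TU : u1 u2 ...'.
    Returns a list of (item -> utility dict, transaction utility)."""
    db = []
    for line in lines:
        items_s, tu_s, utils_s = line.split(":")
        items = [int(x) for x in items_s.split()]
        utils = [int(x) for x in utils_s.split()]
        db.append((dict(zip(items, utils)), int(tu_s)))
    return db

def twu(db):
    """TWU of an item = sum of the utilities of all transactions
    that contain the item."""
    out = {}
    for trans, tu in db:
        for item in trans:
            out[item] = out.get(item, 0) + tu
    return out

sample = ["1 2 3:9:2 3 4", "2 4:7:3 4"]
db = parse_spmf(sample)
# twu(db) -> {1: 9, 2: 16, 3: 9, 4: 7}
```

Item 2 appears in both transactions, so its TWU is 9 + 7 = 16; the others appear once and inherit a single transaction utility.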

The experimental platform and related environment versions used in this paper are shown in Table 6.

All the compared algorithms mine the complete set of high-utility itemsets, so execution time is an important indicator for evaluating their performance. For each dataset, the threshold is set at a value where the performance of the algorithms clearly differs. The detailed execution times are shown in Figure 8. According to Figure 8, the EIM-DS algorithm performs best on the foodmart and PUMSB datasets. On the BMS dataset, the EIM-DS and UFH algorithms are 2–3 orders of magnitude faster than the other algorithms. On the chess dataset, the EIM-DS and d2HUP algorithms lead the other algorithms by nearly an order of magnitude. On the retail dataset, the EIM-DS algorithm performs slightly worse than the others. Overall, the execution time of EIM-DS is less than 1000 s on both sparse and dense transaction sets, indicating that its overall performance far exceeds that of the other four algorithms. The EIM-DS algorithm owes its excellent execution-time performance to a more efficient data structure and a more effective pruning strategy: the new dataset structure (ITD) computes the utility value of an itemset more quickly, and the pruning strategy greatly reduces the number of candidates, thereby reducing execution time.
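The pruning principle credited above can be shown in miniature. The brute-force miner below applies the standard TWU upper bound (an item whose TWU is below the threshold cannot belong to any high-utility itemset) before enumeration; it illustrates why pruning shrinks the candidate space, and is not the EIM-DS algorithm itself.

```python
from itertools import combinations

def mine_hui(db, min_util):
    """Toy high-utility itemset miner with TWU pruning.
    db: list of {item: utility} transactions."""
    # Step 1: compute each item's TWU (sum of utilities of the
    # transactions containing it).
    twu = {}
    for trans in db:
        tu = sum(trans.values())
        for item in trans:
            twu[item] = twu.get(item, 0) + tu
    # Step 2: TWU pruning - discard items that cannot reach min_util.
    promising = [i for i in sorted(twu) if twu[i] >= min_util]
    # Step 3: enumerate itemsets over the surviving items only.
    results = {}
    for r in range(1, len(promising) + 1):
        for itemset in combinations(promising, r):
            u = sum(sum(t[i] for i in itemset)
                    for t in db if all(i in t for i in itemset))
            if u >= min_util:
                results[itemset] = u
    return results

toy_db = [{1: 2, 2: 3, 3: 4}, {2: 3, 4: 4}]
# mine_hui(toy_db, 6) includes (1, 2, 3) with utility 2 + 3 + 4 = 9.
```

Real miners such as EIM-DS replace the exponential enumeration in step 3 with a depth-first search over a compact structure and additional pruning during the search; the TWU bound shown here is only the first of those filters.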

Figure 9 presents the maximum memory usage of the five algorithms on the six datasets at the thresholds of Figure 8. On the BMS, connect, PUMSB, and retail datasets, the EIM-DS algorithm shows a smaller memory footprint than the other four algorithms. On the chess dataset, the memory footprint of EIM-DS is higher only than that of d2HUP. On the foodmart dataset, the EIM-DS algorithm is inferior to the d2HUP and FHM algorithms, but its memory usage is only about 5 MB higher than that of FHM. On smaller datasets, the EIM-DS algorithm reconstructs the dataset and therefore occupies additional memory; when the total memory usage is low, this reconstructed dataset accounts for a larger share of it, which affects the results. On larger datasets such as connect, however, the UFH, FHM, HUI-Miner, and d2HUP algorithms all use more than twice the memory of EIM-DS. Overall, the maximum memory usage of the EIM-DS algorithm never exceeds 1 GB, and its overall memory performance is far better than that of the other algorithms. The reconstructed dataset (ITD) effectively reduces memory usage during the depth-first search, and the one-dimensional set TA created during pruning greatly reduces the memory that pruning occupies. These two strategies are the main reason the memory usage is so stable.

The memory usage of an algorithm is affected not only by the size of the dataset but also by the length of the itemsets, and the rate of memory growth demonstrates the robustness of the algorithm in terms of memory. Since the longest high-utility itemsets of the small datasets are relatively short, the comparison experiment cannot be completed on them, so this paper chooses the larger datasets connect and PUMSB. Memory usage is recorded under different maximum high-utility itemset length settings, and the experimental results are shown in Figure 10. It can be clearly observed that the memory of the EIM-DS algorithm grows much more slowly than that of the other algorithms, which shows that EIM-DS has the best robustness in terms of memory. As the high-utility itemset length ranges from 3 to 15, the memory usage of the EIM-DS algorithm increases by only 103 MB on connect and 36 MB on PUMSB, whereas the other algorithms increase by at least 950 MB on connect and at least 650 MB on PUMSB.

6. Conclusions

Comparing the experimental results: in the candidate-point experiment, thanks to the cyclic TWU pruning performed before the search and the pruning by extension-set utility and local TWU during the search, the number of candidate points generated by the EIM-DS algorithm was much smaller than that of the HUI-Miner and FHM algorithms. In the execution-time experiment, the EIM-DS algorithm ranked first or tied for first on most datasets. Owing to the improved dataset structure and the improvement strategies built on the reconstructed dataset, EIM-DS outperformed the UFH, FHM, d2HUP, and HUI-Miner algorithms in time on both sparse and dense datasets. In the memory experiment, the new dataset structure and compressed storage structure consumed a certain amount of memory, but on most datasets the EIM-DS algorithm still outperformed the UFH, FHM, d2HUP, and HUI-Miner algorithms. In the experiment on the impact of mining depth on memory, the memory growth of the EIM-DS algorithm was much smaller than that of the comparison algorithms. From these comprehensive results, it can be concluded that the EIM-DS algorithm outperforms the UFH, FHM, d2HUP, and HUI-Miner algorithms in both time and memory, and that the system runs stably.

Data Availability

Data sharing is not applicable to this article as no new data were created or analyzed in this study.

Conflicts of Interest

The author declares that this article has no conflicts of interest.

Acknowledgments

This work was supported by the Postgraduate Scientific Research Innovation Project of Xiangtan University: “Research on the Construction of ‘Soft Power’ of China’s Emergency Management System and Capacity” (XDCX2021B028).