Data Warehouses, Decision Support Systems, and Deep Technologies During the Global COVID-19 Pandemic
Introduction
Data storage for businesses involves recording information such as stocks, raw materials, deposits, and other details related to the daily operations of the business, and the architecture of such a system must be aimed at data management. Data warehousing uses technologies that allow data from multiple sources to be compared and analyzed so that businesses can use the consolidated data to make decisions (Simion, et al. [1]). The database is built to match the volume and requirements of the system and to help project managers and organizational managers make decisions about the development of the business structure or its further daily operations (Simion, et al. [1]). Furthermore, database applications improve reliability and efficiency for the user, along with the ability to make decisions, store and update data, and obtain answers through reports (Simion, et al. [1]). Communication with the essential departments of the organization is facilitated by the dialog component. The simplest form of analysis is comparing the data with similar information; other analyses require analytical techniques grounded in mathematical theory, in which the products of a hypothetical model are compared with actual data (Simion, et al. [1]).
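To make the consolidation-and-reporting step concrete, the short sketch below is a minimal illustration, not drawn from Simion, et al. [1]; the table names, columns, and figures are hypothetical. It combines stock records and raw-material deposits from two operational sources and produces the kind of summary report a manager might use for a reorder decision.

```python
import pandas as pd

# Hypothetical operational sources: on-hand stock and raw-material deposits on order.
stock = pd.DataFrame({
    "item":      ["steel", "steel", "copper", "copper"],
    "warehouse": ["north", "south", "north", "south"],
    "units":     [120, 80, 45, 60],
})
deposits = pd.DataFrame({
    "item":           ["steel", "copper"],
    "units_on_order": [200, 30],
})

# Consolidate the two sources on a shared key, as a warehouse load step would.
consolidated = (
    stock.groupby("item", as_index=False)["units"].sum()
         .merge(deposits, on="item", how="left")
)

# A simple decision-support report: flag items whose on-hand stock is lower
# than what is already on order.
consolidated["reorder_flag"] = consolidated["units"] < consolidated["units_on_order"]
print(consolidated)
```

In a production warehouse the same pattern would be expressed as scheduled load jobs and reporting views rather than in-memory frames, but the consolidate-then-report flow is the same.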
Deep Technologies
Over the past few decades, society has become markedly more data driven. With the rise of social media, understanding how data and subjects relate to one another calls for a data processing method that considers emotion, sense, everyday practices, and the nexus between data and data technologies; Big Data, the Internet of Things (IoT), and Artificial Intelligence may well be the future of all data transactions (Lee A [2]). Future data technologies must take all these factors into consideration and process data at rates far exceeding current speeds and capacities, and parallel processing is beginning to play a key role in data interpretation. The benefit of using these technologies therefore lies in capturing the scalar nature of data and focusing on the socio-technical processes behind data applications. The analysis begins with a conceptualization of the Personhood of Data, which highlights how distant the subject is from the data and yields a conceptual language for analyzing the scalar nature of that data (Brynjolfsson Jin, et al. [3]).
Data then becomes a positivistic resource: each bit of datafied representation highlights the humanity of the subject through a scientific representation of the subject's world. With deep technologies, data can be processed and analyzed not only to humanize daily data activities, stresses, emotions, and sensemaking, but also to individualize our humanity within the surveillance of that data, with separate, scientific representations of new data structures capturing the distant relationship between data and the embodied life represented by social media and activities that only deep technologies are capable of analyzing (Lee A [2]). The simplest form of analysis is comparing the data with similar, synthesized data. In addition, information can gain quality through graphical representation techniques that make these correlations visible, analytical observation techniques based on mathematical theories, comparison of actual data with the theoretical products of a hypothetical model, or automatic, data-driven observation techniques.
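One way to read that comparison is to place observed values next to the predictions of a hypothetical theoretical model and measure how closely they agree. The sketch below is illustrative only: the data are synthetic and the linear model is an assumption introduced here, not one proposed by the cited authors. It computes the correlation and residuals that a graphical comparison of actual and theoretical data would visualize.

```python
import numpy as np

# Synthetic "actual" observations and a hypothetical theoretical model.
rng = np.random.default_rng(0)
x = np.arange(30, dtype=float)
actual = 2.0 * x + 5.0 + rng.normal(scale=3.0, size=x.size)  # observed data with noise
theoretical = 2.0 * x + 5.0                                  # model prediction

# Correlation between observation and theory, and the residual spread.
corr = np.corrcoef(actual, theoretical)[0, 1]
residuals = actual - theoretical
print(f"correlation with model: {corr:.3f}")
print(f"mean absolute residual: {np.abs(residuals).mean():.2f}")
```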
Big Data
The Internet of Things (IoT), Big Data, and Neural Networks are regarded as emerging networks that can support complex automatic monitoring, identification, and management through a network of smart devices and parallel processing, as in a neural network. Big Data has emerged as a viable and sustainable approach for analyzing massive amounts of data from many sources, including social media, sensors, and actuators, to support sound decisions: devices are linked together and consistent algorithms are developed to analyze, manipulate, and manage the connected systems, so that the resulting bulk of data can be used for smarter decision-making and post-analysis for various purposes (Brynjolfsson Jin, et al. [3]).
The IoT is a set of disparate devices connected over a common network, and the data those devices generate is analyzed with techniques such as Big Data analytics. The efficient use of the IoT in multiple areas has helped improve productivity and reduce errors (Brynjolfsson Jin, et al. [3]). Because smart devices are linked to the network, they can support smarter decisions and post-analysis for various purposes; in other words, connecting these devices through Big Data improves the management of data and of the devices' limited resources, including power efficiency. Owing to the inherent nature of Big Data, including the 7 Vs, improvements can be made in networking capability, along with approaches for recovery, constrained energy, and the huge bulk of storage on the cloud. Data correlation and the multiple characteristics of sensory data can therefore be improved with the use of Big Data and deep technologies (Chu, et al. [4]), as the small sketch below illustrates.
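The snippet below is a toy illustration of the data-correlation point: the sensor names and readings are invented and are not taken from Chu, et al. [4]. It computes pairwise correlations across a few simulated IoT sensor streams, the kind of summary a Big Data pipeline would produce at far larger scale.

```python
import numpy as np
import pandas as pd

# Simulated readings from three IoT sensors sampled once per minute.
rng = np.random.default_rng(1)
temp = 20 + rng.normal(0, 0.5, 1_000).cumsum() * 0.01
humidity = 50 - 0.8 * temp + rng.normal(0, 0.3, 1_000)
power = 5 + 0.1 * temp + rng.normal(0, 0.2, 1_000)

readings = pd.DataFrame({"temp": temp, "humidity": humidity, "power": power})

# Pairwise correlation matrix across the sensor streams.
print(readings.corr().round(2))
```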
Emerging Diseases and Deep Technologies
When using deep technologies, data scientists prefer R or Python; however, these languages are limited in their speed and memory capacity. Scalability is one of the most important considerations when using machine learning models and parallel data summarization (Al Amin, et al. [5]). A more effective computational data model can therefore be built with parallel data summarization: the model requires only a small amount of memory (i.e., RAM), and the algorithm works in three phases, producing a broad class of statistical and machine learning models that can handle datasets much larger than main memory. Implemented with vector-vector outer products in C++ code to avoid the bottlenecks that can occur in deep learning, this system is faster than other parallel systems (e.g., Spark, Hive, Cassandra) while still processing datasets larger than main memory, which could become important in cases such as epidemiological disease tracing (Al Amin, et al. [5]).
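The following is a rough, simplified reading of that summarization idea, written in Python for illustration although the cited system is implemented in C++; the block sizes and data are hypothetical. Each block of rows is reduced to a small matrix built from vector outer products, the per-block matrices are added together, and the final matrix is enough to recover counts, sums, and cross-products without ever holding the full dataset in RAM.

```python
import numpy as np

def summarize_block(block: np.ndarray) -> np.ndarray:
    """Reduce one block of rows to a (d+1) x (d+1) summary matrix.

    Each row x is extended with a leading 1 and the outer products are
    accumulated, so the result holds n, the column sums, and X^T X.
    """
    z = np.hstack([np.ones((block.shape[0], 1)), block])
    return z.T @ z  # sum of outer products z_i z_i^T

def summarize_stream(blocks) -> np.ndarray:
    """Phase 1: summarize blocks independently; Phase 2: add the summaries."""
    total = None
    for block in blocks:
        gamma = summarize_block(block)
        total = gamma if total is None else total + gamma
    return total

# Phase 3 (illustrative): derive statistics from the summary matrix alone.
rng = np.random.default_rng(2)
blocks = (rng.normal(size=(1_000, 3)) for _ in range(10))  # 10 chunks of synthetic data
gamma = summarize_stream(blocks)

n = gamma[0, 0]          # number of rows seen
col_sums = gamma[0, 1:]  # per-column sums
means = col_sums / n
print(n, means.round(3))
```

Because the per-block summaries are independent, they can be computed in parallel and merged with a single matrix addition, which is where the speed and memory advantage of this style of summarization comes from.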
Big Data plays a variety of important roles that critically support the world's manufacturing, legal, financial, cybersecurity, and medical systems. Through open-source platforms like Hadoop, information about emerging diseases is shared among governmental and nongovernmental facilities, predictions are made with computational models, and cybersecurity is maintained for those who must shelter in place and work from home (Ahouz, et al. [6]). The models of the Johns Hopkins University Center for Systems Science and Engineering (JHU CSSE) have tracked actual live data as closely as any others, with error rates of 4.71%, 8.54%, and 6.13% for infection, death, and hospitalization rates, respectively; these computational model error rates were measured against actual figures obtained through data mining with Big Data analytics (Gupta, et al. [7,8]).
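For context on how such error rates are typically computed, the fragment below evaluates a mean absolute percentage error over predicted and reported counts; the numbers are invented for illustration and are not the JHU CSSE figures cited above.

```python
import numpy as np

def mape(actual, predicted) -> float:
    """Mean absolute percentage error between reported and predicted counts."""
    actual = np.asarray(actual, dtype=float)
    predicted = np.asarray(predicted, dtype=float)
    return float(np.mean(np.abs(actual - predicted) / actual) * 100)

# Hypothetical daily case counts: reported vs. model output.
reported  = np.array([1200, 1350, 1500, 1650, 1800])
predicted = np.array([1150, 1400, 1430, 1700, 1750])
print(f"error rate: {mape(reported, predicted):.2f}%")
```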
Conclusion
The strengths of Big Data analytics cannot be overstated in this data-driven society, where everything from what is written on social media to the demographic makeup of the pandemic's victims can be fed into computational models for determining who may become future victims of the pandemic. In the models produced by JHU CSSE, a high rate of error was unavoidable at the beginning of the pandemic; contributing to the high error rates were the lack of knowledge about the disease, unknown diagnostics, and unknown patterns of susceptibility. As time has shown, however, growing knowledge of the factors surrounding the pandemic has reduced the error rates relative to the pandemic.