The article presents preliminary results of a conceptual analysis of the mechanistic profile of the computer metaphor. Mechanistic reductionism is a distinct strand of the computer metaphor, rooted in various historical forms of word usage. Here we trace how the principles of transferring the properties of a mechanical computer to the human body and mind took shape. We also seek to identify the basic principles of semantic transfer that survive today in the discourse of modern computationalism. We analyze the reasons why the metaphor «Human (body) is a machine», traditional for the Modern Age, was transformed into the more complex «Mind is a machine». What happened to the concepts of «mind» and «machine»? How have ideas about the properties of computational procedures changed? What keeps the counterintuitive computer metaphor viable today? The paper offers answers to this series of questions and outlines theoretical ways of addressing these problems.
In technology research, far more has been achieved in visual and auditory analysis than in the perception of smells. This article discusses the latest advances in electronic nose (E-nose) systems used to recognize the olfactory code. Existing studies can be roughly divided by their goals into (1) research aimed at the practical application of electronic systems in areas such as the food industry, medicine, and the environment, and (2) research into the cognitive processes and psycho-emotional reactions of a person during olfactory experience, as well as the ability of a machine to predict human reactions to various smells. In the latter case, language plays an important role as a representation of olfactory sensations. Methods of conceptual and semantic analysis are increasingly in demand in studies devoted to machine recognition of the olfactory code.
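As an illustrative aside (not part of the article itself), the sketch below shows what "machine recognition of the olfactory code" can look like in practice: a classifier mapping gas-sensor array readings to odour labels. The sensor layout, odour classes, and synthetic data are assumptions chosen purely for illustration.

```python
# Illustrative sketch only: a minimal E-nose-style classifier that maps
# gas-sensor array readings to odour labels. The sensor responses below are
# synthetic stand-ins; real E-nose studies use measured sensor data.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(0)

# Hypothetical setup: 8 metal-oxide sensors, 3 odour classes (made up here).
N_SENSORS, N_SAMPLES = 8, 300
labels = rng.integers(0, 3, size=N_SAMPLES)               # e.g. 0=coffee, 1=citrus, 2=smoke
centroids = rng.normal(0.0, 1.0, size=(3, N_SENSORS))     # class-specific response profiles
readings = centroids[labels] + rng.normal(0.0, 0.3, size=(N_SAMPLES, N_SENSORS))

X_train, X_test, y_train, y_test = train_test_split(readings, labels,
                                                    test_size=0.25, random_state=0)
clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X_train, y_train)
print("held-out accuracy:", accuracy_score(y_test, clf.predict(X_test)))
```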
Scientific and technological progress inexorably permeates all spheres of society, regardless of its wishes. Information technologies transform the daily life of every person to such an extent that society itself is commonly called an information society. It is impossible to study the processes of the information society without a fundamental understanding of the concept itself. Such an understanding creates a basis for analyzing the dynamics of the transformation processes that evolve along with the developing society. The only way to proceed successfully is a comprehensive, systematic approach based on a synthesis of theories, principles, and research techniques capable of providing both growth in socio-philosophical knowledge and practical efficiency in constructing and managing transformation processes.
This defines the goal of the article: to formulate our own description of the information society, including a definition and a list of its key characteristics. The goal, in turn, sets the article's task: to prepare a theoretical basis for further, more in-depth research into the prospects of information society development.
The article discusses strategies for translating «machine texts», using generative pre-trained transformers (GPT) as an example. The study and development of machine text generation has become an important task in processing and analyzing texts in different languages. Modern artificial intelligence and neural network technologies make it possible to create powerful tools in this field, and these tools become more effective every year. Generative transformers are one such tool. Studying generative transformers also allows developers to create more accurate and efficient machine translation algorithms, improving both translation quality and user experience. In this context, the features of machine texts created by generative transformers, along with their patterns, errors, and imperfections that call for special translation strategies, deserve particular attention. Today the generation of unique and relevant texts can be considered a routine, automated task. Nevertheless, certain restrictions on the use of such texts remain; in particular, they require appropriate translation strategies. The paper proposes the author's typology of translation strategies, in which, taking into account the features of AST, a substrategy of tertiary-moderation translation is added.
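As an illustrative aside (not part of the article itself), the following minimal sketch shows how a machine-generated text might be passed through a pretrained transformer translation model. The choice of library call and model checkpoint is an assumption for illustration; the article's own translation strategies concern the human post-editing of such output, not this code.

```python
# Illustrative sketch only: translating a "machine text" with a pretrained
# transformer model via the Hugging Face transformers pipeline. The model
# name is an assumption; any seq2seq translation checkpoint could be used.
from transformers import pipeline

# Load a publicly available Russian-to-English translation model.
translator = pipeline("translation", model="Helsinki-NLP/opus-mt-ru-en")

machine_text = "Это текст, сгенерированный нейросетью."  # a GPT-style machine text
result = translator(machine_text, max_length=128)
print(result[0]["translation_text"])  # raw output, before any human translation strategy is applied
```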
Over the last ten years, industry has been characterized by a high level of digital transformation affecting all layers of production and all areas of the economy. One of the key trends in the digitalization of industry is the digital twin: a system that combines a physical object, its digital model, and a continuous link between the two. However, integrating such a complex technological solution involves a number of barriers that, in one way or another, confront technology users. The authors analyzed 100 examples of digital twin use in industry against 7 criteria, including the scope of application, the most problematic stage of integration, key technological, organizational, ecosystem, and economic barriers, and recommendations for overcoming these barriers that have been applied or are being applied. The article presents statistical data and infographics supporting the key conclusions about the main barriers to digital twin integration, chief among them the lack of devices and equipment with the necessary capacity, the absence of established implementation strategies, the lack of industry regulation standards, and high initial investment. Each block of barriers is accompanied by real cases of digital twin use in domestic and foreign industry, involving companies such as KamAZ, UEC-Klimov, Tesla, Volvo, Norske Shell, and others.
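As an illustrative aside (not part of the article itself), a case-by-criteria analysis of the kind described above can be sketched as a simple tally of barrier categories across case records. The case entries below are invented placeholders, not the authors' dataset.

```python
# Illustrative sketch only: tallying barrier categories across digital twin
# case records, in the spirit of the article's 100-case analysis.
# The data here are invented placeholders.
from collections import Counter

cases = [
    {"company": "ExampleCo A", "barriers": ["computing capacity", "high initial investment"]},
    {"company": "ExampleCo B", "barriers": ["no implementation strategy", "regulation standards"]},
    {"company": "ExampleCo C", "barriers": ["computing capacity", "no implementation strategy"]},
]

barrier_counts = Counter(b for case in cases for b in case["barriers"])
for barrier, count in barrier_counts.most_common():
    print(f"{barrier}: {count} case(s)")
```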