Most Cited Article – Theranostic Applications of Stimulus-Responsive Systems based on Fe2O3

Author(s): Mehrab Pourmadadi, Mohammad Javad Ahmadi, Homayoon Soleimani Dinani, Narges Ajalli and Farid Dorkoosh*

Volume 10, Issue 2, 2022

Published on: 21 April, 2022

Page: [90 – 112]

Pages: 23

DOI: 10.2174/2211738510666220210105113

Abstract

Owing to the interaction of nanoparticles with biological systems, enthusiasm for nanotechnology in biomedical applications has grown over the past decades. Fe2O3 nanoparticles, the most stable iron oxide, have special merits that make them widely useful for detecting diseases, therapy, drug delivery, and monitoring the therapeutic process. This review presents the fabrication methods of Fe2O3-based materials and their photocatalytic and magnetic properties. We then highlight the application of Fe2O3-based nanoparticles in diagnosis and imaging, different therapy methods, and finally stimulus-responsive systems, such as pH-responsive, magnetic-responsive, redox-responsive, and enzyme-responsive systems, with an emphasis on cancer treatment. In addition, the potential of Fe2O3 to combine diagnosis and therapy within a single particle, called a theranostic agent, is discussed. Read now: https://bit.ly/3St6XsJ

Aims & Scope – Pharmaceutical Nanotechnology

ISSN (Print): 2211-7385
ISSN (Online): 2211-7393

Volume 10, Issue 5, 2022

Aims & Scope

Pharmaceutical Nanotechnology publishes original manuscripts, reviews, thematic issues, rapid technical notes and commentaries that provide insights into the synthesis, characterisation and pharmaceutical (or diagnostic) application of materials at the nanoscale. The nanoscale is defined as a size range of below 1 µm. Scientific findings related to micro and macro systems with functionality residing within features defined at the nanoscale are also within the scope of the journal. Manuscripts detailing the synthesis, exhaustive characterisation, biological evaluation, clinical testing and/or toxicological assessment of nanomaterials are of particular interest to the journal’s readership. Articles should be self-contained, centred on a well-founded hypothesis, and should aim to showcase the pharmaceutical/diagnostic implications of the nanotechnology approach. Manuscripts should aim, wherever possible, to demonstrate the in vivo impact of any nanotechnological intervention. As reducing a material to the nanoscale is capable of fundamentally altering the material’s properties, the journal’s readership is particularly interested in new characterisation techniques and the advanced properties that originate from this size reduction. Both bottom-up and top-down approaches to the realisation of nanomaterials lie within the scope of the journal. Read now: https://bit.ly/3Sqf61b

Editors Choice – Fingerprint Presentation Attack Detection in Open-Set Scenario Using Transient Liveness Factor

Author(s): Akhilesh Verma*, Vijay Kumar Gupta and Savita Goel

Volume 14, Issue 8, 2021

Published on: 23 April, 2020

Article ID: e180122181249

Pages: 9

DOI: 10.2174/2666255813999200423123033

Abstract

Background: In recent years, fingerprint presentation attack detection (FPAD) proposals have appeared in a variety of forms. A closed-set approach uses a pattern classification technique that best suits a specific context and goal. An open-set approach works in a wider context and is relatively robust to new fabrication materials and independent of sensor type. In both cases, results were promising but not readily generalizable, because unseen conditions did not fit the methods used. It is clear that the two key challenges in FPAD systems, sensor interoperability and robustness to new fabrication materials, have not been addressed to date.

Objective: To address the above challenges, a liveness detection model is proposed that uses only live samples, a transient liveness factor and a one-class CNN.

Methods: In our architecture, liveness is predicted using a fusion rule: score-level fusion of two decisions. Initially, ‘n’ high-quality live samples are trained for quality. We observed that fingerprint liveness information is ‘transitory’ in nature; variation across different live samples is natural. Thus, each live sample carries ‘transient liveness’ (TL) information. We use a no-reference (NR) image quality measure (IQM) as the transient value corresponding to each live sample. A consensus agreement is collectively reached over these transient values to predict adversarial input. Further, live samples at the server are trained, with augmented inputs, on a one-class classifier to predict outliers. Score-level fusion of the consensus agreement and the appropriately characterized negative cases (or outliers) then predicts liveness, as sketched below.
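
The sketch below illustrates the score-level fusion idea in Python: a consensus score over no-reference quality values of enrolled live samples is fused with a one-class classifier decision. The variance-based quality measure, the synthetic samples and the fusion weights are stand-ins for demonstration, not the authors' implementation.

```python
# Minimal sketch of the described fusion: consensus over "transient liveness"
# quality values plus a one-class classifier, combined at score level.
import numpy as np
from sklearn.svm import OneClassSVM

def iqm(image: np.ndarray) -> float:
    """Stand-in no-reference image quality measure (a real system might use
    BRISQUE/NIQE); here we simply use local intensity variance."""
    return float(np.var(image))

rng = np.random.default_rng(0)
# 'n' high-quality live fingerprint samples enrolled at the liveness server
live_samples = [rng.normal(0.5, 0.1, (96, 96)) for _ in range(30)]

# transient liveness values of the enrolled live samples and their consensus
tlf_values = np.array([iqm(s) for s in live_samples])
consensus_mean, consensus_std = tlf_values.mean(), tlf_values.std()

# one-class classifier trained only on live feature vectors
occ = OneClassSVM(kernel="rbf", gamma="scale", nu=0.1).fit(tlf_values.reshape(-1, 1))

def is_live(query: np.ndarray, w: float = 0.5, threshold: float = 0.5) -> bool:
    """Score-level fusion of consensus agreement and one-class decision."""
    q = iqm(query)
    # score 1: agreement with the consensus of enrolled transient values (0..1)
    z = abs(q - consensus_mean) / (consensus_std + 1e-9)
    consensus_score = float(np.exp(-0.5 * z ** 2))
    # score 2: one-class SVM decision mapped to (0..1)
    occ_score = 1.0 / (1.0 + np.exp(-occ.decision_function([[q]])[0]))
    return w * consensus_score + (1 - w) * occ_score >= threshold

print(is_live(live_samples[0]))                     # genuine live sample
print(is_live(rng.normal(0.5, 0.4, (96, 96))))      # atypical (outlier) input
```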

Results: Our approach uses only 30 high-quality live samples, out of the 90 images available in the dataset, to reduce learning time. We used time-series images from the LivDet 2015 competition, which provides 90 live images and 45 spoof images made from Bodydouble, Ecoflex and Playdoh for each person. The fusion rule achieves 100% accuracy in recognising live as live.

Conclusion: We have presented an architecture for a liveness server that extracts and updates the transient liveness factor. Our work is a significant step towards a generalized and reproducible process, with a view to the universal scheme that is needed today. The proposed TLF approach rests on a solid presumption: it should address dataset heterogeneity because it incorporates a wider scope and context. Similar results with other datasets are under validation. Implementation seems difficult now but offers several advantages when carried out during the transformative process. Read now: https://bit.ly/3oTElLV

Editors Choice – LWT-DCT based Image Watermarking Scheme using Normalized SVD

Author(s): Rahul Dixit, Amita Nandal*, Arvind Dhaka, Vardan Agarwal and Yohan Varghese Kuriakose

Volume 14, Issue 9, 2021

Published on: 21 August, 2020

Page: [2976 – 2991]

Pages: 16

DOI: 10.2174/2666255813999200821161656

Abstract

Background: Nowadays, information security is one of the most significant issues for social networks. Multimedia data can be tampered with, and attackers can then claim its ownership. Image watermarking is a technique used for copyright protection and authentication of multimedia.

Objective: We aim to create a new and more robust image watermarking technique to prevent illegal copying, editing and distribution of media.

Method: The watermarking technique proposed in this paper is non-blind and employs the Lifting Wavelet Transform on the cover image to decompose it into four coefficient matrices. The Discrete Cosine Transform is then applied to separate a selected coefficient matrix into different frequencies, after which Singular Value Decomposition is applied. Singular Value Decomposition is also applied to the watermark image, and its singular values are added to the singular matrix of the cover image, which is then normalized. The inverse Singular Value Decomposition, inverse Discrete Cosine Transform and inverse Lifting Wavelet Transform are applied in turn to obtain the embedded image. Normalization is proposed as an alternative to the traditional scaling factor. A simplified sketch of this pipeline follows.
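
The Python sketch below approximates this embedding pipeline. It substitutes an ordinary Haar DWT (via pywt.dwt2) for the lifting wavelet transform and a scaling factor alpha for the paper's normalization step, so it is an illustration of the general approach, not the authors' code.

```python
# Illustrative wavelet-DCT-SVD watermark embedding, as described above.
import numpy as np
import pywt
from scipy.fft import dctn, idctn

def embed(cover: np.ndarray, watermark: np.ndarray, alpha: float = 0.05) -> np.ndarray:
    # 1) wavelet decomposition of the cover image into four coefficient matrices
    LL, (LH, HL, HH) = pywt.dwt2(cover.astype(float), "haar")
    # 2) DCT of the selected (LL) band
    D = dctn(LL, norm="ortho")
    # 3) SVD of the DCT coefficients and of the watermark
    Uc, Sc, Vct = np.linalg.svd(D, full_matrices=False)
    _, Sw, _ = np.linalg.svd(watermark.astype(float), full_matrices=False)
    # 4) add the watermark singular values to the cover singular values
    S_embedded = Sc + alpha * Sw
    # 5) inverse SVD, inverse DCT and inverse wavelet transform
    D_marked = Uc @ np.diag(S_embedded) @ Vct
    LL_marked = idctn(D_marked, norm="ortho")
    return pywt.idwt2((LL_marked, (LH, HL, HH)), "haar")

cover = np.random.rand(256, 256)
watermark = np.random.rand(128, 128)   # same size as the LL band of a 256x256 cover
marked = embed(cover, watermark)
print(marked.shape)
```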

Results: Our technique is tested against attacks like rotation, resizing, cropping, noise addition and filtering. The performance comparison is evaluated based on Peak Signal to Noise Ratio, Structural Similarity Index Measure, and Normalized Cross-Correlation.

Conclusion: The experimental results prove that the proposed method performs better than other state-of-the-art techniques and can be used to protect multimedia ownership. Read now: https://bit.ly/3oXsXyq

Editors Choice – Assessment of Risks for Successful Implementation of Industry 4.0

Author(s): Rimalini Gadekar*, Bijan Sarkar and Ashish Gadekar

Volume 15, Issue 1, 2022

Published on: 28 September, 2020

Page: [111 – 130]

Pages: 20

DOI: 10.2174/2666255813999200928215915

Abstract

Purpose: The transformation happening globally is referred to by different names and nomenclatures, but its overall objective is the same: to drive digitalization and smart practices by reducing human intervention and enhancing machine intelligence, taking global manufacturing and production to another level of excellence. However, earlier research has lacked a strategic approach for evaluating and analyzing the risks related to Industry 4.0 (I4.0) adoption and implementation, which has deprived organizations of many of the benefits of I4.0 adoption. This research proposes a systematic methodology for understanding and evaluating the most evident risks in the context of I4.0 implementation.

Design/Methodology/Approach: The research is mainly based on inputs from experts and consultants, a robust literature review, and the researchers’ experience in risk handling. The MCDM methods used for investigation and assessment are Fuzzy AHP and Fuzzy TOPSIS. The outcomes of the study are further validated through sensitivity analysis and a real-world scenario. A simplified illustration of the ranking step is given below.
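
For illustration, the Python sketch below ranks hypothetical risk dimensions with a plain (crisp) TOPSIS procedure; the paper itself uses the fuzzy variants of AHP and TOPSIS, and the risk names, scores and weights here are invented for demonstration only.

```python
# Crisp TOPSIS sketch: rank alternatives by closeness to the ideal solution.
import numpy as np

# rows: hypothetical risk dimensions, columns: criteria (likelihood, impact, cost)
risks = ["Technical/IT", "Financial", "Organizational", "Legal", "Ecological"]
X = np.array([
    [9, 8, 7],
    [6, 7, 8],
    [5, 6, 4],
    [4, 5, 6],
    [3, 4, 3],
], dtype=float)
weights = np.array([0.5, 0.3, 0.2])      # e.g. derived from an (fuzzy) AHP step
benefit = np.array([True, True, True])   # higher score = more critical risk

# 1) vector-normalize and weight the decision matrix
V = weights * X / np.linalg.norm(X, axis=0)
# 2) ideal and anti-ideal solutions per criterion
ideal = np.where(benefit, V.max(axis=0), V.min(axis=0))
anti = np.where(benefit, V.min(axis=0), V.max(axis=0))
# 3) closeness coefficient of each risk to the ideal solution
d_pos = np.linalg.norm(V - ideal, axis=1)
d_neg = np.linalg.norm(V - anti, axis=1)
closeness = d_neg / (d_pos + d_neg)

for name, c in sorted(zip(risks, closeness), key=lambda t: -t[1]):
    print(f"{name:16s} {c:.3f}")
```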

Results: Technical and Information Technology (IT) risks are found to be at the top of the priority list and need urgent attention when embarking on I4.0 adoption in industry; the most important criterion requiring urgent attention is Information Security. The paper also develops an ‘Industry 4.0 Risks Iceberg model’ and systematically categorizes the challenges into five dimensions for easy assessment and analysis.

Practical Implications: This systematic and holistic study of the I4.0-associated risks can be used to find the most critical risks, based on which strategies and policies may be modified to harness the best of I4.0. This will not only ensure returns on investment but also build trust in the system. The research will be beneficial to managers, academicians, researchers, and technocrats involved in I4.0 implementation. Read now: https://bit.ly/3zCMCIO

Editors Choice – Role of Digital Watermarking in Wireless Sensor Network

Author(s): Sanjay Kumar*, Binod K. Singh, Akshita, Sonika Pundir, Rashi Joshi and Simran Batra

Volume 15, Issue 2, 2022

Published on: 30 July, 2020

Page: [215 – 228]

Pages: 14

DOI: 10.2174/2666255813999200730230731

Abstract

WSNs have been adopted in many application areas such as military, medical and environmental monitoring. The rapid increase in applications brings a proportional increase in security threats because of the wireless communication involved. Since the nodes are expected to operate beyond human reach and depend on their limited resources, the major challenges are energy consumption and resource reliability. Ensuring the security, integrity and confidentiality of the transmitted data is a major concern for WSNs. Owing to the limited resources of sensor nodes, traditional computation-intensive security mechanisms are not feasible for WSNs. This limitation brought the concept of digital watermarking into use. Watermarking is an effective way to provide security, integrity, data aggregation and robustness in WSNs. In this paper, several issues and challenges, as well as the various threats to WMSNs, are briefly discussed. We also discuss digital watermarking techniques and their role in WMSNs; a minimal integrity-watermarking sketch follows. Read more: https://bit.ly/3Q19uc2
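
As a rough illustration of watermarking for data integrity in a sensor network, the Python sketch below embeds a short hash of each group of readings into the least significant bits of the quantized readings so the sink can detect tampering. The grouping, hash length and quantization step are assumptions for demonstration, not taken from the paper.

```python
# Fragile watermarking sketch for WSN data integrity: the watermark is a hash
# of the readings (ignoring their LSBs), written into those LSBs at the node
# and re-checked at the sink.
import hashlib
import numpy as np

def embed_group(readings, bits=8):
    q = np.round(readings * 10).astype(np.int64)          # quantize to 0.1 units
    payload = hashlib.sha256((q >> 1).tobytes()).digest()[0]
    wm_bits = [(payload >> i) & 1 for i in range(bits)]
    q[:bits] = (q[:bits] & ~1) | wm_bits                  # write watermark into LSBs
    return q

def verify_group(q, bits=8):
    payload = hashlib.sha256((q >> 1).tobytes()).digest()[0]
    expected = [(payload >> i) & 1 for i in range(bits)]
    return [int(b) for b in (q[:bits] & 1)] == expected

group = embed_group(np.array([23.4, 23.5, 23.7, 23.6, 23.8, 23.9, 24.0, 24.1]))
print(verify_group(group))   # True: data intact
group[5] += 7                # simulate tampering in transit
print(verify_group(group))   # False: tampering detected (with high probability)
```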

Editors Choice – Stock Market Prediction Based on Technical-Deviation-ROC Indicators Using Stock and Feeds Data

Author(s): Deepika N. and P. Victer Paul*

Volume 15, Issue 3, 2022

Published on: 31 August, 2020

Article ID: e180322185408

Pages: 9

DOI: 10.2174/2666255813999200831120847

Abstract

Background: This research proposes a novel approach for the efficient prediction of stock prices. Its scope extends to sentiment analysis, using the emotions and opinions expressed on social media platforms. The research also analyzes the impact of social media and feeds data, together with technical indicators, on stock prices in designing the prediction model.

Objectives: The goal of this research is to analyze and compare the models to predict stock trends by adjusting the feature set.

Methods: The basic technical indicators and new momentum-volatility indicators are calculated for the benchmark index values of the stock. Text summarization is applied to day-wise tweets collected for a particular company, and sentiment analysis is then performed to obtain a sentiment value. All these features are integrated to form the final dataset, and accuracy comparisons are made by experimenting with three algorithms: Support Vector Machine (SVM), Backpropagation and Long Short-Term Memory (LSTM). A simplified sketch of the indicator-plus-sentiment pipeline is shown below.
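
The Python sketch below illustrates this kind of pipeline: a rate-of-change (ROC) momentum indicator is combined with a per-day sentiment score and fed to an SVM that predicts the next-day trend. The synthetic prices, the 12-day window and the labelling rule are assumptions for demonstration, not the paper's setup.

```python
# Toy indicator + sentiment feature pipeline with an SVM trend classifier.
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(1)
close = 100 + np.cumsum(rng.normal(0, 1, 300))     # synthetic closing prices
sentiment = rng.uniform(-1, 1, 300)                # synthetic daily tweet sentiment

def roc(prices: np.ndarray, period: int = 12) -> np.ndarray:
    """Rate-of-change indicator: percentage change over `period` days."""
    out = np.full_like(prices, np.nan)
    out[period:] = (prices[period:] - prices[:-period]) / prices[:-period] * 100
    return out

indicator = roc(close)
X, y = [], []
for t in range(12, len(close) - 1):
    X.append([indicator[t], sentiment[t]])
    y.append(int(close[t + 1] > close[t]))         # 1 = up-trend on the next day

X, y = np.array(X), np.array(y)
split = int(0.8 * len(X))
model = SVC(kernel="rbf").fit(X[:split], y[:split])
print("test accuracy:", model.score(X[split:], y[split:]))
```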

Results: The execution is carried out for each algorithm with 30 epochs. It is observed that the SVM exhibits 2.78%, Backpropagation 5.02% and LSTM 10.30% better performance than the prediction model designed using basic technical indicators alone. Moreover, with human sentiment included, the SVM provides 5.48%, Backpropagation 5.28% and LSTM 0.07% better accuracy. The standard deviations are 1.59 for SVM, 2.46 for Backpropagation and 0.19 for LSTM.

Conclusion: The experimental results show that the standard deviation of LSTM is lower than that of the SVM and backpropagation algorithms. Hence, obtaining steady accuracy is highly possible with LSTM. Read now: https://bit.ly/3vDiZ94

Editors Choice – Meta-heuristic Techniques to Train Artificial Neural Networks for Medical Image Classification: A Review

Author(s): Priyanka* and Dharmender Kumar

Volume 15, Issue 4, 2022

Published on: 15 September, 2020

Article ID: e220322185915

Pages: 18

DOI: 10.2174/2666255813999200915141534

Abstract

Medical imaging has been utilized in various forms in clinical applications for better diagnosis and treatment of diseases. These imaging technologies help in recognizing the body’s ailing region easily. In addition, they cause no pain to the patient, as the interior of the body can be examined without difficulty. Nowadays, various image processing techniques such as segmentation, registration, classification, restoration and contrast enhancement exist to enhance image quality. Among these techniques, classification plays an important role in computer-aided diagnosis, enabling easy analysis and interpretation of the images. Image classification not only classifies diseases with high accuracy but also identifies which part of the body is affected. The use of neural network classifiers in medical imaging applications has opened new doors for researchers, spurring them to excel in this domain. Moreover, accuracy in clinical practice and the development of more sophisticated equipment are necessary in the medical field for more accurate and quicker decisions. With this in mind, researchers have started using meta-heuristic techniques to train the classifiers. This paper provides a brief survey of the role of artificial neural networks in medical image classification, the various types of meta-heuristic algorithms applied for optimization purposes, and their hybridization. A comparative analysis showing the effect of applying these algorithms on classification parameters such as accuracy, sensitivity and specificity is also provided. From the comparison, it can be observed that the use of these methods significantly optimizes these parameters, helping to diagnose and treat a number of diseases at an early stage. A toy sketch of meta-heuristic weight training follows. Read now: https://bit.ly/3vDQfNy
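
As a toy illustration of the idea, the following Python sketch trains the weights of a one-hidden-layer network with a basic particle swarm optimizer instead of gradient descent; the synthetic "image feature" data and all hyper-parameters are assumptions for demonstration only.

```python
# Meta-heuristic training sketch: particle swarm optimization (PSO) searches
# the weight space of a small neural network classifier.
import numpy as np

rng = np.random.default_rng(2)
X = rng.normal(size=(200, 8))                       # e.g. extracted image features
y = (X[:, 0] + X[:, 1] ** 2 > 1).astype(float)      # synthetic binary labels

n_in, n_hidden = 8, 5
dim = n_in * n_hidden + n_hidden                    # hidden weights + output weights

def loss(w: np.ndarray) -> float:
    W1 = w[: n_in * n_hidden].reshape(n_in, n_hidden)
    w2 = w[n_in * n_hidden:]
    p = 1 / (1 + np.exp(-(np.tanh(X @ W1) @ w2)))   # forward pass
    return float(np.mean((p - y) ** 2))             # mean squared error

# basic PSO loop over candidate weight vectors
n_particles, iters = 30, 200
pos = rng.normal(size=(n_particles, dim))
vel = np.zeros_like(pos)
pbest, pbest_val = pos.copy(), np.array([loss(p) for p in pos])
gbest = pbest[pbest_val.argmin()].copy()

for _ in range(iters):
    r1, r2 = rng.random((n_particles, 1)), rng.random((n_particles, 1))
    vel = 0.7 * vel + 1.5 * r1 * (pbest - pos) + 1.5 * r2 * (gbest - pos)
    pos = pos + vel
    vals = np.array([loss(p) for p in pos])
    improved = vals < pbest_val
    pbest[improved], pbest_val[improved] = pos[improved], vals[improved]
    gbest = pbest[pbest_val.argmin()].copy()

W1, w2 = gbest[: n_in * n_hidden].reshape(n_in, n_hidden), gbest[n_in * n_hidden:]
acc = np.mean(((1 / (1 + np.exp(-(np.tanh(X @ W1) @ w2)))) > 0.5) == y)
print("training accuracy with PSO-found weights:", acc)
```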

Editors Choice – Key Issues in Software Reliability Growth Models

Author(s): Md. Asraful Haque* and Nesar Ahmad

Volume 15, Issue 5, 2022

Published on: 12 October, 2020

Pages: 7

DOI: 10.2174/2666255813999201012182821

Abstract

Background: Software Reliability Growth Models (SRGMs) are the most widely used mathematical models to monitor, predict and assess software reliability. They play an important role in industry in estimating the release time of a software product. Since the 1970s, researchers have suggested a large number of SRGMs to forecast software reliability based on certain assumptions. They all explain how system reliability changes over time by analyzing the failure data set collected throughout the testing process. However, none of the models is universally accepted or usable for all kinds of software.

Objectives: The objective of this paper is to highlight the limitations of SRGMs and to suggest a novel approach towards improvement.

Methods: We present the mathematical basis, parameters and assumptions of software reliability models and analyze five popular models, namely the Jelinski-Moranda (J-M) model, the Goel-Okumoto NHPP model, the Musa-Okumoto Log Poisson model, the Gompertz model and the Enhanced NHPP model; a fitting sketch for one of these is shown below.
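
As an illustration of how such a model is used, the following Python sketch fits the Goel-Okumoto NHPP mean value function m(t) = a(1 - e^(-bt)) to cumulative failure counts; the failure data shown are synthetic placeholders, not data from the paper.

```python
# Fit the Goel-Okumoto NHPP model to cumulative failure data.
import numpy as np
from scipy.optimize import curve_fit

def goel_okumoto(t, a, b):
    """Expected cumulative number of failures observed by time t."""
    return a * (1.0 - np.exp(-b * t))

# testing weeks and cumulative failures detected so far (illustrative values)
t = np.arange(1, 21, dtype=float)
failures = np.array([ 5, 11, 16, 21, 25, 29, 32, 35, 38, 40,
                     42, 44, 45, 47, 48, 49, 50, 51, 51, 52], dtype=float)

(a_hat, b_hat), _ = curve_fit(goel_okumoto, t, failures, p0=(60, 0.1))
print(f"estimated total faults a = {a_hat:.1f}, detection rate b = {b_hat:.3f}")
print(f"expected residual faults after week 20: {a_hat - goel_okumoto(20, a_hat, b_hat):.1f}")
```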

Conclusion: The paper focuses on challenges such as flexibility issues, assumptions and uncertainty factors in using SRGMs. It emphasizes considering all affecting factors in the reliability calculation. A possible approach is outlined at the end of the paper. Read now: https://bit.ly/3cZehMB

Editors Choice – Correlations and Hierarchical Clustering Investigation Between Weather and SARS-CoV-2

Author(s): Kaoutar El Handri* and Abdellah Idrissi

Volume 15, Issue 6, 2022

Published on: 09 November, 2020

Pages: 9

DOI: 10.2174/2666255813999201109201006

Abstract

Background: Humanity today faces a global emergency, conceivably the greatest crisis of our generation. The coronavirus pandemic, which has many global implications, has led researchers worldwide to seek solutions to this crisis, first and foremost the search for effective treatment.

Objective: This study aims to identify the factors that can have an essential effect on COVID-19 behaviour. Proper management and control of COVID-19 depend on many factors that are highly dependent on a country’s sanitary capacity and infrastructure technology. Nevertheless, meteorological parameters can also be connected to this disease, since temperature and humidity are consistent with the behaviour of a seasonal respiratory virus.

Method: In this work, we analyze the correlation between weather and the COVID-19 epidemic in Casablanca, the economic capital of Morocco. The analysis is based on COVID-19 surveillance data from the Ministry of Health of the Kingdom of Morocco and meteorological data. Weather factors include minimum temperature (°C), maximum temperature (°C), mean temperature (°C), maximum wind speed (km/h), humidity (%) and rainfall (mm). Spearman and Kendall rank correlation tests are used to analyze the relationships between these weather components and COVID-19 cases, as in the sketch below.
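
A minimal Python sketch of this kind of rank-correlation analysis is shown below; the temperature values and case counts are illustrative placeholders, not the Casablanca data.

```python
# Spearman and Kendall rank correlation between a weather variable and cases.
import numpy as np
from scipy.stats import spearmanr, kendalltau

rng = np.random.default_rng(3)
mean_temp = rng.uniform(12, 30, 60)                               # daily mean temperature (°C)
daily_cases = np.maximum(0, 200 - 5 * mean_temp + rng.normal(0, 20, 60))

rho, p_rho = spearmanr(mean_temp, daily_cases)
tau, p_tau = kendalltau(mean_temp, daily_cases)
print(f"Spearman r = {rho:.3f} (p = {p_rho:.3g})")
print(f"Kendall tau = {tau:.3f} (p = {p_tau:.3g})")
```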

Results: The mean temperature, maximum temperature (°C) and humidity were significantly correlated with the COVID-19 pandemic (r = -0.432, r = -0.480 and r = 0.402, respectively; p = -0.212, p = -0.160 and p = -0.240).

Conclusion: This finding can help reduce the incidence rate of COVID-19 in Morocco, considering the significant correlation, of more than about 40%, between weather and COVID-19. Read now: https://bit.ly/3d4BXPL
