BlueBox is the first online shopping platform in Manokwari, offering a range of daily and monthly household products, including food and snacks. This study aims to evaluate how the BlueBox application can improve service quality in terms of user satisfaction and user experience. The data were analyzed using the Webqual 4.0 method, covering the Usability, Information Quality, and Service Interaction variables, together with the EUCS method, specifically the Timeliness variable. Data analysis was performed with a PLS-SEM approach using SmartPLS 4.0. The results show that average user satisfaction with the BlueBox application reached 4.095, falling in the Satisfied category. The Timeliness variable had a significant effect on User Satisfaction, while the Usability, Information Quality, and Service Interaction variables showed no significant effect.
Multi-target object detection is widely employed in the field of computer vision. Computer vision and machine vision go hand in hand in developing efficient applications such as autonomous driving systems and intelligent industrial robotic systems. One of the most widely used neural network architectures for object detection is the Single Shot MultiBox Detector (SSD). Numerous approaches in this field have introduced complex convolutional operations to enhance performance, yet building an efficient neural network model that can detect small objects accurately remains an onerous challenge. To address this challenge, this research proposes an improved version of SSD that enhances the performance of the auxiliary convolutional layers for small-object detection. A Dice coefficient and cross-entropy loss approach was adopted to calculate the Intersection over Union (IoU) threshold. The model was trained on the PASCAL VOC 2007 and 2012 datasets and tested on PASCAL VOC 2007, achieving a mean average precision (mAP) of 80.7%, 3.2% higher than the original SSD at an input size of 300×300. The architecture was deployed on Intempora RTMaps embedded with the NXP BlueBox 2.0 to observe the model's performance in real time.
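The IoU threshold and Dice coefficient mentioned in the abstract can be illustrated for two axis-aligned boxes. The following is a minimal sketch, not the paper's implementation; the `(x1, y1, x2, y2)` box format and the helper names are assumptions for illustration:

```python
# Sketch: IoU and Dice coefficient for two axis-aligned boxes (x1, y1, x2, y2).
# Note the identity dice = 2*iou / (1 + iou), which relates the two overlap measures.

def iou(box_a, box_b):
    # Intersection rectangle corners
    x1, y1 = max(box_a[0], box_b[0]), max(box_a[1], box_b[1])
    x2, y2 = min(box_a[2], box_b[2]), min(box_a[3], box_b[3])
    inter = max(0.0, x2 - x1) * max(0.0, y2 - y1)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    union = area_a + area_b - inter
    return inter / union if union > 0 else 0.0

def dice(box_a, box_b):
    # Dice = 2|A ∩ B| / (|A| + |B|)
    x1, y1 = max(box_a[0], box_b[0]), max(box_a[1], box_b[1])
    x2, y2 = min(box_a[2], box_b[2]), min(box_a[3], box_b[3])
    inter = max(0.0, x2 - x1) * max(0.0, y2 - y1)
    total = ((box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
             + (box_b[2] - box_b[0]) * (box_b[3] - box_b[1]))
    return 2.0 * inter / total if total > 0 else 0.0
```

For two 2×2 boxes overlapping in a unit square, IoU is 1/7 and Dice is 0.25, consistent with the identity above.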
Convolutional neural networks (CNNs) are being used in the field of autonomous driving vehicles and advanced driver assistance systems (ADAS), and have made remarkable progress. Before CNNs, conventional machine learning algorithms supported ADAS. Currently, substantial research is being done on DNNs such as MobileNet, SqueezeNext, and SqueezeNet, which improve on CNN designs and make them more suitable for implementation on real-time embedded systems. Because of their size and complexity, many models cannot be deployed directly on real-time systems; the key requirement is a small model size without a trade-off in accuracy. Squeeze-and-Excitation SqueezeNext, an efficient DNN with a best model accuracy of 92.60% and a smallest model size of 0.595 MB, was chosen for deployment on the NXP BlueBox 2.0 and NXP i.MX RT1060. The deployment is successful because of the model's small size and high accuracy. The model is trained and validated on the CIFAR-10 dataset.
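The Squeeze-and-Excitation mechanism named in the abstract recalibrates channel responses: global-average-pool each channel, pass the result through a two-layer bottleneck, and scale the channels by the resulting sigmoid gates. A minimal NumPy sketch follows; the weight shapes and function name are illustrative assumptions, not the paper's code:

```python
import numpy as np

def se_block(x, w1, w2):
    """Squeeze-and-Excitation on a feature map x of shape (C, H, W).

    w1: (C//r, C) reduction weights; w2: (C, C//r) expansion weights,
    where r is the reduction ratio.
    """
    z = x.mean(axis=(1, 2))            # squeeze: global average pooling -> (C,)
    s = np.maximum(0.0, w1 @ z)        # excitation: FC + ReLU -> (C//r,)
    s = 1.0 / (1.0 + np.exp(-(w2 @ s)))  # FC + sigmoid gate -> values in (0, 1)
    return x * s[:, None, None]        # rescale each channel of x
```

Because the gates lie in (0, 1), each channel is attenuated or preserved, never amplified, which is how the block emphasizes informative channels relative to the rest.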
Among Advanced Driver Assistance Systems (ADAS) in vehicles, vision- and image-based ADAS is especially popular because it uses computer vision algorithms such as object detection, street sign identification, vehicle control, and collision warning to support safe and smart driving. Deploying these algorithms directly on resource-constrained devices such as mobile and embedded platforms is not feasible. Reduced MobileNet V2 (RMNv2) is a model specifically designed for easy deployment on embedded and mobile devices. In this paper, we implement a real-time RMNv2 image classifier on the NXP BlueBox 2.0 and NXP i.MX RT1060. Thanks to its small model size of 4.3 MB, the model is well suited to these devices. The model is trained and tested on the CIFAR-10 dataset.
The Convolutional Neural Network (CNN) has been one of the most influential innovations in the field of computer vision and has driven substantial progress in machine learning. But CNNs come with their own set of drawbacks: they need large datasets, hyperparameter tuning is nontrivial, and, importantly, they lose internal information about pose and transformation through pooling. Capsule Networks address these limitations and show marked improvement by modeling the pose and transformation of the image. On the other hand, deeper networks are more powerful than shallow ones but also harder to train; simply stacking layers to make the network deep leads to the vanishing gradient problem. Residual Networks introduce skip connections to ease training and show that considerable depth can still yield good accuracy. Putting the two together, we present the Residual Capsule Network, a framework that combines the best features of Residual and Capsule Networks. In the proposed model, the conventional convolutional layer in the Capsule Network is replaced by skip connections, as in Residual Networks, to decrease the complexity of the baseline Capsule Network and the seven-ensemble Capsule Network. We trained our model on the MNIST and CIFAR-10 datasets and observed a significant decrease in the number of parameters compared to the baseline models.
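The skip connection that Residual Networks use to ease training computes y = ReLU(x + F(x)), so the input always has a direct path around the learned transform F. A toy sketch of this idea (the linear form of F and the function names are illustrative assumptions, not the paper's architecture):

```python
import numpy as np

def residual_block(x, w1, w2):
    """y = ReLU(x + F(x)) with F(x) = w2 @ ReLU(w1 @ x).

    The identity shortcut lets gradients (and the signal itself)
    bypass F, which is what mitigates vanishing gradients in deep stacks.
    """
    h = np.maximum(0.0, w1 @ x)   # inner transform with ReLU
    return np.maximum(0.0, x + w2 @ h)  # add the shortcut, then activate
```

A quick sanity check on the design: if F collapses to zero (all-zero weights), the block reduces to the identity on non-negative inputs, so adding more such blocks cannot make the network worse than its shallower counterpart.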
Autonomous vehicles use Electronic Control Units running complex software to improve passenger comfort and safety. To test the safety of in-vehicle electronics, the ISO 26262 standard on functional safety recommends using fault injection during component- and system-level design. A Fault Injection Framework (FIF) induces hard-to-trigger hardware and software faults at runtime, enabling analysis of fault propagation effects. The growing number and complexity of diverse interacting components in vehicles demands a versatile FIF at the vehicle level. In this paper, we present a novel retargetable FIF based on the debugger interfaces available on many target systems. We validated our FIF in three Hardware-in-the-Loop setups for autonomous driving based on the NXP BlueBox prototyping platform. To trigger a fault injection process, we developed an interactive user interface based on the Robot Operating System, which also visualizes vehicle system health. Our retargetable debugger-based fault injection mechanism confirmed safety properties and identified safety shortcomings of various automotive systems.
Ultra-thin MobileNet. Sinha, Debjyoti; El-Sharkawy, Mohamed. 2020 10th Annual Computing and Communication Workshop and Conference (CCWC), Jan. 2020. Conference Proceeding.
Convolutional Neural Networks (CNNs) are deep learning architectures that play an important role in object detection, image classification, face recognition, autonomous driving applications, etc. MobileNet is a light CNN model developed especially for embedded vision applications. Still, it is quite challenging to deploy the baseline model on memory-constrained microcontroller units. Design space exploration of this model can make it less memory- and compute-intensive. This paper proposes modifications to the existing baseline MobileNet architecture to make it more efficient and suitable for deployment on real-time embedded platforms. The intent behind developing such an architecture is to considerably reduce the size, the number of parameters, the computation time per epoch, and the overfitting problem without letting the accuracy drop below the baseline level. We achieve good accuracy levels by using the Swish activation function instead of the standard ReLU, and by introducing a regularization method called random erasing in place of Dropout. We decrease the model size by using separable convolutions in place of depthwise separable convolutions, changing the channel depth, choosing an optimum width multiplier value, and eliminating some layers with the same output shape, without much drop in accuracy. We train the model with these modifications from scratch on the CIFAR-10 dataset and obtain a much lighter architecture than the baseline MobileNet V1. We name the new DNN architecture Ultra-thin MobileNet; at only 3.9 MB, it is deployable on real-time embedded processors with limited memory and power.
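The two techniques the abstract credits for its accuracy, the Swish activation and random erasing, are both simple to state. A minimal sketch of each follows; the patch-size parameter and function names are illustrative assumptions, not the paper's exact configuration:

```python
import numpy as np

def swish(x):
    # swish(x) = x * sigmoid(x); smooth and non-monotonic, unlike ReLU
    return x / (1.0 + np.exp(-x))

def random_erase(img, rng, scale=0.25):
    """Zero out a random rectangular patch of a 2-D image (data augmentation).

    scale: fraction of each side covered by the erased patch (assumed value).
    """
    H, W = img.shape
    h, w = int(H * scale), int(W * scale)
    i = rng.integers(0, H - h + 1)
    j = rng.integers(0, W - w + 1)
    out = img.copy()
    out[i:i + h, j:j + w] = 0.0
    return out
```

Swish behaves like the identity for large positive inputs and decays to zero for large negative ones, while random erasing forces the network not to rely on any single image region, which is how it acts as a regularizer.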
In this paper, we demonstrate the implementation of our ultra-efficient deep convolutional neural network architecture, CondenseNeXt, on the NXP BlueBox, an autonomous driving development platform for self-driving vehicles. We show that CondenseNeXt is remarkably efficient in terms of FLOPs, is designed for ARM-based embedded computing platforms with limited computational resources, and can perform image classification without the need for a CUDA-enabled GPU. CondenseNeXt utilizes state-of-the-art depthwise separable convolution and model compression techniques to achieve remarkable computational efficiency. Extensive analyses are conducted on the CIFAR-10, CIFAR-100, and ImageNet datasets to verify the performance of the CondenseNeXt Convolutional Neural Network (CNN) architecture. It achieves state-of-the-art image classification performance on three benchmark datasets: CIFAR-10 (4.79% top-1 error), CIFAR-100 (21.98% top-1 error), and ImageNet (7.91% single-model, single-crop top-5 error). CondenseNeXt achieves a final trained model size improvement of 2.9+ MB and up to a 59.98% reduction in forward FLOPs compared to CondenseNet, and can perform image classification on ARM-based computing platforms with outstanding efficiency, without needing CUDA-enabled GPU support.
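The depthwise separable convolution that CondenseNeXt relies on splits a standard convolution into a per-channel spatial filter followed by a 1×1 channel-mixing step, cutting the parameter count from C_out·C·k² to C·k² + C_out·C. A naive NumPy sketch of the idea (shapes and names are illustrative assumptions, not the paper's implementation):

```python
import numpy as np

def depthwise_separable_conv(x, dw_kernels, pw_weights):
    """Naive depthwise separable convolution, valid padding, stride 1.

    x:          input feature map, shape (C, H, W)
    dw_kernels: one k x k filter per input channel, shape (C, k, k)
    pw_weights: 1x1 pointwise mixing weights, shape (C_out, C)
    returns:    shape (C_out, H - k + 1, W - k + 1)
    """
    C, H, W = x.shape
    k = dw_kernels.shape[-1]
    # Depthwise step: filter each channel independently
    dw_out = np.zeros((C, H - k + 1, W - k + 1))
    for c in range(C):
        for i in range(H - k + 1):
            for j in range(W - k + 1):
                dw_out[c, i, j] = np.sum(x[c, i:i + k, j:j + k] * dw_kernels[c])
    # Pointwise step: 1x1 convolution mixing channels at every pixel
    return np.einsum('oc,chw->ohw', pw_weights, dw_out)
```

For example, with C = 32 input channels, C_out = 64, and k = 3, the separable form needs 32·9 + 64·32 = 2336 weights versus 64·32·9 = 18432 for a standard convolution, roughly an 8× reduction, which is the source of the FLOP savings the abstract reports.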