Bone metastasis is the leading cause of death in prostate cancer patients, and there is currently no effective treatment. Since the bone microenvironment plays an important role in this process, attention has been directed to the interactions between cancer cells and the bone microenvironment, including osteoclasts, osteoblasts, and bone stromal cells. Here, we explain the mechanisms of interaction between prostate cancer cells and metastasis-associated cells within the bone microenvironment and further discuss recent advances in targeted therapy of prostate cancer bone metastasis. This review also summarizes the effects of the bone microenvironment on prostate cancer metastasis and the related mechanisms, and provides insights for future studies of prostate cancer metastasis.
This paper aims to accelerate the test-time computation of convolutional neural networks (CNNs), especially very deep CNNs [1] that have substantially impacted the computer vision community. Unlike previous methods that are designed for approximating linear filters or linear responses, our method takes the nonlinear units into account. We develop an effective solution to the resulting nonlinear optimization problem without the need for stochastic gradient descent (SGD). More importantly, while previous methods mainly focus on optimizing one or two layers, our nonlinear method enables an asymmetric reconstruction that reduces the rapidly accumulated error when multiple (e.g., ≥10) layers are approximated. For the widely used very deep VGG-16 model [1], our method achieves a whole-model speedup of 4× with merely a 0.3 percent increase of top-5 error in ImageNet classification. Our 4× accelerated VGG-16 model also shows graceful accuracy degradation for object detection when plugged into the Fast R-CNN detector [2].
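The baseline this abstract contrasts against, approximating a layer's linear responses with a low-rank factorization, can be sketched as follows; the function name is illustrative, and dense matrices stand in for convolutional filter banks. The paper's contribution is to go beyond this by accounting for the nonlinear units and reconstructing asymmetrically across layers.

```python
import numpy as np

def low_rank_layer(W, rank):
    """Split a layer's weight matrix W (d_out x d_in) into two thinner
    layers A (d_out x rank) and B (rank x d_in) via truncated SVD.
    This is the linear-response baseline; cost drops from
    d_out*d_in to rank*(d_out + d_in) multiply-adds per input."""
    U, s, Vt = np.linalg.svd(W, full_matrices=False)
    A = U[:, :rank] * s[:rank]   # absorb singular values into the first factor
    B = Vt[:rank]
    return A, B
```

When `rank` is at least the true rank of `W`, the factorization is exact; the speedup comes from choosing a smaller rank and tolerating the reconstruction error.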
Deep Residual Learning for Image Recognition Kaiming He; Xiangyu Zhang; Shaoqing Ren ...
2016 IEEE Conference on Computer Vision and Pattern Recognition (CVPR),
2016-June
Conference Proceeding
Open access
Deeper neural networks are more difficult to train. We present a residual learning framework to ease the training of networks that are substantially deeper than those used previously. We explicitly reformulate the layers as learning residual functions with reference to the layer inputs, instead of learning unreferenced functions. We provide comprehensive empirical evidence showing that these residual networks are easier to optimize, and can gain accuracy from considerably increased depth. On the ImageNet dataset we evaluate residual nets with a depth of up to 152 layers - 8× deeper than VGG nets [40] but still having lower complexity. An ensemble of these residual nets achieves 3.57% error on the ImageNet test set. This result won 1st place in the ILSVRC 2015 classification task. We also present analysis on CIFAR-10 with 100 and 1000 layers. The depth of representations is of central importance for many visual recognition tasks. Solely due to our extremely deep representations, we obtain a 28% relative improvement on the COCO object detection dataset. Deep residual nets are the foundations of our submissions to the ILSVRC & COCO 2015 competitions, where we also won 1st place on the tasks of ImageNet detection, ImageNet localization, COCO detection, and COCO segmentation.
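The residual reformulation described above can be sketched in a few lines; this is an illustrative toy with dense layers standing in for convolutions, not the paper's architecture. The key point is that the block computes F(x) + x, so when the residual function F is driven toward zero the block degenerates to the identity, which is what makes very deep stacks easy to optimize.

```python
import numpy as np

def relu(x):
    return np.maximum(x, 0.0)

def residual_block(x, w1, w2):
    """Minimal residual block: ReLU(F(x) + x), where F is a two-layer
    residual function (dense layers stand in for 3x3 convolutions)."""
    f = relu(x @ w1) @ w2   # residual function F(x)
    return relu(f + x)      # identity shortcut added before the final ReLU

# With zero weights, F(x) = 0 and the block is the identity (after ReLU),
# so stacking many such blocks cannot make optimization harder.
x = np.array([[1.0, 2.0, 3.0]])
w = np.zeros((3, 3))
out = residual_block(x, w, w)
```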
In this paper, we introduce a new channel pruning method to accelerate very deep convolutional neural networks. Given a trained CNN model, we propose an iterative two-step algorithm to effectively prune each layer, via LASSO-regression-based channel selection and least-squares reconstruction. We further generalize this algorithm to multi-layer and multi-branch cases. Our method reduces the accumulated error and enhances compatibility with various architectures. Our pruned VGG-16 achieves state-of-the-art results with a 5× speed-up and only a 0.3% increase of error. More importantly, our method can also accelerate modern networks such as ResNet and Xception, suffering only 1.4% and 1.0% accuracy loss, respectively, under a 2× speedup, which is significant.
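The two-step idea, select channels with an L1-penalized fit, then refit the kept channels by least squares, can be sketched in a simplified single-output setting. This is only an illustration of the abstract's procedure: a hand-rolled ISTA loop stands in for an off-the-shelf LASSO solver, and the function names are made up.

```python
import numpy as np

def lasso_ista(X, y, lam, n_iter=500):
    """Tiny LASSO solver (ISTA): minimize 0.5*||X b - y||^2 + lam*||b||_1."""
    L = np.linalg.norm(X, 2) ** 2          # Lipschitz constant of the gradient
    beta = np.zeros(X.shape[1])
    for _ in range(n_iter):
        g = X.T @ (X @ beta - y)           # gradient of the quadratic term
        z = beta - g / L
        beta = np.sign(z) * np.maximum(np.abs(z) - lam / L, 0.0)  # soft-threshold
    return beta

def prune_channels(X, y, lam):
    """Step 1: LASSO picks which channels (columns of X) to keep.
    Step 2: least squares refits the kept channels to reconstruct y."""
    beta = lasso_ista(X, y, lam)
    keep = np.nonzero(np.abs(beta) > 1e-6)[0]
    w, *_ = np.linalg.lstsq(X[:, keep], y, rcond=None)
    return keep, w
```

The refit in step 2 matters: LASSO shrinks the surviving coefficients, and the unpenalized least-squares pass removes that bias so the pruned layer reconstructs the original responses as closely as possible.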
We revisit large kernel design in modern convolutional neural networks (CNNs). Inspired by recent advances in vision transformers (ViTs), we demonstrate that using a few large convolutional kernels instead of a stack of small kernels can be a more powerful paradigm. We suggest five guidelines, e.g., applying re-parameterized large depthwise convolutions, for designing efficient, high-performance large-kernel CNNs. Following these guidelines, we propose RepLKNet, a pure CNN architecture whose kernel size is as large as 31×31, in contrast to the commonly used 3×3. RepLKNet greatly closes the performance gap between CNNs and ViTs, e.g., achieving results comparable or superior to Swin Transformer on ImageNet and several typical downstream tasks, with lower latency. RepLKNet also scales well to big data and large models, obtaining 87.8% top-1 accuracy on ImageNet and 56.0% mIoU on ADE20K, which is very competitive among state-of-the-art models of similar size. Our study further reveals that, in contrast to small-kernel CNNs, large-kernel CNNs have much larger effective receptive fields and higher shape bias rather than texture bias. Code & models at https://github.com/megvii-research/RepLKNet.
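The "re-parameterized" part of the guideline above exploits the linearity of convolution: a small parallel branch used during training can be folded into the large kernel at inference time, since conv(x, K1) + conv(x, K2) = conv(x, K1 + pad(K2)). A minimal sketch of that fold (illustrative names, single-channel 2-D kernels, and ignoring the batch-norm folding a real implementation also performs):

```python
import numpy as np

def merge_parallel_kernels(k_large, k_small):
    """Fold a parallel small kernel (e.g. 3x3) into a large one (e.g. 31x31)
    by zero-padding the small kernel to the large size and adding.
    After merging, only one convolution is needed at inference time."""
    H, W = k_large.shape
    h, w = k_small.shape
    pad_h, pad_w = (H - h) // 2, (W - w) // 2
    merged = k_large.copy()
    merged[pad_h:pad_h + h, pad_w:pad_w + w] += k_small  # center-aligned add
    return merged
```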
Object Detection Networks on Convolutional Feature Maps Shaoqing Ren; Kaiming He; Ross Girshick ...
IEEE Transactions on Pattern Analysis and Machine Intelligence,
2017-07-01, Volume:
39, Issue:
7
Journal Article
Peer reviewed
Most object detectors contain two important components: a feature extractor and an object classifier. The feature extractor has rapidly evolved, with significant research efforts leading to better deep convolutional architectures. The object classifier, however, has not received much attention, and many recent systems (like SPPnet and Fast/Faster R-CNN) use simple multi-layer perceptrons. This paper demonstrates that carefully designing deep networks for object classification is just as important. We experiment with region-wise classifier networks that use shared, region-independent convolutional features. We call them "Networks on Convolutional feature maps" (NoCs). We discover that, aside from deep feature maps, a deep and convolutional per-region classifier is of particular importance for object detection, whereas the latest superior image classification models (such as ResNets and GoogLeNets) do not directly lead to good detection accuracy without such a per-region classifier. We show by experiments that, on top of the effective ResNet and Faster R-CNN systems, the design of NoCs is an essential element for the 1st-place winning entries in the ImageNet and MS COCO challenges 2015.
Rectified activation units (rectifiers) are essential for state-of-the-art neural networks. In this work, we study rectifier neural networks for image classification from two aspects. First, we propose a Parametric Rectified Linear Unit (PReLU) that generalizes the traditional rectified unit. PReLU improves model fitting with nearly zero extra computational cost and little overfitting risk. Second, we derive a robust initialization method that particularly considers the rectifier nonlinearities. This method enables us to train extremely deep rectified models directly from scratch and to investigate deeper or wider network architectures. Based on the learnable activation and advanced initialization, we achieve 4.94% top-5 test error on the ImageNet 2012 classification dataset. This is a 26% relative improvement over the ILSVRC 2014 winner (GoogLeNet, 6.66% [33]). To our knowledge, our result is the first to surpass the reported human-level performance (5.1% [26]) on this dataset.
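The two contributions can be sketched directly; this is a toy illustration, not the paper's code. PReLU is the ReLU with a learnable negative-side slope a, and the initialization draws weights with standard deviation sqrt(2/fan_in) so that variances are preserved through rectified layers (the paper's general form for PReLU layers is sqrt(2 / ((1 + a^2) * fan_in))).

```python
import numpy as np

def prelu(x, a):
    """Parametric ReLU: identity for x > 0, learnable slope a for x <= 0.
    a = 0 recovers ReLU; a = 0.01-ish recovers leaky ReLU."""
    return np.where(x > 0, x, a * x)

def he_init(fan_in, fan_out, rng):
    """Rectifier-aware ('He') initialization for a ReLU layer:
    zero-mean Gaussian with std = sqrt(2 / fan_in)."""
    return rng.standard_normal((fan_in, fan_out)) * np.sqrt(2.0 / fan_in)
```

The factor of 2 compensates for ReLU zeroing half of a zero-mean input's variance; without it, activations shrink layer by layer and very deep models stall at initialization.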
One of the recent trends [31], [32], [14] in network architecture design is stacking small filters (e.g., 1×1 or 3×3) throughout the entire network, because stacked small filters are more efficient than a large kernel, given the same computational complexity. However, in the field of semantic segmentation, where we need to perform dense per-pixel prediction, we find that the large kernel (and effective receptive field) plays an important role when we have to perform the classification and localization tasks simultaneously. Following our design principle, we propose a Global Convolutional Network to address both the classification and localization issues for semantic segmentation. We also suggest a residual-based boundary refinement to further refine the object boundaries. Our approach achieves state-of-the-art performance on two public benchmarks and significantly outperforms previous results: 82.2% (vs. 80.2%) on the PASCAL VOC 2012 dataset and 76.9% (vs. 71.8%) on the Cityscapes dataset.
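The trick that makes the large kernel affordable here is separability: GCN replaces a dense k×k convolution with two branches, (1×k then k×1) and (k×1 then 1×k), summed. Each branch composes into the outer product of its two 1-D kernels, so the effective kernel still covers the full k×k window while parameters drop from O(k²) to O(k) per channel. A minimal sketch (illustrative function name, single channel):

```python
import numpy as np

def gcn_effective_kernel(row1, col1, col2, row2):
    """Effective k x k kernel of GCN's two summed separable branches.
    Composing a 1 x k conv with a k x 1 conv is equivalent to convolving
    with their outer product, so the sum of the two branches is a
    rank <= 2 kernel spanning the full k x k receptive field."""
    return col1 @ row1 + col2 @ row2   # each term is a rank-1 outer product
```

The point is the trade-off: a rank-≤2 kernel is strictly less expressive than a dense k×k one, but for segmentation the large receptive field matters more than full-rank filters.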
This paper investigates an improved active power control method for variable-speed wind turbines to enhance the inertial response and damping capability during transient events. The optimized power point tracking (OPPT) controller, which shifts the turbine operating point from the maximum power point tracking (MPPT) curve to the virtual inertia control (VIC) curves according to the frequency deviation, is proposed to release the "hidden" kinetic energy and provide dynamic frequency support to the grid. The effects of the VIC on power oscillation damping capability are theoretically evaluated. Compared to conventional supplementary derivative-regulator-based inertia control, the proposed control scheme not only provides fast inertial response but also increases the system damping capability during transient events. Thus, both the inertial response and the power oscillation damping function can be obtained in a single controller with the proposed OPPT control. A prototype three-machine system containing two synchronous generators and a PMSG-based wind turbine with 31% wind penetration is tested to validate the proposed control strategy in providing rapid inertial response and enhanced system damping.
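The operating-point shift can be caricatured as scaling the familiar cubic MPPT curve P = k·ω³ by a factor that grows with the grid frequency drop, so the turbine temporarily over-extracts power and releases stored kinetic energy. This is only a loose illustration of the idea, not the paper's controller: the gains, the linear scaling law, and the function name are all invented here.

```python
def oppt_power_reference(omega, delta_f, k_mppt=0.5, k_f=2.0):
    """Illustrative OPPT-style power reference (all gains hypothetical):
    on under-frequency (delta_f < 0), scale the MPPT cubic curve up to
    release rotor kinetic energy; otherwise track MPPT as usual."""
    k_vic = k_mppt * (1.0 + k_f * max(-delta_f, 0.0))  # boost on under-frequency
    return k_vic * omega ** 3   # P = k * omega^3 power-point curve
```

As the rotor slows down while over-delivering, ω falls and the reference naturally decays back toward the MPPT curve, which is the "temporary frequency support" behavior the abstract describes.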
Basolateral amygdala (BLA) principal cells are capable of driving and antagonizing behaviors of opposing valence. BLA neurons project to the central amygdala (CeA), which also participates in negative and positive behaviors. However, the CeA has primarily been studied as the site for negative behaviors, and the causal role of CeA circuits underlying appetitive behaviors is poorly understood. Here, we identify several genetically distinct populations of CeA neurons that mediate appetitive behaviors and dissect the BLA-to-CeA circuit for appetitive behaviors. Projections from protein phosphatase 1 regulatory subunit 1B+ (Ppp1r1b+) BLA pyramidal neurons to dopamine receptor 1+ CeA neurons define a pathway that promotes appetitive behaviors, while projections from R-spondin 2+ (Rspo2+) BLA pyramidal neurons to dopamine receptor 2+ CeA neurons define a pathway that suppresses appetitive behaviors. These data reveal genetically defined neural circuits in the amygdala that promote and suppress appetitive behaviors, analogous to the direct and indirect pathways of the basal ganglia.
•Several genetically distinct populations of CeA neurons mediate appetitive behaviors
•BLA Ppp1r1b+ neurons project to CeA neurons that mediate appetitive behaviors
•BLA Rspo2+ neurons project to CeA neurons that suppress appetitive behaviors
•BLA-to-CeA pathways are analogous to corticostriatal direct and indirect pathways
Kim and Zhang et al. dissect genetically defined circuits for appetitive behaviors from the basolateral amygdala to the central amygdala that are genetically analogous to the direct and indirect pathways of the cortex and striatum.