Objectives: To compare image quality of deep learning reconstruction (AiCE) for radiomics feature extraction with filtered back projection (FBP), hybrid iterative reconstruction (AIDR 3D), and model-based iterative reconstruction (FIRST).
Methods: Effects of image reconstruction on radiomics features were investigated using a phantom that realistically mimicked a 65-year-old patient's abdomen with hepatic metastases. The phantom was scanned at 18 doses from 0.2 to 4 mGy, with 20 repeated scans per dose. Images were reconstructed with FBP, AIDR 3D, FIRST, and AiCE. Ninety-three radiomics features were extracted from 24 regions of interest, which were evenly distributed across three tissue classes: normal liver, metastatic core, and metastatic rim. Features were analyzed in terms of their consistent characterization of tissues within the same image (intraclass correlation coefficient >= 0.75), discriminative power (Kruskal-Wallis test p value < 0.05), and repeatability (overall concordance correlation coefficient >= 0.75).
Results: The median fraction of consistent features across all doses was 6%, 8%, 6%, and 22% with FBP, AIDR 3D, FIRST, and AiCE, respectively. Adequate discriminative power was achieved by 48%, 82%, 84%, and 92% of features, and 52%, 20%, 17%, and 39% of features were repeatable, respectively. Only 5% of features combined consistency, discriminative power, and repeatability with FBP, AIDR 3D, and FIRST versus 13% with AiCE at doses above 1 mGy and 17% at doses >= 3 mGy. AiCE was the only reconstruction technique that enabled extraction of higher-order features.
Conclusions: AiCE more than doubled the yield of radiomics features at doses typically used clinically. Inconsistent tissue characterization within CT images contributes significantly to the poor stability of radiomics features.
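The three screening criteria in the abstract (consistency via ICC >= 0.75, discriminative power via Kruskal-Wallis, repeatability via concordance >= 0.75) can be sketched in plain Python. This is an illustrative reimplementation, not the authors' pipeline: `icc_oneway` is a one-way ICC(1,1), the Kruskal-Wallis p value is approximated by comparing H to the chi-square critical value for three classes, and Lin's concordance between two repeated scans stands in for the paper's overall CCC.

```python
import math
from statistics import mean

def icc_oneway(groups):
    """ICC(1,1): between-class vs. within-class variance of a feature.
    `groups` is one list of ROI feature values per tissue class
    (equal group sizes assumed for this sketch)."""
    k = len(groups[0])                      # ROIs per tissue class
    n = len(groups)                         # number of tissue classes
    grand = mean(v for g in groups for v in g)
    msb = k * sum((mean(g) - grand) ** 2 for g in groups) / (n - 1)
    msw = sum((v - mean(g)) ** 2 for g in groups for v in g) / (n * (k - 1))
    return (msb - msw) / (msb + (k - 1) * msw)

def kruskal_h(groups):
    """Kruskal-Wallis H statistic (no tie correction; assumes unique values)."""
    pooled = sorted(v for g in groups for v in g)
    rank = {v: i + 1 for i, v in enumerate(pooled)}
    n = len(pooled)
    return 12 / (n * (n + 1)) * sum(
        len(g) * (mean(rank[v] for v in g) - (n + 1) / 2) ** 2 for g in groups)

def lin_ccc(x, y):
    """Lin's concordance correlation between two repeated measurements."""
    mx, my = mean(x), mean(y)
    vx = mean((a - mx) ** 2 for a in x)
    vy = mean((b - my) ** 2 for b in y)
    cov = mean((a - mx) * (b - my) for a, b in zip(x, y))
    return 2 * cov / (vx + vy + (mx - my) ** 2)

CHI2_CRIT_DF2 = 5.991  # chi-square critical value, p = 0.05, df = 2 (3 classes)

def feature_passes(groups, scan1, scan2):
    """A feature survives only if it meets all three criteria."""
    return (icc_oneway(groups) >= 0.75
            and kruskal_h(groups) > CHI2_CRIT_DF2
            and lin_ccc(scan1, scan2) >= 0.75)
```

Applied per feature and per dose level, such a filter reproduces the kind of yield counts the abstract reports (e.g. the fraction of features that combine all three properties).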
Recently, substantial research effort has focused on how to apply CNNs or RNNs to better capture temporal patterns in videos, so as to improve the accuracy of video classification. In this paper, we investigate the potential of purely attention-based local feature integration. Accounting for the characteristics of such features in video classification, we first propose Basic Attention Clusters (BAC), which concatenates the output of multiple attention units applied in parallel, and introduce a shifting operation to capture more diverse signals. Experiments show that BAC achieves excellent results on multiple datasets. However, BAC treats all feature channels as an indivisible whole, which is suboptimal for achieving finer-grained local feature integration over the channel dimension. Additionally, it treats the entire local feature sequence as an unordered set, thus ignoring the sequential relationships. To improve over BAC, we further propose the channel pyramid attention scheme, which splits features into sub-features at multiple scales for coarse-to-fine sub-feature interaction modeling, and the temporal pyramid attention scheme, which divides the feature sequences into ordered sub-sequences of multiple lengths to account for the sequential order. Our final model, pyramid × pyramid attention clusters (PPAC), combines both channel pyramid attention and temporal pyramid attention to focus on the most important sub-features, while also preserving the temporal information of the video. We demonstrate the effectiveness of PPAC on seven real-world video classification datasets. Our model achieves competitive results across all of these, showing that our proposed framework can consistently outperform existing local feature integration methods across a range of different scenarios.
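The BAC building block described above (parallel attention units, a shifting operation, and concatenation) can be sketched in plain Python with lists in place of tensors. This is an illustrative reimplementation, not the authors' code: the weight vector `w` and shift `b` of each unit are hypothetical stand-ins for parameters that a real model would learn end to end, and the shifting operation is simplified to an additive shift followed by L2 normalization.

```python
import math

def softmax(scores):
    """Numerically stable softmax over a list of scores."""
    m = max(scores)
    exps = [math.exp(s - m) for s in scores]
    z = sum(exps)
    return [e / z for e in exps]

def attention_unit(features, w, b):
    """One attention unit: score each local feature with w, pool the
    features by the softmax weights, then apply the shifting operation
    (add b, L2-normalize) to diversify the units' outputs."""
    scores = [sum(wi * xi for wi, xi in zip(w, x)) for x in features]
    p = softmax(scores)
    pooled = [sum(pt * x[d] for pt, x in zip(p, features))
              for d in range(len(w))]
    shifted = [v + bd for v, bd in zip(pooled, b)]
    norm = math.sqrt(sum(v * v for v in shifted)) or 1.0
    return [v / norm for v in shifted]

def attention_cluster(features, units):
    """BAC: run several attention units in parallel over the same local
    feature sequence and concatenate their outputs."""
    out = []
    for w, b in units:
        out.extend(attention_unit(features, w, b))
    return out
```

The channel and temporal pyramid schemes would then apply this same cluster to channel sub-features and to ordered temporal sub-sequences, respectively, before combining the results.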
We created variant maps from bat echolocation call recordings; here we outline the transformation process and describe the resulting visual features. The maps show regular patterns whose characteristic features change when the properties of the bat call recordings change. By focusing on specific visual features, we found a set of projection parameters that allowed us to classify the variant maps into two distinct groups. These results are promising indicators that variant maps can serve as a basis for new echolocation call classification algorithms.
TBES: Template-Based Exploration and Synthesis of Heterogeneous Multiprocessor Architectures on FPGA
(2016)
This article describes TBES, an end-to-end software environment for synthesizing multitask applications on FPGAs. The implementation follows a template-based approach for creating heterogeneous multiprocessor architectures, where heterogeneity stems from the use of general-purpose processors alongside custom accelerators. Experimental results demonstrate substantial speedups for several classes of applications. In addition to the use of architecture templates for the overall system, a second contribution lies in using high-level synthesis (HLS) to promote exploration of hardware IPs. The domain expert, who knows best which tasks are good candidates for hardware implementation, selects parts of the initial application to be potentially synthesized as dedicated accelerators. As a consequence, the general HLS problem turns into a constrained, more tractable one, and automation eliminates the tedious and error-prone manual steps of design space exploration. The automation takes place once the designer has broken the application down into concurrent tasks; the designer can then drive the synthesis process with a set of parameters provided by TBES to balance the tradeoff between optimization effort and quality of results. The approach is demonstrated step by step, up to FPGA implementations and executions, with an MJPEG benchmark and a complex Viola-Jones face detection application. We show that TBES achieves speedups of up to 10x while reducing development times and widening design space exploration.
Organisation und Algorithmus
(2021)
This article analyzes how organizations endow algorithms, which we understand as digital observation formats, with the capacity to act, thereby making them actionable. The central argument is that the social relevance of digital observation formats arises from the fact, and the way, that they are embedded in organizational decision architectures. We illustrate this connection with the example of the Austrian Public Employment Service (AMS), which in 2018 introduced an algorithm to assess the integration chances of job seekers. The AMS stands in for current efforts by many organizations to deploy algorithmic systems in order to distribute scarce public resources in a supposedly more efficient way. To reconstruct how this happens, we show which operations of categorizing, comparing, and evaluating the algorithmic model performs. Building on this, we demonstrate how the algorithmic model is embedded in the organizational decision architecture. Only through this embedding, that is, the possibility of making a difference for other, relatively stably produced decisions, does the digital observation format acquire social relevance. Finally, we argue that algorithmic models such as the one observed in the AMS case tend to stabilize within organizations. We attribute this to the fact that the organization's opportunities to learn from its handling of the algorithm are reduced, because the algorithm is deployed in a domain characterized by a technology deficit and co-productive service provision.