Results: 351 resources
-
This book draws on biological, cognitive, educational, sociological, and interactive perspectives to discuss the nature of learning disabilities, their origins, their diagnosis, and effective remediation. © 1999 Taylor & Francis.
-
We demonstrate that a nonzero strangeness contribution to the spacelike electromagnetic form factor of the nucleon is evidence for a strange-antistrange asymmetry in the nucleon's light-front wave function, thus implying different nonperturbative contributions to the strange and antistrange quark distribution functions. A recent lattice QCD calculation of the nucleon strange quark form factor predicts that the strange quark distribution is more centralized in coordinate space than the antistrange quark distribution, and thus the strange quark distribution is more spread out in light-front momentum space. We show that the lattice prediction implies that the difference between the strange and antistrange parton distribution functions, s(x) − s̄(x), is negative at small x and positive at large x. We also evaluate the strange quark form factor and s(x) − s̄(x) using a baryon-meson fluctuation model and a novel nonperturbative model based on light-front holographic QCD. This procedure leads to a Veneziano-like expression of the form factor, which depends exclusively on the twist of the hadron and the properties of the Regge trajectory of the vector meson which couples to the quark current in the hadron. The holographic structure of the model allows us to introduce unambiguously quark masses in the form factors and quark distributions, preserving the hard scattering counting rule at large Q² and the inclusive counting rule at large x. Quark masses modify the Regge intercept which governs the small-x behavior of quark distributions, therefore modifying their small-x singular behavior. Both nonperturbative approaches provide descriptions of the strange-antistrange asymmetry and intrinsic strangeness in the nucleon consistent with the lattice QCD result. © 2018 authors. Published by the American Physical Society.
-
The excellent O-regioselectivity of the glycosidation of the ambident 2-O-substituted 5-fluorouracil (5-FU) via the silver salt method is computationally investigated at the MP2/6-311++G(2d,p):DZP//B3LYP/6-31+G(d):DZP level of theory. The reactions studied are those between 1-bromo-1-deoxy-2,3,4,6-tetra-O-acetyl-α-d-glucopyranose and the silver salts of 5-FU, 2-O-butyl-5-FU, and 2-O-benzyl-5-FU. Two pathways are considered as follows: (A) one where the silver and bromide ion do not interact, and (B) another where the silver and bromide ion interact in the transition states. Because the O-reaction barriers are much lower (by 13.3-22.2 kcal/mol) than N-reaction barriers in both pathways, the O-regioselectivity of the silver salt method can be satisfactorily explained by either path A or path B. Furthermore, path B, where Ag and Br interact consistently, has lower activation barriers than the corresponding path A (by 6.8-17.4 kcal/mol) in both N- and O-reactions. This computational result can be attributed to the following reasons: (1) the speeding-up effect in Koenigs-Knorr reactions due to the addition of silver carbonate into the reaction mixture; (2) the halogens being pulled away by silver ions from halides, as proposed by Kornblum and co-workers; and (3) the oxocarbenium ion involvement in the glycosidation reactions. The large energy difference between N- and O-transition states originates from the association between Ag and N-(O-) of the ambident unit (-N3-C4=O4) that shows significant covalent character so that the O-reaction transition states of the silver salt method benefit from favorable ionic interaction (C+···O-) and favorable covalent interaction (Ag···N). These two favorable interactions are in agreement with the hard and soft acids and bases principle; the former is a hard-hard interaction and the latter is a soft-soft interaction. © 2018 American Chemical Society.
-
Native fluorescence spectra play an important role in cancer detection. It is widely acknowledged that the emission spectrum of a tissue is a superposition of the spectra of its salient fluorophores; however, component quantification is an essentially ill-posed problem. To address this problem, the native fluorescence spectra of human prostate cell lines of very low (LNCaP), moderate (DU-145), and advanced (PC-3) metastatic ability were studied at a selected excitation wavelength of 300 nm to investigate key fluorescent molecules such as tryptophan, collagen, and NADH. The native fluorescence spectra of cancer cell lines at different risk levels were analyzed using various machine learning algorithms to detect features and develop criteria for separating the three types of cells. Principal component analysis (PCA), nonnegative matrix factorization (NMF), and partial least squares fitting were used separately to reduce dimensionality, extract features, and detect biomolecular alterations reflected in the spectra. The scores corresponding to the basis spectra were used for classification: a linear support vector machine (SVM) was used to classify the spectra of cells with different metastatic ability. For detecting signals from tryptophan and NADH in observed data corrupted by noise and interference, a sufficient statistic can be obtained from the basis spectra retrieved using nonnegative matrix factorization. This work shows that changes in the relative contents of tryptophan and NADH obtained from native fluorescence spectroscopy may provide criteria for detecting cancer cell lines of different metastatic ability. © 2018 SPIE.
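The NMF-then-SVM pipeline described above can be sketched as follows. This is a minimal illustration using synthetic spectra and scikit-learn, not the study's data or code: the peak positions, mixing proportions, and noise levels are all invented for demonstration.

```python
import numpy as np
from sklearn.decomposition import NMF
from sklearn.svm import SVC
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Three hypothetical fluorophore basis spectra over 100 emission wavelengths
wavelengths = np.linspace(320, 580, 100)
def peak(center, width):
    return np.exp(-((wavelengths - center) ** 2) / (2 * width ** 2))
basis = np.stack([peak(340, 25), peak(390, 30), peak(460, 40)])

# Simulate spectra for 3 cell classes with different fluorophore proportions
props = {0: [0.7, 0.2, 0.1], 1: [0.5, 0.2, 0.3], 2: [0.3, 0.2, 0.5]}
X, y = [], []
for label, p in props.items():
    for _ in range(40):
        w = np.clip(np.array(p) + rng.normal(0, 0.03, 3), 0, None)
        X.append(w @ basis + rng.normal(0, 0.01, 100))
        y.append(label)
X = np.clip(np.array(X), 0, None)   # NMF requires nonnegative input
y = np.array(y)

# NMF factorizes spectra ≈ scores @ components; the per-spectrum scores
# (relative fluorophore contributions) become the classification features
nmf = NMF(n_components=3, init="nndsvda", max_iter=500, random_state=0)
scores = nmf.fit_transform(X)

X_tr, X_te, y_tr, y_te = train_test_split(scores, y, random_state=0, stratify=y)
clf = SVC(kernel="linear").fit(X_tr, y_tr)
print("test accuracy:", clf.score(X_te, y_te))
```

With the well-separated proportions above, the linear SVM separates the three classes almost perfectly; the point of the sketch is only the division of labor: NMF recovers interpretable basis spectra, and the low-dimensional scores feed the classifier.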
-
The primary purpose of this study was to determine whether a difference existed between the peak speed attained when performing a sprint with maximal acceleration versus with a gradual build-up. Additionally, this investigation sought to compare the actual peak speed achieved when participants were instructed to reach 75% and 90% of maximum speed. Field sport athletes (n = 21) performed sprints over 60 m under the experimental conditions, and peak speed was assessed with a radar gun. The gradual build-up to maximum speed (8.30 ± 0.40 m·s⁻¹) produced a greater peak speed (effect size = 0.3, small) than the maximum acceleration run (8.18 ± 0.40 m·s⁻¹), and the majority of participants (62%) followed this pattern. For the sub-maximum runs, the actual mean percentage of maximum speed reached was 78 ± 6% for the 75% prescribed run and 89 ± 5% for the 90% prescription. The errors in attaining the prescribed peak speeds were large (~15%) for certain individuals, especially in the 75% trial. Sprint training for maximum speed should therefore be performed with a gradual build-up of speed rather than a maximal acceleration. For sub-maximum interval training, attaining the prescribed target peak speed can be challenging for field sport athletes, and therefore, where possible, feedback on the peak speed reached should be provided after each repetition.
-
Post-translational phosphorylation is essential to human cellular processes, but the transient, heterogeneous nature of this modification complicates its study in native systems. We developed an approach to interrogate phosphorylation and its role in protein-protein interactions on a proteome-wide scale. We genetically encoded phosphoserine in recoded E. coli and generated a peptide-based heterologous representation of the human serine phosphoproteome. We designed a single-plasmid library encoding >100,000 human phosphopeptides and confirmed the site-specific incorporation of phosphoserine in >36,000 of these peptides. We then integrated our phosphopeptide library into an approach known as Hi-P to enable proteome-level screens for serine-phosphorylation-dependent human protein interactions. Using Hi-P, we found hundreds of known and potentially new phosphoserine-dependent interactors with 14-3-3 proteins and WW domains. These phosphosites retained important binding characteristics of the native human phosphoproteome, as determined by motif analysis and pull-downs using full-length phosphoproteins. This technology can be used to interrogate user-defined phosphoproteomes in any organism, tissue, or disease of interest.
-
BACKGROUND/AIMS: Variability in the grade of atherosclerosis among patients with chronic kidney disease (CKD) could affect ultrasound measurements of intima-media thickness (IMT). We sought to investigate the IMTs of the carotid (cIMT) and femoral (fIMT) arteries in CKD patients and assess the degree of their correlation with histopathological atherosclerosis. METHODS: Eighty-nine of 99 enrolled subjects completed this study. The subjects were divided into 3 groups: 34 patients with CKD (case group), 31 with coronary artery disease undergoing coronary artery bypass grafting (CABG, positive control group), and 24 healthy kidney donors (negative control group). For histopathological assessment of atherosclerosis, arterial tissue samples were obtained from the patients in each study group. The cIMT and fIMT were measured by ultrasonography. RESULTS: Histopathological atherosclerosis was present in 82.3, 100, and 20.8% of the CKD, CABG, and donor groups, respectively (p < 0.001). CKD patients had higher values of cIMT and fIMT than the donor group (p = 0.01 and 0.004, respectively). cIMT was positively correlated with the grade of atherosclerosis in the CKD group only (p < 0.001), while fIMT was correlated with the grade of atherosclerosis in both the CKD and donor groups (p < 0.001 and p = 0.009, respectively). In CKD patients, cIMT >0.65 mm and fIMT >0.57 mm predicted the presence of histopathological atherosclerosis with sensitivities of 96 and 92%, respectively. CONCLUSION: Higher values of cIMT and fIMT in CKD patients are associated with higher rates and degrees of histopathological atherosclerosis. Additionally, compared to fIMT, cIMT has a higher sensitivity for detecting atherosclerosis in CKD patients. Copyright © 2018 S. Karger AG, Basel.
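The sensitivity figures above come from evaluating an IMT cutoff against the histopathological gold standard. A minimal sketch of that calculation, using invented cIMT values and biopsy labels rather than the study's data:

```python
def sensitivity_specificity(measurements, has_atherosclerosis, cutoff):
    """Sensitivity/specificity of `measurement > cutoff` as a positive test,
    scored against a gold-standard (histopathological) label."""
    pairs = list(zip(measurements, has_atherosclerosis))
    tp = sum(m > cutoff and d for m, d in pairs)
    fn = sum(m <= cutoff and d for m, d in pairs)
    tn = sum(m <= cutoff and not d for m, d in pairs)
    fp = sum(m > cutoff and not d for m, d in pairs)
    return tp / (tp + fn), tn / (tn + fp)

# Hypothetical cIMT values (mm) and biopsy-confirmed atherosclerosis labels
cimt = [0.55, 0.62, 0.60, 0.70, 0.72, 0.80, 0.58, 0.61, 0.68, 0.90]
path = [False, False, True, True, True, True, False, False, True, True]

sens, spec = sensitivity_specificity(cimt, path, cutoff=0.65)
print(f"sensitivity={sens:.2f}, specificity={spec:.2f}")
# prints: sensitivity=0.83, specificity=1.00
```

In practice the cutoff (0.65 mm for cIMT, 0.57 mm for fIMT in the study) is chosen to trade sensitivity against specificity, typically via ROC analysis.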
-
Rapidity-odd directed-flow measurements at midrapidity are presented for Λ, Λ̄, K±, K_S^0, and φ at √sNN = 7.7, 11.5, 14.5, 19.6, 27, 39, 62.4, and 200 GeV in Au+Au collisions recorded by the Solenoidal Tracker detector at the Relativistic Heavy Ion Collider. These measurements greatly expand the scope of data available to constrain models with differing prescriptions for the equation of state of quantum chromodynamics. Results show good sensitivity for testing a picture where flow is assumed to be imposed before hadron formation and the observed particles are assumed to form via coalescence of constituent quarks. The pattern of departure from a coalescence-inspired sum rule can be a valuable new tool for probing the collision dynamics. © 2018 authors. Published by the American Physical Society.
-
We report first measurements of e^{+}e^{-} pair production in the mass region 0.4
-
The challenge was to make appealing something that was not. To transform this godforsaken city, populated by people as indifferent to art as tourists are to the idea of visiting the only historic building—a vague ruin of a castle—in a glamorous destination. After all, what the Guggenheim did for Bilbao could be reproduced […]
-
A correction to this article has been published and is linked from the HTML and PDF versions of this paper. The error has not been fixed in the paper.
-
To accommodate execution-mode changes and hardware malfunctions, dynamic system reconfiguration, which invokes application migration across different processing cores, needs to be supported on multi-core embedded systems. Because different application migration strategies impact the system's timing behavior in different ways, it is important to select an appropriate one such that the system's timing performance after the migration process remains acceptable. The focus of our research is to predict the timing changes of candidate migration strategies and, on that basis, to choose the optimal one. Extensive experiments were set up by running multiple benchmarks, and the experimental results validate the effectiveness of the proposed approach.
-
Reliability, longevity, availability, and deadline guarantees are the four most important metrics for measuring the QoS of long-running safety-critical real-time applications. Software aging is one of the major factors that impact the safety of long-running real-time applications, as the degraded performance and increased failure rate caused by software aging can lead to deadline misses and catastrophic consequences. Software rejuvenation is one of the most commonly used approaches to handle issues caused by software aging. In this paper, we study the optimal time at which software rejuvenation should take place so that the system's reliability, longevity, and availability are maximized and the application delays caused by software rejuvenation are minimized. In particular, we formally analyze the relationships between software rejuvenation frequency and system reliability, longevity, and availability. Based on the theoretical analysis, we develop approaches to maximizing system reliability, longevity, and availability, and use simulation to evaluate them. In addition, we design the MIN-DELAY semi-priority-driven scheduling algorithm to minimize application delays caused by rejuvenation processes. The simulation experiments show that the developed semi-priority-driven scheduling algorithm reduces application delays by 9.01% and 14.24% relative to the earliest deadline first (EDF) and least release time (LRT) scheduling algorithms, respectively.
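Earliest deadline first, one of the baselines the MIN-DELAY algorithm is compared against, can be sketched in a few lines. This is a generic single-core, non-preemptive EDF dispatcher with invented task parameters, not the paper's scheduler or workload:

```python
import heapq

def edf_schedule(jobs):
    """jobs: list of (release, deadline, exec_time) tuples.
    Returns (job_index, start, finish) tuples in execution order.
    Single core, non-preemptive; among ready jobs, the one with the
    earliest absolute deadline runs next."""
    pending = sorted(range(len(jobs)), key=lambda i: jobs[i][0])
    ready, schedule, t, k = [], [], 0, 0
    while k < len(pending) or ready:
        # Move all jobs released by time t into the ready heap (by deadline)
        while k < len(pending) and jobs[pending[k]][0] <= t:
            i = pending[k]
            heapq.heappush(ready, (jobs[i][1], i))
            k += 1
        if not ready:          # idle until the next release
            t = jobs[pending[k]][0]
            continue
        _, i = heapq.heappop(ready)
        start = t
        t += jobs[i][2]
        schedule.append((i, start, t))
    return schedule

jobs = [(0, 10, 3), (1, 5, 2), (2, 20, 4)]   # (release, deadline, exec_time)
print(edf_schedule(jobs))
# prints: [(0, 0, 3), (1, 3, 5), (2, 5, 9)]
```

Note that job 1 (deadline 5) runs before job 2 (deadline 20) even though both are ready at t = 3; injecting rejuvenation intervals into such a schedule is what creates the application delays the paper seeks to minimize.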