Rudner, Lawrence This study estimated the number of people searching the ERIC database each day. The Educational Resources Information Center (ERIC) is a national information system designed to provide ready access to an extensive body of education-related literature. Federal funds traditionally have paid for the development of the database, but not the.
Chatterjee, Sandip; Stupp, Gregory S; Park, Sung Kyu Robin; Ducom, Jean-Christophe; Yates, John R; Su, Andrew I; Wolan, Dennis W 2016-08-16 Mass spectrometry-based shotgun proteomics experiments rely on accurate matching of experimental spectra against a database of protein sequences. Existing computational analysis methods are limited in the size of their sequence databases, which severely restricts the proteomic sequencing depth and functional analysis of highly complex samples. The growing amount of public high-throughput sequencing data will only exacerbate this problem. We designed a broadly applicable metaproteomic analysis method (ComPIL) that addresses protein database size limitations.
Our approach to overcome this significant limitation in metaproteomics was to design a scalable set of sequence databases assembled for optimal library querying speeds. ComPIL was integrated with a modified version of the search engine ProLuCID (termed 'Blazmass') to permit rapid matching of experimental spectra. Proof-of-principle analysis of human HEK293 lysate with a ComPIL database derived from high-quality genomic libraries was able to detect nearly all of the same peptides as a search with a human database (500x fewer peptides in the database), with a small reduction in sensitivity. We were also able to detect proteins from the adenovirus used to immortalize these cells. We applied our method to a set of healthy human gut microbiome proteomic samples and showed a substantial increase in the number of identified peptides and proteins compared to previous metaproteomic analyses, while retaining a high degree of protein identification accuracy and allowing for a more in-depth characterization of the functional landscape of the samples.
The combination of ComPIL with Blazmass allows proteomic searches to be performed with database sizes much larger than previously possible. These large database searches can be applied to complex meta-samples with unknown composition or proteomic samples where unexpected proteins may be identified. The protein database, proteomic search engine, and the proteomic data files for.
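As a rough illustration of the kind of lookup such large-database search engines must make fast, the sketch below retrieves candidate peptides by precursor mass from a mass-binned index. The peptide "database", bin width, and tolerance are hypothetical and are not taken from ComPIL or Blazmass.

```python
# Minimal sketch: bucket peptides by integer mass bin so candidates for a
# given precursor mass can be pulled out without scanning the whole database.
from collections import defaultdict

# Standard monoisotopic residue masses (Da)
RESIDUE_MASS = {
    "G": 57.02146, "A": 71.03711, "S": 87.03203, "P": 97.05276,
    "V": 99.06841, "T": 101.04768, "L": 113.08406, "I": 113.08406,
    "N": 114.04293, "D": 115.02694, "Q": 128.05858, "K": 128.09496,
    "E": 129.04259, "M": 131.04049, "H": 137.05891, "F": 147.06841,
    "R": 156.10111, "Y": 163.06333, "W": 186.07931, "C": 103.00919,
}
WATER = 18.01056

def peptide_mass(seq: str) -> float:
    return sum(RESIDUE_MASS[aa] for aa in seq) + WATER

def build_mass_index(peptides, bin_width=1.0):
    """Bucket peptides by integer mass bin for fast candidate lookup."""
    index = defaultdict(list)
    for pep in peptides:
        index[int(peptide_mass(pep) / bin_width)].append(pep)
    return index

def candidates(index, precursor_mass, tol=0.5, bin_width=1.0):
    """Return peptides whose mass lies within +/- tol of the precursor mass."""
    lo_bin = int((precursor_mass - tol) / bin_width)
    hi_bin = int((precursor_mass + tol) / bin_width)
    hits = []
    for b in range(lo_bin, hi_bin + 1):
        hits.extend(p for p in index[b] if abs(peptide_mass(p) - precursor_mass) <= tol)
    return hits

db = ["PEPTIDE", "SAMPLER", "MASSIVE", "QWERTY"]  # toy "database"
idx = build_mass_index(db)
print(candidates(idx, peptide_mass("PEPTIDE")))  # -> ['PEPTIDE']
```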
Stone, Benjamin V; Forde, James C; Levit, Valerie B; Lee, Richard K; Te, Alexis E; Chughtai, Bilal 2016-11-01 In July 2011, the US Food and Drug Administration (FDA) issued a safety communication regarding serious complications associated with surgical mesh for pelvic organ prolapse, prompting increased media and public attention. This study sought to analyze internet search activity and news article volume after this FDA warning and to evaluate the quality of websites providing patient-centered information.
Google Trends™ was utilized to evaluate search engine trends for the term 'pelvic organ prolapse' and associated terms between 1 January 2004 and 31 December 2014. Google News™ was utilized to quantify the number of news articles annually under the term 'pelvic organ prolapse.' The search results for the term 'pelvic organ prolapse' were assessed for quality using the Health On the Net Foundation (HON) certification. There was a significant increase in search activity from 37.42 in 2010 to 57.75 in 2011, at the time of the FDA communication (p = 0.021).
No other annual interval had a statistically significant increase in search activity. The single highest monthly search activity, given the value of 100, was August 2011, immediately following the July 2011 notification, with the next highest value being 98 in July 2011. Linear regression analysis of news articles per year since the FDA communication revealed r² = 0.88, with a coefficient of 186. Quality assessment demonstrated that 42% of websites were HON-certified, with .gov sites providing the highest quality information.
Although the 2011 FDA safety communication on surgical mesh was associated with increased public and media attention, the quality of relevant health information on the internet remains poor. Future quality assurance measures may be critical in enabling patients to play active roles in their own healthcare. Stokes, William A; Glick, Benjamin S 2006-01-01 Background Molecular biologists work with DNA databases that often include entire genomes.
A common requirement is to search a DNA database to find exact matches for a nondegenerate or partially degenerate query. The software programs available for such purposes are normally designed to run on remote servers, but an appealing alternative is to work with DNA databases stored on local computers. We describe a desktop software program termed MICA (K-Mer Indexing with Compact Arrays) that allows large DNA databases to be searched efficiently using very little memory. Results MICA rapidly indexes a DNA database. On a Macintosh G5 computer, the complete human genome could be indexed in about 5 minutes. The indexing algorithm recognizes all 15 characters of the DNA alphabet and fully captures the information in any DNA sequence, yet for a typical sequence of length L, the index occupies only about 2L bytes.
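As a minimal sketch of the general k-mer indexing idea (this is not the MICA implementation, its compact-array layout, or its handling of the full 15-character degenerate DNA alphabet), the following records every position of each k-mer so that exact matches for a nondegenerate query can be found by extending hits at indexed positions. The sequence, query, and k are illustrative.

```python
# Toy k-mer index: map each k-mer to its start positions, then verify hits
# by extension. Handles only nondegenerate queries of length >= k.
from collections import defaultdict

def build_kmer_index(db_seq: str, k: int = 8):
    index = defaultdict(list)
    for i in range(len(db_seq) - k + 1):
        index[db_seq[i:i + k]].append(i)
    return index

def find_exact_matches(db_seq: str, index, query: str, k: int = 8):
    """Return start positions of exact matches of query in db_seq."""
    hits = []
    for pos in index.get(query[:k], []):
        if db_seq[pos:pos + len(query)] == query:
            hits.append(pos)
    return hits

db = "ACGTACGTTTGACGTACGTAACC"
idx = build_kmer_index(db, k=4)
print(find_exact_matches(db, idx, "ACGTAC", k=4))  # -> [0, 11]
```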
The index can be searched to return a complete list of exact matches for a nondegenerate or partially degenerate query of any length. A typical search of a long DNA sequence involves reading only a small fraction of the index into memory. As a result, searches are fast even when the available RAM is limited. Conclusion MICA is suitable as a search engine for desktop DNA analysis software. PMID:17018144. Bright, Jo-Anne; Taylor, Duncan; Curran, James; Buckleton, John 2014-03-01 DNA databases have revolutionised forensic science.
They are a powerful investigative tool as they have the potential to identify persons of interest in criminal investigations. Routinely, a DNA profile generated from a crime sample could only be searched for in a database of individuals if the stain was from a single contributor (single source) or if a contributor could unambiguously be determined from a mixed DNA profile. This meant that a significant number of samples were unsuitable for database searching. The advent of continuous methods for the interpretation of DNA profiles offers an advanced way to draw inferential power from the considerable investment made in DNA databases.
Using these methods, each profile on the database may be considered a possible contributor to a mixture and a likelihood ratio (LR) can be formed. Those profiles which produce a sufficiently large LR can serve as an investigative lead. In this paper empirical studies are described to determine what constitutes a large LR. We investigate the effect on a database search of complex mixed DNA profiles with contributors in equal proportions with dropout as a consideration, and also the effect of an incorrect assignment of the number of contributors to a profile.
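The toy sketch below illustrates the database-search workflow the abstract describes: every profile in the database is scored against a mixed crime-stain profile with a likelihood ratio, and profiles exceeding a reporting threshold are kept as leads. The per-locus model, dropout constant, allele frequencies, and profiles are invented simplifications, not the continuous interpretation model the authors use.

```python
# Crude per-locus LR: numerator allows dropout for a candidate's alleles
# missing from the mixture; denominator approximates the chance a random
# person's genotype is contained in the mixture. Values are illustrative.
DROPOUT = 0.05

def locus_lr(genotype, mixture_alleles, allele_freqs):
    missing = [a for a in genotype if a not in mixture_alleles]
    numerator = DROPOUT ** len(missing)
    p_in = sum(allele_freqs.get(a, 0.0) for a in mixture_alleles)
    denominator = max(p_in ** 2, 1e-9)
    return numerator / denominator

def database_search(database, mixture, freqs, threshold=1e3):
    leads = []
    for name, profile in database.items():
        lr = 1.0
        for locus, genotype in profile.items():
            lr *= locus_lr(genotype, mixture[locus], freqs[locus])
        if lr >= threshold:
            leads.append((name, lr))
    return sorted(leads, key=lambda x: -x[1])

# Toy two-locus example
freqs = {"L1": {"12": 0.1, "13": 0.2, "14": 0.3},
         "L2": {"8": 0.25, "9": 0.15, "10": 0.2}}
mixture = {"L1": {"12", "13"}, "L2": {"8", "9"}}
database = {"P1": {"L1": ("12", "13"), "L2": ("8", "9")},
            "P2": {"L1": ("14", "14"), "L2": ("10", "10")}}
print(database_search(database, mixture, freqs, threshold=10))
```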
In addition, we give, as a demonstration of the method, the results using two crime samples that were previously unsuitable for database comparison. We show that effective management of the selection of samples for searching and the interpretation of the output can be highly informative. Copyright © 2013 Elsevier Ireland Ltd. All rights reserved. Zaki, Nazar; Tennakoon, Chandana 2017-10-02 There are a large number of biological databases publicly available for scientists in the web.
Also, there are many private databases generated in the course of research projects. These databases are in a wide variety of formats. Web standards have evolved in recent times and semantic web technologies are now available to interconnect diverse and heterogeneous sources of data. Therefore, integration and querying of biological databases can be facilitated by techniques used in the semantic web. Heterogeneous databases can be converted into the Resource Description Framework (RDF) and queried using the SPARQL language.
Searching for exact queries in these databases is trivial. However, exploratory searches need customized solutions, especially when multiple databases are involved. This process is cumbersome and time consuming for those without a sufficient background in computer science. In this context, a search engine facilitating exploratory searches of databases would be of great help to the scientific community. We present BioCarian, an efficient and user-friendly search engine for performing exploratory searches on biological databases. The search engine is an interface for SPARQL queries over RDF databases. We note that many of the databases can be converted to tabular form.
We first convert the tabular databases to RDF. The search engine provides a graphical interface based on facets to explore the converted databases. The facet interface is more advanced than conventional facets. It allows complex queries to be constructed and has additional features such as ranking of facet values based on several criteria, visually indicating the relevance of a facet value, and presenting the most important facet values when a large number of choices are available. For advanced users, SPARQL queries can be run directly on the databases.
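As a hedged illustration of the underlying mechanism (not BioCarian's own code), the following sketch loads a few invented RDF triples with rdflib and runs the kind of SPARQL SELECT query a faceted interface could generate; the vocabulary and data are made up.

```python
# Minimal RDF + SPARQL example using rdflib; graph contents are illustrative.
from rdflib import Graph, Literal, Namespace, RDF

EX = Namespace("http://example.org/")
g = Graph()
g.add((EX.gene1, RDF.type, EX.Gene))
g.add((EX.gene1, EX.organism, Literal("Homo sapiens")))
g.add((EX.gene2, RDF.type, EX.Gene))
g.add((EX.gene2, EX.organism, Literal("Mus musculus")))

query = """
PREFIX ex: <http://example.org/>
SELECT ?gene WHERE {
    ?gene a ex:Gene ;
          ex:organism "Homo sapiens" .
}
"""
for row in g.query(query):
    print(row.gene)  # -> http://example.org/gene1
```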
Using this feature, users will be able to incorporate federated searches of SPARQL endpoints. We used the search engine to do an exploratory search. Tenopir, Carol 1985-01-01 This article examines the Harvard Business Review Online (HBRO) database (bibliographic description fields, abstracts, extracted information, full text, subject descriptors) and reports on 31 sample HBRO searches conducted in Bibliographic Retrieval Services to test differences between searching full text and searching bibliographic record. Sample.
Deutsch, Eric W; Sun, Zhi; Campbell, David S; Binz, Pierre-Alain; Farrah, Terry; Shteynberg, David; Mendoza, Luis; Omenn, Gilbert S; Moritz, Robert L 2016-11-04 The results of analysis of shotgun proteomics mass spectrometry data can be greatly affected by the selection of the reference protein sequence database against which the spectra are matched. For many species there are multiple sources from which somewhat different sequence sets can be obtained. This can lead to confusion about which database is best in which circumstances, a problem especially acute in human sample analysis. All sequence databases are genome-based, with sequences for the predicted genes and their protein translation products compiled. Our goal is to create a set of primary sequence databases that comprise the union of sequences from many of the different available sources and make the result easily available to the community. We have compiled a set of four sequence databases of varying sizes, from a small database consisting of only the ∼20,000 primary isoforms plus contaminants to a very large database that includes almost all nonredundant protein sequences from several sources.
This set of tiered, increasingly complete human protein sequence databases suitable for mass spectrometry proteomics sequence database searching is called the Tiered Human Integrated Search Proteome set. In order to evaluate the utility of these databases, we have analyzed two different data sets, one from the HeLa cell line and the other from normal human liver tissue, with each of the four tiers of database complexity. The result is that approximately 0.8%, 1.1%, and 1.5% additional peptides can be identified for Tiers 2, 3, and 4, respectively, as compared with the Tier 1 database, at substantially increasing computational cost. This increase in computational cost may be worth bearing if the identification of sequence variants or the discovery of sequences that are not present in the reviewed knowledge base entries is an important goal of the study. We find that it is useful to search a data set against a simpler database, and then check the uniqueness of the.
Perlman, A; Heyman, S N; Matok, I; Stokar, J; Muszkat, M; Szalat, A 2017-12-01 Sodium-glucose-cotransporter-2 (SGLT2) inhibitors have recently been approved for the treatment of type II diabetes mellitus (T2DM).
It has been proposed that these agents could induce acute renal failure (ARF) under certain conditions. This study aimed to evaluate the association between SGLT2-inhibitors and ARF in the FDA Adverse Event Reporting System (FAERS) database. We analyzed adverse event cases submitted to FAERS between January 2013 and September 2016. ARF cases were identified using a structured medical query. Medications were identified using both brand and generic names. During the period evaluated, 18,915 reports (out of a total of 3,832,015 registered in FAERS) involved the use of SGLT2-inhibitors. SGLT2-inhibitors were reportedly associated with ARF in 1224 of these cases (6.4%), and were defined as the 'primary' or 'secondary' cause of the adverse event in 96.8% of these cases.
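For readers unfamiliar with disproportionality metrics, the sketch below shows how a reporting odds ratio (ROR) and its 95% confidence interval are typically computed from a 2x2 table of spontaneous reports. The counts are invented for illustration and are not the FAERS figures reported in this abstract.

```python
# ROR from a 2x2 table of reports; a, b, c, d are hypothetical counts.
import math

def reporting_odds_ratio(a, b, c, d):
    """a: drug + event, b: drug + other events,
    c: other drugs + event, d: other drugs + other events."""
    ror = (a / b) / (c / d)
    se = math.sqrt(1/a + 1/b + 1/c + 1/d)     # standard error of log(ROR)
    lo = math.exp(math.log(ror) - 1.96 * se)  # 95% CI lower bound
    hi = math.exp(math.log(ror) + 1.96 * se)  # 95% CI upper bound
    return ror, lo, hi

print(reporting_odds_ratio(a=120, b=880, c=5000, d=94000))
```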
The proportion of reports with ARF among reports with an SGLT2 inhibitor was almost three-fold higher compared to reports without these drugs (ROR 2.88, 95% CI 2.71-3.05, p. Walsh, Emily S; Peterson, Jana J; Judkins, Dolores Z 2014-01-01 As researchers in disability and health conduct systematic reviews with greater frequency, the definition of disability used in these reviews gains importance. Translating a comprehensive conceptual definition of 'disability' into an operational definition that utilizes electronic databases in the health sciences is a difficult step necessary for performing systematic literature reviews in the field. Consistency of definition across studies will help build a body of evidence that is comparable and amenable to synthesis. To illustrate a process for operationalizing the World Health Organization's International Classification of Functioning, Disability and Health concept of disability for MEDLINE, PsycINFO, and CINAHL databases. We created an electronic search strategy in conjunction with a reference librarian and an expert panel.
Quality control steps included comparison of search results to results of a search for a specific disabling condition and to articles nominated by the expert panel. The complete search strategy is presented. Results of the quality control steps indicated that our strategy was sufficiently sensitive and specific. Our search strategy will be valuable to researchers conducting literature reviews on broad populations with disabilities.
Copyright © 2014 Elsevier Inc. All rights reserved.
Deciu, Cosmin; Sun, Jun; Wall, Mark A 2007-09-01 We discuss several aspects related to load balancing of database search jobs in a distributed computing environment, such as a Linux cluster. Load balancing is a technique for making the most of multiple computational resources, which is particularly relevant in environments in which the usage of such resources is very high. The particular case of the Sequest program is considered here, but the general methodology should apply to any similar database search program. We show how the runtimes for Sequest searches of tandem mass spectral data can be predicted from profiles of previous representative searches, and how this information can be used for better load balancing of novel data. A well-known heuristic load balancing method is shown to be applicable to this problem, and its performance is analyzed for a variety of search parameters.
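A minimal sketch of one classic greedy heuristic of this kind (longest predicted job first onto the currently least-loaded node) is given below; the job names and predicted runtimes are illustrative, and this is not the authors' exact scheduler.

```python
# Longest-processing-time-first load balancing over n_nodes worker nodes,
# using predicted runtimes such as those derived from earlier search profiles.
import heapq

def balance(jobs, n_nodes):
    """jobs: list of (job_id, predicted_runtime). Returns node -> [job_ids]."""
    assignment = {n: [] for n in range(n_nodes)}
    heap = [(0.0, n) for n in range(n_nodes)]  # (current load, node)
    heapq.heapify(heap)
    for job_id, runtime in sorted(jobs, key=lambda j: -j[1]):  # longest first
        load, node = heapq.heappop(heap)
        assignment[node].append(job_id)
        heapq.heappush(heap, (load + runtime, node))
    return assignment

jobs = [("scan_batch_%d" % i, t) for i, t in enumerate([120, 45, 300, 80, 210, 60])]
print(balance(jobs, n_nodes=3))
```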
Angier, Jennifer J.; Epstein, Barbara A. 1980-01-01 Outlines practical searching techniques in seven core behavioral science databases accessing psychological literature: Psychological Abstracts, Social Science Citation Index, Biosis, Medline, Excerpta Medica, Sociological Abstracts, ERIC. Use of individual files is discussed and their relative strengths/weaknesses are compared. Appended is a list. Gittelson, S; Biedermann, A; Bozza, S; Taroni, F 2012-10-10 This paper applies probability and decision theory in the graphical interface of an influence diagram to study the formal requirements of rationality which justify the individualization of a person found through a database search. The decision-theoretic part of the analysis studies the parameters that a rational decision maker would use to individualize the selected person. The modeling part (in the form of an influence diagram) clarifies the relationships between this decision and the ingredients that make up the database search problem, i.e., the results of the database search and the different pairs of propositions describing whether an individual is at the source of the crime stain.
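As a sketch of the decision-theoretic ingredients referred to here (notation mine, not the authors' formulation): writing p for the probability that the selected individual is the source of the crime stain, and l_FI and l_FN for the losses attached to a false individualization and a missed individualization, the expected losses of the two actions are

```latex
\mathrm{EL}(\text{individualize}) = (1 - p)\, l_{\mathrm{FI}},
\qquad
\mathrm{EL}(\text{do not individualize}) = p\, l_{\mathrm{FN}},
\qquad
\text{so individualize only if } p > \frac{l_{\mathrm{FI}}}{l_{\mathrm{FI}} + l_{\mathrm{FN}}}.
```

Under this simple two-action framing, the rational choice depends on both the probability and the decision maker's loss function, which is the point the abstract goes on to make.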
These analyses evaluate the desirability associated with the decision of 'individualizing' (and 'not individualizing'). They point out that this decision is a function of (i) the probability that the individual in question is, in fact, at the source of the crime stain (i.e., the state of nature), and (ii) the decision maker's preferences among the possible consequences of the decision (i.e., the decision maker's loss function). We discuss the relevance and argumentative implications of these insights with respect to recent comments in specialized literature, which suggest points of view that are opposed to the results of our study. Copyright © 2012 Elsevier Ireland Ltd. All rights reserved. Wang, Jian; Bourne, Philip E.; Bandeira, Nuno 2011-01-01 In high-throughput proteomics the development of computational methods and novel experimental strategies often rely on each other. In certain areas, mass spectrometry methods for data acquisition are ahead of computational methods to interpret the resulting tandem mass spectra.
Particularly, although there are numerous situations in which a mixture tandem mass spectrum can contain fragment ions from two or more peptides, nearly all database search tools still make the assumption that each tandem mass spectrum comes from one peptide. Common examples include mixture spectra from co-eluting peptides in complex samples, spectra generated from data-independent acquisition methods, and spectra from peptides with complex post-translational modifications. We propose a new database search tool (MixDB) that is able to identify mixture tandem mass spectra from more than one peptide. We show that peptides can be reliably identified with up to 95% accuracy from mixture spectra while considering only 0.01% of all possible peptide pairs (four orders of magnitude speedup). Comparison with current database search methods indicates that our approach has better or comparable sensitivity and precision at identifying single-peptide spectra while simultaneously being able to identify 38% more peptides from mixture spectra at significantly higher precision. PMID:21862760. Tuchscherer, Rhianna M; Nair, Kavita; Ghushchyan, Vahram; Saseen, Joseph J 2015-02-01 Muscle-related events, or myopathies, are a commonly reported adverse event associated with statin use.
In June 2011, the US FDA released a Drug Safety Communication that provided updated product labeling with dosing restrictions for simvastatin to minimize the risk of myopathies. Our objective was to describe prescribing patterns of simvastatin in combination with medications known to increase the risk of myopathies following updated product labeling dosing restrictions in June 2011. A retrospective observational analysis was carried out, in which administrative claims data were utilized to identify prescribing patterns of simvastatin in combination with calcium channel blockers (CCBs) and other pre-specified drug therapies. Prescribing patterns were analyzed on a monthly basis 24 months prior to and 9 months following product label changes. Incidence of muscle-related events was also analyzed.
In June 2011, a total of 60% of patients with overlapping simvastatin-CCB claims and 94% of patients with overlapping simvastatin-non-CCB claims were prescribed an against-label combination. As of March 2012, a total of 41% and 93% of patients continued to be prescribed against-label simvastatin-CCB and simvastatin-non-CCB combinations, respectively. The most commonly prescribed dose of simvastatin was 20 mg (39%). Against-label combinations were most commonly prescribed at a simvastatin dose of 40 mg (56%). Amlodipine was the most commonly prescribed CCB in combination with simvastatin (70%) and the most common CCB prescribed against-label (67%).
Despite improvements in prescribing practices, many patients are still exposed to potentially harmful simvastatin combinations. Aggressive changes in simvastatin prescribing systems and processes are needed to improve compliance with FDA labeling to improve medication and patient safety. Fitzgibbons, Megan; Meert, Deborah 2010-01-01 The use of bibliographic management software and its internal search interfaces is now pervasive among researchers. This study compares the results between searches conducted in academic databases' search interfaces versus the EndNote search interface.
The results show mixed search reliability, depending on the database and type of search. Taylor, Faith E.; Malamud, Bruce D.; Freeborough, Katy; Demeritt, David 2015-11-01 Our understanding of where landslide hazard and impact will be greatest is largely based on our knowledge of past events.
Here, we present a method to supplement existing records of landslides in Great Britain by searching an electronic archive of regional newspapers. In Great Britain, the British Geological Survey (BGS) is responsible for updating and maintaining records of landslide events and their impacts in the National Landslide Database (NLD). The NLD contains records of more than 16,500 landslide events in Great Britain. Data sources for the NLD include field surveys, academic articles, grey literature, news, public reports and, since 2012, social media. We aim to supplement the richness of the NLD by (i) identifying additional landslide events, (ii) acting as an additional source of confirmation of events existing in the NLD and (iii) adding more detail to existing database entries.
This is done by systematically searching the Nexis UK digital archive of 568 regional newspapers published in the UK. In this paper, we construct a robust Boolean search criterion by experimenting with landslide terminology for four training periods. We then apply this search to all articles published in 2006 and 2012. This resulted in the addition of 111 records of landslide events to the NLD over the 2 years investigated (2006 and 2012).
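As a hypothetical illustration of what such a keyword screen might look like in code (the include and exclude terms below are invented and are not the paper's actual Nexis UK search criterion):

```python
# Toy Boolean-style filter for candidate landslide articles.
import re

INCLUDE = [r"\blandslides?\b", r"\blandslips?\b", r"\bmudslides?\b", r"\brockfalls?\b"]
EXCLUDE = [r"\belection landslide\b", r"\blandslide victory\b"]  # common false positives

def is_candidate(article_text: str) -> bool:
    text = article_text.lower()
    if any(re.search(p, text) for p in EXCLUDE):
        return False
    return any(re.search(p, text) for p in INCLUDE)

print(is_candidate("A landslip closed the A83 near Rest and Be Thankful."))      # True
print(is_candidate("The party won by a landslide victory in the by-election."))  # False
```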
We also find that we were able to obtain information about landslide impact for 60-90% of landslide events identified from newspaper articles. Spatial and temporal patterns of additional landslides identified from newspaper articles are broadly in line with those existing in the NLD, confirming that the NLD is a representative sample of landsliding in Great Britain. This method could now be applied to more time periods and/or other hazards to add richness to databases and thus improve our ability to forecast future events based on records of past events. Alves, Gelio; Ogurtsov, Aleksey Y; Kwok, Siwei; Wu, Wells W; Wang, Guanghui; Shen, Rong-Fong; Yu, Yi-Kuo 2008-01-01 Background Current experimental techniques, especially those applying liquid chromatography mass spectrometry, have made high-throughput proteomic studies possible. The increase in throughput however also raises concerns on the accuracy of identification or quantification. Most experimental procedures select in a given MS scan only a few relatively most intense parent ions, each to be fragmented (MS2) separately, and most other minor co-eluted peptides that have similar chromatographic retention times are ignored and their information lost. Results We have computationally investigated the possibility of enhancing the information retrieval during a given LC/MS experiment by selecting the two or three most intense parent ions for simultaneous fragmentation.
A set of spectra is created by superimposing a number of MS2 spectra, each of which can be identified with high confidence by all search methods tested, to mimic the spectra of co-eluted peptides. The generated convoluted spectra were used to evaluate the capability of several database search methods (SEQUEST, Mascot, X!Tandem, OMSSA, and RAId_DbS) in identifying true peptides from superimposed spectra of co-eluted peptides. We show that using these simulated spectra, all of the database search methods eventually gain in the number of true peptides identified when using the compound spectra of co-eluted peptides. Open peer review: Reviewed by Vlad Petyuk (nominated by Arcady Mushegian), King Jordan and Shamil Sunyaev. For the full reviews, please go to the Reviewers' comments section. PMID:18597684. Kementsietsidis, Anastasios; Lim, Lipyeow; Wang, Min 2008-11-06 The proliferation of medical terms poses a number of challenges in the sharing of medical information among different stakeholders.
Ontologies are commonly used to establish relationships between different terms, yet their role in querying has not been investigated in detail. In this paper, we study the problem of supporting ontology-based keyword search queries on a database of electronic medical records. We present several approaches to support this type of queries, study the advantages and limitations of each approach, and summarize the lessons learned as best practices. Eng, Jimmy K; Jahan, Tahmina A; Hoopmann, Michael R 2013-01-01 Proteomics research routinely involves identifying peptides and proteins via MS/MS sequence database search. Thus the database search engine is an integral tool in many proteomics research groups. Here, we introduce the Comet search engine to the existing landscape of commercial and open-source database search tools.
Comet is open source, freely available, and based on one of the original sequence database search tools that has been widely used for many years. © 2012 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim. Meyer, Daniel E.; And Others 1983-01-01 Evaluates overlapping coverage of 21 scientific databases used in 10 online pesticide searches in an attempt to identify the minimum number of databases needed to generate 90 percent of unique, relevant citations for a given search. Comparison of searches combined under given pesticide usage (herbicide, fungicide, insecticide) is discussed. Nine.
Wang, H; Volarath, P; Harrison, R 2005-01-01 Searching for and identifying chemical compounds is an important process in drug design and in chemistry research. An efficient search engine involves a close coupling of the search algorithm and the database implementation. The database must process chemical structures, which demands approaches to represent, store, and retrieve structures in a database system. In this paper, a general database framework for working as a chemical compound search engine in an Oracle database is described. The framework is designed to eliminate data type constraints for potential search algorithms, which is a crucial step toward building a domain-specific query language on top of SQL. A search engine implementation based on the database framework is also demonstrated.
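The following sketch illustrates the general idea of a compound search layer on top of a relational table; the schema, the SMILES strings, and the naive text-level fragment test are hypothetical simplifications, not the Oracle framework described in the paper.

```python
# Minimal relational compound store with a naive fragment search.
# A real engine would use graph/substructure matching, not string tests.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE compound (id INTEGER PRIMARY KEY, name TEXT, smiles TEXT)")
conn.executemany(
    "INSERT INTO compound (name, smiles) VALUES (?, ?)",
    [("ethanol", "CCO"), ("acetic acid", "CC(=O)O"), ("benzene", "c1ccccc1")],
)

def search_by_fragment(fragment: str):
    """Case-sensitive text-level fragment search over stored SMILES."""
    cur = conn.execute(
        "SELECT name, smiles FROM compound WHERE smiles GLOB ?", ("*" + fragment + "*",)
    )
    return cur.fetchall()

# Matches ethanol and acetic acid; GLOB is case-sensitive, so benzene's
# lowercase aromatic ring is not matched by the uppercase fragment.
print(search_by_fragment("CC"))
```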
The convenience of the implementation emphasizes the efficiency and simplicity of the framework. Cavaleri, Piero 2008-01-01 Purpose: The purpose of this paper is to describe the use of AJAX for searching the Biblioteche Oggi database of bibliographic records. Design/methodology/approach: The paper is a demonstration of how bibliographic database single page interfaces allow the implementation of more user-friendly features for social and collaborative tasks.
Findings:. Wolfson, Ouri (Inventor); Xu, Bo (Inventor) 2010-01-01 Information is stored in a plurality of mobile peers. The peers communicate in a peer to peer fashion, using a short-range wireless network. Occasionally, a peer initiates a search for information in the peer to peer network by issuing a query. Queries and pieces of information, called reports, are transmitted among peers that are within a transmission range.
For each search additional peers are utilized, wherein these additional peers search and relay information on behalf of the originator of the search. Stojanov, Done; Koceski, Sašo; Mileva, Aleksandra; Koceska, Nataša; Bande, Cveta Martinovska 2014-09-03 In order to facilitate and speed up the search of massive DNA databases, the database is indexed at the beginning, employing a mapping function. By searching through the indexed data structure, exact query hits can be identified. If the database is searched against an annotated DNA query, such as a known promoter consensus sequence, then the starting locations and the number of potential genes can be determined.
This is particularly relevant if unannotated DNA sequences have to be functionally annotated. However, indexing a massive DNA database and searching an indexed data structure with millions of entries is a time-demanding process. In this paper, we propose a fast DNA database indexing and searching approach, identifying all query hits in the database without having to examine all entries in the indexed data structure and without limiting the maximum length of a query that can be searched against the database.
By applying the proposed indexing equation, the whole human genome could be indexed in 10 hours on a personal computer, under the assumption that there is enough RAM to store the indexed data structure. Analysing the methodology proposed by Reneker, we observed that hits at starting positions Formula: see text are not reported, if the database is searched against a query shorter than Formula: see text nucleotides, such that Formula: see text is the length of the DNA database words being mapped and Formula: see text is the length of the query.
A solution to this drawback is also presented. Mackey, Aaron J; Pearson, William R 2004-10-01 Relational databases are designed to integrate diverse types of information and manage large sets of search results, greatly simplifying genome-scale analyses. Relational databases are essential for management and analysis of large-scale sequence analyses, and can also be used to improve the statistical significance of similarity searches by focusing on subsets of sequence libraries most likely to contain homologs. This unit describes using relational databases to improve the efficiency of sequence similarity searching and to demonstrate various large-scale genomic analyses of homology-related data. This unit describes the installation and use of a simple protein sequence database, seqdbdemo, which is used as a basis for the other protocols.
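A minimal, hypothetical sketch of this general pattern (not the actual seqdbdemo schema): sequences and similarity-search hits live in relational tables, so library subsets and stored results fall out of simple SQL queries.

```python
# Toy relational layout for sequences and stored similarity-search results.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE protein (acc TEXT PRIMARY KEY, taxon TEXT, seq TEXT);
CREATE TABLE hit (query_acc TEXT, subject_acc TEXT, evalue REAL);
""")
conn.executemany("INSERT INTO protein VALUES (?, ?, ?)", [
    ("P1", "Homo sapiens", "MKTAYIAKQR"),
    ("P2", "Mus musculus", "MKTAYLAKQR"),
    ("P3", "Escherichia coli", "MSTNPKPQRK"),
])
conn.executemany("INSERT INTO hit VALUES (?, ?, ?)", [
    ("P1", "P2", 1e-30), ("P1", "P3", 0.5),
])

# Library subset: restrict a search library to mammalian sequences only.
subset = conn.execute(
    "SELECT acc FROM protein WHERE taxon IN ('Homo sapiens', 'Mus musculus')"
).fetchall()

# Stored-result query: significant homologs of P1.
homologs = conn.execute(
    "SELECT subject_acc, evalue FROM hit WHERE query_acc = 'P1' AND evalue < 1e-5"
).fetchall()
print(subset, homologs)
```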
These include basic use of the database to generate a novel sequence library subset, how to extend and use seqdbdemo for the storage of sequence similarity search results and making use of various kinds of stored search results to address aspects of comparative genomic analysis. Peng, Zhaohui; Li, Jing; Wang, Shan Keyword Search Over Relational Databases (KSORD) enables casual users to use keyword queries (a set of keywords) to search relational databases just like searching the Web, without any knowledge of the database schema or any need of writing SQL queries. Based on our previous work, we design and implement a novel KSORD platform named EasyKSORD for users and system administrators to use and manage different KSORD systems in a novel and simple manner.
EasyKSORD supports advanced queries, efficient data-graph-based search engines, multiform result presentations, and system logging and analysis. Through EasyKSORD, users can search relational databases easily and read search results conveniently, and system administrators can easily monitor and analyze the operations of KSORD and manage KSORD systems much better. Maluf, David A. (Inventor); Okimura, Takeshi (Inventor); Gurram, Mohana M. (Inventor); Tran, Vu Hoang (Inventor); Knight, Christopher D. (Inventor); Trinh, Anh Ngoc (Inventor) 2017-01-01 The present invention is a distributed computer system of heterogeneous databases joined in an information grid and configured with an Application Programming Interface hardware which includes a search engine component for performing user-structured queries on multiple heterogeneous databases in real time. This invention reduces overhead associated with the impedance mismatch that commonly occurs in heterogeneous database queries.
Berget, Gerd; Sandnes, Frode Eika 2015-01-01 Introduction: Few studies document the information searching behaviour of users with cognitive impairments. This paper therefore addresses the effect of dyslexia on information searching in a database with no tolerance for spelling errors and no query-building aids. The purpose was to identify effective search interface design guidelines that.
Cogo, Elise; Sampson, Margaret; Ajiferuke, Isola; Manheimer, Eric; Campbell, Kaitryn; Daniel, Raymond; Moher, David 2011-01-01 This project aims to assess the utility of bibliographic databases beyond the three major ones (MEDLINE, EMBASE and Cochrane CENTRAL) for finding controlled trials of complementary and alternative medicine (CAM). Fifteen databases were searched to identify controlled clinical trials (CCTs) of CAM not also indexed in MEDLINE.
Searches were conducted in May 2006 using the revised Cochrane highly sensitive search strategy (HSSS) and the PubMed CAM Subset. Yield of CAM trials per 100 records was determined, and databases were compared over a standardized period (2005). The Acudoc2 RCT, Acubriefs, Index to Chiropractic Literature (ICL) and Hom-Inform databases had the highest concentrations of non-MEDLINE records, with more than 100 non-MEDLINE records per 500. Other productive databases had ratios between 500 and 1500 records to 100 non-MEDLINE records—these were AMED, MANTIS, PsycINFO, CINAHL, Global Health and Alt HealthWatch. Five databases were found to be unproductive: AGRICOLA, CAIRSS, Datadiwan, Herb Research Foundation and IBIDS.
Acudoc2 RCT yielded 100 CAM trials in the most recent 100 records screened. Acubriefs, AMED, Hom-Inform, MANTIS, PsycINFO and CINAHL had more than 25 CAM trials per 100 records screened. Global Health, ICL and Alt HealthWatch were below 25 in yield. There were 255 non-MEDLINE trials from eight databases in 2005, with only 10% indexed in more than one database. Yield varied greatly between databases; the most productive databases from both sampling methods were Acubriefs, Acudoc2 RCT, AMED and CINAHL. Low overlap between databases indicates comprehensive CAM literature searches will require multiple databases.
PMID:19468052. Jacobs, Grant H.; Stockwell, Peter A.; Tate, Warren P.; Brown, Chris M.
2006-01-01 Transterm has now been publicly available for 10 years. Major changes have been made since its last description in this database issue in 2002. The current database provides data for key regions of mRNA sequences, a curated database of mRNA motifs and tools to allow users to investigate their own motifs or mRNA sequences.
The key mRNA regions database is derived computationally from Genbank. It contains 3′ and 5′ flanking regions, the initiation and termination signal context and coding sequence for annotated CDS features from Genbank and RefSeq. The database is non-redundant, enabling summary files and statistics to be prepared for each species. Advances include extended search facilities: the database may now be searched by BLAST in addition to regular expressions (patterns), allowing users to search for motifs such as known miRNA sequences, and RefSeq data has been incorporated. The database contains 40 motifs or structural patterns important for translational control.
In this release, patterns from UTRsite and Rfam are also incorporated with cross-referencing. Users may search their sequence data with Transterm or user-defined patterns. The system is accessible at. PMID:16381889.
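As a small illustration of regular-expression motif searching of the kind mentioned above (the motif patterns and mRNA sequence below are invented for illustration, not Transterm entries):

```python
# Scan an mRNA sequence for loosely defined sequence motifs with regexes.
import re

MOTIFS = {
    # Loose Kozak-like initiation context: purine at -3, G at +4
    "kozak_like": r"[AG]..ATGG",
    # Canonical polyadenylation signal
    "polyA_signal": r"AATAAA",
}

def scan(seq: str):
    seq = seq.upper().replace("U", "T")
    hits = {}
    for name, pattern in MOTIFS.items():
        hits[name] = [m.start() for m in re.finditer(pattern, seq)]
    return hits

mrna = "GGCAGCCATGGCTTCAAGTAAGCTTAATAAAGCTTTT"
print(scan(mrna))  # -> {'kozak_like': [4], 'polyA_signal': [25]}
```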
Panaich, Sidakpal S; Arora, Shilpkumar; Badheka, Apurva; Kumar, Varun; Maor, Elad; Raphael, Claire; Deshmukh, Abhishek; Reeder, Guy; Eleid, Mackram; Rihal, Charanjit S 2018-05-01 There are sparse clinical data on the procedural trends, outcomes and readmission rates following FDA approval and expansion of transcatheter mitral valve repair/MitraClip®. Whether a complex new technology can be disseminated safely and quickly is controversial. The study cohort was derived from the National Readmission Data (NRD) 2013-14. MitraClip® was identified using appropriate International Classification of Diseases, 9th Revision, Clinical Modification (ICD-9-CM) codes. The primary outcome was a composite of in-hospital mortality + procedural complications. The secondary outcome included 30-day readmissions.
Hierarchical two-level logistic models were used to evaluate study outcomes. Our analysis included 2003 MitraClip® procedures. Overall in-hospital mortality was 3.9%.
As expected, there was a significant increase in procedural volume post-FDA approval. Importantly, a corresponding downward trend in mortality and procedural complications was observed. Significant predictors of in-hospital mortality and procedural complications included the use of vasopressors (P.
Khitun, Alexander In this work, we describe the capabilities of Magnonic Holographic Memory (MHM) for parallel database search and prime factorization. MHM is a type of holographic device, which utilizes spin waves for data transfer and processing. Its operation is based on the correlation between the phases and the amplitudes of the input spin waves and the output inductive voltage. The input of MHM is provided by the phased array of spin wave generating elements allowing the production of phase patterns of an arbitrary form. The latter makes it possible to code logic states into the phases of propagating waves and exploit wave superposition for parallel data processing.
We present the results of numerical modeling illustrating parallel database search and prime factorization. The results of numerical simulations on the database search are in agreement with the available experimental data. The use of classical wave interference may result in a significant speedup over conventional digital logic circuits in special-task data processing (e.g., √n in database search). Potentially, magnonic holographic devices can be implemented as complementary logic units to digital processors.
Physical limitations and technological constraints of the spin wave approach are also discussed. MacDonald, Randall M.; MacDonald, Susan Priest Students are using electronic resources more than ever before to locate information for assignments. Without the proper search terms, results are incomplete, and students are frustrated. Using the keywords, key people, organizations, and Web sites provided in this book and compiled from the most commonly used databases, students will be able to. Meredith, Meri 1986-01-01 This second installment describes databases irregularly searched in the Business Information Center, Cummins Engine Company (Columbus, Indiana).
Highlights include typical research topics (happenings among similar manufacturers); government topics (Department of Defense contracts); market and industry topics; corporate intelligence; and personnel,. Lynch, Clifford A. 1997-01-01 Union catalogs and distributed search systems are two ways users can locate materials in print and electronic formats.
This article examines the advantages and limitations of both approaches and argues that they should be considered complementary rather than competitive. Discusses technologies creating linkage between catalogs and databases and. Dumont, B.; Fuks, B.; Kraml, S.; Bein, S.; Chalons, G.; Conte, E.; Kulkarni, S.; Sengupta, D.; Wymant, C.
2015-02-01 We present the implementation, in the MadAnalysis 5 framework, of several ATLAS and CMS searches for supersymmetry in data recorded during the first run of the LHC. We provide extensive details on the validation of our implementations and propose to create a public analysis database within this framework.
Rzepa, Henry S. 2016-01-01 Three new examples are presented illustrating three-dimensional chemical information searches of the Cambridge structure database (CSD) from which basic core concepts in organic and inorganic chemistry emerge.
These include connecting the regiochemistry of aromatic electrophilic substitution with the geometrical properties of hydrogen bonding. Janke, Richard V.; And Others 1988-01-01 The first article describes SPORT, a database providing international coverage of athletics and physical education, and compares it to other online services in terms of coverage, thesauri, possible search strategies, and actual usage. The second article reviews available online information on sports medicine. (CLB).
Hernandez, Marco A. 2013-01-01 PubMed® is an on-line literature database hosted by the U.S.
National Library of Medicine. Containing over 21 million citations for biomedical literature-both abstracts and full text-in the areas of the life sciences, behavioral studies, chemistry, and bioengineering, PubMed® represents an important tool for researchers. PubMed® searches return.