Search Results

Search found 40567 results on 1623 pages for 'database performance'.


  • Create MSSQL database from C# - using parameters

    - by Alon M
    I am trying to write code that creates a database from my C# code (ASP.NET website). This is my code:

        SqlCommand myCommand = new SqlCommand("CREATE DATABASE @dbname", nn);
        myCommand.Parameters.Add("dbname", dbname);
        myCommand.ExecuteNonQuery();
        nn.Close();

    It is not working; it gives me an error: incorrect syntax near '@dbname'. But if I don't use parameters, people can run SQL injection against my database. Do you have any idea how I can take the database name from a textbox without leaving the database open to SQL injection?
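
    As a follow-up illustration: the database name is an identifier, not a value, so it cannot be sent as a SqlCommand parameter. A minimal sketch of the usual workaround, assuming nn is an open SqlConnection as in the question; the class and method names here are placeholders:

        using System;
        using System.Data.SqlClient;
        using System.Text.RegularExpressions;

        public static class DatabaseCreator
        {
            // Identifiers cannot be parameterized, so validate the user-supplied
            // name against a strict whitelist and bracket-quote it instead.
            public static void CreateDatabase(SqlConnection nn, string dbname)
            {
                // Allow only letters, digits and underscores, starting with a letter.
                if (!Regex.IsMatch(dbname, @"^[A-Za-z][A-Za-z0-9_]{0,63}$"))
                    throw new ArgumentException("Invalid database name.");

                // Escape any closing bracket and wrap in brackets as a second line of defense.
                string quoted = "[" + dbname.Replace("]", "]]") + "]";

                using (var myCommand = new SqlCommand("CREATE DATABASE " + quoted, nn))
                {
                    myCommand.ExecuteNonQuery();
                }
            }
        }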

    Read the article

  • JavaScript objects - private methods: which way is better?

    - by Praveen Prasad
    (function () {
        function User() {
            // some properties
        }

        // private fn 1: defined just like a public function;
        // by convention an underscore character is added
        User.prototype._aPrivateFn = function () {
        };

        // private fn 2: a closure
        function _anotherPrivateFunction() {
            // do something
        }

        // public function
        User.prototype.APublicFunction = function () {
            // call private fn 1
            this._aPrivateFn();
            // call private fn 2
            _anotherPrivateFunction();
        };

        window.UserX = User;
    })();

    Which of these two ways of defining private methods of a JavaScript object is better, especially in terms of memory management and performance?
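
    For contrast, a small sketch (not from the question) of the per-instance variant, which is where memory really differs: both patterns above create each function only once, whereas closures created inside the constructor are duplicated for every instance:

        // Privates created inside the constructor are per-instance closures,
        // so N users mean N copies of the function, unlike the two shared
        // variants shown in the question.
        function User(name) {
            var secret = name.toUpperCase();        // truly private state

            this.greet = function () {              // re-created for every instance
                return "Hello " + secret;
            };
        }

        var a = new User("alice");
        var b = new User("bob");
        console.log(a.greet(), b.greet());          // "Hello ALICE" "Hello BOB"
        console.log(a.greet === b.greet);           // false - separate function objects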

    Read the article

  • Managing Large Database Entity Models

    - by ChiliYago
    I would like to hear how others are working effectively (or not) with the Visual Studio Entity Designer when many database tables exist. It seems to me that navigating the designer is tough enough to find what you are looking for with just a few tables, but what about a database with, say, 100 to 200 tables? When a table change is made at the database level, how is the model updated? Does it overwrite any manual changes you have made to the model? How would you quickly find an entity in the designer to make a change or inspect one? It seems unrealistic to be scrolling around looking for a specific entity. Thanks for your feedback!

    Read the article

  • calendar.getInstance() or calendar.clone()

    - by Pangea
    I need to make a copy of a given date hundreds of times (I cannot pass by reference). I am wondering which of the two options below is better:

        newTime = Calendar.getInstance().setTime(originalDate);

    or

        newTime = originalDate.clone();

    Performance is the main concern here. Thanks.
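
    A short sketch comparing the options, assuming originalDate is a java.util.Date as the snippets suggest; copying via the millisecond value is a third possibility the question does not mention:

        import java.util.Calendar;
        import java.util.Date;

        public class DateCopy {
            public static void main(String[] args) {
                Date originalDate = new Date();

                // Cheapest copy: build a new Date from the millisecond value.
                Date copy1 = new Date(originalDate.getTime());

                // clone() is also cheap; the cast is needed because clone() returns Object.
                Date copy2 = (Date) originalDate.clone();

                // Calendar.getInstance() does locale/time-zone lookups on every call,
                // so it is the heavyweight option of the three.
                Calendar cal = Calendar.getInstance();
                cal.setTime(originalDate);
                Date copy3 = cal.getTime();

                System.out.println(copy1.equals(copy2) && copy2.equals(copy3)); // true
            }
        }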

    Read the article

  • Upload File to Database in ColdFusion

    - by George Johnston
    I simply would like to upload a file to my database using ColdFusion. I understand how to upload an image to a directory, but I would like to place it directly in the database. I have set a database field to varbinary(MAX) to accept the image and have the stored procedure to insert it. Currently my code for uploading the image to my file system is:

        <cfif isdefined("form.FileUploadImage")>
            <cffile action="upload"
                    filefield="FileUploadImage"
                    destination="#uploadfolder#"
                    nameconflict="overwrite"
                    accept="image/*">
        </cfif>

    I've obviously left some of the supporting code out, but really all I need to do is get a binary representation of the file in memory instead of on the file system. Any experts out there who can help? Thanks, George
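
    A rough sketch of one way to get the bytes, assuming ColdFusion 8 or later (for action="readBinary"); the stored-procedure name, datasource and parameter order are placeholders, not taken from the question:

        <!--- Sketch only: upload to a temp folder, read the bytes,
              pass them to the stored procedure as a blob, then clean up. --->
        <cfif isdefined("form.FileUploadImage")>
            <cffile action="upload"
                    filefield="FileUploadImage"
                    destination="#GetTempDirectory()#"
                    nameconflict="makeunique"
                    accept="image/*">

            <cfset originalName = cffile.clientFile>
            <cfset tempPath = cffile.serverDirectory & "/" & cffile.serverFile>

            <cffile action="readBinary" file="#tempPath#" variable="imageData">

            <cfstoredproc procedure="usp_InsertImage" datasource="myDSN">
                <cfprocparam cfsqltype="cf_sql_varchar" value="#originalName#">
                <cfprocparam cfsqltype="cf_sql_blob" value="#imageData#">
            </cfstoredproc>

            <cffile action="delete" file="#tempPath#">
        </cfif>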

    Read the article

  • Can MySQL reasonably perform queries on billions of rows?

    - by haxney
    I am planning on storing scans from a mass spectrometer in a MySQL database and would like to know whether storing and analyzing this amount of data is remotely feasible. I know performance varies wildly depending on the environment, but I'm looking for the rough order of magnitude: will queries take 5 days or 5 milliseconds?

    Input format: Each input file contains a single run of the spectrometer; each run is comprised of a set of scans, and each scan has an ordered array of datapoints. There is a bit of metadata, but the majority of the file is comprised of arrays of 32- or 64-bit ints or floats.

    Host system:

        | OS             | Windows 2008 64-bit           |
        | MySQL version  | 5.5.24 (x86_64)               |
        | CPU            | 2x Xeon E5420 (8 cores total) |
        | RAM            | 8 GB                          |
        | SSD filesystem | 500 GiB                       |
        | HDD RAID       | 12 TiB                        |

    There are some other services running on the server using negligible processor time.

    File statistics:

        | number of files  | ~16,000      |
        | total size       | 1.3 TiB      |
        | min size         | 0 bytes      |
        | max size         | 12 GiB       |
        | mean             | 800 MiB      |
        | median           | 500 MiB      |
        | total datapoints | ~200 billion |

    The total number of datapoints is a very rough estimate.

    Proposed schema: I'm planning on doing things "right" (i.e. normalizing the data like crazy) and so would have a runs table, a spectra table with a foreign key to runs, and a datapoints table with a foreign key to spectra.

    The 200 billion datapoint question: I am going to be analyzing across multiple spectra and possibly even multiple runs, resulting in queries which could touch millions of rows. Assuming I index everything properly (which is a topic for another question) and am not trying to shuffle hundreds of MiB across the network, is it remotely plausible for MySQL to handle this?

    UPDATE: additional info. The scan data will be coming from files in the XML-based mzML format. The meat of this format is in the <binaryDataArrayList> elements where the data is stored. Each scan produces >= 2 <binaryDataArray> elements which, taken together, form a 2-dimensional (or more) array of the form [[123.456, 234.567, ...], ...]. These data are write-once, so update performance and transaction safety are not concerns. My naïve plan for a database schema is:

    runs table:

        | column name | type        |
        | id          | PRIMARY KEY |
        | start_time  | TIMESTAMP   |
        | name        | VARCHAR     |

    spectra table:

        | column name    | type        |
        | id             | PRIMARY KEY |
        | name           | VARCHAR     |
        | index          | INT         |
        | spectrum_type  | INT         |
        | representation | INT         |
        | run_id         | FOREIGN KEY |

    datapoints table:

        | column name | type        |
        | id          | PRIMARY KEY |
        | spectrum_id | FOREIGN KEY |
        | mz          | DOUBLE      |
        | num_counts  | DOUBLE      |
        | index       | INT         |

    Is this reasonable?
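
    For concreteness, a sketch of what the proposed datapoints table might look like as MySQL DDL, assuming InnoDB; note that index is a reserved word in MySQL, so that column is renamed here, and the referenced spectra table is assumed to already exist:

        -- Sketch of the proposed datapoints table (InnoDB assumed).
        CREATE TABLE datapoints (
            id          BIGINT UNSIGNED NOT NULL AUTO_INCREMENT,
            spectrum_id INT UNSIGNED    NOT NULL,
            mz          DOUBLE          NOT NULL,
            num_counts  DOUBLE          NOT NULL,
            point_index INT UNSIGNED    NOT NULL,
            PRIMARY KEY (id),
            KEY idx_spectrum (spectrum_id, point_index),
            CONSTRAINT fk_spectrum FOREIGN KEY (spectrum_id) REFERENCES spectra (id)
        ) ENGINE=InnoDB;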

    Read the article

  • How fast should an interpreted language be today?

    - by Tarbal
    Is the speed of the (main or only viable) implementation of an interpreted programming language a criterion today? What would be the optimal balance between speed and abstraction? Should scripting languages completely ignore all thoughts about performance and just follow the concepts of rapid development, readability, etc.? I'm asking because I'm currently designing some experimental languages and interpreters.

    Read the article

  • How to handle very frequent updates to a Lucene index

    - by fsm
    I am trying to prototype an indexing/search application which uses very volatile indexing data sources (forums, social networks, etc). Here are some of the performance requirements:

    1. Very fast turn-around time (by this I mean that any new data, such as a new message on a forum, should be available in the search results very soon, in less than a minute).
    2. I need to discard old documents on a fairly regular basis to ensure that the search results are not dated.
    3. Last but not least, the search application needs to be responsive (latency on the order of 100 milliseconds, and it should support at least 10 qps).

    All of the requirements I have currently can be met without using Lucene (and that would let me satisfy 1, 2 and 3), but I am anticipating other requirements in the future (like search relevance etc.) which Lucene makes easier to implement. However, since Lucene is designed for use cases far more complex than the one I'm currently working on, I'm having a hard time satisfying my performance requirements. Here are some questions:

    a. I read that the optimize() method in the IndexWriter class is expensive and should not be used by applications that do frequent updates. What are the alternatives?
    b. In order to do incremental updates, I need to keep committing new data and also keep refreshing the index reader to make sure it has the new data available. These are going to affect 1 and 3 above. Should I try duplicate indices? What are some common approaches to solving this problem?
    c. I know that Lucene provides a delete method which lets you delete all documents that match a certain query. In my case, I need to delete all documents which are older than a certain age. One option is to add a date field to every document and use that to delete documents later. Is it possible to do range queries on document ids (I can create my own id field, since I think the one created by Lucene keeps changing) to delete documents? Is it any faster than comparing dates represented as strings?

    I know these are very open questions, so I am not looking for a detailed answer; I will try to treat all of your answers as suggestions and use them to inform my design. Thanks! Please let me know if you need any other information.
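
    A rough sketch of the near-real-time refresh pattern relevant to (a) and (b), assuming Lucene 4.x (4.7 is used below), where a reader can be opened directly from the writer and cheaply reopened; the timestamp field also shows one way to do the age-based deletes from (c) as a numeric range rather than string comparisons:

        import org.apache.lucene.analysis.standard.StandardAnalyzer;
        import org.apache.lucene.document.Document;
        import org.apache.lucene.document.Field;
        import org.apache.lucene.document.LongField;
        import org.apache.lucene.document.TextField;
        import org.apache.lucene.index.*;
        import org.apache.lucene.search.IndexSearcher;
        import org.apache.lucene.search.NumericRangeQuery;
        import org.apache.lucene.store.RAMDirectory;
        import org.apache.lucene.util.Version;

        public class NrtSketch {
            public static void main(String[] args) throws Exception {
                RAMDirectory dir = new RAMDirectory();
                IndexWriterConfig cfg =
                    new IndexWriterConfig(Version.LUCENE_47, new StandardAnalyzer(Version.LUCENE_47));
                IndexWriter writer = new IndexWriter(dir, cfg);

                // Near-real-time reader opened directly from the writer; no optimize() anywhere.
                DirectoryReader reader = DirectoryReader.open(writer, true);

                Document doc = new Document();
                doc.add(new TextField("body", "new forum message", Field.Store.NO));
                doc.add(new LongField("created", System.currentTimeMillis(), Field.Store.YES));
                writer.addDocument(doc);

                // Cheap refresh: only returns a new reader if something changed.
                DirectoryReader newReader = DirectoryReader.openIfChanged(reader);
                if (newReader != null) {
                    reader.close();
                    reader = newReader;
                }
                IndexSearcher searcher = new IndexSearcher(reader);  // search against the fresh reader

                // Age-based cleanup: delete by numeric range on the timestamp field.
                long cutoff = System.currentTimeMillis() - 24L * 60 * 60 * 1000;
                writer.deleteDocuments(NumericRangeQuery.newLongRange("created", null, cutoff, true, false));

                writer.close();
                reader.close();
            }
        }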

    Read the article

  • Microsoft Access query speed...

    - by V.S.
    Hello everyone! I am writing a report about MS Access and I can't find any information about its performance in comparison to alternatives such as Microsoft SQL Server, MySQL, Oracle, etc. It's obvious that MS Access is going to be the slowest among them, but there are no solid documents confirming this other than forum threads, and I don't have the time and resources to do the research myself :( Hoping for your help, V.S.

    Read the article

  • Huge page buffer vs. multiple simultaneous processes

    - by Andrei K.
    One of our customers has a 35 GB database with an average active connection count of about 70-80. Some tables in the database have more than 10M records per table. They have now bought a new server: 4 x 6-core CPUs = 24 cores total, 48 GB RAM, 2 RAID controllers with 256 MB cache and 8 SAS 15K HDDs each, 64-bit OS. I'm wondering what would be the fastest configuration: 1) FB 2.5 SuperServer with a huge buffer of 3,500,000 pages x 8192 bytes = 29 GB, or 2) FB 2.5 Classic with a small buffer of 1000 pages. Maybe someone has tested such a case before and will save me days of work :) Thanks in advance.

    Read the article

  • Limit of database connections and number of Java threads in an application

    - by Jyoti
    Hi, I am working on a JMS application (a standalone multithreaded Java application) which can receive 100 messages at a time; they need to be processed, and database procedures need to be called to insert/update data. The procedures are very heavy, since validations are also performed in them. Each procedure takes about 30 to 50 seconds to execute, and they can run concurrently. My concern is executing 100 procedures for all 100 messages while the JMS application still sends its reply within the 90-second time limit. No application server is to be used (a requirement), and the database is Teradata (RDBMS). I am using a connection pool and a thread pool in the Java code and am testing the code with 90 connections. My questions are: (1) What should be the limit on the number of simultaneous database connections? (2) How many threads at a time are recommended? Thanks, Jyoti
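
    A minimal sketch of one common arrangement, assuming worker threads are capped at the number of pooled connections so no thread blocks waiting for one; the numbers and the procedure call are placeholders, not recommendations:

        import java.util.concurrent.*;

        public class MessageProcessor {
            // Illustrative number only: match worker threads to available connections
            // so every running procedure call has a connection of its own.
            private static final int POOL_SIZE = 90;

            private final ExecutorService workers = Executors.newFixedThreadPool(POOL_SIZE);

            public void processBatch(java.util.List<String> messages) throws InterruptedException {
                final CountDownLatch done = new CountDownLatch(messages.size());
                for (final String msg : messages) {
                    workers.submit(new Runnable() {
                        public void run() {
                            try {
                                callProcedure(msg);   // borrows a pooled connection (placeholder)
                            } finally {
                                done.countDown();
                            }
                        }
                    });
                }
                // Wait up to the 90-second reply deadline for the whole batch.
                done.await(90, TimeUnit.SECONDS);
            }

            private void callProcedure(String msg) {
                // placeholder for the Teradata stored-procedure call
            }
        }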

    Read the article

  • Is there a good .Net CSS aggregator that combines style sheets and minifies them?

    - by vfilby
    I am looking to see if there is an open source/free project that provides a CSS manager. I am looking for this mainly for performance tweaking and am hoping there is a ready-made project rather than building one from scratch. Features I am looking for include:

    • Combines multiple .css files into a single .css file
    • Optionally minifies the resulting .css file
    • Works well with .NET (a user control, custom handler, etc.)

    Is there a project out there that handles this?

    Read the article

  • Is using joins in select clause slow in Oracle?

    - by gniquil
    I would like to write a query like the following:

        select username,
               (select state from addresses where addresses.username = users.username) email
        from users

    This works in Oracle (assuming the result of the inner query is unique). However, is there a performance penalty associated with writing queries in this style?
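
    For comparison, a sketch of the same query written as an outer join, which keeps the scalar-subquery semantics (users without a matching address still appear, with NULL); table and column names are taken from the question:

        -- Equivalent outer-join form of the scalar subquery above,
        -- assuming at most one addresses row per username.
        SELECT u.username,
               a.state AS email
        FROM   users u
               LEFT OUTER JOIN addresses a
                      ON a.username = u.username;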

    Read the article

  • If-else-if versus map

    - by perezvon
    Hi, suppose I have an if/else-if chain like this:

        if( x.GetId() == 1 )
        {
        }
        else if( x.GetId() == 2 )
        {
        }
        // ... 50 more else-if statements

    What I wonder is: if I use a map instead, will it be any better in terms of performance? (Assume the keys are integers.)
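
    A small sketch of the lookup-table alternative, assuming C++ (as the snippet suggests) and C++11 for lambdas and std::function; whether it actually beats a 50-branch chain is worth measuring rather than assuming:

        #include <functional>
        #include <iostream>
        #include <unordered_map>

        struct X {
            int id;
            int GetId() const { return id; }
        };

        int main() {
            // One handler per id; unordered_map gives average O(1) dispatch.
            std::unordered_map<int, std::function<void()>> handlers = {
                {1, [] { std::cout << "handle id 1\n"; }},
                {2, [] { std::cout << "handle id 2\n"; }},
                // ... up to 50 entries
            };

            X x{2};
            auto it = handlers.find(x.GetId());
            if (it != handlers.end())
                it->second();          // dispatch
            else
                std::cout << "no handler for id " << x.GetId() << "\n";
        }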

    Read the article

  • MySQL & relational databases: how to handle sharding/splitting at the application level?

    - by Industrial
    Hi everybody, I have been thinking a bit about sharding tables, since partitioning cannot be combined with foreign keys on a MySQL table. Maybe there's an option to switch to a different relational database that features both, but I don't see that as an option right now. So, the sharding idea seems like a pretty decent thing. But what's a good approach to doing this at the application level? I am guessing that a starting point would be to suffix tables with the maximum value of the primary key in each table, something like products_4000000, products_8000000 and products_12000000. Then, before making any actual database calls, the application would check with a simple if-statement whether the requested id (PK) is smaller than four, eight or twelve million. So, is this a step in the right direction, or are we doing something really stupid?
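
    The question does not say which language the application uses, so purely for illustration here is a C# sketch that derives the table suffix from the id by range instead of a growing chain of if-statements:

        public static class ShardRouter
        {
            // ids 1..4,000,000 -> products_4000000, 4,000,001..8,000,000 -> products_8000000, ...
            public static string ProductsTableFor(long id, long shardSize = 4000000)
            {
                long upperBound = ((id - 1) / shardSize + 1) * shardSize;
                return "products_" + upperBound;
            }
        }

    For example, ProductsTableFor(5000000) returns "products_8000000", so adding a new shard table does not require touching the routing code.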

    Read the article

  • Hadoop: Processing large serialized objects

    - by restrictedinfinity
    I am working on developing an application to process (and merge) several large Java serialized objects (on the order of GBs in size) using the Hadoop framework. Hadoop distributes the blocks of a file across different hosts. But since deserialization will require all the blocks to be present on a single host, it's going to hit performance drastically. How can I deal with this situation, where, unlike text files, the different blocks cannot be processed individually?

    Read the article

  • Queue access to the database to avoid multiple cache items

    - by MikeJ
    I have a music-related ASP.NET web site which caches a lot of static information from the database on the first request. Sometimes the application is reset and the cache is cleared while the application is under heavy load, and then all HTTP requests go to the database to retrieve that static data and cache it for other requests. How can I ensure that only one request goes to the database and caches the results, so that the other requests simply read that info from the cache and don't needlessly retrieve the same info over and over again? Can I use thread locking? For example, can I do something like lock(this) { db access here }?
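
    A sketch of the usual double-checked locking pattern on a dedicated static object (locking on this is discouraged because code outside the class can take the same lock); the cache key and loader here are placeholders:

        using System.Web;

        public static class StaticDataCache
        {
            // Dedicated lock object: never lock(this) or lock(typeof(...)).
            private static readonly object _sync = new object();

            public static object GetStaticData()
            {
                object data = HttpRuntime.Cache["StaticData"];
                if (data == null)
                {
                    lock (_sync)
                    {
                        // Re-check inside the lock: another request may have
                        // populated the cache while we were waiting.
                        data = HttpRuntime.Cache["StaticData"];
                        if (data == null)
                        {
                            data = LoadFromDatabase();          // placeholder
                            HttpRuntime.Cache.Insert("StaticData", data);
                        }
                    }
                }
                return data;
            }

            private static object LoadFromDatabase()
            {
                // placeholder for the actual database call
                return new object();
            }
        }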

    Read the article

  • Looking for a file database

    - by Mitchell Skurnik
    A few years ago there was a PHP script around called paFileDB. Essentially it created a database of files that end users could download and rate, and it would keep track of the number of downloads as well. I am looking for a replacement for this coded in ASP.NET C#. For the last week I have been searching around for "asp.net C# file database" and have only found articles on how to upload files to a database. I have also searched with some other terms. I do not need a full-blown CMS like DotNetNuke or SharePoint. All I am looking for is a system I could easily add to my existing .NET C# site. I have thought about coding one myself, but with me being rather busy at work and this being a personal project, I simply do not have the time to code this myself. I hope that you guys can help. Thank you!

    Read the article

  • Can I change the database server and database a report is pointing to dynamically?

    - by Randy Minder
    I have a Crystal 2008 report that will be deployed to an InfoView server. There are four different databases the user might want to execute the report against. Each of the four databases has exactly the same schema; only the data in each is different. Each database corresponds to a plant we have around the world. Instead of creating four different reports (each one connected to one of the four databases), am I able to dynamically change the server/database the report hits based on a value the user enters into a parameter? I'm really trying to avoid having to create four identical reports that differ only in their database connection. If this isn't possible, how do developers typically deal with this sort of scenario? I would imagine it's fairly common. Thanks very much.
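
    A rough sketch using the in-process Crystal runtime, assuming ReportDocument.SetDatabaseLogon is available as in the Crystal .NET SDK; the plant-to-server mapping and credentials are placeholders. When the report is run entirely inside InfoView, the connection can also be repointed per report through the BusinessObjects administration tools instead:

        using CrystalDecisions.CrystalReports.Engine;

        public static class ReportRunner
        {
            // plantKey would come from the parameter value the user picks.
            public static ReportDocument LoadForPlant(string reportPath, string plantKey,
                                                      string user, string password)
            {
                // Illustrative mapping; real server/database names would differ.
                string server, database;
                switch (plantKey)
                {
                    case "US": server = "SQLUS"; database = "PlantUS"; break;
                    case "DE": server = "SQLDE"; database = "PlantDE"; break;
                    case "CN": server = "SQLCN"; database = "PlantCN"; break;
                    default:   server = "SQLMX"; database = "PlantMX"; break;
                }

                var report = new ReportDocument();
                report.Load(reportPath);

                // Repoints the report's tables at the chosen server/database.
                report.SetDatabaseLogon(user, password, server, database);
                return report;
            }
        }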

    Read the article

  • Is Core Animation causing my subviews to call -drawRect for every single frame?

    - by mystify
    I made a nice UIView subclass which paints all its stuff in -drawRect:, because people said that's good. That view is a subview of another view, and this other view is being animated with Core Animation: it's scaled down, rotated and moved. However, I encountered this: -drawRect: seems to get called trillions of times during the animation, and performance suffers. Is that normal, or did I probably do something wrong?

    Read the article
