Search Results

Search found 40567 results on 1623 pages for 'database performance'.

Page 364 of 1623

  • "No database selected" even when db clearly selected

    - by Someone
    One of my webpages gets a recurring error: "No database selected", even though the DB is selected. At the moment it's about a 50-50 chance whether the page will load just fine or whether I receive this error; after one or two reloads, the page works again. I am including the exact same connection file on my other pages, and I don't have this problem there. What could be the cause of this? I'm using Ensim Pro for web hosting. TIA.

    Read the article

  • Make a usable Join relationship with LINQ on top of a database CSV design error

    - by jdk
    I'm looking for a way to fix and/or abstract away a comma-separated values (CSV) list in a database field in order to reconstruct a usable relationship, such that I can properly join the two tables below and query them using LINQ and its Join method. Following is a sample showing the Person table, whose CsvArticleIds field holds a CSV value to represent a one-to-many association with Article records.

        TABLE [dbo].[Person]

        Id  Name   CsvArticleIds
        --  -----  -------------
        1   Joe    "15,22"
        5   Ed     "22"
        10  Arnie  "8,15,22"

    (Of course a link table should have been created; nonetheless the relationship with articles is trapped inside that list of CSV values.)

        TABLE [dbo].[Article]

        Id  Title
        --  -----------------------------------------
        8   Beginning C#
        15  A Historic look at Programming in the 90s
        22  Gardening in January

    Additional info:

    - The fix can be at any level: C#/.NET or SQL Server.
    - Something easy, because I will be repeating the solution for many other CSV values in other tables. Elegant is nice too.
    - Not looking for efficiency, because this is part of a one-time data migration task and can take as long as it wants to run.
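
    A hedged illustration of the shape of the fix (sketched in Python with hypothetical in-memory copies of the rows above; in C# the split becomes string.Split and the resulting pairs feed LINQ's Join):

        # Hypothetical in-memory copies of the two tables above.
        persons = [
            {'Id': 1,  'Name': 'Joe',   'CsvArticleIds': '15,22'},
            {'Id': 5,  'Name': 'Ed',    'CsvArticleIds': '22'},
            {'Id': 10, 'Name': 'Arnie', 'CsvArticleIds': '8,15,22'},
        ]
        articles = {8: 'Beginning C#',
                    15: 'A Historic look at Programming in the 90s',
                    22: 'Gardening in January'}

        # Rebuild the link table the schema never had: one (person, article)
        # pair per CSV entry, then join through an article lookup.
        links = [(p['Id'], int(aid))
                 for p in persons
                 for aid in p['CsvArticleIds'].split(',')]
        joined = [(pid, articles[aid]) for pid, aid in links]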

    Read the article

  • Is there a lightweight datagrid alternative in Flex?

    - by Wayne
    What is the most performant way of displaying a table of data in Flex? Are there alternatives to the native Flex Datagrid Component? Alternatives that are noted for their rendering speed? Are there other ways to display a table? I have a datagrid with roughly 70 lines and 7 columns of simple text data. This is currently created and loaded in memory. This is being refreshed rapidly (about 800 msec) and there is a slight lag in other animations when it is rendering the table... So I am trying to cut down this render time.

    Read the article

  • XSLT 1.0: restrict entries in a nodeset

    - by Mike
    Hi. Being relatively new to XSLT, I have what I hope is a simple question. I have some flat XML files, which can be pretty big (e.g. 7MB), that I need to make 'more hierarchical'. For example, the flat XML might look like this: <D0011> .... .... and it should end up looking like this: <D0011> .... ....

    I have a working XSLT for this; it essentially gets a nodeset of all the b elements and then uses the following-sibling axis to get a nodeset of the nodes following the current b node (i.e. following-sibling::*[position() = $nodePos]). Recursion is then used to add the siblings into the result tree until another b element is found (I have parameterised it, of course, to make it more generic). I also have a solution that just sends the position in the XML of the next b node and selects the nodes after that one, one after the other (using recursion), via a *[position() = $nodePos] selection.

    The problem is that the time to execute the transformation increases unacceptably with the size of the XML file. Looking into it with XML Spy, it seems that it is the following-sibling and position() comparisons that take the time in the two respective methods. What I really need is a way of restricting the number of nodes in the above selections so that fewer comparisons are performed: every time the position is tested, every node in the nodeset is tested to see if its position is the right one. Is there a way to do that? Any other suggestions? Thanks, Mike

    Read the article

  • JMeter HTTP Cache Manager: unable to cache everything that the browser caches

    - by chinmay brahma
    I used the HTTP Cache Manager to cache the files that are cached by the browser, and I am successful with some pages: the number of files cached in JMeter equals the number of files cached by the browser. But in some cases I found fewer files being cached than the browser caches: using JMeter I found only 5 files being cached, but the real browser cached 12. Thanks in advance

    Read the article

  • Database over 2GB in MongoDB

    - by configurator
    We've got a file-based program we want to convert to use a document database, specifically MongoDB. Problem is, MongoDB is limited to 2GB on 32-bit machines (according to http://www.mongodb.org/display/DOCS/FAQ#FAQ-Whatarethe32bitlimitations%3F), and a lot of our users will have over 2GB of data. Is there a way to have MongoDB use more than one file somehow? I thought perhaps I could implement sharding on a single machine, meaning I'd run more than one mongod on the same machine and they'd somehow communicate. Could that work?
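
    One possible workaround, sketched below at the application level (an assumption, not MongoDB's own sharding: two locally started mongod processes, each owning its own 2GB-capped data files, with the client routing by a deterministic hash; a real mongos/config-server setup would replace this routing):

        import zlib
        from pymongo import MongoClient

        # Two mongod processes on the same box, each with its own data files.
        shards = [MongoClient('localhost', 27017).appdb.docs,
                  MongoClient('localhost', 27018).appdb.docs]

        def shard_for(key):
            # zlib.crc32 is stable across runs, unlike Python's salted hash().
            return shards[zlib.crc32(key.encode()) % len(shards)]

        shard_for('user:42').insert_one({'_id': 'user:42', 'visits': 1})
        doc = shard_for('user:42').find_one({'_id': 'user:42'})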

    Read the article

  • How can I set my deployment options to script the incremental release of a Visual Studio 2010 database project?

    - by littlechris
    I've just started using a VS2010 database project to manage the release of an update to an existing database. I want the deployment option to generate a script that contains the commands to change my existing database rather than create an entirely new one. E.g. I have 10 existing tables, one of which I drop in the new version, and I create some new sprocs: I only want the deploy to script the DROP TABLE and CREATE PROCEDURE commands. I am using VS2010 Premium. Thanks!

    Read the article

  • advantages of Zend_Db_Table vs raw (My)SQL?

    - by sunwukung
    Currently working on a new Zend application and developing the Model. Having worked with Zend_Db_Table before, I opted to replace references in the Model to the Table API with a custom SQL script to take care of data access duties. Now I'm looking at developing a new application/domain model, and I wanted to get some feedback from people re: their experiences with Zend_Db API vs raw SQL, and use cases where it would be preferable to use the API. From a project perspective, the DB platform is unlikely to change from MySQL - so it doesn't need to be particularly abstract - and I assume writing a custom SQL API will be more performant than the assorted classes the Zend DB API requires.

    Read the article

  • Why are difference lists more efficient than regular concatenation?

    - by Craig Innes
    I am currently working my way through the Learn You a Haskell book online, and have come to a chapter where the author explains that some list concatenations can be inefficient. For example:

        ((((a ++ b) ++ c) ++ d) ++ e) ++ f

    is supposedly inefficient. The solution the author comes up with is to use 'difference lists', defined as:

        newtype DiffList a = DiffList { getDiffList :: [a] -> [a] }

        instance Monoid (DiffList a) where
            mempty = DiffList (\xs -> [] ++ xs)
            (DiffList f) `mappend` (DiffList g) = DiffList (\xs -> f (g xs))

    I am struggling to understand why DiffList is more computationally efficient than a simple concatenation in some cases. Could someone explain to me in simple terms why the above example is so inefficient, and in what way DiffList solves this problem?

    Read the article

  • Is there such a thing as too many tables?

    - by Stacey
    I've been searching Stack Overflow for about an hour now and couldn't find any related topics, so I apologize if this is a duplicate question. My inquiry is this: is there a point at which there are too many tables in a database, even if the structure is well organized, thought out, and perfectly facilitates the design intent? I have a database that is quickly approaching 40 tables - about 10 main ones, and over 30 ancillary tables (junction tables, 'enumeration' tables, etc.). Am I just a bad developer, or should I be trying something different? It seems like so many to me that I'm really afraid of how it will impact the performance of the project. I have done a lot of condensing where possible, grouped similar things where possible, etc. The database is built in MS SQL 2008.

    Read the article

  • web service filling gridview awfully slow, as is paging/sorting

    - by nat
    Hi. I am making a page which calls a web service to fill a gridview. This returns a lot of data and is horribly slow. I ran svcutil.exe on the WSDL page and it generated the class and config for me, so I have a load of strongly typed objects coming back from each request to the many service functions. I am then using LINQ to loop around the objects, grabbing the necessary information as I go, but for each row in the grid I need to loop around an object and grab another list of objects (from the same request) and loop around each of them: one parent object to many child ones. All of this then gets dropped into a custom DataTable a row at a time. Hope that makes sense...

    I'm not sure there is any way to speed up the initial load, but surely I should be able to page/sort a lot faster than it currently does; at the moment it appears to take as long to page/sort as it does to load initially. I thought that if, when I first loaded, I put the datasource of the grid in the session, I could whip it out of the session to deal with paging/sorting and the like. Basically it is doing the below:

        protected void Page_Load(object sender, EventArgs e)
        {
            // init the datatable
            // grab the filter vars (if there are any)

            WebServiceObj WS = WSClient.Method(args);

            // fill the datatable (around and around we go)
            foreach (ParentObject po in WS.ReturnedObj)
            {
                var COs = from ChildObject c in WS.AnotherReturnedObj
                          where c.whatever.equals(...) // ...etc
                          select c;
                foreach (ChildObject c in COs)
                {
                    myDataTable.Rows.Add(po.this, po.that, c.thisthing, c.thatthing, etc......);
                }
            }

            grdListing.DataSource = myDataTable;
            Session["dt"] = myDataTable;
            grdListing.DataBind();
        }

        protected void Listing_PageIndexChanging(object sender, GridViewPageEventArgs e)
        {
            grdListing.PageIndex = e.NewPageIndex;
            grdListing.DataSource = Session["dt"] as DataTable;
            grdListing.DataBind();
        }

        protected void Listing_Sorting(object sender, GridViewSortEventArgs e)
        {
            DataTable dt = Session["dt"] as DataTable;
            DataView dv = new DataView(dt);
            string sortDirection = " ASC";
            if (e.SortDirection == SortDirection.Descending)
                sortDirection = " DESC";
            dv.Sort = e.SortExpression + sortDirection;
            grdListing.DataSource = dv.ToTable();
            grdListing.DataBind();
        }

    Am I doing this totally wrongly? Or is the slowness just coming from the amount of data being bound in/returned from the web service? There are maybe 15 columns(ish) and a whole load of rows, with more being added to the data the web service queries all the time. Any suggestions/tips happily received. Thanks

    Read the article

  • Python: speeding up retrieving data from an extremely large string

    - by Burninghelix123
    I have a list that I converted to a very, very long string, as I am trying to edit it; as you can gather, it's called tempString. It works as of now, it just takes way too long to operate, probably because it is several different regex subs. They are as follows:

        tempString = ','.join(str(n) for n in coords)
        tempString = re.sub(',{2,6}', '_', tempString)
        tempString = re.sub("[^0-9\-\.\_]", ",", tempString)
        tempString = re.sub(',+', ',', tempString)
        clean1 = re.findall(('[-+]?[0-9]*\.?[0-9]+,[-+]?[0-9]*\.?[0-9]+,'
                             '[-+]?[0-9]*\.?[0-9]+'), tempString)
        tempString = '_'.join(str(n) for n in clean1)
        tempString = re.sub(',', ' ', tempString)

    Basically it's a long string containing commas and about 1-5 million sets of 4 floats/ints (a mixture of both is possible):

        -5.65500020981,6.88999986649,-0.454999923706,1,,,-5.65500020981,6.95499992371,-0.454999923706,1,,,

    The 4th number in each set I don't need/want; I'm essentially just trying to split the string into a list with 3 floats in each, separated by a space. The above code works flawlessly but, as you can imagine, is quite time consuming on large strings. I have done a lot of research on here for a solution, but the answers all seem geared towards words, i.e. swapping out one word for another.

    EDIT: OK, so this is the solution I'm currently using:

        def getValues(s):
            output = []
            while s:
                # get the three values you want, discard the 3 commas, and the
                # remainder of the string
                v1, v2, v3, _, _, _, s = s.split(',', 6)
                output.append("%s %s %s" % (v1.strip(), v2.strip(), v3.strip()))
            return output

        coords = getValues(tempString)

    Anyone have any advice to speed this up even further? After running some tests, it still takes much longer than I'm hoping for. I've been glancing at NumPy, but I honestly have absolutely no idea how to do the above with it; I understand that once the values are cleaned up I could use them more efficiently with NumPy, but I'm not sure how NumPy could apply to the cleaning itself. The above takes around 20 minutes to clean through 50k sets; I can't imagine how long it would take on my full string of 1 million sets. It's just surprising that the program that originally exported the data took only around 30 secs for the 1 million sets.
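
    For the NumPy route mentioned above, a minimal sketch (assuming every set really holds exactly four comma-separated values and the only noise is the empty fields left by the ,,, runs):

        import numpy as np

        # Drop the empty fields, then every set is exactly four numbers and
        # the unwanted fourth value is just a column slice away.
        fields = [f for f in tempString.split(',') if f]
        arr = np.array(fields, dtype=float).reshape(-1, 4)
        coords = [' '.join(map(str, row)) for row in arr[:, :3]]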

    Read the article

  • A question about indexes: the gain for queries vs the cost to inserts & updates in a database

    - by Mestika
    Hi, I have a question about the fine line between the gain of an index on a table that is growing steadily in size every month and the cost of that index for inserts and updates. The situation is that I have two tables, Table1 and Table2. Each table grows slowly but regularly each month (about 100 new rows for Table1 and a couple of rows for Table2). My concrete question is whether to have an index or to drop it. I've measured that a covering index on Table2 improves my SELECT queries rather much, but again, I have to consider the pros and cons and am having a really hard time deciding. For Table1 an index might not be necessary, because the SELECT queries there are not that common. I would appreciate any suggestions, tips, or just good advice as to what is a good solution. By the way, I'm using IBM DB2 version 9.7 as my database system. Sincerely, Mestika

    Read the article

  • How to delete duplicate/aggregate rows faster in a file using Java (no DB)

    - by S. Singh
    I have a 2GB big text file with 5 columns delimited by tab. A row is called a duplicate only if 4 out of the 5 columns match. Right now, I am de-duping by first loading each column into a separate List, then iterating through the lists, deleting duplicate rows as they are encountered and aggregating. The problem: it is taking more than 20 hours to process one file, and I have 25 such files to process. Can anyone please share their experience of how they would go about such de-duping? This de-duping is throw-away code, so I was looking for a quick/dirty solution to get the job done as soon as possible. Here is my pseudo code (roughly):

        Iterate over the rows
            i = current_row_no
            Iterate over row no. i+1 to last_row
                if (col1 matches     // find duplicate
                    && col2 matches
                    && col3 matches
                    && col4 matches)
                {
                    col5List.set(i, get col5);   // aggregate
                }

    Duplicate example: A and B are duplicates, given A = (1,1,1,1,1), B = (1,1,1,1,2), C = (2,1,1,1,1), and the output would be:

        A = (1,1,1,1,1+2)
        C = (2,1,1,1,1)    [notice that B has been kicked out]
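
    The nested scan above is O(n²); keying a hash map on the four matching columns turns it into a single pass. A sketch of that swap in Python (the question asks about Java, where a HashMap over the same tuple key plays the identical role):

        def dedupe(lines):
            seen, order = {}, []
            for line in lines:
                cols = line.rstrip('\n').split('\t')
                key = tuple(cols[:4])           # the four columns that define a duplicate
                if key in seen:
                    seen[key] += '+' + cols[4]  # aggregate the fifth column
                else:
                    seen[key] = cols[4]
                    order.append(key)
            return ['\t'.join(key + (seen[key],)) for key in order]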

    Read the article

  • Can this loop be sped up in pure Python?

    - by Noctis Skytower
    I was trying out an experiment with Python, trying to find out how many times it could add one to an integer in one minute's time. Assuming two computers are the same except for the speed of the CPUs, this should give an estimate of how fast some CPU operations may be for the computer in question. The code below is an example of a test designed to fulfill the requirements given above. This version is about 20% faster than the first attempt and 150% faster than the third attempt. Can anyone make any suggestions as to how to get the most additions in a minute's time span? Higher numbers are desirable.

    EDIT: This experiment is being written in Python 3.1 and is 15% faster than the fourth speed-up attempt.

        def start(seconds):
            import time, _thread

            def stop(seconds, signal):
                time.sleep(seconds)
                signal.pop()

            total, signal = 0, [None]
            _thread.start_new_thread(stop, (seconds, signal))
            while signal:
                total += 1
            return total

        if __name__ == '__main__':
            print('Testing the CPU speed ...')
            print('Relative speed:', start(60))
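
    One common speed-up, sketched below under the assumption that overshooting the deadline by one batch of additions is acceptable: poll a deadline once per batch instead of checking the shared list on every iteration:

        import time

        def start_batched(seconds, batch=100000):
            total = 0
            deadline = time.time() + seconds
            # One clock read per batch instead of one list check per addition.
            while time.time() < deadline:
                for _ in range(batch):
                    total += 1
            return total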

    Read the article

  • How to do some performance testing in asp.net mvc?

    - by chobo2
    Hi. I am using asp.net mvc 2.0 and I want to test how long some of my code takes. In one scenario I:

    1. load an XML file,
    2. validate the XML file and deserialize it,
    3. validate all rows in the XML file with more advanced validation that cannot be done in the schema validation,
    4. then do a bulk insert.

    I want to know how long steps 1 to 3 take and how long step 4 takes (see the sketch below). I tried taking DateTime.UtcNow in various places and subtracting, but it told me it took about 3 seconds, which I know is not right, as steps 1 to 4 take 2 minutes to do.
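
    A sketch of per-step timing, in Python with hypothetical stand-ins for the steps (the C# counterpart is System.Diagnostics.Stopwatch wrapped around the same boundaries):

        import time
        from contextlib import contextmanager

        @contextmanager
        def timed(label):
            start = time.time()
            yield
            print('%s: %.2f s' % (label, time.time() - start))

        def load_and_validate(path):   # hypothetical stand-in for steps 1-3
            time.sleep(0.1); return []

        def bulk_insert(rows):         # hypothetical stand-in for step 4
            time.sleep(0.1)

        with timed('load + validate + deserialize (steps 1-3)'):
            data = load_and_validate('import.xml')
        with timed('bulk insert (step 4)'):
            bulk_insert(data)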

    Read the article

  • Efficient data structure design

    - by Sway
    Hi there. I need to match a series of user-entered words against a large dictionary of words (to ensure the entered value exists). So if the user entered "orange", it should match an entry "orange" in the dictionary. Now the catch is that the user can also enter a wildcard or a series of wildcard characters, say "or__ge", which would also match "orange". The key requirements are:

    - this should be as fast as possible;
    - use the smallest amount of memory to achieve it.

    If the word list were small, I could use a string containing all the words and use regular expressions. However, given that the word list could contain potentially hundreds of thousands of entries, I'm assuming this wouldn't work. So is some sort of 'tree' the way to go for this (see the sketch below)? Any thoughts or suggestions on this would be totally appreciated! Thanks in advance, Matt
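
    Before reaching for a full trie, a minimal sketch of the bucketing idea (assuming, per the "or__ge" example, that '_' is the wildcard and matches exactly one character, so only same-length words can ever match):

        from collections import defaultdict

        def build_index(words):
            # Bucket the dictionary by word length, once.
            buckets = defaultdict(list)
            for w in words:
                buckets[len(w)].append(w)
            return buckets

        def matches(buckets, pattern, wildcard='_'):
            return [w for w in buckets[len(pattern)]
                    if all(p == wildcard or p == c for p, c in zip(pattern, w))]

        idx = build_index(['orange', 'grange', 'apple'])
        print(matches(idx, 'or__ge'))   # ['orange']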

    Read the article

  • Are there any e-commerce websites that use NoSQL databases?

    - by Saif Bechan
    I have read a lot lately about 'NoSQL' databases such as CouchDB, MongoDB, etc. Most of the websites I have seen using them are mainly text-based websites such as The New York Times and SourceForge. I was wondering if you could apply this to websites where payment is a huge issue. I am thinking of the following issues:

    - How well can you secure the data?
    - Do these systems provide an easy backup/restore mechanism?
    - How are transactions handled (commit/rollback)?

    I have read the following articles that cover some aspects:

    - Can I do transactions and locks in CouchDB?
    - Pros/Cons of document based database vs relational database

    In these posts the aspect of transactions is covered; however, the questions of security and backups are not. Can someone shed some light on this subject? And if possible, does anyone know of some e-commerce websites that have successfully implemented a document-based database?

    Read the article

  • List of divisors of an integer n (Haskell)

    - by Code-Guru
    I currently have the following function to get the divisors of an integer:

        -- All divisors of a number
        divisors :: Integer -> [Integer]
        divisors 1 = [1]
        divisors n = firstHalf ++ secondHalf
          where
            firstHalf    = filter (divides n) (candidates n)
            secondHalf   = filter (\d -> n `div` d /= d)
                                  (map (n `div`) (reverse firstHalf))
            candidates n = takeWhile (\d -> d * d <= n) [1..n]

    I ended up adding the filter to secondHalf because a divisor was repeating when n is the square of a prime number. This seems like a very inefficient way to solve this problem. So I have two questions: How do I measure whether this really is a bottleneck in my algorithm? And if it is, how do I go about finding a better way to avoid repetitions when n is the square of a prime?
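
    On the repetition question: the duplicate shows up exactly when n is a perfect square (which includes squares of primes), because the square root is its own mirror. The same algorithm with a single equality test in place of the extra filter, sketched in Python for illustration:

        def divisors(n):
            small, large = [], []
            d = 1
            while d * d <= n:
                if n % d == 0:
                    small.append(d)
                    if d * d != n:   # the root of a perfect square has no distinct mirror
                        large.append(n // d)
                d += 1
            return small + large[::-1]

        assert divisors(49) == [1, 7, 49]          # square of a prime: 7 appears once
        assert divisors(12) == [1, 2, 3, 4, 6, 12]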

    Read the article

  • How to integrate access control with my ORM in a .net windows form application?

    - by Ying
    I am developing a general database query tool, a .NET 3.5 Windows Forms application. In order to keep the presentation layer independent of the database layer, I use an ORM framework, XPO from DevExpress, but I have no access control function built in. I searched the Internet and found that in WCF Data Services there is an interesting concept, the Interceptor, which follows AOP (Aspect-Oriented Programming). I am wondering who has experience building access control into an ORM like this. My basic requirements are:

    - It should be a general method, controlled by users at runtime, so any hard coding is not acceptable.
    - It could be based on an attribute, a database table, or even an external assembly.
    - I am willing to buy a ready solution.

    According to the idea of AOP, an access control function can be integrated with existing functions easily and nearly without the previous developer knowing ;) (see the sketch below). Any suggestions are welcome.
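
    The interception idea in miniature, sketched as a Python decorator purely to illustrate the AOP concept (the entry's actual stack is .NET/XPO/WCF; requires_role and save_record are hypothetical names):

        def requires_role(role):
            # The access check runs before the real call; the decorated
            # function never knows it is being guarded.
            def wrap(fn):
                def guarded(user, *args, **kwargs):
                    if role not in getattr(user, 'roles', ()):
                        raise PermissionError('%s lacks role %r' % (user.name, role))
                    return fn(user, *args, **kwargs)
                return guarded
            return wrap

        @requires_role('editor')
        def save_record(user, record):
            ...  # persist via the ORM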

    Read the article

  • generate images with labels from a database

    - by Duncan Benoit
    Hi there. I need to have some images in my database, and the thing is that I need those images to have certain file names, dimensions, and text on them. I know how to generate some images using the OpenCV lib, but this means that I need to install the lib and do just that job (which sounds like reinventing the wheel). Do you think it is worth doing that, or maybe you have a better idea? PS: the images are for the testing stage of a software application, so I don't need anything fancy or artistic.
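
    If Python is available in the test harness, a minimal sketch with PIL avoids hand-rolling image generation (make_fixture and the sample values are assumptions; white background, default bitmap font):

        from PIL import Image, ImageDraw

        def make_fixture(path, size, text):
            img = Image.new('RGB', size, 'white')
            draw = ImageDraw.Draw(img)
            draw.text((10, 10), text, fill='black')   # default bitmap font
            img.save(path)

        # File name, dimensions and the on-image text all come from test data.
        make_fixture('person_42.png', (320, 240), 'person_42 320x240')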

    Read the article

  • C++ & proper TDD

    - by Kotti
    Hi! I recently tried developing a small-sized project in C#, and during the whole project our team used the Test-Driven Development (TDD) technique (xUnit, Moq). I really think this was awesome, because (when paired with C#) this approach allowed us to relax when coding, relax when designing, and relax when refactoring. I suspect that all this TDD stuff actually simplifies the coding process and, well, it allowed (eventually, for me) getting the same result with fewer brain cells working.

    Right after that I tried using TDD paired with C++ (I used the Google Test and Google Mock libraries), and, I don't know why, but I actually think that TDD here was a step back in terms of rapid application development. I had moments when I had to spend huge amounts of time thinking about my tests, building proper mocks, rebuilding them, and swearing at my monitor. And, well, I obviously can't ask something like "what did I do wrong?" or "what was wrong in my approach?", because I don't know what to describe. But if there are any people who are used to TDD in C++ (and probably C# too), could you please advise me how to do this properly? Framework recommendations, architecture approaches, plain coding advice - if you are experienced in TDD & C++, please respond.

    Read the article
