Search Results

Search found 40567 results on 1623 pages for 'database performance'.


  • Cannot log in to Postgres database despite setting password for user 'postgres'

    - by Serg
    I'm trying to use pgAdmin III to manage my Postgres database. Here are the commands I've run on my machine:

        sudo apt-get install postgresql

    Then I installed the pgAdmin III application:

        sudo apt-get install pgadmin3

    Next I focused on setting my username and password in order to log in:

        sudo -u postgres psql postgres

    Inside psql I set my password:

        \password postgres

    Finally I created my database:

        sudo -u postgres createdb repairsdatabase

    When I try to log in using pgAdmin III, I get the error:

        An error has occurred: Error connecting to the server: FATAL: Peer authentication failed for user "postgres"
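
    The likely cause (my reading, not stated in the post): "peer" authentication applies only to local Unix-socket connections and ignores the password set with \password, so the usual fixes are editing pg_hba.conf to use md5 for local connections, or having pgAdmin III connect over TCP instead. A minimal sketch of the TCP route using psycopg2 (the driver choice and the password value are assumptions):

        import psycopg2

        # host="127.0.0.1" forces a TCP connection, which is matched by the
        # "host" rules in pg_hba.conf rather than the "local" peer rule.
        conn = psycopg2.connect(
            host="127.0.0.1",
            dbname="repairsdatabase",
            user="postgres",
            password="your_password",  # the password set via \password postgres
        )
        print(conn.server_version)
        conn.close()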

    Read the article

  • Export ASPNETDB data to another database

    - by raziiq
    Hi there. I am developing in Visual Web Developer 2008. I have SQL Express 2005 and SQL Management Studio 2008 installed on my PC. I purchased an MS SQL 2008 database on DiscountASP.net. The host provides only one database, but my project has two: ASPNETDB, which contains the roles and users etc. (created using the Website Configuration Wizard), and my own database containing the data for my website, named MainDB. Since the host allows only one database, I exported ASPNETDB's tables and stored procedures into MainDB using aspnet_regsql.exe. The problem is that the stored procedures and tables are exported to MainDB, but the data is not; I mean there are no users in the tables. My question is: how do I export everything from ASPNETDB, including stored procedures, tables and data, to my MainDB?
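
    One way to move the missing rows (a sketch under assumptions, not a tested recipe: the ODBC DSNs are placeholders, the table list is abbreviated, and identity columns and foreign-key ordering may need extra care) is to copy each membership table's contents from the local ASPNETDB into the schema aspnet_regsql.exe already created in MainDB:

        import pyodbc

        src = pyodbc.connect("DSN=LocalAspnetDb")   # placeholder DSNs
        dst = pyodbc.connect("DSN=HostedMainDb")
        src_cur, dst_cur = src.cursor(), dst.cursor()

        # Parent tables first so foreign keys resolve; the list is abbreviated.
        for table in ("aspnet_Applications", "aspnet_Users", "aspnet_Membership"):
            src_cur.execute(f"SELECT * FROM dbo.{table}")
            rows = src_cur.fetchall()
            if rows:
                marks = ", ".join("?" * len(rows[0]))
                dst_cur.executemany(f"INSERT INTO dbo.{table} VALUES ({marks})", rows)
        dst.commit()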

    Read the article

  • SQL Server 2008 Copy Database Wizard: Fail

    - by Nai
    I am trying to use the SQL Server 2008 Copy Database Wizard to copy a SQL Server 2008 database. I am using the SQL Management Object method. However, the copy fails with the following error: ERROR : errorCode=-1073548784 description=Executing the query "/* '==============================================..." failed with the following error: "Cannot use a CONTAINS or FREETEXT predicate on table or indexed view 'Product' because it is not full-text indexed." Any ideas on how I can proceed will be much appreciated. Kind regards, Nai
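
    For what it's worth, a common cause (my suggestion, not from the post) is that the destination copy lacks the full-text index that 'Product' has on the source, so scripted objects that use CONTAINS/FREETEXT fail. A sketch that recreates it on the destination, with the catalog name, indexed column, and key index name all assumed:

        import pyodbc

        conn = pyodbc.connect("DSN=CopiedDb", autocommit=True)  # placeholder DSN
        cur = conn.cursor()
        # CREATE FULLTEXT CATALOG cannot run inside a transaction, hence autocommit.
        cur.execute("CREATE FULLTEXT CATALOG ProductCatalog")
        cur.execute(
            "CREATE FULLTEXT INDEX ON dbo.Product (Name) "  # column is an assumption
            "KEY INDEX PK_Product ON ProductCatalog"        # existing unique key index
        )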

    Read the article

  • Logging strategy vs. performance

    - by vtortola
    Hi, I'm developing a web application that has to support lots of simultaneous requests, and I'd like to keep it fast. I now have to implement a logging strategy, and I'm going to use log4net, but... what and how should I log? I mean:

    - How does logging impact performance? Is it possible/recommended to log using async calls?
    - Is it better to use a text file or a database? Is it possible to make this conditional? For example, log to the database by default, and if that fails, switch to a text file.
    - What about multithreading? Should I care about synchronization when I use log4net, or is it thread-safe out of the box?

    The requirements state that the application should cache a couple of things per request, and I'm afraid of the performance impact of that. Cheers.
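
    On the async question: log4net is a .NET library, so purely as a language-neutral sketch of the idea (not log4net's API), here is how Python's standard library decouples the request thread from the slow log I/O with a queue:

        import logging
        import logging.handlers
        import queue

        log_queue = queue.Queue(-1)                    # unbounded buffer
        file_handler = logging.FileHandler("app.log")  # the slow, real sink

        # A background thread drains the queue and does the file I/O.
        listener = logging.handlers.QueueListener(log_queue, file_handler)
        listener.start()

        logger = logging.getLogger("web")
        logger.setLevel(logging.INFO)
        logger.addHandler(logging.handlers.QueueHandler(log_queue))
        logger.info("request handled")  # returns immediately; the write happens later

        listener.stop()  # flush and join the background thread on shutdown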

    Read the article

  • iPhone Foundation - performance implications of mutable and xxxWithCapacity:0

    - by Adam Eberbach
    All of the collection classes have two versions, mutable and immutable, such as NSArray and NSMutableArray. Is the distinction merely to promote careful programming by providing a const collection, or is there some performance hit when using a mutable object as opposed to an immutable one? Similarly, each of the collection classes has a method xxxxWithCapacity:, like [NSMutableArray arrayWithCapacity:0]. I often use zero as the argument because it seems a better choice than guessing wrongly how many objects might be added. Is there some performance advantage to creating a collection with capacity for enough objects in advance? If not, why isn't the method something like + (id)emptyArray?

    Read the article

  • .NET or Windows Synchronization Primitives Performance Specifications

    - by ovanes
    Hello all, I am currently writing a scientific article and need to be very exact with citations. Can someone point me to MSDN, an MSDN article, a published article, or a book where I can find a performance comparison of Windows or .NET synchronization primitives? I know that these are in descending order of performance: Interlocked API, critical section, .NET lock statement, Monitor, Mutex, EventWaitHandle, Semaphore. Many thanks, Ovanes. P.S. I found a great book: Concurrent Programming on Windows by Joe Duffy. It is written by one of the lead concurrency developers for the .NET Framework and is simply brilliant, with lots of explanations of how things work or were implemented.

    Read the article

  • Google Gears - Database - VACUUM

    - by Sirber
    With this code:

        var db = google.gears.factory.create('beta.database');
        db.open('cominar');
        db.execute('CREATE TABLE IF NOT EXISTS Ajax (AJAX_ID INTEGER PRIMARY KEY AUTOINCREMENT, MODULE TEXT, FUNCTION TEXT, CONTENT_JSON TEXT);');
        db.execute('VACUUM;'); // cleans the DB

    I'm trying to clean (VACUUM) the database at each initialization, but I get this error:

        Uncaught Error: Database operation failed. ERROR: authorization denied DETAILS: not authorized

    The database was created by me (the same page). Thank you!

    Read the article

  • How do I continuously update data on an ASP page?

    - by Lori
    Hi, I have an ASP page based on a very simple database. It references a single table of probably 30 records and maybe 12 data fields, and everything works great, as I only upload a new database every week or so. I have a special circumstance where I would like to upload new data to the database and have it display automatically on the page every 20 to 30 seconds, without the user having to refresh their screen. I would expect up to 1000 concurrent users accessing the data. I have been manually uploading the database via FTP, which will obviously not work on this timeline and would also run the risk of error pages while the database is being replaced. So, can anyone point me in the right direction to set up this scenario? Other details that might be helpful: the database is an Access database (but I could change to another format if needed), running on a Windows platform hosted by an ISP, not my own server. Thanks in advance for any help on this! Lori

    Read the article

  • How would the conversion of a custom CMS using a text-file-based database to Drupal be tackled?

    - by James Morris
    Just today I've started using Drupal for a site I'm designing/developing. For my own site http://jwm-art.net I wrote a user-unfriendly CMS in PHP. My brief experience with Drupal is making me want to convert from the CMS I wrote. Its sole method (other than comments) of publishing content is logging in via SSH and using nano to create a plain text file in a format like so*:

        head<<END_HEAD
        title = Audio
        keywords= open,source,audio,sequencing,sampling,synthesis
        descr = Music, noise, and audio, created by James W. Morris.
        parent = home
        END_HEAD
        main<<END_MAIN
        text<<END_TEXT
        Digital music, noise, and audio made exclusively with
        @=xlink=http://www.linux-sound.org@:Linux Audio Software@_=@.
        END_TEXT
        image=gfb@--@;Accompanying image for penonpaper-c@right
        ilink=audio_2008
        br=
        ilink=audio_2007
        br=
        ilink=audio_2006
        END_MAIN
        info=text<<END_TEXT
        I've been making PC based music since the early nineties - fortunately most of it only exists as tape recordings.
        END_TEXT

    (http://jwm-art.net/dark.php?p=audio - there are just over 400 pages on there.)

    *The journal-entry form, which takes some of the work out of it, has mysteriously broken. It still required SSH access to copy the file to the main dat dir and to check I had actually remembered the format correctly and the code hadn't mis-formatted anything (which it always does). I don't want to drop all the old content (just some of it), but how much work would be involved in converting it, factoring in that I've been using Drupal for a day, have not written any PHP for a couple of years, and have zero knowledge of SQL? How might a team of developers tackle this? How doable is it for one guy in his spare time?
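
    As a starting point for a conversion script (a sketch only: the file name is hypothetical, and it handles just the head block, not the main/info sections), the key/value header parses easily in Python:

        import re
        from pathlib import Path

        def parse_head(text):
            """Return the key/value pairs between head<<END_HEAD and END_HEAD."""
            block = re.search(r"head<<END_HEAD\n(.*?)\nEND_HEAD", text, re.S)
            fields = {}
            if block:
                for line in block.group(1).splitlines():
                    key, _, value = line.partition("=")
                    fields[key.strip()] = value.strip()
            return fields

        page = Path("audio.dat").read_text()  # hypothetical page file
        print(parse_head(page))  # e.g. {'title': 'Audio', 'parent': 'home', ...}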

    Read the article

  • PHP/MySQL database connection priority?

    - by Josh
    Hello, I have a production database where usage statistics for a service we're working on are kept. I use PHP to periodically roll up different resolutions (day, week, month, year) of interesting statistics into buckets dictated by the resolution. The PHP application I've written "completes" its data when it's run, such that it will calculate all the rolled-up statistics for the resolutions and periods since it was last run. This is useful if we want to turn it off to debug database performance issues, because I can turn it back on and have it complete its data set. The problem I have is that the calculations are fairly intensive and drive up the QPS of the production database server. Is there a way to set a "priority" on a particular database connection so that it will only use "off-cycles" to do these calculations? Maybe the proper response would be to replicate the tables I'm working on into a different stats database, but unfortunately I don't have the resources in place to attempt such a thing (yet). Thanks in advance for any help, Josh
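
    As far as I know, MySQL has no per-connection priority setting (the MyISAM-era LOW_PRIORITY/HIGH_PRIORITY modifiers apply per statement, not per connection), so a common workaround is to throttle the batch job itself so it only consumes spare capacity. A sketch of that idea, with PyMySQL standing in for the PHP code and the connection details, table, and roll-up query all hypothetical:

        import time
        import pymysql  # stand-in driver; the real application is PHP

        conn = pymysql.connect(host="localhost", user="stats",
                               password="secret", database="prod")  # placeholders
        cur = conn.cursor()

        buckets = ["2010-05-01", "2010-05-02"]  # hypothetical periods left to roll up
        for bucket in buckets:
            cur.execute("SELECT COUNT(*) FROM usage_log WHERE day = %s", (bucket,))
            # ... persist the rolled-up row for this bucket here ...
            time.sleep(0.5)  # yield capacity back to production traffic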

    Read the article

  • NSURLConnection performance

    - by oksk
    Hi all, I'm currently using NSURLConnection for downloading some images in my app. Before this, I implemented it with NSData (dataWithContentsOfURL:) on an NSThread. But I wanted to be able to cancel downloads in progress, so I changed to NSURLConnection. That introduced another problem: performance dropped noticeably after the change. For example, downloading the images takes at least 5 seconds with the NSThread (async NSData) approach, but 2 or 3 times longer with NSURLConnection (async)! Can I improve the performance? How? (* Sorry, my question originally said NSData dataWithContentsOfFile:; the correct method is dataWithContentsOfURL:.)

    Read the article

  • A way to measure performance

    - by Andrei Ciobanu
    Given Exercise 14 from 99 Haskell Problems:

        (*) Duplicate the elements of a list.

    E.g.:

        *Main> dupli''' [1..10]
        [1,1,2,2,3,3,4,4,5,5,6,6,7,7,8,8,9,9,10,10]

    I've implemented 4 solutions:

        {-- my first attempt --}
        dupli :: [a] -> [a]
        dupli [] = []
        dupli (x:xs) = replicate 2 x ++ dupli xs

        {-- using concatMap and replicate --}
        dupli' :: [a] -> [a]
        dupli' xs = concatMap (replicate 2) xs

        {-- using foldl --}
        dupli'' :: [a] -> [a]
        dupli'' xs = foldl (\acc x -> acc ++ [x,x]) [] xs

        {-- using foldl 2 --}
        dupli''' :: [a] -> [a]
        dupli''' xs = reverse $ foldl (\acc x -> x:x:acc) [] xs

    Still, I don't know how to really measure performance. So what's the recommended function (from the above list) in terms of performance? Any suggestions?

    Read the article

  • Tool for documenting the fields of a database?

    - by Jerome WAGNER
    Hello, I need to add documentation to all fields of two databases (one PostgreSQL and one SQL Server), with around 100 tables each. What tool would be convenient for that (reverse-engineer the schema + add documentation manually on all fields)? My preference would be an open source tool with graphical and XML output. Thanks for your help, Jerome WAGNER
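
    Not a tool recommendation, but as a sketch of where PostgreSQL keeps per-column documentation (so a tool, or a home-grown XML dump, can read and write it), the system catalogs can be queried directly; the connection details here are assumptions:

        import psycopg2

        conn = psycopg2.connect(dbname="mydb")  # placeholder connection
        cur = conn.cursor()
        # Column comments live in pg_description, keyed by table oid + column number.
        cur.execute("""
            SELECT c.relname, a.attname, d.description
            FROM pg_class c
            JOIN pg_namespace n ON n.oid = c.relnamespace
            JOIN pg_attribute a ON a.attrelid = c.oid AND a.attnum > 0
            LEFT JOIN pg_description d ON d.objoid = c.oid AND d.objsubid = a.attnum
            WHERE n.nspname = 'public' AND c.relkind = 'r'
            ORDER BY c.relname, a.attnum
        """)
        for table, column, comment in cur.fetchall():
            print(table, column, comment or "(undocumented)")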

    Read the article

  • Performance benefits of upgrading RichFaces to a newer version

    - by peteDog
    I have a client that's running an application based on JBoss 4.0.5, Seam 1.2 and RichFaces 3.0.1. Their system is having performance problems due to the fact that a lot of data comes back from the server to be displayed on screen, and the rendering of that data seems to take forever. The data brought back is displayed in a tabbed interface, but the tabs aren't currently loaded individually; they are all loaded at once. I'm trying to build a case to present to the client on the benefits of upgrading to a newer version of RichFaces which, as I understand it, has added a great number of features related to tabbed panels and being able to use Ajax to page the data and load only the chunks you actually need to display at the moment, not the rest that sits in other tabs. The move to a newer version of RichFaces will also mean newer versions of JBoss and Seam, as the current production build of RichFaces 3.2.1 requires JSF 1.2. If anyone has suggestions or experience regarding the performance of current versions of RichFaces, paging, etc., I would really appreciate some feedback.

    Read the article

  • Setting up NHibernate with an Oracle database and Visual Studio 2010

    - by Geoff
    I'm creating an ASP.NET project and I would like to set up NHibernate as my ORM tool. I will be using an existing Oracle database and Visual Studio 2010. ORM tools are very new to me, and I could really use any advice to better understand the tool and the process required to implement it. I've been following an article at http://nhforge.org/wikis/howtonh/your-first-nhibernate-based-application.aspx to learn about it, and I'm stuck where it says to create a local database, as mine only gives me the option to create a SQL Server database (perhaps this is new for Visual Studio 2010?). Is the purpose of this database just to cache results from the live database? Thanks for your help! Geoff

    Read the article

  • Cost of exception handlers in Python

    - by Thilo
    In another question, the accepted answer suggested replacing a (very cheap) if statement in Python code with a try/except block to improve performance. Coding style issues aside, and assuming that the exception is never triggered, how much difference does it make (performance-wise) to have an exception handler, versus not having one, versus having a compare-to-zero if statement?
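
    This is easy to measure directly with timeit; a minimal sketch comparing the two shapes when the exception never fires (the exact numbers will vary by Python version and machine):

        import timeit

        setup = "d = {'key': 1}"
        if_version = "x = d['key'] if 'key' in d else None"
        try_version = (
            "try:\n"
            "    x = d['key']\n"
            "except KeyError:\n"
            "    x = None"
        )
        print("if/else   :", timeit.timeit(if_version, setup=setup))
        print("try/except:", timeit.timeit(try_version, setup=setup))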

    Read the article

  • How does glClear() improve performance?

    - by Jon-Eric
    Apple's Technical Q&A on addressing flickering (QA1650) includes the following paragraph. (Emphasis mine.) You must provide a color to every pixel on the screen. At the beginning of your drawing code, it is a good idea to use glClear() to initialize the color buffer. A full-screen clear of each of your color, depth, and stencil buffers (if you're using them) at the start of a frame can also generally improve your application's performance. On other platforms, I've always found it to be an optimization to not clear the color buffer if you're going to draw to every pixel. (Why waste time filling the color buffer if you're just going to overwrite that clear color?) How can a call to glClear() improve performance?

    Read the article

  • Virtual Earth Shape Rendering Performance

    - by Mike
    I am overlaying a transparent image on my VEMap control by rendering it as a single VEShape. The shape changes size dynamically depending on the zoom level of my map and can be as large as 4000*4000px. In older browsers such as IE6 and early versions of Firefox 2.x, map control performance degrades rapidly when my shape gets larger than 1500*1500px. The mouse pointer moves slowly and the map responds very slowly to events. I don't see this issue at all in newer browsers (IE7+). Are there any workarounds to boost the performance of rendering a large shape for IE6 users?

    Read the article

  • Retrieving many huge EPS files and converting them to JPEG in an ASP.NET application

    - by Ashish Gupta
    I have many (600) EPS files (300 KB - 1 MB) in a database. In my ASP.NET application (using ASP.NET 4.0) I need to retrieve them one by one and call a web service which converts the content to a JPEG file and updates the database (the JPEGContent column with the JPEG content). However, retrieving the content for all 600 takes too long even from SQL Management Studio itself (5 minutes for 10 EPS contents). So I have two issues:

    1) How to get the EPS content (unfortunately, selecting only a certain number of contents is not an option :-( ). Approach 1:

        foreach(var DataRow in DataTable.Rows)
        {
            // get the Id and byte[] of EPS
            // Call the web method to convert EPS content to JPEG, which would also update the database.
        }

    or:

        foreach(var DataRow in DataTable.Rows)
        {
            // get only the Id of EPS
            // Hit the database to get the content of the EPS
            // Call the web method to convert EPS content to JPEG, which would also update the database.
        }

    or any other approach?

    2) Converting EPS to JPEG using a web method for 600 contents. Of course, each call would be a long-running operation. Would the Task Parallel Library (TPL) be a better way to achieve this? Also, is doing the entire thing in a SQL CLR function a good idea?
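
    On question 2: TPL is the .NET answer; purely as a language-neutral sketch of the fan-out pattern (every name below is hypothetical), a bounded thread pool over the 600 IDs looks like this:

        from concurrent.futures import ThreadPoolExecutor

        def fetch_all_ids():
            return range(600)  # stand-in for reading the 600 EPS ids

        def convert_to_jpeg(eps_id):
            # fetch one EPS blob by id, call the conversion web method, and let
            # it write the JPEG content back; one row at a time keeps memory
            # flat compared with loading all 600 contents up front
            pass

        # max_workers bounds concurrent calls to the conversion service.
        with ThreadPoolExecutor(max_workers=8) as pool:
            pool.map(convert_to_jpeg, fetch_all_ids())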

    Read the article

  • Normalize or Denormalize in high traffic websites

    - by Inam Jameel
    What is the best practice for database design for high-traffic websites like this one, Stack Overflow? Should one use a normalized database for record keeping, a denormalized design, or a combination of both? Is it sensible to use a normalized database as the main database for record keeping, to reduce redundancy, while at the same time maintaining a denormalized copy of the database for fast searching? Or should the main database be denormalized, with normalized views built at the application level for fast database operations? Or is there an approach besides these? What is the best practice for designing high-traffic websites?

    Read the article

  • How to use SQLAlchemy to dump an SQL file from query expressions to bulk-insert into a DBMS?

    - by Mahmoud Abdelkader
    Please bear with me as I explain the problem and how I tried to solve it; my question on how to improve it is at the end. I have a 100,000-line CSV file from an offline batch job, and I needed to insert it into the database as its proper models. Ordinarily, if this is a fairly straightforward load, it can be trivially done by just munging the CSV file to fit a schema, but I had to do some external processing that requires querying, and it's just much more convenient to use SQLAlchemy to generate the data I want. The data I want here is 3 models that represent 3 pre-existing tables in the database, and each subsequent model depends on the previous model. For example:

        Model C --> Foreign Key --> Model B --> Foreign Key --> Model A

    So the models must be inserted in the order A, B, and C. I came up with a producer/consumer approach:

    - instantiate a multiprocessing.Process which contains a thread pool of 50 persister threads that have a thread-local connection to the database
    - read a line from the file using the csv DictReader
    - enqueue the dictionary to the process, where each thread creates the appropriate models by querying the right values and persists them in the appropriate order

    This was faster than a non-threaded read/persist, but it is way slower than bulk-loading a file into the database. The job finished persisting after about 45 minutes. For fun, I decided to write it in SQL statements; that took 5 minutes. Writing the SQL statements took me a couple of hours, though. So my question is, could I have used a faster method to insert rows using SQLAlchemy? As I understand it, SQLAlchemy is not designed for bulk insert operations, so this is less than ideal. This leads to my question: is there a way to generate the SQL statements using SQLAlchemy, throw them in a file, and then just bulk-load that into the database? I know about str(model_object), but it does not show the interpolated values. I would appreciate any guidance on how to do this faster. Thanks!
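
    On dumping real SQL: a statement can be compiled with its values inlined rather than as bind-parameter placeholders; a minimal sketch (the dialect and table are illustrative, and literal_binds availability depends on the SQLAlchemy version):

        from sqlalchemy import Column, Integer, MetaData, String, Table, insert
        from sqlalchemy.dialects import postgresql

        metadata = MetaData()
        model_a = Table("model_a", metadata,
                        Column("id", Integer, primary_key=True),
                        Column("name", String))

        stmt = insert(model_a).values(id=1, name="example")
        compiled = stmt.compile(dialect=postgresql.dialect(),
                                compile_kwargs={"literal_binds": True})
        print(str(compiled))
        # INSERT INTO model_a (id, name) VALUES (1, 'example')

    Lines like this can be written to a file and fed to the database's bulk loader, which is in the spirit of what the post asks for.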

    Read the article

  • Database importing problem with SQL Server

    - by tibin mathew
    Hi, I have a database working in my local SQL Server 2005 Express Edition. I have to import my local database into a remote server database. I established a connection to that remote server, and I can now see that database. But when I tried to restore the database from my local machine, I got an error message as soon as I gave the backup file location. Below is the error message:

        The EXECUTE permission was denied on the object 'xp_availablemedia', database 'mssqlsystemresource', schema 'sys'. The user does not have permission to perform this action. The statement has been terminated. (Microsoft SQL Server, Error: 229)

    What is the problem, and how can I solve it? Please help me.

    Read the article

  • Switch from Microsoft's STL to STLport

    - by Laserallan
    Hi! I'm using the STL quite heavily in performance-critical C++ code under Windows. One possible "cheap" way to get some extra performance would be to change to a faster STL implementation. According to this post, STLport is faster and uses less memory; however, the post is a few years old. Has anyone made this change recently, and what were your results?

    Read the article

  • Entity Framework Performance Problem

    - by Steve Horn
    I'm hoping that someone can help me understand how to overcome a performance problem I'm running into with the latest version of the Entity Framework. In my test, I created my model from a database consisting of around 80 tables. The problem is that the cost of the very first query I run on a thread is very high. If I run without pre-compiled views, the first query takes anywhere from 5800 to 6600 milliseconds. If I pre-compile the views (see this article), I can get the initial query cost down to about 2800 to 3200 milliseconds. 3 seconds for each request is still unacceptable for my needs. Subsequent queries are very fast. Can you please help me understand how to eliminate the poor performance of the initial query? I'm using the version of the Entity Framework that ships with Visual Studio 2010 RC.

    Read the article
