Search Results

Search found 40567 results on 1623 pages for 'database performance'.

Page 288 of 1623

  • Android insert into sqlite database

    - by Josh
    I know there is probably a simple thing I'm missing, but I've been beating my head against the wall for the past hour or two. I have a database for the Android application I'm currently working on (Android v1.6), and I just want to insert a single record into a database table. My code looks like the following:

      // Save information to my table
      sql = "INSERT INTO table1 (field1, field2, field3) " +
            "VALUES (" + field_one + ", " + field_two + ")";
      Log.v("Test Saving", sql);
      myDataBase.rawQuery(sql, null);

    The myDataBase variable is a SQLiteDatabase object that can select data fine from another table in the schema. The saving appears to work fine (no errors in LogCat), but when I copy the database from the device and open it in SQLite browser, the new record isn't there. I also tried running the query manually in SQLite browser, and that works fine. The table schema for table1 is _id, field1, field2, field3. Any help would be greatly appreciated. Thanks!

    Read the article
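
    In case it helps a later reader, a minimal sketch of the same write using the stock Android API: the INSERT above names three columns but supplies only two values, the string values are not quoted, and rawQuery() is meant for statements that return a Cursor, so writes normally go through insert() or execSQL(). The sketch assumes the field values are strings; column and variable names are taken from the question, everything else is illustrative.

      import android.content.ContentValues;
      import android.database.sqlite.SQLiteDatabase;
      import android.util.Log;

      public final class Table1Writer {

          // insert() escapes the values and actually executes the write, unlike
          // rawQuery(), which only prepares a statement for a Cursor to read.
          public static long insertRecord(SQLiteDatabase db, String fieldOne,
                                          String fieldTwo, String fieldThree) {
              ContentValues values = new ContentValues();
              values.put("field1", fieldOne);
              values.put("field2", fieldTwo);
              values.put("field3", fieldThree); // the original SQL lists three columns but binds only two values
              long rowId = db.insert("table1", null, values);
              Log.v("Test Saving", "inserted row " + rowId);
              return rowId;
          }
      }

    An alternative is to keep the hand-built SQL and run it with execSQL(sql) (or execSQL(sql, bindArgs) to avoid quoting issues) instead of rawQuery().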

  • CLR Function not working in restored database

    - by Ben Thul
    So I've restored a database with CLR assemblies in it from one physical server to another. One of the functions in the CLR assembly essentially uncompresses some compressed data. When I run this function against data in the restored database, it returns the compressed data rather than the uncompressed data. No error is thrown in SSMS or in the SQL Server error logs. At the suggestion of others, I've checked for differences in database ownership (both owned by sa) and trustworthiness (both set to not trustworthy). I also checked for differences in the .NET Framework installs on both machines, but found only that the target machine had a 1.1 version installed that the source didn't. I don't know what else to try. Any suggestions would be most appreciated. Thanks in advance, Ben

    Read the article

  • Switch statements: do you need the last break? (Javascript mainly)

    - by Jon Raasch
    When using a switch() statement, you add break; between separate case blocks. But what about the last one? Normally I just leave it off, but I'm wondering if this has some performance implication I'm not thinking about. I've been wondering about this for a while and don't see it asked elsewhere on Stack-O, but sorry if I missed it. I'm mainly asking this question regarding JavaScript, although I'm guessing the answer will apply to all switch() statements.

    Read the article
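
    For what it's worth: a trailing break on the final case is a no-op, so omitting it has no performance implication; its only value is guarding against accidental fall-through if another case is appended later. A tiny illustration, sketched in Java since the question notes the answer should apply to switch statements generally:

      public class LastBreak {
          public static void main(String[] args) {
              int code = 2;
              String label;
              switch (code) {
                  case 1:
                      label = "one";
                      break;
                  case 2:
                      label = "two";
                      break;
                  default:
                      label = "other";
                      // no break needed: nothing follows this group to fall into
              }
              System.out.println(label); // prints "two"
          }
      }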

  • Storing dates as UTC in database

    - by James
    I am storing date/times in the database as UTC and converting them back to local time in my application, based on the specific timezone. Say, for example, I have the following date/time: 01/01/2010 00:00. Say it is for a country, e.g. the UK, which observes DST (Daylight Saving Time), and at this particular time we are in daylight saving. When I convert this date to UTC and store it in the database, it is actually stored as 31/12/2009 23:00, since the date is adjusted -1 hour for DST. This works fine while you're observing DST. However, what happens when the clocks are adjusted back? When I pull that date from the database and convert it to local time, that particular datetime would be seen as 31/12/2009 23:00, when in reality it was processed as 01/01/2010 00:00. Correct me if I am wrong, but isn't this a bit of a flaw when storing times as UTC?

    Read the article
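
    One clarifying point, illustrated below with java.time (any library with full timezone rules behaves the same way): converting a stored UTC instant back to local time applies the offset that was in force at that instant, not the offset in force at the moment of conversion, so the original wall-clock time comes back intact even after the clocks have changed. The dates are illustrative.

      import java.time.Instant;
      import java.time.ZoneId;
      import java.time.ZonedDateTime;

      public class UtcRoundTrip {
          public static void main(String[] args) {
              ZoneId london = ZoneId.of("Europe/London");

              // A local time that falls inside British Summer Time (UTC+1).
              ZonedDateTime local = ZonedDateTime.of(2010, 7, 1, 0, 0, 0, 0, london);

              Instant storedAsUtc = local.toInstant();            // what goes into the database
              ZonedDateTime backToLocal = storedAsUtc.atZone(london);

              System.out.println(storedAsUtc);    // 2010-06-30T23:00:00Z
              System.out.println(backToLocal);    // 2010-07-01T00:00+01:00[Europe/London]
          }
      }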

  • Access Database Connection

    - by Gerory
    I am using a C#.NET application with an OleDbConnection to open an Access database in exclusive mode. What exactly happens in the back end when we open a database in exclusive mode? It does not increase the size of the database compared with shared-mode connectivity. Does it do a compact or repair when the exclusive-mode connection is closed? We are facing unexpected errors when opening a connection in exclusive mode, and they occur even though we have disposed of all connections properly. Please share your views. If it does a compact/repair when the connection closes, is there a way to wait before opening a new connection?

    Read the article

  • Fastest possible way to render 480 x 320 background as iPhone OpenGL ES textures

    - by unknownthreat
    I need to display a 480 x 320 background image in OpenGL ES. The thing is, I experienced a bit of a slowdown on the iPhone when I used a 512 x 512 texture size, so I am looking for the optimum way to render an iPhone-resolution background in OpenGL ES. How should I slice the background in this case to obtain the best possible performance? My main concern is speed. Should I go for 256 x 256 or other texture sizes here?

    Read the article

  • Efficient paging with large tables in sql 2008

    - by Kumar
    This is for tables with 1,000,000 rows and possibly many, many more. I haven't done any benchmarking myself, so I wanted to get the experts' opinion. I've looked at some articles on ROW_NUMBER(), but it seems to have performance implications. What are the other choices/alternatives?

    Read the article
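
    One alternative often suggested for very large tables is keyset ("seek") paging: instead of numbering every row and discarding the ones before the requested page, remember the key of the last row shown and ask only for the rows after it, which an index on that key can satisfy directly. A rough JDBC sketch against SQL Server with hypothetical table and column names; the trade-off is that it pages forward by key rather than jumping to an arbitrary page number.

      import java.sql.Connection;
      import java.sql.PreparedStatement;
      import java.sql.ResultSet;
      import java.sql.SQLException;

      public class KeysetPaging {

          // Fetch the page that follows the last key the client has seen.
          // Assumes big_table(id) is indexed; names are hypothetical.
          static void printNextPage(Connection conn, long lastSeenId, int pageSize) throws SQLException {
              String sql = "SELECT TOP (" + pageSize + ") id, name"
                         + " FROM big_table WHERE id > ? ORDER BY id";
              try (PreparedStatement ps = conn.prepareStatement(sql)) {
                  ps.setLong(1, lastSeenId);
                  try (ResultSet rs = ps.executeQuery()) {
                      while (rs.next()) {
                          System.out.println(rs.getLong("id") + "\t" + rs.getString("name"));
                      }
                  }
              }
          }
      }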

  • NETBEANS JAVADB - How do I integrate a JavaDB DataBase into my main Java Package

    - by Stefanos Kargas
    I am working on a desktop application which uses JavaDB. I am using NetBeans 6.8 and JDK 6 Update 20. I created the database I need and connected to it through my application using ClientDriver:

      String driver = "org.apache.derby.jdbc.ClientDriver";
      String connectionURL = "jdbc:derby://localhost:1527/myDB;create=true;user=user;password=pass";
      try {
          Class.forName(driver);
      } catch (java.lang.ClassNotFoundException e) {
          e.printStackTrace();
      }
      try {
          schedoDBConnection = DriverManager.getConnection(connectionURL);
      } catch (Exception e) {
          e.printStackTrace();
      }

    This works fine, but in this setup the database service comes from NetBeans. If I move my application to another PC, I won't be able to access my database. How can I integrate my JavaDB database into my application?

    Read the article
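
    A sketch of the usual fix: run Derby in embedded mode, so the database engine ships inside the application instead of depending on the network server that NetBeans starts. Put derby.jar on the classpath instead of derbyclient.jar (Sun's JDK 6 also bundles it under the JDK's db/lib directory), and only the driver and URL change; the database then lives in a directory created next to the application.

      import java.sql.Connection;
      import java.sql.DriverManager;

      public class EmbeddedDerbyExample {
          public static void main(String[] args) throws Exception {
              // Older Derby releases need the explicit driver load; newer ones auto-register.
              Class.forName("org.apache.derby.jdbc.EmbeddedDriver");

              // Embedded mode: no host or port. "myDB" is a directory created relative to
              // the working directory (or under derby.system.home if that property is set).
              String connectionURL = "jdbc:derby:myDB;create=true;user=user;password=pass";
              Connection conn = DriverManager.getConnection(connectionURL);
              try {
                  System.out.println("Connected: " + !conn.isClosed());
              } finally {
                  conn.close();
              }
          }
      }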

  • Difference in linq-to-sql query performance using GenericRespositry

    - by Neil
    Given I have a class like this in my data layer:

      public class GenericRepository<TEntity> where TEntity : class
      {
          [System.ComponentModel.DataObjectMethod(System.ComponentModel.DataObjectMethodType.Select)]
          public IQueryable<TEntity> SelectAll()
          {
              return DataContext.GetTable<TEntity>();
          }
      }

    I would be able to query a table in my database like this from a higher layer:

      using (GenericRepository<MyTable> mytable = new GenericRepository<MyTable>())
      {
          var myresult = from m in mytable.SelectAll()
                         where m.IsActive
                         select m;
      }

    Is this considerably slower than using the usual code in my data layer?

      using (MyDataContext ctx = new MyDataContext())
      {
          var myresult = from m in ctx.MyTable
                         where m.IsActive
                         select m;
      }

    Eliminating the need to write simple single-table selects in the data layer saves a lot of time, but will I regret it?

    Read the article

  • Html combo box to database record Id

    - by LanguaFlash
    I'm fairly sure there has to be a simple solution to my problem, but I am a new web developer and can't quite figure it out. On my page I have a combo box whose values are filled from my database. When the user submits the form, how do I go about converting those values back to the record numbers in the database? Up to now I have been doing a sort of reverse lookup in my database to try to get the record's ID. This has quite a few obvious flaws, and I am sure there has to be a better way. I am used to MS Forms combo boxes, where the record data and ID are never separated; but in the case of a web form, I have no way to do multiple columns in the combo box like I am used to. Thanks! Jeff

    Read the article
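
    The usual pattern, regardless of server language: put the record's ID in each option's value attribute and the display text in the option body; the submitted field then already contains the ID, so no reverse lookup is needed. Since the question doesn't name a stack, here is a rough Java servlet sketch of that shape (names are hypothetical, the record list stands in for a database query, and real code should HTML-escape the display text):

      import java.io.IOException;
      import java.io.PrintWriter;
      import java.util.LinkedHashMap;
      import java.util.Map;
      import javax.servlet.http.HttpServlet;
      import javax.servlet.http.HttpServletRequest;
      import javax.servlet.http.HttpServletResponse;

      public class ComboBoxServlet extends HttpServlet {

          // Render the <select>: each option's VALUE carries the record id, the option text is what the user sees.
          @Override
          protected void doGet(HttpServletRequest req, HttpServletResponse resp) throws IOException {
              Map<Long, String> records = loadRecords(); // id -> display name, from the database in real code
              resp.setContentType("text/html");
              PrintWriter out = resp.getWriter();
              out.println("<form method='post'><select name='recordId'>");
              for (Map.Entry<Long, String> r : records.entrySet()) {
                  out.printf("<option value='%d'>%s</option>%n", r.getKey(), r.getValue());
              }
              out.println("</select><input type='submit' value='Save'></form>");
          }

          // On submit, the posted field already holds the record id; no reverse lookup needed.
          @Override
          protected void doPost(HttpServletRequest req, HttpServletResponse resp) throws IOException {
              long recordId = Long.parseLong(req.getParameter("recordId"));
              resp.getWriter().println("Selected record id: " + recordId);
          }

          // Placeholder for the database query that fills the combo box.
          private Map<Long, String> loadRecords() {
              Map<Long, String> m = new LinkedHashMap<Long, String>();
              m.put(1L, "First item");
              m.put(2L, "Second item");
              return m;
          }
      }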

  • Migrating from hand-written persistence layer to ORM

    - by Sergey Mikhanov
    Hi community,
    We are currently evaluating options for migrating from a hand-written persistence layer to an ORM. We have a bunch of legacy persistent objects (~200) that implement a simple interface like this:

      interface JDBC {
          public long getId();
          public void setId(long id);
          public void retrieve();
          public void setDataSource(DataSource ds);
      }

    When retrieve() is called, the object populates itself by issuing hand-written SQL queries to the connection provided, using the ID it received in the setter (this is usually the only parameter to the query). It manages its statements, result sets, etc. itself. Some of the objects have special flavors of the retrieve() method, like retrieveByName(); in that case a different SQL statement is issued. Queries can be quite complex: we often join several tables to populate the sets representing relations to other objects, and sometimes join queries are issued on demand in a specific getter (lazy loading). So basically, we have implemented most of an ORM's functionality manually. The reason for that was performance. We have very strong requirements for speed, and back in 2005 (when this code was written) performance tests showed that none of the mainstream ORMs was as fast as hand-written SQL. The problems we are facing now that make us think of an ORM are:
    - Most of the paths in this code are well-tested and stable. However, some rarely-used code is prone to result set and connection leaks that are very hard to detect.
    - We are currently squeezing out some additional performance by adding caching to our persistence layer, and it's a huge pain to maintain the cached objects manually in this setup.
    - Supporting this code when the DB schema changes is a big problem.
    I am looking for advice on what could be the best alternative for us. As far as I know, ORMs have advanced in the last 5 years, so it might be that there is now one that offers acceptable performance. As I see this issue, we need to address these points:
    - Find some way to reuse at least some of the written SQL to express mappings.
    - Have the possibility to issue native SQL queries without the need to manually decompose their results (i.e. avoid manual rs.getInt(42), as such code is very sensitive to schema changes).
    - Add a non-intrusive caching layer.
    - Keep the performance figures.
    Is there any ORM framework you could recommend with regard to that?

    Read the article
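
    On the first two points (reusing the existing SQL and avoiding manual result-set decomposition), one pattern worth evaluating is JPA/Hibernate native-query mapping: the hand-written SQL stays exactly as it is, and the provider maps the selected columns onto an entity. A minimal sketch with hypothetical entity and table names; Hibernate's second-level cache is the usual non-intrusive answer to the caching point, and plain mapped entities cover the simple retrieve()-by-id cases.

      import java.util.List;
      import javax.persistence.Entity;
      import javax.persistence.EntityManager;
      import javax.persistence.Id;

      @Entity
      public class Account {
          @Id
          private long id;
          private String name;
          // getters and setters omitted for brevity
      }

      class AccountDao {
          private final EntityManager em;

          AccountDao(EntityManager em) {
              this.em = em;
          }

          // The existing hand-written SQL is reused verbatim; the provider maps the
          // result columns onto Account fields, so there is no manual rs.getXxx(n).
          @SuppressWarnings("unchecked")
          List<Account> findByName(String name) {
              return em.createNativeQuery(
                      "SELECT a.id, a.name FROM accounts a WHERE a.name = ?1", Account.class)
                      .setParameter(1, name)
                      .getResultList();
          }
      }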

  • Better way to do SELECT with GROUP BY

    - by Luca Romagnoli
    Hi, I've written a query that works:

      SELECT `comments`.*
      FROM `comments`
      RIGHT JOIN (SELECT MAX(id) AS id, core_id, topic_id
                  FROM comments
                  GROUP BY core_id, topic_id
                  ORDER BY id DESC) comm
        ON comm.id = comments.id
      LIMIT 10

    I want to know whether it is possible (and how) to rewrite it to get better performance. Thanks

    Read the article

  • python fdb save huge data from database to file

    - by peter
    I have this script:

      SELECT = """
          select
              coalesce (p.ID,'') as id,
              coalesce (p.name,'') as name
          from TABLE as p
          """
      self.cur.execute(SELECT)
      for row in self.cur.itermap():
          xml += " <item>\n"
          xml += "  <id>" + id + "</id>\n"
          xml += "  <name>" + name + "</name>\n"
          xml += " </item>\n\n"
      # save xml to file here
      f = open...

    I need to save data from a huge database to a file. There are tens of thousands of items in my database (up to 40,000) and the script takes a very long time (an hour or more) to finish. How can I take the data I need from the database and save it to a file "at once", as quickly as possible? (I don't need XML output, because I can process the data from the output on my server later. I just need it done as quickly as possible. Any ideas?) Many thanks!

    Read the article
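
    Whatever the language, the usual cure is to stream: fetch rows in batches and write each one through a buffered writer as it arrives, instead of accumulating everything in one ever-growing string and writing it at the end. The shape is sketched below in Java/JDBC purely for illustration (URL, table and column names are placeholders); with the fdb cursor in the question, the same loop simply writes each row straight to an already-open file.

      import java.io.BufferedWriter;
      import java.io.FileWriter;
      import java.sql.Connection;
      import java.sql.DriverManager;
      import java.sql.ResultSet;
      import java.sql.Statement;

      public class StreamRowsToFile {
          public static void main(String[] args) throws Exception {
              try (Connection conn = DriverManager.getConnection("jdbc:yourdb://host/db", "user", "pass");
                   Statement st = conn.createStatement();
                   BufferedWriter out = new BufferedWriter(new FileWriter("items.txt"))) {
                  st.setFetchSize(1000); // ask the driver to stream rows rather than load them all
                  try (ResultSet rs = st.executeQuery("SELECT id, name FROM some_table")) {
                      while (rs.next()) {
                          // Write each row immediately; nothing accumulates in memory.
                          out.write(rs.getString("id") + "\t" + rs.getString("name"));
                          out.newLine();
                      }
                  }
              }
          }
      }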

  • Vb.exe performance time

    - by vinodacharyabva
    Hi, I am running a VB .exe through automation. In the exe I have written code which takes data from a database and saves that data into a file. I ran that .exe for the first time and it took 1 minute. To establish a baseline for testing, I called the same .exe 5 times, one after the other, but it took nearly 10 minutes to finish. My question is: if it takes 1 minute to generate 1 report, it should take 5 minutes to generate 5 reports, so why is it taking 10 minutes (more than double that)? Is there any problem with calling an exe repeatedly, one after the other?

    Read the article

  • Do you use Scimore SQL database ?

    - by Darian Miller
    There's a database engine that looks amazing for a free tool: Scimore. Have you had much experience with it? If so, how does it rate, particularly against Firebird? How resilient/self-reliant is it (meaning, how much downtime/maintenance is expected)? The scale-out capabilities also look very interesting. I just downloaded it and have been playing around, and so far it looks good. I had been looking for an easy-to-deploy, single-user embedded database (for which Scimore has an option) and was toying with MS SQL Compact Edition and SQLite, and I remembered this database from a trial a few years ago (Windows platform). I was about ready to settle on SQLite but started thinking about other projects which are multi-user, and I wanted to stick with a single solution... which is why I started looking at Firebird as well.

    Read the article

  • any faster alternative??

    - by kaushik

      cost = 0
      for i in range(12):
          cost = cost + math.pow(float(float(q[i]) - float(w[i])), 2)
      cost = math.sqrt(cost)

    Any faster alternative to this? I need to improve my entire code, so I'm trying to improve the performance of each statement. Thank you.

    Read the article

  • Rails: has_many association with a table in another database and without foreign key

    - by Fernando
    Here is my situation. I have a model called Account. An account can have one or more contracts. The problem is that I'm dealing with a legacy application, and each account's contracts are stored in a different database. For example: Account 1's contracts are in account1_db.contracts, and Account 2's contracts are in account2_db.contracts. The database name is a field stored in the accounts table. How can I make a Rails association work with this? This is a legacy PHP application and I simply can't change it to store everything in one table. I need to make it work somehow. I tried this, but it didn't work:

      has_many :contracts, :conditions => [lambda{ Contract.set_table_name(self.database + '.contracts'); return '1' }]

    Any ideas?

    Read the article

  • Display the newest result from my database

    - by nogggin1
    I'm building a webpage that displays the newest result from a database as a news article. The columns in the table are title, bodytext and created, although I wish to keep created hidden. I am quite new to PHP and don't have any idea how to do this. Could I please get some help just to display it as title, then bodytext? I need to connect to the database with my details and then display the result in a div I have set up, but I only want to show the newest result. Thank you. Ned Perkins

    Read the article
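
    The selection itself is a single query: ORDER BY created DESC LIMIT 1 returns only the most recent row. The question mentions PHP; the sketch below runs the same query from Java/JDBC purely to illustrate it, with a hypothetical table name (news) and no HTML escaping. The PHP version would execute the identical SQL and print title and bodytext inside the div.

      import java.sql.Connection;
      import java.sql.DriverManager;
      import java.sql.ResultSet;
      import java.sql.Statement;

      public class NewestArticle {
          public static void main(String[] args) throws Exception {
              // Hypothetical MySQL connection details and table name.
              try (Connection conn = DriverManager.getConnection(
                           "jdbc:mysql://localhost/mysite", "user", "password");
                   Statement st = conn.createStatement();
                   // "created" drives the ordering but is never displayed.
                   ResultSet rs = st.executeQuery(
                           "SELECT title, bodytext FROM news ORDER BY created DESC LIMIT 1")) {
                  if (rs.next()) {
                      System.out.println("<div class=\"news\">");
                      System.out.println("  <h2>" + rs.getString("title") + "</h2>");
                      System.out.println("  <p>" + rs.getString("bodytext") + "</p>");
                      System.out.println("</div>");
                  }
              }
          }
      }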

  • is there any way to know if your supposedly fully dedicated server is really a virtually resource-shared one?

    - by siran
    Hi, sometimes I feel my server is not responding as smoothly as I would expect (I have an Intel(R) Xeon(TM) 2.80GHz quad-core CPU), given that, for example, the 'top' command reports a low load (< 0.5) and the CPUs are almost completely idle. I may have internet connectivity issues, so I don't really know if it's me or the server itself. Is there any kind of benchmarking script (or something analogous) I could run to see the actual performance of the server?

    Read the article

  • migrate database from sybase to mysql

    - by jindalsyogesh
    I have been trying to migrate a database from Sybase to MySQL. This is my approach:
    - Generate POJO classes from my Sybase database using Hibernate in Eclipse.
    - Use these POJO classes to generate the schema in the MySQL database.
    - Then somehow migrate the data from Sybase to MySQL.
    I guess this approach should work? Please let me know if there is a better or easier approach. The thing is, I am not even able to get the first step done. I added the Hibernate plugin in Eclipse from this link: http://download.jboss.org/jbosstools/updates/stable/ I added the Sybase jar file to my project classpath, then I added a Hibernate console configuration file, then a Hibernate configuration file, and then a Hibernate code generation configuration. When I try to run the code generation configuration, I get a java.lang.NullPointerException and I have no idea how to fix it. I searched a lot of forums and tried to Google it, but I am not able to find any solution. Can anybody tell me what mistake I am making here, or point me to a Hibernate tutorial for Eclipse?

    Read the article

  • Postgresql count+sort performance

    - by invictus
    I have built a small inventory system using PostgreSQL and psycopg2. Everything works great, except that when I want to create aggregated summaries/reports of the content, I get really bad performance due to count()'ing and sorting. The DB schema is as follows:

      CREATE TABLE hosts (
          id SERIAL PRIMARY KEY,
          name VARCHAR(255)
      );
      CREATE TABLE items (
          id SERIAL PRIMARY KEY,
          description TEXT
      );
      CREATE TABLE host_item (
          id SERIAL PRIMARY KEY,
          host INTEGER REFERENCES hosts(id) ON DELETE CASCADE ON UPDATE CASCADE,
          item INTEGER REFERENCES items(id) ON DELETE CASCADE ON UPDATE CASCADE
      );

    There are some other fields as well, but those are not relevant. I want to extract 2 different reports:
    - a list of all hosts with the number of items per host, ordered from highest to lowest count;
    - a list of all items with the number of hosts per item, ordered from highest to lowest count.
    I have used 2 queries for the purpose. Items with host count:

      SELECT i.id, i.description, COUNT(hi.id) AS count
      FROM items AS i
      LEFT JOIN host_item AS hi ON (i.id = hi.item)
      GROUP BY i.id
      ORDER BY count DESC
      LIMIT 10;

    Hosts with item count:

      SELECT h.id, h.name, COUNT(hi.id) AS count
      FROM hosts AS h
      LEFT JOIN host_item AS hi ON (h.id = hi.host)
      GROUP BY h.id
      ORDER BY count DESC
      LIMIT 10;

    The problem is that the queries run for 5-6 seconds before returning any data. As this is a web-based application, 6 seconds is just not acceptable. The database is heavily populated, with approximately 50k hosts, 1,000 items and 400,000 host/item relations, and it will likely grow significantly when (or perhaps if) the application is used. After playing around, I found that by removing the "ORDER BY count DESC" part, both queries execute instantly, without any delay whatsoever (less than 20 ms to finish). Is there any way I can optimize these queries so that I can get the result sorted without the delay? I have tried different indexes, but since the count is computed, it isn't possible to use an index for it. I have read that count()'ing in PostgreSQL is slow, but it's the sorting that is causing me problems... My current workaround is to run the queries above as an hourly job, putting the result into a new table with an index on the count column for quick lookups. I use PostgreSQL 9.2.

    Read the article

  • PHP eval() code in between <?php ?> from database

    - by kr1zmo
    Some of you may be annoyed with this question and claim it's unsafe, blah blah. I want to be able to put PHP into the database and run it. I have to do this because I store page layouts in the database and each is different from the others; however, in some cases I want to use dynamic content for some of the pages. Assume $query_from_db is the string returned from the database. PHP should only eval() the code in between <?php and ?>.

      $query_from_db = '<div>
      <?php
      //php to run
      function dosomething() {
          //bleh
      }
      ?>
      </div>
      ';

      <?php echo eval($query_from_db); ?>

    Read the article

  • Compiling .xsl files into .class files

    - by Alex Ciminian
    I'm currently working on a Java web project (Spring) which involves heavy use of XSL transformations. The stylesheets seldom change, so they are currently cached. I was thinking of improving performance by compiling the XSL files into class files so they wouldn't have to be interpreted on each request. I'm new to Java, so I don't really know the ecosystem that well. What's the best way of doing this (libraries, methods, etc.)? Thanks, Alex

    Read the article
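
    A common middle ground, sketched below: JAXP's Templates object. TransformerFactory.newTemplates() compiles the stylesheet once into a thread-safe, reusable object (with the JDK's default XSLTC-based factory that is an in-memory translet, i.e. compiled bytecode), and each request only creates a cheap Transformer from it. File and class names here are hypothetical.

      import java.io.StringReader;
      import java.io.StringWriter;
      import javax.xml.transform.Templates;
      import javax.xml.transform.Transformer;
      import javax.xml.transform.TransformerFactory;
      import javax.xml.transform.stream.StreamResult;
      import javax.xml.transform.stream.StreamSource;

      public class CompiledStylesheet {

          // Compiled once (e.g. at application startup) and safely shared between threads.
          private static final Templates TEMPLATES = compile("stylesheet.xsl");

          private static Templates compile(String path) {
              try {
                  // With an XSLTC-based factory this compiles the stylesheet rather than
                  // re-interpreting it on every transformation.
                  return TransformerFactory.newInstance().newTemplates(new StreamSource(path));
              } catch (Exception e) {
                  throw new IllegalStateException("Could not compile stylesheet " + path, e);
              }
          }

          // Per request: creating a Transformer from a Templates object is cheap.
          public static String transform(String xml) throws Exception {
              Transformer transformer = TEMPLATES.newTransformer();
              StringWriter out = new StringWriter();
              transformer.transform(new StreamSource(new StringReader(xml)), new StreamResult(out));
              return out.toString();
          }
      }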

  • Improve performance writing 10 million records to text file using windows service

    - by user1039583
    I'm fetching more than 10 million records from the database and writing them to a text file. It takes hours to complete this operation. Is there any option to use TPL features here? It would be great if someone could get me started on implementing this with the TPL.

      using (FileStream fStream = new FileStream("d:\\file.txt", FileMode.OpenOrCreate, FileAccess.ReadWrite))
      {
          BufferedStream bStream = new BufferedStream(fStream);
          TextWriter writer = new StreamWriter(bStream);
          for (int i = 0; i < 100000000; i++)
          {
              writer.WriteLine(i);
          }
          bStream.Flush();
          writer.Flush(); // empty buffer
          fStream.Flush();
      }

    Read the article
