Search Results

Search found 30279 results on 1212 pages for 'database drift'.

Page 564/1212

  • Adding a row to an existing datatable in JSF

    - by shyamb
    Hi, I have a requirement to change an existing JSF 1.1 project: I need to add an additional row to a datatable on the click of a button. Currently the datatable loads 3 rows from the backing bean, and the new button should add a further row to the datatable on each click. Using the suggestion provided by http://balusc.blogspot.com/2006/06/using-datatables.html I was able to display the additional row in the UI, but I could not save the new data back to the database because the backing bean is in request scope, and I cannot change the scope of this bean as it would create other issues. Can somebody suggest a way to display the new row and also save the data back to the database while the backing bean remains in request scope? Thanks, Shyam

    Read the article

  • Rails CSV import, adding to a related table

    - by Jack
    Hi, I have a CSV importing system in my app (used locally only) which parses the CSV file line by line and adds the data to the database table. This is based on a tutorial here.

        require 'csv'

        def csv_import
          @parsed_file = CSV::Reader.parse(params[:dump][:file])
          n = 0
          @parsed_file.each_with_index do |row, i|
            next if i == 0 # ignore the first row
            course = Course.new
            course.title       = row[0]
            course.unit_code   = row[1]
            course.course_type = row[2]
            course.value       = row[3]
            course.pass_mark   = row[4]
            if course.save
              n = n + 1
              GC.start if n % 50 == 0
            end
            flash.now[:message] = "CSV Import Successful, #{n} new courses added to the database."
          end
          redirect_to(courses_url)
        end

    This is all in the courses controller and works fine. There is a relationship where courses HABTM years and years HABTM courses. In the CSV file (effectively in row[5] to row[8]) are the year_ids. Is there a way I can add these within the method above? I am confused about how to loop over the 4 items and add them to the courses_years table. Thank you, Jack

    Read the article

  • How to get nearby POIs

    - by balexandre
    I have a database with points of interest (POIs) that all have an address. I want to know what method/name/call to use to get all POIs near a given position. I understand that I at least need to convert all my addresses to LAT/LON coordinates, but my question is: for a given LAT/LON, how do I find out from the database/array which POIs are nearby, ordered by distance? For example:

        You are here: 0,0. The nearest POIs in a 2 km radius are:
            POI A (at 1.1 km)
            POI C (at 1.3 km)
            POI F (at 1.9 km)

    I have no idea what I should look into to get what I want. Any help is greatly appreciated. Thank you
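    The usual technique is a great-circle (haversine) distance computed from the LAT/LON pairs. A minimal sketch in MySQL-flavoured SQL, assuming a hypothetical table poi(id, name, lat, lon) with coordinates in degrees and @lat/@lon holding the search position:

        SELECT id, name,
               2 * 6371 * ASIN(SQRT(                          -- 6371 km = Earth radius
                     POWER(SIN(RADIANS(lat - @lat) / 2), 2)
                   + COS(RADIANS(@lat)) * COS(RADIANS(lat))
                   * POWER(SIN(RADIANS(lon - @lon) / 2), 2)
               )) AS distance_km
        FROM poi
        HAVING distance_km <= 2
        ORDER BY distance_km;

    MySQL lets HAVING refer to the select alias here; on other databases the expression would need to be repeated in a WHERE clause or wrapped in a subquery.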

    Read the article

  • node.js with SQL Server Native Client 11 scope_identity not being returned

    - by binderbound
    I'm having trouble inserting a value into a database through node.js. Here is the offending code:

        sql.query(conn_str,
          "INSERT INTO Login(email, hash, salt, firstName, lastName) VALUES(?, ?, ?, ?, ?); SELECT SCOPE_IDENTITY() AS 'Identity';",
          [email, hash, salt, firstName, lastName],
          function(err, results) {
            console.log(results);
          });

    Unfortunately, the console just echoes [], meaning results is an empty array, I suppose. Does anyone know why the identity is not being returned? Even if it were null, why isn't results then [{ Identity: null }]? The database is on Azure, which does have a SCOPE_IDENTITY function, and the native client also recognises this function. I am using the node package "msnodesql". Please help.
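    Two T-SQL variants that are commonly suggested for this kind of problem, as a hedged sketch only: whether the driver surfaces the extra result set depends on msnodesql, and the identity column name LoginID below is assumed, not taken from the question.

        -- Variant 1: suppress the INSERT's row-count result so the SELECT is the only result set
        SET NOCOUNT ON;
        INSERT INTO Login (email, hash, salt, firstName, lastName) VALUES (?, ?, ?, ?, ?);
        SELECT SCOPE_IDENTITY() AS [Identity];

        -- Variant 2: have the INSERT itself return the new key via an OUTPUT clause
        INSERT INTO Login (email, hash, salt, firstName, lastName)
        OUTPUT INSERTED.LoginID AS [Identity]
        VALUES (?, ?, ?, ?, ?);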

    Read the article

  • Android v1.5 w/ browser data storage

    - by Sirber
    I'm trying to build an offline web application which can sync online when the network is available. I tried jQuery jStore, but its test page stops at "testing..." without a result. Then I tried Google Gears, which is supposed to work on the phone, but Gears is not found:

        if (window.google && google.gears) {
          google.gears.factory.getPermission();
          // Database
          var db = google.gears.factory.create('beta.database');
          db.open('cominar-compteurs');
          db.execute('create table if not exists Lectures' +
            ' (ID_COMPTEUR int, DATE_HEURE timestamp, kWh float, Wmax float, VAmax float, Wcum float, VAcum float);');
        } else {
          alert('Google Gears non trouvé.');
        }

    The code does work in Google Chrome v5.

    Read the article

  • How to transfer SQLite db to web server on android phone

    - by Shane
    Hi, I have an application that creates an SQLite database and saves information to it over the course of a day. At the end of the day I want to export this database to a web server. Could anyone point me in the right direction for this? Should I use HTTP POST or PUT? I have researched this myself online, but there seem to be so many different ways to explore. The server side does not exist yet either; I have access to an Apache server, so I am hoping to use that. Could anyone advise me of the best/most simple way to do this? Thanks

    Read the article

  • Error in MySQL Workbench Forward Engineer Stored Procedures

    - by colithium
    I am using MySQL Workbench (5.1.18 OSS rev 4456) to forward engineer a SQL CREATE script. For every stored procedure, the automatic process outputs something like:

        DELIMITER //
        USE DB_Name//
        DB_Name//
        DROP procedure IF EXISTS `DB_Name`.`SP_Name` //
        USE DB_Name//
        DB_Name//
        CREATE PROCEDURE `DB_Name`.`SP_Name` (id INT)
        BEGIN
          SELECT * FROM Table_Name WHERE Id = id;
        END//

    The two lines that are simply the database name followed by the delimiter are errors and are reported as such when running the script. As long as they are ignored, it looks like everything gets created just fine. But why would it add those lines? I am creating the database in the WAMP environment, which uses MySQL 5.1.36.

    Read the article

  • MySQL - how to update the "domain.com" in "username@domain.com"

    - by w00t
    Hi there, in my database I have a lot of users who've misspelled their e-mail address. This in turn causes my Postfix to bounce a lot of mails when sending the newsletter. Forms include (but are not limited to) "yaho.com", "yahho.com", etc. Very annoying! So I have been trying to update those records to the correct value. After executing

        select email from users where email like '%@yaho%' and email not like '%yahoo%';

    and getting the list, I'm stuck because I do not know how to update only the "yaho" part; I need the username to be left intact. So I thought I would just dump the database and use vim to replace, but I cannot escape the @ symbol. By the way, how do I select all email addresses written in CAPS? select upper(email) from users; would just transform everything into CAPS, whereas I just need to find the mails that are already written in CAPS.
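    A sketch of how this can be done within MySQL itself, reusing the users/email names from the question (worth verifying on a backup first, and extending the WHERE list to cover the other misspellings):

        -- Preview the rows that would change
        SELECT email FROM users WHERE email LIKE '%@yaho.com' OR email LIKE '%@yahho.com';

        -- Keep everything before the '@' and swap in the correct domain
        UPDATE users
        SET email = CONCAT(SUBSTRING_INDEX(email, '@', 1), '@yahoo.com')
        WHERE email LIKE '%@yaho.com' OR email LIKE '%@yahho.com';

        -- Find addresses that are already stored entirely in upper case
        -- (BINARY forces a case-sensitive comparison despite the usual collation)
        SELECT email FROM users WHERE email = BINARY UPPER(email);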

    Read the article

  • WAMP server not working, or bad PHP code?

    - by lclaud
    I have this PHP code:

        <?php
        $username = "root";
        $password = "******"; // censored out
        $database = "bazadedate";

        mysql_connect("127.0.0.1", $username, $password); // I get "unknown constant localhost" if used instead of the loopback IP
        @mysql_select_db($database) or die("Unable to select database");

        $query  = "SELECT * FROM backup";
        $result = mysql_query($query);
        $num    = mysql_numrows($result);

        $i = 0;
        $raspuns = "";
        while ($i < $num) {
            $data = mysql_result($result, $i, "data");
            $suma = mysql_result($result, $i, "suma");
            $cv   = mysql_result($result, $i, "cv");
            $det  = mysql_result($result, $i, "detaliu");
            $raspuns = $raspuns . "#" . $data . "#" . $suma . "#" . $cv . "#" . $det . "@";
            $i++;
        }
        echo "<b> $raspuns </b>";
        mysql_close();
        ?>

    It should return a single string containing all the data from the table, but instead the browser reports "connection reset when loading page". The log is:

        [Tue Jun 15 16:20:31 2010] [notice] Parent: child process exited with status 255 -- Restarting.
        [Tue Jun 15 16:20:31 2010] [notice] Apache/2.2.11 (Win32) PHP/5.3.0 configured -- resuming normal operations
        [Tue Jun 15 16:20:31 2010] [notice] Server built: Dec 10 2008 00:10:06
        [Tue Jun 15 16:20:31 2010] [notice] Parent: Created child process 2336
        [Tue Jun 15 16:20:31 2010] [notice] Child 2336: Child process is running
        [Tue Jun 15 16:20:31 2010] [notice] Child 2336: Acquired the start mutex.
        [Tue Jun 15 16:20:31 2010] [notice] Child 2336: Starting 64 worker threads.
        [Tue Jun 15 16:20:31 2010] [notice] Child 2336: Starting thread to listen on port 80.
        [Tue Jun 15 16:20:35 2010] [notice] Parent: child process exited with status 255 -- Restarting.
        [Tue Jun 15 16:20:35 2010] [notice] Apache/2.2.11 (Win32) PHP/5.3.0 configured -- resuming normal operations
        [Tue Jun 15 16:20:35 2010] [notice] Server built: Dec 10 2008 00:10:06
        [Tue Jun 15 16:20:35 2010] [notice] Parent: Created child process 1928
        [Tue Jun 15 16:20:35 2010] [notice] Child 1928: Child process is running
        [Tue Jun 15 16:20:35 2010] [notice] Child 1928: Acquired the start mutex.
        [Tue Jun 15 16:20:35 2010] [notice] Child 1928: Starting 64 worker threads.
        [Tue Jun 15 16:20:35 2010] [notice] Child 1928: Starting thread to listen on port 80.

    Any idea why it outputs nothing?

    Read the article

  • NHibernate - is property lazy loading possible?

    - by Ben
    I've got some binary data that I store, and I was going to separate this out into a separate table so it could be lazy loaded. However, I then came across this post by Ayende (http://ayende.com/Blog/archive/2010/01/27/nhibernate-new-feature-lazy-properties.aspx) which suggests that property lazy loading is now possible. I have added the lazy="true" attribute to my property mapping, but the field is still loaded from the database (I am using a simple text field to test). My query:

        return _session.CreateQuery("from Product")
            .SetMaxResults(1)
            .UniqueResult<Product>();

    Mapping:

        <property name="Description" type="string" column="FullDescription" lazy="true"/>

    Has anyone been able to get this working? Personally I prefer this approach to having to add another table to my database.

    Read the article

  • What SQL is being sent from a SqlCommand object

    - by Justin808
    I have a SqlCommand object on my C#-based ASP.NET page. The SQL and the passed parameters work the majority of the time. I have one case that is not working; I get the following error:

        String or binary data would be truncated. The statement has been terminated.

    I understand the error, but all the columns in the database should be long enough to hold everything being sent. My question: is there a way to see the actual SQL being sent to the database from the SqlCommand object? I would like to be able to email the SQL when an error occurs. Thanks, Justin

    Read the article

  • How to "defragment" MongoDB index effectively in production?

    - by dfrankow
    I've been looking at MongoDB. Feels good. I added some indexes to a collection, uploaded a bunch of data, then removed all the data, and I noticed the indexes did not change size, similar to the behavior reported here. If I call db.repairDatabase(), the indexes are then squashed to near zero. Similarly, if I don't remove all the data but call repairDatabase(), the indexes are squashed somewhat (perhaps because unused extents are truncated?). I am getting the index size from "totalIndexSize" in db.collection.stats(). However, repairDatabase() takes a long time (I've read it could be hours on a large database), and it's unclear to me how available the database is for reads or writes while it is running; I am guessing not very available. Since I want to run as few instances of mongod as possible, I want to understand more about how indexes are managed after deletes. Can anyone point me to anything or give any advice?

    Read the article

  • Restarting service from a client computer without rights

    - by Jason
    I have already created a program to restart a SQL database service, but it only works if the client has the rights. This will be done over a local network from a client computer when they can't get hold of a person who has the password on the phone. Any thoughts? I'm currently using the ServiceController to start and stop the database service. When I don't have the rights I get an "access denied" error, or "This operation might require other privileges." I'm not sure impersonation would work, since I don't have the user ID and password.

    Read the article

  • What are the Pros & Cons of using SQL Azure for existing apps on dedicated servers

    - by Mark Redman
    We currently own our own servers and rent a rack in a datacentre. Looking at the pricing, scalability and SLAs for SQL Azure, I am thinking it might be viable to use SQL Azure only for the database, but continue to run our existing applications on our own servers in the datacentre. This would let us stop worrying about the database and its infrastructure, so we can concentrate on building an application server farm with disk storage for files etc. Our application is quite big, has various Windows services, and parts of it use unmanaged libraries that may not be feasible in the cloud, so we probably couldn't have everything in the Azure cloud. The pros: reduced total cost of ownership (no database servers, no SQL Server licenses). The cons: I guess there would be overhead in the transfer of data between the Azure cloud and our datacentre (i.e. the cloud may be in the US and the datacentre is in the UK), but would that overhead be acceptable?

    Read the article

  • Optimising speeds in HDF5 using Pytables

    - by Sree Aurovindh
    The problem concerns the write speed of the computers (10 x 32-bit machines) and PostgreSQL query performance. I will explain the scenario in detail. I have about 80 GB of data (with appropriate database indexes in place). I am trying to read it from the PostgreSQL database and write it into HDF5 using PyTables. I have 1 table and 5 variable arrays in one HDF5 file. The HDF5 implementation is not multithreaded or enabled for symmetric multiprocessing. I have rented about 10 computers for a day and am trying to write with all of them in order to speed up my data handling.

    As for the PostgreSQL table, the overall record count is 140 million and there are 5 primary/foreign-key referring tables. I am not using joins, as they do not scale for me, so for a single lookup I do 6 lookups without joins and write the results into HDF5 format. For each lookup I do 6 inserts into the table and its corresponding arrays. The queries are really simple:

        select * from x.train where tr_id=1   (primary key & indexed)
        select q_t from x.qt where q_id=2     (non-primary key but indexed)

    (and similarly for the five queries). Each computer writes two HDF5 files, so the total count comes to around 20 files.

    Some calculations and statistics:

        Total number of records: 143,700,000
        Total number of records per file: 143,700,000 / 20 = 7,185,000
        Total number of rows in each file: 7,185,000 * 5 = 35,925,000

    Current PostgreSQL database config: my current machine has 8 GB of RAM with an i7 2nd-generation processor. I made the following changes to the PostgreSQL configuration file: shared_buffers = 2 GB, effective_cache_size = 4 GB.

    Note on current performance: I have run it for about ten hours and the total number of records written per file is about 621,000 * 5 = 3,105,000. The bottleneck is that I can only rent the machines for 10 hours per day (overnight), and at this speed it will take about 11 days, which is too long for my experiments. Please suggest how I can improve this.

    Questions: 1. Should I use symmetric multiprocessing on those desktops (each has 2 cores with about 2 GB of RAM)? In that case, what is suggested or preferable? 2. If I change my PostgreSQL configuration file and increase the RAM, will that speed up the process? 3. Should I use multithreading? In that case, any links or pointers would be of great help. Thanks, Sree Aurovindh V
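    As an aside on the lookup pattern described above: a common alternative to issuing several separate indexed lookups per record is to fetch a whole batch of related rows in one round trip. This is a hedged sketch only, since the question does not show the schema; the join column between x.train and x.qt is assumed:

        -- Hypothetical: pull a chunk of training rows and their related q_t values together
        SELECT t.*, q.q_t
        FROM x.train AS t
        JOIN x.qt    AS q ON q.q_id = t.q_id
        WHERE t.tr_id BETWEEN 1 AND 10000;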

    Read the article

  • Extending configuration for .Net 3.5 Applications

    - by Maximiliano Rios
    Due to a requirement in my current project, I have to build a configuration manager that merges local config information with configuration stored in a database. Custom configuration sections don't fit my needs: the problem is that I don't know the handler's type before loading certain information. For example, only after loading the database information will I know what my handler's type is, not before. So I thought I would write my own handler, but I can't leave the type attribute blank for sections; in fact .NET requires knowing the type in order to match my handler nodes. I'm thinking of writing a different parser to read the XML nodes, but I would prefer to keep this structure. I haven't found any information on how to do that yet. Is there any way? Can I extend or hook into the framework so it is capable of loading types on the fly and validating nodes? Thanks in advance.

    Read the article

  • How do I mock a custom field that is deleted so that south migrations run?

    - by muhuk
    I have removed an app that contained a couple of custom fields from my project. Now when I try to run my migrations I get an ImportError, naturally. These fields were very basic customizations, like the one below:

        from django.db.models.fields import IntegerField

        class SomeField(IntegerField):
            def get_internal_type(self):
                return "SomeField"

            def db_type(self, connection=None):
                return 'integer'

            def clean(self, value):
                # some custom cleanup
                pass

    So none of them contain any database-level customizations. When I removed this code I created migrations, so the subsequent migrations all ran fine. But when I tried to run them on a pre-deletion database I realized my mistake. I can re-create a bare-bones app and make these imports work, but ideally I would like to know whether South has a mechanism to resolve these issues, or whether there are any best practices. It would be great if I could solve this just by modifying my migrations and not touching the codebase. (Django 1.3, South 0.7.3)

    Read the article

  • Count query with 3 columns in SQL

    - by asher baig
    I have one database, Library, with a table named MEDIEN, which has multiple columns named Fname, Mname, Lname and ISBN. I want to count the database records with an ISBN and without an ISBN. I have executed the following commands:

        Select COUNT(ISBN) as Verf1 FROM library.MEDIEN where verf1 = isbn;
        Select COUNT(ISBN) as Verf2 FROM library.MEDIEN where verf2 = isbn;
        Select COUNT(ISBN) as Verf3 FROM library.MEDIEN where verf3 = isbn;
        Select COUNT(ISBN) as Ntverf1 FROM library.MEDIEN where verf1 != isbn;
        Select COUNT(ISBN) as Ntverf2 FROM library.MEDIEN where verf2 != isbn;
        Select COUNT(ISBN) as Ntverf3 FROM library.MEDIEN where verf3 != isbn;

    I am not sure whether these are the correct commands, because some ISBN records have only Fname and Mname, or Fname and Lname, or Mname and Lname, or Fname, Lname and Mname respectively. Please kindly help me solve this query.
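    A hedged sketch of counting rows with and without an ISBN in a single pass, assuming "without ISBN" means the column is NULL or empty:

        SELECT
            SUM(CASE WHEN ISBN IS NOT NULL AND ISBN <> '' THEN 1 ELSE 0 END) AS with_isbn,
            SUM(CASE WHEN ISBN IS NULL OR ISBN = '' THEN 1 ELSE 0 END)       AS without_isbn
        FROM library.MEDIEN;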

    Read the article

  • Java framework "suggestion" for persisting the results from an Oracle 9i stored procedure using Apache Tomcat

    - by chocksaway
    Hello, I am developing a Java servlet which calls an Oracle stored procedure. The stored procedure is likely to "grow" over time, and I have concerns about the amount of time taken to display the results on a web page. While I am at the implementation stage, I would like some suggestions for a persistence framework which will work on Apache Tomcat 5.5. I see two approaches to persisting the database results: a scheduled database query every N minutes, or something which utilises triggers. Hibernate seems like the obvious answer, but I have never called stored procedures from Hibernate (HQL and Criteria). Is there a more appropriate framework which can be used? Thank you. Cheers, Miles.

    Read the article

  • How to efficiently manage files on a filesystem in Java?

    - by Tuukka Mustonen
    I am creating a few JAX-WS endpoints, for which I want to save the received and sent messages for later inspection. To do this, I am planning to save the messages (XML files) to the filesystem, in some sensible hierarchy. There will be hundreds, even thousands of files per day. I also need to store metadata for each file. I am considering putting the metadata (just a couple of fields) into a database table, but keeping the XML file content itself in files on the filesystem in order not to bloat the database with content data (which is seldom read). Is there some simple library that helps me with saving, loading, deleting, etc. the files? It's not that tricky to implement myself, but I wonder if there are existing solutions: just a simple library that already provides easy access to the filesystem (preferably across different operating systems). Or do I even need that; should I just go with raw/custom Java?

    Read the article

  • Customized User Registration Form

    - by Nitz
    Hey guys, I have made a user-register.tpl.php file and have put many text fields in it. Now I want to store the users' information in the database. Because I have created a customized registration page, I need the values of my text fields to be stored in the database, like this:

        Username: <input type="text" name="myuser" id="myuser" />

    Now I want to store the username which will be entered in this "myuser" text field. NitishPanchjanya Corporation

    Read the article

  • Recommended tutorials for PHP, MySQL and graphs

    - by Vinit Joshi
    I need help finding a tutorial or anything else that will help me create a comparison chart. The user is able to search for device names; the device names are shown in a drop-down box populated dynamically from the database. I want the user to be able to select two separate devices and view the devices' information plotted on a graph. I'd be grateful for keywords that I can search for, or any tutorials that will help me carry on with this task. Currently on my screen I can see the data that has been inserted into the database; this data is placed inside a table. I have so far used XHTML, PHP and MySQL. I've tried to make the question as clear as possible, so sorry if it confuses anyone.

    Read the article

  • Unhandled exceptions in BackgroundWorker

    - by edg
    My WinForms app uses a number of BackgroundWorker objects to retrieve information from a database. I'm using BackgroundWorker because it allows the UI to remain unblocked during long-running database queries and it simplifies the threading model for me. I'm getting occasional DatabaseExceptions in some of these background threads, and I have witnessed at least one of these exceptions in a worker thread while debugging. I'm fairly confident these exceptions are timeouts, which I suppose it's reasonable to expect from time to time. My question is about what happens when an unhandled exception occurs in one of these background worker threads. I don't think I can catch an exception thrown in another thread, but can I expect my WorkerCompleted method to be executed? Is there any property or method of the BackgroundWorker I can interrogate for exceptions?

    Read the article

  • What is the Reason large sites don't use MySQL with ASP.NET?

    - by Luke101
    I have read this article from High Scalability about Stack Overflow and other large websites. Many large, high-traffic .NET sites such as plentyoffish.com, MySpace and SO use .NET technologies and SQL Server for their database. In the article, SO is quoted as saying: "As you add more and more database servers the SQL Server license costs can be outrageous. So by starting scale up and gradually going scale out with non-open source software you can be in a world of financial hurt." I don't understand why high-traffic .NET sites don't convert their databases to MySQL, as it is way cheaper than SQL Server.

    Read the article
