Search Results

Search found 31293 results on 1252 pages for 'database agnostic'.


  • check properties of two objects for changes

    - by k-hoffmann
    Hi, I have to develop a mechanism to check two object properties for changes. All properties that need to be checked are marked with an attribute. At the moment I:

    - read all properties from the actual object via LINQ
    - read the corresponding property from the old object
    - fill an object of my own with the two properties (old and new value)

    In code, the call to the worker class looks like this:

        public void CreateHistoryMap(BaseEntity actual, BaseEntity old)
        {
            CreateHistoryMap(actual, old)
                .ForEach(mapEntry => CreateHistoryEntry(mapEntry),
                         mapEntry => IfChangesDetected(mapEntry));
        }

    CreateHistoryMap builds up the HistoryMapEntry, which contains the two properties. CreateHistoryEntry builds up the object that is saved to the database, and IfChangesDetected checks the object for changes. I have to handle our own special application types to generate history values for the database (like concatenating list values and so on). My problem is that I have to read the values of the properties twice: once for change detection, and once for the concrete CreateHistoryEntry. How can I eliminate this problem, or how can I implement the change-tracking scenario with the nice C# 3.5 features? Thanks a lot.
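
    One way to avoid the double read is to materialize both values into the map entry the first time the properties are reflected over, so change detection and history creation both work from the same cached pair. A minimal sketch of that idea; it leans on the question's BaseEntity and CreateHistoryEntry, and the attribute and entry types are hypothetical stand-ins for the asker's own:

        using System;
        using System.Linq;

        [AttributeUsage(AttributeTargets.Property)]
        class HistorizedAttribute : Attribute { }  // hypothetical marker attribute

        class HistoryMapEntry
        {
            public string PropertyName;
            public object OldValue;
            public object NewValue;
            public bool HasChanged { get { return !Equals(OldValue, NewValue); } }
        }

        static void CreateHistory(BaseEntity actual, BaseEntity old)
        {
            // Read each marked property exactly once per object.
            var entries = typeof(BaseEntity).GetProperties()
                .Where(p => Attribute.IsDefined(p, typeof(HistorizedAttribute)))
                .Select(p => new HistoryMapEntry
                {
                    PropertyName = p.Name,
                    OldValue = p.GetValue(old, null),
                    NewValue = p.GetValue(actual, null)
                });

            // Change detection and history creation reuse the same cached values.
            foreach (var entry in entries.Where(e => e.HasChanged))
                CreateHistoryEntry(entry);
        }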


  • Oracle Forms on-button-pressed trigger to solve three scenarios

    - by DBase486
    Hello, I'm writing a when-button-pressed trigger on a save button for an Oracle Forms 6i form, and it has to fulfill a couple of scenarios. Here's some background information: the fields we're primarily concerned with are n_number, alert_id and end_date. For all three scenarios we are comparing candidate records against the following records in the database (for the sake of argument, let's assume they're the only records in the database so far):

        alert_id | n_number | end_date
        ---------+----------+-----------
               1 |        5 |
               2 |        6 | 10/25/2009

    Scenario 1: the user enters a new record (alert_id 1, n_number 5, end_date NULL). Objective: prevent the user from committing duplicate rows.

    Scenario 2: the user enters a new record (alert_id 1, n_number 10, end_date NULL). Objective: notify the user that this alert_id already exists, but allow the user to commit the row, if desired.

    Scenario 3: the user enters a new record (alert_id 2, n_number 6, end_date NULL). Objective: notify the user that this alert_id has occurred in the past (i.e. it has a not-null end_date), but allow the user to commit the row, if desired.

    I've written the code, which seems to comply with the first two scenarios, but it prevents me from fulfilling the third. When I enter the scenario 3 case, I am prompted to commit the record, but when I attempt this, the "duplicate_stop" alert pops up and stops me. I'm also getting the following error: ORA-01843: not a valid month. While testing the code for the third scenario in Toad (hard-coding the values, etc.) things seemed to be fine. Why would I encounter these problems at run time? Help is very much appreciated. Thank you.
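
    ORA-01843 at run time but not in Toad usually points to an implicit date conversion that depends on the session's NLS_DATE_FORMAT, which can differ between Forms and Toad. A hedged sketch of the duplicate check with the conversion made explicit (table and block names are invented for illustration):

        -- Compare end_date with an explicit format mask instead of relying on
        -- an implicit TO_DATE of the item's character value.
        SELECT COUNT(*)
          INTO v_count
          FROM alerts
         WHERE alert_id = :alerts_blk.alert_id
           AND n_number = :alerts_blk.n_number
           AND (end_date IS NULL AND :alerts_blk.end_date IS NULL
                OR end_date = TO_DATE(:alerts_blk.end_date, 'MM/DD/YYYY'));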


  • Deleting While Iterating in Ruby?

    - by Jesse J
    I'm iterating over a very large set of strings, and for each one iterating over a smaller set of strings. Due to the size, this method takes a while, so to speed it up I'm trying to delete strings from the smaller set once they no longer need to be used. Below is my current code:

        Ms::Fasta.foreach(@database) do |entry|
          all.each do |set|
            if entry.header[1..40].include? set[1] + "|"
              startVal = entry.sequence.scan_i(set[0])[0]
              if startVal != nil
                @locations << [set[0], set[1], startVal, startVal + set[1].length]
                all.delete(set)
              end
            end
          end
        end

    The problem I face is that the easy way, array.delete(string), effectively adds a break statement to the inner loop, which messes up the results. The only way I know to fix this is to do this:

        Ms::Fasta.foreach(@database) do |entry|
          i = 0
          while i < all.length
            set = all[i]
            if entry.header[1..40].include? set[1] + "|"
              startVal = entry.sequence.scan_i(set[0])[0]
              if startVal != nil
                @locations << [set[0], set[1], startVal, startVal + set[1].length]
                all.delete_at(i)
                i -= 1
              end
            end
            i += 1
          end
        end

    This feels kind of sloppy to me. Is there a better way to do this? Thanks.
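
    Ruby's Array#delete_if removes elements safely during a single pass, which drops the manual index bookkeeping entirely. A rough reworking of the loop along those lines (scan_i and the data layout are taken from the question as-is):

        Ms::Fasta.foreach(@database) do |entry|
          all.delete_if do |set|
            next false unless entry.header[1..40].include?(set[1] + "|")
            start_val = entry.sequence.scan_i(set[0])[0]
            next false if start_val.nil?
            @locations << [set[0], set[1], start_val, start_val + set[1].length]
            true # returning true deletes this set from `all`
          end
        end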


  • Objective-C map view delegate viewForAnnotation for MKAnnotation only gets called after a click

    - by user1185486
    I have a simple map view. It has a method:

        - (void)loadAndDisplayPois {
            NSLog(@"loadAndDisplayPois");
            if (mapView.annotations.count > 0)
                [mapView removeAnnotations:mapView.annotations];
            self.pois = [self loadPoisFromDatabase];
            NSLog(@"self.pois.count: %i", self.pois.count);
            [mapView addAnnotations:self.pois];
            NSLog(@"mapView.annotations.count: %i", mapView.annotations.count);
        }

    This method gets called after I have downloaded data and saved it into the database; I am sure of that because of the log. After saving the data to the database, the class that handles the download executes:

        [self.senderObj performSelector:@selector(loadAndDisplayPois)];

    where senderObj is the MapViewController. The count log from the pois array shows 4 after the first time I clicked, but no annotations appear on the view, because viewForAnnotation is not called (there is one annotation in the array: my current position). After I execute the method again by clicking a TEST button, everything shows on the map. The viewForAnnotation method gets called after viewWillAppear and after I click the TEST button. This has been driving me nuts for two days.
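
    The "works only after a tap" symptom is typical of UI work done off the main thread: if the download completion runs on a background thread, MapKit defers creating the annotation views until some later main-thread event. A hedged guess at the fix, bouncing the call back to the main thread:

        // Hypothetical fix: UIKit/MapKit must be driven from the main thread.
        [self.senderObj performSelectorOnMainThread:@selector(loadAndDisplayPois)
                                         withObject:nil
                                      waitUntilDone:NO];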


  • Ideas Needed for a Base Code System

    - by Tegan Snyder
    I've developed a PHP web application that is currently in need of strategic restructuring. Currently, when we set up new clients, we give them the entire code base on a subdomain of our main domain and create a new table for them in the database. This results in each client having the entire codebase, meaning that when we make bug fixes we have to go back and apply them independently across all clients, and this is a pain. What I'd like to create is a base code server that holds all the core PHP files:

        base.domain.com

    Then all of our clients (client.domain.com) would only need a few files:

    - config.php would have the database connection information.
    - index.php would display the login box if a session is non-existent; otherwise it loads the baseline code via remote includes from base.domain.com.

    My question is: does my logic seem feasible? How do other people handle similar situations with a shared base code? Also, is it even possible to remotely include PHP files from base.domain.com into client.domain.com? Thanks, Tegan
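
    One caveat worth illustrating: an include over HTTP does not fetch source code; base.domain.com would execute the PHP and return only its output (and allow_url_include is usually disabled anyway). If the subdomains live on the same server, a filesystem include gives the single-codebase setup directly. A sketch under that assumption (paths and file names are hypothetical):

        <?php
        // client.domain.com/config.php -- per-client settings
        define('DB_NAME',   'client_db');
        define('BASE_CODE', '/var/www/base/'); // shared codebase on the same box

        // client.domain.com/index.php
        require dirname(__FILE__) . '/config.php';
        require BASE_CODE . 'bootstrap.php';   // filesystem include, not HTTP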


  • Export large amount of data from Oracle 10G to SQL Server 2005

    - by uniball
    Dear all, I need to export 100 million data rows (average row length ~100 bytes) from an Oracle 10g database table into SQL Server 2005 (over a WAN/VLAN with 6 Mbit/s capacity) on a regular basis. So far, these are the options I have tried, with a quick summary of each. Has anyone tried this before? Are there other, better options? Which option would be the best in terms of performance and reliability? The times were calculated by testing on smaller amounts of data and extrapolating to estimate the time required.

    1. Using the data import wizard on the SQL Server, or SSIS packages, to import the data. It will take around 150 hours to complete the task.

    2. Using an Oracle batch job to spool the data into a comma-delimited flat file, then using an SSIS package to FTP this file to the SQL Server and load directly from the flat file. The issue here is the size of the flat file, which is expected to run into gigabytes.

    3. Although this option is drastically different, I am even considering using a linked server to query the Oracle data directly at run time, to avoid bringing the data in at all. Performance is a big problem, and I have limited control over the Oracle database in terms of creating table indexes.

    Regards, Uniball
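
    For option 2, the SQL Server side of the load is usually the cheap part; BULK INSERT with batching keeps the transaction log manageable. A rough sketch, assuming the spooled file has already been transferred (all names are illustrative):

        -- Hypothetical bulk load of the FTP'd spool file into a staging table
        BULK INSERT dbo.StagingTable
        FROM 'D:\transfer\export.dat'
        WITH (FIELDTERMINATOR = ',',
              ROWTERMINATOR   = '\n',
              BATCHSIZE       = 100000,
              TABLOCK);

    Compressing the spool before the FTP step is also worth testing: text spools of this kind often shrink severalfold, and the 6 Mbit/s link is likely the bottleneck.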


  • Getting a UIImage from MySQL using PHP and JSON

    - by Daniel
    I'm developing a little news reader that retrieves info from a website by doing a POST request to a URL. The response is a JSON object with the unread news. E.g. the last news item in the app has a timestamp of "2013-03-01"; when the user refreshes the table, the app POSTs to "domain.com/api/api.php?newer-than=2013-03-01". The api.php script goes to the MySQL database, fetches all the news posted after 2013-03-01, and prints it json_encoded:

        // do something to get the data in an array
        echo $array_of_fetched_data;

    For example, the response would be:

        [{"title": "new app is coming to the market", "text": "lorem ipsum dolor sit amet...", "image": XXX}]

    The app then gets the response and parses it, obtaining an NSDictionary, and adds it to a Core Data db:

        NSDictionary *obtainedNews = [NSJSONSerialization JSONObjectWithData:responseData
                                                                     options:kNilOptions
                                                                       error:&error];

    My question is: how can I add an image to the MySQL database, store it, pass it using JSON through an HTTP POST request, and then interpret it as a UIImage? It's clear that to store a UIImage in Core Data it must be transformed into/from NSData. How can I pass the NSData back and forth to a MySQL db using PHP and JSON? How should I upload the image to the db (serialized, as a BLOB, etc.)?
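
    JSON cannot carry raw binary, so the usual trick is to store the image as a BLOB and Base64-encode it on the way out. A hedged PHP-side sketch (table and column names are invented):

        <?php
        // Fetch news rows and make the BLOB JSON-safe via Base64
        $stmt = $pdo->prepare('SELECT title, text, image FROM news WHERE posted > ?');
        $stmt->execute(array($_GET['newer-than']));

        $out = array();
        while ($row = $stmt->fetch(PDO::FETCH_ASSOC)) {
            $row['image'] = base64_encode($row['image']); // BLOB -> text
            $out[] = $row;
        }
        echo json_encode($out);

    On the iOS side the string decodes back to NSData (with an NSData Base64 category on iOS 6, or initWithBase64EncodedString:options: on iOS 7+), and [UIImage imageWithData:] does the rest. That said, storing just a URL in the database and serving the image file separately is usually the lighter-weight design.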


  • MySQL Can Connect Remotely but not Locally

    - by A Wizard Did It
    This is a weird problem and I'm not sure what's going on. I installed MySQL on a Linux box running Ubuntu 10.04 LTS. I can access mysql via SSH (mysql -p) and perform all my commands that way. I added a user, and I can use AddedUser to connect remotely from my machine, but not from the local machine. It makes no sense to me. SELECT host, user FROM mysql.user yields:

        +-----------+------------------+
        | host      | user             |
        +-----------+------------------+
        | %         | AddedUser        |
        | 127.0.0.1 | root             |
        | li241-255 | root             |
        | localhost | debian-sys-maint |
        | localhost | root             |
        +-----------+------------------+

    The problem is I'm developing on this machine using Node.js, and I can't connect locally from the server using the same username. I've tried FLUSH PRIVILEGES, but that seems to have no effect. I know it's not Node.js, because I'm using the same code against another database and it works in that environment.

    Edit: this is the error Node is giving me:

        node.js:50
            throw e; // process.nextTick error, or 'error' event on first tick
            ^
        Error: ECONNREFUSED, Connection refused
            at Stream._onConnect (net.js:687:18)
            at IOWatcher.onWritable [as callback] (net.js:284:12)

    Edit 2: I have the right port and server as best I can tell. My /etc/mysql/my.cnf contains:

        port   = 3306
        socket = /var/run/mysqld/mysqld.sock

    My MySQL object contains:

        { host: 'localhost',
          port: 3306,
          user: 'removed',
          password: 'removed',
          database: '',
          typeCast: true,
          flags: 260047,
          maxPacketSize: 16777216,
          charsetNumber: 192,
          debug: false,
          ending: false,
          connected: false,
          _greeting: null,
          _queue: [],
          _connection: null,
          _parser: null,
          server: 'ExternalIpAddress' }

    Possibly useful? netstat -ln | grep mysql gives:

        unix  2  [ ACC ]  STREAM  LISTENING  1016418  /var/run/mysqld/mysqld.sock
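
    ECONNREFUSED means nothing accepted the TCP connection on 127.0.0.1:3306; since remote connections succeed, a likely suspect is that mysqld's TCP listener is bound only to the machine's external address. (The grep above only catches the unix-socket line because netstat -n prints port numbers, not service names.) A hedged check-and-fix sketch:

        # See which address the TCP listener on 3306 is bound to
        netstat -ltn | grep 3306

        # /etc/mysql/my.cnf -- hypothetical fix: listen on all interfaces
        # [mysqld]
        # bind-address = 0.0.0.0
        # then: sudo service mysql restart

    Alternatively, pointing the Node config's host at the external IP (the address remote clients already use) should work without touching my.cnf.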


  • MS Access Bulk Insert exits app

    - by Brij
    In the web service, I have to bulk insert data into an MS Access database. At first there was a single complex INSERT...SELECT query; there was no error, but the app exited after inserting some records. I have divided the query to make it simpler, using LINQ to remove the complexity, and now I am inserting records using a for loop. There are approximately 10,000 records. Again, the same problem: I put a breakpoint after the loop, but it is never hit. I have used Try/Catch, but no error is found.

        For Each item In lstSelDel
            Try
                qry = String.Format("insert into Table(Id,link,file1,file2,pID) values ({0},""{1}"",""{2}"",""{3}"",{4})", item.WebInfoID, item.Links, item.Name, item.pName, pDateID)
                DAL.ExecN(qry)
            Catch ex As Exception
                Throw ex
            End Try
        Next

        Public Shared Function ExecN(ByVal SQL As String) As Integer
            Dim ret As Integer = -1
            Dim nowConString As String = "Provider=Microsoft.Jet.OLEDB.4.0;Data Source=" + System.AppDomain.CurrentDomain.BaseDirectory + "DataBase\\mydatabase.mdb;"
            Dim nowCon As System.Data.OleDb.OleDbConnection
            nowCon = New System.Data.OleDb.OleDbConnection(nowConString)
            Dim cmd As New OleDb.OleDbCommand(SQL, nowCon)
            nowCon.Open()
            ret = cmd.ExecuteNonQuery()
            nowCon.Close()
            nowCon.Dispose()
            cmd.Dispose()
            Return ret
        End Function

    After the app exits, I see w3wp.exe using more than 50% of memory. Any idea what is going wrong? Is there any limitation of MS Access?
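
    Opening and disposing a fresh Jet connection for each of the 10,000 inserts is heavy, and under load it can exhaust the worker process before the loop finishes. A hedged restructuring that reuses one connection and one transaction for the whole batch (BuildInsertFor is a hypothetical helper building the same INSERT text as above):

        ' Sketch only: one connection and one transaction for the whole loop
        Using con As New OleDb.OleDbConnection(nowConString)
            con.Open()
            Using tx As OleDb.OleDbTransaction = con.BeginTransaction()
                For Each item In lstSelDel
                    Dim qry As String = BuildInsertFor(item)
                    Using cmd As New OleDb.OleDbCommand(qry, con, tx)
                        cmd.ExecuteNonQuery()
                    End Using
                Next
                tx.Commit()
            End Using
        End Using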


  • Performance improvement of client server system

    - by Tanuj
    I have a legacy client-server system where the server maintains a record of some data stored in an SQLite database. The data is related to monitoring access patterns of files stored on the server. The client application is basically a remote viewer of the data: when the client is launched, it connects to the server and gets the data to display in a grid view. The data gets updated in real time on the server, and the view in the client automatically refreshes. There are two problems with the current implementation:

    1. When the database gets too big, it takes a lot of time to load the client. What are the best ways to deal with this? One option is to maintain a cache on the client side. How is such a cache best implemented?

    2. How can the server maintain a diff so that it only sends the diff during the refresh cycle? There can be multiple clients, and each client needs to display the latest data available on the server.

    The server is a Windows service daemon. Both the client and the server are implemented in C#.
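
    A common diff scheme is to stamp every row with a monotonically increasing change counter and let each client ask only for rows newer than the last counter it saw; the same stamp doubles as the invalidation rule for a client-side cache. A sketch of the idea in SQLite terms (table and column names are invented):

        -- Server side: change stamp maintained on every write
        ALTER TABLE access_log ADD COLUMN change_seq INTEGER;
        CREATE INDEX idx_access_log_seq ON access_log(change_seq);

        -- Refresh cycle: each client sends the highest change_seq it has cached
        SELECT * FROM access_log
        WHERE change_seq > :client_last_seq
        ORDER BY change_seq;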


  • JasperReports reports access image via authenticated URL

    - by user363115
    Hi all, I am hoping this is a simple issue with a simple solution and that I have missed something obvious. Let me explain the problem. We have an application that generates PDF reports (using Jasper). These reports contain data from our database, as well as imagery (photographs). The photographs are stored in S3, and we use signed, time-limited URLs to access them. We link these photographs into our Jasper reports using those S3 URLs. Because the S3 URLs are signed and time-limited (by design), the process is as follows:

    1. The user requests a report to be generated.
    2. The report is filled and goes to our database (at which time UUIDs for any required images are retrieved).
    3. For each UUID, a signed S3 URL must be generated.
    4. To do this, the URL behind each report image is a call to an authenticated URL in our app (/get_img?uuid=foo).
    5. The controller behind this URL generates a signed S3 URL and returns it.
    6. The report loads the image.

    The problem is with step 4: the call to the authenticated URL fails because Jasper does not pass any authentication information with the request. Is there a solution here? Thanks all for your time. Ben
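
    One way out is to skip the authenticated hop entirely: generate the signed S3 URL while the report is being filled and hand it to Jasper as a parameter, so the report engine only ever fetches a pre-signed address. A hedged sketch using the AWS SDK for Java (parameter and variable names are invented):

        // Sign the URL up front, then let the report's image expression use it
        Map<String, Object> params = new HashMap<String, Object>();
        params.put("PHOTO_URL",
                   s3Client.generatePresignedUrl(bucketName, imageKey, expiry).toString());
        JasperPrint print =
                JasperFillManager.fillReport(compiledReport, params, dbConnection);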


  • Form validation

    - by kielie
    Hi guys, I need to create a form that has many of the same fields, which have to be inserted into a database. The problem I have is that if a user fills in only one or two of the rows, the form will still submit the blank data of the empty fields along with the fields the user filled in. How can I check for the rows that have not been filled in and leave them out of the query, or check for those that have been filled in and add them to the query? The thank_you.php file captures the $_POST variables and adds them to the database.

        <form method="post" action="thank_you.php">
          Name: <input type="text" size="28" name="name1" />
          E-mail: <input type="text" size="28" name="email1" />
          <br />
          Name: <input type="text" size="28" name="name2" />
          E-mail: <input type="text" size="28" name="email2" />
          <br />
          Name: <input type="text" size="28" name="name3" />
          E-mail: <input type="text" size="28" name="email3" />
          <br />
          Name: <input type="text" size="28" name="name4" />
          E-mail: <input type="text" size="28" name="email4" />
          <input type="image" src="images/btn_s.jpg" />
        </form>

    I am assuming that I could use JavaScript or jQuery to accomplish this. How would I go about doing this? Thanks in advance for the help.
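
    Client-side scripting is optional here; the simplest guard is server-side, skipping any row whose fields are empty before it reaches the query. A sketch of thank_you.php along those lines (the table name and PDO setup are assumptions):

        <?php
        // Insert only the rows where both name and e-mail were filled in
        $stmt = $pdo->prepare('INSERT INTO contacts (name, email) VALUES (?, ?)');
        for ($i = 1; $i <= 4; $i++) {
            $name  = isset($_POST["name$i"])  ? trim($_POST["name$i"])  : '';
            $email = isset($_POST["email$i"]) ? trim($_POST["email$i"]) : '';
            if ($name !== '' && $email !== '') {
                $stmt->execute(array($name, $email));
            }
        }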


  • ASP.NET Web Application: use 1 or multiple virtual directories

    - by tster
    I am working on a (largish) internal web application which has multiple modules (security, execution, features, reports, etc.). All the pages in the app share navigation, CSS, JS, controls, etc. I want to make a single "Web Application" project which includes all the pages for the app and references various projects containing the database and business logic. However, some of the people on the project want separate projects for the pages of each module. To make this clearer, this is the layout I'm advocating:

        /WebInterface*
        /SecurityLib
        /ExecutionLib
        etc...

    And here is what they are advocating:

        /SecurityInterface*
        /SecurityLib
        /ExecutionInterface*
        /ExecutionLib
        etc...

        * project will be published to a virtual directory of IIS

    Basically, what I'm looking for is the advantages of both approaches. Here is what I can think of so far.

    Single virtual directory, pros:

    - Modules can share a single MasterPage.
    - Modules can share UserControls (this will be common).
    - Links to other modules are within the same virtual directory, and thus don't need to be fully qualified.
    - Less chance of having incompatible module versions running together.

    Multiple virtual directories, pros:

    - Can publish a new version of a single module without disrupting other modules.
    - Each module is more compartmentalized; it is less likely that changes will break other modules.

    I don't buy those arguments, though. First, using load-balanced servers (which we will have), we should be able to publish new versions of the project with zero downtime, assuming there are no breaking database changes. Second, if something "breaks" another module, then there is either an improper dependency, or the break will show up eventually in the other module when the developers copy over the latest version of the UserControl, MasterPage or DLL. As a point of reference, there are about 10 developers on the project at about 50% of their time. The initial development will be about 9 months.


  • Is something along the lines of nested memoization needed here?

    - by Daniel
    System.Transactions notoriously escalates transactions involving multiple connections to the same database to the DTC. The module and helper class ConnectionContext below are meant to prevent this by ensuring multiple connection requests for the same database return the same connection object. This is, in some sense, memoization, although there are multiple things being memoized and the second is dependent on the first. Is there some way to hide the synchronization and/or mutable state (perhaps using memoization) in this module, or perhaps rewrite it in a more functional style? (It may be worth noting that there's no locking when getting the connection by connection string, because Transaction.Current is ThreadStatic.)

        type ConnectionContext(connection:IDbConnection, ownsConnection) =
            member x.Connection = connection
            member x.OwnsConnection = ownsConnection
            interface IDisposable with
                member x.Dispose() = if ownsConnection then connection.Dispose()

        module ConnectionManager =
            let private _connections = new Dictionary<string, Dictionary<string, IDbConnection>>()

            let private getTid (t:Transaction) = t.TransactionInformation.LocalIdentifier

            let private removeConnection tid =
                let cl = _connections.[tid]
                for (KeyValue(_, con)) in cl do con.Close()
                lock _connections (fun () -> _connections.Remove(tid) |> ignore)

            let getConnection connectionString (openConnection:(unit -> IDbConnection)) =
                match Transaction.Current with
                | null -> new ConnectionContext(openConnection(), true)
                | current ->
                    let tid = getTid current
                    // get connections for the current transaction
                    let connections =
                        match _connections.TryGetValue(tid) with
                        | true, cl -> cl
                        | false, _ ->
                            let cl = Dictionary<_,_>()
                            lock _connections (fun () -> _connections.Add(tid, cl))
                            cl
                    // find connection for this connection string
                    let connection =
                        match connections.TryGetValue(connectionString) with
                        | true, con -> con
                        | false, _ ->
                            let initial = (connections.Count = 0)
                            let con = openConnection()
                            connections.Add(connectionString, con)
                            // if this is the first connection for this transaction,
                            // register connections for cleanup
                            if initial then
                                current.TransactionCompleted.Add (fun args ->
                                    let id = getTid args.Transaction
                                    removeConnection id)
                            con
                    new ConnectionContext(connection, false)
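
    Much of the hand-rolled locking can be pushed into ConcurrentDictionary.GetOrAdd, which is itself a memoizing primitive: each level of the lookup becomes one GetOrAdd call. A hedged sketch of the core lookup only (cleanup registration elided; ConcurrentDictionary is .NET 4, so on 3.5 the same shape needs Dictionary plus a lock per level):

        open System.Collections.Concurrent
        open System.Data

        let private connections =
            ConcurrentDictionary<string, ConcurrentDictionary<string, IDbConnection>>()

        // One memoized connection per (transaction id, connection string) pair
        let getConnectionFor tid connectionString (openConnection: unit -> IDbConnection) =
            let perTx = connections.GetOrAdd(tid, fun _ -> ConcurrentDictionary())
            perTx.GetOrAdd(connectionString, fun _ -> openConnection())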


  • Protecting critical sections based on a condition in C#

    - by NAADEV
    Hello, I'm dealing with a curious scenario. I'm using Entity Framework to save (insert/update) into a SQL database in a multithreaded environment. The problem is that I need to check the database to see whether a record with a particular key has already been created, in order to set a field value one way (executing) or, if it's new, another way (pending). Those records are identified by a unique GUID. I've solved this problem by setting a lock, since I know the entity will not be present in any other process; in other words, I will not have the same GUID in different processes, and it seems to be working fine. It looks something like this:

        static readonly object LockableObject = new object();

        static void SaveElement(Entity e)
        {
            lock (LockableObject)
            {
                Entity e2 = Repository.FindByKey(e);
                if (e2 != null)
                {
                    Repository.Update(e);
                }
                else
                {
                    Repository.Insert(e);
                }
            }
        }

    But this implies that when I have a huge amount of requests to be saved, they will be queued. I wonder if there is something like the following (please take it just as an idea):

        static void SaveElement(Entity e)
        {
            using (var protector = new ThisWouldBeAClassToProtectBasedOnACondition(x => x.UniqueId))
            {
                Entity e2 = Repository.FindByKey(e);
                if (e2 != null)
                {
                    Repository.Update(e);
                }
                else
                {
                    Repository.Insert(e);
                }
            }
        }

    The idea would be having a kind of protection that locks based on a condition, so each entity e would have its own lock based on its e.UniqueId property. Any idea?
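
    A per-key lock can be built from a ConcurrentDictionary that memoizes one lock object per GUID, so only saves for the same UniqueId serialize against each other. A hedged sketch of that pattern (Entity and Repository are the question's own types):

        using System;
        using System.Collections.Concurrent;

        static class KeyedSaver
        {
            static readonly ConcurrentDictionary<Guid, object> Locks =
                new ConcurrentDictionary<Guid, object>();

            public static void SaveElement(Entity e)
            {
                // One lock object per UniqueId; unrelated saves don't queue
                lock (Locks.GetOrAdd(e.UniqueId, _ => new object()))
                {
                    Entity e2 = Repository.FindByKey(e);
                    if (e2 != null) Repository.Update(e);
                    else Repository.Insert(e);
                }
            }
        }

    The dictionary grows with distinct GUIDs, so entries may need evicting after the save; and like the original lock, this only protects within a single process.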


  • Java MySQL multiple-word search

    - by user1703849
    I have a database in MySQL that has a name column which contains several words (a description). I am connected to the database with Java through Eclipse. I have a search that returns results only if the name field contains one word.

        id  name           info                 type
        1   balloon        big red balloon      big
        2   house          expensive beautiful  luxury
        3   chicken wings  deep fried wings     tasty

    These are just random words, but as an example: my search can see e.g. "balloon" and then show its info, but if I type "chicken wings", it does nothing. So is it possible somehow to search columns containing multiple words? This is my search code below:

        import java.io.*;
        import java.sql.*;
        import java.util.*;

        class Search {
            public static void main(String[] args) {
                Scanner input = new Scanner(System.in);
                try {
                    Connection con = DriverManager.getConnection(
                            "jdbc:mysql://example/mydb", "user", "password");
                    Statement stmt = (Statement) con.createStatement();
                    System.out.print("enter search: ");
                    String name = input.next();
                    String SQL = "SELECT * FROM menu where name LIKE '" + name + "'";
                    ResultSet rs = stmt.executeQuery(SQL);
                    while (rs.next()) {
                        System.out.println("Name: " + rs.getString("name"));
                        System.out.println("Description: " + rs.getString("info"));
                        System.out.println("Price: " + rs.getString("Price"));
                    }
                } catch (Exception e) {
                    System.out.println("ERROR: " + e.getMessage());
                }
            }
        }
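
    Scanner.next() is the likely culprit: it reads a single whitespace-delimited token, so "chicken wings" reaches the query as just "chicken". Reading the whole line, adding LIKE wildcards, and switching to a PreparedStatement (which also closes the SQL-injection hole in the concatenated query) would look roughly like this:

        String name = input.nextLine();    // keeps the whole phrase, spaces included
        PreparedStatement ps = con.prepareStatement(
                "SELECT * FROM menu WHERE name LIKE ?");
        ps.setString(1, "%" + name + "%"); // match the phrase anywhere in the column
        ResultSet rs = ps.executeQuery();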


  • How to execute stored procedure from Access using linked tables

    - by webworm
    I have an Access 2003 database that connects to a SQL Server 2008 box via ODBC. The tables from SQL Server are connected as linked tables in Access. I have a stored procedure on the SQL Server that I am trying to execute via ADO code. The problem I have is that Access cannot seem to find the procedure. What do I have to do within Access to be able to execute this stored procedure? Some facts: the stored procedure in question accepts one parameter, which is an integer, and returns a recordset which I am hoping to use as the data source for a ListBox. Here is my ADO code in Access:

        Private Sub LoadUserCaseList(userID As Integer)
            Dim cmd As ADODB.Command
            Set cmd = New ADODB.Command
            cmd.ActiveConnection = CurrentProject.Connection
            cmd.CommandType = adCmdStoredProc
            cmd.CommandText = "uspGetUserCaseSummaryList"

            Dim par As New ADODB.Parameter
            Set par = cmd.CreateParameter("userID", adInteger)
            cmd.Parameters.Append par
            cmd.Parameters("userID") = userID

            Dim rs As ADODB.Recordset
            Set rs = cmd.Execute()
            lstUserCases.Recordset = rs
        End Sub

    The error I get is: "the Microsoft Jet database engine cannot find the input table or query 'uspGetUserCaseSummaryList'".
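
    That error message is the tell: CurrentProject.Connection talks to the Jet engine, which only sees local and linked tables, never server-side procedures. Pointing the ADO command at a connection opened directly against SQL Server sidesteps this. A hedged sketch (server and database names are placeholders):

        Dim cn As New ADODB.Connection
        cn.Open "Provider=SQLOLEDB;Data Source=MyServer;" & _
                "Initial Catalog=MyDatabase;Integrated Security=SSPI;"

        Dim cmd As New ADODB.Command
        cmd.ActiveConnection = cn
        cmd.CommandType = adCmdStoredProc
        cmd.CommandText = "uspGetUserCaseSummaryList"
        cmd.Parameters.Append cmd.CreateParameter("userID", adInteger, adParamInput, , userID)

        Dim rs As ADODB.Recordset
        Set rs = cmd.Execute()

    The other classic route is an Access pass-through query whose SQL text is the EXEC statement itself, since pass-through queries are sent to the server verbatim.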


  • Create Session Variable from different datasources?

    - by Szafranamn
    Currently I am developing a dynamic website using Dreamweaver CS5 with ColdFusion 9, using Access to create my databases along with QuickBooks and QODBC to create a second database. I have established a login session variable stemming from the login page. This session variable is drawn from one datasource, "Access" (table "Logininfo", field "FullName"), but I wanted to create another session variable, either at this point or further into the member's page, to use in a query. That session variable would stem from another datasource, "QBs" (table "Invoice", field "CustomerRefFullName"), which is generated through QuickBooks and QODBC. I am not sure if this is possible, but if it is, how do I do it? I want to do this so I can query the Invoice table and load onto the member's page the invoices unique to them, so it would have to be related to their login credentials. If there is a better route to take, I would greatly appreciate the advice. Below is the login code; if additional information is needed, let me know.

    My current plan: I have another datasource, "QBs", with a table "Invoice". When I create another web page for the customer to see their invoices, I need a recordset that accesses that table. In order to do so, I think the best way would be to somehow convert the session.FullName (which came from the Access datasource, Logininfo table) into a session.CustomerRefFullName (which would have to come from datasource "QBs", table "Invoice", field "CustomerRefFullName"). That way I could set the query WHERE CustomerRefFullName and have each logged-in user see their specific invoices. So, is there a way to turn a session variable off one datasource/table into a different session variable off a new datasource/table, even if it is unique just to that page?

        <cfif IsDefined("FORM.username")>
            SELECT FullName, Username, Password, AccessLevels
            FROM Logininfo
            WHERE Username= AND Password=
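
    A session variable is just a value in SESSION scope; it is not tied to the datasource it originally came from, so the Access-derived name can filter the QBs datasource directly. A hedged sketch, assuming the QuickBooks customer names match the login names exactly:

        <!--- After a successful login, store the name once --->
        <cfset SESSION.FullName = qLogin.FullName>

        <!--- On the invoices page, query the other datasource with it --->
        <cfquery name="qInvoices" datasource="QBs">
            SELECT *
            FROM Invoice
            WHERE CustomerRefFullName =
                <cfqueryparam value="#SESSION.FullName#" cfsqltype="cf_sql_varchar">
        </cfquery>

    If the two systems spell customer names differently, a small mapping table keyed on the login is the usual fix.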


  • In mysql, is "explain ..." always safe?

    - by tye
    If I allow a group of users to submit "EXPLAIN $whatever" to MySQL (via Perl's DBI, using DBD::mysql), is there anything that a user could put into $whatever that would make database changes, leak non-trivial information, or even cause significant database load? If so, how? I know that via "EXPLAIN $whatever" one can figure out what tables and columns exist (you have to guess names, though) and roughly how many records are in a table or how many records have a particular value for an indexed field. I don't expect one to be able to get any information about the contents of unindexed fields. DBD::mysql should not allow multiple statements, so I don't expect it to be possible to run an arbitrary query (just EXPLAIN one query). Even subqueries should not be executed, just explained. But I'm not a MySQL expert, and there are surely features of MySQL that I'm not even aware of. In trying to come up with a query plan, might the optimizer actually execute an expression in order to come up with the value that an indexed field is going to be compared against? Consider:

        EXPLAIN SELECT * FROM atable WHERE class = somefunction(...)

    where atable.class is indexed and not unique, and class='unused' would find no records but class='common' would find a million records. Might EXPLAIN evaluate somefunction(...)? And could somefunction(...) then be written such that it modifies data?
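
    That last worry can be tested empirically: give a stored function a visible side effect and see whether EXPLAIN fires it. A hedged probe sketch (the log table is invented; note that MySQL stored functions are allowed to modify data, which is exactly why the question matters):

        DELIMITER //
        CREATE FUNCTION probe() RETURNS INT DETERMINISTIC
        BEGIN
            INSERT INTO probe_log VALUES (NOW());  -- the visible side effect
            RETURN 1;
        END //
        DELIMITER ;

        EXPLAIN SELECT * FROM atable WHERE class = probe();
        SELECT COUNT(*) FROM probe_log;  -- non-zero means EXPLAIN evaluated it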


  • Parse large XML file w/ script or use BioPython API?

    - by jeremy04
    Hey guys, this is my first question on here. I'm trying to make a local copy of UniProtKB in SQL. UniProtKB is 2.1 GB, and it comes in XML and in a special text format used by SwissProt. Here are my options:

    1. Use a SAX parser for the XML. I chose Ruby and Nokogiri; I started writing the parser, but my initial reaction was: how would I map the XML schema to the SAX parser?

    2. Use Biopython. I already have BioSQL/Biopython installed, which literally created my SQL schema for me, and I was able to successfully insert one SwissProt/UniProt text file into the database. I'm running it right now (crosses fingers) on the entire 2.1 GB. Here is the code I'm running:

        from Bio import SeqIO
        from BioSQL import BioSeqDatabase
        from Bio import SwissProt

        server = BioSeqDatabase.open_database(driver = "MySQLdb", user = "root",
                                              passwd = "", host = "localhost", db = "bioseqdb")
        db = server["uniprot"]
        iterator = SeqIO.parse(open("/path/to/uniprot_sprot.dat", "r"), "swiss")
        db.load(iterator)
        server.commit()

    Edit: it's now crashing because the transactions are getting locked (since the tables are InnoDB): Error Number 1205, "Lock wait timeout exceeded; try restarting transaction". I'm using MySQL version 5.1.43. Should I switch my database to PostgreSQL?
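
    Loading all 2.1 GB inside one transaction is what keeps InnoDB row locks held long enough to time out; committing in batches releases them as the load progresses. A hedged reworking of the loop (the batch size is arbitrary):

        from Bio import SeqIO
        from BioSQL import BioSeqDatabase

        server = BioSeqDatabase.open_database(driver="MySQLdb", user="root",
                                              passwd="", host="localhost", db="bioseqdb")
        db = server["uniprot"]

        batch = []
        for record in SeqIO.parse("/path/to/uniprot_sprot.dat", "swiss"):
            batch.append(record)
            if len(batch) >= 1000:
                db.load(batch)      # load() accepts any iterable of SeqRecords
                server.commit()     # release InnoDB locks every 1,000 records
                batch = []
        if batch:
            db.load(batch)
            server.commit()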


  • How to create C++ istringstream from a char array with null(0) characters?

    - by Morpheus
    I have a char array which contains null characters at random locations. I tried to create an istringstream using this array (encodedData_arr) as below. I use this istringstream to insert binary data (the image data of an IplImage) into a MySQL database BLOB field (using MySQL Connector/C++'s setBlob(istream *is)), but it only stores the characters up to the first null character. Is there a way to create an istringstream from a char array containing null characters?

        unsigned char *encodedData_arr = new unsigned char[data_vector_uchar->size()];

        // Assign the data of vector<unsigned char> to encodedData_arr
        for (int i = 0; i < vec_size; ++i)
        {
            cout << data_vector_uchar->at(i) << " : " << encodedData_arr[i] << endl;
        }
        // Here the content of encodedData_arr is the same as data_vector_uchar,
        // so the char array is initializing fine.

        istream *is = new istringstream((char*)encodedData_arr,
                                        istringstream::in | istringstream::binary);
        prepStmt_insertImage->setBlob(1, is);
        // Here only part of the data is stored in the database BLOB field
        // (up to the first null character)
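
    The truncation happens because the char* is converted to std::string through the C-string constructor, which stops at the first '\0'. Constructing the string with an explicit length keeps every byte. A minimal sketch:

        #include <sstream>
        #include <string>

        // Length-aware constructor: embedded nulls survive
        std::string payload(reinterpret_cast<char*>(encodedData_arr),
                            data_vector_uchar->size());
        std::istream *is = new std::istringstream(payload,
                                                  std::ios::in | std::ios::binary);
        prepStmt_insertImage->setBlob(1, is);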


  • Unserializing an API return object (PHP/eBay API)

    - by DavidYell
    I have been working with the eBay API for a project and have found it great. I have, however, now found a problem that is more PHP-related. When I read my items from eBay, I store a bunch of details in the database. Currently, just for the sake of it really, I serialize the whole return object and store it in a related table, the idea being that when I display my information, I have all the details to hand should I need them. The problem arises in that the pricing information is always in a sub-object:

        [ConvertedAdjustmentAmount] => __PHP_Incomplete_Class Object
            (
                [__PHP_Incomplete_Class_Name] => eBayAmountType
                [_] => 0
                [currencyID] => USD
            )

    As you can see, when I unserialize my object, my cunning plan falls foul of the incomplete-class problem. I have checked the following question, without success: http://stackoverflow.com/questions/965611/forcing-access-to-php-incomplete-class-object-properties. The main issue, as far as I can see, is that the price class is defined in the eBay API, so how do I recreate it? I have been reading this page, http://uk3.php.net/manual/en/function.unserialize.php, and trying to figure out unserialize_callback_func, which I can't figure out either, so any help would be appreciated!
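
    __PHP_Incomplete_Class only appears when the class is undefined at the moment unserialize() runs, so the fix is to make eBayAmountType (and friends) loadable first. unserialize_callback_func is exactly the hook for that: PHP calls it with each missing class name. A hedged sketch (the column name is invented):

        <?php
        // Called once per unknown class encountered during unserialize()
        ini_set('unserialize_callback_func', 'load_ebay_class');

        function load_ebay_class($classname)
        {
            // Best: require the eBay SDK file that defines $classname.
            // Fallback: declare an empty stand-in so the properties are reachable.
            if (!class_exists($classname)) {
                eval("class $classname {}");
            }
        }

        $details = unserialize($row['ebay_blob']);
        echo $details->ConvertedAdjustmentAmount->currencyID;  // "USD"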


  • iOS UITableView reloadData doesn't increase content height

    - by Zoltan Varadi
    I'm facing a strange error in my iOS app and haven't been able to figure out the reason. In my UITableView I can add new cells, so I refresh the array that is the data source and call reloadData. Everything works without error: numberOfRowsInSection is called again, and its value is one more than its previous value was. The new cell even gets inserted at the bottom of the table view, but I can't scroll down to it. I only know that it's there because the table view bounces and I can see it. I'm guessing the table view's content height is not getting increased for some reason, but I have no idea why. I'm using iOS 6, by the way. Any help is very much appreciated! Thanks, Zoli

    Edit, answers for Srikanth's questions:

    - How is the data getting added? There's an NSArray containing objects of a specific type; the array's count is the number of cells. The NSArray gets its values from a database query.
    - When you say you are refreshing the array, what do you mean? By refreshing the array I mean I execute a new query on the database and put the results into an NSArray, which becomes the table view's data source array, like this:

          dataSourceArray = [dbManager executeQuery];
          [tableView reloadData];

    - Are you adding the data within the same view controller? Yes.
    - Can you show some code as to what you are doing? See above.
    - Is the table view at the root of the view controller's view, or within another view? The table view is the main view's first child.
    - Can you try adding data to the array at the beginning, so that you can view the cell being added at the top? I don't understand what you mean.
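
    A stale content size after reloadData is a symptom that tends to appear when the reload happens off the main thread (for example, straight from a database completion callback). A hedged check: force the refresh onto the main queue and see whether the behavior changes:

        // Hypothetical guard: mutate the data source and reload on the main thread
        dispatch_async(dispatch_get_main_queue(), ^{
            dataSourceArray = [dbManager executeQuery];
            [tableView reloadData];
        });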


  • Table not created by Hibernate

    - by User1
    I annotated a bunch of POJOs so JPA can use them to create tables via Hibernate. It appears that all of the tables are created except one very central table called "Revision". The Revision class has an @Entity(name="RevisionT") annotation, so it will be renamed to RevisionT and not conflict with any reserved words in MySQL (the target database). I delete the entire database, recreate it, and basically open and close a JPA session; all the other tables seem to get recreated without a problem. Why would a single table be missing from the created schema? What instrumentation can be used to see what Hibernate is producing, and which errors occur? Thanks.

    UPDATE: I tried creating the schema as a Derby DB and it was successful. However, one of the fields has the name "index". I use @org.hibernate.annotations.IndexColumn to specify a name that is not a reserved word, but the column is always called "index" when it is created. Here's a sample of the suspect annotations:

        @ManyToOne
        @JoinColumn(name="MasterTopID")
        @IndexColumn(name="Cx3tHApe")
        protected MasterTop masterTop;

    Instead of creating MasterTop.Cx3tHApe as a field, it creates MasterTop.Index. Why is the name ignored?
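
    On the instrumentation question: Hibernate will print and log every piece of DDL it generates, which usually surfaces the failing CREATE TABLE statement directly. A sketch of the usual knobs (property-file syntax; the log4j category is shown for illustration):

        # hibernate.cfg / persistence.xml properties
        hibernate.show_sql=true
        hibernate.format_sql=true
        hibernate.hbm2ddl.auto=create

        # log4j: schema-export DDL and its errors
        log4j.logger.org.hibernate.tool.hbm2ddl=DEBUG

    One thing the DDL output may also reveal here: @IndexColumn defines the position column of an indexed collection, so it belongs on the collection side (e.g. a @OneToMany List), and on a @ManyToOne it is likely being ignored, leaving a default column name behind.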


  • Java "Pool" of longs or Oracle sequence with reusable values

    - by Anthony Accioly
    Several months ago I implemented a solution to choose unique values from a range between 1 and 65535 (16 bits). This range is used to generate unique route-target suffixes, which for this customer's massive network (it's a huge ISP) are a very disputed resource, so any freed index needs to become immediately available to the end user. To tackle this requirement I used a BitSet: allocate a suffix with set, deallocate with clear, and the method nextClearBit() finds the next available index. I handle synchronization and concurrency issues manually. This works pretty well for a small range: the entire index is small (around 10k), it is blazing fast, and it can easily be serialized into a BLOB field. The problem is that some new devices can handle route targets of 32 bits (range 1 to 4294967296), which can't be managed with a BitSet (it would, by itself, consume around 600 MB, plus be limited to the int range). Even with this massive range available, the client still wants freed route targets to become available again, mainly because the lowest ones (up to 65535), which are compatible with old routers, are the most heavily disputed. Before I tell the customer that this is impossible and that he will have to make do with my reusable index for the lower RTs (up to 65550) and a database sequence for the other ones (which means that when a user frees one of those route targets, it will not become available again), would anyone shed some light? Maybe some kind soul has already implemented a high-performance number pool for Java (6, if it matters), or I am missing a killer feature of the Oracle database (11gR2, if it matters)... Wishful thinking. Thank you very much in advance.
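
    A freed-values table in Oracle, combined with a sequence for never-used values, behaves like the requested pool: allocation pops the lowest freed suffix if one exists, otherwise draws from the sequence, and deallocation is just an insert. A hedged JDBC sketch in Java 6 style (the schema is assumed to be FREED_RT(ID NUMBER PRIMARY KEY) plus sequence RT_SEQ; concurrency hardening such as SELECT ... FOR UPDATE is left out):

        long allocate(Connection con) throws SQLException {
            Statement st = con.createStatement();
            try {
                ResultSet rs = st.executeQuery("SELECT MIN(id) FROM freed_rt");
                rs.next();                   // an aggregate always returns one row
                long id = rs.getLong(1);
                if (!rs.wasNull()) {
                    st.executeUpdate("DELETE FROM freed_rt WHERE id = " + id);
                    return id;               // reuse the lowest freed suffix
                }
                rs = st.executeQuery("SELECT rt_seq.NEXTVAL FROM dual");
                rs.next();
                return rs.getLong(1);        // otherwise mint a new one
            } finally {
                st.close();
            }
        }

        // Deallocation: INSERT INTO freed_rt (id) VALUES (?)

    An index on freed_rt(id), which the primary key provides, keeps the MIN() lookup cheap even with millions of freed values.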

