Search Results

Search found 59118 results on 2365 pages for 'data persistence'.


  • Python + MySQLdb executemany

    - by lhahne
    I'm using Python and its MySQLdb module to import some measurement data into a MySQL database. The amount of data is quite high (currently about ~250 MB of CSV files and plenty more to come). Currently I use cursor.execute(...) to import some metadata. This isn't a problem, as there are only a few entries for these. The problem is that when I try to use cursor.executemany() to import larger quantities of the actual measurement data, MySQLdb raises:

        TypeError: not all arguments converted during string formatting

    My current code is:

        def __insert_values(self, values):
            cursor = self.connection.cursor()
            cursor.executemany("""
                insert into values (ensg, value, sampleid)
                values (%s, %s, %s)""", values)
            cursor.close()

    where values is a list of tuples containing three strings each. Any ideas what could be wrong with this?

    Edit: The values are generated by

        yield (prefix + row['id'], row['value'], sample_id)

    and then read into a list one thousand at a time, where row is an iterator coming from csv.DictReader.
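
    For reference, a minimal, hedged sketch of how cursor.executemany() is normally fed (the table, column and file names below are placeholders, not the asker's schema). One detail worth checking: VALUES is a reserved word in MySQL, so a table literally named values needs back-quoting.

        import csv
        import MySQLdb

        connection = MySQLdb.connect(host="localhost", user="user",
                                     passwd="secret", db="measurements")

        def insert_rows(rows):
            # executemany expects a sequence of tuples, one tuple per placeholder set
            cursor = connection.cursor()
            cursor.executemany(
                "insert into `values` (ensg, value, sampleid) values (%s, %s, %s)",
                rows)
            cursor.close()

        with open("measurements.csv") as f:
            batch = []
            for row in csv.DictReader(f):
                batch.append((row["id"], row["value"], "sample-1"))
                if len(batch) == 1000:      # flush in batches of one thousand
                    insert_rows(batch)
                    batch = []
            if batch:
                insert_rows(batch)
        connection.commit()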

    Read the article

  • Large file upload into WSS v3

    - by Rubens Farias
    I'd built a WSSv3 application which uploads files in small chunks; as each piece of data arrives, I temporarily keep it in a SQL 2005 image data type field for performance reasons**. The problem comes when the upload ends: I need to move the data from my SQL Server into a SharePoint document library through the WSSv3 object model. Right now I can think of two approaches:

        SPFileCollection.Add(string, (byte[])reader[0]); // OutOfMemoryException

    and

        SPFile file = folder.Files.Add("filename", new byte[]{ });
        using (Stream stream = file.OpenBinaryStream())
        {
            // ... init vars and stuff ...
            while ((bytes = reader.GetBytes(0, offset, buffer, 0, BUFFER_SIZE)) > 0)
            {
                stream.Write(buffer, 0, (int)bytes); // Timeout issues
            }
            file.SaveBinary(stream);
        }

    Is there any other way to complete this task successfully?

    ** Performance reasons: if you try to write every chunk directly to SharePoint, you'll notice a performance degradation as the file grows (100 MB).
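
    One possible middle ground (a hedged, untested sketch; the command text, column layout, document name and surrounding variables are assumptions) is to stage the blob into a temporary file using a SequentialAccess reader, then hand the resulting FileStream to the SPFileCollection.Add(string, Stream) overload, so neither the full byte[] nor the open SQL reader has to live for the whole SharePoint write:

        const int BUFFER_SIZE = 64 * 1024;
        string tempPath = Path.GetTempFileName();

        // 1) Stream the image column out of SQL Server into a temp file.
        using (SqlCommand cmd = new SqlCommand(
            "SELECT Data FROM UploadChunks WHERE Id = @id", connection))
        {
            cmd.Parameters.AddWithValue("@id", uploadId);
            using (SqlDataReader reader = cmd.ExecuteReader(CommandBehavior.SequentialAccess))
            using (FileStream temp = File.Create(tempPath))
            {
                reader.Read();
                byte[] buffer = new byte[BUFFER_SIZE];
                long offset = 0, bytes;
                while ((bytes = reader.GetBytes(0, offset, buffer, 0, BUFFER_SIZE)) > 0)
                {
                    temp.Write(buffer, 0, (int)bytes);
                    offset += bytes;
                }
            }
        }

        // 2) Hand the staged file to SharePoint as a Stream.
        using (FileStream staged = File.OpenRead(tempPath))
        {
            SPFile file = folder.Files.Add("filename", staged);
        }
        File.Delete(tempPath);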

    Read the article

  • How do I dynamically load raw assemblies that contain unmanaged code? (bypassing 'Unverifiable code failed policy check')

    - by Thiado de Arruda
    I'm going to give an example using System.Data.SQLite.DLL, which is a mixed assembly containing unmanaged code. If I execute this:

        var assembly = Assembly.LoadFrom("System.Data.SQLite.DLL");

    no exceptions are thrown, but if I do this:

        var rawAssembly = File.ReadAllBytes("System.Data.SQLite.DLL");
        var assembly = Assembly.Load(rawAssembly);

    the CLR throws a FileLoadException with "Unverifiable code failed policy check. (Exception from HRESULT: 0x80131402)". Let's say I'm trying to load this assembly in a child AppDomain: how can I customize the AppDomain's security to let me pass the policy check?
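
    As far as I know, loading a mixed-mode (C++/CLI) assembly from a raw byte array via Assembly.Load(byte[]) is not supported by the CLR; only file-based loads go through the path that can map the native portion of the image. A hedged workaround sketch (the temp path is a placeholder) is to spill the bytes to disk inside the child AppDomain and load from there:

        // Executed inside the child AppDomain: write the raw image to disk,
        // then use the file-based load context instead of Assembly.Load(byte[]).
        string tempFile = Path.Combine(Path.GetTempPath(), "System.Data.SQLite.DLL");
        File.WriteAllBytes(tempFile, rawAssembly);
        var assembly = Assembly.LoadFrom(tempFile);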

    Read the article

  • SQL Server 2008 pivot without aggregate

    - by Bryan Lewis
    I have a table of test score data that I need to pivot, and I am stuck on how to do it. I have the data as this:

        grade  listening  speaking  reading  writing
        0      0.0        0.0       0.0      0.0
        1      399.4      423.8     0.0      0.0
        2      461.6      508.4     424.2    431.5
        3      501.0      525.9     492.8    491.3
        4      521.9      517.4     488.7    486.7
        5      555.1      581.1     547.2    538.2
        6      562.7      545.5     498.2    530.2
        7      560.5      525.8     545.3    562.0
        8      580.9      548.7     551.4    560.3
        9      602.4      550.2     586.8    564.1
        10     623.4      581.1     589.9    568.5
        11     633.3      578.3     598.1    568.2
        12     626.0      588.8     600.5    564.8

    But I need it like this:

                   gr0  gr1    gr2    gr3    gr4    gr5    gr6    gr7   ...
        listening  0.0  399.4  461.6  501.0  521.9  555.1  562.7  560.5  580.9 ...
        speaking   0.0  423.8  ...
        reading    0.0  0.0    424.2  ...
        writing    0.0  0.0    431.5  ...

    I don't need to aggregate anything, just pivot the data.
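
    SQL Server's PIVOT does require an aggregate, but since each (subject, grade) pair has exactly one value, MAX() simply returns that value, so nothing is really aggregated. A hedged sketch (table and column names assumed) that UNPIVOTs the subject columns into rows and then PIVOTs on grade:

        SELECT subject,
               [0]  AS gr0,  [1]  AS gr1,  [2]  AS gr2,  [3]  AS gr3,
               [4]  AS gr4,  [5]  AS gr5,  [6]  AS gr6,  [7]  AS gr7,
               [8]  AS gr8,  [9]  AS gr9,  [10] AS gr10, [11] AS gr11, [12] AS gr12
        FROM (
            SELECT grade, subject, score
            FROM TestScores
            UNPIVOT (score FOR subject IN (listening, speaking, reading, writing)) AS u
        ) AS src
        PIVOT (
            MAX(score) FOR grade IN ([0],[1],[2],[3],[4],[5],[6],[7],[8],[9],[10],[11],[12])
        ) AS p;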

    Read the article

  • Haskell's type system treats a numerical value as function?

    - by Long
    After playing around with Haskell a bit I stumbled over this function:

        Prelude Data.Maclaurin> :t ((+) . ($) . (+))
        ((+) . ($) . (+)) :: (Num a) => a -> (a -> a) -> a -> a

    (Data.Maclaurin is exported by the vector-space package.) So it takes a Num, a function, another Num and ultimately returns a Num. What magic makes the following work?

        Prelude Data.Maclaurin> ((+) . ($) . (+)) 1 2 3
        6

    2 is obviously not a function (a -> a), or did I miss something?
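
    The trick is presumably a Num instance for functions that importing Data.Maclaurin brings into scope (the pointwise instance in the NumInstances-style code that vector-space builds on): numeric literals at type a -> a become constant functions and (+) is lifted pointwise. Under that assumption, the reduction looks like this:

        -- A sketch of the evaluation, assuming
        --   fromInteger n = const n   and   (f + g) x = f x + g x
        -- for the function instance of Num:
        --
        --   ((+) . ($) . (+)) 1 2 3
        -- = (+) (($) ((+) 1)) 2 3     -- (f . g . h) x = f (g (h x))
        -- = (+) (1 +) 2 3             -- ($) f = f
        -- = ((1 +) + 2) 3             -- 2 here is a function: const 2
        -- = (1 + 3) + const 2 3       -- pointwise (+)
        -- = 4 + 2
        -- = 6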

    Read the article

  • Problem reading hexadecimal buffer from C socket

    - by Olaseni
    I'm using the SDL_net sockets API to create a server and client. I can easily read a string buffer, but when I try to send binary (hexadecimal) data, recv gets the length, yet I cannot seem to be able to read the buffer contents.

        IPaddress ip;
        TCPsocket server, client;
        int bufSize = 1024;
        char message[bufSize];
        int len;

        server = SDLNet_TCP_Open(&ip);
        client = SDLNet_TCP_Accept(server);
        len = SDLNet_TCP_Recv(client, message, bufSize);

    Here's a snippet: the buffer length len is set (i.e. the message length), but I can't get at the data contents in the message buffer. Some sample bind_transmitter PDU data was sent by a random client to the server at that port; I can't read the (SMPP) PDU.
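
    One thing to keep in mind (a hedged sketch, not specific to SDL_net): a binary PDU is not a NUL-terminated string, so printing it with printf("%s", message) stops at the first zero byte and often shows nothing useful. Walking the buffer for len bytes and dumping each byte as hex makes the contents visible:

        #include <stdio.h>

        /* Dump `len` bytes of a binary buffer as hex, 16 bytes per line. */
        static void dump_hex(const unsigned char *buf, int len)
        {
            for (int i = 0; i < len; i++) {
                printf("%02x ", buf[i]);
                if ((i + 1) % 16 == 0)
                    printf("\n");
            }
            printf("\n");
        }

        /* usage, right after the SDLNet_TCP_Recv call:
           dump_hex((const unsigned char *)message, len); */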

    Read the article

  • jQuery Star Rating plugin - select in callback causes infinite loop

    - by Ian
    Using the jQuery Star Rating plugin, everything works well until I select a star rating from the rating's callback handler. Simple example:

        $('.rating').rating({
            ...
            callback: function(value) {
                $.ajax({
                    type: "POST",
                    url: ...,
                    data: { rating: value },
                    success: function(data) {
                        $('.rating').rating('select', 1);
                    }
                });
            }
        });

    I'm guessing this infinite loop occurs because the callback is fired after a manual 'select' as well. Once a user submits their rating I'd like to 'select' the average rating across all users (this value is in the data returned to the success handler). How can I do this without triggering an infinite loop?
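
    A hedged sketch of one common way around this, using a guard flag so the callback ignores a programmatic 'select' (the URL and the data.average field of the response are assumptions):

        var updatingFromServer = false;

        $('.rating').rating({
            callback: function(value) {
                if (updatingFromServer) return;   // ignore our own programmatic select
                $.ajax({
                    type: "POST",
                    url: "/ratings",              // placeholder endpoint
                    data: { rating: value },
                    success: function(data) {
                        updatingFromServer = true;
                        $('.rating').rating('select', data.average);
                        updatingFromServer = false;
                    }
                });
            }
        });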

    Read the article

  • Best practices for fixed-width processing in .NET

    - by jmgant
    I'm working on a .NET web service that will be processing a text file with a relatively long, multilevel record format. Each record in the file represents a different entity; the record contains multiple sub-types. (The same record format is currently being processed by a COBOL job, if that gives you a better picture of what we're looking at.) I've created a class structure (a DATA DIVISION, if you will) to hold the input data. My question is, what best practices have you found for processing large, complex fixed-width files in .NET? My general approach will be to read the entire line into a string and then parse the data from the string into the classes I've created. But I'm not sure whether I'll get better results working with the characters in the string as an array, or with the string itself. I guess that's the specific question, string vs. char[], but I would appreciate any other pointers anyone has. Thanks.
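
    For what it's worth, a hedged sketch of the plain-string approach (the record layout and field names are invented for illustration): fixed offsets pulled out with Substring usually read more clearly than indexing a char[], and both walk the same characters, so the difference is mostly readability rather than speed.

        // Parse one fixed-width line into a typed record using Substring offsets.
        public sealed class EntityRecord
        {
            public string Id;
            public string Name;
            public decimal Amount;
        }

        public static EntityRecord Parse(string line)
        {
            return new EntityRecord
            {
                Id     = line.Substring(0, 10).Trim(),
                Name   = line.Substring(10, 30).Trim(),
                Amount = decimal.Parse(line.Substring(40, 12).Trim()),
            };
        }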

    Read the article

  • How to display the image in the web view using html code?

    - by Madan Mohan
    Hi guys, I am getting data from a parser, and in that data I get a set of URLs. Using these URLs, can I build an image URL by appending any of the data values returned by the parser? http://musicbrainz.org/ws/1/artist/f27ec8db-af05-4f36-916e-3d57f91ecf5e?type=xml&inc=url-rels+artist-rels Using this URL I get data and a set of URLs, but they do not provide an image URL or thumbnail. So, is it possible to get or form an image URL from the parser response (http://musicbrainz.org/ws/1/artist/f27ec8db-af05-4f36-916e-3d57f91ecf5e?type=xml&inc=url-rels+artist-rels) and display it in the web view? Please help me with this problem. Thank you, Madan Mohan.

    Read the article

  • Hardware Lossless Compression for Hard Drives?

    - by GeoffreyF67
    I happened across this article about hardware-based hard drive encryption and realized that not only would this give a great way to protect your data, it would also speed up the applications that we use to encrypt that data. This led me to wonder... Would it be possible to do the same thing for compression, so that all of the data is compressed or uncompressed appropriately as it is read from and written to the drive? I haven't done any firmware programming in quite some time, so I'm not even sure this is technically possible. If it were, however, it could probably give quite a bit more storage space to folks. What are the pros and cons of programming such an approach to be used in the firmware? G-Man

    Read the article

  • PHP SimpleXML - Remove xpath node

    - by Peter John
    Hi, I'm a little confused as to how I can delete the parent node of something I can find via an XPath search:

        $xml = simplexml_load_file($filename);
        $data = $xml->xpath('//items/info[item_id="' . $item_id . '"]');
        $parent = $data[0]->xpath("parent::*");
        unset($parent);

    So, it finds the item id, no problems there, but the unset isn't getting rid of this <items> node. All I want to do is remove the <items>...</items> for this product. Obviously, there are loads of <items> nodes in the XML file, so I can't just do unset($xml->data->items) as that would delete everything. Any ideas much appreciated :-)
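
    Worth noting: unset($parent) only drops the PHP variable; it never touches the underlying XML tree. A hedged sketch of removing the node by bridging the SimpleXML element over to DOM, which does modify the document:

        $xml  = simplexml_load_file($filename);
        $data = $xml->xpath('//items/info[item_id="' . $item_id . '"]');
        if (!empty($data)) {
            $parent = $data[0]->xpath("parent::*");    // the enclosing <items> node
            $node   = dom_import_simplexml($parent[0]);
            $node->parentNode->removeChild($node);     // removes <items>...</items>
        }
        // $xml->asXML($filename);  // write the change back out if needed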

    Read the article

  • Free Large datasets to experiment with Hadoop

    - by Sundar
    Do you know of any large datasets to experiment with Hadoop that are free/low cost? Any related pointers/links are appreciated. Preference: at least one GB of data; production log data from a webserver. A few that I have found so far:

        http://dumps.wikimedia.org/enwiki/20100130/
        http://wiki.freebase.com/wiki/Data_dumps
        http://aws.amazon.com/publicdatasets/

    Also, can we run our own crawler to gather data from sites, e.g. Wikipedia? Any pointers on how to do this are appreciated as well.

    Read the article

  • "detached entity passed to persist error" with JPA/EJB code

    - by zengr
    I am trying to run this basic JPA/EJB code:

        public static void main(String[] args) {
            UserBean user = new UserBean();
            user.setId(1);
            user.setUserName("name1");
            user.setPassword("passwd1");
            em.persist(user);
        }

    I get this error:

        javax.ejb.EJBException: javax.persistence.PersistenceException:
        org.hibernate.PersistentObjectException: detached entity passed to persist: com.JPA.Database

    Any ideas? I searched on the internet and the reason I found was: "This was caused by how you created the objects, i.e. if you set the ID property explicitly. Removing the ID assignment fixed it." But I didn't get it: what will I have to modify to get the code working?
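
    A hedged sketch of what that advice usually means in practice (entity and field names assumed to match the question): when the identifier is marked as generated, the provider treats an instance that already has an ID as detached, so let it assign the ID itself and drop the setId(...) call.

        import javax.persistence.*;

        @Entity
        public class UserBean {
            @Id
            @GeneratedValue(strategy = GenerationType.AUTO)
            private Long id;

            private String userName;
            private String password;

            // getters and setters omitted
        }

        // persisting then becomes:
        UserBean user = new UserBean();
        user.setUserName("name1");
        user.setPassword("passwd1");
        em.persist(user);   // no setId(...) before persist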

    Read the article

  • Combine multiple JSON files into one; retrieve using jQuery/getJSON()

    - by frankadelic
    I have some jQuery code which retrieves content using getJSON(). There are n JSON files, which are retrieved from the server as needed:

        /json-content/data0.json
        /json-content/data1.json
        /json-content/data2.json
        etc...

    Instead, I want to store all the JSON in a single file to reduce the number of HTTP requests needed to retrieve the data. What is the best way to do this? If I concatenate the JSON files together, it no longer works with getJSON(). I would prefer not to transform the JSON data ahead of time, as it is coming from a third-party data source. Any suggestions?
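
    For what it's worth, plain concatenation stops working because the result is no longer a single valid JSON value. A hedged sketch (the combined file name and keys are assumptions): wrap the individual documents in one object keyed by their old names, which getJSON() can fetch in a single request and the code can index into.

        // /json-content/all.json
        // {
        //   "data0": { ...contents of data0.json... },
        //   "data1": { ...contents of data1.json... },
        //   "data2": { ...contents of data2.json... }
        // }

        $.getJSON('/json-content/all.json', function(all) {
            var data1 = all.data1;   // what used to come from data1.json
            // use data1 exactly as before
        });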

    Read the article

  • how to set charset for MySQL in RODBC

    - by lokheart
    I have data with Chinese characters as field names and values; I imported it from XLS into Access 2007 and exported it to ODBC. Then I used RODBC to read it into R. The field names are OK, but in the data all of the Chinese characters are shown as ?. I have read the RODBC manual, and it says:

        If it is possible to set the DBMS or ODBC driver to communicate in the character set
        of the R session then this should be done. For example, MySQL can set the
        communication character set via SQL, e.g. SET NAMES 'utf8'.

    I guess this is the problem, but how can I send this command to MySQL via RODBC? Thanks!
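
    A hedged sketch (DSN and table names assumed): an arbitrary SQL statement such as SET NAMES can be issued over the same channel with sqlQuery() before fetching, and RODBC can also be told which encoding the DBMS speaks via the DBMSencoding argument of odbcConnect():

        library(RODBC)

        ch <- odbcConnect("my_mysql_dsn", DBMSencoding = "utf8")
        sqlQuery(ch, "SET NAMES 'utf8'")    # the statement the manual suggests
        dat <- sqlFetch(ch, "my_table")
        odbcClose(ch)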

    Read the article

  • Creating a specialised view filtering form in Rails

    - by Schroedinger
    G'day guys, I have a set of data, and I generate multiple analyses of it (each analysis going into its own ActiveRecord item called a pricing_interval) using a helper function at the moment. Currently, to analyse the set of data you need a start time (using datetime_select), an integer (using text_field) and a name (using text_field). I would like, on submission of the form, to be redirected to the index page of my pricing_interval, as the values will be regenerated. Manually generating a range proves that my helper methods work. How would I build a form that, on submit, sends parameters to a function in the form (date, integer, name), so that it can immediately begin work while redirecting the user to server/pricing_intervals? Anything at all would help; I've spent hours over the past few days trying to get the Rails form syntax working properly, to no avail. A really straightforward guide to what I should implement to get this working would be amazingly appreciated. I've looked through the form guides; since I'm not creating an object, but merely parsing params, there's got to be an easy way to do this, right?

    Read the article

  • Asp.Net(C#) Jquery Ajax with WebMethod In Public Method Call

    - by oraclee
    Hi all. ASPX page:

        $(document).ready(function() {
            $("#btnn").click(function() {
                $.ajax({
                    type: "POST",
                    url: "TestPage.aspx/emp",
                    data: "{}",
                    contentType: "application/json; charset=utf-8",
                    dataType: "json",
                    success: function(msg) {
                    }
                });
            });
        });

    Code-behind:

        public void grdload()
        {
            GridView1.DataSource = GetEmployee("Select * from Employee");
            GridView1.DataBind();
        }

        [WebMethod]
        public static void emp()
        {
            TestPage re = new TestPage();
            re.grdload();
        }

    I can't get the GridView data to load. How can I make the GridView load its data? Thank you.

    Read the article

  • Datanucleus/JDO Level 2 Cache on Google App Engine

    - by Thilo
    Is it possible (and does it make sense) to use the JDO Level 2 cache for the Google App Engine Datastore? First of all, why is there no documentation about this on Google's pages? Are there some problems with it? Do we need to set up limits to protect our memcache quota? According to DataNucleus on Stack Overflow, you can set the following persistence properties:

        datanucleus.cache.level2.type=javax.cache
        datanucleus.cache.level2.cacheName={cache name}

    Is that all? Can we choose any cache name? Other sources on the Internet report using different settings. Also, it seems we need to download the DataNucleus Cache support plugin. Which version would be appropriate? And do we just place it in WEB-INF/lib, or does it need more setup to activate it?

    Read the article

  • Python and Postgresql

    - by Ian
    Hi all, if you wanted to manipulate the data in a table in a PostgreSQL database using some Python (maybe running a little analysis on the result set using SciPy) and then wanted to export that data back into another table in the same database, how would you go about the implementation? Is the only/best way to do this simply to run the query, have Python store the result in an array, manipulate the array in Python and then run another SQL statement to output the data to the database? I'm really just asking: is there a more efficient way to deal with the data? Thanks, Ian
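
    A hedged sketch of the straightforward round trip (connection string, table and column names assumed), using psycopg2 to read the rows, NumPy to manipulate them, and executemany() to write the results into a second table:

        import numpy as np
        import psycopg2

        conn = psycopg2.connect("dbname=mydb user=me")
        cur = conn.cursor()

        cur.execute("SELECT id, value FROM measurements")
        rows = cur.fetchall()

        ids = [r[0] for r in rows]
        values = np.array([r[1] for r in rows], dtype=float)
        smoothed = np.convolve(values, np.ones(3) / 3, mode="same")  # toy analysis

        cur.executemany(
            "INSERT INTO measurements_smoothed (id, value) VALUES (%s, %s)",
            list(zip(ids, smoothed.tolist())))
        conn.commit()
        cur.close()
        conn.close()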

    Read the article

  • ASP .NET MVC partial views and routing

    - by Johnny
    Hi, I have an MVC view that contains a number of partial views. These partial views are populated using partial requests, so the controller for the view itself doesn't pass any data to them. Is it possible to reload the data in one of those partial views when an action is triggered in another? For example, one partial view has a jqGrid, and I want to refresh the data in another partial view when a user selects a new row in this grid. Is there a code example for this scenario (in C#) that I can look at to see what I am doing wrong? I am using AJAX calls to trigger a new request, but none of the partial views are refreshed, so I am not sure if the issue is with the routing, the controller, or if this is even possible at all! Thanks!

    Read the article

  • Flex ANT tasks can't find my assets

    - by lach
    I'm attempting to compile my Flex project with an ANT build script. One of my MXML components references an external XML data file, like this:

        <mx:XML id="treeData" source="assets/data/help.xml" />

    When I build the project using Flex Builder, it compiles fine. However, when I try to compile it using ANT, I get the following error:

        Error: Problem finding external XML: assets/data/help.xml

    How come ANT isn't finding the XML file? Apparently it knows the source path, otherwise it would not have found the component to begin with. I added the source path to the target anyway, but it doesn't seem to have made any difference:

        <source-path path-element="${SRC}" />

    Any ideas?

    Read the article

  • ASP.NET MVC Model Binding

    - by Noel
    If I have a controller action that may receive both HTTP GET and HTTP POST requests from a number of different sources, with each source sending different data, e.g.:

        Source1 performs a form POST with two form items, Item1 and Item2
        Source2 performs a GET where the data is contained in the query string (?ItemX=2&ItemY=3)

    is it possible to have a controller action that will cater for all these cases and perform binding automatically? e.g.:

        public ActionResult Test(Dictionary data)
        {
            // Do work ...
            return View();
        }

    Is this possible with a custom binder or some other way? I don't want to work directly with HttpContext.Request if possible.

    Read the article

  • Upgraded to EF6 blew up Universal provider session state for Azure

    - by Ryan
    I have an ASP.NET MVC 4 application that uses the Universal Providers for session state:

        <sessionState mode="Custom" sqlConnectionString="DefaultConnection" customProvider="DefaultSessionProvider">
          <providers>
            <add name="DefaultSessionProvider"
                 type="System.Web.Providers.DefaultSessionStateProvider, System.Web.Providers, Version=1.0.0.0, Culture=neutral, PublicKeyToken=31bf3856ad364e35"
                 connectionStringName="DefaultConnection" />
          </providers>
        </sessionState>

    When I upgraded to Entity Framework 6, I started getting this error:

        Method not found: 'System.Data.Objects.ObjectContext System.Data.Entity.Infrastructure.IObjectContextAdapter.get_ObjectContext()'.

    I tried adding the reference to System.Data.Entity.dll back in, but that didn't work, and I know you're not supposed to add that with the new Entity Framework.

    Read the article

  • converting JSON to an object / dictionary / dynamic

    - by benpage
    I'm currently using jqGrid to display data. Part of jqGrid's interface gives you search options, posting back the search details in a JSON string, for example:

        {"groupOp":"AND","rules":[{"field":"PersonID","op":"eq","data":"123"},{"field":"LastName","op":"eq","data":"Smith"}]}

    (meaning I'm searching for PersonID = 123 and LastName = 'Smith'). So what I'm hoping to do is somehow convert that back into something I can use server-side. Does anyone have a solution that converts it back into an object of some kind? My current solution would be to convert it into XML, parse it with LINQ and create instances of my own 'search' class with a 'rules' collection.
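
    A hedged sketch of the direct route (class names are mine; the property names mirror the JSON keys): declare two small classes in the shape of the filter and let a JSON serializer hydrate them, e.g. JavaScriptSerializer from System.Web.Script.Serialization.

        using System.Collections.Generic;
        using System.Web.Script.Serialization;

        public class SearchRule
        {
            public string field { get; set; }
            public string op { get; set; }
            public string data { get; set; }
        }

        public class SearchFilter
        {
            public string groupOp { get; set; }
            public List<SearchRule> rules { get; set; }
        }

        // usage:
        var filter = new JavaScriptSerializer().Deserialize<SearchFilter>(filtersJson);
        // filter.rules[0].field == "PersonID", filter.rules[0].data == "123"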

    Read the article

  • ORA-22835 using JPA (Buffer too small)

    - by Kenneth
    I am trying to persist an entity with a @Lob-annotated String field. The content of that field is bigger than the 40k buffer size limit. The first problem I had was related to the setString method used internally by the JPA implementation (Hibernate in my case) and the Oracle JDBC driver. That problem was solved by adding

        <property name="hibernate.connection.SetBigStringTryClob" value="true"/>

    to my persistence.xml file. Then the error changed to an ORA-22835 error (the buffer is too small). Is there any way that JPA can solve this problem without going to a low-level implementation? Any suggestions?

    Read the article
