Search Results

Search found 59643 results on 2386 pages for 'data migration'.


  • Dojo JSON callback always returns an error

    - by Sunny
    Hi guys, I am using Dojo to make an AJAX call to a Java class, and I'm trying to show the program's output in an alert box on the client:

        var showResult = function(result) {
            console.log("Showing Result()");
            var store = new dojo.data.ItemFileReadStore({ data: result });
            console.dir(store);
            store.fetch({
                onItem: function(data) { alert("Hie"); },
                onError: function(error, request) { alert("ERROR"); }
            });
        };

    showResult is the callback for the xhr request. I can see console.dir(store) printed in Firebug, but fetch always ends up in the onError block. My store data is of the form {info: "Test Message"}, and I need to retrieve "Test Message" and display it in an alert box. Any help?
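
    A likely cause: ItemFileReadStore expects its data in the { items: [ ... ] } shape, so handing it a bare { info: "Test Message" } object leaves it nothing to iterate, and fetch() reports an error. A minimal sketch of the wrapped form (the "info" attribute name is taken from the question):

        var showResult = function(result) {
            var store = new dojo.data.ItemFileReadStore({
                data: { items: [ result ] }   // wrap the object in an items array
            });
            store.fetch({
                onItem: function(item) {
                    alert(store.getValue(item, "info"));   // "Test Message"
                },
                onError: function(error) { console.error(error); }
            });
        };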

  • Is there any significant benefit to reading a string directly from a control instead of moving it into a variable first?

    - by Kevin
        sqlInsertFrame.Parameters.AddWithValue("@UserName", txtUserName.Text);

    Given the code above: if I don't have any need to move the textbox data into a string variable, is it best to read the data directly from the control? In terms of performance, it would seem smartest not to create unnecessary variables that use up memory. Or is this a situation where that is technically true but yields no real-world difference, given the size of the data in question? Forgive me, I know this is a very basic question.

  • Best way to reduce consecutive NAs to single NA

    - by digEmAll
    I need to reduce runs of consecutive NAs in a vector to a single NA, without touching the other values. So, for example, given a vector like this:

        NA NA 8 7 NA NA NA NA NA 3 3 NA -1 4

    what I need to get is the following result:

        NA 8 7 NA 3 3 NA -1 4

    Currently, I'm using the following function:

        reduceConsecutiveNA2One <- function(vect) {
          enc <- rle(is.na(vect))
          # helper func
          tmpFun <- function(i) {
            if (enc$values[i]) {
              data.frame(L = c(enc$lengths[i] - 1, 1), V = c(TRUE, FALSE))
            } else {
              data.frame(L = enc$lengths[i], V = enc$values[i])
            }
          }
          Df <- do.call(rbind.data.frame, lapply(1:length(enc$lengths), FUN = tmpFun))
          return(vect[rep.int(!Df$V, Df$L)])
        }

    It seems to work fine, but there's probably a simpler/faster way to accomplish this task. Any suggestions? Thanks in advance.
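
    A shorter vectorised sketch: drop every NA whose predecessor is also NA, which keeps exactly one NA per run.

        x <- c(NA, NA, 8, 7, NA, NA, NA, NA, NA, 3, 3, NA, -1, 4)
        drop <- is.na(x) & c(FALSE, is.na(head(x, -1)))  # NA preceded by NA
        x[!drop]
        # NA  8  7 NA  3  3 NA -1  4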

  • jQuery to PHP (wait for the PHP script to finish)

    - by user354051
    Hi, I am calling a PHP script from JavaScript (jQuery), and the PHP script sends a result back. The problem is that the script takes some time (milliseconds) to run, and the JavaScript does not wait for it to finish, so my variables are not initialized with the correct values. The code is simple:

        $.get("php/validation.php", { 'email': email }, function(data) {
            // valid_email now contains true/false
            alert(data);
            if (data == "true") { var valid_email = true; }
        });

    The alert prints true, but the value of valid_email is seen as false in the code below it. Is there a better way to call PHP scripts and wait until they are finished? Prashant
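
    The request is asynchronous, so code written after $.get() runs before the response arrives (and "var valid_email" inside the callback is local to that function anyway). A common pattern is to move the dependent logic into the callback itself; a sketch with a hypothetical checkEmail() helper:

        function checkEmail(email, done) {
            $.get("php/validation.php", { email: email }, function (data) {
                done(data == "true");   // hand the result to the caller
            });
        }

        checkEmail(email, function (validEmail) {
            if (validEmail) {
                // continue the flow that needs a valid email here
            }
        });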

  • Documents stored in SQL table

    - by vradenburg
    I have a legacy FoxPro application which stores documents in an SQL table in a field with the image datatype. FoxPro accesses the image datatype as a "General" field which can be used to store various files. I have a FoxPro control which interfaces with the General field for modifying/viewing the document that was stored. I need to migrate this control to .NET and make it easy for users to view/modify documents of various types. Does anyone have any suggestions on some ways to go about this or know of things that I'll need to consider for the migration to .NET? I'm pretty sure that I'll need to migrate the field to either a varbinary(max) or FileStream data type.
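
    A minimal sketch of the .NET side, assuming the documents end up in a varbinary(max) column (the table and column names here are hypothetical): read the bytes out, write them to a temp file, and hand that file to the user's viewer/editor.

        using System.Data.SqlClient;
        using System.IO;

        static void SaveDocumentToFile(SqlConnection conn, int docId, string tempPath)
        {
            using (var cmd = new SqlCommand(
                "SELECT DocBody FROM Documents WHERE Id = @id", conn))
            {
                cmd.Parameters.AddWithValue("@id", docId);
                var bytes = (byte[])cmd.ExecuteScalar();   // whole document in memory
                File.WriteAllBytes(tempPath, bytes);       // then launch the associated app
            }
        }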

  • What next-generation low-level language is the best bet to migrate the code base?

    - by e-satis
    Let's say you have a company running a lot of C/C++, and you want to start planning a migration to new technologies so you don't end up like the COBOL companies of 15 years ago. For now, C/C++ runs more than fine, and there are plenty of developers on the market for it. But you want to start thinking about it now, because given the huge running code base and the sensitivity of the data, you feel it could take 5-10 years to move to the next step without overloading the budget and the dev teams. You have heard about D, which is starting to be quite mature, and Go, which promises to be quite popular. What would be your choice and why?

  • Connect Perl with MSSQL

    - by Bharanikumar
    I have a user id, password, database name, and data source details, and I want to connect Perl to an MS SQL Server. I used the following snippet, but I'm getting an error:

        #!/usr/bin/perl -w
        use strict;
        use DBI;

        my $data_source = q/dbi:ODBC:192.168.3.137/;
        my $user        = q/bharani/;
        my $password    = q/123456/;

        # Connect to the data source and get a handle for that connection.
        my $dbh = DBI->connect($data_source, $user, $password)
            or die "Can't connect to $data_source: $DBI::errstr";

    My error:

        DBI connect('192.168.3.137','bharani',...) failed: [Microsoft][ODBC Driver Manager] Data source name not found and no default driver specified (SQL-IM002) at mysqlconnect.pl line 14
        Can't connect to dbi:ODBC:192.168.3.137: [Microsoft][ODBC Driver Manager] Data source name not found and no default driver specified (SQL-IM002) at mysqlconnect.pl line 14.

    For information: the SQL Server is on another machine, and I'm just trying to connect with the details above. Please tell me, should I create a DSN on my system, or is there anything I missed in my program?
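
    The SQL-IM002 error means the ODBC layer treated "192.168.3.137" as a DSN name and found no DSN by that name. Either create a DSN called that, or use a DSN-less connection string that names the driver explicitly. A sketch, assuming a Windows client with the standard "SQL Server" ODBC driver installed (the database name here is hypothetical):

        my $dbh = DBI->connect(
            'dbi:ODBC:Driver={SQL Server};Server=192.168.3.137;Database=mydb',
            'bharani', '123456',
            { RaiseError => 1 },
        ) or die "Can't connect: $DBI::errstr";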

  • Divide numpy array

    - by BandGap
    Hi all. I have some data represented in a 1300x1341 matrix. I would like to split this matrix into several pieces (e.g. 9) so that I can loop over and process them. The data needs to stay ordered, in the sense that x[0,1] stays below (or above, if you like) x[0,0] and beside x[1,1]. Just as if you had imaged the data, you could draw 2 vertical and 2 horizontal lines over the image to mark the 9 parts. If I use numpy's reshape (e.g. matrix.reshape(9,260,745), or any other combination of 9, 260, 745), it doesn't yield the required structure, since the above-mentioned ordering is lost. Did I misunderstand the reshape method, or can it be done this way? What other pythonic/numpy way is there to do this?
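
    reshape only relabels the existing flat buffer, so it cannot cut contiguous 2-D tiles. A sketch using np.array_split, which preserves the row/column ordering and tolerates axes that don't divide evenly (1300 rows split 3 ways gives 434/433/433):

        import numpy as np

        x = np.arange(1300 * 1341).reshape(1300, 1341)       # stand-in for the real data

        blocks = [np.array_split(band, 3, axis=1)            # 3 column chunks per band
                  for band in np.array_split(x, 3, axis=0)]  # 3 row bands

        for i, band in enumerate(blocks):
            for j, block in enumerate(band):
                print(i, j, block.shape)                     # process each tile here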

  • JsonStore.insert() causes exception in extjs

    - by kalan
    I have an EditorGridPanel with a toolbar button to add new records. Everything works fine except for one scenario. When I try to insert a record which already exists in the database, the server sends back:

        {"success":false,"message":"already exists","data":{}}

    but the grid creates a new row marked with a red triangle. If I then try to insert a new record (even one that doesn't exist in the database), everything works fine on the server side, but I get an 'uncaught exception' in Firebug. It says:

        uncaught exception: Ext.data.DataReader: #realize was called with invalid remote-data. Please see the docs for DataReader#realize and review your DataReader configuration.

    Why is that?
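
    A likely cause: on a successful create, the DataReader's realize step expects the response's data to echo back the saved row, including its generated id, so it can replace the phantom grid record; an empty data object cannot be realized. A sketch of the success response shape it is after (field names hypothetical):

        {"success": true, "message": "created", "data": {"id": 42, "name": "new record"}}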

  • BindingSource.Filter does not work with affected rows in DataSet

    - by Artru
    I created a DataGridView in a WinForms app. At initialization I create a DataSet, fill it with some data, and bind it to the DataGridView. I added a textBox as a filter:

        myBindingSource = new BindingSource();
        myBindingSource.DataSource = ds;
        myBindingSource.DataMember = dt.TableName;
        dataGridView1.DataSource = myBindingSource;

        string strCmd = "some filter query";
        myBindingSource.Filter = strCmd;

    It works fine for filtering. But if I add or delete any row in the DataSet, the new data appears in the grid view and the filter does not work on it. As I found on the forums, there is a table cache which .NET uses for the filter, and I want to refresh it. How can I do that?
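
    One thing worth trying, as a hedged sketch: clear and re-assign the filter after the rows change, which forces the underlying view to re-evaluate the new rows.

        void ReapplyFilter(BindingSource source, string filterExpr)
        {
            source.RemoveFilter();       // drop the current filter
            source.Filter = filterExpr;  // re-apply it over the changed data
        }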

  • Migrating from MSSQL to Firebird: pros and cons

    - by user193655
    I am considering the migration for these reasons:

        1) SQL Server installation is a nightmare, especially for single-user software. My software installs in 10 seconds, SQL Server in 1 hour. Firebird installation is much easier.
        2) SQL Server runs on Windows Server only.
        3) My customers all have the Express edition.
        4) I am not using any advanced features. I am starting to use FILESTREAM now, but the main reason for that is the Express edition's 4/10 GB database size limit.

    So these are all pros of moving to Firebird. What are the cons? I could also plan to support both platforms, but I fear that would backfire.

  • How to make a BufferedReader in C

    - by peiska
    I am really new to programming in C. How can I do the same thing in C, perhaps more simply than I do it in Java? Each line of the input has two integers, X and Y, separated by a space:

        12 1
        12 3
        23 4
        9 3

    My Java version:

        InputStreamReader in = new InputStreamReader(System.in);
        BufferedReader buf = new BufferedReader(in);
        int n;
        int k;
        double sol;
        String line = "";
        line = buf.readLine();
        while (line != null && !line.equals("")) {
            String data[] = line.split(" ");
            n = Integer.parseInt(data[0]);
            k = Integer.parseInt(data[1]);
            calculus(n, k);
            line = buf.readLine();
        }
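
    A minimal C sketch: stdio already buffers stdin, so scanf can play the BufferedReader role and read pairs until end of input (calculus() stands in for the question's helper).

        #include <stdio.h>

        int main(void)
        {
            int n, k;
            while (scanf("%d %d", &n, &k) == 2) {
                /* calculus(n, k); */
                printf("read %d %d\n", n, k);
            }
            return 0;
        }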

  • C - Discard the edges of an arbitrary level of a multidimensional array

    - by Medivh
    I have some geographical data that I'm trying to parse into a usable format. The data is kept in NetCDF files and is read out as a multidimensional array. My problem comes from the fact that the source of the geographical data has a strip of longitude on each side of the grid that overlaps the other side. That is, I have a longitude point of -1 degree, and another of 361 degrees. Unfortunately, I've got time, latitude, and sometimes height as dimensions in this array as well, and I have no way of predicting in advance where each dimension will be in the list (or whether it's a three-dimensional or four-dimensional array). Further complicating the problem, the array can be of floats, doubles or integers, so I have to pass it around as a void pointer. Are there any NetCDF tools that I can use to pre-prepare the files? If not, how would you suggest I go about stripping the excess longitudes?
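
    On the tools side, the NCO suite can hyperslab by index before the data ever reaches the C code. A hedged one-liner, assuming the longitude dimension is named lon, has 362 points, and the wrap columns are first and last (ncks index bounds are 0-based and inclusive):

        ncks -d lon,1,360 in.nc out.nc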

  • How to set variable size for List control item in Flex?

    - by joejax
    The following code displays a list of comments using a List control. The item height is set to a fixed value (150), so it appears to work: if the content is too long, the scrollbar shows. However, what I really want is not to set the height, but to let it adjust to the content size. Is there any way to accomplish this?

        <mx:List id="commentList" width="100%"
                 dataProvider="{commentSet.commentArrayColl}"
                 rowCount="{commentSet.commentArrayColl.length}">
            <mx:itemRenderer>
                <mx:Component>
                    <mx:VBox width="100%" height="150">
                        <mx:Text text="{data.commentContent}" />
                        <mx:Text text="{data.username} ({data.modified})"/>
                    </mx:VBox>
                </mx:Component>
            </mx:itemRenderer>
        </mx:List>
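
    A hedged sketch of one likely route: mx:List has a variableRowHeight property, and turning it on while dropping the fixed VBox height lets each renderer size itself to its content.

        <mx:List id="commentList" width="100%" variableRowHeight="true"
                 dataProvider="{commentSet.commentArrayColl}"
                 rowCount="{commentSet.commentArrayColl.length}">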

  • MySQL: Efficient Blobbing?

    - by feklee
    I'm dealing with blobs of up to (I estimate) about 100 kilobytes in size. The data is already compressed. Storage engine: InnoDB on MySQL 5.1. Frontend: PHP (Symfony with Propel ORM). Some questions:

        1) I've read somewhere that it's not good to update blobs, because it leads to reallocation and fragmentation, and thus bad performance. Is that true? Any reference on this?
        2) Initially the blobs get constructed by appending data chunks, each up to 16 kilobytes in size. Is it more efficient to use a separate chunk table instead, for example with fields such as parent_id, position, chunk? Then, to get the entire blob, one would do something like:

            SELECT GROUP_CONCAT(chunk ORDER BY position)
            FROM chunks
            WHERE parent_id = 187

        The result would be used in a PHP script.
        3) Is there any difference between the types of blobs, aside from the size needed for metadata, which should be negligible?
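
    A sketch of the chunk-table layout described above (names taken from the question). One caveat worth knowing: GROUP_CONCAT truncates its result at group_concat_max_len, which defaults to only 1024 bytes, so that setting would need raising well past 100 KB for the reassembly query to work.

        CREATE TABLE chunks (
            parent_id INT NOT NULL,
            position  INT NOT NULL,
            chunk     BLOB NOT NULL,
            PRIMARY KEY (parent_id, position)
        ) ENGINE=InnoDB;

        SET SESSION group_concat_max_len = 1024 * 1024;  -- allow ~1 MB results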

  • How can I get around MySQL Errcode 13 with SELECT INTO OUTFILE?

    - by Ryan Olson
    I am trying to dump the contents of a table to a CSV file using a MySQL SELECT INTO OUTFILE statement. If I do:

        SELECT column1, column2
        INTO OUTFILE 'outfile.csv'
        FIELDS TERMINATED BY ','
        FROM table_name;

    outfile.csv is created on the server in the same directory this database's files are stored in. However, when I change my query to:

        SELECT column1, column2
        INTO OUTFILE '/data/outfile.csv'
        FIELDS TERMINATED BY ','
        FROM table_name;

    I get:

        ERROR 1 (HY000): Can't create/write to file '/data/outfile.csv' (Errcode: 13)

    Errcode 13 is a permissions error, but it occurs even if I change ownership of /data to mysql:mysql and give it 777 permissions. MySQL is running as user "mysql". Strangely, I can create the file in /tmp, just not in any other directory I've tried, even with permissions set such that user mysql should be able to write to the directory. This is MySQL 5.0.75 running on Ubuntu.
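
    On Ubuntu, a frequent culprit when the filesystem permissions check out is AppArmor: the mysqld profile only whitelists paths such as the datadir and /tmp, which matches the symptom above exactly. A hedged sketch of the change (the /data path is this example's):

        # add to /etc/apparmor.d/usr.sbin.mysqld
        /data/ r,
        /data/** rw,

        # then reload the profile
        sudo /etc/init.d/apparmor reload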

  • Deciphering Encoding: Packet Analysis Tools

    - by Zombies
    I am looking for better tools than Wireshark for this. The problem with Wireshark is that it does not format the data layer (which is the only part I am looking at) cleanly enough for me to compare the different packets and try to understand the third-party encoding (which is closed source). Specifically, what are some good tools for viewing the data, not the TCP/UDP header information? In particular, a tool that formats the data for comparison. To be very specific: I would like a program that compares multiple (not just 2) files in hex.
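
    Failing a ready-made tool, a small script can line the captures up byte by byte. A Python sketch that prints any number of files side by side in hex, 16 bytes per row, so byte-level differences align visually:

        import sys
        from itertools import zip_longest

        def hexrows(path, width=16):
            with open(path, "rb") as f:
                data = f.read()
            for off in range(0, len(data), width):
                chunk = data[off:off + width]
                yield " ".join("%02x" % b for b in chunk)

        # usage: python hexcmp.py dump1.bin dump2.bin dump3.bin
        rows = (hexrows(p) for p in sys.argv[1:])
        for cols in zip_longest(*rows, fillvalue=""):
            print(" | ".join(c.ljust(47) for c in cols))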

  • Awk appears to disconnect my DB2 session when piping

    - by greggannicott
    Hello. I'm attempting to run the following command in Korn shell (ksh):

        set -A INDEXES `db2 "describe indexes for table ${TABSCHEMA}.${TABNAME} show detail" | awk '{print $1"."$2}'`

    What I'm attempting to achieve is to place a list of the indexes on a particular table into an array which I can later iterate through. The problem is, when I run the above command, the contents of the array start with the error message 'SQL1024N' (which is telling me that the database connection does not exist). However, if I remove the awk at the end of the statement, like so:

        set -A INDEXES `db2 "describe indexes for table ${TABSCHEMA}.${TABNAME} show detail"`

    it works just fine (well, to the extent that it's returning data; obviously without the awk I'm not capturing the correct fields). Does anyone know why the awk has this effect? I appreciate there is more than one way to get this data, but it baffles me why this is happening. Thanks in advance. Greg.
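
    A hedged workaround sketch: take awk out of the same pipeline as db2 by going through a temporary file, so the CLP command runs on its own and awk only sees plain text afterwards.

        tmp=/tmp/indexes.$$
        db2 "describe indexes for table ${TABSCHEMA}.${TABNAME} show detail" > "$tmp"
        set -A INDEXES $(awk '{print $1"."$2}' "$tmp")
        rm -f "$tmp"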

  • jQuery draggable with zoom problem

    - by Manuel
    I am working on a page in which all the contents are scaled using zoom. The problem is that when I drag something on the page, the dragged item gets a bad position that seems relative to the zoom amount. To solve this I tried to do some math on the position of the draggable component, but it seems that even though it is visually corrected, the "true" position is not recalculated. Here is some code to explain better:

        var zoom = Math.round((parseFloat($("body").css("zoom")) / 100) * 10) / 10;
        var x = $(this).data('draggable').position;
        $(this).data('draggable').position.left = Math.round(x.left / zoom);
        $(this).data('draggable').position.top = Math.round(x.top / zoom);

    Any help would be greatly appreciated.
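
    A hedged sketch of another angle: jQuery UI's draggable exposes the helper position through the drag callback's ui argument, so the zoom correction can be applied on every tick before the widget positions the element itself (the ".item" selector and percentage-style zoom value are assumptions).

        var zoom = (parseFloat($("body").css("zoom")) || 100) / 100;
        $(".item").draggable({
            drag: function (event, ui) {
                ui.position.left = Math.round(ui.position.left / zoom);
                ui.position.top  = Math.round(ui.position.top  / zoom);
            }
        });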

  • will_paginate without using ActiveRecord

    - by truthSeekr
    I apologize if this is a trivial question or my understanding of Rails is weak. I have 2 actions in my controller, index and refine_data. index fetches and displays all the data from a database table. refine_data weeds out unwanted data using a regex and returns a subset of the data. The controller looks like:

        def index
          Result.paginate :per_page => 5, :page => params[:page],
                          :order => 'created_at DESC'
        end

        def refine_data
          results = Result.all
          new_results = get_subset(results)
          redirect_to :action => 'index'
        end

    I would like to send the refine_data action to the same view (index) with new_results. As new_results does not come from the database table (or model), how do I go about constructing my paginate call?
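
    will_paginate can page a plain array through WillPaginate::Collection, so the refined set never needs to come from the model. A sketch (the per-page count of 5 is carried over from index):

        def refine_data
          new_results = get_subset(Result.all)
          @results = WillPaginate::Collection.create(params[:page] || 1, 5,
                                                     new_results.size) do |pager|
            pager.replace(new_results[pager.offset, pager.per_page].to_a)
          end
          render :action => 'index'
        end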

  • Using two XMLHttpRequest calls on a page

    - by blacktooth
    I have two divisions:

        <div id=statuslist></div><div id=customerlist></div>

    The function sendReq() creates an XMLHttpRequest and fetches the data into the division:

        sendReq('statuslist','./include/util.php?do=getstatuslist','NULL');
        sendReq('customerlist','emphome.php?do=getcustomerlist','NULL');

    I have a problem: the data fetched for 'customerlist' gets copied into 'statuslist'. If I change the order of the function calls:

        sendReq('customerlist','emphome.php?do=getcustomerlist','NULL');
        sendReq('statuslist','./include/util.php?do=getstatuslist','NULL');

    now the data for 'statuslist' ends up in 'customerlist'. What's the problem with the code?
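
    The usual cause of this symptom is a single global XMLHttpRequest object shared by both calls, so the second request clobbers the first. A sketch of sendReq with the request held in a local variable (the handler body is an assumption, since the original function isn't posted):

        function sendReq(targetId, url) {
            var req = new XMLHttpRequest();          // one request per call, not global
            req.onreadystatechange = function () {
                if (req.readyState === 4 && req.status === 200) {
                    document.getElementById(targetId).innerHTML = req.responseText;
                }
            };
            req.open("GET", url, true);
            req.send(null);
        }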

  • jQuery prepend auto-fetched PHP database results

    - by newinjs
    Hello. Currently I have a PHP file which fetches data from MySQL to display on a website. I'm sending an input value as a $_GET parameter to the PHP file to determine which data to show:

        mysql_query("SELECT * FROM messages WHERE msg_id>'$refID' ORDER BY msg_id DESC");
        //$refID is the input value

    Once it loads, I'm using this jQuery code to display it on the website:

        setInterval( function () {
            $.get('load.php?id='+refID, function(html) {
                $("ol#update").prepend(html);
                $("ol#update li:first").slideDown("slow");
            });
        }, 10000);

    My question is: how do I stop it from repeating the same message over and over? I want it to display only when there is new data.
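
    A hedged sketch of one fix: advance refID after every poll, so the next request only asks for rows newer than what is already on the page. This assumes each rendered <li> carries its message id, e.g. <li data-id="123">, which the original markup may not have yet.

        var refID = 0;
        setInterval(function () {
            $.get('load.php?id=' + refID, function (html) {
                if ($.trim(html) === '') return;      // nothing new this round
                $("ol#update").prepend(html);
                $("ol#update li:first").slideDown("slow");
                refID = $("ol#update li:first").data("id") || refID;
            });
        }, 10000);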

  • SQL returns non-array value in PHP

    - by DeadMG
        $request = 'SELECT * FROM flight WHERE Id = \''.$_SESSION['LFLightRadio'].'\'';
        $data = mysql_fetch_array(mysql_query($request, $SQL));
        echo '<table class="table">';
        foreach($data as $key => $value) {
            echo '<th class="head" align="center" height="19">'.$key.'</th>';
        }
        echo '<tr>';
        foreach($data as $key => $value) {
            echo '<td class="cell" align="center" height="19">'.$value.'</td>';
        }
        echo '</tr></table>';

    I know that the LFLightRadio value is set, and it is an Id value from a row previously returned from the flight table, so a record with this Id definitely exists. But this still gives me a non-array result, so when I try to use foreach on it, it errors out. Suggestions?
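
    mysql_query() returns false on failure, and mysql_fetch_array() returns false when there is no row; foreach chokes on either. A sketch that checks both steps (and escapes the session value, in case a quote in it is what breaks the query):

        $id = mysql_real_escape_string($_SESSION['LFLightRadio'], $SQL);
        $res = mysql_query("SELECT * FROM flight WHERE Id = '$id'", $SQL);
        if ($res === false) {
            die('Query failed: ' . mysql_error($SQL));
        }
        $data = mysql_fetch_array($res);
        if ($data === false) {
            die('No flight row with Id ' . htmlspecialchars($id));
        }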

  • Two-key encryption/decryption?

    - by Matt
    I'm looking to store some fairly sensitive data using PHP and MySQL, and I will be using some form of reversible encryption to do so, since I need to get the data back out in plain text for it to be of any use. I'll be deriving the encryption key from the user's username/password combination, but I'm stumped about what to do in the (inevitable) event of a forgotten password. I realise that the purpose of encryption is that it can only be undone using the correct key, but this must have been addressed before. I'm trying to get my head around whether or not public-key cryptography would apply to the problem, but all I can think of is that the private key will still need to be correct to decrypt the data. Any ideas?
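
    The usual answer is envelope encryption: encrypt the data once with a random symmetric key, then encrypt that key separately for each party who may need it, e.g. the user and a recovery key pair kept offline. PHP's OpenSSL extension does exactly this via openssl_seal(); a sketch (the key file names are hypothetical):

        $plaintext   = 'fairly sensitive data';
        $pubUser     = openssl_pkey_get_public(file_get_contents('user_pub.pem'));
        $pubRecovery = openssl_pkey_get_public(file_get_contents('recovery_pub.pem'));

        // seals $plaintext once with a random symmetric key; $envKeys receives
        // one encrypted copy of that key per recipient
        openssl_seal($plaintext, $sealed, $envKeys, array($pubUser, $pubRecovery));

        // later, either private key opens its own envelope, e.g. for recovery:
        $privRecovery = openssl_pkey_get_private(file_get_contents('recovery_priv.pem'));
        openssl_open($sealed, $opened, $envKeys[1], $privRecovery);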

  • MongoDB or CouchDB - fit for production?

    - by Alan
    I was wondering if anyone can tell me whether MongoDB or CouchDB is ready for a production environment. I'm now looking at these storage solutions (I'm favouring MongoDB at the moment), however these projects are quite young, so I foresee that I'm going to have to work quite hard to convince my manager that we should adopt this new technology. What I'd like to know is:

        1) Who is using MongoDB or CouchDB today in a production environment?
        2) How are you using MongoDB/CouchDB?
        3) What problems (if any) did you come across when you adopted this new storage mechanism (and how did you overcome them)?
        4) How did you deal with any migration issues that you had to deal with?
        5) Do you have any good/bad experiences with either of these solutions that you'd like to share?

    Thanks.
