Search Results

Search found 59643 results on 2386 pages for 'data migration'.


  • No results are returned when using Flickr JSON request

    - by Martijn1981
    I'm still fairly new to AJAX and I'm experimenting with Twitter and Flickr. Twitter is working fine so far, but I've run into some issues with the Flickr API: I'm getting no results back. The URL seems to be working fine and I'm pointing to the right object containing the array ('items'). Can anybody tell me what I'm doing wrong, please? Thanks!

        $('#show_pictures').click(function(e){
            e.preventDefault();
            $.ajax({
                url: 'http://api.flickr.com/services/feeds/photos_public.gne?format=json&jsoncallback=?&tags=home',
                dataType: 'jsonp',
                success: function(data) {
                    $.each(data.items, function(i, item){
                        $('<div></div>')
                            .hide()
                            .append('<h1>'+item.title+'</h1>')
                            .append('<img src="'+item.media.m+'" >')
                            .append('<p>'+item.description+'</p>')
                            .appendTo('#results')
                            .fadeIn();
                    })
                },
                error: function(data) { alert('Something went wrong!'); }
            });
        });

    Read the article

  • How do I efficiently parse a CSV file in Perl?

    - by Mike
    I'm working on a project that involves parsing a large CSV-formatted file in Perl and am looking to make things more efficient. My approach has been to split() the file into lines first, and then split() each line again by commas to get the fields. But this is suboptimal, since at least two passes over the data are required (once to split by lines, then once again for each line). This is a very large file, so cutting processing in half would be a significant improvement to the entire application. My question is: what is the most time-efficient means of parsing a large CSV file using only built-in tools? Note: each line has a varying number of tokens, so we can't just ignore lines and split by commas only. Also, we can assume fields will contain only alphanumeric ASCII data (no special characters or other tricks). Also, I don't want to get into parallel processing, although it might work effectively. Edit: it can only involve built-in tools that ship with Perl 5.8. For bureaucratic reasons, I cannot use any third-party modules (even if hosted on CPAN). Another edit: let's assume that our solution is only allowed to deal with the file data once it is entirely loaded into memory. Yet another edit: I just grasped how stupid this question is. Sorry for wasting your time. Voting to close.
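
    Setting the Perl-only constraint aside for a moment, here is a minimal sketch of the single-pass idea in Python (the language is mine, not the asker's, and the file name and handler are illustrative only):

        import csv

        def stream_fields(path):
            """Yield one row's fields at a time in a single pass over the file.

            Reading line by line avoids slurping the whole file and avoids a
            second splitting pass over data that is already in memory; rows
            with a varying number of fields come through unchanged.
            """
            with open(path, newline='') as handle:
                for fields in csv.reader(handle):
                    yield fields

        # Hypothetical usage:
        # for row in stream_fields('big.csv'):
        #     process(row)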

    Read the article

  • skip reading headers in matlab

    - by Paul
    I had a similar question, but what I am trying now is to read files in .txt format into MATLAB. My problem is with the headers. Many times, due to errors, the system rewrites the headers in the middle of the file and then MATLAB cannot read the file. Is there a way to skip them? I know I can skip reading some characters if I know what the character is. Here is the code I am using (this way I let the user pick the file):

        [c,pathc]=uigetfile({'*.txt'},'Select the data','V:\data');
        file=[pathc c];
        data= dlmread(file, ',', 1,4);

    My files are huge, typically [86400 125], so naturally each has 125 header fields or more, depending on the file. Because the files are so big I cannot copy one here, but the format is like:

        day       time   col1  col2  col3  col4 ...
        2/3/2010  0:10   3.4   4.5   5.6   4.4  ...
        ...

    Thanks.

    Read the article

  • AJAX/JSONP question: "Access is denied" in IE when making a cross-domain request

    - by Sisir
    OK, here we go. I have already searched Stack Overflow for the answer and found some useful info, but I want to clear up a few more things. I also searched the net, but with no real help. I have worked with a couple of APIs (Yelp, outside.in). With Yelp I used to inject a script into the head with the URL request to the API plus a callback function, and it worked fine in all browsers. But with the outside.in API, when I call the URL the callback is not working. Yelp has a URL field that can be used like callback=callbackfunction, so the callback is called automatically, but outside.in has no such field. Is there any standard way of specifying the callback function that works regardless of the server/API? I also tried a standard ajax request using jQuery's $.ajax() function. It worked on my local PC in both IE and other browsers, but on the live site it does not work in IE, showing the error "Access is denied"; other browsers seem OK. Firebug in my Firefox also doesn't report any errors. Outside.in has a JavaScript example but it is too hard for me to understand: github.com/outsidein/api-examples/tree/master/javascript/browser/ Site I am working on: http://citystir.com yelp: yelp.com outside.in: outside.in Technical info: I am using WampServer locally; WordPress for hosting, GoDaddy, and Apache on Linux remotely. Code: using jQuery, the $.ajax URL is like "http://hyperlocal-api.outside.in/v1.1/states/Illinois/cities/chicago/stories?dev_key="+key+"&sig="+signeture+"&limit=3"

        function makeOutsideRequest(url){
            $.ajax({
                url: url, dataType: 'json', type: 'GET',
                success: function (data, status, xhr) {
                    if (data == null) {
                        alert("An error occurred connecting to " + url +
                              ". Please ensure that the server is running and configured to allow cross-origin requests.");
                    }else{
                        printHomeNews(data);
                    }
                },
                error: function (xhr, status, error) {
                    alert("An error occurred - check the server log for a stack trace.");
                }
            });
        }

    Thanks!

    Read the article

  • plot an item map (based on difficulties)

    - by Tyler Rinker
    I have a data set of item difficulties that correspond to items on a questionnaire that looks like this: item difficulty 1 ITEM_6: I DESTROY THINGS BELONGING TO OTHERS 2.31179818 2 ITEM_11: I PHYSICALLY ATTACK PEOPLE 1.95215238 3 ITEM_5: I DESTROY MY OWN THINGS 1.93479536 4 ITEM_10: I GET IN MANY FIGHTS 1.62610855 5 ITEM_19: I THREATEN TO HURT PEOPLE 1.62188759 6 ITEM_12: I SCREAM A LOT 1.45137544 7 ITEM_8: I DISOBEY AT SCHOOL 0.94255210 8 ITEM_3: I AM MEAN TO OTHERS 0.89941812 9 ITEM_20: I AM LOUDER THAN OTHER KIDS 0.72752197 10 ITEM_17: I TEASE OTHERS A LOT 0.61792597 11 ITEM_9: I AM JEALOUS OF OTHERS 0.61288399 12 ITEM_4: I TRY TO GET A LOT OF ATTENTION 0.39947791 13 ITEM_18: I HAVE A HOT TEMPER 0.32209970 14 ITEM_13: I SHOW OFF OR CLOWN 0.31707701 15 ITEM_7: I DISOBEY MY PARENTS 0.20902108 16 ITEM_2: I BRAG 0.19923607 17 ITEM_15: MY MOODS OR FEELINGS CHANGE SUDDENLY 0.06023317 18 ITEM_14: I AM STUBBORN -0.31155481 19 ITEM_16: I TALK TOO MUCH -0.67777282 20 ITEM_1: I ARGUE A LOT -1.15013758 I want to make an item map of these items that looks similar (not exactly) to this (I created this in word but it lacks true scaling as I just eyeballed the scale). It's not really a traditional statistical graphic and so I don't really know how to approach this. I don't care what graphics system this is done in but I am more familiar with ggplot2 and base. I would greatly appreciate a method of plotting this sort of unusual plot. Here's the data set (I'm including it as I was having difficulty using read.table on the dataframe above): DF <- structure(list(item = structure(c(17L, 3L, 16L, 2L, 11L, 4L, 19L, 14L, 13L, 9L, 20L, 15L, 10L, 5L, 18L, 12L, 7L, 6L, 8L, 1L ), .Label = c("ITEM_1: I ARGUE A LOT", "ITEM_10: I GET IN MANY FIGHTS", "ITEM_11: I PHYSICALLY ATTACK PEOPLE", "ITEM_12: I SCREAM A LOT", "ITEM_13: I SHOW OFF OR CLOWN", "ITEM_14: I AM STUBBORN", "ITEM_15: MY MOODS OR FEELINGS CHANGE SUDDENLY", "ITEM_16: I TALK TOO MUCH", "ITEM_17: I TEASE OTHERS A LOT", "ITEM_18: I HAVE A HOT TEMPER", "ITEM_19: I THREATEN TO HURT PEOPLE", "ITEM_2: I BRAG", "ITEM_20: I AM LOUDER THAN OTHER KIDS", "ITEM_3: I AM MEAN TO OTHERS", "ITEM_4: I TRY TO GET A LOT OF ATTENTION", "ITEM_5: I DESTROY MY OWN THINGS", "ITEM_6: I DESTROY THINGS BELONGING TO OTHERS", "ITEM_7: I DISOBEY MY PARENTS", "ITEM_8: I DISOBEY AT SCHOOL", "ITEM_9: I AM JEALOUS OF OTHERS" ), class = "factor"), difficulty = c(2.31179818110545, 1.95215237740899, 1.93479536058926, 1.62610855327073, 1.62188759115818, 1.45137543733965, 0.942552101641177, 0.899418119889782, 0.7275219669431, 0.617925967008653, 0.612883990709181, 0.399477905189577, 0.322099696946661, 0.31707700560997, 0.209021078266059, 0.199236065264793, 0.0602331732900628, -0.311554806052955, -0.677772822413495, -1.15013757942119)), .Names = c("item", "difficulty" ), row.names = c(NA, -20L), class = "data.frame") Thank you in advance.

    Read the article

  • object references an unsaved transient instance

    - by developer
    Hi, I have two tables, user and userprofile, with almost identical fields. The user table references the userprofile table by its primary key ID. My requirement is that, on the click of a button, I need to copy the user record into the userprofile table. For a given user, if there is a corresponding userprofile entry I am able to copy the data successfully, but if there is no record in the userprofile table I need to create a new record with all of the data. So I can update the data when the record is already present in the userprofile table, but in the case where I have to create a new record I get the error "object references an unsaved transient instance - save the transient instance before flushing".

        <class name="User">
            <id name="ID" type="Int32">
                <generator class="native" />
            </id>
            <many-to-one name="Pid" class="UserProfile" />
        </class>

    UserProfile is another table, and Pid above references the primary key ID of the UserProfile table.

    Read the article

  • Return the exact ID using fetch_assoc

    - by Selom
    Hi, I have a small problem in my code and I need your help. I'm using fetch_assoc to get data from the database and I need the ID number of each returned row. The issue is that my code only keeps the ID number of the last row. Here's my code:

        <form method="post" action="action.php">
            <select name="album" style="border:1px solid #CCC; font-size:11px; padding:1px">
            <?php
                $sql = "SELECT * FROM table";
                $stmt = $dbh -> prepare($sql);
                $stmt -> execute();
                while($row = $stmt -> fetch(PDO::FETCH_ASSOC)) {
                    $album_ID = $row['album_ID'];
                    $value = $row['album_name'];
                    print "<option value ='". $value ."'>". $value. "</option>";
                }
            ?>
            </select>
            <input type="hidden" name="album_ID" value="<?php print $album_ID?>"/>
        </form>

    I would like the hidden input to hold the selected album's ID, but it always holds the album ID of the last row. Please help.

    Read the article

  • Which are the most useful techniques for faster Bluetooth?

    - by Mike Howard
    Hi. I'm adding peer-to-peer Bluetooth using GameKit to an iPhone shoot-em-up, so speed is vital. I'm sending about 40 messages a second each way, most of them with the faster GKSendDataUnreliable, all serialized with NSCoding. In testing between a 3G and a 3GS, this is slowing the 3G down a lot more than I'd like. I'm wondering where I should concentrate my efforts to speed it up. How much slower is GKSendDataReliable? For the few packets that have to get there, would it be faster to send with GKSendDataUnreliable and have the peer send an acknowledgement, so I can send again if I don't get the Ack within, say, 100ms? How much faster would it be to create the NSData instance using a regular C array rather than archiving with the NSCoding protocol? Is this serialization process (for about a dozen floats) just as slow as you'd expect from object creation/deallocation overhead, or is something particularly slow happening? I heard that (for example) sending four separate sets of data is much, much slower than sending one piece of data four times the size. Would I make a significant saving by sending separate packets of data that wouldn't always go together in the same packet when they happen at the same time? Are there any other Bluetooth performance secrets I've missed? Thanks for your help.

    Read the article

  • Web services or shared database for (game) server communication?

    - by jaaronfarr
    We have 2 server clusters: the first is made up of typical web applications backed by SQL databases. The second are highly optimized multiplayer game servers which keep all data in memory. Both clusters communicate with clients via HTTP (Ajax with JSON). There are a few cases in which we need to share data between the two server types, for example, reporting back and storing the results of a game (should ultimately end up in the database). We're considering several approaches for inter-server communication: Just share the MySQL databases between clusters (introduce SQL to the game servers) Sharing data in a distributed key-value store like Memcache, Redis, etc. Use an RPC technology like Google ProtoBufs or Apache Thrift Using RESTful web services (the game server would POST back to the web servers, for example) At the moment, we're leaning towards web services or just sharing the database. Sharing the database seems easy, but we're concerned this adds extra memory and a new dependency into the game servers. Web services provide good separation of concerns and fit with the existing Ajax we use, but add complexity, overhead and many more ways for communication to fail. Are there any other good reasons not to use one or the other approach? Which would be easier to scale?
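
    For the web-service option, here is a hedged sketch of what the POST-back from a game server might look like, in Python with only the standard library; the endpoint URL and payload fields are assumptions for illustration, not anything from the discussion above:

        import json
        import urllib.request

        def report_game_result(result):
            """POST one finished game's result to the web cluster's REST endpoint.

            `result` is a plain dict, e.g. {"game_id": 42, "winner": 7, "score": 1300}.
            """
            body = json.dumps(result).encode("utf-8")
            request = urllib.request.Request(
                "https://web.example.com/api/game-results",  # hypothetical endpoint
                data=body,
                headers={"Content-Type": "application/json"},
                method="POST",
            )
            with urllib.request.urlopen(request, timeout=5) as response:
                return 200 <= response.status < 300

    Keeping the call this thin is part of what makes the web-service route attractive: the game servers stay free of SQL and cache dependencies, and a failed POST can simply be queued and retried.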

    Read the article

  • Which way is preferred when doing asynchronous WCF calls?

    - by Mikael Svenson
    When invoking a WCF service asynchronously there seem to be two ways it can be done.

    1.

        public void One()
        {
            WcfClient client = new WcfClient();
            client.BegindoSearch("input", ResultOne, null);
        }

        private void ResultOne(IAsyncResult ar)
        {
            WcfClient client = new WcfClient();
            string data = client.EnddoSearch(ar);
        }

    2.

        public void Two()
        {
            WcfClient client = new WcfClient();
            client.doSearchCompleted += TwoCompleted;
            client.doSearchAsync("input");
        }

        void TwoCompleted(object sender, doSearchCompletedEventArgs e)
        {
            string data = e.Result;
        }

    And with the new Task<T> class we have an easy third way, by wrapping the synchronous operation in a task.

    3.

        public void Three()
        {
            WcfClient client = new WcfClient();
            var task = Task<string>.Factory.StartNew(() => client.doSearch("input"));
            string data = task.Result;
        }

    They all give you the ability to execute other code while you wait for the result, but I think Task<T> gives better control over what you execute before or after the result is retrieved. Are there any advantages or disadvantages to using one over the other? Or scenarios where one way of doing it is preferable?

    Read the article

  • lists searches in SYB or uniplate haskell

    - by Chris
    I have been using uniplate and SYB and I am trying to transform a tree. For instance:

        type Tree = [DataA]

        data DataA = DataA1 [DataB]
                   | DataA2 String
                   | DataA3 String [DataA]
                   deriving Show

        data DataB = DataB1 [DataA]
                   | DataB2 String
                   | DataB3 String [DataB]
                   deriving Show

    I would like to traverse my tree and append a value to every [DataB]. So my first thought was to do this:

        changeDataB :: Tree -> Tree
        changeDataB = everywhere (mkT changeDataB')

        changeDataB' :: [DataB] -> [DataB]
        changeDataB' <add changes here>

    or, if I was using uniplate:

        changeDataB :: Tree -> Tree
        changeDataB = transformBi changeDataB'

        changeDataB' :: [DataB] -> [DataB]
        changeDataB' <add changes here>

    The problem is that I only want to search on the full list. Either of these searches will search the full list and all of its sub-lists (including the empty list). The other problem is that a value in a [DataB] may generate a [DataB], so I don't know if this is the same kind of solution as not searching chars in a string. I could pattern match on DataA1 and DataB3, but in my real application there are a bunch of [DataB] fields, so pattern matching on the parents would be extensive. The other thought I had was to create a data DataBs = [DataB] and use that to transform on. That seems kind of lame; there must be a better solution.

    Read the article

  • jQuery ajax ($.ajax) not working in Chrome - please help

    - by racky
    Hi, I need a little help figuring out why the following code does not work in Google Chrome 5 on Windows XP. It works well in all other browsers (IE, Firefox, Safari, Opera, etc.). Can someone shed some light on this?

        /* AJAX Request */
        jq("#a-post-request").unbind("click").bind("click", function(e){
            //jq("#loading").css({"display":"block"});
            jq.ajax({
                url: "search_data_table.html",
                type: "get",
                cache: false,
                error: function(){alert ("No data found for your search.");},
                success: function(data){
                    jq("#search-results-table tbody").empty().append(data);
                    jq("#search-results").css({"display":"block"});
                    jq("#search-results-table").trigger("update"); // this one is for the table sorter plugin
                    // set sorting column and direction, this will sort on the first column.
                    var sorting = [[0,0]]; // this one is for the table sorter plugin
                    // sort on the first column.
                    jq("#search-results-table").trigger("sorton",[sorting]); // this one is for the table sorter plugin
                    e.preventDefault();
                }
            });
        });

    Many thanks, Racky

    Read the article

  • Error when using jquery webservice call to populate dropdownlist

    - by bugz
    For some reason, when I tested parsing a JSON string into a dropdownlist, it worked doing this:

        var json = [{ "Id": "12345", "WorkUnitId": "SR0001954", "Description": "Test Service Request From Serena", "WorkUnitCategory": "ServiceRequest" },
                    { "Id": "12355", "WorkUnitId": "WOR001854", "Description": "Test Work Order From Serena", "WorkUnitCategory": "ServiceRequest" },
                    { "Id": "12365", "WorkUnitId": "DBR001274", "Description": "Test Database Related Service Request From Serena", "WorkUnitCategory": "ServiceRequest"}];

        $(json).each(function() {
            comboboxWorkUnit.append($('<option>').val(this.Id).text(this.WorkUnitId));
        });

    However, when I tried taking the JSON from my web service and placing it into the dropdownlist, I got an error saying the dropdownlist doesn't support this method:

        $.ajax({
            type: "POST",
            url: "Services/WorkUnitService.asmx/WorkUnit",
            data: "{'Number' : '" + Number.text() + "','Id': '" + combobox.val() + "','workerType': '" + EmployeeType + "'}",
            contentType: "application/json; charset=utf-8",
            dataType: "json",
            success: function(json) {
                $(json).each(function() {
                    comboboxWorkUnit.append($('<option>').val(this.Id).text(this.WorkUnitId));
                });
            }
        });

    I also tried the following, which called the web service method but threw an error:

        $.postJSON("Services/WorkUnitService.asmx/WorkUnitsForParAndPhase",
            "{'parNumber' : '" + parNumber + "','phaseId': '" + combobox.val() + "','workerType': '" + EmployeeType + "'}",
            function(data) { alert(data.toString()); });

    And I tried this, which never called the web service:

        $.getJSON('Services/WorkUnitService.asmx/WorkUnitsForParAndPhase' + parNumber + '/' + combobox.val() + '/' + EmployeeType,
            function(myData) { alert(myData.toString()); });

    Read the article

  • PHP: Building A Stock Index Using Yahoo Finance [on hold]

    - by Jeremy
    I have the following code which is pulling data, but the output is not right.

        <?php
        class YahooStock {
            public function getQuotes(){
                $stocks = array();
                $result = array();
                $s = file_get_contents("http://finance.yahoo.com/d/quotes.csv?s=AMZN+CRM+CNQR+CTL+CTXS+DWRE+EMC+GOOG+HP+IBM+JIVE+LNKD+MKTO+MSFT+N+NFLX+NOW+ORCL+RAX+SAP+T+VEEV+VMW+VZ+WDAY&f=npf6&e=.csv");
                $data = explode( ',', $s);
                $result = $data;
                return $result;
            }
        }

        $objYahooStock = new YahooStock;
        foreach( $objYahooStock->getQuotes() as $code => $result){
            echo 'Name:' . $result[0] . '<br />';
            echo 'Price:' . $result[1] . '<br />';
            echo 'Float:' . $result[2] . '<br />';
        }
        ?>

    It looks like the output is being split into individual characters rather than into the CSV columns:

        Name:" Price:A Float:m
        Name:  Price:I Float:n
        Name:3 Price:3 Float:2
        Name:  Price:  Float:

    Any help is appreciated!

    Read the article

  • How to strengthen MySQL database server security?

    - by i need help
    Suppose we were to use server1 for all files (a file server) and server2 for the MySQL database (a database server). For websites on server1 to access the database on server2, don't they need to connect to the IP address of the second (MySQL) server? In that case it is a remote MySQL connection. However, I have seen people comment on the security issue: remote access to MySQL is not very secure. When your remote computer first connects to your MySQL database, the password is encrypted before being transmitted over the Internet. But after that, all data is passed as unencrypted "plain text". If someone was able to view your connection data (such as a "hacker" capturing data from an unencrypted WiFi connection you're using), that person would be able to view part or all of your database. So I am just wondering about ways to secure it:

    1) allow remote MySQL access only from server1, by allowing its static IP address
    2) allow remote access from server1 by restricting which hosts are allowed to connect to port 3306
    3) change 3306 to another port?

    Any advice?

    Read the article

  • JQuery-AJAX: No further request after timeout and delay in form post

    - by Nogga
    I have a form containing multiple checkboxes. The form is sent to the server to receive appropriate results from a server-side script, and this is already working. What I would like to achieve now: 1) Implementing a timeout: this is already in place, but as soon as a timeout occurs, no new request works anymore. 2) Implementing a delay in requesting results: a delay should be added so that not every checkbox click results in a POST request. This is what I have right now:

        function update_listing() {
            // remove postings from table
            $('.tbl tbody').children('tr').remove();

            // get the results through AJAX
            var request = $.ajax({
                type: "POST",
                url: "http://localhost/hr/index.php/listing/ajax_csv",
                data: $("#listing_form").serialize(),
                timeout: 5000,
                success: function(data) {
                    $(".tbl tbody").append(data);
                },
                error: function(objAJAXRequest, strError) {
                    $(".tbl tbody").append("<tr><td>failed " + strError + "</td></tr>");
                }
            });
            return true;
        }

    Results are for now passed as HTML table rows - I will transform them to CSV/JSON in the next step. Thanks so much for your advice.

    Read the article

  • SCons and dependencies for python function generating source

    - by elmo
    I have an input file data, a Python function parse, and a template. What I am trying to do is use the parse function to get a dictionary out of data and use that to replace fields in template. To make this a bit more generic (I perform the same action in a few places) I have defined a custom builder. Below is the definition of the custom builder; values is a dictionary of the form { 'name': (data_file, parse_function) } (you don't really need to read through this, I simply put it here for completeness).

        def TOOL_ADD_FILL_TEMPLATE(env):
            def FillTemplate(env, output, template, values):
                out = output[0]
                subs = {}
                for name, (node, process) in values.iteritems():
                    def Process(env, target, source):
                        with open( env.GetBuildPath(target[0]), 'w') as out:
                            out.write( process( source[0] ) )
                    builder = env.Builder( action = Process )
                    subs[name] = builder( env, env.GetBuildPath(output[0])+'_'+name+'_processed.cpp', node )[0]

                def Fill(env, target, source):
                    values = dict( (name, n.get_contents()) for name, n in subs.iteritems() )
                    contents = template[0].get_contents().format( **values )
                    open( env.GetBuildPath(target[0]), 'w').write( contents )

                builder = env.Builder( action = Fill )
                builder( env, output[0], template + subs.values() )
                return output

            env.Append(BUILDERS = {'FillTemplate': FillTemplate})

    It works fine when it comes to checking whether data or template changed: if they did, it rebuilds the output. It even works if I edit the process function directly. However, if my process function looks like this:

        def process( node ):
            return subprocess(node)

    and I edit subprocess, the change goes unnoticed. Is there any way to get correct builds without making the process functions always be invoked?
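
    One hedged way to make edits to a helper like subprocess trigger a rebuild is to feed the helper's source text into the build as an explicit dependency, so SCons sees a change whenever the function body changes. A minimal sketch, assuming the standard Value node and Depends call; the helper name is taken from the question, the wrapper function is invented:

        import inspect

        def depend_on_function(env, target, func):
            """Make `target` depend on the current source text of `func`.

            The function's source is wrapped in a Value node; when the body is
            edited the node's content changes, so SCons re-runs the builder
            even though no input file changed.
            """
            env.Depends(target, env.Value(inspect.getsource(func)))

    Inside FillTemplate this could be called once per generated file, e.g. depend_on_function(env, subs[name], process), tying each intermediate .cpp to the parse function that produces it.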

    Read the article

  • Large Switch statements: Bad OOP?

    - by Mystere Man
    I've always been of the opinion that large switch statements are a symptom of bad OOP design. In the past, I've read articles that discuss this topic, and they have provided alternative OOP-based approaches, typically based on polymorphism to instantiate the right object to handle the case. I'm now in a situation that has a monstrous switch statement based on a stream of data from a TCP socket, in which the protocol consists of basically a newline-terminated command, followed by lines of data, followed by an end marker. The command can be one of 100 different commands, so I'd like to find a way to reduce this monster switch statement to something more manageable. I've done some googling to find the solutions I recall, but sadly, Google has become a wasteland of irrelevant results for many kinds of queries these days. Are there any patterns for this sort of problem? Any suggestions on possible implementations? One thought I had was to use a dictionary lookup, matching the command text to the object type to instantiate. This has the nice advantage of merely creating a new object and inserting a new command/type in the table for any new commands. However, this also has the problem of type explosion. I now need 100 new classes, plus I have to find a way to interface them cleanly to the data model. Is the "one true switch statement" really the way to go? I'd appreciate your thoughts, opinions, or comments.
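
    As a hedged illustration of the dictionary-lookup idea (in Python rather than whatever language the socket server uses, and with invented command names), handlers can register themselves by command name, so adding a command means writing one small function rather than growing the switch, and only commands with real state need their own class:

        HANDLERS = {}

        def handles(command):
            """Class-free registration: map a protocol command name to a handler."""
            def register(func):
                HANDLERS[command] = func
                return func
            return register

        @handles("PING")
        def handle_ping(lines):
            return "pong"

        @handles("LOGIN")
        def handle_login(lines):
            # `lines` is the list of data lines between the command and the end marker.
            return "logged in as %s" % lines[0]

        def dispatch(command, lines):
            handler = HANDLERS.get(command)
            if handler is None:
                raise ValueError("unknown command: %r" % command)
            return handler(lines)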

    Read the article

  • How do I retrieve twitter xml for Flash site via php properly

    - by daidai
    I am using TwitterScript to retrieve Twitter data for use inside a Flash site. Due to Twitter's crossdomain policy, I need to set up a PHP proxy. Firstly I made a simple one:

        <?php
        $url = $_GET['url'];
        readfile($url);
        ?>

    but I then get the error "URL file-access is disabled in the server configuration", which can only be resolved by getting my host to turn allow_url_fopen on, which I don't want to do. Then I found this:

        <?php
        function get_content($url) {
            $ch = curl_init();
            curl_setopt ($ch, CURLOPT_URL, $url);
            curl_setopt ($ch, CURLOPT_HEADER, 0);
            ob_start();
            curl_exec ($ch);
            curl_close ($ch);
            $string = ob_get_contents();
            ob_end_clean();
            return $string;
        }

        #usage:
        $url = $_GET['url'];
        $content = get_content ($url);
        var_dump ($content);
        ?>

    which solves that problem, but now the data is the correct XML wrapped in a string, like:

        string(39950) "<?xml version="1.0" encoding="UTF-8"?>
        <statuses type="array">
          <status>
          ...
        </statuses>"

    How do I get the XML data out of that string?

    Read the article

  • Indexing table with duplicates MySQL/SQL Server with millions of records

    - by Tesnep
    I need help with indexing in MySQL. I have a table in MySQL with the following columns:

        ID  Store_ID  Feature_ID  Order_ID  Viewed_Date  Deal_ID  IsTrial

    The ID is auto-generated. Store_ID goes from 1 to 8. Feature_ID goes from 1 to, let's say, 100. Viewed_Date is the date and time at which the row is inserted. IsTrial is either 0 or 1. You can ignore Order_ID and Deal_ID for this discussion. There are millions of rows in the table, and we have a reporting backend that needs the number of views in a certain period (or overall) where trial is 0, for a particular store ID and a particular feature. The query takes the form of:

        select count(viewed_date)
        from theTable
        where viewed_date between '2009-12-01' and '2010-12-31'
        and store_id = '2'
        and feature_id = '12'
        and Istrial = 0

    In SQL Server you can have a filtered index to use for Istrial. Is there anything similar to this in MySQL? Also, Store_ID and Feature_ID have a lot of duplicate values. I created an index on Store_ID and Feature_ID. Although this seems to have decreased the search time, I need a bigger improvement than this. Right now I have more than 4 million rows. To answer a query like the one above, it looks at 3.5 million rows in order to give me the count of 500k rows. PS: I forgot to add the viewed_date filter in the query originally. Now I have done this.

    Read the article

  • How to insert result of mysql_real_escape_string() into oracle database?

    - by Prat
    For inserting special characters in data, like ( , ' ) etc., I am using the mysql_real_escape_string() function and it's working fine. Now I want to use the same variable while inserting values into Oracle:

        $str = 'N.G.Palace\'s Building', 'xyzcity', '12345678','India','100001',12

    Here $str is the result of mysql_real_escape_string(), so it escapes special characters. My code for Oracle is like this:

        $qry ="INSERT INTO Ora_table(ship_to_street, ship_to_city,ship_to_country, ship_to_telephone, order_id, record_no) VALUES(".$str);

    So my doubt is that Oracle is not accepting the values returned by mysql_real_escape_string, i.e. Palace\'s (since this MySQL function attaches \ before the single quote). Can anybody tell me how I can use that variable $str to insert data into Oracle? I also tried like this:

        "q"."'"."c".$str."c"."'"

    but can we use this for multiple values, as in my case? I am still unable to insert data into Oracle. Please help.

    Read the article

  • UDP: Client started before Server

    - by Chris
    I have a bit of a glitch in my game just now. Everything runs fine if the server is started before the client, but when the client is started before the server they never connect. This is all UDP. The problem happens when the client calls recvfrom() before the server has started; when this happens the client never finds the server and the server never finds the client. The resulting error is a WOULDBLOCK. If I stop the client from using recvfrom and start the client before the server (the client is still sending data, it's just not receiving it), they both find each other no problem. What's the solution for this? The way it seems just now is that the client cannot call recvfrom without a server being active, or it all falls apart. Is there a check that can be done to see if data is sitting on a certain port (data the server would send)? Or is there a better way to do this? Some code...

    Server operation - UDPSocket is a class:

        UDPSocket.Initialise();
        UDPSocket.MakeNonBlocking();
        UDPSocket.Bind(LOCALPORT);

        int n = UDPSocket.Receive(&thePacket);
        if (n > 0)
            UDPSocket.Send(&sendPacket);

    Client:

        UDP.Initialise();
        UDP.MakeNonBlocking();
        UDP.SetDestinationAddress(SERVERIP, SERVERPORT);

        serverStatus = UDP.Receive(&recvPacket);
        if (serverStatus > 0)
        {
            //Do some things
            UDP.Send(dPacket); //Try and reconnect with server
        }

    Thanks
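
    On the "is there a check for data sitting on the socket" question, select() (or poll()) is exactly that check. A hedged sketch of the idea in Python rather than the question's socket wrapper, with an assumed server address: the client keeps announcing itself and only receives once select reports the socket readable, so either side can start first.

        import select
        import socket

        SERVER = ("127.0.0.1", 9999)   # assumed address/port, not from the question

        def find_server(tries=20, wait_per_try=0.5):
            """Announce ourselves until the server answers, whichever side started first."""
            sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
            sock.setblocking(False)
            for _ in range(tries):
                sock.sendto(b"hello", SERVER)                      # harmless if nobody is listening yet
                readable, _, _ = select.select([sock], [], [], wait_per_try)
                if readable:                                       # data is sitting on the socket
                    data, addr = sock.recvfrom(4096)
                    return sock, addr, data
            return None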

    Read the article

  • SQL Server query

    - by carrot_programmer_3
    Hi, I have a SQL Server DB containing a registrations table that I need to plot on a graph over time. The issue is that I need to break this down by where the user registered from (e.g. website, WAP site, or a mobile application). The resulting output data should look like this:

        [date] [num_reg_website] [num_reg_wap_site] [num_reg_mobileapp]
        1 FEB 2010,24,35,64
        2 FEB 2010,23,85,48
        3 FEB 2010,29,37,79
        etc...

    The source table is as follows:

        UUID(int), signupdate(datetime), requestsource(varchar(50))

    Some sample data in this table looks like this:

        1001,2010-02-2:00:12:12,'website'
        1002,2010-02-2:00:10:17,'app'
        1003,2010-02-3:00:14:19,'website'
        1004,2010-02-4:00:16:18,'wap'
        1005,2010-02-4:00:18:16,'website'

    Running the following query returns one data column, 'total registrations', for the website registrations, but I'm not sure how to do this for multiple columns, unfortunately:

        select CAST(FLOOR(CAST([signupdate]AS FLOAT ))AS DATETIME) as [signupdate],
               count(UUID) as 'total registrations'
        FROM [UserRegistrationRequests]
        WHERE requestsource = 'website'
        group by CAST(FLOOR(CAST([signupdate]AS FLOAT ))AS DATETIME)

    Read the article

  • Socket stops communicating

    - by user1392992
    I'm running Python 2.7 code on a Raspberry Pi that receives serial data from an Arduino, processes it, and sends it to a Windows box over a wifi link. The Pi is wired to a Linksys router running in client bridge mode, and that router connects over wifi to another Linksys router to which the Windows box is wired. The code on the Pi runs fine for some (apparently) random interval, and then the Pi becomes unreachable from the Windows box. I'm running PuTTY on the Windows machine to connect to the Pi, and when the failure occurs I get a message saying there's been a network error and the Pi is not reachable. Pinging the Pi from the Windows machine works fine until the error, at which time it produces "Reply from 192.168.0.129: Destination host unreachable." The client bridge router to which the Pi is connected remains reachable. I've got the networking code on the Pi wrapped in an exception handler, and when it fails it shows the following:

        Ethernet problem:
        Traceback (most recent call last):
          File "garage.py", line 108, in <module>
            s.connect((host, port))
          File "/usr/lib/python2.7/socket.py", line 224, in meth
            return getattr(self._sock,name)(*args)
        error: [Errno 113] No route to host
        None

    The relevant Python code looks like:

        import socket
        import traceback

        host = '192.168.0.129'
        port = 31415

    in the setup, and after serial data has been processed:

        try:
            bline = strline.encode('utf-8')
            s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
            s.connect((host, port))
            s.send(bline)
            s.close()
        except:
            print "Ethernet problem: "
            print traceback.print_exc()

    where strline contains the processed data. As I said, this runs fine for a few hours, more or less, before failing. Any ideas?

    EDIT: When PuTTY fails, its error message is "Network Error: Software caused connection abort."
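
    The "Destination host unreachable" replies point at the wifi bridge/ARP layer rather than the Python code, but on the code side one hedged robustness tweak is to keep a single connection and retry with backoff instead of opening and closing a socket for every reading, so short outages are ridden out rather than surfacing as per-line failures. A minimal sketch (the retry counts and delays are arbitrary choices; the host and port are the ones from the question):

        import socket
        import time

        HOST, PORT = "192.168.0.129", 31415
        _sock = None

        def send_line(line):
            """Send one processed reading, reconnecting with backoff if the link drops."""
            global _sock
            for attempt in range(5):
                try:
                    if _sock is None:
                        # One persistent connection instead of connect/close per reading.
                        _sock = socket.create_connection((HOST, PORT), timeout=5)
                    _sock.sendall(line.encode("utf-8"))
                    return True
                except (socket.error, IOError):
                    if _sock is not None:
                        _sock.close()
                        _sock = None
                    time.sleep(2 ** attempt)   # back off before retrying
            return False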

    Read the article

  • Django Save Incomplete Progress on Form

    - by jimbob
    I have a Django web app with multiple users logging in and filling in a form. Some users may start filling in a form and lack some required data (e.g., a grant #) needed to validate the form (and before we can start working on it). I want them to be able to fill out the form and have an option to either save the partial info (so another day they can log back in and complete it) or submit the full info, which undergoes validation. Currently I'm using ModelForm for all the forms, and the model has constraints to ensure valid data (e.g., the grant # has to be unique). However, I want them to be able to save this intermediate data without undergoing any validation. The solution I've thought of seems rather inelegant and un-django-ey: create a "Save Partial Form" button that saves the POST dictionary, converts it to a shelf file, and creates a "SavedPartialForm" model connecting the user to partial forms saved in the shelf. Does this seem sensible? Is there a better way to save the POST dict directly into the db? Or is there an add-on module that does this kind of partial save of a form (which seems to be a fairly common activity with web forms)? My biggest concern with my method is that I eventually want to do this form autosave automatically (say every 10 minutes) via some ajax/jQuery method, without actually pressing a button and sending the POST request (e.g., so the user isn't redirected off the page when autosave is triggered). I'm not that familiar with jQuery and am wondering if it would be possible to do this.
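
    One hedged alternative to the shelf file: store the raw POST dict as JSON in a small model keyed to the user, which also gives the ajax autosave a natural endpoint. A minimal sketch, assuming a current Django project; the model, view, and URL names are invented for illustration, not part of the question:

        import json

        from django.contrib.auth.decorators import login_required
        from django.db import models
        from django.http import JsonResponse


        class FormDraft(models.Model):
            """Unvalidated, partially filled form data for one user."""
            user = models.ForeignKey("auth.User", on_delete=models.CASCADE)
            form_name = models.CharField(max_length=100)
            data = models.TextField()          # raw POST dict serialized as JSON
            saved_at = models.DateTimeField(auto_now=True)


        @login_required
        def save_draft(request, form_name):
            """Endpoint the 'Save Partial Form' button (or a periodic ajax call) posts to."""
            FormDraft.objects.update_or_create(
                user=request.user,
                form_name=form_name,
                defaults={"data": json.dumps(request.POST.dict())},
            )
            return JsonResponse({"saved": True})

    Restoring the draft is then just re-instantiating the existing ModelForm with initial=json.loads(draft.data), and full validation still only happens on the real submit.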

    Read the article
