Search Results

Search found 60391 results on 2416 pages for 'data generation'.


  • XML to JSON - losing root node

    - by Mike
    I'm using net.sf.json with a Java project and it works great. This XML:

        <?xml version="1.0" encoding="UTF-8"?>
        <important-data certified="true" processed="true">
          <timestamp>232423423423</timestamp>
          <authors>
            <author>
              <firstName>Tim</firstName>
              <lastName>Leary</lastName>
            </author>
          </authors>
          <title>Flashbacks</title>
          <shippingWeight>1.4 pounds</shippingWeight>
          <isbn>978-0874778700</isbn>
        </important-data>

    converts to this JSON:

        {
          "@certified": "true",
          "@processed": "true",
          "timestamp": "232423423423",
          "authors": [{ "firstName": "Tim", "lastName": "Leary" }],
          "title": "Flashbacks",
          "shippingWeight": "1.4 pounds",
          "isbn": "978-0874778700"
        }

    However, the root tag <important-data> is lost in the conversion. Being new to XML and JSON, I'm not sure whether this is supposed to be the correct behaviour. If not, is there any way to tell net.sf.json to keep the root node as a property during the conversion? Thanks.
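
    For comparison only (this is not the asker's net.sf.json setup), the sketch below shows the shape being asked for: a conversion that keeps the root element as the outer key. It is written in Python and assumes the third-party xmltodict package, so treat it purely as an illustration of the target output.

        # Illustrative sketch (Python, assumed xmltodict package): convert XML to
        # JSON while keeping the root element as the top-level key.
        import json
        import xmltodict

        xml_source = """<important-data certified="true" processed="true">
          <timestamp>232423423423</timestamp>
          <title>Flashbacks</title>
        </important-data>"""

        parsed = xmltodict.parse(xml_source)   # the root element becomes the outer dict key
        print(json.dumps(parsed, indent=2))    # {"important-data": {"@certified": "true", ...}}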

  • Accessing a WordPress database from an iPhone app

    - by Code
    Hi guys, I've been asked to create an app that will get data back from a database where the CMS will be WordPress. I've never used a CMS, so I'm trying to get an overview picture in my head of how it could all work, what each of the components would be, and what a CMS actually brings to the party. Creating the app itself is pretty clear; I've done a few already. I've made a database before, so that shouldn't cause a problem. But what is going to sit in the middle between the app and the database?

    Part A: I'm guessing iPhone apps typically call some PHP file that's hosted on the server? The PHP would then make a call to the database and return the data somehow, maybe as XML. But this is really basic and wouldn't require a CMS, just a database and a PHP file, or am I wrong?

    Part B: If I wanted to run a check on the database every minute to see if any of the data was no longer valid and remove it if needed, that would require some kind of program running on the server. That program would be WordPress, since it is managing the content, so a content management system is actually needed for these kinds of tasks. Am I understanding the role of the CMS? Many thanks, -Code

  • Browser timing out attempting to load images

    - by notJim
    I've got a page on a webapp that has about 13 images that are generated by my application, which is written in the Kohana PHP framework. The images are actually graphs. They are cached so they are only generated once, but the first time the user visits the page, when the images all have to be generated, about half of the images don't load in the browser. Once the page has been requested once and the images are cached, they all load successfully.

    Doing some ad-hoc testing, if I load an individual image in the browser, it takes 450-700 ms to load with an empty cache (I checked this using Google Chrome's resource tracking feature). For reference, it takes around 90-150 ms to load a cached image. Even if the image cache is empty, I have the data and some of the application's startup tasks cached, so that after the first request none of that data needs to be fetched.

    My questions are: Why are the images failing to load? It seems like the browser just decides not to download the images after a certain point, rather than waiting for them all to finish loading. What can I do to get them to load the first time, with an empty cache? Obviously one option is to decrease the load times, and I could figure out how to do that by profiling the app, but are there other options?

    As I mentioned, the app is in the Kohana PHP framework, and it's running on Apache. As an aside, I've solved this problem for now by fetching the page as soon as the data is available (it comes from a batch process), so that the images are always cached by the time the user sees them. That feels like a kludgey solution to me, though, and I'm curious about what's actually going on.

  • R Tree 50,000 foot overview?

    - by roufamatic
    I'm working on a school project that involves taking a lat/long point and finding the five closest points in a known list of places. The list is to be stored in memory, with the caveat that we must choose an "appropriate data structure" -- that is, we cannot simply store all the places in an array and compare distances one-by-one in a linear fashion. The teacher suggested grouping the place data by US state to prevent calculating the distance for places that are obviously too far away. I think I can do better. From my research online it seems like an R-tree or one of its variants might be a neat solution. Unfortunately, that sentence is as far as I've gotten with understanding the actual technique, as the literature is simply too dense for my non-academic head. Can somebody give me a really high-level overview of the process for populating an R-tree with lat/long data and then traversing the tree to find the 5 nearest neighbors of a given point? Additionally, the project is in C, and I don't have to reinvent the wheel on this, so if you've used an existing open-source C implementation of an R-tree I'd be interested in your experiences.
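
    Not an answer to the C question, but a short sketch of the populate-then-query flow being asked about, written in Python against the third-party rtree package (an assumption made for illustration); a C library built on the same ideas would expose an equivalent insert/nearest pattern.

        # Illustrative sketch (Python + the assumed rtree package, not C): build an
        # R-tree from lat/long points, then ask for the 5 nearest entries.
        from rtree import index

        places = {
            1: (40.7128, -74.0060),   # id -> (lat, lon); made-up sample points
            2: (34.0522, -118.2437),
            3: (41.8781, -87.6298),
        }

        idx = index.Index()
        for place_id, (lat, lon) in places.items():
            # a point is inserted as a zero-area bounding box (minx, miny, maxx, maxy)
            idx.insert(place_id, (lon, lat, lon, lat))

        query_point = (-75.0, 40.0, -75.0, 40.0)
        nearest_ids = list(idx.nearest(query_point, 5))   # ids of the 5 closest entries
        print(nearest_ids)

    Note that the distances here are planar rather than great-circle, which is usually acceptable for a first pruning pass but worth keeping in mind for lat/long data.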

  • Pivot table from multiple spreadsheets

    - by vrao
    I am using Excel 2010 and am trying to create a pivot table across two worksheets, 'Summary' and 'Summary2'. I have an identical row of data in cells B5 to F5 of row 5 in both worksheets. The data in the two worksheets looks like this:

        Summary worksheet:  Issues, 20, 3, 4, 5
        Summary2 worksheet: Issues, 10, 0, 3, 9

    The 'Summary' worksheet refers to issues from location 1 and 'Summary2' refers to issues from location 2. Column B has the title 'Issues', column C refers to issues of customer 1, column D to issues of customer 2, column E to issues of customer 3, and column F to issues of customer 4.

    I go to a third worksheet, start a pivot table, and in the table range I enter 'Summary:Summary2'!$B$5:$F$5, then click OK. This gives the error "data reference source is not valid". Can someone tell me how to select the row from two different worksheets in a pivot table? I also want to be able to add the issues of customers between the two locations and get % completion for each location. Can someone please help?

  • No results are returned when using Flickr JSON request

    - by Martijn1981
    I'm still fairly new to AJAX and I'm experimenting with Twitter and Flickr. Twitter is working fine so far, but I've run into some issues with the Flickr API: I'm getting no results back. The URL seems to be working fine and I'm pointing to the right object containing the array ('items'). Can anybody tell me what I'm doing wrong, please? Thanks!

        $('#show_pictures').click(function(e){
            e.preventDefault();
            $.ajax({
                url: 'http://api.flickr.com/services/feeds/photos_public.gne?format=json&jsoncallback=?&tags=home',
                dataType: 'jsonp',
                success: function(data) {
                    $.each(data.items, function(i, item){
                        $('<div></div>')
                            .hide()
                            .append('<h1>' + item.title + '</h1>')
                            .append('<img src="' + item.media.m + '" >')
                            .append('<p>' + item.description + '</p>')
                            .appendTo('#results')
                            .fadeIn();
                    });
                },
                error: function(data) {
                    alert('Something went wrong!');
                }
            });
        });

  • [jQuery] Sort contents alphabetically

    - by James
    So I am appending the following after an AJAX call. This AJAX call may happen several times, returning several data items, and I am trying to use TinySort [http://plugins.jquery.com/project/TinySort] to sort the list every time, so that the newly added items are integrated nicely and sorted alphabetically. Unfortunately it doesn't seem to be working: the data itself is appended correctly, but the sorting isn't occurring. Any ideas?

        var artists = [];
        $.each(data.artists, function(k, v) {
            artists.push('<section id="artist:' + v.name + '" class="artist"><div class="span-9"><img alt="' + v.name + '" width="34" height="34" class="photo" src="' + v.photo + '" /><strong>' + v.name + '</strong><br/><span>' + v.events + ' upcoming gig');
            if (v.events != 1) {
                artists.push('s');
            }
            artists.push('</span></div><div class="span-2 align-right last">Last</div><div class="clear"></div></section>');
        });
        $('div.artists p').remove();
        $('div.artists div.next').remove();
        $('div.artists').append(artists.join('')).append('<div class="next"><a href="#">Next</a></div>');
        $('div.artists section').tsort('section[id]', {orderby: 'id'});

    Thanks!

  • How can I ignore a block's content when reading a file in Perl?

    - by Nano HE
    Hello. I want to skip any block whose start line contains "MaterializeU4()" in the read_block() subroutine below, but my attempt failed.

        # Read a constant definition block from a file handle.
        # Void return when there is no data left in the file.
        # Otherwise return an array ref containing the lines in the block.
        sub read_block {
            my $fh = shift;
            my @lines;
            my $block_started = 0;
            while( my $line = <$fh> ) {
                # how to correct my code below? I don't need the 2nd block's content.
                $block_started++ if ( ($line =~ /^(status)/) && (index($line, "MaterializeU4") != 0) );
                if( $block_started ) {
                    last if $line =~ /^\s*$/;
                    push @lines, $line;
                }
            }
            return \@lines if @lines;
            return;
        }

    Data as below:

        __DATA__
        status DynTest = <dynamic 100>
        vid = 10002
        name = "DynTest"
        units = ""

        status VIDNAME9000 = <U4 MaterializeU4()>
        vid = 9000
        name = "VIDNAME9000"
        units = "degC"

        status DynTest = <U1 100>
        vid = 100
        name = "Hello"
        units = ""

    Output:

        <StatusVariables>
            <SVID logicalName="DynTest" type="L" value="100" vid="10002" name="DynTest" units=""></SVID>
            <SVID logicalName="VIDNAME9000" type="L" value="MaterializeU4()" vid="9000" name="VIDNAME9000" units="degC"></SVID>
            <SVID logicalName="DynTest" type="L" value="100" vid="100" name="Hello" units=""></SVID>
        </StatusVariables>

    [Updated] I printed the value of index($line, "MaterializeU4") and it output 25, so I updated the condition as below:

        $block_started++ if ( ($line =~ /^(status)/) && (index($line, "MaterializeU4") != 25) );

    Now it works. Any comments on my approach are welcome. Thank you.

  • plot an item map (based on difficulties)

    - by Tyler Rinker
    I have a data set of item difficulties that correspond to items on a questionnaire. It looks like this:

            item                                            difficulty
        1   ITEM_6: I DESTROY THINGS BELONGING TO OTHERS    2.31179818
        2   ITEM_11: I PHYSICALLY ATTACK PEOPLE             1.95215238
        3   ITEM_5: I DESTROY MY OWN THINGS                 1.93479536
        4   ITEM_10: I GET IN MANY FIGHTS                   1.62610855
        5   ITEM_19: I THREATEN TO HURT PEOPLE              1.62188759
        6   ITEM_12: I SCREAM A LOT                         1.45137544
        7   ITEM_8: I DISOBEY AT SCHOOL                     0.94255210
        8   ITEM_3: I AM MEAN TO OTHERS                     0.89941812
        9   ITEM_20: I AM LOUDER THAN OTHER KIDS            0.72752197
        10  ITEM_17: I TEASE OTHERS A LOT                   0.61792597
        11  ITEM_9: I AM JEALOUS OF OTHERS                  0.61288399
        12  ITEM_4: I TRY TO GET A LOT OF ATTENTION         0.39947791
        13  ITEM_18: I HAVE A HOT TEMPER                    0.32209970
        14  ITEM_13: I SHOW OFF OR CLOWN                    0.31707701
        15  ITEM_7: I DISOBEY MY PARENTS                    0.20902108
        16  ITEM_2: I BRAG                                  0.19923607
        17  ITEM_15: MY MOODS OR FEELINGS CHANGE SUDDENLY   0.06023317
        18  ITEM_14: I AM STUBBORN                         -0.31155481
        19  ITEM_16: I TALK TOO MUCH                       -0.67777282
        20  ITEM_1: I ARGUE A LOT                          -1.15013758

    I want to make an item map of these items that looks similar (not exactly) to this (I created this in Word but it lacks true scaling as I just eyeballed the scale). It's not really a traditional statistical graphic, so I don't really know how to approach it. I don't care what graphics system this is done in, but I am more familiar with ggplot2 and base. I would greatly appreciate a method of plotting this sort of unusual plot. Here's the data set (I'm including it as I was having difficulty using read.table on the data frame above):

        DF <- structure(list(item = structure(c(17L, 3L, 16L, 2L, 11L, 4L,
            19L, 14L, 13L, 9L, 20L, 15L, 10L, 5L, 18L, 12L, 7L, 6L, 8L, 1L),
            .Label = c("ITEM_1: I ARGUE A LOT", "ITEM_10: I GET IN MANY FIGHTS",
            "ITEM_11: I PHYSICALLY ATTACK PEOPLE", "ITEM_12: I SCREAM A LOT",
            "ITEM_13: I SHOW OFF OR CLOWN", "ITEM_14: I AM STUBBORN",
            "ITEM_15: MY MOODS OR FEELINGS CHANGE SUDDENLY", "ITEM_16: I TALK TOO MUCH",
            "ITEM_17: I TEASE OTHERS A LOT", "ITEM_18: I HAVE A HOT TEMPER",
            "ITEM_19: I THREATEN TO HURT PEOPLE", "ITEM_2: I BRAG",
            "ITEM_20: I AM LOUDER THAN OTHER KIDS", "ITEM_3: I AM MEAN TO OTHERS",
            "ITEM_4: I TRY TO GET A LOT OF ATTENTION", "ITEM_5: I DESTROY MY OWN THINGS",
            "ITEM_6: I DESTROY THINGS BELONGING TO OTHERS", "ITEM_7: I DISOBEY MY PARENTS",
            "ITEM_8: I DISOBEY AT SCHOOL", "ITEM_9: I AM JEALOUS OF OTHERS"),
            class = "factor"), difficulty = c(2.31179818110545, 1.95215237740899,
            1.93479536058926, 1.62610855327073, 1.62188759115818, 1.45137543733965,
            0.942552101641177, 0.899418119889782, 0.7275219669431, 0.617925967008653,
            0.612883990709181, 0.399477905189577, 0.322099696946661, 0.31707700560997,
            0.209021078266059, 0.199236065264793, 0.0602331732900628, -0.311554806052955,
            -0.677772822413495, -1.15013757942119)), .Names = c("item", "difficulty"),
            row.names = c(NA, -20L), class = "data.frame")

    Thank you in advance.

  • Skip reading headers in MATLAB

    - by Paul
    I had a similar question, but what I am trying now is to read files in .txt format into MATLAB. My problem is with the headers. Many times, due to errors, the system rewrites the headers in the middle of the file and then MATLAB cannot read the file. Is there a way to skip them? I know I can skip reading some characters if I know what the character is. Here is the code I am using:

        [c,pathc] = uigetfile({'*.txt'},'Select the data','V:\data');
        file = [pathc c];
        data = dlmread(file, ',', 1, 4);

    This way I let the user pick the file. My files are huge, typically [86400 125], so naturally each has 125 header fields or more, depending on the file. Because the files are so big I cannot copy one here, but the format is like:

        day       time   col1  col2  col3  col4 ...
        2/3/2010  0:10   3.4   4.5   5.6   4.4  ...
        ...

    Thanks.

  • SCons and dependencies for python function generating source

    - by elmo
    I have an input file data, a Python function parse and a template. What I am trying to do is use the parse function to get a dictionary out of data and use that to replace fields in template. To make this a bit more generic (I perform the same action in a few places) I have defined a custom builder to do so. Below is the definition of the custom builder; values is a dictionary of the form { 'name': (data_file, parse_function) } (you don't really need to read through this, I simply put it here for completeness).

        def TOOL_ADD_FILL_TEMPLATE(env):
            def FillTemplate(env, output, template, values):
                out = output[0]
                subs = {}
                for name, (node, process) in values.iteritems():
                    def Process(env, target, source):
                        with open( env.GetBuildPath(target[0]), 'w') as out:
                            out.write( process( source[0] ) )
                    builder = env.Builder( action = Process )
                    subs[name] = builder( env, env.GetBuildPath(output[0])+'_'+name+'_processed.cpp', node )[0]

                def Fill(env, target, source):
                    values = dict( (name, n.get_contents()) for name, n in subs.iteritems() )
                    contents = template[0].get_contents().format( **values )
                    open( env.GetBuildPath(target[0]), 'w').write( contents )

                builder = env.Builder( action = Fill )
                builder( env, output[0], template + subs.values() )
                return output

            env.Append(BUILDERS = {'FillTemplate': FillTemplate})

    It works fine when it comes to checking whether data or template changed; if either did, it rebuilds the output. It even works if I edit the process function directly. However, if my process function looks like this:

        def process( node ):
            return subprocess(node)

    and I edit subprocess, the change goes unnoticed. Is there any way to get correct builds without making the process functions always be invoked?
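
    One direction that is often suggested for this kind of problem (a sketch under assumptions, not a verified fix for this exact build) is to make the text of the processing function part of the dependency graph, so that editing it forces a rebuild. The variant of the loop below reuses the names from the question and assumes SCons' Value nodes and Python's inspect.getsource:

        # Sketch (assumptions noted above): add the processing function's source
        # text as an extra source node, so SCons re-runs the builder when it changes.
        import inspect

        for name, (node, process) in values.iteritems():
            fn_text = env.Value(inspect.getsource(process))   # checksummed like file contents
            target  = env.GetBuildPath(output[0]) + '_' + name + '_processed.cpp'
            builder = env.Builder(action=Process)
            subs[name] = builder(env, target, [node, fn_text])[0]
            # equivalently: env.Depends(subs[name], fn_text)

    Note that inspect.getsource only captures the named function, not the helpers it calls, so for the subprocess case the helper's source (or a version string for it) would need to go into the Value node as well.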

  • How do I efficiently parse a CSV file in Perl?

    - by Mike
    I'm working on a project that involves parsing a large CSV-formatted file in Perl and am looking to make things more efficient. My approach has been to split() the file by lines first, and then split() each line again by commas to get the fields. But this is suboptimal since at least two passes on the data are required (once to split by lines, then once again for each line). This is a very large file, so cutting processing in half would be a significant improvement to the entire application.

    My question is: what is the most time-efficient means of parsing a large CSV file using only built-in tools? Note: each line has a varying number of tokens, so we can't just ignore lines and split by commas only. Also, we can assume fields will contain only alphanumeric ASCII data (no special characters or other tricks). Also, I don't want to get into parallel processing, although it might work effectively.

    Edit: It can only involve built-in tools that ship with Perl 5.8. For bureaucratic reasons, I cannot use any third-party modules (even if hosted on CPAN).

    Another edit: Let's assume that our solution is only allowed to deal with the file data once it is entirely loaded into memory.

    Yet another edit: I just grasped how stupid this question is. Sorry for wasting your time. Voting to close.

  • UDP: Client started before Server

    - by Chris
    I have a bit of a glitch in my game just now. Everything runs fine if the server is started before the client; however, when the client is started before the server they never connect. This is all UDP. The problem happens when the client tries to call recvfrom() before the server has started: when this happens the client never finds the server and the server never finds the client. The resulting error is a "would block". If I stop the client using recvfrom and start the client before the server (the client is still sending data, it's just not receiving it), they both find each other with no problem. What's the solution for this? The way it seems just now is that the client cannot call recvfrom without a server being active or it all falls apart. Is there a check that can be done to see if data is sitting on a certain port (data the server would send)? Or is there a better way to do this? Some code...

        // Server operation - UDPSocket is a class
        UDPSocket.Initialise();
        UDPSocket.MakeNonBlocking();
        UDPSocket.Bind(LOCALPORT);
        int n = UDPSocket.Receive(&thePacket);
        if (n > 0)
            UDPSocket.Send(&sendPacket);

        // Client...
        UDP.Initialise();
        UDP.MakeNonBlocking();
        UDP.SetDestinationAddress(SERVERIP, SERVERPORT);
        serverStatus = UDP.Receive(&recvPacket);
        if (serverStatus > 0)
        {
            // Do some things
            UDP.Send(dPacket); // Try and reconnect with server
        }

    Thanks
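
    As a general illustration of the usual pattern here (written in Python rather than the asker's socket wrapper, with made-up addresses), the client keeps announcing itself and treats "would block" as "nothing has arrived yet", so it does not matter which side starts first:

        # Illustrative sketch only: a UDP client that tolerates starting before the
        # server by resending a hello packet and polling a non-blocking socket.
        import socket
        import time

        SERVER = ("127.0.0.1", 9999)   # placeholder address/port

        sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
        sock.setblocking(False)

        while True:
            sock.sendto(b"hello", SERVER)          # keep announcing until the server answers
            try:
                data, addr = sock.recvfrom(1024)   # raises BlockingIOError while nothing has arrived
                print("server replied:", data, "from", addr)
                break
            except BlockingIOError:
                time.sleep(0.5)                    # nothing yet; try again shortly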

  • AJAX/JSONP question: "Access is denied" in IE when making a cross-domain request

    - by Sisir
    OK, here we go. I have already searched Stack Overflow for the answer and found some useful info, but I want to clear up a few more things. I also searched the net, but with no real help. I have worked with some APIs (Yelp, outside.in). With Yelp I used to inject a script tag into the head with the request URL to the API plus a callback function, and it worked fine in all browsers. But while using the outside.in API, when I call the URL the callback is not invoked. Yelp has a URL field that can be used like callback=callbackfunction, so the callback is called automatically. But in outside.in there is no such field available. Is there a standard way to request a callback that works regardless of server/API?

    I also tried a standard AJAX request using jQuery's $.ajax() function. It worked on my local PC in both IE and the other browsers, but the cross-domain request is not working in IE, showing the error "Access is denied"; the other browsers seem OK. Firebug in my FF also doesn't report any errors. Outside.in has a JavaScript example but it is too hard for me to understand: github.com/outsidein/api-examples/tree/master/javascript/browser/

    Site I am working on: http://citystir.com
    Yelp: yelp.com
    outside.in: outside.in
    Technical info: I am using WAMP server locally, WordPress for hosting, GoDaddy, and Apache on Linux for the remote server.

    Code: using jQuery $.ajax, the URL is like:

        "http://hyperlocal-api.outside.in/v1.1/states/Illinois/cities/chicago/stories?dev_key=" + key + "&sig=" + signeture + "&limit=3"

        function makeOutsideRequest(url){
            $.ajax({
                url: url,
                dataType: 'json',
                type: 'GET',
                success: function (data, status, xhr) {
                    if (data == null) {
                        alert("An error occurred connecting to " + url +
                              ". Please ensure that the server is running and configured to allow cross-origin requests.");
                    } else {
                        printHomeNews(data);
                    }
                },
                error: function (xhr, status, error) {
                    alert("An error occurred - check the server log for a stack trace.");
                }
            });
        }

    Thanks!

  • Return the exact ID using fetch_assoc

    - by Selom
    Hi, I have a small problem in my code and I need your help. I'm using fetch_assoc to get data from the database and I need to get the ID number of each of the returned values. The issue is that my code only returns the ID number of the last row. Here's my code:

        <form method="post" action="action.php">
            <select name="album" style="border:1px solid #CCC; font-size:11px; padding:1px">
            <?php
                $sql = "SELECT * FROM table";
                $stmt = $dbh->prepare($sql);
                $stmt->execute();
                while($row = $stmt->fetch(PDO::FETCH_ASSOC)) {
                    $album_ID = $row['album_ID'];
                    $value = $row['album_name'];
                    print "<option value ='". $value ."'>". $value. "</option>";
                }
            ?>
            </select>
            <input type="hidden" name="album_ID" value="<?php print $album_ID?>"/>
        </form>

    I would like the hidden input to hold the selected album's ID, but it always holds the album ID of the last row. Please help.

  • Socket stops communicating

    - by user1392992
    I'm running Python 2.7 code on a Raspberry Pi that receives serial data from an Arduino, processes it, and sends it to a Windows box over a wifi link. The Pi is wired to a Linksys router running in client-bridge mode, and that router connects over wifi to another Linksys router to which the Windows box is wired. The code on the Pi runs fine for some (apparently) random interval, and then the Pi becomes unreachable from the Windows box.

    I'm running PuTTY on the Windows machine to connect to the Pi, and when the failure occurs I get a message saying there's been a network error and the Pi is not reachable. Pinging the Pi from the Windows machine works fine until the error, at which time it produces "Reply from 192.168.0.129: Destination host unreachable." The client-bridge router to which the Pi is connected remains reachable. I've got the networking code on the Pi wrapped in an exception handler, and when it fails it shows the following:

        Ethernet problem:
        Traceback (most recent call last):
          File "garage.py", line 108, in <module>
            s.connect((host, port))
          File "/usr/lib/python2.7/socket.py", line 224, in meth
            return getattr(self._sock,name)(*args)
        error: [Errno 113] No route to host
        None

    The relevant Python code looks like:

        import socket
        import traceback

        host = '192.168.0.129'
        port = 31415

    in the setup, and after serial data has been processed:

        try:
            bline = strline.encode('utf-8')
            s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
            s.connect((host, port))
            s.send(bline)
            s.close()
        except:
            print "Ethernet problem: "
            print traceback.print_exc()

    where strline contains the processed data. As I said, this runs fine for a few hours, more or less, before failing. Any ideas?

    EDIT: When PuTTY fails, its error message is "Network Error: Software caused connection abort."
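
    Regardless of what is dropping the link at the network layer, one defensive pattern for the sending side (a sketch only, not a diagnosis of the outage) is to retry each short TCP send a few times with a delay, so a transient "No route to host" does not lose a reading. The host and port below mirror the question; the retry and delay values are made up:

        # Defensive sketch: retry the send with a short delay instead of giving up
        # on the first failed connect. Retry/delay values are placeholders.
        import socket
        import time

        HOST, PORT = '192.168.0.129', 31415

        def send_line(line, retries=5, delay=2.0):
            payload = line.encode('utf-8')
            for attempt in range(retries):
                try:
                    s = socket.create_connection((HOST, PORT), timeout=5)
                    s.sendall(payload)
                    s.close()
                    return True
                except (socket.error, OSError) as exc:   # "No route to host", timeouts, refused, ...
                    print("send failed: %s (attempt %d/%d)" % (exc, attempt + 1, retries))
                    time.sleep(delay)
            return False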

  • How to strengthen MySQL database server security?

    - by i need help
    Suppose we were to use server1 for all files (a file server) and server2 for the MySQL database (a database server). In order for websites on server1 to access the database on server2, doesn't server1 need to connect to the IP address of the second (MySQL) server? In that case it is a remote MySQL connection. However, I have seen people comment on the security issue: remote access to MySQL is not very secure. When your remote computer first connects to your MySQL database, the password is encrypted before being transmitted over the Internet. But after that, all data is passed as unencrypted "plain text". If someone was able to view your connection data (such as a "hacker" capturing data from an unencrypted WiFi connection you're using), that person would be able to view part or all of your database. So I'm just wondering about ways to secure it:

        - Allow remote MySQL access only from server1's static IP address
        - Allow remote access from server1 by restricting the allowed connection port to 3306
        - Change 3306 to another port?

    Any advice?

  • object references an unsaved transient instance

    - by developer
    Hi, I have two tables, user and userprofile, both with almost identical fields. The user table references the userprofile table by its primary key ID. My requirement is that on the click of a button I need to dump the user table record into the userprofile table. For a particular user record, if there is a corresponding userprofile entry, I am able to dump the data successfully, but if there is no record in the userprofile table then I need to create a new record by dumping all the data. My problem is that I can update the data when the record is already present in the userprofile table, but in the case where I have to create a new record I get the error below:

        object references an unsaved transient instance - save the transient instance before flushing

    The mapping is:

        <class name="User">
            <id name="ID" type="Int32">
                <generator class="native" />
            </id>
            <many-to-one name="Pid" class="UserProfile" />
        </class>

    UserProfile is another table, and Pid above references the primary key ID of the UserProfile table.

  • Indexing table with duplicates MySQL/SQL Server with millions of records

    - by Tesnep
    I need help with indexing in MySQL. I have a table in MySQL with the following columns:

        ID  Store_ID  Feature_ID  Order_ID  Viewed_Date  Deal_ID  IsTrial

    The ID is auto-generated. Store_ID goes from 1 to 8. Feature_ID goes from 1 to, let's say, 100. Viewed_Date is the date and time at which the data is inserted. IsTrial is either 0 or 1. You can ignore Order_ID and Deal_ID for this discussion. There are millions of rows in the table, and we have a reporting backend that needs to count the number of views in a certain period (or overall) where trial is 0, for a particular store id and a particular feature. The query takes the form of:

        select count(viewed_date)
        from theTable
        where viewed_date between '2009-12-01' and '2010-12-31'
          and store_id = '2'
          and feature_id = '12'
          and Istrial = 0

    In SQL Server you can have a filtered index to use for IsTrial. Is there anything similar to this in MySQL? Also, Store_ID and Feature_ID have a lot of duplicate data. I created an index using Store_ID and Feature_ID. Although this seems to have decreased the search time, I need a better improvement than this. Right now I have more than 4 million rows. To answer a particular query like the one above, it looks at 3.5 million rows in order to give me the count of 500k rows.

    PS. I forgot to add the view_date filter in the query originally; I have now done this.

  • Which are the most useful techniques for faster Bluetooth?

    - by Mike Howard
    Hi. I'm adding peer-to-peer Bluetooth using GameKit to an iPhone shoot-em-up, so speed is vital. I'm sending about 40 messages a second each way, most of them with the faster GKSendDataUnreliable, all serialized with NSCoding. In testing between a 3G and a 3GS, this is slowing the 3G down a lot more than I'd like. I'm wondering where I should concentrate my efforts to speed it up.

    How much slower is GKSendDataReliable? For the few packets that have to get there, would it be faster to send with GKSendDataUnreliable and have the peer send an acknowledgement, so I can send again if I don't get the ack within, say, 100 ms?

    How much faster would it be to create the NSData instance using a regular C array rather than archiving with the NSCoding protocol? Is this serialization process (for about a dozen floats) just as slow as you'd expect from object creation/deallocation overhead, or is something particularly slow happening?

    I heard that (for example) sending four separate sets of data is much, much slower than sending one piece of data four times the size. Would I make a significant saving by combining pieces of data that wouldn't always go together into the same packet when they happen at the same time?

    Are there any other Bluetooth performance secrets I've missed? Thanks for your help.

  • Web services or shared database for (game) server communication?

    - by jaaronfarr
    We have two server clusters: the first is made up of typical web applications backed by SQL databases. The second is highly optimized multiplayer game servers that keep all data in memory. Both clusters communicate with clients via HTTP (Ajax with JSON). There are a few cases in which we need to share data between the two server types; for example, reporting back and storing the results of a game (which should ultimately end up in the database). We're considering several approaches for inter-server communication:

        - Just share the MySQL databases between clusters (introduce SQL to the game servers)
        - Share data in a distributed key-value store like Memcache, Redis, etc.
        - Use an RPC technology like Google ProtoBufs or Apache Thrift
        - Use RESTful web services (the game server would POST back to the web servers, for example)

    At the moment, we're leaning towards web services or just sharing the database. Sharing the database seems easy, but we're concerned this adds extra memory and a new dependency to the game servers. Web services provide good separation of concerns and fit with the existing Ajax we use, but add complexity, overhead and many more ways for communication to fail. Are there any other good reasons not to use one or the other approach? Which would be easier to scale?

  • SQL Server query

    - by carrot_programmer_3
    Hi, I have a SQL Server DB containing a registrations table that I need to plot on a graph over time. The issue is that I need to break this down by where the user registered from (e.g. website, WAP site, or a mobile application). The resulting output data should look like this:

        [date]       [num_reg_website]  [num_reg_wap_site]  [num_reg_mobileapp]
        1 FEB 2010   24                 35                  64
        2 FEB 2010   23                 85                  48
        3 FEB 2010   29                 37                  79
        etc...

    The source table is as follows:

        UUID (int), signupdate (datetime), requestsource (varchar(50))

    Some sample data in this table looks like this:

        1001, 2010-02-2:00:12:12, 'website'
        1002, 2010-02-2:00:10:17, 'app'
        1003, 2010-02-3:00:14:19, 'website'
        1004, 2010-02-4:00:16:18, 'wap'
        1005, 2010-02-4:00:18:16, 'website'

    Running the following query returns one data column, 'total registrations', for the website registrations, but unfortunately I'm not sure how to produce multiple columns:

        select CAST(FLOOR(CAST([signupdate] AS FLOAT)) AS DATETIME) as [signupdate],
               count(UUID) as 'total registrations'
        FROM [UserRegistrationRequests]
        WHERE requestsource = 'website'
        group by CAST(FLOOR(CAST([signupdate] AS FLOAT)) AS DATETIME)

  • PHP: Building A Stock Index Using Yahoo Finance [on hold]

    - by Jeremy
    I have the following code, which is pulling data but not outputting it properly.

        <?php
        class YahooStock {
            public function getQuotes(){
                $stocks = array();
                $result = array();
                $s = file_get_contents("http://finance.yahoo.com/d/quotes.csv?s=AMZN+CRM+CNQR+CTL+CTXS+DWRE+EMC+GOOG+HP+IBM+JIVE+LNKD+MKTO+MSFT+N+NFLX+NOW+ORCL+RAX+SAP+T+VEEV+VMW+VZ+WDAY&f=npf6&e=.csv");
                $data = explode( ',', $s);
                $result = $data;
                return $result;
            }
        }

        $objYahooStock = new YahooStock;
        foreach( $objYahooStock->getQuotes() as $code => $result){
            echo 'Name:' . $result[0] . '<br />';
            echo 'Price:' . $result[1] . '<br />';
            echo 'Float:' . $result[2] . '<br />';
        }
        ?>

    The output looks like it is separating every character instead of every column:

        Name:"
        Price:A
        Float:m
        Name:
        Price:I
        Float:n
        Name:3
        Price:3
        Float:2
        Name:
        Price:
        Float:

    Any help is appreciated!
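
    For contrast with splitting the whole response on commas, here is a sketch of the row-by-row idea in Python (purely illustrative, not the asker's PHP; the feed URL comes from the question and may no longer be live, and the ticker list is shortened for brevity):

        # Illustrative sketch: read the quotes CSV one row at a time, so each row
        # yields name, price and float instead of the whole response being split
        # into individual comma-separated tokens.
        import csv
        import urllib.request

        URL = "http://finance.yahoo.com/d/quotes.csv?s=AMZN+CRM+MSFT&f=npf6&e=.csv"

        with urllib.request.urlopen(URL) as resp:
            text = resp.read().decode("utf-8")

        for name, price, shares_float in csv.reader(text.splitlines()):
            print("Name:", name)
            print("Price:", price)
            print("Float:", shares_float)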

  • jQuery AJAX ($.ajax) not working in Chrome, please help

    - by racky
    Hi, I need a little help figuring out why the following code does not work on Google Chrome 5 / Windows XP. It works well on all other browsers (IE, FF, Safari, Opera, etc.). Can someone shed some light on this?

        /* AJAX Request */
        jq("#a-post-request").unbind("click").bind("click", function(e){
            //jq("#loading").css({"display":"block"});
            jq.ajax({
                url: "search_data_table.html",
                type: "get",
                cache: false,
                error: function(){ alert("No data found for your search."); },
                success: function(data){
                    jq("#search-results-table tbody").empty().append(data);
                    jq("#search-results").css({"display":"block"});
                    jq("#search-results-table").trigger("update"); // this one is for the tablesorter plugin
                    // set sorting column and direction; this will sort on the first column
                    var sorting = [[0,0]]; // this one is for the tablesorter plugin
                    // sort on the first column
                    jq("#search-results-table").trigger("sorton", [sorting]); // this one is for the tablesorter plugin
                    e.preventDefault();
                }
            });
        });

    Many thanks, Racky

  • Which way is preferred when doing asynchronous WCF calls?

    - by Mikael Svenson
    When invoking a WCF service asynchronously, there seem to be two ways it can be done.

    1.

        public void One()
        {
            WcfClient client = new WcfClient();
            client.BegindoSearch("input", ResultOne, null);
        }

        private void ResultOne(IAsyncResult ar)
        {
            WcfClient client = new WcfClient();
            string data = client.EnddoSearch(ar);
        }

    2.

        public void Two()
        {
            WcfClient client = new WcfClient();
            client.doSearchCompleted += TwoCompleted;
            client.doSearchAsync("input");
        }

        void TwoCompleted(object sender, doSearchCompletedEventArgs e)
        {
            string data = e.Result;
        }

    And with the new Task<T> class we have an easy third way, by wrapping the synchronous operation in a task.

    3.

        public void Three()
        {
            WcfClient client = new WcfClient();
            var task = Task<string>.Factory.StartNew(() => client.doSearch("input"));
            string data = task.Result;
        }

    They all give you the ability to execute other code while you wait for the result, but I think Task<T> gives better control over what you execute before or after the result is retrieved. Are there any advantages or disadvantages to using one over the other? Or scenarios where one way of doing it is preferable?
