Search Results

Search found 58653 results on 2347 pages for 'seed data'.


  • How to structure a set of RESTful URLs

    - by meetamit
    Kind of a REST lightweight here... Wondering which URL scheme is more appropriate for a stock market data app (BTW, all queries will be GETs as the client doesn't modify data):

    Scheme 1 examples:
        /stocks/ABC/news
        /indexes/XYZ/news
        /stocks/ABC/time_series/daily
        /stocks/ABC/time_series/weekly
        /groups/ABC/time_series/daily
        /groups/ABC/time_series/weekly

    Scheme 2 examples:
        /news/stock/ABC
        /news/index/XYZ
        /time_series/stock/ABC/daily
        /time_series/stock/ABC/weekly
        /time_series/index/XYZ/daily
        /time_series/index/XYZ/weekly

    Scheme 3 examples:
        /news/stock/ABC
        /news/index/XYZ
        /time_series/daily/stock/ABC
        /time_series/weekly/stock/ABC
        /time_series/daily/index/XYZ
        /time_series/weekly/index/XYZ

    Scheme 4: Something else???

    The point is that for any data being requested, the URL needs to encapsulate whether an item is a Stock or an Index. And, with all the RESTful talk about resources, I'm confused about whether my primary resource is the stock & index or the time_series & news. Sorry if this is a silly question :/ Thanks!
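
    A quick way to weigh the schemes is to write them down as routes. Here is a minimal sketch of Scheme 1 in ASP.NET Web API 2 attribute routing (the controller and payloads are invented for illustration; the same shape works in any framework):

        using System.Web.Http;

        // Hypothetical controller for Scheme 1: the stock is the primary
        // resource, and news / time series hang off it as sub-resources.
        [RoutePrefix("stocks")]
        public class StocksController : ApiController
        {
            // GET /stocks/ABC/news
            [HttpGet, Route("{symbol}/news")]
            public IHttpActionResult GetNews(string symbol)
            {
                return Ok(new[] { "headline for " + symbol });   // stand-in payload
            }

            // GET /stocks/ABC/time_series/daily  (or /weekly)
            // "period" would be validated in the action or via a route constraint.
            [HttpGet, Route("{symbol}/time_series/{period}")]
            public IHttpActionResult GetTimeSeries(string symbol, string period)
            {
                return Ok(new { symbol = symbol, period = period }); // stand-in payload
            }
        }

    A mirrored IndexesController under /indexes covers the encapsulation concern: the first path segment carries the stock-vs-index distinction, while news and time_series stay sub-resources of the thing they describe.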

    Read the article

  • jqGrid - dynamically load different drop down values for different rows depending on another column value

    - by Renso
    Goal: As we all know, the jqGrid examples in the demo and the Wiki always refer to static values for drop down boxes. This of course is a personal preference, but in a dynamic design these values should be populated from the database/xml file, etc., ideally JSON formatted. Can you do this in jqGrid? Yes, but with some custom coding, which we will briefly show below (refer to some of my other blog entries for a more detailed discussion on this topic). What you CANNOT do in jqGrid, referring here up to and including version 3.8.x, is load different drop down values for different rows in the jqGrid. Well, not without some trickery, which is what this discussion is about.

    Issue: Of course the issue is that jqGrid has been designed for high performance, and thus I have no issue with it keeping a reference to a single drop down values list for every column. This way, whether you have 500 rows or one, each row only refers to a single list for that particular column. Nice! So how easy would it be to simply traverse the grid once loaded, on gridComplete or loadComplete, and load the select tag's options from scratch, via ajax, from a memory variable, hard coded, etc.? Impossible! Since there is no embedded SELECT tag within each cell containing the drop down values (remember, each cell only holds a reference to that list in memory), all you will see when you inspect the cell prior to clicking on it, or even before and on beforeEditCell, is an empty <TD></TD>. Trying to load that list via a click event on that cell will temporarily load the list, but jqGrid's last internal callback event will remove it and replace it with the old one, and you are back to square one.

    Solution: Yes, after spending a few hours on this I found a solution to the problem that does not require any updates to the jqGrid source code, thank GOD! Before we get into the coding details: the solution here can of course be customized to suit your specific needs. This one loads the entire drop down list that would be needed across all rows once, into a global variable. I then parse this object, which contains all the properties I need, to filter the rows depending on which ones I want the user to see, based off of another cell value in that row. This only happens when clicking the cell, so there is no performance penalty. You may of course load it via ajax when the user clicks the cell, but I found it more efficient to load the entire list as part of jqGrid's normal editoptions: { multiple: false, value: listingStatus } colModel options, which again keeps only a reference to the single list, no duplication. Let's get into the meat and potatoes of it.
        var acctId = $('#Id').val();
        var data = $.ajax({
            url: $('#ajaxGetAllMaterialsTrackingLookupDataUrl').val(),
            data: { accountId: acctId },
            dataType: 'json',
            async: false,
            success: function(data, result) {
                if (!result) alert('Failure to retrieve the Alert related lookup data.');
            }
        }).responseText;
        var lookupData = eval('(' + data + ')');
        var listingCategory = lookupData.ListingCategory;
        var listingStatus = lookupData.ListingStatus;

        // Build the editoptions value string from the lookup data.
        var catList = '{';
        $(lookupData.ListingCategory).each(function() {
            catList += this.Id + ':"' + this.Name + '",';
        });
        catList += '}';

        var lastsel;
        var ignoreAlert = true;

        $(item).jqGrid({
            url: listURL,
            postData: '',
            datatype: "local",
            colNames: ['Id', 'Name', 'Commission<br />Rep', 'Business<br />Group', 'Order<br />Date', 'Edit', 'TBD', 'Month', 'Year', 'Week', 'Product', 'Product<br />Type', 'Online/<br />Magazine', 'Materials', 'Special<br />Placement', 'Logo', 'Image', 'Text', 'Contact<br />Info', 'Everthing<br />In', 'Category', 'Status'],
            colModel: [
                { name: 'Id', index: 'Id', hidden: true, hidedlg: true },
                { name: 'AccountName', index: 'AccountName', align: "left", resizable: true, search: true, width: 100 },
                { name: 'OnlineName', index: 'OnlineName', align: 'left', sortable: false, width: 80 },
                { name: 'ListingCategoryName', index: 'ListingCategoryName', width: 85, editable: true, hidden: false, edittype: "select", editoptions: { multiple: false, value: eval('(' + catList + ')') }, editrules: { required: false }, formatoptions: { disabled: false } }
            ],
            jsonReader: {
                root: "List",
                page: "CurrentPage",
                total: "TotalPages",
                records: "TotalRecords",
                userdata: "Errors",
                repeatitems: false,
                id: "0"
            },
            rowNum: $rows,
            rowList: [10, 20, 50, 200, 500, 1000, 2000],
            imgpath: jQueryImageRoot,
            pager: $(item + 'Pager'),
            shrinkToFit: true,
            width: 1455,
            recordtext: 'Traffic lines',
            sortname: 'OrderDate',
            viewrecords: true,
            sortorder: "asc",
            altRows: true,
            cellEdit: true,
            cellsubmit: "remote",
            cellurl: editURL + '?rows=' + $rows + '&page=1',
            loadComplete: function() {
            },
            gridComplete: function() {
            },
            loadError: function(xhr, st, err) {
            },
            afterEditCell: function(rowid, cellname, value, iRow, iCol) {
                var select = $(item).find('td.edit-cell select');
                $(item).find('td.edit-cell select option').each(function() {
                    var option = $(this);
                    var optionId = $(this).val();
                    $(lookupData.ListingCategory).each(function() {
                        if (this.Id == optionId) {
                            // Remove options whose OnlineName does not match this row.
                            if (this.OnlineName != $(item).getCell(rowid, 'OnlineName')) {
                                option.remove();
                                return false;
                            }
                        }
                    });
                });
            },
            search: true,
            searchdata: {},
            caption: "List of all Traffic lines",
            editurl: editURL + '?rows=' + $rows + '&page=1',
            hiddengrid: hideGrid
        });

    Here is the JSON data returned via the ajax call during the jqGrid function call above (NOTE: it must be { async: false }):

        {"ListingCategory":[
            {"Id":29,"Name":"Document Imaging & Management","OnlineName":"RF Globalnet"},
            {"Id":1,"Name":"Ancillary Department Hardware","OnlineName":"Healthcare Technology Online"},
            {"Id":2,"Name":"Asset Tracking","OnlineName":"Healthcare Technology Online"},
            {"Id":3,"Name":"Asset Tracking","OnlineName":"Healthcare Technology Online"},
            {"Id":4,"Name":"Asset Tracking","OnlineName":"Healthcare Technology Online"},
            {"Id":5,"Name":"Document Imaging & Management","OnlineName":"Healthcare Technology Online"},
            {"Id":6,"Name":"Document Imaging & Management","OnlineName":"Healthcare Technology Online"},
            {"Id":7,"Name":"EMR/EHR Software","OnlineName":"Healthcare Technology Online"}
        ]}

    I only need the Id and Name for the drop down list, but the third property in the JSON object is important: it is the one I match up with the OnlineName in the jqGrid column, and then in the loop during afterEditCell I simply remove the ones I don't want the user to see. That's it!

    Read the article

  • designing classes with similar goal but widely different decisional core

    - by Stefano Borini
    I am puzzled on how to model this situation. Suppose you have an algorithm operating in a loop. At every loop iteration, a procedure P must take place, whose role is to modify an input data I into an output data O, such that O = P(I). In reality, there are different flavors of P, say P1, P2, P3 and so on. The choice of which P to run is user dependent, but all P have the same finality: produce O from I. This called for a base class PBase with a method PBase::apply, with specific reimplementations P1::apply(I), P2::apply(I), and P3::apply(I). The actual P class gets instantiated in a factory method, and the loop stays simple. Now I have a case of P4 which follows the same principle, but this time needs additional data from the loop (such as the current iteration, and the average value of O during the previous iterations). How would you redesign for this case?
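
    One common refactoring is to widen the contract so every P receives the loop state, and let the flavors that don't need it ignore it. A sketch of the idea in C# (the question's classes look like C++; all names here are illustrative):

        public class Input { }
        public class Output { }

        // The loop's running state, bundled so the signature never changes again.
        public class LoopContext
        {
            public int Iteration;
            public double AverageOutput;   // average of O over previous iterations
        }

        public abstract class PBase
        {
            public abstract Output Apply(Input input, LoopContext ctx);
        }

        public class P1 : PBase
        {
            // P1..P3 take the context but simply never read it.
            public override Output Apply(Input input, LoopContext ctx) { return new Output(); }
        }

        public class P4 : PBase
        {
            public override Output Apply(Input input, LoopContext ctx)
            {
                // Only P4 actually consults ctx.Iteration / ctx.AverageOutput.
                return new Output();
            }
        }

    The factory and the loop stay as they are; the loop fills one LoopContext it already owns and passes it to whichever P was instantiated. If some later P needs more loop state, the context grows and no signature changes.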

    Read the article

  • Building a chat-like functionality in iOS

    - by Mani BAtra
    I was planning to implement functionality wherein a user can send data to a friend of his, similar to sending messages in WhatsApp. This is how I broke down the problem:
    - The user registers for the app, which equates to user info being stored on a dedicated server, with the phone number as the key identifier.
    - The user selects the friend to send a message to and pushes the data.
    - The receiver polls the server regularly and acknowledges that the data has been received.
    I did a little bit of research and am thinking of implementing this using the XMPP Framework for iOS. Any pointers as to whether this is the correct approach, or some advice in general?

    Read the article

  • error trying to display semi transparent rectangle

    - by scott lafoy
    I am trying to draw a semi-transparent rectangle and I keep getting an error when setting the texture's data:

        dummyRectangle = new Rectangle(0, 0, 8, 8);
        Byte transparency_amount = 100; // 0 transparent; 255 opaque
        dummyTexture = new Texture2D(ScreenManager.GraphicsDevice, 8, 8);
        Color[] c = new Color[1];
        c[0] = Color.FromNonPremultiplied(255, 255, 255, transparency_amount);
        dummyTexture.SetData<Color>(0, dummyRectangle, c, 0, 1);

    The error is on the SetData line: "The size of the data passed in is too large or too small for this resource." Any help would be appreciated. Thank you.
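
    The mismatch in the message is literal: the rectangle covers 8 x 8 = 64 texels, but c holds a single Color. A hedged fix sketch, keeping the names above: size the array to the region, or drop the rectangle overload when writing the whole texture anyway.

        // 64 texels in the 8x8 region, so pass 64 colors.
        Color[] c = new Color[8 * 8];
        for (int i = 0; i < c.Length; i++)
            c[i] = Color.FromNonPremultiplied(255, 255, 255, transparency_amount);

        dummyTexture.SetData<Color>(0, dummyRectangle, c, 0, c.Length);
        // Or, since the region is the entire texture here:
        dummyTexture.SetData(c);

    Also draw with a blend state that matches the data: Color.FromNonPremultiplied pairs with BlendState.NonPremultiplied, while the default BlendState.AlphaBlend expects premultiplied colors.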

    Read the article

  • Google Cloud Messaging (GCM) for turn-based mobile multiplayer server?

    - by Chris
    I'm designing a multiplayer turn-based game for Android (over 3g). I'm thinking the clients will send data to a central server over a socket or http, and receive data via GCM push messaging. I'd like to know if anyone has practical experience with GCM for pushing 'real-time' turn data to game clients. What kind of performance and limitations does it have? I'm also considering using a RESTful approach with GAE or Amazon EC2. Any advice about these approaches is appreciated.
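
    For what it's worth, the server side of that design is small. A hedged sketch of pushing one turn through GCM's HTTP endpoint as documented at the time (API key and registration ID are placeholders; data payloads were limited to roughly 4KB):

        using System.Net.Http;
        using System.Text;
        using System.Threading.Tasks;

        public static class GcmPusher
        {
            public static async Task PushTurnAsync(string apiKey, string regId, string turnJson)
            {
                using (var client = new HttpClient())
                {
                    client.DefaultRequestHeaders.TryAddWithoutValidation("Authorization", "key=" + apiKey);
                    // One receiver, with the turn tucked into the "data" payload.
                    string body = "{\"registration_ids\":[\"" + regId + "\"],"
                                + "\"data\":{\"turn\":" + turnJson + "}}";
                    var content = new StringContent(body, Encoding.UTF8, "application/json");
                    var response = await client.PostAsync("https://android.googleapis.com/gcm/send", content);
                    response.EnsureSuccessStatusCode();   // per-message results come back in the body
                }
            }
        }

    GCM gave no hard latency guarantee, so turn-based games typically treated the push as a wake-up signal and fetched the authoritative turn state from the server on receipt.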

    Read the article

  • Google I/O 2010 - Making smart & scalable Wave robots

    Wave 201. David Byttow, Marcel Prasetya. A smart robot must be able to store persistent data. Wave robots can store data in wave structures, like wavelets, datadocs, and annotations, instead of traditional datastores. A scalable robot must perform operations with minimal bandwidth. Wave robots can optimize by selecting the appropriate amount of context, the optimal events, and narrow filters for events. In this talk, we'll share best practices on data storage and scaling. For all I/O 2010 sessions, please go to code.google.com

    Read the article

  • Time tracking and payment registration architecture

    - by egis
    Title might be a little bit incorrect. :) Anyway, I'm building software where employees input the time they worked per day (work hours) and the employer "pays" for this time. Payment is done outside this system, so the employer just "confirms" (a checkbox or something like it) which work hours are paid. So the question is: what is the best way (both UI- and data-storage-wise) to implement this? At the moment I have this idea: the employee selects a week and manually (with some JavaScript helpers, like "fill the same time for all days") inputs work hours for every day of the week. The employer confirms payment the same way the employee inputs data (selects a week, confirms each day). Data is saved in the DB as a Unix timestamp (one day per table row). The problem is 14 inputs (7 days x ("hours from" + "hours to")), yet this approach seems fairly easy to implement. Maybe I'm overlooking something and this can be done differently and better? Maybe someone has an example of already-working software?
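
    As a concrete starting point, here is a minimal sketch of the storage side (names invented): one row per employee per day, with the employer's confirmation stored as a nullable timestamp rather than a bare flag, so you also know when it was confirmed.

        using System;

        public class WorkDay
        {
            public int Id { get; set; }
            public int EmployeeId { get; set; }
            public DateTime Date { get; set; }             // the day worked
            public TimeSpan HoursFrom { get; set; }        // start of work
            public TimeSpan HoursTo { get; set; }          // end of work
            public DateTime? PaidConfirmedAt { get; set; } // null until the employer confirms
        }

    With that shape, "confirm a whole week" is just setting PaidConfirmedAt on seven rows, and the 14-input week form maps one-to-one onto rows.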

    Read the article

  • Load Texture From Image Content In Runtime

    - by Austin Brunkhorst
    Basically I wrote a world editor for a game I'm working on. Looking ahead, I was brainstorming ways to save the created world including the tile-sets (this game will rely on a tile engine). I was hoping to save the image data of each tile-set in the same file containing the tile positions, etc. and load the image data into a Texture with XNA. Is it possible? Something like this is what I'm going for. Texture2D tileset = Content.LoadFromString<Texture2D>("png tileset data");
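
    Not with a literal Content.LoadFromString, but XNA 4.0 can do the equivalent at run time with Texture2D.FromStream, so the PNG bytes can live inside your world file. A minimal sketch, assuming you already read the bytes out of your own format:

        // Sketch (XNA 4.0, using Microsoft.Xna.Framework.Graphics):
        // turn raw PNG bytes from the world file into a texture at run time.
        Texture2D LoadTileset(GraphicsDevice device, byte[] pngBytes)
        {
            using (var stream = new System.IO.MemoryStream(pngBytes))
            {
                return Texture2D.FromStream(device, stream);
            }
        }

    One caveat: FromStream does not premultiply alpha, so tiles with transparency need BlendState.NonPremultiplied or a premultiply pass after loading.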

    Read the article

  • Netbook partitioning scheme suggestions

    - by David B
    I got a new Asus EEE PC 1015PEM with 2GB RAM and a 250GB HD. After playing with the netbook edition a little, I would like to install the desktop edition I'm used to. In addition to the Ubuntu partition(s), I would like to have one separate partition for data (documents, music, etc.), so I could try other OSs in the future without losing the data. What partition scheme would you recommend? I usually like to let the installation do it by itself, but when I try to do that I can only use the entire disk, so I don't get the desired data partition. I wish there was a way to see the recommended default partitioning scheme, then just tweak it a bit to fit your needs (instead of building one from scratch). So, how would you recommend I partition my HD? Please be specific, since I have never manually partitioned before. Thanks!
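
    For reference, one layout in the spirit described above (the sizes are assumptions to adjust, not a prescription):

        /dev/sda1   /       ext4   15-20 GB    Ubuntu system; wiped when trying another OS
        /dev/sda2   swap           2-4 GB      at least the RAM size if hibernation matters
        /dev/sda3   /data   ext4   remainder   documents, music, etc.; survives reinstalls

    In the installer, choosing "Something else" (manual partitioning) lets you lay this out; each future OS then gets its own root partition and simply mounts /data.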

    Read the article

  • Statements of direction for EPM 11.1.1.x series products

    - by THE
    Some of the older parts of EPM that have been replaced with newer software will phase out after January 2013. For most of these the 11.1.1.x series will be the last release. They will then only be supported via sustaining support (see policy). We have notes about:

    - the Essbase Excel Add-in (replaced by SmartView, which nearly achieved functionality parity with release
      Oracle Essbase Spreadsheet Add-in Statement of Direction (Doc ID 1466700.1)
    - Hyperion Data Integration Management (replaced by Oracle Data Integrator (ODI)):
      Hyperion Data Integration Management Statement of Direction (Doc ID 1267051.1)
    - Hyperion Enterprise and Enterprise Reporting (replaced by HFM):
      Hyperion Enterprise and Hyperion Enterprise Reporting Statement of Direction (Doc ID 1396504.1)
    - Hyperion Business Rules (replaced by Calculation Manager):
      Hyperion Business Rules Statement of Direction (Doc ID 1448421.1)
    - Oracle Visual Explorer (this one phased out in June 2011 already, just in case anyone missed it):
      Oracle Essbase Visual Explorer Statement of Direction (Doc ID 1327945.1)

    For a complete list of the supported lifetimes, please review the "Oracle Lifetime Support Policy for Applications".

    Read the article

  • Adoption of Exadata - Gartner research note

    - by Javier Puerta
    Independent research note by Gartner acknowledges Oracle Exadata Database Machine has achieved significant early adoption and acceptance of its database appliance value proposition. Analyst Merv Adrian looks at some of the main issues that IT professionals have solved as they assess or deploy the Oracle Exadata solution, including:

    - OLTP and DSS workload support
    - workload consolidation
    - increasing performance and scalability demands
    - data compression improvements

    Gartner reports clients using Oracle Exadata experienced the following:

    - significant performance improvements
    - substantial amounts of cache memory, which greatly improves processing speed
    - Oracle Advanced Compression providing 2-4X data compression, delivering significant reductions in storage requirements and driving shorter times for backup operations
    - tables compressed with Oracle Advanced Compression automatically recompress as data is added/updated
    - one client specifically reported consolidating more than 400 applications onto the Oracle Exadata platform

    Read the full Gartner note

    Read the article

  • What is the practical use of IBOs / degenerate vertex in OpenGL?

    - by 0xFAIL
    Vertices in 3D models CAN get cut (so-called degenerate vertices) when 3D graphics software (Blender, ...) optimizes the geometry on export, because a vertex that is reused by multiple triangles does not need to be stored more than once. (In the current case the 3D data is exported from Blender as .ply and read by a simple application that displays the 3D model.) Every vertex has a few attributes like position, color, normal, tangent, ... But the data of each vertex that was cut through vertex sharing is lost and is missing in the vertex shader. Modern shader techniques like bump or normal mapping require normals/tangents per vertex, and those get cut too. Does this mean IBOs must not be used with complex shader techniques? Or is there a way to use IBOs and retain the per-vertex data that was originally lost?
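
    The usual practice sits in between: share a vertex only where all of its attributes agree, and duplicate it where they diverge (for example along a hard crease that needs two normals). An XNA-flavored sketch of a quad whose shared edge CAN be shared, with a note on when it couldn't:

        // Two triangles forming a quad. Vertices 1 and 2 sit on the shared edge;
        // they may be shared because position, normal, and UV agree for both
        // triangles. If the triangles needed different normals along that edge
        // (a hard crease), you would store those two vertices twice, once per normal.
        VertexPositionNormalTexture[] vertices =
        {
            new VertexPositionNormalTexture(new Vector3(0, 0, 0), Vector3.UnitZ, new Vector2(0, 0)), // 0
            new VertexPositionNormalTexture(new Vector3(1, 0, 0), Vector3.UnitZ, new Vector2(1, 0)), // 1
            new VertexPositionNormalTexture(new Vector3(0, 1, 0), Vector3.UnitZ, new Vector2(0, 1)), // 2
            new VertexPositionNormalTexture(new Vector3(1, 1, 0), Vector3.UnitZ, new Vector2(1, 1)), // 3
        };
        short[] indices = { 0, 1, 2,  2, 1, 3 };   // the IBO reuses vertices 1 and 2

    So IBOs remain usable with bump/normal mapping; the exporter (or your loader) just has to split vertices wherever per-vertex attributes genuinely differ, trading a little memory for correctness.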

    Read the article

  • Java: Reflection Packet Builder using getField()

    - by Matchlighter
    So I just finished writing a packet builder that dynamically loads data into a data stream which is then sent out. Each builder operates by finding fields in its class (and its superclasses) that are marked with an @data annotation. Upon finishing the builder, I remembered that getFields() does not return fields in "any specific order". I quite like my builder because it allows for quite simple, yet hard-typed packets. Could this implementation be a problem? What would be the best next step to keep the simplicity - sort the fields alphabetically?
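
    Sorting is exactly the usual cure: impose an explicit, documented order instead of trusting reflection. The question is Java, but the idea is one line in any reflective language; a C# rendering with invented names:

        using System;
        using System.Linq;
        using System.Reflection;

        [AttributeUsage(AttributeTargets.Field)]
        public class DataAttribute : Attribute { }   // stand-in for the @data annotation

        public static class PacketFields
        {
            // Deterministic field order: marked fields only, sorted by name.
            public static FieldInfo[] For(Type packetType)
            {
                return packetType
                    .GetFields(BindingFlags.Public | BindingFlags.Instance)
                    .Where(f => f.IsDefined(typeof(DataAttribute), inherit: false))
                    .OrderBy(f => f.Name, StringComparer.Ordinal)
                    .ToArray();
            }
        }

    An alternative that survives field renames is to give the annotation an explicit index (e.g. @data(order = 1)) and sort by that. Either way, both peers must derive the same order or the stream misaligns.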

    Read the article

  • Understanding the 'High Performance' meaning in Extreme Transaction Processing

    - by kyap
    Despite my previous blog entries on SOA/BPM and Identity Management, the domain I'm most passionate about is definitely Extreme Transaction Processing, commonly called XTP.

    I came across XTP back in 2007, while I was still FMW Product Manager in EMEA. At that time Oracle acquired a company called Tangosol, which owned a unique product called Coherence that we renamed to Oracle Coherence. Beside this innovative renaming of the product, to be honest, I didn't know much about it, except it being a "distributed in-memory cache for Extreme Transaction Processing"... not very helpful still.

    In general, when people don't fully understand a technology or a concept, they tend to find some shortcuts, correct or not, to justify their lack of understanding... and of course I was part of this category of individuals. And the shortcut was "Oracle Coherence Cache helps to improve Performance". Excellent marketing slogan... but not very meaningful still. By chance I was able to get away quickly from that group in July 2007* at Thames Valley Park (UK), after I attended one of the most interesting workshops of my 10-year career in Oracle, delivered by Brian Oliver.

    The biggest mistake I made was to assume that performance improvement with Coherence was related to the response time. Which could be considered legitimate at that time, because after all, caches help to reduce latency on cached data access, and hence reduce the response time. But like all caches, you need to define caching and expiration policies, think about the cache-miss strategy, and most of the time you have to partially rewrite your application in order to work with the cache. As a result, the expected benefit vanishes... so, not very useful then?

    The key mistake was my perception, or obsession, of how performance improvement should be driven, but I strongly believe this is still a common problem for most developers. In fact, we all know that the performance of a system is generally presented by the Capacity (or Throughput), with the two important dimensions Speed (response time) and Volume (load):

        Capacity (TPS) = Volume (T) / Speed (S)

    For example, 200 transactions in flight with a response time of 0.1 s gives a capacity of 2,000 TPS.

    To increase the Capacity, we can either reduce the Speed (in terms of response time), or increase the Volume. However, we tend to focus only on the Speed dimension, perhaps because it is more concrete and tangible to measure, and nicer to present to our management because there is a direct impact on the end-user experience. On the other hand, we assume the Volume can be addressed by the underlying hardware or software stack, so if we need more capacity (scale out), we just add more hardware or software. Unfortunately, reality proves that IT is never as ideal as we assume...

    The challenge with the Speed-improvement approach is that it is generally difficult and costly to make things that are already fast... faster. And adding Coherence will not necessarily help either. Even if we manage to do so, the Capacity cannot increase forever, because... the Speed can be influenced by the Volume. Every system shows the same performance behavior: the increase of Volume (Transactions) will also increase the Speed (Response Time) at some point. The reason is simple: most of the time, the application logic was not designed to scale. As an example, if you have a while-loop in your application, it is natural to expect that parsing 200 entries will require double the execution time of parsing 100 entries.

    If you need to "speed up" the execution, you can only upgrade your hardware (scale up) with a faster CPU and/or network to reduce network latency. That is technically limited and economically inefficient. And this is exactly where XTP and Coherence kick in. The primary objective of XTP is designing applications that can scale out to increase the Volume, by applying coding techniques that keep the execution time as constant as possible, independently of the amount of runtime data being manipulated. It is actually not just about having an application that runs as fast as possible, but about having a much more predictable system, with constant response time, that scales linearly, so we can easily increase throughput by adding more hardware in parallel. It is in general combined with the Low Latency Programming model, where we try to optimize the network usage as much as possible, either from the programmatic angle (fewer network hops to complete a task) and/or from the hardware angle (faster network equipment).

    In this picture, Oracle Coherence can be considered a software-level XTP enabler, via the Distributed Cache, because it can guarantee:
    - constant data-object access time, independently of the number of objects and the Coherence cluster size
    - data-object distribution by affinity, for in-memory data grouping
    - in-place data processing, for parallel execution

    To summarize, Oracle Coherence is indeed useful to improve your application performance, just not in the way we commonly think. It is not about the Speed itself, but about the overall Capacity under extreme load while keeping a consistent Speed. In the future I will keep adding new blog entries around this topic, with some sample-code experiences I have captured over the last few years. In the meanwhile, if you want to know more about Oracle Coherence, I strongly suggest you start by checking how our worldwide customers are using Oracle Coherence, and then start playing with the product through our tutorial.

    Have Fun!

    Read the article

  • Bing Maps Integrated With ASP.NET Pivot Grid v2010 vol 1

    Check out this slick demo which shows how sales data from the ASPxPivotGrid is plotted and displayed using the Bing.com maps service. The Bing Maps service provides you the capability to plot data geographically on a map. For example, this ASPxPivotGrid shows the quantity of products sold per country. We can plot this data on to a map because the Bing Maps service provides developers with a JavaScript API to display maps, locate countries and businesses, and create pushpin indicators!

    Read the article

  • Sending JSON to an ASP.NET MVC Action Method Argument

    Javier G Money Lozano, one of the good folks involved with C4MVC, recently wrote a blog post on posting JSON (JavaScript Object Notation) encoded data to an MVC controller action. In his post, he describes an interesting approach of using a custom model binder to bind sent JSON data to an argument of an action method. Unfortunately, his sample left out the custom model binder and only demonstrates how to retrieve JSON data sent from a controller action, not how to send the JSON to the action method.
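
    For reference, the missing piece is small. A hedged sketch of such a binder (this is not Javier's code; it reads the raw request body and lets Json.NET hydrate the action's argument type):

        using System.IO;
        using System.Web.Mvc;
        using Newtonsoft.Json;

        public class JsonModelBinder : IModelBinder
        {
            public object BindModel(ControllerContext controllerContext, ModelBindingContext bindingContext)
            {
                var request = controllerContext.HttpContext.Request;
                request.InputStream.Position = 0;   // rewind in case something already read it
                string json = new StreamReader(request.InputStream).ReadToEnd();
                return JsonConvert.DeserializeObject(json, bindingContext.ModelType);
            }
        }

        // Registered once, e.g. in Application_Start:
        //   ModelBinders.Binders.Add(typeof(MyDto), new JsonModelBinder());

    The client then posts with contentType: "application/json" and the action receives a fully populated MyDto argument.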

    Read the article

  • Returning Images from ASP.NET Web API

    - by bipinjoshi
    Sometimes you need to save and retrieve image data in SQL Server as a part of Web API functionality. A common approach is to save images as physical image files on the web server and then store the image URL in a SQL Server database. However, at times you need to store image data directly in a SQL Server database rather than the image URL. While dealing with the latter scenario you need to read images from the database and then return this image data from your Web API. This article shows the steps involved in this process. http://www.bipinjoshi.net/articles/4b9922c3-0982-4e8f-812c-488ff4dbd507.aspx
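
    The core of the "return image data" half looks roughly like this (a sketch, not the article's code; LoadImageBytes stands in for your SQL Server read of the varbinary column):

        using System.Net;
        using System.Net.Http;
        using System.Net.Http.Headers;
        using System.Web.Http;

        public class ImagesController : ApiController
        {
            // GET api/images/5 -> the raw bytes with an image content type
            public HttpResponseMessage Get(int id)
            {
                byte[] imageData = LoadImageBytes(id);   // hypothetical DB helper
                var response = new HttpResponseMessage(HttpStatusCode.OK)
                {
                    Content = new ByteArrayContent(imageData)
                };
                response.Content.Headers.ContentType = new MediaTypeHeaderValue("image/jpeg");
                return response;
            }

            private byte[] LoadImageBytes(int id)
            {
                // e.g. SELECT ImageData FROM Images WHERE Id = @id via SqlCommand/ExecuteScalar
                throw new System.NotImplementedException();
            }
        }

    Returning HttpResponseMessage (rather than a serialized object) keeps Web API's content negotiation out of the way, so the browser sees a plain image.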

    Read the article

  • Sending state diffs (deltas) and unreliable connections

    - by spaceOwl
    We're building a realtime multiplayer game, in which each player is responsible for reporting its state on every iteration of the game loop. The state updates are broadcast using unreliable UDP. To minimize the state data sent, we've come up with a system that sends only deltas (whatever state data has changed). This method however is flawed, since a lost packet will mean that other players will not receive the delta, making the game behave in an unexpected way. For example, assume that state is comprised of: { positionX, positionY, health }

        Frame 1 - positionX changed --> send a packet with positionX only.
        Frame 2 - health changed    --> send a packet with health only.    // lost!
        Frame 3 - positionY changed --> send a packet with positionY only.
        // Other players never learn about the health change.

    How can one overcome this issue, then? Sending the entire data is not always feasible.
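
    One standard repair keeps the bandwidth savings: stamp each packet with a sequence number, have receivers acknowledge what they saw, and keep re-sending any changed field until a packet that carried it is acked. A sketch of the sender side (types and names invented):

        using System.Collections.Generic;
        using System.Linq;

        public class Packet
        {
            public int Seq;
            public Dictionary<string, object> Fields;   // e.g. "health" -> 87
        }

        public class DeltaSender
        {
            private struct PendingField
            {
                public object Value;
                public int ChangedInSeq;   // seq at which the current value first went out
            }

            private int seq;
            private readonly Dictionary<string, PendingField> pending =
                new Dictionary<string, PendingField>();

            public Packet BuildPacket(Dictionary<string, object> changedThisFrame)
            {
                seq++;
                foreach (var kv in changedThisFrame)
                    pending[kv.Key] = new PendingField { Value = kv.Value, ChangedInSeq = seq };

                // Every packet carries ALL still-unacked fields, not just this frame's.
                return new Packet
                {
                    Seq = seq,
                    Fields = pending.ToDictionary(p => p.Key, p => p.Value.Value)
                };
            }

            public void OnAck(int ackedSeq)
            {
                // Packet N carried every field changed at or before N, so an ack of N
                // retires those fields, unless they changed again after N was sent.
                foreach (var key in pending.Keys.ToList())
                    if (pending[key].ChangedInSeq <= ackedSeq)
                        pending.Remove(key);
            }
        }

    Acks can ride inside the peers' own state packets, and a periodic full snapshot ("keyframe") bounds the damage if the acks themselves go missing.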

    Read the article

  • Tornado Tracks Highlights 61 Years of Tornado Activity [Wallpaper]

    - by Jason Fitzpatrick
    This eye catching image maps 61 years worth of storm data over the continental United States. It’s a neat way to see the frequency and intensity of tornadoes, and it is available in wallpaper-friendly resolutions. John Nelson took 61 years of data from government sources like the NOAA and compiled the data into a visualization. You can read more about the methodology behind the image at the link below or jump right to Flickr to grab a high-res image for your desktop. Tornado Tracks [via Neatorama]

    Read the article

  • Document Management System

    - by rjayavrp
    Is there any document management system for Ubuntu? I tried Alfresco, RavenDB, Owl, and Document Manager. Alfresco and RavenDB are heavy, more than my requirements. Owl has source issues. Document Manager I'm still trying to install. My requirements:
    - Should keep data on the same machine, as I am looking for something for internal use.
    - Should allow uploading Zip files as well; if it extracts the Zip, it will be a great +.
    - Should allow sending email to preconfigured email addresses.
    - Should allow uploading data of around 100MB in one go.
    - Should maintain history of documents, including deleted documents.
    - Should allow role-based document access.
    - Should be free :)
    It should not do any spoofing on data; the documents are confidential. Please share your knowledge. Thanks.

    Read the article

  • Chart Chooser Helps You Pick the Right Chart for the Job

    - by Jason Fitzpatrick
    If you’re not sure what kind of chart would best showcase the data you’re presenting, Chart Chooser makes short work of narrowing it down. Are you trying to showcase trends? Compare the composition of sets? Show distributions and trends together? By selecting what you’re trying to highlight, Chart Chooser automatically narrows the pool of chart types to show which would effectively achieve your end. Once you’ve narrowed it down to the chart type you want, you can even download an Excel template for that chart type and populate it with your own data. Hit up the link below to take it for a spin and grab some free templates. Chart Chooser [via Flowing Data]

    Read the article

  • Google I/O 2012 - Making Google Product Search Work for You Using the Content API for Shopping

    Mayuresh Saoji, Danny Hermes. To get the best out of product search, merchants need to provide complete and accurate product information, as well as fresh price and availability data for all products. This session will provide merchants with concrete steps they can take to improve their data quality using the Content API for Shopping. We will provide details on when it makes sense to use the Content API to submit data (as opposed to Feeds), and how to use the API. We will also go into details on how to debug API requests and errors, and talk about general best practices to follow in order to use the API optimally and efficiently. For all I/O 2012 sessions, go to developers.google.com

    Read the article

  • How can I fix latency problems for car game?

    - by Freddy
    Basically I'm trying to make an online car racing game for iOS using Game Center real-time multiplayer. I have set up a timer that sends data every 0.02 seconds to the other player with the current position and current angle. However, sometimes it will take LONGER than 0.02 seconds for the packet to be sent and received. For that case I have implemented a method that "calculates" what the next position should be if no position is received, based on the last position and angle. However, when the data then arrives, say 0.04 seconds late, the car will change back to the last received position, which results in the car "jumping" back and lagging. And if I just keep ignoring the data, it will never take any input from the other user. Is there any way to prevent this? I suppose this needs to be fixed with some client-side algorithm.
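
    A common client-side pattern: stamp each update with a sequence number, drop stale ones, and blend toward fresh data instead of snapping, so a late packet corrects the prediction without the visible jump. A sketch with invented names (shown in C#; the math is identical in Objective-C):

        public class RemoteCar
        {
            private int lastSeq;
            private float targetX, targetY, targetAngle;   // latest authoritative state
            public float X, Y, Angle;                      // what we actually draw

            public void OnPacket(int seq, float x, float y, float angle)
            {
                if (seq <= lastSeq) return;    // late/duplicate packet: ignore, don't jump back
                lastSeq = seq;
                targetX = x; targetY = y; targetAngle = angle;
            }

            public void Update(float dt)
            {
                // Ease out a fraction of the remaining error each frame instead of teleporting.
                const float smoothing = 10f;   // tune: higher converges faster
                float t = System.Math.Min(1f, smoothing * dt);
                X += (targetX - X) * t;
                Y += (targetY - Y) * t;
                Angle += (targetAngle - Angle) * t;   // wrap angles properly in real code
            }
        }

    With this in place the extrapolation method keeps running between packets, and late data only nudges the car rather than rewinding it.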

    Read the article

  • How to use PostgreSQL on AWS - Ubuntu 11.10

    - by That1Guy
    I'm extremely new to cloud-computing, Linux, and PostgreSQL, so if this is a stupid question, I apologize. I've managed to create an m1.large instance running Ubuntu 11.10, connect via Putty SSH, and install PostgreSQL (sudo apt-get install postgresql), but that is as far as I've gotten. My goal is to run several python web-scraping scripts that I've written on this instance (so as not to eat up all of our bandwidth (smaller company at the moment)) and insert the scraped data into a PostgreSQL table on the instance and later retrieve that data to store on our local server (as I've heard AWS EBS is unreliable and I don't want to take chances). How can I configure PostgreSQL on my AWS instance? How can I access the data from my machine? I currently use PgAdmin3 to manage PosgreSQL on our local server. Can I use this same interface to manage PostgreSQL on my AWS instance? Any suggestions, solutions, links, etc is greatly appreciated. And again, if this is a dumb question, I apologize.
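
    For the configuration half, the usual three steps on a stock Ubuntu 11.10 install (which packages PostgreSQL 9.1; adjust paths to your version) are: listen beyond localhost, allow your client in pg_hba.conf, and open the port to your address. A sketch:

        # /etc/postgresql/9.1/main/postgresql.conf
        listen_addresses = '*'          # default is 'localhost' only

        # /etc/postgresql/9.1/main/pg_hba.conf : one line per allowed client
        # TYPE  DATABASE  USER  ADDRESS           METHOD
        host    all       all   203.0.113.5/32    md5    # your office IP (example address)

        $ sudo service postgresql restart

    Then open TCP 5432 in the instance's AWS security group for that same address, and PgAdmin3 on your local machine can connect to the instance's public DNS name exactly as it does to your local server.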

    Read the article
