Search Results

Search found 69357 results on 2775 pages for 'data oriented design'.


  • Programmatically pushing data to Quickbooks Online?

    - by QuickbooksUser
    Our company uses QuickBooks Online to track our books. When our application bills a customer, it would be nice to have that information recorded automatically in QB rather than logging in to QB Online to fill in the info. Is there a way to accomplish this using QuickBooks' APIs? The Intuit developer information is very confusing; it is mainly oriented towards extending the desktop version of QuickBooks or developing apps to publish in Intuit's app store. Have you successfully created a web or other server-side app that was able to push data to or pull data from QB Online?

    Read the article

  • python sending incomplete data over socket

    - by tipu
    I have this socket server script:

        import SocketServer
        import shelve
        import zlib

        class MyTCPHandler(SocketServer.BaseRequestHandler):
            def handle(self):
                self.words = shelve.open('/home/tipu/Dropbox/dev/workspace/search/words.db', 'r')
                self.tweets = shelve.open('/home/tipu/Dropbox/dev/workspace/search/tweets.db', 'r')
                param = self.request.recv(1024).strip()
                try:
                    result = str(self.words[param])
                except KeyError:
                    result = "set()"
                self.request.send(str(result))

        if __name__ == "__main__":
            HOST, PORT = "localhost", 50007
            SocketServer.TCPServer.allow_reuse_address = True
            server = SocketServer.TCPServer((HOST, PORT), MyTCPHandler)
            server.serve_forever()

    And this receiver:

        from django.http import HttpResponse
        from django.template import Context, loader
        import shelve
        import zlib
        import socket

        def index(req, param=''):
            HOST = 'localhost'
            PORT = 50007
            s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
            s.connect((HOST, PORT))
            s.send(param)
            data = zlib.decompress(s.recv(131072))
            s.close()
            print 'Received', repr(data)
            t = loader.get_template('index.html')
            c = Context({'foo': data})
            return HttpResponse(t.render(c))

    I am sending strings to the receiver that are in the hundreds of kilobytes, but I end up receiving only a portion of each one. Is there a way to fix this so that the whole string is sent?
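
    A common remedy (a sketch, not taken from the article): TCP is a byte stream with no message boundaries, so a single recv() can return less than was sent. Length-prefixing each message and looping until that many bytes arrive makes the receive complete; in Python 3:

        import socket
        import struct

        def send_msg(sock, payload):
            # Prefix the payload with its 4-byte big-endian length, and use
            # sendall(), which retries until every byte is handed to the OS.
            sock.sendall(struct.pack('!I', len(payload)) + payload)

        def recv_exact(sock, n):
            # recv() may return fewer than n bytes, so loop until all arrive.
            chunks = []
            while n > 0:
                chunk = sock.recv(min(n, 65536))
                if not chunk:
                    raise ConnectionError('socket closed mid-message')
                chunks.append(chunk)
                n -= len(chunk)
            return b''.join(chunks)

        def recv_msg(sock):
            (length,) = struct.unpack('!I', recv_exact(sock, 4))
            return recv_exact(sock, length)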

    Read the article

  • python fdb save huge data from database to file

    - by peter
    I have this script:

        SELECT = """
        select
            coalesce(p.ID, '') as id,
            coalesce(p.name, '') as name
        from TABLE as p
        """
        self.cur.execute(SELECT)
        for row in self.cur.itermap():
            xml += "  <item>\n"
            xml += "    <id>" + id + "</id>\n"
            xml += "    <name>" + name + "</name>\n"
            xml += "  </item>\n\n"

        # save xml to file here
        f = open...

    I need to save data from a huge database to a file. There are tens of thousands of items in my database (up to 40,000), and the script takes a very long time (an hour or more) to finish. How can I take the data I need from the database and save it to a file "at once", as quickly as possible? I don't need XML output, because I can process the data on my server later. Any idea? Many thanks!
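
    One likely culprit (a sketch, not from the article): repeated += on a string rebuilds the whole xml buffer on every pass, which is quadratic in the output size. Writing each fragment straight to the file keeps the loop linear; this assumes the same fdb-style cursor as above and that itermap() rows behave like dicts keyed by column name:

        with open('items.xml', 'w') as f:
            for row in cur.itermap():          # assumes the fdb cursor above
                f.write("  <item>\n"
                        "    <id>%s</id>\n"
                        "    <name>%s</name>\n"
                        "  </item>\n\n" % (row['id'], row['name']))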

    Read the article

  • update data in jqgrid

    - by griZZZly8
    Hi! I use jqGrid in this scenario: the grid gets JSON data from a first url. If the url returns correct JSON, the grid displays that data. If the url returns invalid JSON, the grid fires its 'loadError' event. In that event I want to change the grid's url to a second url and get JSON data from this new url. Here is my code:

        loadError: function(xhr, st, err) {
            $("#list").setGridParam({ url: '/new_url' });
            $("#list").trigger("reloadGrid");
        }

    But it doesn't work.

    Read the article

  • c# multi-threaded file processing

    - by user177883
    There is a folder that contains 1000s of small text files. I aim to parse and process all of them while more files are being populated into the folder. My intention is to multithread this operation, as the single-threaded prototype took 6 minutes to process 1000 files. I'd like to have reader and writer threads as follows: while the reader threads are reading the files, I'd like writer threads to process them. Once a reader has started reading a file, I'd like to mark it as being processed, such as by renaming it; once it is read, rename it to completed. How should I approach such a multithreaded application? Is it better to use a distributed hash table or a queue? Which data structure would avoid locks? Do you have a better approach to this scheme that you'd like to share?
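
    The queue-based shape, as a minimal sketch (the question is about C#, but the pattern is language-neutral, so Python stands in here; folder layout and thread count are assumptions):

        import os
        import queue
        import threading

        work = queue.Queue()

        def claim_files(folder):
            # Rename a file before queueing it, so a later scan of the folder
            # cannot hand the same file to a second worker.
            for name in os.listdir(folder):
                if name.endswith('.txt'):
                    claimed = os.path.join(folder, name + '.processing')
                    try:
                        os.rename(os.path.join(folder, name), claimed)
                    except OSError:
                        continue          # someone else claimed it first
                    work.put(claimed)

        def worker():
            while True:
                path = work.get()
                # ... parse and process the file here ...
                os.rename(path, path[:-len('.processing')] + '.completed')
                work.task_done()

        for _ in range(4):                # a handful of workers; tune to the workload
            threading.Thread(target=worker, daemon=True).start()

    In C#, a BlockingCollection<T> over a ConcurrentQueue<T> gives the same hand-off without explicit locks.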

    Read the article

  • What format do I use to store a relatively small amount of user data

    - by wcm
    I am writing a small program for our local high school (pro bono). The program has an interface that allows the user to enter school holidays. This is a simple stand-alone Windows app. What format should I use to store the data? A big relational database is obviously overkill. My initial plan was to store the data in an XML file. Co-workers have been suggesting that I use JSON files, Access databases, SQLite, or SQL Server Express. There was even a suggestion of old-school INI files.
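
    At this scale almost any of those would do; as one illustration of how little ceremony an embedded store needs (a sketch using Python's standard library, since the listing mixes languages; the file name and sample row are made up), SQLite is a single ordinary file with no server to install:

        import sqlite3

        con = sqlite3.connect('holidays.db')    # one plain file on disk, no server
        con.execute('create table if not exists holiday (day text primary key, name text)')
        con.execute('insert or replace into holiday values (?, ?)',
                    ('2024-12-25', 'Christmas Day'))   # hypothetical sample row
        con.commit()
        for day, name in con.execute('select day, name from holiday order by day'):
            print(day, name)
        con.close()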

    Read the article

  • Service Layer Pattern - Could we avoid the service layer on a specific case?

    - by lidermin
    Hi, we are trying to implement an application using the Service Layer pattern, because our application needs to connect to multiple other applications too. Googling the web, we found this link to a demonstrative graphic of the "right" way to apply the pattern: martinfowler.com - Service Layer Pattern. But now we have a question: what if our system needs to implement some business logic only for our own application (like some maintenance data for the system itself) that we don't need to share with other systems? Based on the graphic, it seems unnecessary to implement a service layer just for that; it would be more practical to skip the service layer and go straight from the user interface to the business layer (for example). What would be the right way to implement the Service Layer pattern in this case? What would you suggest for a scenario like this one? Thanks in advance.

    Read the article

  • How to scale an image (in data URI format) in JavaScript (real scaling, not using styling)

    - by 103067513055141045393
    We are capturing a visible tab in a Chrome browser (using the extensions API chrome.tabs.captureVisibleTab) and receiving a snapshot in the data URI scheme (a Base64-encoded string). Is there a JavaScript library that can be used to scale an image down to a certain size? Currently we are scaling it via CSS, but we pay a performance penalty, as the pictures are mostly 100 times bigger than required. An additional concern is the load on the localStorage we use to save our snapshots. Does anyone know of a way to process these data-URI-formatted pictures and reduce their size by scaling them down?

    References:
    Data URI scheme: http://en.wikipedia.org/wiki/Data_URI_scheme
    Chrome Extensions API: http://code.google.com/chrome/extensions/tabs.html
    The "Recently Closed Tabs" Chrome Extension: http://code.google.com/p/recently-closed-tabs

    Read the article

  • Returning user data for forms that have errors when using ModelForms

    - by Sevenearths
    forms.py:

        from django.forms import ModelForm
        from client.models import ClientDetails, ClientAddress, ClientPhone
        from snippets.UKPhoneNumberForm import UKPhoneNumberField

        class ClientDetailsForm(ModelForm):
            class Meta:
                model = ClientDetails

        class ClientAddressForm(ModelForm):
            class Meta:
                model = ClientAddress

        class ClientPhoneForm(ModelForm):
            number = UKPhoneNumberField()
            class Meta:
                model = ClientPhone

    views.py:

        from django.shortcuts import render_to_response, redirect
        from django.template import RequestContext
        from client.forms import ClientDetailsForm, ClientAddressForm, ClientPhoneForm

        def new_client_view(request):
            formDetails = ClientDetailsForm(initial={'marital_status': 'u'})
            formAddress = ClientAddressForm()
            formHomePhone = ClientPhoneForm(initial={'phone_type': 'home'})
            formWorkPhone = ClientPhoneForm(initial={'phone_type': 'work'})
            formMobilePhone = ClientPhoneForm(initial={'phone_type': 'mobi'})
            return render_to_response('client/new_client.html',
                                      {'formDetails': formDetails,
                                       'formAddress': formAddress,
                                       'formHomePhone': formHomePhone,
                                       'formWorkPhone': formWorkPhone,
                                       'formMobilePhone': formMobilePhone},
                                      context_instance=RequestContext(request))

    (The new_client.html is nothing special.) How should I write views.py so that, if the user's data raises an error, instead of showing them the form again with the errors but none of their original data, it shows them the form again with the errors AND their original data?
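
    The usual Django answer (a sketch, not from the article): on POST, rebind the forms to request.POST; a bound form that fails is_valid() re-renders with both the errors and the user's submitted values. Trimmed to two of the five forms, with a hypothetical success URL:

        def new_client_view(request):
            if request.method == 'POST':
                # Bound forms keep the submitted data; is_valid() fills form.errors.
                formDetails = ClientDetailsForm(request.POST)
                formAddress = ClientAddressForm(request.POST)
                if formDetails.is_valid() and formAddress.is_valid():
                    formDetails.save()
                    formAddress.save()
                    return redirect('/clients/')          # hypothetical success URL
            else:
                formDetails = ClientDetailsForm(initial={'marital_status': 'u'})
                formAddress = ClientAddressForm()
            return render_to_response('client/new_client.html',
                                      {'formDetails': formDetails,
                                       'formAddress': formAddress},
                                      context_instance=RequestContext(request))

    The three ClientPhoneForm instances would also need distinct prefix= arguments so their fields don't collide in the POST data.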

    Read the article

  • How to read only one line from a .dat file in c++

    - by Markus Hupfauer
    I tried to read the first line of a .dat file, but when I print the text that was saved in the .dat file, it prints out the whole file, not only one line. The tool also takes no notice of line breaks or spaces :( I'm using the following code:

        // read in Vocabeln.dat
        ifstream f;                          // file handle
        string s;
        f.open("Vocabeln.dat", ios::in);     // open the file given as parameter
        while (!f.eof())                     // as long as data remains
        {
            getline(f, s);                   // read one line
            cout << s;
        }
        f.close();                           // close the file again
        getchar();

    So could you help me, please? Thanks a lot, Markus

    Read the article

  • implementing type inference

    - by deepblue
    Well, I see some interesting discussions here about static vs. dynamic typing. I generally prefer static typing, due to compile-time type checking, better-documented code, etc. However, I do agree that it clutters up the code if done the way Java does it, for example. So I'm about to start building a language of my own, and type inference is one of the things I want to implement, in a functional-style language. I do understand that it is a big subject, and I'm not trying to create something that has not been done before, just basic inference. Any pointers on what to read up on that will help me with this? Preferably something more pragmatic/practical as opposed to more theoretical category-theory/type-theory texts. If there's an implementation-focused text out there, with data structures/algorithms, that would just be lovely. Much appreciated.
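
    For a feel of the machinery behind "basic inference", here is a toy unification step, the engine at the heart of Hindley-Milner-style inference (an illustrative sketch: type variables are strings, constructed types are tuples, and the occurs check is omitted):

        def resolve(t, subst):
            # Follow substitution chains until t is no longer a bound variable.
            while isinstance(t, str) and t in subst:
                t = subst[t]
            return t

        def unify(a, b, subst):
            # Extend subst so that a and b denote the same type, or fail.
            a, b = resolve(a, subst), resolve(b, subst)
            if a == b:
                return subst
            if isinstance(a, str):            # a is an unbound type variable
                subst[a] = b
                return subst
            if isinstance(b, str):
                subst[b] = a
                return subst
            # Both are constructed types such as ('fun', arg, result) or ('int',).
            if a[0] != b[0] or len(a) != len(b):
                raise TypeError('cannot unify %r with %r' % (a, b))
            for x, y in zip(a[1:], b[1:]):
                subst = unify(x, y, subst)
            return subst

        # Applying f to an int forces f's argument type to be int:
        s = unify(('fun', 't0', 't1'), ('fun', ('int',), 't2'), {})
        print(s)    # {'t0': ('int',), 't1': 't2'}

    On the pragmatic-reading front, Luca Cardelli's "Basic Polymorphic Typechecking" and the "Algorithm W Step by Step" tutorial are both implementation-first treatments.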

    Read the article

  • Is SELECT INTO able to affect data from its original table during UPDATE

    - by driveby
    While asking this question (asp.net scheduling timed events), user murph posted some insightful information:

    "Point about this is that it's very, very simple - you have a process for exchange that is performing a clearly defined task, and you have a high-frequency task that is not doing anything particularly complex; it's a straightforward query (select from table where sent = false and send at < value) - probably into a temporary table so that you can run a single query update after you've done the sends - that you can optimise the index for. You're not trying to queue up a huge pile of event triggers, just one that fires once a minute and processes things that are due."

    Is it possible to SELECT data from table X INTO table Y and have the UPDATEs performed on table Y pushed back into table X? I guess the alternative would be that the data gets updated in table Y, and then an update command can be run on table X based on the data in table Y. What would be the advantage of selecting into another table? Thank you,

    Read the article

  • cron job for updating user profile data imported via facebook connect

    - by Abidoon Nadeem
    I want to write a cron job to update the user profile data on my website that I pull for users who register via Facebook Connect. The objective is to keep their profile data on my website in sync with their profile data on Facebook. So if a user updates their profile picture on Facebook, I want to update their profile picture on my website as well, via a cron job that runs every 24 hours. I wanted to know, first, if this is possible, and second, if it is in violation of Facebook's privacy policy. Based on my research it seems doable, but I wanted to know if anyone has already done something like this before. It would really help.

    Read the article

  • What is the simplest, but solid, interface from WinForms to a SQL Server database?

    - by Greg
    Hi, if I wanted to have my data in SQL Server but use a thick-client WinForms application for users, what would be the best-practice way to make calls from WinForms to the database? And how simple is this? I'm trying to gauge to what extent there are issues with this approach, such that one needs to go for either (a) a middle tier with web services, or (b) asp.net or something. I really just have a simple app that needs a database, and I'll only have 10-30 clients on a LAN/WAN network connecting in.

    Read the article

  • grabbing data from url

    - by Syom
    I have a task: I must grab some data from a URL. The link is http://cba.am. The data I want to take is in a table, and I have only one identifier to reach it: the word "USD", which is written in that (HTML) table! I've written the following script, and it works! But I've never heard how more experienced programmers do such things, so I want to hear your comments. Here is the script:

        <?php
        $str = file_get_contents("http://cba.am/");
        $key_usd = "USD";
        $sourse_usd_1 = explode($key_usd, $str);
        $usd1 = $sourse_usd_1[2];
        $sourse_usd_2 = explode(">", $usd1);
        $usd2 = $sourse_usd_2[4];
        $sourse_usd_3 = explode("<", $usd2);
        $usd = $sourse_usd_3[0];
        ?>

    Sorry for my poor English :)
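
    More experienced scrapers usually parse the HTML rather than split on raw strings, so the code survives small markup changes. A sketch of that idea (in Python here, since the listing mixes languages; PHP's DOMDocument plays the same role), assuming the rate sits in the table cell right after the one containing "USD":

        from html.parser import HTMLParser
        from urllib.request import urlopen

        class CellCollector(HTMLParser):
            # Collects the text of every <td>; the cell after 'USD' holds the rate.
            def __init__(self):
                super().__init__()
                self.cells, self.in_cell = [], False
            def handle_starttag(self, tag, attrs):
                if tag == 'td':
                    self.in_cell = True
            def handle_endtag(self, tag):
                if tag == 'td':
                    self.in_cell = False
            def handle_data(self, data):
                if self.in_cell and data.strip():
                    self.cells.append(data.strip())

        parser = CellCollector()
        parser.feed(urlopen('http://cba.am/').read().decode('utf-8', 'replace'))
        usd = parser.cells[parser.cells.index('USD') + 1]
        print(usd)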

    Read the article

  • Data access from a single table in SQL Server 2005 is too slow

    - by Muhammad Kashif Nadeem
    Following is the script of the table. Accessing data from this table is too slow.

        SET ANSI_NULLS ON
        GO
        SET QUOTED_IDENTIFIER ON
        GO
        CREATE TABLE [dbo].[Emails](
            [id] [int] IDENTITY(1,1) NOT NULL,
            [datecreated] [datetime] NULL CONSTRAINT [DF_Emails_datecreated] DEFAULT (getdate()),
            [UID] [nvarchar](250) COLLATE Latin1_General_CI_AS NULL,
            [From] [nvarchar](100) COLLATE Latin1_General_CI_AS NULL,
            [To] [nvarchar](100) COLLATE Latin1_General_CI_AS NULL,
            [Subject] [nvarchar](max) COLLATE Latin1_General_CI_AS NULL,
            [Body] [nvarchar](max) COLLATE Latin1_General_CI_AS NULL,
            [HTML] [nvarchar](max) COLLATE Latin1_General_CI_AS NULL,
            [AttachmentCount] [int] NULL,
            [Dated] [datetime] NULL
        ) ON [PRIMARY]

    The following query takes 50 seconds to fetch the data:

        select id, datecreated, UID, [From], [To], Subject, AttachmentCount, Dated
        from emails

    If I include Body and HTML in the select, the time is even worse. The indexes are:

    id    unique clustered
    From  non-unique non-clustered
    To    non-unique non-clustered

    The table currently has 180,000+ records, and there may be 100,000 new records each month, so this will become slower as time passes. Would splitting the data into two tables solve the problem? What other indexes should there be?

    Read the article

  • getting web page data as json object?

    - by encryptor
    I have a url, and I need the data of that page as a JSON object. I've tried XMLHttpRequest and an ajax object, but neither works; it doesn't even give a responseText when I put it in an alert. I'll post both code snippets here.

        url = http://mydomain.com:port/a/b/c

    AJAX:

        var ajaxRequest = new ajaxObject(URL);
        ajaxRequest.callback = function (responseText, responseStatus) {
            alert(responseStatus);
            JSONData = responseText.parseJSON();
            processData(JSONData);
        }

    Using XMLHttpRequest:

        var client = new XMLHttpRequest();
        client.open('GET', URL, true);
        data = JSON.parse(client.responseText);
        alert(data.links.length);

    Can someone please help me out with this? I understand cross-site scripting may be an issue, but how do I get around it? And shouldn't it at least alert zero or null?

    Read the article

  • Invoicing vs Quoting or Estimating

    - by FreshCode
    If invoices can be voided, should they be used as quotations? I have an Invoices table that is created from inventory associated with a Job or Order. I could have a Quotes table as a halfway house between inventory and invoices, but it feels like I would have duplicate data structures and logic just to handle an "Is this a quote?" bit. From a business perspective, quotes are different from invoices: a quote is sent prior to an undertaking, and an invoice is sent once it is complete and payment is due. But how should this be represented in my repository and model? What is an elegant way to store and manage quotes & invoices in a database? Edit: Job === Order for this particular instance.

    Read the article

  • How to prune data set by frequency to conform to paper's description

    - by sakura90
    The MovieLens data set provides a table with columns: userid | movieid | tag | timestamp. I have trouble reproducing the way the MovieLens data set was pruned in: Tag Informed Collaborative Filtering, by Zhen, Li and Yeung. In section 4.1 (Data Set) of the paper, it says: "For the tagging information, we only keep those tags which are added on at least 3 distinct movies. As for the users, we only keep those users who used at least 3 distinct tags in their tagging history. For movies, we only keep those movies that are annotated by at least 3 distinct tags." I tried to query the database:

        select TMP.userid, count(*) as tagnum
        from (select distinct T.userid as userid, T.tag as tag from tags T) AS TMP
        group by TMP.userid
        having tagnum >= 3;

    I got a list of 1760 users who used at least 3 distinct tags. However, some of those tags are not added on at least 3 distinct movies. Any help is appreciated.
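
    The three thresholds interact: dropping a tag can push a user below 3 distinct tags, which can push a movie below 3 distinct tags, and so on. Pruning of this kind is usually read as the fixed point of applying all three filters repeatedly, which a single SQL pass cannot give. A sketch of that loop (an assumption about the paper's intent, over rows of (userid, movieid, tag); quadratic, but fine for a one-off pruning):

        def prune(rows, k=3):
            # Re-apply all three filters until no row is removed.
            while True:
                tags = {t for t in {r[2] for r in rows}
                        if len({r[1] for r in rows if r[2] == t}) >= k}    # tag on >= k movies
                users = {u for u in {r[0] for r in rows}
                         if len({r[2] for r in rows if r[0] == u}) >= k}   # user used >= k tags
                movies = {m for m in {r[1] for r in rows}
                          if len({r[2] for r in rows if r[1] == m}) >= k}  # movie has >= k tags
                kept = [r for r in rows
                        if r[0] in users and r[1] in movies and r[2] in tags]
                if len(kept) == len(rows):
                    return rows
                rows = kept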

    Read the article

  • Operate on pairs of rows of a data frame

    - by lorin
    I've got a data frame in R, and I'd like to perform a calculation on all pairs of rows. Is there a simpler way to do this than using a nested for loop? To make this concrete, consider a data frame with ten rows, where I want to calculate the difference in scores between all (45) possible pairs:

        > data.frame(ID=1:10, Score=4*10:1)
           ID Score
        1   1    40
        2   2    36
        3   3    32
        4   4    28
        5   5    24
        6   6    20
        7   7    16
        8   8    12
        9   9     8
        10 10     4

    I know I could do this calculation with a nested for loop, but is there a better (more R-ish) way to do it?
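
    The loop-free idiom in R is outer(df$Score, df$Score, "-") for the full matrix, or combn(nrow(df), 2) to enumerate just the 45 pairs. The same all-pairs idea, sketched in Python here for illustration (using the example's ID-to-score mapping):

        from itertools import combinations

        scores = {i: 4 * (11 - i) for i in range(1, 11)}      # ID -> Score, as above
        diffs = {(a, b): scores[a] - scores[b]                # one entry per unordered pair
                 for a, b in combinations(sorted(scores), 2)}
        print(len(diffs))                                     # 45 pairs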

    Read the article

  • How to maintain a pool of names?

    - by Jacques René Mesrine
    I need to maintain a list of userids (proxy accounts) which will be dished out to multithreaded clients. Basically the clients will use the userids to perform actions, but for this question it is not important what those actions are. When a client gets hold of a userid, it must not be available to other clients until the action is completed. I'm trying to think of a concurrent data structure to maintain this pool of userids. Any ideas? Would a ConcurrentQueue do the job? Clients would dequeue a userid and add it back when they are finished with it.
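
    That check-out/check-in discipline is exactly what a blocking queue gives for free; a minimal sketch (in Python's queue module here; .NET's BlockingCollection over a ConcurrentQueue behaves the same way, and the account names are hypothetical):

        import queue

        pool = queue.Queue()
        for uid in ('proxy01', 'proxy02', 'proxy03'):   # hypothetical proxy accounts
            pool.put(uid)

        def perform_action():
            uid = pool.get()      # blocks until a userid is free; no one else can take it
            try:
                pass              # ... perform the action as uid ...
            finally:
                pool.put(uid)     # return the userid to the pool, even on failure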

    Read the article

  • Converting a C# DataTable instance to xml that contains HTML or binary data

    - by Wardy
    Hmmmm... although it works in most cases, one column has HTML data in it. It seems that doing this:

        StringBuilder xmltarget = new StringBuilder();
        XmlWriter xmlWriter = XmlWriter.Create(xmltarget);
        tableData.WriteXml(xmlWriter);

    doesn't identify where the HTML or binary data exists and wrap that data in CDATA tags as it should. Is there something I need to do to ensure the relevant checks are made and a working XML string is produced?

    Read the article

  • Ruby: Read large data from stdout and stderr of an external process on Windows

    - by BinaryMuse
    Greetings, all. I need to run a potentially long-running process from Ruby on Windows and subsequently capture and parse the data from the external process's standard output and error. A large amount of data can be sent to each, but I am only necessarily interested in one line at a time (not capturing and storing the whole of the output). After a bit of research, I found that the Open3 class would take care of executing the process and giving me IO objects connected to the process's standard output and error (via popen3):

        Open3.popen3("external-program.bat") do |stdin, out, err, thread|
          # Step3.profit() ?
        end

    However, I'm not sure how to continually read from both streams without blocking the program. Since calling IO#readlines on out or err when a lot of data has been sent results in a memory allocation error, I'm trying to continuously check both streams for available input, but not having much luck with any of my implementations. Thanks in advance for any advice!
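
    The usual shape of a fix (a sketch, not from the article): dedicate one reader thread to each stream and funnel lines into a shared queue, so neither pipe can fill up and stall the child while you are blocked on the other. Shown in Python; the same structure works with Ruby threads over popen3's IO objects:

        import queue
        import subprocess
        import threading

        def pump(stream, tag, q):
            # Forward lines as they arrive; a None line marks end-of-stream.
            for line in iter(stream.readline, ''):
                q.put((tag, line.rstrip('\n')))
            q.put((tag, None))

        proc = subprocess.Popen(['external-program.bat'], text=True,
                                stdout=subprocess.PIPE, stderr=subprocess.PIPE)
        lines = queue.Queue()
        for stream, tag in ((proc.stdout, 'out'), (proc.stderr, 'err')):
            threading.Thread(target=pump, args=(stream, tag, lines), daemon=True).start()

        finished = 0
        while finished < 2:                    # one end-of-stream sentinel per pipe
            tag, line = lines.get()
            if line is None:
                finished += 1
            else:
                print(tag, line)               # parse one line at a time here
        proc.wait()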

    Read the article

  • C# Importing Large Volume of Data from CSV to Database

    - by guazz
    What's the most efficient method to load a large volume of data from CSV (3 million+ rows) into a database? The data needs to be formatted (e.g., the name column needs to be split into first name and last name, etc.), and I need to do this as efficiently as possible, i.e., under time constraints. I am siding with the option of reading, transforming and loading the data using a C# application row by row. Is this ideal? If not, what are my options? Should I use multithreading?
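
    On the .NET side, the usual advice for this volume is SqlBulkCopy fed from a transforming reader rather than row-by-row INSERTs. The batching idea itself, sketched in Python with an embedded database standing in for the real target (file name and schema are hypothetical):

        import csv
        import sqlite3

        con = sqlite3.connect('people.db')
        con.execute('create table if not exists person (first text, last text)')

        with open('people.csv', newline='') as f:
            batch = []
            for row in csv.reader(f):
                first, _, last = row[0].partition(' ')   # transform: split the name column
                batch.append((first, last))
                if len(batch) >= 10000:                  # flush in chunks, not per row
                    con.executemany('insert into person values (?, ?)', batch)
                    batch.clear()
            if batch:
                con.executemany('insert into person values (?, ?)', batch)
        con.commit()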

    Read the article

  • Storage for large gridded datasets

    - by nullglob
    I am looking for a good storage format for large, gridded datasets. The application is meteorology, and we would prefer a format that is common within this field (to help exchange data with others). I don't need to deal with special data structures, and there should be a Fortran API. I am currently considering HDF5, GRIB2 and NetCDF4. How do these formats compare in terms of data compression? What are their main limitations? How steep is the learning curve? Are there any other storage formats worth investigating? I have not found a great deal of material outlining the differences and pros/cons of these formats (there is one relevant SO thread, and a presentation comparing GRIB and NetCDF).
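
    As a feel for what these APIs look like in practice, here is a minimal compressed-dataset write to HDF5 (sketched with Python's h5py, since the listing mixes languages; NetCDF4 and the Fortran bindings follow the same create-variable-with-compression shape, and the grid dimensions here are made up):

        import h5py
        import numpy as np

        temps = np.random.rand(24, 361, 720).astype('float32')   # hypothetical hourly global grid

        with h5py.File('forecast.h5', 'w') as f:
            dset = f.create_dataset('temperature', data=temps,
                                    compression='gzip', compression_opts=4,
                                    chunks=(1, 361, 720))         # one chunk per time step
            dset.attrs['units'] = 'K'                             # self-describing metadata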

    Read the article
