Search Results

Search found 63386 results on 2536 pages for 'data structure'.


  • Single windows service to provide access to cached data?

    - by Matthias
    I need a solution where a single Windows service provides access to cached data for various consumers: an MVC web application, a .NET assembly (COM interop) used within a classic ASP page, other Windows services, and a Windows Forms application. So the data must be accessible from several processes. The data being cached is read-only. For now, all processes are located on the same machine. The environment is .NET Framework 3.5 and C#. My question is: how can multiple AppDomains/processes retrieve cached data from a single Windows service?
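
    One approach that fits these constraints (read-only data, all consumers on one machine, .NET 3.5) is to host a WCF endpoint over named pipes inside the Windows service, so every consumer process talks to the same in-process cache. A minimal sketch under that assumption; the contract, cache contents, and endpoint address below are illustrative, not a prescribed design:

        using System.Collections.Generic;
        using System.ServiceModel;

        [ServiceContract]
        public interface ICacheService
        {
            [OperationContract]
            string GetValue(string key); // read-only lookup into the shared cache
        }

        [ServiceBehavior(InstanceContextMode = InstanceContextMode.Single)]
        public class CacheService : ICacheService
        {
            // Populated once at service start-up; read-only afterwards.
            private readonly Dictionary<string, string> cache =
                new Dictionary<string, string> { { "example", "value" } };

            public string GetValue(string key)
            {
                string value;
                return cache.TryGetValue(key, out value) ? value : null;
            }
        }

        public static class CacheHost
        {
            // Call from the Windows service's OnStart; keep the returned host referenced.
            public static ServiceHost Start()
            {
                var host = new ServiceHost(new CacheService());
                host.AddServiceEndpoint(typeof(ICacheService),
                    new NetNamedPipeBinding(), "net.pipe://localhost/cacheService");
                host.Open();
                return host;
            }
        }

    Each consumer (MVC app, COM-visible assembly, other services, WinForms) would then open a ChannelFactory<ICacheService> against the same net.pipe address.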

    Read the article

  • Combining FileStream and MemoryStream to avoid disk accesses/paging while receiving gigabytes of data?

    - by w128
    I'm receiving a file as a stream of byte[] data packets (the total size isn't known in advance) that I need to store somewhere before processing it immediately after it has been received (I can't do the processing on the fly). The total received file size can vary from as little as 10 KB to over 4 GB.

    One option for storing the received data is to use a MemoryStream, i.e. a sequence of MemoryStream.Write(bufferReceived, 0, count) calls to store the received packets. This is very simple, but will obviously result in an out-of-memory exception for large files.

    An alternative is to use a FileStream, i.e. FileStream.Write(bufferReceived, 0, count). This way, no out-of-memory exceptions will occur, but what I'm unsure about is bad performance due to disk writes (which I don't want to occur as long as plenty of memory is still available). I'd like to avoid disk access as much as possible, but I don't know of a way to control this.

    I did some testing, and most of the time there seems to be little performance difference between, say, 10,000 consecutive calls of MemoryStream.Write() vs. FileStream.Write(), but a lot seems to depend on buffer size and the total amount of data in question (i.e. the number of writes). Obviously, MemoryStream size reallocation is also a factor.

    Does it make sense to use a combination of MemoryStream and FileStream, i.e. write to a memory stream by default, but once the total amount of data received is over e.g. 500 MB, write it to a FileStream; then read in chunks from both streams for processing the received data (first process the 500 MB from the MemoryStream, dispose it, then read from the FileStream)?

    Another solution is a custom memory stream implementation that doesn't require continuous address space for its internal array allocation (i.e. a linked list of memory streams); this way, at least in 64-bit environments, out-of-memory exceptions should no longer be an issue. Con: extra work, more room for mistakes.

    So how do FileStream vs. MemoryStream reads/writes behave in terms of disk access and memory caching, i.e. the data size/performance balance? I would expect that as long as enough RAM is available, FileStream would internally read/write from memory (the cache) anyway, and virtual memory would take care of the rest. But I don't know how often FileStream will explicitly access the disk when being written to. Any help would be appreciated.
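
    A minimal sketch of the hybrid approach described above, assuming the 500 MB threshold from the question; the class name, threshold, and temp-file handling are illustrative:

        using System;
        using System.IO;

        // Buffers writes in memory until a threshold, then spills everything to disk.
        public class HybridBufferStream : IDisposable
        {
            private const long Threshold = 500L * 1024 * 1024; // 500 MB, as in the question
            private MemoryStream memory = new MemoryStream();
            private FileStream file; // created lazily once the threshold is crossed
            private readonly string tempPath = Path.GetTempFileName();

            public void Write(byte[] buffer, int offset, int count)
            {
                if (file == null && memory.Length + count > Threshold)
                {
                    // Spill the in-memory prefix to disk and continue there.
                    file = new FileStream(tempPath, FileMode.Create, FileAccess.ReadWrite);
                    memory.WriteTo(file);
                    memory.Dispose();
                    memory = null;
                }

                if (file != null)
                    file.Write(buffer, offset, count);
                else
                    memory.Write(buffer, offset, count);
            }

            public void Dispose()
            {
                if (memory != null) memory.Dispose();
                if (file != null) { file.Dispose(); File.Delete(tempPath); }
            }
        }

    This variant spills the in-memory prefix into the file at the crossover point instead of keeping two live streams, which keeps the read side simple (everything ends up in one stream) at the cost of a single large write when the threshold is crossed.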

    Read the article

  • How to notify table-data change to a client program?

    - by JMSA
    Suppose I have an application that accesses data resident on a central DB server, and more than one user accesses that data from client machines networked with the DB server. Suppose two client machines are each running a copy of the application and two users are viewing the same DB table. How can I automatically refresh the data in the GUI being viewed by one client as soon as the other client changes the table data? Which technology should be used to solve this particular scenario in .NET? WCF?
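
    If the central database happens to be SQL Server, one option is SqlDependency, which raises a callback when the result set of a registered query changes; each client can refresh its view from that callback. A minimal sketch under that assumption (the table, columns, and class name are illustrative, and Service Broker must be enabled on the database); for other databases, polling or a central WCF duplex (callback) service between clients are the usual alternatives:

        using System.Data.SqlClient;

        public class TableWatcher
        {
            private readonly string connectionString;

            public TableWatcher(string connectionString)
            {
                this.connectionString = connectionString;
                SqlDependency.Start(connectionString); // requires Service Broker on the DB
                Register();
            }

            private void Register()
            {
                using (var conn = new SqlConnection(connectionString))
                using (var cmd = new SqlCommand("SELECT Id, Status FROM dbo.Orders", conn))
                {
                    var dependency = new SqlDependency(cmd);
                    dependency.OnChange += (s, e) =>
                    {
                        // Fires once per change; re-register, then refresh the GUI
                        // (marshal to the UI thread before touching any controls).
                        Register();
                    };
                    conn.Open();
                    cmd.ExecuteReader().Dispose(); // the query must run to arm the subscription
                }
            }
        }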

    Read the article

  • How to display data into datagridview using multi thread?

    - by Mark
    Hi, I have an application where I read/receive data (text) all the time, and I need to display this data in a DataGridView. What is the best way to do that in real time, so that the displayed data changes continuously? I thought about multithreading; if this is a good idea, can you guide me to a link explaining how to implement it? Thanks
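
    A minimal sketch of the usual WinForms pattern: receive on a background thread, then marshal each update onto the UI thread with Control.Invoke, since WinForms controls may only be touched from the thread that created them. The names and receive logic here are illustrative:

        using System;
        using System.Threading;
        using System.Windows.Forms;

        public partial class MainForm : Form
        {
            private DataGridView grid; // assumed to be created in designer code

            // Call after the form has been shown, so the grid's handle exists.
            private void StartReceiving()
            {
                var worker = new Thread(() =>
                {
                    while (true)
                    {
                        string line = ReadNextLine(); // blocking read from the data source
                        // Marshal the UI update back onto the UI thread.
                        grid.Invoke((Action)(() => grid.Rows.Add(DateTime.Now, line)));
                    }
                });
                worker.IsBackground = true; // don't keep the process alive on exit
                worker.Start();
            }

            private string ReadNextLine()
            {
                // Placeholder for the actual receive logic described in the question.
                Thread.Sleep(100);
                return "sample";
            }
        }

    For very high update rates, batching rows and flushing them on a timer tick is usually smoother than invoking once per item.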

    Read the article

  • Oracle: Is there a way to get the column data types for a view?

    - by rally25rs
    For a table in Oracle, I can query ALL_TAB_COLUMNS and get table column information, like the data type, precision, and whether or not the column is nullable. In SQL Developer or TOAD, you can click on a view in the GUI and it will spit out a list of the columns the view returns, with the same set of data (data type, precision, nullable, etc.). So my question is: is there a way to query this column definition for a view, the way you can for a table? How do the GUI tools do it?
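
    For what it's worth, ALL_TAB_COLUMNS describes the columns of views as well as tables, which is presumably what the GUI tools read. A minimal query sketch (MY_VIEW is a placeholder name):

        SELECT column_name, data_type, data_precision, data_scale, nullable
          FROM all_tab_columns
         WHERE owner = USER
           AND table_name = 'MY_VIEW'  -- dictionary names are stored in upper case
         ORDER BY column_id;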

    Read the article

  • How can I get swig to wrap a linked list-type structure?

    - by bk
    Here's what I take to be a pretty standard header for a list. Because the struct points to itself, we need this two-part declaration. Call it listicle.h:

        typedef struct _listicle listicle;
        struct _listicle {
            int i;
            listicle *next;
        };

    I'm trying to get SWIG to wrap this, so that the Python user can make use of the listicle struct. Here's what I have in listicle.i right now:

        %module listicle
        %{
        #include "listicle.h"
        %}
        %include listicle.h
        %rename(listicle) _listicle;
        %extend listicle {
            listicle() { return malloc(sizeof(listicle)); }
        }

    As you can tell by my being here asking, it doesn't work. All the various combinations I've tried each fail in their own special way. [This one: %extend defined for an undeclared class listicle. Change it to %extend _listicle (and fix the constructor) and loading in Python gives type object '_listicle' has no attribute '_listicle_swigregister'. And so on.] Suggestions?
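
    One direction worth noting, hedged since SWIG interface files are finicky: %rename only affects declarations SWIG has not yet parsed, so in the file above it runs after the %include and renames nothing. A sketch of the reordered interface under that assumption; whether %extend should then use the original or the renamed name may still need experimentation:

        %module listicle
        %{
        #include "listicle.h"
        %}

        /* Rename before the header is parsed, so the rename actually applies. */
        %rename(listicle) _listicle;
        %include "listicle.h"

        %extend _listicle {
            _listicle() { return (listicle *) malloc(sizeof(listicle)); }
        }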

    Read the article

  • using spring, hibernate and scala, is there a better way to load test data than dbunit?

    - by egervari
    Here are some things I really dislike about DbUnit:

    1) You cannot specify the exact ordering of the inserts, because DbUnit groups your inserts by table name rather than by the order you define them in the XML file. This is a problem when you have records depending on records in other tables, so you have to disable foreign key constraints during your tests... which actually sucks, because those foreign key constraints will fire in production while your tests won't be aware of them!

    2) They seem hellbent on forcing you to use an XML namespace to define your XML... and I honestly can't be bothered to do this. I like the data.xml without any namespace. It works. But they are so hellbent on deprecating it.

    3) Creating different XML files on a per-test basis is hard, so it actually encourages creating data for your entire app. Unfortunately, this process gets a little bloated too once the data grows in size and things get entangled. There has got to be a better way to split up your test data into chunks without having to copy/paste a lot of it across all of your tests.

    4) Keeping track of id references in a big XML file is just impossible. If you have 130 domain classes, it gets bewildering. This model simply does not scale.

    Is there something less bloated and better in the Spring/Hibernate space? DbUnit has worn out its welcome and I'm really looking for something better.

    Read the article

  • How to copy data from another workbook and paste onto related group rows?

    - by leighla
    Hi there, how do I copy data from all the workbooks in the folder into workbook 1's corresponding row groups? The attached images show the sample worksheet, which is the file I want to paste data into (the main template), and the wb2 sample, which is one of the worksheets in the folder that I want to copy data from. As you can see, workbook 2 does not include all of the tasks. So I need to copy all of the data from workbook 2 and paste it onto the corresponding row group (column A) in the original workbook. I then need to do this for all workbooks in the folder. Any help would be most appreciated!

    Read the article

  • How to input test data using the DecisionTree module in python?

    - by lifera1n
    On the Python DecisionTree module homepage (DecisionTree-1.6.1), they give a piece of example code. Here it is:

        dt = DecisionTree( training_datafile = "training.dat", debug1 = 1 )
        dt.get_training_data()
        dt.show_training_data()
        root_node = dt.construct_decision_tree_classifier()
        root_node.display_decision_tree(" ")
        test_sample = ['exercising=>never', 'smoking=>heavy',
                       'fatIntake=>heavy', 'videoAddiction=>heavy']
        classification = dt.classify(root_node, test_sample)
        print "Classification: ", classification

    My question is: how can I specify sample data (test_sample here) from variables? On the project homepage, it says: "You classify new data by first constructing a new data vector." I have searched around but have been unable to find out what a data vector is, or the answer to my question. Any help would be appreciated!
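
    Judging from the example above, the "data vector" is just a plain list of 'feature=>value' strings, so it can be assembled from ordinary variables; a minimal sketch under that assumption (the variable names are illustrative, and dt/root_node come from the question's code):

        exercising = 'never'
        smoking = 'heavy'

        # The sample is a plain list of 'feature=>value' strings.
        test_sample = ['exercising=>%s' % exercising,
                       'smoking=>%s' % smoking,
                       'fatIntake=>heavy',
                       'videoAddiction=>heavy']
        classification = dt.classify(root_node, test_sample)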

    Read the article

  • Ruby on Rails: one-to-one mapping. Just semantics or really a different structure?

    - by Sam
    So I'm creating a plugin for Ruby on Rails to make implementing addresses (country, state, city, and zip_code, for countries that can follow that paradigm) a lot easier, but that's beside the point except for how the address model is associated. So, starting with my address model:

        class Address < ActiveRecord::Base
          has_one :country
          has_one :state
          has_one :city
          has_one :zip_code
        end

    What's the difference between saying belongs_to and has_one? They seem to be the same thing, because both only require one model to declare ownership and a foreign key. And it also seems logical to say both "an address belongs to an account" and "an account has one address". Is this only semantics, or is there a real difference?

    Read the article

  • parsing python to csv

    - by user185955
    I'm trying to download some game stats to do some analysis; the only problem is that each season the data isn't 100% consistent. I grab the JSON file from the site, then wish to save it to a CSV with the first line in the CSV containing the heading for each column, so the heading would essentially be the key from the Python dict.

        #!/usr/bin/env python
        import requests
        import json
        import csv

        base_url = 'http://www.afl.com.au/api/cfs/afl/'
        token_url = base_url + 'WMCTok'
        player_url = base_url + 'matchItems/round'

        def printPretty(data):
            print(json.dumps(data, sort_keys=True, indent=2, separators=(',', ': ')))

        # session makes it simple to use the token across the requests
        session = requests.Session()
        token = session.post(token_url).json()['token']  # get the token
        session.headers.update({'X-media-mis-token': token})  # set the token

        Season = 2014
        Roundno = 4
        if Roundno < 10:
            strRoundno = '0' + str(Roundno)
        else:
            strRoundno = str(Roundno)

        # get some data (could easily be a for loop; might want to put in a delay
        # using sleep so that you don't get IP blocked)
        data = session.get(player_url + '/CD_R' + str(Season) + '014' + strRoundno)

        # print everything
        printPretty(data.json())

        with open('stats_game_test.csv', 'w', newline='') as csvfile:
            spamwriter = csv.writer(csvfile, delimiter="'", quotechar='|', quoting=csv.QUOTE_ALL)
            for profile in data.json()['items']:
                spamwriter.writerow(['%s' % (profile)])
            #for key in data.json().keys():
            #    print("key: %s , value: %s" % (key, data.json()[key]))

    The above code grabs the JSON and writes it to a CSV, but it puts the key in each individual cell next to the value (e.g. 'venueId': 'CD_V190'); the keys need to appear just once, across the first row, as headings. It gives me a CSV file with data in the cells like this:

        Column A                   B
        'tempInCelsius': 17.0      'totalScore': 32
        'tempInCelsius': 16.0      'totalScore': 28

    What I want is the data like this:

        tempInCelsius    totalScore
        17               32
        16               28

    As I mentioned up the top, the data isn't always consistent, so if I define which fields to grab with spamwriter.writerow([profile['tempInCelsius'], profile['totalScore']]) then it will error out on certain data grabs. This is why I'm now trying the above method, so it just grabs everything regardless of what data is there.
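
    For what it's worth, csv.DictWriter handles exactly this split between a header row and value rows; a minimal sketch, assuming the union of all keys across rows should become the headings (items stands in for data.json()['items'] from the code above):

        import csv

        items = data.json()['items']

        # Union of keys across all rows, since the rows aren't consistent.
        fieldnames = sorted(set(key for profile in items for key in profile))

        with open('stats_game_test.csv', 'w', newline='') as csvfile:
            writer = csv.DictWriter(csvfile, fieldnames=fieldnames, restval='')
            writer.writeheader()      # keys go across the first row, once
            writer.writerows(items)   # rows missing a key fall back to restval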

    Read the article

  • Flex Tree with infinite parents and children

    - by Tempname
    I am working on a tree component and I am having a bit of an issue with populating the data provider for this tree. The data that I get back from my database is a simple array of value objects. Each value object has 2 properties: objectID and parentID. For parents the parentID is null, and for children the parentID is the objectID of the parent. Any help with this is greatly appreciated. Essentially the tree should look something like this:

        Parent1
            Child1
            Child1
            Child2
            Child1
            Child2
        Parent2
            Child1
            Child2
            Child3
            Child1

    This is the current code that I am testing with:

        public function setDataProvider(data:Array):void
        {
            var tree:Array = new Array();

            for (var i:Number = 0; i < data.length; i++)
            {
                // do the top level array
                if (!data[i].parentID)
                {
                    tree.push(data[i], getChildren(data[i].objectID, data));
                }
            }

            function getChildren(objectID:Number, data:Array):Array
            {
                var childArr:Array = new Array();

                for (var k:Number = 0; k < data.length; k++)
                {
                    if (data[k].parentID == objectID)
                    {
                        childArr.push(data[k]);
                        //getChildren(data[k].objectID, data);
                    }
                }
                return childArr;
            }

            trace(ObjectUtil.toString(tree));
        }

    Here is a cross-section of my data:

        ObjectID    ParentID
        1           NULL
        10          NULL
        8           NULL
        6           NULL
        4           6
        3           6
        9           6
        2           6
        11          7
        7           8
        5           8

    Read the article

  • How to filter using a dropdown in Knockout

    - by user517406
    I have just started using Knockout and I want to filter my data that I am displaying in my UI by selecting an item from a dropdown. I have got so far, but I cannot get the selected value from my dropdown yet, and then after that I need to actually filter the data displayed based upon that value. Here is my code so far:

        @model Models.Fixture
        @{
            ViewBag.Title = "Fixtures";
            Layout = "~/Areas/Development/Views/Shared/_Layout.cshtml";
        }
        @Scripts.Render("~/bundles/jqueryval")
        <script type="text/javascript" src="@Url.Content("~/Scripts/knockout-2.1.0.js")"></script>

        <script type="text/javascript">
            function FixturesViewModel() {
                var self = this;
                var baseUri = '@ViewBag.ApiUrl';

                self.fixtures = ko.observableArray();
                self.teams = ko.observableArray();

                self.update = function (fixture) {
                    $.ajax({
                        type: "PUT",
                        url: baseUri + '/' + fixture.Id,
                        data: fixture
                    });
                };

                self.sortByAwayTeamScore = function () {
                    this.fixtures.sort(function (a, b) {
                        return a.AwayTeamScore < b.AwayTeamScore ? -1 : 1;
                    });
                };

                self.select = function (team) {
                };

                $.getJSON("/api/fixture", self.fixtures);
                $.getJSON("/api/team", self.teams);
            }

            $(document).ready(function () {
                ko.applyBindings(new FixturesViewModel());
            });
        </script>

        <div class="content">
            <div>
                <table><tr><td>
                    <select data-bind="options: teams, optionsText: 'TeamName',
                                       optionsCaption: 'Select...', optionsValue: 'TeamId',
                                       click: $root.select"></select>
                </td></tr></table>
                <table class="details ui-widget-content">
                    <thead>
                        <tr>
                            <td>FixtureId</td><td>Season</td><td>Week</td><td>AwayTeam</td>
                            <td><a id="header" data-bind='click: sortByAwayTeamScore'>AwayTeamScore</a></td>
                            <td>HomeTeam</td><td>HomeTeamScore</td>
                        </tr>
                    </thead>
                    <tbody data-bind="foreach: fixtures">
                        <tr>
                            <td><span data-bind="text: $data.Id"></span></td>
                            <td><span data-bind="text: $data.Season"></span></td>
                            <td><span data-bind="text: $data.Week"></span></td>
                            <td><span data-bind="text: $data.AwayTeamName"></span></td>
                            <td><input type="text" data-bind="value: $data.AwayTeamScore"/></td>
                            <td><span data-bind="text: $data.HomeTeamName"></span></td>
                            <td><input type="text" data-bind="value: $data.HomeTeamScore"/></td>
                            <td><input type="button" value="Update" data-bind="click: $root.update"/></td>
                        </tr>
                    </tbody>
                </table>
            </div>
        </div>

    Can anybody please advise on this?
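
    One common Knockout pattern for this, sketched under the assumption that each fixture carries a team id to match on (the property names below are guesses): bind the dropdown's value to an observable and derive the visible rows with ko.computed.

        // Inside FixturesViewModel: bind the <select> with
        //   value: selectedTeamId   (instead of click: $root.select)
        // and the <tbody> with foreach: filteredFixtures.
        self.selectedTeamId = ko.observable();

        // Rows to display; re-evaluates whenever the selection or the data changes.
        self.filteredFixtures = ko.computed(function () {
            var teamId = self.selectedTeamId();
            if (!teamId) { return self.fixtures(); } // nothing selected: show all
            return ko.utils.arrayFilter(self.fixtures(), function (f) {
                // Assumed property names; adjust to the fixture objects' actual shape.
                return f.HomeTeamId === teamId || f.AwayTeamId === teamId;
            });
        });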

    Read the article

  • Truth tables in code? How to structure state machine?

    - by HanClinto
    I have a (somewhat) large truth table / state machine that I need to implement in my code (embedded C). I anticipate the behavior specification of this state machine will change in the future, so I'd like to keep it easily modifiable. My truth table has 4 inputs and 4 outputs. I have it all in an Excel spreadsheet, and if I could just paste that into my code with a little formatting, that would be ideal. I was thinking I would like to access my truth table like so:

        u8 newState[] = decisionTable[input1][input2][input3][input4];

    And then I could access the output values with:

        setOutputPin( LINE_0, newState[0] );
        setOutputPin( LINE_1, newState[1] );
        setOutputPin( LINE_2, newState[2] );
        setOutputPin( LINE_3, newState[3] );

    But in order to get that, it looks like I would have to do a fairly confusing table like so:

        static u8 decisionTable[][][][][] =
            {{{{ 0, 0, 0, 0 }, { 0, 0, 0, 0 }},
              {{ 0, 0, 0, 0 }, { 0, 0, 0, 0 }}},
             {{{ 0, 0, 1, 1 }, { 0, 1, 1, 1 }},
              {{ 0, 1, 0, 1 }, { 1, 1, 1, 1 }}}},
            {{{{ 0, 1, 0, 1 }, { 1, 1, 1, 1 }},
              {{ 0, 1, 0, 1 }, { 1, 1, 1, 1 }}},
             {{{ 0, 1, 1, 1 }, { 0, 1, 1, 1 }},
              {{ 0, 1, 0, 1 }, { 1, 1, 1, 1 }}}};

    Those nested brackets can be somewhat confusing -- does anyone have a better idea for how I can keep a pretty-looking table in my code? Thanks!

    Edit based on HUAGHAGUAH's answer: Using an amalgamation of everyone's input (thanks -- I wish I could "accept" 3 or 4 of these answers), I think I'm going to try it as a two-dimensional array. I'll index into the array using a small bit-shifting macro:

        #define SM_INPUTS( in0, in1, in2, in3 ) ((in0 << 0) | (in1 << 1) | (in2 << 2) | (in3 << 3))

    And that will let my truth table array look like this:

        static u8 decisionTable[][4] = {
            { 0, 0, 0, 0 },
            { 0, 0, 0, 0 },
            { 0, 0, 0, 0 },
            { 0, 0, 0, 0 },
            { 0, 0, 1, 1 },
            { 0, 1, 1, 1 },
            { 0, 1, 0, 1 },
            { 1, 1, 1, 1 },
            { 0, 1, 0, 1 },
            { 1, 1, 1, 1 },
            { 0, 1, 0, 1 },
            { 1, 1, 1, 1 },
            { 0, 1, 1, 1 },
            { 0, 1, 1, 1 },
            { 0, 1, 0, 1 },
            { 1, 1, 1, 1 }};

    And I can then access my truth table like so:

        decisionTable[ SM_INPUTS( line1, line2, line3, line4 ) ]

    I'll give that a shot and see how it works out. I'll also be replacing the 0's and 1's with more helpful #defines that express what each state means, along with /* */ comments that explain the inputs for each line of outputs. Thanks for the help, everyone!

    Read the article

  • Does Compressed Sensing bring anything new to data Compression?

    - by anon
    Compressed sensing is great for situations where capturing data is expensive (either in energy or time), so that fewer samples can be taken. However, in situations like image compression, where the data is already on the computer, does compressed sensing offer anything? For example, would it offer better data compression? Would it result in better image search? (Note: if you don't know what the field of compressed sensing is, please do not respond.)

    Read the article

  • How can I use EF in windows forms to insert new data ?

    - by Al0NE
    Hi. Maybe it's a simple question, but it's my first time with EF + a Windows app, so... I'm building a Windows app with EF by adding the edmx as a data source, then dragging and dropping the tables to get a navigator and binding source. When I press the add button in the navigator, it allows me to enter new data, but when I save the context I get NULL data for all fields except the auto-increment field (the ID field). What should I do to save many entries? Do I have to loop over all the entities in the binding source and add them to the context? Thanks in advance.

    Update: I forgot to say that when I use a data grid view it adds the items correctly, but with "Details" it doesn't.
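
    For reference, one frequent cause of this symptom with a Details view is that the pending edit never gets pushed from the bound controls into the entity before the save. A minimal save-handler sketch under that assumption (the binding source and context names are illustrative, .NET 3.5 / EF v1 ObjectContext API):

        private void saveButton_Click(object sender, System.EventArgs e)
        {
            // Commit the in-progress edit from the bound "Details" controls into the
            // current entity; without this, the entity can reach SaveChanges holding
            // nothing but its default (NULL) values.
            myEntitiesBindingSource.EndEdit();

            // EF v1 (.NET 3.5) ObjectContext:
            context.SaveChanges();
        }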

    Read the article

  • When encrypting data that is not an even multiple of the block size, do I have to send a complete last block?

    - by WilliamKF
    If I am using a block cipher such as AES, which has a block size of 128 bits, what do I do if my data is not an even multiple of 128 bits? I am working with packets of data and do not want to change the size of my packet when encrypting it, yet my data is not an even multiple of 128 bits. Does the AES block cipher allow handling a short final block without changing the size of my message once it is encrypted?
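
    For concreteness, a minimal C# sketch (assuming .NET's AesManaged; the principle is the same in any library) showing why padded block modes cannot preserve packet size -- PKCS7 padding always rounds up to a full block. Keeping the ciphertext the same length as the plaintext requires a stream-style mode such as CTR or CFB, or ciphertext stealing, rather than padded ECB/CBC:

        using System;
        using System.IO;
        using System.Security.Cryptography;

        class PaddingDemo
        {
            static void Main()
            {
                byte[] data = new byte[20]; // 20 bytes: not a multiple of the 16-byte AES block

                using (var aes = new AesManaged()) // key/IV are auto-generated here
                using (var ms = new MemoryStream())
                using (var cs = new CryptoStream(ms, aes.CreateEncryptor(), CryptoStreamMode.Write))
                {
                    // PKCS7 is the .NET default padding: 20 bytes grow to two full blocks.
                    cs.Write(data, 0, data.Length);
                    cs.FlushFinalBlock();
                    Console.WriteLine(ms.ToArray().Length); // prints 32, not 20
                }
            }
        }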

    Read the article

  • MIN array function non zeros only

    - by user1717622
    I have been trying to get this array formula to output the minimum of the non-zero values in the 'FINAL DATA' AE column. Can you see a structural error in this formula?

        =IF($C$4="All EMEA",
            MIN(IF('FINAL DATA'!$2:$AE$250000<>0,
                ('FINAL DATA'!$J$2:$J$250000=$C$4)*('FINAL DATA'!$E$2:$E$250000=$E$4)*('FINAL DATA'!$AE$2:$AE$250000))),
            MIN(IF('FINAL DATA'!$AE$2:$AE$250000<>0,
                ('FINAL DATA'!$K$2:$K$250000=$C$4)*('FINAL DATA'!$E$2:$E$250000=$E$4)*('FINAL DATA'!$AE$2:$AE$250000))))

    Read the article
