Search Results

Search found 3758 results on 151 pages for 'efficient'.

  • Using ARIMA to model and forecast stock prices with a user-friendly stats program

    - by Brian
    Hi people, can anyone offer some insight into this for me? I'm coming from a
    functional magnetic resonance imaging research background, where I analyzed a lot of
    time series data, and I'd like to analyze the time series of stock prices (or
    returns) by:

    1) modeling a successful stock in a particular market sector and then
       cross-correlating the time series of this historically successful stock with those
       of other, newer stocks to look for significant relationships;

    2) modeling a stock's price time series and using forecasting (e.g., exponential
       smoothing) to predict future values of it.

    I'd like to use ARIMA and ARCH modeling methods to do this. Several questions:

    - How often do ARIMA and ARCH models (given that the individual who implements them
      does so accurately) actually fit the stock time series data they target, and what
      is the optimal fit I can expect?
    - Is the extent to which the model fits the data commensurate with the extent to
      which it predicts the series' future values?
    - Rather than randomly selecting stocks to compare or model, if profit is my goal,
      what is an efficient approach, if any, to selecting the stocks I'm going to
      analyze?
    - Which stats program is the most user-friendly for this?

    Any thoughts on this would be great and would go a long way for me. Thanks, Brian
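
    For the modeling half of the question, a minimal sketch of the fit-and-forecast
    workflow in Python's statsmodels (an assumption on my part; the question doesn't name
    a package, and the ARIMA order used here is illustrative only, not a recommendation):

        # Hedged sketch: assumes pandas and statsmodels are installed.
        import pandas as pd
        from statsmodels.tsa.arima.model import ARIMA

        # Hypothetical daily closing prices; a real analysis would load returns.
        prices = pd.Series([101.2, 102.5, 101.9, 103.4, 104.0, 103.1, 105.2])

        # The (p, d, q) order would normally be chosen by AIC/BIC comparison or
        # ACF/PACF inspection, not hard-coded.
        result = ARIMA(prices, order=(1, 1, 1)).fit()

        print(result.summary())          # in-sample fit diagnostics
        print(result.forecast(steps=5))  # out-of-sample point forecasts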

    Read the article

  • Debugging unexpected error message - possible memory management problem?

    - by Ben Packard
    I am trying to debug an application that is throwing up strange (to my untutored eye)
    errors. When I try to simply log the count of an array...

        NSLog(@"Array has %i items", [[self startingPlayers] count]);

    ...I sometimes get an error:

        -[NSCFString count]: unrecognized selector sent to instance 0x1002af600

    or other times:

        -[NSConcreteNotification count]: unrecognized selector sent to instance 0x1002af600

    I am not sending 'count' to any NSString or NSNotification, and this line of code
    works fine normally.

    A theory... Although the error varies, the crash happens at predictable times,
    immediately after I have run through some other code where I'm thinking I might have
    a memory management issue. Is it possible that the object reference is still pointing
    to something that is meant to be destroyed? Sorry if my terms are off, but perhaps
    it's expecting the array at the address it calls 'count' on, but finds another,
    previous object that shouldn't still be there (e.g. an NSString)? Would this cause
    the problem?

    If so, what is the most efficient way to debug this and find out what is at that
    address? Most of my debugging up until now involves inserting NSLogs, so this would
    be a good opportunity to learn how to use the debugger.

    Read the article

  • How do I efficiently write a "toggle database value" function in AJAX?

    - by AmbroseChapel
    Say I have a website which shows the user ten images and asks them to categorise each
    image by clicking on buttons: a button for "funny", a button for "scary", a button
    for "pretty" and so on. These buttons aren't exclusive; a picture can be both funny
    and scary.

    The user clicks the "funny" button. An AJAX request is sent off to the database to
    mark that image as funny, and the "funny" button lights up, by assigning a class in
    the DOM to mark it as "on". But the user made a mistake: they meant to hit the next
    button over. They should click "funny" again to turn it off, right?

    At this point I'm not sure what's the most efficient way to proceed. The database
    knows that the "funny" flag is set, but it's inefficient to query the database every
    time a button is clicked to ask "is this flag set or not?", and then go on with a
    second database call to toggle it.

    Should I infer the state of the database flag from the DOM, i.e. if that button has
    the class "on" then the flag must be set, and branch at that point? Or would it be
    better to have a data structure in JavaScript in the page which duplicates the state
    of each image in the database, so that every time I set the database flag to true, I
    also set the value in the JavaScript data to true, and so on?
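
    One pattern worth considering (a sketch under assumed names, not from the original
    question) is to let the server toggle the flag and report the new state in a single
    round trip, so the client never needs to know the old state at all. A minimal
    Python/Flask illustration; the image_tags table, route shape, and seed row are all
    hypothetical, invented for this example:

        import sqlite3
        from flask import Flask, jsonify

        app = Flask(__name__)
        db = sqlite3.connect(":memory:", check_same_thread=False)
        db.execute("CREATE TABLE image_tags (image_id INTEGER PRIMARY KEY,"
                   " funny INTEGER DEFAULT 0)")
        db.execute("INSERT INTO image_tags (image_id) VALUES (1)")

        @app.route("/images/<int:image_id>/toggle/funny", methods=["POST"])
        def toggle_funny(image_id):
            # Flip the flag server-side: no separate "is it set?" request.
            db.execute("UPDATE image_tags SET funny = 1 - funny"
                       " WHERE image_id = ?", (image_id,))
            db.commit()
            (is_funny,) = db.execute("SELECT funny FROM image_tags"
                                     " WHERE image_id = ?", (image_id,)).fetchone()
            # Return the new state so the client sets the "on" class from it.
            return jsonify({"funny": bool(is_funny)})

    With this shape, the DOM class is just a rendering of the server's reply, so neither
    the DOM nor a duplicated JavaScript structure has to be treated as authoritative.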

    Read the article

  • Reasons for & against a Database

    - by dbemerlin
    Hi, I had a discussion with a coworker about the architecture of a program I'm
    writing and I'd like some more opinions.

    The situation: the program should update at near-realtime (+/- 1 minute). It involves
    the movement of objects on a coordinate system. There are some events that occur at
    regular intervals (i.e. creation of the objects). Movements can change at any time
    through user input.

    My solution was: build a server that runs continuously and stores the data
    internally. The server dumps a state-of-the-program at regular intervals to protect
    against power failures and/or crashes.

    He argued that the program requires a database, and that I should use cronjobs to
    update the data. I can store movement information by storing start point, end point
    and speed, and update the position in the cronjob (and calculate collisions with
    other objects there) by calculating direction and speed.

    His reasons:

    - My approach requires more CPU and memory because it runs constantly.
    - Power failures/crashes might destroy data.
    - Databases are faster.

    My reasons against this are mostly:

    - Not very precise, as events can only occur at full minutes (wouldn't be that bad,
      though).
    - Requires a (possibly costly) transformation of data on every run from relational
      data to objects.
    - An RDBMS is a general solution for a specialized problem, so a specialized solution
      should be more efficient.
    - Power failures (or other crashes) can leave the data in an undefined state, with
      only partially updated data, unless (possibly costly) precautions (like
      transactions) are taken.

    What are your opinions about that? Which arguments can you add for either side?

    Read the article

  • Hybrid EAV/CR model via WCF (and statically-typed language)?

    - by Pat
    Background: I'm working on the architecture for a cloud-based LOB application, using
    Silverlight for the client, WCF and ASP.NET/C# for the server, and SQL Server for
    storage. The data model requires some flexibility per user (the ability to add custom
    properties and define validation rules for them, for example), and a hybrid EAV/CR
    persistence model on the server side will suit nicely.

    Problem: I need an efficient and maintainable technology and approach to handle the
    transformation from the persisted EAV model to/from WCF (and similarly to allow the
    client to bind to the resulting data; DataGrid is a key UI element). Admission: I
    don't yet know enough about WCF to understand if it supports ExpandoObject directly,
    but I suspect it will.

    Options: I started off looking at WCF RIA Services, but quickly discovered they're
    heavily dependent upon both static type data and compile-time code generation.
    Neither of these appeals. The options I'm considering include:

    - Using WCF RIA Services and passing the data over the network directly in EAV form
      (i.e. Dictionary), and handling the binding issue purely on the client side (like
      this).
    - Using a dynamic language (probably IronPython) to handle both ends of the
      communication, with plumbing to generate the necessary CLR type data on the client
      to allow binding, and to transform to/from EAV form on the server (spam preventer
      stopped me from posting a URL here, I'll try it in a comment).
    - Dynamic LINQ (CreateClass() and friends), although I'm way out of my depth there
      and don't know what the limitations of that approach might be yet.

    I'm interested in comments on these approaches as well as alternative approaches that
    might solve the problem.

    Other notes: the Silverlight client will not be the only consumer of the service,
    making me slightly uncomfortable with option #1 above. While the data model is
    flexible, it's not expected to be modified heavily. For argument's sake, we could
    assume that we might have 25 distinct data models active at a given time, with
    something like 10-20 unique data fields/rules each. Modifications to the data model
    will happen infrequently (typically when a new user is initially configured).

    Read the article

  • Random Pairings that don't Repeat

    - by Andrew Robinson
    This little project/problem came out of left field for me. Hoping someone can help me
    here. I have some rough ideas, but I am sure (or at least I hope) a simple, fairly
    efficient solution exists. Thanks in advance... pseudo code is fine. I generally work
    in .NET/C# if that sheds any light on your solution.

    Given:

    - A pool of n individuals that will be meeting on a regular basis. I need to form
      pairs that have not previously met.
    - The pool of individuals will slowly change over time.
    - For the purposes of pairing, (A & B) and (B & A) constitute the same pair.
    - The history of previous pairings is maintained.
    - For the purpose of the problem, assume an even number of individuals.
    - For each meeting (a collection of pairs), an individual will only pair up once.

    Is there an algorithm that will allow us to form these pairs? Ideally something
    better than just ordering the pairs in a random order, generating pairings, and then
    checking against the history of previous pairings. In general, randomness within the
    pairing is ok.
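
    One standard answer to the "no repeats while the pool is stable" requirement is the
    round-robin (circle) rotation, which over n-1 meetings pairs everyone exactly once
    with no history checks at all. A sketch in Python, offered as pseudocode since the
    question invites it:

        import random

        def round_robin_meetings(people):
            """Yield one meeting (a list of pairs) per round; over
            len(people) - 1 rounds, every pair meets exactly once.
            Assumes an even head count, as the question does."""
            people = people[:]
            random.shuffle(people)  # randomness within the schedule is ok
            n = len(people)
            fixed, ring = people[0], people[1:]
            for _ in range(n - 1):
                order = [fixed] + ring
                # Pair the i-th person from the front with the i-th from the back.
                yield [(order[i], order[n - 1 - i]) for i in range(n // 2)]
                ring = ring[-1:] + ring[:-1]  # rotate the ring one step

        for meeting in round_robin_meetings(["A", "B", "C", "D"]):
            print(meeting)

    When the pool changes, one simple variant is to regenerate the schedule for the new
    roster and skip any round containing a pair already present in the history.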

    Read the article

  • Printing out this week's dates in Perl

    - by ABach
    Hi folks, I have the following loop to calculate the dates of the current week and
    print them out. It works, but I am swimming in the number of date/time possibilities
    in Perl and want to get your opinion on whether there is a better way. Here's the
    code I've written:

        #!/usr/bin/env perl
        use warnings;
        use strict;
        use DateTime;

        # Calculate the numeric value of today and the
        # target day (Monday = 1, Sunday = 7); the
        # target, in this case, is Monday, since that's
        # when I want the week to start
        my $today_dt = DateTime->now;
        my $today    = $today_dt->day_of_week;
        my $target   = 1;

        # Create DateTime copies to act as the "bookends"
        # for the date range
        my ($start, $end) = ($today_dt->clone(), $today_dt->clone());

        if ($today == $target) {
            # If today is the target, "start" is already set;
            # we simply need to set the end date
            $end->add( days => 6 );
        } else {
            # Otherwise, we calculate the Monday preceding today
            # and the Sunday following today
            my $delta = ($target - $today + 7) % 7;
            $start->add( days => $delta - 7 );
            $end->add( days => $delta - 1 );
        }

        # I clone the DateTime object again because, for some reason,
        # I'm wary of using $start directly...
        my $cur_date = $start->clone();

        while ($cur_date <= $end) {
            my $date_ymd = $cur_date->ymd;
            print "$date_ymd\n";
            $cur_date->add( days => 1 );
        }

    As mentioned, this works, but is it the quickest/most efficient way? I'm guessing
    that quickness and efficiency may not necessarily go together, but your feedback is
    very much appreciated.
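
    For comparison only, since the question is about Perl: the same computation collapses
    to a single weekday subtraction in Python's standard datetime module (Perl's DateTime
    has a similar shortcut in truncate(to => 'week'), which snaps a date back to the
    start of its week):

        from datetime import date, timedelta

        today = date.today()
        # weekday() counts Monday as 0, so this is the Monday of this week;
        # it also covers the "today is already Monday" branch for free.
        monday = today - timedelta(days=today.weekday())

        for offset in range(7):
            print(monday + timedelta(days=offset))  # YYYY-MM-DD, like ->ymd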

    Read the article

  • mysql subselect alternative

    - by Arnold
    Hi, let's say I am analyzing how high school sports records affect school attendance.
    So I have a table in which each row corresponds to a high school basketball game.
    Each game has an away team id and a home team id (FKs to another "team" table), plus
    a home score, an away score and a date.

    I am writing a query that matches attendance with this season's basketball games. My
    sample output will be:

        (#_students_missed_class, day_of_game, home_team, away_team,
         home_team_wins_this_season, away_team_wins_this_season)

    I now want to add how each team did the previous season to my analysis. I have their
    previous season stored in the game table, so I should be able to accomplish that with
    a subselect. So in my main select statement I add the subselect:

        SELECT COUNT(*)
        FROM game_table
        WHERE game_table.date BETWEEN 'start of previous season'
                                  AND 'end of previous season'
          AND ( (game_table.home_team = team_table.id
                 AND game_table.home_score > game_table.away_score)
             OR (game_table.away_team = team_table.id
                 AND game_table.away_score > game_table.home_score))

    In this case team_table.id refers to the id of the home team, so I now have all their
    wins calculated from the previous year. This method of calculation is, however, both
    time and resource intensive: the EXPLAIN output shows ALL in the type field, no key
    is being used, and the query times out.

    I'm not sure how I can write a more efficient query with a subselect. It seems
    preposterously inefficient to have to write four of these queries (for home wins,
    home losses, away wins, away losses). I am sure this could be more lucid; I'll
    absolutely add color tomorrow if anyone has questions.

    Read the article

  • Exporting dataset to Excel file with multiple sheets in ASP.NET

    - by engg
    In a C# ASP.NET 3.5 web application, I need to export multiple DataTables (or a
    DataSet) to an Excel 2007 file with multiple sheets, and then present the user with
    an Open/Save dialog box, WITHOUT saving the Excel file on the web server.

    I have used Excel Interop before. I have been reading that it's not efficient and is
    not the best approach to achieve this, and that there are better ways to do it, two
    of them being:

    1) Converting the data in the DataTables to an XML string that Excel understands
    2) Using the Open XML SDK 2.0

    It looks like the Open XML SDK 2.0 is better; please let me know. Are there any other
    ways to do it? I don't want to use any third-party tools.

    If I use the Open XML SDK, it creates an Excel file, right? I don't want to save it
    on the (Windows 2003) server hard drive (I don't want to use Server.MapPath; these
    Excel files are dynamically created, and they are not required on the server once the
    client gets them). I directly want to prompt the user to open/save it. I know how to
    do it when the 'XML string' approach is used. Please help. Thank you.

    Read the article

  • Python IOError: Not a gzipped file (Gzip and Blowfish Encrypt/Compress)

    - by notbad.jpeg
    I'm having some problems with Python's built-in gzip library. I've looked through
    almost every other Stack question about it, and none of them seem to work. My problem
    is that when I try to decompress, I get an IOError:

        Traceback (most recent call last):
          File "mymodule.py", line 61, in ...
            return gz.read()
          File "/usr/lib/python2.7/gzip.py", line 245, in read
            self._read(readsize)
          File "/usr/lib/python2.7/gzip.py", line 287, in _read
            self._read_gzip_header()
          File "/usr/lib/python2.7/gzip.py", line 181, in _read_gzip_header
            raise IOError, 'Not a gzipped file'
        IOError: Not a gzipped file

    This is my code to send it over SMB. It might not make sense why I do things this
    way, but it's normally in a while loop and memory efficient; I just simplified it.

        buffer = cStringIO.StringIO(output)  # output is from a subprocess call
        small_buffer = cStringIO.StringIO()
        small_string = buffer.read()  # need a string to write to buffer
        gzip_obj = gzip.GzipFile(fileobj=small_buffer, compresslevel=6, mode='wb')
        gzip_obj.write(small_string)
        compressed_str = small_buffer.getvalue()

        blowfish = Blowfish.new('abcd', Blowfish.MODE_ECB)
        remainder = '|' * (8 - (len(compressed_str) % 8))
        compressed_str += remainder
        encrypted = blowfish.encrypt(compressed_str)
        # I send it over SMB, then retrieve it later

    Then this is the code that retrieves it:

        # buffer is a cStringIO object filled with data from the SMB retrieval
        decrypter = Blowfish.new('abcd', Blowfish.MODE_ECB)
        value = buffer.getvalue()
        decrypted = decrypter.decrypt(value)
        buff = cStringIO.StringIO(decrypted)
        buff.seek(0)
        gz = gzip.GzipFile(fileobj=buff)
        return gz.read()

    The problem is the final return gz.read().
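
    Two details in the sending code are likely culprits (an inference, not from the
    original thread): small_buffer.getvalue() is taken before the GzipFile is closed, so
    buffered data and the gzip trailer never make it into the stream, and the '|' padding
    stays attached to the bytes handed to the decompressor. A round trip that avoids
    both, sketched in Python 3 with pycryptodome:

        import gzip
        import io
        from Crypto.Cipher import Blowfish  # pycryptodome

        key = b"abcd"
        payload = b"output from the subprocess call"  # hypothetical data

        # Compress: close the GzipFile *before* taking the buffer's value,
        # otherwise the stream is incomplete.
        buf = io.BytesIO()
        with gzip.GzipFile(fileobj=buf, mode="wb", compresslevel=6) as gz:
            gz.write(payload)
        compressed = buf.getvalue()

        # Pad to the 8-byte Blowfish block size, recording the pad length in
        # every pad byte (PKCS#7-style) so it can be stripped exactly later.
        pad = 8 - (len(compressed) % 8)
        compressed += bytes([pad]) * pad
        encrypted = Blowfish.new(key, Blowfish.MODE_ECB).encrypt(compressed)

        # ... send over SMB, retrieve later ...

        decrypted = Blowfish.new(key, Blowfish.MODE_ECB).decrypt(encrypted)
        decrypted = decrypted[: -decrypted[-1]]  # strip the padding
        with gzip.GzipFile(fileobj=io.BytesIO(decrypted)) as gz:
            assert gz.read() == payload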

    Read the article

  • JDBC query - date ranges as parameters

    - by pstanton
    Hi all, I'd like to write a single JDBC statement that can handle the equivalent of
    any number of NOT BETWEEN date1 AND date2 where-clauses. By "single query", I mean
    that the same SQL string will be used to create the JDBC statements, with different
    parameters supplied each time. This is so that the underlying frameworks can
    efficiently cache the query (I've been stung by that before).

    Essentially, I'd like to find a query that is the equivalent of:

        SELECT * FROM table
        WHERE mydate NOT BETWEEN ? AND ?
          AND mydate NOT BETWEEN ? AND ?
          AND mydate NOT BETWEEN ? AND ?
          AND mydate NOT BETWEEN ? AND ?

    and that at the same time could be used with fewer parameters:

        SELECT * FROM table
        WHERE mydate NOT BETWEEN ? AND ?

    or with more parameters (the original example repeats the clause eleven times):

        SELECT * FROM table
        WHERE mydate NOT BETWEEN ? AND ?
          AND mydate NOT BETWEEN ? AND ?
          ... (one additional AND per extra range)

    I will consider using a temporary table if that will be simpler and more efficient.
    Thanks for the help!

    Read the article

  • C# String.Replace with a start/index (Added my (slow) implementation)

    - by Chris T
    I'd like an efficient method that would work something like this. EDIT: Sorry I
    didn't put what I'd tried before. I updated the example now.

        // Method signature. Only replaces first instance or how many are
        // specified in max
        public int MyReplace(ref string source, string org, string replace, int start, int max)
        {
            int ret = 0;
            int len = replace.Length;
            int olen = org.Length;

            for (int i = 0; i < max; i++)
            {
                // Find the next instance of the search string
                int x = source.IndexOf(org, ret + olen);
                if (x > ret)
                    ret = x;
                else
                    break;

                // Insert the replacement
                source = source.Insert(x, replace);
                // And remove the original
                source = source.Remove(x + len, olen); // removes original string
            }
            return ret;
        }

        string source = "The cat can fly but only if he is the cat in the hat";
        int i = MyReplace(ref source, "cat", "giraffe", 8, 1);

        // Results in the string "The cat can fly but only if he is the giraffe in the hat"
        // i contains the index of the first letter of "giraffe" in the new string

    The only reason I'm asking is because my implementation, I'd imagine, would get slow
    with 1,000s of replaces.
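
    The question targets C#, but the shape of an efficient answer is easy to see in a
    Python sketch (an illustration only, not the asker's language): slice off the prefix
    before start, then lean on the built-in bounded replace instead of hand-rolling the
    scan loop.

        def replace_from(source, org, new, start, max_count):
            """Replace up to max_count occurrences of org, searching only
            from index start onward; text before start is left untouched."""
            return source[:start] + source[start:].replace(org, new, max_count)

        s = "The cat can fly but only if he is the cat in the hat"
        print(replace_from(s, "cat", "giraffe", 8, 1))
        # -> The cat can fly but only if he is the giraffe in the hat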

    Read the article

  • Alternative to Galileo GWS

    - by Anton Gogolev
    Please note that this Galileo is absolutely not related to Java. Galileo is basically
    a set of web services that can be used to book airline tickets. Originally, it was
    meant to be used via Galileo Desktop, whereby operators would enter various commands
    to perform the required operations. For example, SA*AZ610J20JULFCOJFK will "display
    seat availability map for specified flight and class". Granted, humans can get used
    to it and be very efficient, but then comes the problem of integrating this with
    other systems.

    For that, the folks at TravelPort basically slapped a SOAP interface onto this system
    (which must have been written in COBOL or something), without even thinking about
    actually embracing XML. For example, a response can contain:

        <Ind1>N</Ind1>
        <Ind2>N</Ind2>
        <Ind3>N</Ind3>
        ...
        <Ind72>N</Ind72> <!-- Yes! 72! -->

    or, better yet:

        <Text>P/RU/4xxx24528/RU/11MAY67/M/23DEC12/AxxxxxxV/MxxxM</Text>

    In light of this, my question is as follows: are there any sane airline ticket
    booking systems we can integrate with? Or are there companies with products that can
    abstract away all of this?

    Read the article

  • What is a data structure for quickly finding non-empty intersections of a list of sets?

    - by Andrey Fedorov
    I have a set of N items, which are sets of integers; let's assume it's ordered and
    call it I[1..N]. Given a candidate set, I need to find the subset of I whose members
    have non-empty intersections with the candidate. So, for example, if:

        I = [{1,2}, {2,3}, {4,5}]

    I'm looking to define valid_items(items, candidate), such that:

        valid_items(I, {1})   == {1}
        valid_items(I, {2})   == {1, 2}
        valid_items(I, {3,4}) == {2, 3}

    I'm trying to optimize for one given set I and variable candidate sets. Currently I
    am doing this by caching items_containing[n] = {the sets which contain n}. In the
    above example, that would be:

        items_containing = [{}, {1}, {1,2}, {2}, {3}, {3}]

    That is, 0 is contained in no items, 1 is contained in item 1, 2 is contained in
    items 1 and 2, 3 is contained in item 2, and 4 and 5 are each contained in item 3.
    That way, I can define valid_items(I, candidate) = union(items_containing[n] for n in
    candidate).

    Is there any more efficient data structure (of a reasonable size) for caching the
    result of this union? The obvious option of size 2^N is not acceptable, but N or
    N*log(N) would be.
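
    For what it's worth, the inverted-index scheme the question already describes is
    compact to write down; a runnable Python sketch (using a dict so the index isn't
    sized by the largest integer that appears):

        from collections import defaultdict

        def build_index(items):
            """Map each integer to the set of (1-based) item indices containing it."""
            index = defaultdict(set)
            for i, item in enumerate(items, start=1):
                for n in item:
                    index[n].add(i)
            return index

        def valid_items(index, candidate):
            """Indices of items with a non-empty intersection with candidate."""
            return set().union(*(index[n] for n in candidate if n in index))

        I = [{1, 2}, {2, 3}, {4, 5}]
        idx = build_index(I)
        print(valid_items(idx, {3, 4}))  # -> {2, 3}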

    Read the article

  • Question about MySQL database migration

    - by WilliamLou
    Hi there: if I have a MySQL database with several tables on a live server, and I
    would like to migrate this database to another server, how should I do it? Of course,
    the migration I mean here involves changes to some database tables, for example:
    adding some new columns to several tables, adding some new tables, etc.

    Now, the only method I can think of is to use a PHP or Python script (the two
    languages I know) to connect the two databases, dump the data from the old database,
    and then write it into the new database. However, this method is not efficient at
    all. For example: in the old database, table A has 28 columns; in the new database,
    table A has 29 columns, but the extra column will have the default value 0 for all
    the old rows. My script still needs to dump the data row by row and insert each row
    into the new database.

    Is there any tool, or a better method, than writing a script yourself? Here, I don't
    need to worry about problems like multithreaded writing etc.; I mean the old database
    will be down (not open to public usage etc., only for the upgrade) for a while.
    Thanks!!

    Read the article

  • Best way to randomly select rows *per* column in SQL Server

    - by LesterDove
    A search of SO yields many results describing how to select random rows of data from
    a database table. My requirement is a bit different, though, in that I'd like to
    select individual columns from across random rows in the most
    efficient/random/interesting way possible.

    To better illustrate: I have a large Customers table, and from that I'd like to
    generate a bunch of fictitious demo Customer records that aren't real people. I'm
    thinking of just querying randomly from the Customers table, and then randomly
    pairing FirstNames with LastNames, Address, City, State, etc. So if this is my real
    Customer data (simplified):

        FirstName  LastName  State
        ===========================
        Sally      Simpson   SD
        Will       Warren    WI
        Mike       Malone    MN
        Kelly      Kline     KS

    Then I'd generate several records that look like this:

        FirstName  LastName  State
        ===========================
        Sally      Warren    MN
        Kelly      Malone    SD

    My initial approach works, but it lacks the elegance that I'm hoping the final answer
    will provide. (I'm particularly unhappy with the repetitiveness of the subqueries,
    and the fact that this solution requires a known/fixed number of fields and therefore
    isn't reusable.)

        SELECT
            FirstName = (SELECT TOP 1 FirstName FROM Customer ORDER BY newid()),
            LastName  = (SELECT TOP 1 LastName  FROM Customer ORDER BY newid()),
            State     = (SELECT TOP 1 State     FROM Customer ORDER BY newid())

    Thanks!
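
    Outside SQL, the underlying idea is simply "shuffle each column independently, then
    zip the columns back into rows". A Python sketch of that (an illustration only; the
    goal of doing it in a single set-based T-SQL statement is unchanged), which
    generalizes to any number of fields without one subquery per column:

        import random

        customers = [
            ("Sally", "Simpson", "SD"),
            ("Will",  "Warren",  "WI"),
            ("Mike",  "Malone",  "MN"),
            ("Kelly", "Kline",   "KS"),
        ]

        def scramble(rows, k):
            """Build k fake records by shuffling each column independently."""
            columns = [list(col) for col in zip(*rows)]  # transpose to columns
            for col in columns:
                random.shuffle(col)
            return list(zip(*columns))[:k]               # transpose back to rows

        for record in scramble(customers, 2):
            print(record)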

    Read the article

  • Implementing Model-level caching

    - by Byron
    I was posting some comments in a related question about MVC caching, and some
    questions about actual implementation came up. How does one implement a model-level
    cache that works transparently, without the developer needing to cache manually, yet
    still remains efficient?

        I would keep my caching responsibilities firmly within the model. It is none of
        the controller's or view's business where the model is getting data. All they
        care about is that when data is requested, data is provided - this is how the
        MVC paradigm is supposed to work. (Source: post by Jarrod)

    The reason I am skeptical is that caching should usually not be done unless there is
    a real need, and shouldn't be done for things like search results. So somehow the
    model itself has to know whether or not the SELECT statement being issued to it is
    worthy of being cached. Wouldn't the model have to be astronomically smart, and/or
    store statistics of what is most often queried over a long period of time, in order
    to make that decision accurately? And wouldn't the overhead of all this make the
    caching useless anyway?

    Also, how would you uniquely identify one query from another query (or, more
    accurately, one result set from another result set)? What about if you're using
    prepared statements, with only the parameters changing according to user input?
    Another poster said this:

        I would suggest using the md5 hash of your query combined with a serialized
        version of your input arguments.

    This would require twice the number of serialization operations. I was under the
    impression that serialization was quite expensive, and for large inputs this might be
    even worse than just re-querying. And is the minuscule chance of collision worth
    worrying about?

    Conceptually, caching in the model seems like a good idea to me, but it seems that in
    practice the developer should have direct control over caching and write it into the
    controller. Thoughts/ideas?

    Edit: I'm using PHP and MySQL, if that helps to narrow your focus.
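
    To make the discussion concrete, here is a sketch of the keying scheme the second
    quote proposes, in Python (the question's stack is PHP/MySQL; run_query is a
    hypothetical stand-in for the real database call, and cache invalidation on writes,
    which is the genuinely hard part, is deliberately omitted):

        import hashlib
        import json

        class CachingModel:
            """Key cached result sets on the statement text plus a canonical
            serialization of its parameters, so the same prepared statement
            with different inputs lands in a different cache slot."""

            def __init__(self, run_query):
                self.run_query = run_query
                self.cache = {}

            def fetch(self, sql, params=()):
                key = hashlib.md5(
                    (sql + json.dumps(params, sort_keys=True, default=str)).encode()
                ).hexdigest()
                if key not in self.cache:
                    self.cache[key] = self.run_query(sql, params)
                return self.cache[key]

        # Usage sketch with a fake query function:
        model = CachingModel(lambda sql, params: ("rows for", sql, params))
        print(model.fetch("SELECT * FROM users WHERE id = %s", (42,)))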

    Read the article

  • Finding Common Phrases in MS SQL TEXT Column

    - by regex
    Hello all,

    Short description: I'm curious to see if I can use SQL Server Analysis Services, or
    some other MS SQL service, to mine some data for me that will show commonalities
    between SQL TEXT fields in a dataset.

    Long description: I am looking at a subset of data that consists of about 10,000 rows
    of TEXT blobs which are used as a notes column in issue tracking (ticketing)
    software. I would like to use something out of the box (without having to build
    something) that might be able to parse through all of the rows and find commonly used
    byte sequences in the "Notes" column. In other words, I want to find commonly used
    phrases (two- to three-word phrases, so roughly 9-20 character sections of the TEXT
    blob). This will help me better determine whether associates' notes contain similar
    phrases (troubleshooting techniques) that we could standardize in our troubleshooting
    process flow.

    Closing note: I'd really rather not build an application to do this, as my method
    would probably not be the most efficient way to do it. Hopefully all this makes
    sense; please let me know in the comments if anything needs clarification. Thanks in
    advance for your help.
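
    For scale, the mining step itself is small if built by hand; a Python sketch of
    counting two- and three-word shingles (offered only as a fallback, since the question
    explicitly asks for an out-of-the-box option):

        import re
        from collections import Counter

        def common_phrases(notes, n_min=2, n_max=3, top=20):
            """Count 2- and 3-word phrases across a list of free-text notes."""
            counts = Counter()
            for text in notes:
                words = re.findall(r"[a-z']+", text.lower())
                for n in range(n_min, n_max + 1):
                    for i in range(len(words) - n + 1):
                        counts[" ".join(words[i:i + n])] += 1
            return counts.most_common(top)

        notes = ["Rebooted the router and cleared the cache",
                 "Cleared the cache after rebooting"]
        print(common_phrases(notes))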

    Read the article

  • What is the best database structure for this scenario?

    - by Ricketts
    I have a database that holds real estate MLS (Multiple Listing Service) data.
    Currently, I have a single table that holds all the listing attributes (price,
    address, sqft, etc.). There are several different property types (residential,
    commercial, rental, income, land, etc.); each property type shares a majority of the
    attributes, but there are a few that are unique to that property type.

    My question: the shared attributes are in excess of 250 fields, and this seems like
    too many fields to have in a single table. My thought is that I could break them out
    into an EAV (Entity-Attribute-Value) format, but I've read many bad things about
    that, and it would make running queries a real pain, as any of the 250 fields could
    be searched on. If I were to go that route, I'd literally have to pull all the data
    out of the EAV table, grouped by listing id, merge it on the application side, then
    run my query against the in-memory object collection. This does not seem very
    efficient either.

    I am looking for some ideas or recommendations on which way to proceed. Perhaps the
    250+ field table is the only way to proceed. Just as a note, I'm using SQL Server
    2012, .NET 4.5 with Entity Framework 5, and C#, and data is passed to the asp.net web
    application via a WCF service. Thanks in advance.

    Read the article

  • FIFO dequeueing in Python?

    - by Aaron Ramsey
    Hello again, everybody. I'm looking to make a functional (not necessarily optimally
    efficient, as I'm very new to programming) FIFO queue, and am having trouble with my
    dequeueing. My code looks like this:

        class QueueNode:
            def __init__(self, data):
                self.data = data
                self.next = None

            def __str__(self):
                return str(self.data)

        class Queue:
            def __init__(self):
                self.front = None
                self.rear = None
                self.size = 0

            def enqueue(self, item):
                newnode = QueueNode(item)
                newnode.next = None
                if self.size == 0:
                    self.front = self.rear = newnode
                else:
                    self.rear = newnode
                    self.rear.next = newnode.next
                self.size = self.size + 1

            def dequeue(self):
                dequeued = self.front.data
                del self.front
                self.size = self.size - 1
                if self.size == 0:
                    self.rear = None
                print self.front  # for testing

    If I do this and dequeue an item, I get the error "AttributeError: Queue instance has
    no attribute 'front'." I guess my function doesn't properly assign the new front of
    the queue? I'm not sure how to fix it, though. I don't really want to start from
    scratch, so if there's a tweak to my code that would work, I'd prefer that; I'm not
    trying to minimize runtime so much as just get a feel for classes and things of that
    nature. Thanks in advance for any help, you guys are the best.
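
    For reference, a sketch of the two fixes the error points to (not the asker's
    original code): enqueue should link the old rear to the new node before moving the
    rear pointer, and dequeue should advance self.front to the next node rather than
    del-ing the attribute, which is exactly what makes the next access raise "Queue
    instance has no attribute 'front'". Python 3 spelling:

        class QueueNode:
            def __init__(self, data):
                self.data = data
                self.next = None

        class Queue:
            def __init__(self):
                self.front = None
                self.rear = None
                self.size = 0

            def enqueue(self, item):
                newnode = QueueNode(item)
                if self.size == 0:
                    self.front = self.rear = newnode
                else:
                    self.rear.next = newnode   # link old rear to the new node...
                    self.rear = newnode        # ...then move the rear pointer
                self.size += 1

            def dequeue(self):
                dequeued = self.front.data
                self.front = self.front.next   # advance; don't del the attribute
                self.size -= 1
                if self.size == 0:
                    self.rear = None
                return dequeued

        q = Queue()
        for x in (1, 2, 3):
            q.enqueue(x)
        print(q.dequeue(), q.dequeue(), q.dequeue())  # -> 1 2 3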

    Read the article

  • Database design and foreign keys: Where should they be added in related tables?

    - by Carvell Fenton
    I have a relatively simple subset of tables in my database for tracking something
    called sessions. These are academic sessions (think offerings of a particular
    program). The tables that represent a session's information are:

    - sessions
    - session_terms
    - session_subjects
    - session_mark_item_info
    - session_marks

    All of these tables have their own primary keys, and they form a tree: sessions have
    terms, terms have subjects, subjects have mark items, etc. So each one would have at
    least its "parent's" foreign key.

    My question is: design-wise, is it a good idea to also include the sessions primary
    key in the other tables as a foreign key, to easily select related session items, or
    is that too much redundancy? If I include the session foreign key (or all parent
    foreign keys from tables up the hierarchy) in all the tables, I can easily select all
    the marks for a session, with something like:

        SELECT mark FROM session_marks WHERE sessionID=...

    If I don't, then I would have to combine selects with something like:

        WHERE something IN (SELECT ...

    Which approach is "more correct" or efficient? Thanks in advance!

    Read the article

  • Simple description of worker and I/O threads in .NET

    - by Konstantin
    It's very hard to find a detailed but simple description of worker and I/O threads in
    .NET. What's clear to me regarding this topic (though it may not be technically
    precise): worker threads are threads that should employ the CPU for their work; I/O
    threads (also called "completion port threads") should employ device drivers for
    their work and essentially "do nothing", only monitoring the completion of non-CPU
    operations.

    What is not clear:

    - Although the method ThreadPool.GetAvailableThreads returns the number of available
      threads of both types, it seems there is no public API to schedule work for an I/O
      thread. Can you only manually create worker threads in .NET?
    - It seems that a single I/O thread can monitor multiple I/O operations. Is that
      true? If so, why does the ThreadPool have so many available I/O threads by default?
    - In some texts I read that the callback triggered after an I/O operation completes
      is performed by an I/O thread. Is that true? Isn't this a job for a worker thread,
      considering that the callback is a CPU operation? To be more specific: do ASP.NET
      asynchronous pages use I/O threads?
    - What exactly is the performance benefit of switching I/O work to a separate thread
      instead of increasing the maximum number of worker threads? Is it because a single
      I/O thread can monitor multiple operations? Or does Windows do more efficient
      context switching when using I/O threads?

    Read the article

  • PHP templating challenge (optimizing front-end templates)

    - by Matt
    Hey all, I'm trying to do some templating optimizations and I'm wondering if it is
    possible to do something like this:

        function table_with_lowercase($data) {
            $out = '<table>';
            for ($i = 0; $i < 3; $i++) {
                $out .= '<tr><td>';
                $out .= strtolower($data);
                $out .= '</td></tr>';
            }
            $out .= "</table>";
            return $out;
        }

    NOTE: You do not know what $data is when you run this function.

    Results in:

        <table>
        <tr><td><?php echo strtolower($data) ?></td></tr>
        <tr><td><?php echo strtolower($data) ?></td></tr>
        <tr><td><?php echo strtolower($data) ?></td></tr>
        </table>

    General case: anything that can be evaluated (compiled) will be. Any time there is an
    unknown variable, the variable and the functions enclosing it will be output in
    string form. Here's one more example:

        function capitalize($str) {
            return ucwords(strtolower($str));
        }

    If $str is "HI ALL" then the output is:

        Hi All

    If $str is unknown then the output is:

        <?php echo ucwords(strtolower($str)); ?>

    In this case it would be easier to just call the function (i.e.
    <?php echo capitalize($str) ?>), but the first example shows how this would let you
    precompile your PHP to make it more efficient.
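
    What the question describes is partial evaluation: run everything whose inputs are
    known now, and emit code for everything else. A toy sketch of that rule in Python (an
    illustration of the mechanism only; the Unknown class and the emitted PHP strings are
    invented for this example):

        class Unknown:
            """A value not known until render time; carries the PHP
            expression to emit in place of a computed result."""
            def __init__(self, expr):
                self.expr = expr

        def lower(x):
            # Evaluate now if we can; otherwise wrap the deferred expression.
            if isinstance(x, Unknown):
                return Unknown("strtolower(%s)" % x.expr)
            return x.lower()

        def cell(x):
            body = "<?php echo %s ?>" % x.expr if isinstance(x, Unknown) else x
            return "<tr><td>%s</td></tr>" % body

        print(cell(lower("HI ALL")))         # -> <tr><td>hi all</td></tr>
        print(cell(lower(Unknown("$data"))))
        # -> <tr><td><?php echo strtolower($data) ?></td></tr>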

    Read the article

  • Passing data between ViewControllers versus doing local Fetch in each VC

    - by Tofrizer
    Hi all, I'm developing an iPhone app using Core Data, and I'm looking for some
    general advice and recommendations on whether it's acceptable to pass data between
    view controllers, versus doing a local fetch in each view controller as you navigate
    to it.

    Ordinarily I would say it all depends on various factors (e.g. performance, etc.),
    but the passing-data approach is so prevalent in my app, and I'm spooked by all the
    stories about Apple rejecting apps for not conforming to their standard guidelines.
    So let me put it another way: is it non-standard to pass data between VCs?

    The reason I pass data so much is that each view controller is just another view onto
    data present in my object model/graph. Once I have a handle on the first object in
    the first view controller (which I of course do have to fetch), I can use the
    existing object composition/relationships to drill down into the next level of
    detail, and so I just pass these objects to the next VC.

    Separately, one possible downside of this pass-data-to-each-VC approach is that I
    don't benefit from (what I perceive to be) the optimisations that
    NSFetchedResultsController provides in terms of efficient memory usage and section
    handling. My app is read-only, but I do have one table with 5000 rows, and I'm
    curious whether I am missing out on NSFetchedResultsController's benefits. Any
    thoughts on this as well? Can I somehow still benefit from NSFetchedResultsController
    goodness without having to do a full fetch (as I would have already passed in the
    data from my previous VC)?

    Thanks a lot.

    Read the article
