Search Results

Search found 16639 results on 666 pages for 'task engine'.


  • Does IE completely ignore cache control headers for AJAX requests?

    - by Joshua Hayworth
    Hello there, I've got what I would consider a simple test web site: a single page with a single button. Here is a copy of the source I'm working with if you would like to download it and play with it. When that button is clicked, it creates a JavaScript timer that executes once a second. When the timer function is executed, an AJAX call is made to retrieve a text value. That text value is then placed into the DOM. What's my problem? IE caching. Crack open Task Manager and watch what happens to the iexplore.exe process (IE 8.0.7600.16385 for me) while the timer in that page is executing. See the memory and handle count getting larger? Why is that happening when, by all accounts, I have caching turned off? I've got the jQuery cache option set to false in $.ajaxSetup. I've got the Cache-Control header set to no-cache and no-store. The Expires header is set to DateTime.Now.AddDays(-1). The headers are set in both the page code-behind as well as the HTTP handler's response. Anybody got any ideas as to how I could prevent IE from caching the results of the AJAX call? Here is what the iexplore.exe process looks like in Process Monitor. I believe that the activity shown in this picture is exactly what I'm attempting to prevent.
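
    A common server-side belt-and-braces response to this, sketched here for an ASP.NET handler (the handler class name and payload are hypothetical, not the asker's code), is to set every anti-caching header explicitly in the handler itself:

        using System;
        using System.Web;

        public class TextValueHandler : IHttpHandler // hypothetical handler name
        {
            public void ProcessRequest(HttpContext context)
            {
                // Belt-and-braces anti-caching headers, aimed at IE in particular
                context.Response.Cache.SetCacheability(HttpCacheability.NoCache);
                context.Response.Cache.SetNoStore();
                context.Response.Cache.SetExpires(DateTime.UtcNow.AddDays(-1));
                context.Response.AppendHeader("Pragma", "no-cache");

                context.Response.ContentType = "text/plain";
                context.Response.Write(DateTime.UtcNow.ToString("o")); // placeholder payload
            }

            public bool IsReusable { get { return true; } }
        }

    Note that jQuery's cache: false option works differently: it appends a unique _={timestamp} parameter to each GET so every request URL is distinct, which bypasses the cache regardless of what the headers say. If the handle count still grows with both in place, the leak is likely unrelated to caching.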

    Read the article

  • Rapid Opening and Closing System.IO.StreamWriter in C#

    - by ccomet
    Suppose you have a file that you are programmatically logging information into, regarding a process. Kinda like your typical debug Console.WriteLine, but due to the nature of the code you're testing, you don't have a console to write onto, so you have to write it somewhere like a file. My current program uses System.IO.StreamWriter for this task. My question is about the approach to using the StreamWriter. Is it better to open just one StreamWriter instance, do all of the writes, and close it when the entire process is done? Or is it a better idea to open a new StreamWriter instance to write a line into the file, then immediately close it, and do this every time something needs to be written? In the latter approach, this would probably be facilitated by a method that does just that for a given message, rather than bloating the main process code with excessive numbers of lines. But having a method to aid in that implementation doesn't necessarily make it the better choice. Are there significant advantages to picking one approach or the other? Or are they functionally equivalent, leaving the choice on the shoulders of the programmer?
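
    For illustration, a minimal sketch of the two approaches (file name and message format are placeholders): one long-lived writer with AutoFlush, versus a helper that opens and closes on every write. File.AppendAllText does the open-write-close cycle in a single call:

        using System;
        using System.IO;

        static class LogSketch
        {
            // Approach 1: open once, write many times, close when the process ends.
            static readonly StreamWriter Writer =
                new StreamWriter("process.log", append: true) { AutoFlush = true };

            public static void LogKeepOpen(string message)
            {
                Writer.WriteLine("{0:o} {1}", DateTime.Now, message);
            }

            // Approach 2: open and close around every single line.
            public static void LogOpenClose(string message)
            {
                // File.AppendAllText opens, appends, and closes internally.
                File.AppendAllText("process.log",
                    string.Format("{0:o} {1}{2}", DateTime.Now, message, Environment.NewLine));
            }
        }

    The first avoids paying the open/close cost on every line; the second never holds the file open between writes, so other processes can read or rotate the log freely. Functionally both produce the same output, so the trade-off is performance versus file-handle hygiene.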

    Read the article

  • How can I build the value for a "for-each" expression in XSLT with the help of a parameter?

    - by Artic
    I need to navigate through this XML tree:

        <publication>
          <corporate>
            <contentItem>
              <metadata>meta</metadata>
              <content>html</content>
            </contentItem>
            <contentItem>
              <metadata>meta1</metadata>
              <content>html1</content>
            </contentItem>
          </corporate>
          <eurasia-and-africa>
            ...
          </eurasia-and-africa>
          <europe>
            ...
          </europe>
        </publication>

    and convert it to HTML with this stylesheet:

        <ul>
          <xsl:variable name="itemsNode" select="concat('publicationManifest/',$group,'/contentItem')"></xsl:variable>
          <xsl:for-each select="$itemsNode">
            <li>
              <xsl:value-of select="content"/>
            </li>
          </xsl:for-each>
        </ul>

    $group is a parameter holding the name of a group, for example "corporate". I get an error compiling this stylesheet:

        SystemID: D:\1\contentsTransform.xslt
        Engine name: Saxon 6.5.5
        Severity: error
        Description: The value is not a node-set
        Start location: 18:0

    What's the matter?

    Read the article

  • Is it still true that making cross-browser layouts for desktop browsers using table+css is easier than div+css?

    - by metal-gear-solid
    One of my web designer friends still builds sites with tables, but he uses CSS very nicely. I also use CSS nicely, but with <div>, and I face more cross-browser layout problems than my friend does. I gave my friend some reasons against <table>. Read my whole discussion with him:

    I - Your site will be problematic with screen readers.
    My friend - OK, but I never got any call from any client regarding this.
    I - You will devote more time to making any layout changes the client asks for.
    My friend - I don't think so, but if that's true, then show me how I can save time with <div>.
    I - Your sites will not work well with search engines.
    My friend - That's not true. I've made many sites, with no problem from any site or client regarding this.
    I - Table layout is the old, non-W3C, non-standard way.
    My friend - What is old and what is new? Who is the W3C? I don't know. What is a standard? Whatever I make works in all browsers; it's enough for me, and my clients will not pay for standards and W3C guidelines.
    I - Your site will not work in mobile browsers.
    My friend - No problem for me; my clients don't care about mobile phones.
    I - Your sites are not accessible.
    My friend - What do you mean, not accessible? Whatever I make works in all browsers. No client of mine ever asked about accessibility.
    I - You will not get more work in the future with tables.
    My friend - OK, no problem. When clients no longer accept sites built with tables, I will learn about div-based layouts.

    My questions:
    Is it still true that making cross-browser layouts for desktop browsers is easier with table+css than with div+css?
    What is the benefit for a developer of using DIV+CSS layout in place of <table> layouts if the client doesn't mind which I use?

    Read the article

  • Crash when using STL vector at() instead of operator[]

    - by Jamie Cook
    I have a method as follows (from a class that implements the TBB task interface; not currently multithreading though). My problem is that two ways of accessing a vector cause quite different behaviour: one works, and the other causes the entire program to bomb out quite spectacularly (this is a plugin, and normally a crash will be caught by the host, but this one takes out the host program as well! As I said, quite spectacular).

        void PtBranchAndBoundIterationOriginRunner::runOrigin(int origin, int time) const // NOTE: const method
        {
            BOOST_FOREACH(int accessMode, m_props->GetAccessModes())
            {
                // get a const reference to the appropriate vector from a member variable:
                // map<int, vector<double>> m_rowTotalsByAccessMode;
                const vector<double>& rowTotalsForAccessMode =
                    m_rowTotalsByAccessMode.find(accessMode)->second;

                if (origin != 129) continue; // additional debug constraint: I know that the
                                             // vector only has one non-zero element, at index 129

                m_job->Write("size: " + ToString(rowTotalsForAccessMode.size()));
                try
                {
                    // check for early return... i.e. nothing to do for this origin
                    if (!rowTotalsForAccessMode[origin]) continue;    // <- this works
                    if (!rowTotalsForAccessMode.at(origin)) continue; // <- this crashes
                }
                catch (...)
                {
                    m_job->Write("Caught an exception"); // but it's not an exception
                }

                // do some other stuff
            }
        }

    I hate not putting in well-defined questions, but at the moment my best phrasing is: "WTF?" I'm compiling this with Intel C++ 11.0.074 [IA-32] using Microsoft (R) Visual Studio Version 9.0.21022.8, and my implementation of vector has

        const_reference operator[](size_type _Pos) const
        {   // subscript nonmutable sequence
        #if _HAS_ITERATOR_DEBUGGING
            if (size() <= _Pos)
            {
                _DEBUG_ERROR("vector subscript out of range");
                _SCL_SECURE_OUT_OF_RANGE;
            }
        #endif /* _HAS_ITERATOR_DEBUGGING */
            _SCL_SECURE_VALIDATE_RANGE(_Pos < size());
            return (*(_Myfirst + _Pos));
        }

    (iterator debugging is off, I'm pretty sure) and

        const_reference at(size_type _Pos) const
        {   // subscript nonmutable sequence with checking
            if (size() <= _Pos)
                _Xran();
            return (*(begin() + _Pos));
        }

    So the only difference I can see is that at() calls begin() instead of simply using _Myfirst; but how could that possibly be causing such a huge difference in behaviour?

    Read the article

  • Web Crawler C# .Net

    - by sora0419
    I'm not sure if this is actually called a web crawler, but this is what I'm trying to do. I'm building a program in Visual Studio 2010 using C# .NET. I want to find all the URLs that share the same first part. Say I have a homepage, www.mywebsite.com, and there are several subpages: /tab1, /tab2, /tab3, etc. Is there a way to get a list of all URLs that begin with www.mywebsite.com? So by providing www.mywebsite.com, the program returns www.mywebsite.com/tab1, www.mywebsite.com/tab2, www.mywebsite.com/tab3, etc. P.S. I do not know how many total subpages there are.
    --edit at 12:04pm-- Sorry for the lack of explanation. I want to know how to write a crawler in C# that does the above task. All I know is the main URL, www.mywebsite.com, and the goal is to find all of its subpages.
    --edit at 12:16pm-- Also, there are no links on the main page; the HTML is basically blank. I just know that the subpages exist, but I have no way to reach them except by providing the exact URLs.
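
    For what it's worth, a minimal breadth-first crawler along these lines can be sketched in C# (the regex link extraction is deliberately naive, and a crawler can only discover pages that something links to):

        using System;
        using System.Collections.Generic;
        using System.Net;
        using System.Text.RegularExpressions;

        class PrefixCrawler
        {
            static void Main()
            {
                const string root = "http://www.mywebsite.com/";
                var pending = new Queue<string>();
                var seen = new HashSet<string>(StringComparer.OrdinalIgnoreCase);
                pending.Enqueue(root);
                seen.Add(root);

                using (var client = new WebClient())
                {
                    while (pending.Count > 0)
                    {
                        string url = pending.Dequeue();
                        string html;
                        try { html = client.DownloadString(url); }
                        catch (WebException) { continue; } // skip pages that fail to load

                        Console.WriteLine(url);

                        // Naive href extraction; a real HTML parser would be more robust.
                        foreach (Match m in Regex.Matches(html,
                            "href\\s*=\\s*[\"']([^\"']+)[\"']", RegexOptions.IgnoreCase))
                        {
                            Uri absolute;
                            if (Uri.TryCreate(new Uri(url), m.Groups[1].Value, out absolute)
                                && absolute.AbsoluteUri.StartsWith(root, StringComparison.OrdinalIgnoreCase)
                                && seen.Add(absolute.AbsoluteUri)) // only follow URLs under the prefix, once
                            {
                                pending.Enqueue(absolute.AbsoluteUri);
                            }
                        }
                    }
                }
            }
        }

    Given the second edit, though, crawling alone cannot work here: with a blank homepage there is nothing to follow, so the realistic options are an index the server exposes (a sitemap.xml, for example) or brute-force probing of candidate URLs.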

    Read the article

  • Query crashes MS Access

    - by user284651
    THE TASK: I am in the process of migrating a DB from MS Access to Maximizer. In order to do this I must take 64 tables in MS Access and merge them into one. The output must be in the form of a TAB or CSV file, which will then be imported into Maximizer.
    THE PROBLEM: Access is unable to perform a query that complex, it seems, as it crashes any time I run the query.
    ALTERNATIVES: I have thought about a few alternatives, and would like to do the least time-consuming one, while also taking advantage of any opportunities to learn something new:
    1. Export each table to CSV, import the CSVs into SQLite, and then write a query there to do what Access fails to do (merge 64 tables).
    2. Export each table to CSV and write a script to read each one and merge the CSVs into a single CSV.
    3. Somehow connect to the MS Access DB (API) and write a script to pull data from each table and merge it into a CSV file (see the sketch below).
    QUESTION: What do you recommend?
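
    For option 3, a rough sketch of such a script in C# using OleDb (the connection string assumes the older Jet provider and an .mdb file, and the comma handling is a placeholder for real CSV quoting):

        using System;
        using System.Data;
        using System.Data.OleDb;
        using System.IO;

        class AccessToCsv
        {
            static void Main()
            {
                // Jet provider for .mdb files; the ACE provider is needed for .accdb.
                const string connStr =
                    "Provider=Microsoft.Jet.OLEDB.4.0;Data Source=C:\\data\\crm.mdb";

                using (var conn = new OleDbConnection(connStr))
                using (var output = new StreamWriter("merged.csv"))
                {
                    conn.Open();

                    // Enumerate the user tables in the database.
                    DataTable tables = conn.GetSchema("Tables",
                        new string[] { null, null, null, "TABLE" });

                    foreach (DataRow table in tables.Rows)
                    {
                        string name = (string)table["TABLE_NAME"];
                        using (var cmd = new OleDbCommand("SELECT * FROM [" + name + "]", conn))
                        using (OleDbDataReader reader = cmd.ExecuteReader())
                        {
                            while (reader.Read())
                            {
                                var fields = new string[reader.FieldCount];
                                for (int i = 0; i < reader.FieldCount; i++)
                                    fields[i] = Convert.ToString(reader.GetValue(i)).Replace(",", " ");
                                output.WriteLine(string.Join(",", fields));
                            }
                        }
                    }
                }
            }
        }

    In practice, merging 64 heterogeneous tables into one file only makes sense if their columns line up, so a real script would map each table's columns onto the target layout rather than dumping rows raw.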

    Read the article

  • Is there a Designer for MFC in Visual Studio like for windows forms in .NET?

    - by claws
    I'm a .NET programmer. I've never developed anything in MFC. Recently I had to write a C++ application (console) for an image processing task. I finished writing it, but the point is I also need to design a GUI for it. It won't be anything complex: just a window with a few buttons, radio buttons, check boxes, a picture box, and a few sliders. That's it. I'm using VS 2008 and was expecting a .NET-style form designer. Just to test, I created an MFC project (with all the default configuration) and these files were created by default:

        ChildFrm.cpp
        MainFrm.cpp
        mfc.cpp
        mfcDoc.cpp
        mfcView.cpp
        stdafx.cpp

    Now I'm unable to find a designer. There is no view designer. I've opened all of the above .cpp files and right-clicked in the code editor looking for a "Designer View". The Toolbox is just empty because I'm in code editor mode. When I built the project, this is the window I get. How do I open a designer?

    Read the article

  • How to run a TimerTask off the main UI thread?

    - by huskyd97
    I am having trouble with a TimerTask interfering with in-app purchasing (async tasks). I am weak with threads, so I believe it is running on the main UI thread, eating up resources. How can I run this outside the UI thread? I have searched and tried some suggestions using handlers, but I seem to get the same result: the app gets really laggy. When I don't run this task (which fires every 500 ms), the activity runs smoothly and there are no hangs during in-app purchases. Your help is appreciated; code snippet below:

        public class DummyButtonClickerActivity extends Activity {

            protected Timer timeTicker = new Timer("Ticker");
            private Handler timerHandler = new Handler();
            protected int timeTickDown = 20;

            @Override
            public void onCreate(Bundle savedInstanceState) {
                super.onCreate(savedInstanceState);
                setContentView(R.layout.mainhd);

                // start money earned timer handler
                TimerTask tick = new TimerTask() {
                    public void run() {
                        myTickTask();
                    }
                };
                timeTicker.scheduleAtFixedRate(tick, 0, 500); // 500 ms each
            } // End onCreate

            protected void myTickTask() {
                if (timeTickDown == 0) {
                    // run my code here
                    // total = total + _Rate;
                    timerHandler.post(doUpdateTimeout);
                } else if (timeTickDown < 0) {
                    // do nothing
                }
                timeTickDown--;
            }

            private Runnable doUpdateTimeout = new Runnable() {
                public void run() {
                    updateTimeout();
                }
            };

            private void updateTimeout() {
                // reset tick
                timeTickDown = 2; // 2 * 500 ms == once a second
            }
        }

    Read the article

  • .tpl files and website problem

    - by whitstone86
    Apologies if the title is in lowercase, but it's describing an extension format. I've started using Dwoo as my template engine for PHP, and am not sure how to convert my PHP files into .tpl templates. My site is similar to, but not the same as, http://library.digiguide.com/lib/programme/Medium-319648/Drama/ in its design (except that the colour scheme and site name are different, plus it's in PHP, so copyright issues are avoided here; the design arguably could be seen as parody even though the content is different). The database is called tvguide, and it has these tables:

    Programmes:
    - House M.D.
    - Medium
    - Police Stop!
    - American Dad!

    The table names of the above programmes are housemdonair, mediumonair, policestopair, and americandad1.

    Episodes: the table names for the above programmes' episode guides are housemdepidata, mediumepidata, policestopepidata, and americandad1epidata. All of them have the following rows:

    - id (not an auto-increment, since I wish to dynamically generate a page from this)
    - episodename
    - seriesnumber
    - episodenumber
    - episodesynopsis (the above four after id do exactly as stated)

    I have a pagination script that works; it displays 20 records per page as I want it to. This is called pmcPagination.php, but I won't post it in full since it would take up too much space. However, I'm trying to get it so that URLs are filled in like this (OK, so the examples below are ASP.NET, but if there's a PHP/MySQL equivalent I would gratefully appreciate it!):

    http://library.digiguide.com/lib/episode/741168
    http://library.digiguide.com/lib/episode/714829

    with the episode detail and data. My site works, but it's fairly basic, and it won't go online until my bugs are fixed. mod_rewrite is enabled, so my site reads as http://mytvguide.com/episode/123456 or http://mytvguide.com/programme/123456 or http://mytvguide.com/WorldInAction/123456/Documentary/. I've tried looking on Google, but am not sure how to get this TV guide script to work at its best; I think .tpl and PHP/MySQL are the way to go. Any advice anyone has on making this project into a fully workable, ready-to-use site would be much appreciated; I've spent months refining it!
    P.S. Apologies for the length of this; I hope it describes my project well.

    Read the article

  • Mono Text Based Web Browser

    - by powerbox
    Hi guys, is there any public text based web browser implementation for C# or on mono base api that I can use to fill up web forms automatically? I'll be using it to automate some web task that does not require any image authentication. I'm currently using a web browser control available on .Net Framework and waits for the event WebBrowserDocumentCompletedEventHandler to fire after a page is successfully loaded and invoke some actions like Submit or simulating a mouse click on some links. It actually does the job but I can't process bulk transactions since I needed to wait for the whole page to be loaded together with the images and other stuff. It is easy to use HttpWebRequest to fill up some forms , provide some data and then submit. But on some occasions I only need to simulate a mouse click to a certain link which I don't know how to do with HttpWebRequest. By the way using HttpWebRequest will still download all the images of a web page that I see pointless since I only need to provide correct data back to the server. I hope someone can pinpoint me the correct way of doing this kind of automation and thanks in advance!
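
    At the HTTP level, a "click" on a link is nothing more than a GET request to the link's href, and a form submit is a POST of the field values; no images are fetched unless you request them explicitly. A hedged sketch (URLs and field names are placeholders):

        using System;
        using System.IO;
        using System.Net;
        using System.Text;

        class FormBot
        {
            static void Main()
            {
                // Share one cookie container across requests to keep a session alive.
                var cookies = new CookieContainer();

                // "Clicking" a link: just GET its href.
                var get = (HttpWebRequest)WebRequest.Create("http://example.com/some/link");
                get.CookieContainer = cookies;
                using (var response = (HttpWebResponse)get.GetResponse())
                using (var reader = new StreamReader(response.GetResponseStream()))
                {
                    Console.WriteLine(reader.ReadToEnd());
                }

                // Submitting a form: POST the url-encoded field values.
                var post = (HttpWebRequest)WebRequest.Create("http://example.com/login");
                post.CookieContainer = cookies;
                post.Method = "POST";
                post.ContentType = "application/x-www-form-urlencoded";
                byte[] body = Encoding.UTF8.GetBytes("user=me&password=secret");
                post.ContentLength = body.Length;
                using (Stream s = post.GetRequestStream())
                {
                    s.Write(body, 0, body.Length);
                }
                using (var response = (HttpWebResponse)post.GetResponse())
                {
                    Console.WriteLine(response.StatusCode);
                }
            }
        }

    The part a real browser adds is parsing the HTML to find the href or the form fields in the first place; the usual suggestion in C# is an HTML parser such as the HTML Agility Pack rather than a full browser control.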

    Read the article

  • Word frequency tally script is too slow

    - by Dave Jarvis
    Background: I created a script to count the frequency of words in a plain text file. The script performs the following steps:

    1. Count the frequency of words in a corpus.
    2. Retain each word from the corpus that is found in a dictionary.
    3. Create a comma-separated file of the frequencies.

    The script is at: http://pastebin.com/VAZdeKXs

    Problem: The following lines continually cycle through the dictionary to match words:

        for i in $(awk '{if( $2 ) print $2}' frequency.txt); do
          grep -m 1 ^$i\$ dictionary.txt >> corpus-lexicon.txt;
        done

    It works, but it is slow because it scans the words it found to remove any that are not in the dictionary. The code performs this task by scanning the dictionary for every single word. (The -m 1 parameter stops the scan when a match is found.)

    Question: How would you optimize the script so that the dictionary is not scanned from start to finish for every single word? The majority of the words will not be in the dictionary. Thank you!

    Read the article

  • Is it possible to have WAMP run httpd.exe as user [myself] instead of local SYSTEM?

    - by Olivier H
    Hello! I run a Django application over Apache with mod_wsgi, using WAMP. A certain URL allows me to stream the content of image files whose paths are stored in a database. The files can be located either on the local machine or on a network drive (\\my\network\folder). With the development server (manage.py runserver), I have no trouble at all reading and streaming the files. With WAMP, and with network drive files, I get an IOError: obviously because the httpd instance does not have read permission on said drive. In the Task Manager, I see that httpd.exe is run by SYSTEM. I would like to tell WAMP to run the server as [myself], as I have read and write permissions on the shared folder. (Eventually, the production server should be run by a 'www-admin' user having the permissions.) Mapping the network shared folder to a drive letter (Z:, for instance) does not solve this at all. The User/Group directives in httpd.conf do not seem to have any kind of influence on Apache's behaviour. I've also tried the registry: I duplicated the HKLM\[...]\wampapache registry key under HKEY_CURRENT_USER\ and renamed the original key, but then the new key does not seem to be found when I run

        httpd.exe -n wampapache -k start

    or when I start WAMP. I've run out of ideas :) Has anybody ever had the same issue?

    Read the article

  • CakePHP dropping session between pages

    - by DavidYell
    Hi, I have an application with multiple regions and various incoming links. The premise (well, it worked before) is that in the app_controller I break out these incoming links and set them in the session. So I have a huge beforeFilter() in my app_controller which catches these and sets two variables in the session, Viewing.region and Search.engine. No problem. The problem arises in that the session does not seem to be persistent across page requests. So, for example, going to /reviews/write (userReviews/add) should have a session available which was set when the user arrived at the site, although it seems to have vanished! It would appear that unless $this->params is caught explicitly in the app_controller and a session variable written, it does not exist on other pages. So far I have tried swapping between storing the session in 'cake' and 'php'; both seem to exhibit the same behaviour. I use 'php' as the default. My Session.timeout is '120', Session.checkAgent is false, and Security.level is 'low', all of which should give the framework enough leniency to allow sessions the most room to live! I'm a bit stumped as to why the session seems to be either recreated or blanked when a new page is requested. I have commented out the requestAction() calls to make sure they aren't confusing the session request object, which doesn't seem to make a difference. Any help would be great, as I don't want to have to recode the site to pass all the various variables via parameters in the URL; that would suck, and it has worked before, thus the switching on $this->Session->read('Viewing.region') in all my code!

    Read the article

  • Better way to compare neighboring cells in matrix

    - by HyperCube
    Suppose I have a matrix of size 100x100 and I would like to compare each pixel to its direct neighbours (left, upper, right, lower) and then do some operations on the current matrix or a new one of the same size. A sample code in Python/NumPy could look like the following (the comparison against 0.5 has no meaning; I just want to give a working example of some operation while comparing the neighbours):

        import numpy as np

        my_matrix = np.random.rand(100,100)
        new_matrix = np.array((100,100))
        my_range = np.arange(1,99)

        for i in my_range:
            for j in my_range:
                if my_matrix[i,j+1] > 0.5:
                    new_matrix[i,j+1] = 1
                if my_matrix[i,j-1] > 0.5:
                    new_matrix[i,j-1] = 1
                if my_matrix[i+1,j] > 0.5:
                    new_matrix[i+1,j] = 1
                if my_matrix[i-1,j] > 0.5:
                    new_matrix[i-1,j] = 1
                if my_matrix[i+1,j+1] > 0.5:
                    new_matrix[i+1,j+1] = 1
                if my_matrix[i+1,j-1] > 0.5:
                    new_matrix[i+1,j-1] = 1
                if my_matrix[i-1,j+1] > 0.5:
                    new_matrix[i-1,j+1] = 1

    This can get really nasty if I want to step into one neighbouring cell and start from it to do a similar task... Do you have some suggestions for how this can be done in a more efficient manner? Is this even possible?

    Read the article

  • C++ AMP, for loops to parallel_for_each loop

    - by user1430335
    I'm converting an algorithm to make use of the massive acceleration that C++ AMP provides. The stage I'm at is putting the for loops into the well-known parallel_for_each loop. Normally this should be a straightforward task, but it appears more complex than I first thought. It's a nested loop which I increment in steps of 4 per iteration:

        for (int j = 0; j < height; j += 4, data += width * 4 * 4)
        {
            for (int i = 0; i < width; i += 4)
            {

    The trouble I'm having is the use of the index. I can't seem to find a way to fit this properly into the parallel_for_each loop. Using an index of rank 2 is the way to go, but manipulating it via branching will harm the performance gain. I found a similar post: Controlling the index variables in C++ AMP. It also deals with index manipulation, but the increment aspect doesn't cover my issue. With kind regards, Forcecast

    Read the article

  • Selective replication with CouchDB

    - by FRotthowe
    I'm currently evaluating possible solutions to the following problem: a set of data entries must be synchronized between multiple clients, where each client may only view (or even know about the existence of) a subset of the data. Each client "owns" some of the elements, and the decision of who else can read or modify those elements may only be made by the owner. To complicate this situation even more, each element (and each element revision) must have a unique identifier that is equal for all clients. While the latter sounds like a perfect task for CouchDB (and a document-based data model would fit my needs perfectly), I'm not sure if the authentication/authorization subsystem of CouchDB can handle these requirements: while it should be possible to restrict write access using validation functions, there doesn't seem to be a way to authorize read access. All the solutions I've found for this problem propose routing all CouchDB requests through a proxy (or an application layer) that handles authorization. So, the question is: is it possible to implement an authorization layer that filters requests to the database so that access is granted only to documents the requesting client has read access to, and still use the replication mechanism of CouchDB? Simplified, this would be some kind of "selective replication" where only some of the documents, and not the whole database, are replicated. I would also be thankful for directions to some detailed information about how replication works. The CouchDB wiki and even the "Definitive Guide" book are not too specific about that.

    Read the article

  • How can I filter then modify e-mails using IMAP?

    - by swolff1978
    I have asked this question in a different post here on SO: How can a read receipt be suppressed? I have been doing some research of my own to try to solve this problem, and accessing the e-mail account via IMAP seems like a good solution. I have successfully been able to access my own inbox and mark messages as read with no issue. I have been asked to perform the same task on an inbox that contains over 23,000 e-mails. I would like to run the test on a small group of e-mails from that inbox before letting it loose on the whole 23,000. Here is the code I have been running via telnet:

        . LOGIN [email protected] password
        . SELECT Inbox
        . STORE 1:* flags \Seen    (this line marks all the e-mails as read)

    So my question is: how can I execute that STORE command on a specific group of e-mails, say, e-mails going to or coming from a specific account? Is there a way to chain the commands, like a FETCH and then a STORE? Or is there a better way, achievable through IMAP, to get a collection of e-mails based on certain criteria and then modify only those e-mails?
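
    For what it's worth, IMAP can do the filtering server-side: the SEARCH command (RFC 3501) returns the sequence numbers of messages matching a criterion, and that set can be fed straight to STORE. A sketch of such a session, with a placeholder address and made-up sequence numbers:

        . SEARCH FROM "sender@example.com"
        * SEARCH 3 17 42
        . STORE 3,17,42 +FLAGS \Seen

    SEARCH also accepts criteria such as TO, SINCE, and UNSEEN, so a test run on a small group of messages can be done entirely on the server before touching the full 23,000.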

    Read the article

  • Are workflows good for web service business logic?

    - by JL
    I have a series of complex web services that are used in my SOA application. I am generally happy with the overall design of the application, but as the complexity grows, I have been wondering whether Windows Workflow might be the way to go. My motivation is that you get a graphic representation of the application's functionality, so it would be easier to maintain the code by its business function rather than with what I have now (a standard 3-tier class library structure). My concerns are: I would be introducing an abstraction into my code, and I don't want to spend time dealing with possible WF quirks or bugs. I've never worked with WF; is it a solid technology? I don't want to hit any WF limitations that prevent me from developing my solution. Is a WF even the right solution for the task? Simply put, I am considering writing the next web service in this app so that it calls a WF, and managing the tasks the web service needs to carry out in that workflow. I think it will be much neater and easier to maintain than a regular C# class library (maintainable by namespaces and classes). Do you think this is the right thing to do? I'm hoping for positive feedback on WF (.NET 4), but brutal honesty, at the end of the day, would help more. Thanks

    Read the article

  • Nginx - Treats PHP as binary

    - by Think Floyd
    We are running Nginx + FastCGI as the backend for our Drupal site. Everything seems to work fine, except for this one URL:

    http:///sites/all/modules/tinymce/tinymce/jscripts/tiny_mce/plugins/smimage/index.php

    (We use the TinyMCE module in Drupal, and the URL above is invoked when a user tries to upload an image.) When we were using Apache, everything worked fine. However, nginx treats the above URL as binary and tries to download it. (We've verified that the file pointed to by the URL is a valid PHP file.) Any idea what could be wrong here? I think it's something to do with the nginx configuration, but I'm not entirely sure what. Any help is greatly appreciated.

    Config: here's the snippet from the nginx configuration file:

        root /var/www/;
        index index.php;

        if (!-e $request_filename) {
            rewrite ^/(.*)$ /index.php?q=$1 last;
        }

        error_page 404 index.php;

        location ~* \.(engine|inc|info|install|module|profile|po|sh|.*sql|theme|tpl(\.php)?|xtmpl)$|^(code-style\.pl|Entries.*|Repository|Root|Tag|Template)$ {
            deny all;
        }

        location ~* ^.+\.(jpg|jpeg|gif|png|ico)$ {
            access_log off;
            expires 7d;
        }

        location ~* ^.+\.(css|js)$ {
            access_log off;
            expires 7d;
        }

        location ~ .php$ {
            include /etc/nginx/fcgi.conf;
            fastcgi_pass 127.0.0.1:8888;
            fastcgi_index index.php;
            fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
            fastcgi_param QUERY_STRING $query_string;
            fastcgi_param REQUEST_METHOD $request_method;
            fastcgi_param CONTENT_TYPE $content_type;
            fastcgi_param CONTENT_LENGTH $content_length;
        }

        location ~ /\.ht {
            deny all;
        }

    Read the article

  • .htaccess - alias all www-only requests to subdirectory

    - by CodeMoose
    Trying to install the wonderful Concrete5 CMS to use as my main site engine. The problem is it has about 15 different files and directories, and they clutter up my root. I'd really like to move it to a /_concrete/ subdirectory and still serve it from the domain root. htaccess has never been my strong suit; after a lot of research and learning, and a lot of error 500s, my frustration is overriding my pride and I'm posting here. Here's exactly what I'm trying to accomplish:

    - Any request that comes through www.domain.com is forwarded to www.domain.com/_concrete/, except in the case of an existing file.
    - The end-user URL shouldn't change: users will still see the site as www.domain.com, even though they're being served www.domain.com/_concrete/.
    - Multiple subdomains exist on this site as subfolders within the root; thus, only requests coming through www.domain.com should be redirected.

    Here's the closest I got with my htaccess, which produces an error 500:

        RewriteEngine On
        RewriteCond %{HTTP_HOST} ^(www\.)?domain\.com [NC]
        RewriteCond %{REQUEST_FILENAME} !-f
        RewriteCond %{REQUEST_FILENAME} !^_concrete
        RewriteRule ^(.*)$ _concrete/$1 [L,QSA]

    This is the result of 4 hours of sweat and blood (mostly blood), so I have to be close. I'm hoping one of your fine minds can point out a stupid mistake and put this thing to rest swiftly. Thanks for your time!
    Addendum: I previously posted ".htaccess - alias domain root to subfolder" a while ago, which got me started. Please don't fall into the trap of thinking it's a duplicate.

    Read the article

  • Response time increasing (worsening) over time with consistent load

    - by NJ
    OK, I know I don't have a lot of information; that is, essentially, the reason for my question. I am building a game using Flash/Flex and Rails on the back end. Communication between the two is via WebORB. Here is what is happening: when I start the client, an operation calls the server every 60 seconds (not much, right?), which results in two database SELECTs and an UPDATE, and a resulting response to the client. This repeats every 60 seconds. I deployed a test version on Heroku, and New Relic's RPM told me that response time degraded over time. One client, with one task every 60 seconds. Over several hours the response time drifted from 150 ms to over 900 ms. I have been able to reproduce this in my development environment (my MacBook Pro), so it isn't a problem on Heroku's side. I am not doing anything sophisticated (by design) in the server app. An action gets called, gets some data from the database, performs an AR update, and then returns a response. No caching, etc. Any thoughts? Anyone? I'd really appreciate it.

    Read the article

  • C# Hook Forms / Windows / Dialogs etc. (via HWND?) to Capture Video Buffer (D3D Device?)

    - by Drax
    I am looking to create a very simple C# application which runs full-screen in Direct3D and is able to grab the desktop 'scene', mapping each window from the desktop onto a textured polygon in my D3D scene. I'm hoping to create a simplistic "3D desktop" type of application as an experiment, and I'm wondering if there is a specific method for doing something like the following:

    1. Get a list of all the windows open on the desktop (a list of HWNDs?).
    2. Grab the X,Y position of each window, as well as its width and height (for these first two steps, see the sketch below).
    3. Grab the rendered image of each window (magic happens here).
    4. Create a new texture/surface in D3D using the width and height of the window(s), and apply the image we grabbed as a texture.

    Is there an efficient 'best practice' for acquiring the actual images being rendered to the desktop? Is there also a 'best practice' for "extending the desktop" to a virtual second, third, etc. "desktop" and being able to swap between them, including creating a unique instance of the taskbar for each virtual desktop? Thanks a million for any suggestions!
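
    For steps 1 and 2, the usual route is P/Invoke into user32.dll. A hedged sketch that only covers enumeration and geometry, not the image-grabbing magic of step 3:

        using System;
        using System.Runtime.InteropServices;
        using System.Text;

        class WindowLister
        {
            delegate bool EnumWindowsProc(IntPtr hWnd, IntPtr lParam);

            [DllImport("user32.dll")]
            static extern bool EnumWindows(EnumWindowsProc lpEnumFunc, IntPtr lParam);

            [DllImport("user32.dll")]
            static extern bool IsWindowVisible(IntPtr hWnd);

            [DllImport("user32.dll")]
            static extern bool GetWindowRect(IntPtr hWnd, out RECT rect);

            [DllImport("user32.dll", CharSet = CharSet.Auto)]
            static extern int GetWindowText(IntPtr hWnd, StringBuilder text, int maxCount);

            [StructLayout(LayoutKind.Sequential)]
            struct RECT { public int Left, Top, Right, Bottom; }

            static void Main()
            {
                // Walk every top-level window and report the visible, titled ones.
                EnumWindows((hWnd, lParam) =>
                {
                    if (!IsWindowVisible(hWnd)) return true; // keep enumerating

                    RECT r;
                    GetWindowRect(hWnd, out r);
                    var title = new StringBuilder(256);
                    GetWindowText(hWnd, title, title.Capacity);

                    if (title.Length > 0)
                        Console.WriteLine("{0} at ({1},{2}) size {3}x{4}",
                            title, r.Left, r.Top, r.Right - r.Left, r.Bottom - r.Top);

                    return true; // false would stop the enumeration
                }, IntPtr.Zero);
            }
        }

    For step 3, the options usually mentioned are PrintWindow and the DWM thumbnail API; which works best depends on whether the target window is layered or hardware-accelerated, so treat both as leads rather than a guaranteed recipe.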

    Read the article

  • .NET 4 ... Parallel.ForEach() question

    - by CirrusFlyer
    I understand that the new TPL (Task Parallel Library) has implemented Parallel.ForEach() such that it works with "expressed parallelism". Meaning, it does not guarantee that your delegates will run on multiple threads; rather, it checks whether the host platform has multiple cores, and only if true does it distribute the work across the cores (essentially one thread per core). If the host system does not have multiple cores (it's getting harder and harder to find such a computer), then it will run your code sequentially, like a "regular" foreach loop would. Pretty cool stuff, frankly. Normally I would do something like the following to place my long-running operation on a background thread from the ThreadPool:

        ThreadPool.QueueUserWorkItem( new WaitCallback(targetMethod), new Object2PassIn() );

    In a situation where the host computer only has a single core, does the TPL's Parallel.ForEach() automatically place the invocation on a background thread? Or should I manually invoke any TPL calls from a background thread, so that if I am executing on a single-core computer, at least that logic will be off the GUI's dispatching thread? My concern is that if I leave the TPL in charge of all this, I want to ensure that if it determines it's a single-core box, it still marshals the code inside the Parallel.ForEach() loop onto a background thread, like I would have done, so as not to block my GUI. Thanks for any thoughts or advice you may have ...
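
    One way to make the behaviour explicit regardless of core count (a sketch; items and DoWork are placeholders) is to wrap the whole loop in a Task, so the calling GUI thread never executes the iterations itself:

        using System;
        using System.Threading.Tasks;

        class OffUiThreadSketch
        {
            static void DoWork(int item) { /* placeholder for the long-running body */ }

            static void Main()
            {
                int[] items = { 1, 2, 3, 4 };

                // Wrap the parallel loop in a Task so the calling (GUI) thread is
                // never the one executing the iterations, even if Parallel.ForEach
                // ends up running sequentially on a single-core machine.
                Task work = Task.Factory.StartNew(() =>
                    Parallel.ForEach(items, item => DoWork(item)));

                work.Wait(); // in a GUI you'd use ContinueWith instead of blocking
            }
        }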

    Read the article

  • Running a process at the Windows 7 Welcome Screen

    - by peelman
    So here's the scoop: I wrote a tiny C# app a while back that shows the hostname, IP address, imaged date, thaw status (we use DeepFreeze), current domain, and the current date/time on the welcome screen of our Windows 7 lab machines. This was to replace our previous information block, which was set statically at startup and actually embedded text into the background, with something a little more dynamic and functional. The app uses a Timer to update the IP address, DeepFreeze status, and clock every second, and it checks to see if a user has logged in and kills itself when it detects such a condition. If we just run it via our startup script (set via Group Policy), it holds the script open and the machine never makes it to the login prompt. If we use something like the start or cmd commands to launch it under a separate shell/process, it runs until the startup script finishes, at which point Windows seems to clean up any and all child processes of the script. We're currently able to bypass that using psexec -s -d -i -x, which lets it persist after the startup script has completed, but this can be incredibly slow, adding anywhere between 5 seconds and over a minute to our startup time. We have experimented with using another C# app to start the process, via the Process class, using WMI calls (Win32_Process and Win32_ProcessStartup) with various startup flags, etc., but all end with the same result: the script finishes and the info-block process gets killed. I tinkered with rewriting the app as a service, but services were never designed to interact with the desktop, let alone the login window, and getting things operating in the right context never really seemed to work out. So, the question: does anybody have a good way to accomplish this? Launch a task so that it is independent of the startup script and runs on top of the welcome screen?

    Read the article
