Search Results

Search found 3627 results on 146 pages for 'v light'.


  • algorithm to generate maximum number of character 'A' using keystrokes 'A', CTRL + 'A', CTRL + 'C' and CTRL + 'V'

    - by munda
    This is an interview question from Google. I am not able to solve it by myself. Can somebody shed some light? The question goes like this: write a program to print the sequence of keystrokes that generates the maximum number of 'A' characters. You are allowed to use only 4 keys: 'A', CTRL+A, CTRL+C and CTRL+V. Only N keystrokes are allowed, and each CTRL combination counts as a single keystroke, so CTRL+A is one keystroke. For example: A, CTRL+A, CTRL+C, CTRL+V generates two A's in 4 keystrokes.

    Edit: CTRL+A is Select All, CTRL+C is Copy, and CTRL+V is Paste.

    I did some mathematics. For any N, using x presses of 'A', one CTRL+A, one CTRL+C and y presses of CTRL+V, we can generate at most ((N-1)/2)^2 A's. But beyond some N, it is better to use several CTRL+A, CTRL+C, CTRL+V groups, since each such group doubles the number of A's.

    Edit 2: CTRL+V will not overwrite the existing selection; it appends the pasted text to the selected text.
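    A dynamic-programming sketch of the recurrence implied above (an illustrative solution, not part of the original post): let f(n) be the maximum number of A's reachable in n keystrokes. The last action is either a plain 'A', or a tail of CTRL+A, CTRL+C followed by a run of CTRL+V presses that multiplies some earlier total. With the append semantics from Edit 2, ending such a block at keystroke i after f(j) A's yields f(j) * (i - j - 1).

        # Sketch in Python, assuming the Edit 2 semantics (paste appends
        # to the selection rather than replacing it).
        def max_as(n):
            f = [0] * (n + 1)
            for i in range(1, n + 1):
                f[i] = f[i - 1] + 1              # option 1: press 'A'
                # option 2: keystrokes j+1..i are ^A, ^C, then (i-j-2) ^V's
                for j in range(1, i - 2):
                    f[i] = max(f[i], f[j] * (i - j - 1))
            return f[n]

        print(max_as(4))   # 4 (just pressing 'A' four times)
        print(max_as(7))   # 9 (A A A, ^A, ^C, ^V, ^V)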

    Read the article

  • difference between HttpContext.Current.User and Thread.CurrentPrincipal and when to use them?

    - by yamspog
    I have just recently run into an issue running an ASP.NET web app under Visual Studio 2008. I get the error 'type is not resolved for member...customUserPrincipal'. Tracking down various discussion groups, it seems that there is an issue with Visual Studio's web server when you assign a custom principal to Thread.CurrentPrincipal. In my code, I now use...

        HttpContext.Current.User = myCustomPrincipal;
        // Thread.CurrentPrincipal = myCustomPrincipal;

    I'm glad that I got the error out of the way, but it raises the question: what is the difference between these two methods of setting a principal? There are other Stack Overflow questions related to the differences, but they don't get into the details of the two approaches. I did find one tantalizing post that had the following grandiose comment but no explanation to back up its assertions:

        Use HttpContext.Current.User for all web (ASPX/ASMX) applications.
        Use Thread.CurrentPrincipal for all other applications, like WinForms,
        console, and Windows service applications.

    Can any of you security/.NET gurus shed some light on this subject?
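    One widely used pattern (a sketch, not from the original post; BuildCustomPrincipal() is a hypothetical helper) is to assign both properties at authentication time so that code reading either one sees the same identity. ASP.NET generally copies HttpContext.User onto the worker thread between pipeline stages, but setting both explicitly avoids surprises:

        // Sketch for Global.asax.cs; assumes using System.Security.Principal;
        // using System.Threading; using System.Web;
        protected void Application_AuthenticateRequest(object sender, EventArgs e)
        {
            IPrincipal principal = BuildCustomPrincipal(); // hypothetical helper
            HttpContext.Current.User = principal;   // per-request, ASP.NET-aware
            Thread.CurrentPrincipal = principal;    // ambient thread identity
        }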

    Read the article

  • Recursive templates: compilation error under g++

    - by Johannes
    Hi, I am trying to use templates recursively to define (at compile time) a d-tuple of doubles. The code below compiles fine with Visual Studio 2010, but g++ fails and complains that it "cannot call constructor 'point<1>::point' directly". Could anyone please shed some light on what is going on here? Many thanks, Jo

        #include <iostream>
        #include <utility>
        using namespace std;

        template <const int N>
        class point {
        private:
            pair<double, point<N-1> > coordPointPair;
        public:
            point() {
                coordPointPair.first = 0;
                coordPointPair.second.point<N-1>::point();
            }
        };

        template<>
        class point<1> {
        private:
            double coord;
        public:
            point() { coord = 0; }
        };

        int main() {
            point<5> myPoint;
            return 0;
        }
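    For what it's worth, g++ is the stricter compiler here: coordPointPair.second.point<N-1>::point(); tries to invoke a constructor on an already-constructed object, which the standard does not allow (Visual Studio accepts it as an extension). The member pair, including its point<N-1> element, is default-constructed before the constructor body runs, so the line can simply be dropped. A sketch of a version both compilers accept:

        // Sketch: rely on default construction of the recursive member.
        template <const int N>
        class point {
            std::pair<double, point<N - 1> > coordPointPair;
        public:
            point() { coordPointPair.first = 0; }  // .second is already built
        };

        template <>
        class point<1> {
            double coord;
        public:
            point() : coord(0) {}
        };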

    Read the article

  • Java Executor: Small tasks or big ones?

    - by Arash Shahkar
    Consider one big task which could be broken into hundreds of small, independently runnable tasks. To be more specific, each small task sends a light network request and decides upon the answer received from the server. These small tasks are not expected to take longer than a second, and involve a few servers in total. I have in mind two approaches to implementing this with the Executor framework, and I want to know which one is better and why:

    1. Create a few tasks, say 5 to 10, each doing a bunch of sends and receives.
    2. Create a single task (Callable or Runnable) for each send and receive, and schedule all of them (hundreds) to be run by the executor.

    I'm sorry if my question shows that I'm too lazy to test these and see for myself which is better (at least performance-wise). My question, while looking for an answer to this specific case, has a more general aspect: in situations like these, when you want an executor to do all the scheduling and other work, is it better to create lots of small tasks or to group them into a smaller number of bigger tasks?
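    A sketch of the second approach (illustrative only; fetchAndCheck() is a hypothetical stand-in for the real send/receive step). The key point is that the pool size, not the number of submitted tasks, bounds the concurrency, so hundreds of small Callables are usually harmless:

        import java.util.*;
        import java.util.concurrent.*;

        class ManySmallTasks {
            static boolean fetchAndCheck(String request) { return true; } // stub

            public static void main(String[] args) throws InterruptedException {
                List<String> requests = Arrays.asList("req1", "req2", "req3");
                ExecutorService pool = Executors.newFixedThreadPool(10);
                List<Callable<Boolean>> tasks = new ArrayList<>();
                for (final String r : requests) {
                    tasks.add(() -> fetchAndCheck(r));  // one light request per task
                }
                List<Future<Boolean>> results = pool.invokeAll(tasks); // waits for all
                pool.shutdown();
            }
        }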

    Read the article

  • how to extract information from JSON using jQuery

    - by Richard Reddy
    Hi, I have a JSON response that is formatted from my C# WebMethod using the JavaScriptSerializer class. I currently get the following JSON back from my query:

        {"d":"[{\"Lat\":\"51.85036\",\"Long\":\"-8.48901\"},{\"Lat\":\"51.89857\",\"Long\":\"-8.47229\"}]"}

    I'm having an issue with my code below that I'm hoping someone might be able to shed some light on. I can't seem to get the information out of the values returned to me. Ideally I would like to be able to read the Lat and Long values for each row returned to me. Below is what I currently have:

        $.ajax({
            type: "POST",
            url: "page.aspx/LoadWayPoints",
            data: "{'args': '" + $('numJourneys').val() + "'}",
            contentType: "application/json; charset=utf-8",
            dataType: "json",
            success: function (msg) {
                if (msg.d != '[]') {
                    var lat = "";
                    var long = "";
                    $.each(msg.d, function () {
                        lat = this['MapLatPosition'];
                        long = this['MapLongPosition'];
                    });
                    alert('lat =' + lat + ', long =' + long);
                }
            }
        });

    I think the issue is something to do with how the JSON is formatted, but I might be incorrect. Any help would be great. Thanks, Rich
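    Two things stand out in the snippet (hedged guesses from the payload shown, not a confirmed diagnosis): msg.d arrives as a string of JSON, because the array was serialized a second time on the server, so it must be parsed before iterating; and the property names used in the loop (MapLatPosition/MapLongPosition) do not match the Lat/Long keys actually present in the data. A sketch of a corrected success handler:

        // Sketch: parse the doubly-serialized payload, then read the
        // keys that actually appear in it.
        success: function (msg) {
            var points = $.parseJSON(msg.d);   // or JSON.parse(msg.d)
            $.each(points, function () {
                alert('lat = ' + this.Lat + ', long = ' + this.Long);
            });
        }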

    Read the article

  • Communication between lexer and parser

    - by FredOverflow
    Every time I write a simple lexer and parser, I stumble upon the same question: how should the lexer and the parser communicate? I see four different approaches:

    1. The lexer eagerly converts the entire input string into a vector of tokens. Once this is done, the vector is fed to the parser, which converts it into a tree. This is by far the simplest solution to implement, but since all tokens are stored in memory, it wastes a lot of space.

    2. Each time the lexer finds a token, it invokes a function on the parser, passing the current token. In my experience, this only works if the parser can naturally be implemented as a state machine, like LALR parsers. By contrast, I don't think it would work at all for recursive descent parsers.

    3. Each time the parser needs a token, it asks the lexer for the next one. This is very easy to implement in C# due to the yield keyword, but quite hard in C++, which doesn't have it (a pull-style sketch follows below).

    4. The lexer and parser communicate through an asynchronous queue. This is commonly known under the title "producer/consumer", and it should simplify the communication between the lexer and the parser a lot. Does it also outperform the other solutions on multicores? Or is lexing too trivial?

    Is my analysis sound? Are there other approaches I haven't thought of? What is used in real-world compilers? It would be really cool if compiler writers like Eric Lippert could shed some light on this issue.
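    For approach 3, C++ needs no yield if the lexer simply keeps its position as member state and hands out one token per call. A minimal pull-style sketch (the token set and names are illustrative):

        #include <cctype>
        #include <string>

        // The parser calls next() whenever it wants another token;
        // nothing is buffered beyond the current position.
        struct Token {
            enum Kind { Number, Ident, End } kind;
            std::string text;
        };

        class Lexer {
            std::string src;
            std::size_t pos;
        public:
            explicit Lexer(const std::string& s) : src(s), pos(0) {}
            Token next() {
                while (pos < src.size() && std::isspace((unsigned char)src[pos])) ++pos;
                if (pos == src.size()) { Token t = {Token::End, ""}; return t; }
                std::size_t start = pos;
                Token::Kind kind = std::isdigit((unsigned char)src[pos])
                                   ? Token::Number : Token::Ident;
                while (pos < src.size() && !std::isspace((unsigned char)src[pos])) ++pos;
                Token t = {kind, src.substr(start, pos - start)};
                return t;
            }
        };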

    Read the article

  • C++ Storing variables and inheritance

    - by Kaa
    Hello everyone, here is my situation. I have an event-driven system where all my handlers derive from an IHandler class and implement an onEvent(const Event &event) method. Event is a base class for all events and contains only the enumerated event type. All actual events derive from it, including the EventKey event, which has 2 fields: (uchar) keyCode and (bool) isDown.

    Here's the interesting part: I generate an EventKey event using the following syntax:

        Event evt = EventKey(15, true);

    and I ship it to the handlers:

        EventDispatch::sendEvent(evt);
        // void EventDispatch::sendEvent(const Event &event);

    (EventDispatch contains a linked list of IHandlers and calls their onEvent(const Event &event) method with the parameter containing the sent event.) Now the actual question: say I want my handlers to poll the events from a queue of type Event. How do I do that?

    - Dynamic pointers with reference counting sound like too big a solution.
    - Making copies is more difficult than it sounds, since I'm only receiving a reference to a base type. Each time I would need to check the type of the event, downcast to EventKey, and then make a copy to store in a queue. That sounds like the only solution, but it is unpleasant, since I would need to know every single type of event and check for each one on every event received. It sounds like a bad plan.
    - I could allocate the events dynamically and then send around pointers to those events, enqueueing them in the array if wanted. But other than having reference counting, how would I be able to keep track of that memory?

    Do you know any way to implement a very light reference counter that wouldn't interfere with the user? What do you think would be a good solution to this design? I thank everyone in advance for your time. Sincerely, Kaa
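    One light option, sketched below under the assumption that C++11 (or Boost's shared_ptr) is available: a queue of shared_ptr<Event> provides exactly the "light reference counter" asked about, keeps the events polymorphic, and avoids the slicing that Event evt = EventKey(15, true); performs:

        #include <memory>
        #include <queue>

        // Sketch: the count inside shared_ptr frees an event once the
        // last handler's queue has popped it.
        struct Event { virtual ~Event() {} };
        struct EventKey : Event {
            unsigned char keyCode;
            bool isDown;
            EventKey(unsigned char k, bool d) : keyCode(k), isDown(d) {}
        };

        std::queue<std::shared_ptr<Event> > pending;   // one per handler

        void sendEvent(const std::shared_ptr<Event>& e) { pending.push(e); }

        int main() {
            sendEvent(std::make_shared<EventKey>(15, true)); // no slicing,
            return 0;                                        // no manual delete
        }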

    Read the article

  • How to check if new version of Chrome is available?

    - by serg
    I am trying to build an extension that would notify the user when a new version of Chrome is available. I tried to inspect network traffic while Chrome checks for an update: it sends a request to http://74.125.95.113/service/update2?w=3:{long_encoded_string}, a page that returns XML with the information I need:

        <?xml version="1.0" encoding="UTF-8"?>
        <gupdate xmlns="http://www.google.com/update2/response" protocol="2.0" server="prod">
          <daystart elapsed_seconds="31272"/>
          <app appid="{8A69D345-D564-463C-AFF1-A69D9E530F96}" status="ok">
            <updatecheck status="noupdate"/>
            <ping status="ok"/>
          </app>
        </gupdate>

    Besides sending {long_encoded_string} as a URL parameter, it also sends an encoded cookie. Maybe someone familiar with the Chrome build process can shed some light on those encoded strings and how to build them? Or maybe there is another, easier way? (I have a feeling that string encoding is a dead end for me.)

    Read the article

  • App is fast on 3GS but slow on 3G

    - by Anthony Chan
    Hi all, I'm new to programming and have just finished coding an app, which I've tested on both a 3G and a 3GS. On the 3GS it worked just as it does on the simulator. However, when I tried to run it on the 3G, the app became extremely slow. I'm not sure of the reason, and I hope someone could shed some light on it.

    Generally, my app has a couple of view controller classes, one being the title page, one the main page, one the settings, and so on. I used a dissolve to transition from the title page to the main page, but even this simple transition performs jerkily on a 3G! Other parts of the app involve zooming in on images by scaling them up, switching images by push or dissolve upon receiving touch events, saving photos to the photo library, and storing and retrieving some photos in a folder and some data in a SQLite database; each of these is similarly unsmooth. Compared with graphics-heavy or maths-heavy apps, I think mine is pretty simple. I have no clue why the app is so slow and unresponsive that it is barely usable on a 3G. Any help or direction would be much appreciated. Thanks for helping out.

    Read the article

  • Versioning code in two separate projects concurrently with Subversion

    - by Matt1776
    I have a need to create a library of object-oriented PHP code that will see much reuse and aspires to be highly flexible and modular. Because of its independent nature, I would like it to exist as its own SVN project. I would like to be able to create a new web project, save it in SVN as its own separate project, and include the library project code within it. During this process, while coding the web application and making commits, I may need to add a class to the library. I would like to be able to do so and commit those changes back to the library's project code. In light of all this, I could manage the code in two ways:

    1. Commit the changes to the library back to a branch of its original base project code, and make the branch name relevant to the web project I was using it with.
    2. Commit the changes to the library back to the original code, growing it in size regardless of any specific references that might exist.

    I have two questions:

    1. How can I include this library project code in a new project without breaking the Subversion functionality, i.e. while still being able to make changes to each project individually?
    2. How can I keep the code synchronized? If I choose the first method of managing the library code, I may want to grab changes from one branch and pull them in for use in another.
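    The usual Subversion answer to the first question is an svn:externals definition: it checks the library out inside the web project's working copy, while commits made under that subtree still go to the library's own repository. A sketch (paths and URLs are hypothetical):

        # From the root of a web-project working copy:
        svn propset svn:externals "lib http://svn.example.com/phplib/trunk" .
        svn commit -m "Pull in shared PHP library via svn:externals"
        svn update    # fetches phplib/trunk into ./lib

        # Changes made under ./lib commit back to the library's repository:
        cd lib
        svn commit -m "Add class needed by the web app"

    Pointing the external at a per-project branch of the library (option 1 above) keeps each site insulated from library churn.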

    Read the article

  • Java "Pool" of longs or Oracle sequence with reusable values

    - by Anthony Accioly
    Several months ago I implemented a solution to choose unique values from a range between 1 and 65535 (16 bits). This range is used to generate unique Route Target suffixes, which for this customer's massive network (it's a huge ISP) are a very disputed resource, so any freed index needs to become immediately available to the end user. To tackle this requirement I used a BitSet: allocate an RT index with set(), deallocate a suffix with clear(), and find the next available index with nextClearBit(). I handle synchronization and concurrency issues manually. This works pretty well for a small range: the entire index is small (around 10 KB), it is blazing fast, and it can easily be serialized into a Blob field.

    The problem is, some new devices can handle RTs of 32 bits (range 1 to 4294967296), which can't be managed with a BitSet (it would by itself consume around 600 MB, and be limited to int range anyway). Even with this massive range available, the client still wants freed Route Targets to become available again, mainly because the lowest ones (up to 65535), which are compatible with old routers, are being heavily disputed.

    Before I tell the customer that this is impossible and that he will have to settle for my reusable index for lower RTs (up to 65535) and a database sequence for the other ones (which means that when the user frees one of those Route Targets, it will not become available again), would anyone shed some light? Maybe some kind soul has already implemented a high-performance number pool for Java (6, if it matters), or I am missing a killer feature of the Oracle database (11gR2, if it matters)... Wishful thinking. Thank you very much in advance.
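    For reference, a minimal sketch of the BitSet pool described in the first paragraph (names are illustrative, synchronization kept deliberately coarse):

        import java.util.BitSet;

        // set() allocates, clear() frees, nextClearBit() finds the
        // lowest suffix that is currently available.
        class RtSuffixPool {
            private final BitSet used = new BitSet(65536);

            RtSuffixPool() { used.set(0); }    // suffix 0 is never handed out

            synchronized int allocate() {
                int i = used.nextClearBit(1);
                if (i > 65535) throw new IllegalStateException("pool exhausted");
                used.set(i);
                return i;
            }

            synchronized void free(int i) { used.clear(i); }
        }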

    Read the article

  • Namespace Traversal

    - by RikSaunderson
    I am trying to parse the following sample piece of XML:

        <?xml version="1.0" encoding="UTF-8"?>
        <soapenv:Envelope xmlns:soapenv="http://schemas.xmlsoap.org/soap/envelope/"
                          xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
                          xmlns:xsd="http://www.w3.org/2001/XMLSchema">
          <soapenv:Body>
            <d2LogicalModel modelBaseVersion="1.0" xmlns="http://datex2.eu/schema/1_0/1_0"
                            xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
                            xsi:schemaLocation="http://datex2.eu/schema/1_0/1_0 http://datex2.eu/schema/1_0/1_0/DATEXIISchema_1_0_1_0.xsd">
              <payloadPublication xsi:type="PredefinedLocationsPublication" lang="en">
                <predefinedLocationSet id="GUID-NTCC-VariableMessageSignLocations">
                  <predefinedLocation id="VMS30082775">
                    <predefinedLocationName>
                      <value lang="en">VMS M60/9084B</value>
                    </predefinedLocationName>
                  </predefinedLocation>
                </predefinedLocationSet>
              </payloadPublication>
            </d2LogicalModel>
          </soapenv:Body>
        </soapenv:Envelope>

    I specifically need to get at the contents of the top-level predefinedLocation tag. By my calculations, the correct XPath should be:

        /soapenv:Envelope/soapenv:Body/d2LogicalModel/payloadPublication/predefinedLocationSet/predefinedLocation

    I am using the following C# code to parse the XML:

        string filename = "content-sample.xml";
        XmlDocument xmlDoc = new XmlDocument();
        xmlDoc.Load(filename);
        XmlNamespaceManager nsmanager = new XmlNamespaceManager(xmlDoc.NameTable);
        nsmanager.AddNamespace("soapenv", "http://schemas.xmlsoap.org/soap/Envelope");
        string xpath = "/soapenv:Envelope/soapenv:Body/d2LogicalModel/payloadPublication/predefinedLocationSet/predefinedLocation";
        XmlNodeList itemNodes = xmlDoc.SelectNodes(xpath, nsmanager);

    However, this keeps coming up with no results. Can anyone shed any light on this? I feel like I'm banging my head against a brick wall.
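    Two details in the snippet look likely to explain the empty result (a hedged reading, not a confirmed diagnosis): the URI registered for soapenv ("http://schemas.xmlsoap.org/soap/Envelope") does not exactly match the one in the document ("http://schemas.xmlsoap.org/soap/envelope/", lower case with a trailing slash), and the DATEX II elements live in a default namespace ("http://datex2.eu/schema/1_0/1_0") that the XPath never mentions, so unprefixed steps like d2LogicalModel match nothing. A sketch of a corrected version:

        // Sketch: register both namespaces with exact URIs and prefix
        // every step of the XPath.
        XmlNamespaceManager ns = new XmlNamespaceManager(xmlDoc.NameTable);
        ns.AddNamespace("soapenv", "http://schemas.xmlsoap.org/soap/envelope/");
        ns.AddNamespace("d2", "http://datex2.eu/schema/1_0/1_0");
        string xpath = "/soapenv:Envelope/soapenv:Body/d2:d2LogicalModel"
                     + "/d2:payloadPublication/d2:predefinedLocationSet"
                     + "/d2:predefinedLocation";
        XmlNodeList itemNodes = xmlDoc.SelectNodes(xpath, ns);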

    Read the article

  • Manually wiring up unobtrusive jquery validation client-side without Model/Data Annotations, MVC3

    - by cmorganmcp
    After searching and experimenting for 2 days, I relent. What I'd like to do is manually wire up injected HTML with jQuery validation. I'm getting a simple string array back from the server and creating a select with the strings as options. The static fields on the form are validating fine. I've been trying the following:

        var dates = $("<select id='ShiftDate' data-val='true' data-val-required='Please select a date'>");
        dates.append("<option value=''>-Select a Date-</option>");
        for (var i = 0; i < data.length; i++) {
            dates.append("<option value='" + data[i] + "'>" + data[i] + "</option>");
        }
        $("fieldset", addShift).append($("<p>")
            .append("<label for='ShiftDate'>Shift Date</label>\r")
            .append(dates)
            .append("<span class='field-validation-valid' data-valmsg-for='ShiftDate' data-valmsg-replace='true'></span>"));

        // I tried the code below as well, instead of adding the data-val
        // attributes and span manually, with no luck:
        dates.rules("add", { required: true, messages: { required: "Please select a date" } });

        // Thought this would do it when I came across several posts, but it didn't:
        $.validator.unobtrusive.parse(dates.closest("form"));

    I know I could create a view model, decorate it with a required attribute, create a SelectList server-side and send that, but it's more of a "how would I do this" situation now. Can anyone shed light on why the above code wouldn't work as I expect? -chad
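    One detail that often matters here (a hedged suggestion rather than a confirmed diagnosis): $.validator.unobtrusive.parse() skips a form that has already been parsed, because jQuery Validate caches its validator object on the form element. Clearing that cached data before re-parsing usually makes injected fields participate:

        // Sketch: force a re-parse after injecting the <select>.
        var form = dates.closest("form");
        form.removeData("validator")               // jQuery Validate's cache
            .removeData("unobtrusiveValidation");  // unobtrusive adapter's cache
        $.validator.unobtrusive.parse(form);

    Note also that dates.rules("add", ...) can only work after the element has been appended into a form that already has a validator attached.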

    Read the article

  • Questions regarding detouring by modifying the virtual table

    - by Elliott Darfink
    I've been practicing detours using the same approach as Microsoft Detours (replace the first five bytes with a jmp and an address). More recently I've been reading about detouring by modifying the virtual table. I would appreciate it if someone could shed some light on the subject by mentioning a few pros and cons of this method compared to the one previously mentioned!

    I'd also like to ask about patched vtables and objects on the stack. Consider the following situation:

        // Class definition
        struct Foo {
            virtual void Call(void) { std::cout << "FooCall\n"; }
        };

        // If it's GCC, 'this' is passed as the first parameter
        void MyCall(Foo * object) { std::cout << "MyCall\n"; }

        // In some function
        Foo * foo = new Foo;  // Allocated on the heap
        Foo foo2;             // Created on the stack

        // Arguments: void ** vtable, uint offset, void * replacement
        PatchVTable(*reinterpret_cast<void***>(foo), 0, MyCall);

        // Call the methods
        foo->Call();   // Outputs: 'MyCall'
        foo2.Call();   // Outputs: 'FooCall'

    In this case foo->Call() ends up calling MyCall(Foo * object), whilst foo2.Call() calls the original function (i.e. the Foo::Call(void) method). This is because the compiler will resolve virtual calls at compile time if possible (correct me if I'm wrong). Does that mean it does not matter whether you patch the virtual table or not, as long as you use objects on the stack (not heap allocated)?
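    For context, a sketch of what a PatchVTable like the one in the snippet might look like on Windows (an illustrative implementation, not the poster's; the vtable typically sits in a read-only section, hence the VirtualProtect calls):

        #include <windows.h>

        // Overwrite one slot and restore the page protection afterwards.
        void* PatchVTable(void** vtable, unsigned int offset, void* replacement) {
            void* original = vtable[offset];
            DWORD oldProtect;
            VirtualProtect(&vtable[offset], sizeof(void*), PAGE_READWRITE, &oldProtect);
            vtable[offset] = replacement;   // dynamic dispatch through this
                                            // vtable slot now reaches 'replacement'
            VirtualProtect(&vtable[offset], sizeof(void*), oldProtect, &oldProtect);
            return original;                // kept so the hook can be removed
        }

    This also illustrates the stack-object question: the patch rewrites the class's shared vtable, but foo2.Call() never consults it, because the compiler devirtualizes the call when the dynamic type is statically known.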

    Read the article

  • How to perform a SQL-query-to-DataTable operation that can be cancelled

    - by David W
    I tried to make the title as specific as possible. Basically, what I have running inside a BackgroundWorker thread now is some code that looks like:

        SqlConnection conn = new SqlConnection(connstring);
        SqlCommand cmd = new SqlCommand(query, conn);
        conn.Open();
        SqlDataAdapter sda = new SqlDataAdapter(cmd);
        sda.Fill(Results);
        conn.Close();
        sda.Dispose();

    where query is a string representing a large, time-consuming query, and conn is the connection object. My problem now is that I need a stop button. I've come to realize that killing the BackgroundWorker would be worthless, because I still want to keep whatever results are left over after the query is canceled. Plus, it wouldn't be able to check the cancelled state until after the query.

    What I've come up with so far: I've been trying to work out how to handle this efficiently without taking too big a performance hit. My idea was to use a SqlDataReader to read the data from the query piece by piece, so that I had a loop in which to check a flag I could set from the GUI via a button. The problem is, as far as I know, I can't use the Load() method of a DataTable and still be able to cancel the SqlCommand. If I'm wrong, please let me know, because that would make cancelling slightly easier. In light of what I discovered, I came to the realization that I may only be able to cancel the SqlCommand mid-query if I did something like the below (pseudo-code):

        while (reader.Read())
        {
            // check flag status
            // if it is set to 'kill', fire off the kill thread
            // otherwise populate the DataTable with what was read
        }

    However, it seems to me this would be highly ineffective and possibly costly. Is this the only way to kill a SqlCommand in progress that absolutely needs to end up in a DataTable? Any help would be appreciated!
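    A hedged sketch of the reader-loop idea: the per-row overhead is just a flag test, and SqlCommand.Cancel() can additionally be called from the UI thread to abort the query server-side. Rows read before cancellation stay in the DataTable ('cancelRequested' is assumed to be a volatile bool set by the stop button):

        // Sketch: drain rows until finished or cancelled.
        DataTable results = new DataTable();
        using (SqlDataReader reader = cmd.ExecuteReader())
        {
            for (int i = 0; i < reader.FieldCount; i++)
                results.Columns.Add(reader.GetName(i));

            object[] row = new object[reader.FieldCount];
            while (!cancelRequested && reader.Read())
            {
                reader.GetValues(row);      // copy the current row's values
                results.Rows.Add(row);
            }
        }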

    Read the article

  • Can't find momd file: Core Data problems

    - by thekevinscott
    Aw geez! I screwed something up! I'm a Core Data noob, working on my first iOS app. After much Stack Overflowing, I'm using this code:

        NSString *path = [[NSBundle mainBundle] pathForResource:@"CoreData" ofType:@"momd"];
        if (!path) {
            path = [[NSBundle mainBundle] pathForResource:@"CoreData" ofType:@"mom"];
        }
        NSAssert(path != nil, @"Unable to find Resource in main bundle");

    CoreData is the name of my app. I tried to put initial data into the app by finding the path to the SQLite file in my iPhone Simulator, then going in and inserting into that SQLite file. But at some point I moved the SQLite file (thinking it would create a fresh copy), deleted the app from the simulator, and now the SQLite file is gone. I'm not sure if I'm leaving out some part of the process (this was a few hours ago), but the end result is that everything is screwed up.

    How do I regenerate this SQLite / momd file? "Clean" and "Clean All Targets" are grayed out. I'm happy to post the relevant code from my app that would help shed some light on this problem, but there's tons of code relating to Core Data that I don't understand, so I'm not sure what part to post! Any help is greatly appreciated.

    Read the article

  • While loop in foreach loop not looping correctly

    - by tominated
    I'm trying to make a very basic PHP ORM as a school project. I have almost everything working, but I'm trying to map results to an array. Here's a snippet of code to hopefully assist my explanation:

        $results = array();
        foreach ($this->columns as $column) {
            $current = array();
            while ($row = mysql_fetch_array($this->results)) {
                $current[] = $row[$column];
                print_r($current);
                echo '<br><br>';
            }
            $results[$column] = $current;
        }
        print_r($results);
        return mysql_fetch_array($this->results);

    This works, but the while loop only runs for the first column. print_r($results); shows the following:

        Array
        (
            [testID] => Array ( [0] => 1 [1] => 2 )
            [testName] => Array ( )
            [testData] => Array ( )
        )

    Can anybody shed some light? Thanks in advance!
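    The likely culprit (a hedged reading of the snippet): mysql_fetch_array() advances a cursor over the result set, so the first pass through the while loop consumes every row. When the outer foreach moves to the second column, the cursor is already at the end and the inner loop never runs. One sketch of a fix is to fetch each row once and distribute its values across all the columns:

        // Sketch: a single pass over the rows fills every column's array.
        $results = array();
        foreach ($this->columns as $column) {
            $results[$column] = array();
        }
        while ($row = mysql_fetch_array($this->results)) {
            foreach ($this->columns as $column) {
                $results[$column][] = $row[$column];
            }
        }
        print_r($results);

    Alternatively, calling mysql_data_seek($this->results, 0) before each inner while loop rewinds the cursor, at the cost of walking the rows once per column.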

    Read the article

  • Saving ntext data from SQL Server to file directory using asp

    - by April
    A variety of files (PDF, images, etc.) are stored in an ntext field on a MS SQL Server. I am not sure what type is in this field, other than that it shows question marks and undefined characters; I am assuming the contents are binary. The script is supposed to iterate through the rows and extract and save these files to a temp directory. "filename" and "contenttype" are given, and "data" is whatever is in the ntext field. I have tried several solutions:

    1)

        data.SaveToFile "/temp/" & filename, 2

       Error: Object required: '????????????????????'

    2)

        File.WriteAllBytes "/temp/" & filename, data

       Error: Object required: 'File'. I have no idea how to import this, or the Server for MapPath. (Cue: what a noob!)

    3)

        Const adTypeBinary = 1
        Const adSaveCreateOverWrite = 2
        Dim BinaryStream
        Set BinaryStream = CreateObject("ADODB.Stream")
        BinaryStream.Type = adTypeBinary
        BinaryStream.Open
        BinaryStream.Write data
        BinaryStream.SaveToFile "C:\temp\" & filename, adSaveCreateOverWrite

       Error: Arguments are of the wrong type, are out of acceptable range, or are in conflict with one another.

    4)

        Response.ContentType = contenttype
        Response.AddHeader "content-disposition", "attachment;" & filename
        Response.BinaryWrite data
        Response.End

       This works, but the file should be saved to the server instead of popping up a save-as dialog. I am not sure if there is a way to save the response to a file.

    Thanks for shedding light on any of these problems!

    Read the article

  • web service slowdown

    - by user238591
    Hi, I have a web service slowdown. My (web) service is written in gSOAP and managed C++. It's not IIS/Apache hosted, but it speaks XML. My client is in .NET. The service computation time is light (less than 0.1 s to prepare a reply). I expect the service to be smooth and fast, with good availability. I have about 100 clients; a response time of 1 s is mandatory, and each client makes about 1 request per minute. Clients check for web service presence with a TCP open-port test, so to avoid possible congestion, I turned gSOAP's KeepAlive off.

    Until then, everything ran fine: I barely saw connections in TCPView (Sysinternals). A new special synchronisation program now calls the service in a loop. It's a higher load, but everything is processed in under 30 seconds. With Sysinternals TCPView, I see that about a thousand connections are in TIME_WAIT. They slow down the service, and it now takes seconds for the service to reply. Could it be that I need to reset the SoapHttpClientProtocol connection? Has anyone else seen TIME_WAIT ghosts with a web service called in a loop?

    Read the article

  • Calling from C# to a C function which accepts a struct array allocated by the caller

    - by lifey
    I have the following C struct:

        struct XYZ {
            void *a;
            char fn[MAX_FN];
            unsigned long l;
            unsigned long o;
        };

    And I want to call the following function from C#:

        extern "C" int func(int handle, int *numEntries, XYZ *xyzTbl);

    where xyzTbl is an array of XYZ of size numEntries which is allocated by the caller. I have defined the following C# struct:

        [System.Runtime.InteropServices.StructLayoutAttribute(
            System.Runtime.InteropServices.LayoutKind.Sequential,
            CharSet = System.Runtime.InteropServices.CharSet.Ansi)]
        public struct XYZ
        {
            public System.IntPtr rva;
            [System.Runtime.InteropServices.MarshalAsAttribute(
                System.Runtime.InteropServices.UnmanagedType.ByValTStr, SizeConst = 128)]
            public string fn;
            public uint l;
            public uint o;
        }

    and a method:

        [System.Runtime.InteropServices.DllImport(@"xyzdll.dll", CallingConvention = CallingConvention.Cdecl)]
        public static extern Int32 func(Int32 handle, ref Int32 numEntries,
            [MarshalAs(UnmanagedType.LPArray)] XYZ[] arr);

    Then I try to call the function:

        XYZ[] xyz = new XYZ[numEntries];
        for (...)
            xyz[i] = new XYZ();
        func(handle, ref numEntries, xyz);

    Of course it does not work. Can someone shed light on what I am doing wrong?
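    A few hedged guesses from the declarations alone: the array parameter needs [In, Out] so the marshaller copies the native side's writes back into the managed array; MAX_FN on the C side must actually be 128 to match SizeConst; and on a platform where unsigned long is 64 bits, the two uint fields would misalign the struct. A sketch of the adjusted import (assumes using System.Runtime.InteropServices;):

        // Sketch: marshal results back into the caller-allocated array;
        // SizeParamIndex = 1 ties the array length to numEntries.
        [DllImport(@"xyzdll.dll", CallingConvention = CallingConvention.Cdecl)]
        public static extern Int32 func(
            Int32 handle,
            ref Int32 numEntries,
            [In, Out, MarshalAs(UnmanagedType.LPArray, SizeParamIndex = 1)] XYZ[] arr);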

    Read the article

  • Why is search functionality not working on this page?

    - by DaveDev
    We deliver micro-site content for our client. Our content is injected into a wrapper that is supplied by another developer. To deliver our content, we host the wrapper as well as the content; the user can access this at http://fundcentre.newireland.ie/ (try a search for 'bloxham'). For the other content, which is not ours, the other developer hosts a similar (though slightly different) wrapper and delivers the content; the user accesses that here: http://www.newireland.ie/ (try a search for 'bloxham').

    The wrapper contains a search box, which does not work for us but works for the other developer. I took a look at the network traffic with Firebug, and it appears that when I do the search from the wrapper that we're hosting, I get a "407 Proxy Access Denied" error. My guess is their proxy has a problem with the fact that the search is being conducted from a page hosted outside its scope. It was also suggested that there were JavaScript errors on the page preventing the search from executing, but I can't see any; besides, I don't think I'd get as far as the proxy error if that were the case. I don't really understand this stuff too well, though, so could somebody with a bit more experience please take a look and maybe shed some light on this for me? Thanks.

    Read the article

  • Apache mod_rewrite - forward domain root to subdirectory

    - by DuFace
    I have what I originally assumed to be a simple problem. I am using shared hosting for my website (so I don't have access to the Apache configuration) and have only been given a single folder to store all my content in. This is all well and good, but it means that all my subdomains must have their virtual document roots inside public_html, so they effectively become folders on my main domain. What I'd like to do is organise my public_html something like this:

        public_html/
            www/
                index.php
                ...
            sub1/
                index.php
                ...
            some_library/
                ...

    This way, all my web content is still in public_html, but only a small fraction of it will be served to the client. I can easily achieve this for all the subdomains; it's the primary domain that I'm having issues with. I created a .htaccess file in public_html with the following:

        Options +SymLinksIfOwnerMatch
        # I'm not allowed to use FollowSymLinks
        RewriteEngine on
        RewriteBase /
        RewriteCond %{REQUEST_URI} !^/www [NC]
        RewriteRule ^(.*)$ /www/$1 [L]

    This works fairly well, but for some strange reason www.example.com/stuff is translated into a request for www.example.com/www/stuff, and hence a 404 error is given. It was my understanding that unless an 'R' flag was specified, mod_rewrite was purely internal, so I can't understand why the request is generated as that implies (to me at least) redirection. I assumed this would be a trivial problem to solve, as all I actually want to do is forward all requests for the root of www.example.com to a subdirectory, but I've spent hours searching for answers and none are quite correct. I find it difficult to believe I'm the only person to have this issue. I apologise if this question has been answered on here before; I did search and trawl but couldn't find an appropriate answer. Please could someone shed some light on this?
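    One hedged guess, given that the subdomain folders live under the same public_html: a per-directory .htaccess also applies to requests arriving via the subdomains, since their files sit beneath it on disk. A common defensive variant pins the rule to the primary host and anchors the exclusion (the domain name is a placeholder):

        Options +SymLinksIfOwnerMatch
        RewriteEngine on
        RewriteBase /

        # Only rewrite requests addressed to the primary domain...
        RewriteCond %{HTTP_HOST} ^(www\.)?example\.com$ [NC]
        # ...that are not already inside /www/...
        RewriteCond %{REQUEST_URI} !^/www/ [NC]
        # ...and route everything else into the www/ subdirectory.
        RewriteRule ^(.*)$ /www/$1 [L]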

    Read the article

  • .NET Remoting: Getting underlying socket?

    - by Alan
    Hi, I'm writing a light remoting app to assist in debugging a problem with remoting communication. This app mimics much of what a larger application does: it periodically sends a heartbeat to a peer application, and periodically verifies that a heartbeat has been received within some time threshold. What we're seeing is that in our big application, the heartbeats seem to get dropped. One peer will go for long periods of time without seeing heartbeats from another peer, until the peer that is "dead" is restarted. The big application is responsive in all other ways. We believe it has something to do with the network setup. We were able to reproduce the problem locally, and fixed it by making some configuration changes to our test environment.

    To help our customer diagnose the issue, the mini remoting app needs to log as much information as possible. So, is there a way to get the underlying socket for the remoting connection? I'm aware that I could write a custom sink for this, but I'd like to keep the actual remoting process as close as possible to what is implemented in the big app. Also, as an aside: any ideas why the big app might be "dropping" heartbeats?

    Read the article

  • does entity framework or the mysql provider swallow timeout exceptions on enumeration of results?!

    - by Freddy Rios
    I'm trying to make sense of a situation I have using Entity Framework on .NET 3.5 SP1 + MySQL 6.1.2.0 as the provider. It involves the following code:

        Response.Write("Products: " + plist.Count() + "<br />");
        var total = 0;
        foreach (var p in plist)
        {
            // ... some actions
            total++;
            // ... other actions
        }
        Response.Write("Total Products Checked: " + total + "<br />");

    Basically, the total products count varies on each run, and it doesn't match the full total in plist. It varies widely, from roughly a fifth to half. There isn't any control-flow code inside the foreach, i.e. no break, continue, try/catch, or conditions around total++, nothing that could affect the count. As confirmation, there are other totals captured inside the loop related to the actions, and those match the lower and higher total runs.

    I can't find any reason for the above, other than something in Entity Framework or the MySQL provider causing it to end the foreach early when retrieving an item. The body of the foreach can vary a good deal in execution time, as the actions involve file and network access. My best guess at this point is that when an iteration takes beyond a certain threshold, some kind of timeout occurs in the underlying framework/provider, and instead of raising an exception it silently reports no more items for enumeration. Can anyone shed some light on the above scenario and/or confirm whether the Entity Framework/MySQL provider has this behavior?
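    One hedged way to test the theory above: materialize the query before doing the slow per-item work, so the provider's data reader is drained quickly and then closed. If the count becomes stable, the mid-enumeration-timeout explanation gains weight:

        // Sketch: snapshot the results, then iterate in memory.
        var snapshot = plist.ToList();   // drains the provider in one burst
        Response.Write("Products: " + snapshot.Count + "<br />");
        var total = 0;
        foreach (var p in snapshot)
        {
            // ... some actions (file/network work no longer holds the reader open)
            total++;
            // ... other actions
        }
        Response.Write("Total Products Checked: " + total + "<br />");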

    Read the article

  • Hibernate Exception Fixed By Alt+Tab

    - by Lee Theobald
    Hi all, I've got a very curious problem in Hibernate that I would like some opinions on. In my code, if I do the following:

    1. Go to page A
    2. Click a link on page A to be taken to page B
    3. Click on a data item on page B
    4. Exception thrown

    I get an error telling me:

        failed to lazily initialize a collection of role: XYZ, no session or session was closed

    Fair enough. But when I do the same thing but add an Alt+Tab in the middle, everything is fine. E.g.:

    1. Go to page A
    2. Click a link on page A to be taken to page B
    3. Hit Alt+Tab to switch to another application
    4. Hit Alt+Tab to switch back to the web browser
    5. Click on a data item on page B
    6. Everything is fine

    I'm a little confused as to how switching focus away from my application makes it act as I want it to. Does anyone have any light to shine on the subject? I don't think it's a locking issue, as even if I perform the second set of steps more quickly than the first, there is still no error. It's a Seam application using Hibernate 3.3.2.GA and 3.4.0.GA.

    Read the article
