Search Results

Search found 3884 results on 156 pages for 'silver light'.


  • What could cause these Apache crash errors?

    - by jacobanderssen
    Hello guys. I had a server crash several days ago. I use Cacti to keep stats: at the time of the crash there was a huge spike from load 1 to load 200, with over 800 processes in the run queue (from an average of 300). Checking /var/log/httpd I notice this:
      *** glibc detected *** /usr/sbin/httpd: double free or corruption (out): 0x00002b8f3142c2f0 ***
    Followed by a lot of these:
      [Sat Mar 13 19:20:20 2010] [warn] child process 3090 still did not exit, sending a SIGTERM
      [Sat Mar 13 19:20:20 2010] [warn] child process 3091 still did not exit, sending a SIGTERM
    Followed by this:
      ======= Backtrace: =========
      /lib64/libc.so.6[0x2b8f1463c2ef]
      /lib64/libc.so.6(cfree+0x4b)[0x2b8f1463c73b]
      /usr/lib64/libapr-1.so.0(apr_pool_destroy+0x131)[0x2b8f13f98821]
      /usr/sbin/httpd[0x2b8f126df47e]
      /usr/sbin/httpd[0x2b8f126df4ab]
      /lib64/libpthread.so.0[0x2b8f141b87c0]
      /etc/httpd/modules/mod_file_cache.so[0x2b8f1cdf00fb]
      ======= Memory map: ========
    And finally a lot of these:
      [Sat Mar 13 19:20:27 2010] [error] could not make child process 733 exit, attempting to continue anyway
      [Sat Mar 13 19:20:27 2010] [error] could not make child process 24560 exit, attempting to continue anyway
      [Sat Mar 13 19:20:27 2010] [error] could not make child process 31384 exit, attempting to continue anyway
    I am also noticing one or two lines like this:
      [Mon Mar 15 01:17:26 2010] [notice] child pid 20765 exit signal Segmentation fault (11)
    Please help me shed some light on this. Thanks!

    Read the article

  • Need to exclude results in a MySQL query where two table fields are not of certain values (brain fart)

    - by DondeEstaMiCulo
    I don't know if I'm just burnt out and can't think, or what... but I can't seem to make this work right. (We're using MySQL 5.1.) I have two tables which have some transactional data stored in them. There will be many records per user_id in each table, and table1 and table2 have a one-to-one relationship with each other. I want to pull records from both tables, but I want to exclude records which have certain values in both tables. I don't care if neither has these values, or if just one does, but both tables should not have both values. (Does this make any sense? lol) For example:
      SELECT t1.id, t1.type, t2.name
      FROM table1 t1
      INNER JOIN table2 t2 ON t1.xid = t2.id
      WHERE t1.user_id = 100
      AND (t1.type != 'FOO' AND t2.name != 'BAR')
    So t1.type is type ENUM with about 10 different options, and t2.name is also type ENUM with 2 options. My expected results would look a little like:
      1, FOO, YUM
      2, BOO, BAR
      3, BOO, YUM
    But instead, all I'm getting is:
      3, BOO, YUM
    because it's filtering out all records which have 'FOO' as the type or 'BAR' as the name. I keep waiting for that D'oh! moment where it hits me and I feel like an idiot for not realizing what I'm doing wrong. But it hasn't come. And I still feel like an idiot, lol. I appreciate any light any of you can shed on this! Many thanks in advance for the help!
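
    For reference, the gap described here is the difference between "not (FOO and BAR together)" and "not FOO and not BAR" (De Morgan's laws). A tiny C# sketch of the row filter, using hypothetical in-memory rows purely to make the two predicates visible, shows why the AND form removes rows 1 and 2:
      using System;
      using System.Linq;

      class DeMorganDemo
      {
          static void Main()
          {
              var rows = new[]
              {
                  new { Id = 1, Type = "FOO", Name = "YUM" },
                  new { Id = 2, Type = "BOO", Name = "BAR" },
                  new { Id = 3, Type = "BOO", Name = "YUM" },
              };

              // What the query above expresses: keep rows where Type != FOO AND Name != BAR
              var tooStrict = rows.Where(r => r.Type != "FOO" && r.Name != "BAR");

              // What the post asks for: exclude only rows where BOTH values appear together
              var intended  = rows.Where(r => !(r.Type == "FOO" && r.Name == "BAR"));

              Console.WriteLine(string.Join(", ", tooStrict.Select(r => r.Id.ToString()).ToArray())); // 3
              Console.WriteLine(string.Join(", ", intended.Select(r => r.Id.ToString()).ToArray()));  // 1, 2, 3
          }
      }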

    Read the article

  • With C# 3.0, how to write interface-based code with generic collections?

    - by Deecay
    I want to write code that is decoupled and clean, and I know that by programming to an interface instead of the implementation, my code will be more flexible and extensible. So, instead of writing methods like:
      bool IsProductAvailable(ProductTypeA product);
    I write methods like:
      bool IsProductAvailable(IProduct product);
    As long as my products implement IProduct:
      class ProductTypeA : IProduct
    I should be OK. All is well until I start using generic collections. Since C# 3.0 doesn't support covariance and contravariance, even though both ProductTypeA and ProductTypeB implement IProduct, you cannot pass a List<ProductTypeA> where a List<IProduct> is expected. This is pretty troublesome, because a lot of times I want to write something like:
      bool AreProductsAvailable(List<IProduct> products);
    so that I can check product availability by writing:
      List<ProductTypeA> productsArrived = GetDataFromDatabase();
      bool result = AreProductsAvailable(productsArrived);
    And I want to write just one AreProductsAvailable() method that works with all IProduct collections. I know that C# 4.0 is going to support covariance and contravariance, but I also realize that other libraries seem to have the problem solved. For instance, I was trying out the ILOG Gantt chart control and found that it has a lot of collection interfaces that look like this:
      IActivityCollection
      ILinkCollection
    So it seems like their approach is wrapping the generic collection with an interface. So instead of "bool AreProductsAvailable(List<IProduct> products);", I can do:
      bool AreProductsAvailable(IProductCollection products);
    and then write some code so that IProductCollection accepts whatever generic collection of IProduct, be it a List<ProductTypeA> or a List<ProductTypeB>. However, I don't know how to write an IProductCollection interface that does that "magic". :-< (ashamed) .... Could someone shed some light? This has been bugging me for so long, and I so wanted to do the "right thing". Well, thanks!
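
    One way this is commonly handled in C# 3.0, sketched below under the assumption that only read access is needed: either make the method itself generic with a constraint (so any List<T> with T : IProduct can be passed), or expose a small read-only wrapper interface in the ILOG style. Names such as IProductCollection and ProductCollection<T> here are illustrative, not from any specific library:
      using System.Collections;
      using System.Collections.Generic;

      public interface IProduct { bool InStock { get; } }

      // Option 1: a generic method with a constraint works in C# 3.0 without any covariance.
      public static class Availability
      {
          public static bool AreProductsAvailable<T>(IEnumerable<T> products) where T : IProduct
          {
              foreach (T p in products)
                  if (!p.InStock) return false;
              return true;
          }
      }

      // Option 2: an ILOG-style wrapper interface plus one generic adapter class.
      public interface IProductCollection : IEnumerable<IProduct> { int Count { get; } }

      public class ProductCollection<T> : IProductCollection where T : IProduct
      {
          private readonly IList<T> inner;
          public ProductCollection(IList<T> inner) { this.inner = inner; }
          public int Count { get { return inner.Count; } }
          public IEnumerator<IProduct> GetEnumerator()
          {
              foreach (T p in inner) yield return p;   // re-expose each item as IProduct
          }
          IEnumerator IEnumerable.GetEnumerator() { return GetEnumerator(); }
      }
    With that, AreProductsAvailable(new ProductCollection<ProductTypeA>(productsArrived)) compiles against any concrete product type, at the cost of one small wrapper allocation.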

    Read the article

  • lighten background color on button click per binding

    - by one of two
    I want to lighten a button's background on click, so I did the following:
      <converter:ColorLightConverter x:Key="colorLightConverter" />
      ...
      <Style BasedOn="{StaticResource default}" TargetType="{x:Type controls:Button}">
        <Setter Property="Template">
          <Setter.Value>
            <ControlTemplate TargetType="{x:Type controls:Button}">
              <ControlTemplate.Triggers>
                <Trigger Property="IsPressed" Value="True">
                  <Setter Property="Background">
                    <Setter.Value>
                      <SolidColorBrush Color="{Binding Path=Background.Color, RelativeSource={RelativeSource TemplatedParent}, Converter={StaticResource colorLightConverter}}" />
                    </Setter.Value>
                  </Setter>
                </Trigger>
              </ControlTemplate.Triggers>
              <Border Background="{TemplateBinding Background}" BorderBrush="Transparent" BorderThickness="0">
                ...
              </Border>
            </ControlTemplate>
          </Setter.Value>
        </Setter>
      </Style>
    The converter:
      class ColorLightConverter : IValueConverter
      {
          public object Convert(object value, Type targetType, object parameter, System.Globalization.CultureInfo culture)
          {
              Color color = (Color)value;
              System.Drawing.Color lightColor = ControlPaint.Light(System.Drawing.Color.FromArgb(color.A, color.R, color.G, color.B));
              return Color.FromArgb(lightColor.A, lightColor.R, lightColor.G, lightColor.B);
          }

          public object ConvertBack(object value, Type targetType, object parameter, System.Globalization.CultureInfo culture)
          {
              throw new NotImplementedException();
          }
      }
    But the converter isn't called when I click the button. I think there is something wrong with the binding, but I can't see the error... Can you help me? Maybe I'm completely wrong. What I basically want to do: when the button is clicked, take its current background color and lighten it. Nothing more...

    Read the article

  • Treating differential operator as algebraic entity

    - by chappar
    I know that this question is off-topic and doesn't belong here, but I didn't know where else to ask. So here is the question. I was reading "e: The Story of a Number" by Eli Maor, where he treats the differential operator just like any algebraic entity. For example, if we have a differential equation like y'' + 5y' - 6y = 0, it can be treated as (D^2 + 5D - 6)y = 0. So either y = 0 (the trivial solution) or (D^2 + 5D - 6) = 0. Factoring the operator we get (D - 1)(D + 6) = 0, with solutions D = 1 and D = -6. Since D does not have any meaning on its own, multiplying by y on both sides we get Dy = y and Dy = -6y, for which the solutions are Ae^x and Be^(-6x). Combining these two solutions we get Ae^x + Be^(-6x). Now my doubt is that this approach breaks when we have an equation like D^2 y = 0, which means y = 0 (again trivial) or D^2 = 0, which means D = 0. Then Dy = y*0 = 0, which means y = C (a constant). The actual answer should be Cx. I know that it is stupid to treat D^2 = 0 as D = 0, but it led me to doubt the entire process of treating a differential equation as an algebraic equation. Can someone throw light on this? Or point me to another site where I might get an answer?
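
    For context, and not from the book excerpt above: the operator factoring still works for a repeated factor as long as it is applied one step at a time rather than collapsed to "D = 0". In LaTeX:
      D^2 y = 0 \;\Longleftrightarrow\; D(Dy) = 0 .
      \text{Let } z = Dy:\quad Dz = 0 \;\Rightarrow\; z = C_2, \qquad
      Dy = C_2 \;\Rightarrow\; y = C_2 x + C_1 .
    More generally, a repeated root r of the characteristic polynomial contributes both e^{rx} and x e^{rx}; for r = 0 these are exactly the constant and the Cx term that the shortcut "D = 0 hence y = C" loses.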

    Read the article

  • correct way of initializing variables

    - by OVERTONE
    OK, this is just a shot in the dark, but it may be the cause of most of the errors I've gotten. When you're initializing something, let's say a small Swing program, would it go like this?
      // variables here
      private JList contactList;
      String[] contactArray;
      ArrayList<String> contactArrayList;
      ResultSet namesList;

      // constructor here
      public whatever() {
          GridLayout aGrid = new GridLayout(2, 2, 10, 10);
          contact1 = new String();
          contact2 = new String();
          contact3 = new String();
          contactArrayList = new ArrayList<String>();
          // is something supposed to go in the () of this JList?
          contactList = new JList();
          contactArray = new String[5];
          from1 = new JLabel("From: " + contactArray[1]);
          gridlayout.add(components); // there are too many components to write onto SO
      }

      // methods here
      public void fillContactsGui() {
          createConnection();
          ArrayList<String> contactsArrayList = new ArrayList<String>();
          while (namesList.next()) {
              contactArrayList.add(namesList.getString(1));
              contactArray[1] = namesList.getString(1);
          }
      }
    I know this is probably a huge beginner question, but this is the code I've gotten used to. I'm initializing things three and four times without meaning to, because I'm not sure where they go. Can anyone shed some light on this? P.S. Sorry for the messy sample code; I did my best.

    Read the article

  • algorithm to generate the maximum number of 'A' characters using keystrokes 'A', CTRL + 'A', CTRL + 'C' and CTRL + 'V'

    - by munda
    This is an interview question from Google. I am not able to solve it by myself; can somebody throw some light? The question goes like this: write a program to print the sequence of keystrokes that generates the maximum number of 'A' characters. You are allowed to use only 4 keys: 'A', CTRL + 'A', CTRL + 'C' and CTRL + 'V'. Only N keystrokes are allowed. All CTRL+ combinations count as one keystroke, so CTRL+A is one keystroke. For example: A, CTRL+A, CTRL+C, CTRL+V generates two As in 4 keystrokes.
    Edit: CTRL + A is select all, CTRL + C is copy, CTRL + V is paste.
    I did some mathematics. For any N, using x 'A' keystrokes, one CTRL+A, one CTRL+C and y CTRL+V keystrokes, we can generate at most ((N-1)/2)^2 As. But past some N it is better to spend keystrokes on CTRL+A, CTRL+C, CTRL+V rounds, as each such round doubles the number of As already on screen.
    Edit 2: CTRL+A, CTRL+C and CTRL+V will not overwrite the existing selection; the paste appends the copied text to what is already there.
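
    A dynamic-programming sketch in C#, written against the post's Edit 2 append semantics (a paste adds the whole clipboard to what is on screen): best[n] is the most As reachable in n keystrokes, reached either by typing one more A or by ending with CTRL+A, CTRL+C and a run of pastes. The value of N below is just an assumed example.
      using System;

      class MaxAKeystrokes
      {
          static void Main()
          {
              const int N = 15;                      // total keystrokes allowed (assumed value)
              long[] best = new long[N + 1];
              for (int n = 1; n <= N; n++)
              {
                  best[n] = best[n - 1] + 1;         // option 1: just press 'A'
                  // option 2: press CTRL+A at keystroke j+1, CTRL+C at j+2,
                  // then n-j-2 pastes, each appending best[j] As
                  for (int j = 1; j <= n - 3; j++)
                      best[n] = Math.Max(best[n], best[j] * (n - j - 1));
              }
              Console.WriteLine("Max As with {0} keystrokes: {1}", N, best[N]);
          }
      }
    Recovering the actual key sequence the question asks for is then a matter of remembering which of the two options won at each n and replaying those choices.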

    Read the article

  • difference between HttpContext.Current.User and Thread.CurrentPrincipal, and when to use them?

    - by yamspog
    I have just recently run into an issue running an ASP.NET web app under Visual Studio 2008. I get the error "type is not resolved for member...customUserPrincipal". Tracking down various discussion groups, it seems that there is an issue with Visual Studio's built-in web server when you assign a custom principal to Thread.CurrentPrincipal. In my code, I now use:
      HttpContext.Current.User = myCustomPrincipal;
      // Thread.CurrentPrincipal = myCustomPrincipal;
    I'm glad that I got the error out of the way, but it begs the question: what is the difference between these two ways of setting a principal? There are other Stack Overflow questions related to the differences, but they don't get into the details of the two approaches. I did find one tantalizing post that had the following grandiose comment, but no explanation to back up its assertions:
      Use HttpContext.Current.User for all web (ASPX/ASMX) applications.
      Use Thread.CurrentPrincipal for all other applications like WinForms, console and Windows service applications.
    Can any of you security/.NET gurus shed some light on this subject?
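
    For reference, a common ASP.NET pattern, shown here as a hedged sketch rather than anything taken from the post, is to set both in Application_PostAuthenticateRequest so that code reading either one sees the same identity; GenericPrincipal and GenericIdentity stand in for the custom principal:
      using System.Security.Principal;
      using System.Threading;
      using System.Web;

      public class Global : HttpApplication
      {
          protected void Application_PostAuthenticateRequest(object sender, System.EventArgs e)
          {
              // Hypothetical principal; a real app would build its customUserPrincipal here.
              IPrincipal principal = new GenericPrincipal(new GenericIdentity("jdoe"), new[] { "User" });

              HttpContext.Current.User = principal;   // what ASP.NET pages and pipeline modules consult
              Thread.CurrentPrincipal = principal;    // what role checks outside the HttpContext consult
          }
      }
    Inside a page or handler, HttpContext.Current.User is the one ASP.NET itself keeps in sync; Thread.CurrentPrincipal matters for code (class libraries, PrincipalPermission checks, non-web hosts) that never sees an HttpContext.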

    Read the article

  • how to extract information from JSON using jQuery

    - by Richard Reddy
    Hi, I have a JSON response that is formatted from my C# WebMethod using the JavaScriptSerializer class. I currently get the following JSON back from my query:
      {"d":"[{\"Lat\":\"51.85036\",\"Long\":\"-8.48901\"},{\"Lat\":\"51.89857\",\"Long\":\"-8.47229\"}]"}
    I'm having an issue with my code below that I'm hoping someone might be able to shed some light on. I can't seem to get at the information in the values returned to me. Ideally I would like to be able to read the Lat and Long values for each row returned. Below is what I currently have:
      $.ajax({
          type: "POST",
          url: "page.aspx/LoadWayPoints",
          data: "{'args': '" + $('numJourneys').val() + "'}",
          contentType: "application/json; charset=utf-8",
          dataType: "json",
          success: function (msg) {
              if (msg.d != '[]') {
                  var lat = "";
                  var long = "";
                  $.each(msg.d, function () {
                      lat = this['MapLatPosition'];
                      long = this['MapLongPosition'];
                  });
                  alert('lat =' + lat + ', long =' + long);
              }
          }
      });
    I think the issue is something to do with how the JSON is formatted, but I might be incorrect. Any help would be great. Thanks, Rich
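
    One thing visible in the response itself: d is a string (note the escaped quotes), which usually means the waypoints were serialized to JSON inside the WebMethod and then serialized again by ASP.NET. A hedged C# sketch of a server side that avoids the double encoding, with a hypothetical WayPoint type and page class standing in for whatever the real method builds:
      using System.Collections.Generic;
      using System.Web.Services;

      public class WayPoint
      {
          public string Lat { get; set; }
          public string Long { get; set; }
      }

      public partial class WayPointsPage : System.Web.UI.Page
      {
          [WebMethod]
          public static List<WayPoint> LoadWayPoints(string args)
          {
              // Return the objects themselves; ASP.NET's JSON serializer turns the list
              // into msg.d as a real array, so the client can read Lat/Long directly.
              return new List<WayPoint>
              {
                  new WayPoint { Lat = "51.85036", Long = "-8.48901" },
                  new WayPoint { Lat = "51.89857", Long = "-8.47229" }
              };
          }
      }
    If the server side has to stay as it is, the other direction is to parse the string on the client before iterating it; either way, the property names the loop reads need to match the names the server actually emits (Lat/Long in the sample response, not MapLatPosition/MapLongPosition).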

    Read the article

  • Java Executor: Small tasks or big ones?

    - by Arash Shahkar
    Consider one big task which could be broken into hundreds of small, independently runnable tasks. To be more specific, each small task sends a light network request and decides what to do with the answer received from the server. These small tasks are not expected to take longer than a second, and they involve only a few servers in total. I have in mind two approaches to implement this using the Executor framework, and I want to know which one is better and why:
    1. Create a few, say 5 to 10, tasks, each doing a bunch of sends and receives.
    2. Create a single task (Callable or Runnable) for each send and receive, and schedule all of them (hundreds) to be run by the executor.
    I'm sorry if my question shows that I'm too lazy to test these and see for myself which is better (at least performance-wise). My question, while looking for an answer to this specific case, has a more general aspect: in situations like these, when you want an executor to do all the scheduling and other work, is it better to create lots of small tasks or to group them into a smaller number of bigger tasks?

    Read the article

  • Recursive templates: compilation error under g++

    - by Johannes
    Hi, I am trying to use templates recursively to define (at compile time) a d-tuple of doubles. The code below compiles fine with Visual Studio 2010, but g++ fails and complains that it "cannot call constructor 'point<1>::point' directly". Could anyone please shed some light on what is going on here? Many thanks, Jo
      #include <iostream>
      #include <utility>
      using namespace std;

      template <const int N>
      class point {
      private:
          pair<double, point<N-1> > coordPointPair;
      public:
          point() {
              coordPointPair.first = 0;
              coordPointPair.second.point<N-1>::point();
          }
      };

      template<>
      class point<1> {
      private:
          double coord;
      public:
          point() {
              coord = 0;
          }
      };

      int main() {
          point<5> myPoint;
          return 0;
      }

    Read the article

  • Communication between lexer and parser

    - by FredOverflow
    Every time I write a simple lexer and parser, I stumble upon the same question: how should the lexer and the parser communicate? I see four different approaches:
    1. The lexer eagerly converts the entire input string into a vector of tokens. Once this is done, the vector is fed to the parser, which converts it into a tree. This is by far the simplest solution to implement, but since all tokens are stored in memory, it wastes a lot of space.
    2. Each time the lexer finds a token, it invokes a function on the parser, passing the current token. In my experience, this only works if the parser can naturally be implemented as a state machine, like LALR parsers. By contrast, I don't think it would work at all for recursive descent parsers.
    3. Each time the parser needs a token, it asks the lexer for the next one. This is very easy to implement in C# due to the yield keyword, but quite hard in C++, which doesn't have it.
    4. The lexer and parser communicate through an asynchronous queue. This is commonly known under the title "producer/consumer", and it should simplify the communication between the lexer and the parser a lot. Does it also outperform the other solutions on multicores? Or is lexing too trivial?
    Is my analysis sound? Are there other approaches I haven't thought of? What is used in real-world compilers? It would be really cool if compiler writers like Eric Lippert could shed some light on this issue.
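
    As a concrete illustration of the third approach, here is a minimal C# sketch (a toy integers-and-plus grammar, nothing more): the lexer is a lazy iterator and the parser pulls one token at a time, so nothing beyond the current token is buffered.
      using System;
      using System.Collections.Generic;

      enum TokenKind { Number, Plus, End }

      struct Token
      {
          public TokenKind Kind;
          public int Value;
      }

      static class Lexer
      {
          // Tokens are produced on demand; only the current scan position is kept.
          public static IEnumerable<Token> Tokenize(string text)
          {
              int i = 0;
              while (i < text.Length)
              {
                  char c = text[i];
                  if (char.IsWhiteSpace(c)) { i++; }
                  else if (c == '+') { i++; yield return new Token { Kind = TokenKind.Plus }; }
                  else if (char.IsDigit(c))
                  {
                      int start = i;
                      while (i < text.Length && char.IsDigit(text[i])) i++;
                      yield return new Token { Kind = TokenKind.Number, Value = int.Parse(text.Substring(start, i - start)) };
                  }
                  else throw new Exception("unexpected character " + c);
              }
              yield return new Token { Kind = TokenKind.End };
          }
      }

      class Parser
      {
          private readonly IEnumerator<Token> tokens;
          public Parser(IEnumerable<Token> source) { tokens = source.GetEnumerator(); tokens.MoveNext(); }
          private Token Current { get { return tokens.Current; } }
          private void Advance() { tokens.MoveNext(); }

          // expr := number ('+' number)*
          public int ParseExpression()
          {
              int result = Current.Value; Advance();
              while (Current.Kind == TokenKind.Plus)
              {
                  Advance();
                  result += Current.Value; Advance();
              }
              return result;
          }
      }

      static class Demo
      {
          static void Main()
          {
              Console.WriteLine(new Parser(Lexer.Tokenize("1 + 22 + 333")).ParseExpression()); // 356
          }
      }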

    Read the article

  • Namespace Traversal

    - by RikSaunderson
    I am trying to parse the following sample piece of XML:
      <?xml version="1.0" encoding="UTF-8"?>
      <soapenv:Envelope xmlns:soapenv="http://schemas.xmlsoap.org/soap/envelope/" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xmlns:xsd="http://www.w3.org/2001/XMLSchema">
        <soapenv:Body>
          <d2LogicalModel modelBaseVersion="1.0" xmlns="http://datex2.eu/schema/1_0/1_0" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xsi:schemaLocation="http://datex2.eu/schema/1_0/1_0 http://datex2.eu/schema/1_0/1_0/DATEXIISchema_1_0_1_0.xsd">
            <payloadPublication xsi:type="PredefinedLocationsPublication" lang="en">
              <predefinedLocationSet id="GUID-NTCC-VariableMessageSignLocations">
                <predefinedLocation id="VMS30082775">
                  <predefinedLocationName>
                    <value lang="en">VMS M60/9084B</value>
                  </predefinedLocationName>
                </predefinedLocation>
              </predefinedLocationSet>
            </payloadPublication>
          </d2LogicalModel>
        </soapenv:Body>
      </soapenv:Envelope>
    I specifically need to get at the contents of the top-level predefinedLocation tag. By my calculations, the correct XPath should be:
      /soapenv:Envelope/soapenv:Body/d2LogicalModel/payloadPublication/predefinedLocationSet/predefinedLocation
    I am using the following C# code to parse the XML:
      string filename = "content-sample.xml";
      XmlDocument xmlDoc = new XmlDocument();
      xmlDoc.Load(filename);
      XmlNamespaceManager nsmanager = new XmlNamespaceManager(xmlDoc.NameTable);
      nsmanager.AddNamespace("soapenv", "http://schemas.xmlsoap.org/soap/Envelope");
      string xpath = "/soapenv:Envelope/soapenv:Body/d2LogicalModel/payloadPublication/predefinedLocationSet/predefinedLocation";
      XmlNodeList itemNodes = xmlDoc.SelectNodes(xpath, nsmanager);
    However, this keeps coming up with no results. Can anyone shed any light on this? I feel like I'm banging my head against a brick wall.
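
    Two details in the snippet tend to produce exactly this empty result: the soapenv URI registered with the namespace manager has to match the document literally (the sample declares http://schemas.xmlsoap.org/soap/envelope/, with the trailing slash and lowercase e), and d2LogicalModel and everything below it live in the datex2 default namespace, so those steps need their own prefix in the XPath. A hedged C# sketch of the adjusted lookup, assuming the same System.Xml usings as the code above:
      XmlDocument xmlDoc = new XmlDocument();
      xmlDoc.Load("content-sample.xml");

      XmlNamespaceManager nsmanager = new XmlNamespaceManager(xmlDoc.NameTable);
      nsmanager.AddNamespace("soapenv", "http://schemas.xmlsoap.org/soap/envelope/"); // exact URI from the document
      nsmanager.AddNamespace("d2", "http://datex2.eu/schema/1_0/1_0");                // default namespace of d2LogicalModel

      string xpath = "/soapenv:Envelope/soapenv:Body"
                   + "/d2:d2LogicalModel/d2:payloadPublication"
                   + "/d2:predefinedLocationSet/d2:predefinedLocation";

      XmlNodeList itemNodes = xmlDoc.SelectNodes(xpath, nsmanager);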

    Read the article

  • How to check if new version of Chrome is available?

    - by serg
    I am trying to build an extension that would notify a user when a new version of Chrome is available. I tried to inspect network traffic while Chrome checks for an update: it sends a request to
      http://74.125.95.113/service/update2?w=3:{long_encoded_string}
    which returns XML with the information I need:
      <?xml version="1.0" encoding="UTF-8"?>
      <gupdate xmlns="http://www.google.com/update2/response" protocol="2.0" server="prod">
        <daystart elapsed_seconds="31272"/>
        <app appid="{8A69D345-D564-463C-AFF1-A69D9E530F96}" status="ok">
          <updatecheck status="noupdate"/>
          <ping status="ok"/>
        </app>
      </gupdate>
    Besides sending {long_encoded_string} as a URL parameter, it also sends some encoded cookie. Maybe someone familiar with the Chrome build process can shed some light on those encoded strings and how to build them? Maybe there is another, easier way (I have a feeling that string encoding is a dead end for me)?

    Read the article

  • Java "Pool" of longs or Oracle sequence with reusable values

    - by Anthony Accioly
    Several months ago I implemented a solution to choose unique values from a range between 1 and 65535 (16 bits). This range is used to generate unique Route Target suffixes, which for this customer's massive network (it's a huge ISP) are a very disputed resource, so any freed index needs to become immediately available to the end user. To tackle this requirement I used a BitSet: allocate an RT suffix with set, deallocate one with clear, and let nextClearBit() find the next available index. I handle synchronization / concurrency issues manually. This works pretty well for a small range: the entire index is small (around 10k), it is blazing fast and it can easily be serialized into a Blob field. The problem is that some new devices can handle RTs of 32 bits (range 1 to 4294967296), which can't be managed with a BitSet (it would, by itself, consume around 600Mb, and is limited to the int range anyway). Even with this massive range available, the client still wants freed Route Targets to become available again, mainly because the lowest ones (up to 65535), which are compatible with old routers, are being heavily disputed. Before I tell the customer that this is impossible and that he will have to make do with my reusable index for the lower RTs (up to 65550) and a database sequence for the other ones (which means that when the user frees a Route Target, it will not become available again), would anyone shed some light? Maybe some kind soul has already implemented a high-performance number pool for Java (6, if it matters), or I am missing a killer feature of the Oracle database (11gR2, if it matters)... Wishful thinking. Thank you very much in advance.
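
    As a design sketch only (written in C# for illustration; the same shape maps directly onto Java's TreeSet or a small database table of freed suffixes), one common structure for this kind of pool keeps the freed values in an ordered set plus a high-water mark for never-used values, so memory is proportional to the number of currently freed suffixes rather than to the 32-bit range, and the lowest freed value is always reused first:
      using System;
      using System.Collections.Generic;

      // Reusable ID pool: lowest freed value first, then the never-used counter.
      public class IdPool
      {
          private readonly long max;
          private long nextNeverUsed = 1;
          private readonly SortedSet<long> freed = new SortedSet<long>();
          private readonly object gate = new object();

          public IdPool(long max) { this.max = max; }

          public long Allocate()
          {
              lock (gate)
              {
                  if (freed.Count > 0)
                  {
                      long id = freed.Min;       // hand back the lowest freed suffix
                      freed.Remove(id);
                      return id;
                  }
                  if (nextNeverUsed > max) throw new InvalidOperationException("pool exhausted");
                  return nextNeverUsed++;
              }
          }

          public void Release(long id)
          {
              lock (gate) { freed.Add(id); }
          }
      }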

    Read the article

  • App is fast on 3GS but slow on 3G

    - by Anthony Chan
    Hi all, I'm new to computer coding and have just finished coding an app, which I have tested on both a 3G and a 3GS. On the 3GS it works as smoothly as in the simulator. However, when I run it on the 3G, the app becomes extremely slow. I'm not sure of the reason and I hope someone could shed some light on it. Generally, my app has a couple of view controller classes, with one of them being the title page, one the main page, one settings, etc. I use a dissolve transition from the title page to the main page, but even this simple transition is not smooth on a 3G! The other parts of the app involve zooming into some images by scaling them up, switching images with a push or dissolve on touch events, saving photos into the photo library, and storing and retrieving some photos in a folder and some data in a SQLite database; each of these is also not smooth. Compared with graphics-heavy or math-heavy apps, I think mine is pretty simple. I have no clue why the app behaves so slowly and jerkily that it is barely usable on a 3G. Any help or direction would be much appreciated. Thanks for helping out.

    Read the article

  • Versioning code in two separate projects concurrently with Subversion

    - by Matt1776
    I need to create a library of object-oriented PHP code that will see much reuse and aspires to be highly flexible and modular. Because of its independent nature I would like it to exist as its own SVN project. I would like to be able to create a new web project, save it in SVN as its own separate project, and include the library project code within it as well. During this process, while coding the web application and making commits, I may need to add a class to the library. I would like to be able to do so and commit those changes back to the library's project code. In light of all this, I could manage the code in two ways:
    1. Commit the changes to the library back to a branch of its original base project code, and make the branch name relevant to the web project I was using it with.
    2. Commit the changes to the library back to the original code, growing it in size regardless of any specific references that might exist.
    I have two questions. How can I include this library project code in a new project yet not break the Subversion functionality, i.e. still be able to make changes to each project individually? And how can I keep the code synchronized? If I choose the first method of managing the library code, I may want to grab changes from one branch and pull them in for use in another.

    Read the article

  • C++ Storing variables and inheritance

    - by Kaa
    Hello everyone, here is my situation: I have an event-driven system where all my handlers are derived from an IHandler class and implement an onEvent(const Event &event) method. Event is a base class for all events and contains only the enumerated event type. All actual events are derived from it, including the EventKey event, which has two fields: (uchar) keyCode and (bool) isDown. Here's the interesting part: I generate an EventKey event using the following syntax:
      Event evt = EventKey(15, true);
    and I ship it to the handlers with:
      EventDispatch::sendEvent(evt); // void EventDispatch::sendEvent(const Event &event);
    (EventDispatch contains a linked list of IHandlers and calls their onEvent(const Event &event) method with the parameter containing the sent event.) Now the actual question: say I want my handlers to put the events they receive into a queue of type Event and poll them later; how do I do that?
    - Dynamic pointers with reference counting sound like too big a solution.
    - Making copies is more difficult than it sounds, since I'm only receiving a reference to the base type, so each time I would need to check the type of the event, downcast to EventKey and then make a copy to store in a queue. That sounds like the only solution, but it is unpleasant since I would need to know every single type of event and check it for every event received - sounds like a bad plan.
    - I could allocate the events dynamically and then send around pointers to them, enqueuing them in the queue if wanted - but other than reference counting, how would I be able to keep track of that memory?
    Do you know any way to implement a very light reference counter that wouldn't interfere with the user? What do you think would be a good solution to this design? I thank everyone in advance for your time. Sincerely, Kaa

    Read the article

  • How to perform a SQL-query-to-DataTable operation that can be cancelled

    - by David W
    I tried to make the title as specific as possible. Basically, what I have running inside a BackgroundWorker thread now is some code that looks like:
      SqlConnection conn = new SqlConnection(connstring);
      SqlCommand cmd = new SqlCommand(query, conn);
      conn.Open();
      SqlDataAdapter sda = new SqlDataAdapter(cmd);
      sda.Fill(Results);
      conn.Close();
      sda.Dispose();
    where query is a string representing a large, time-consuming query, and conn is the connection object. My problem now is that I need a stop button. I've come to realize that killing the BackgroundWorker would be worthless, because I still want to keep whatever results are left over after the query is cancelled; besides, it wouldn't be able to check the cancelled state until after the query finishes.
    What I've come up with so far: I've been trying to work out how to handle this efficiently without taking too big a performance hit. My idea was to use a SqlDataReader to read the data from the query piece by piece, so that I had a loop in which to check a flag I could set from the GUI via a button. The problem is that, as far as I know, I can't use the Load() method of a DataTable and still be able to cancel the SqlCommand. If I'm wrong, please let me know, because that would make cancelling slightly easier. In light of what I discovered, I came to the realization that I may only be able to cancel the SqlCommand mid-query if I do something like the below (pseudo-code):
      while (reader.Read())
      {
          // check flag status
          // if it is set to 'kill', fire off the kill thread
          // otherwise populate the DataTable with what was read
      }
    However, it seems to me this could be highly inefficient and possibly costly. Is this the only way to kill a SqlCommand in progress whose results absolutely need to end up in a DataTable? Any help would be appreciated!
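
    For what the row-by-row route might look like, here is a hedged C# sketch (the CancelRequested flag is a stand-in for whatever the stop button sets): the reader loop keeps every row it has already pulled, and SqlCommand.Cancel() asks SQL Server to stop producing the rest. Checking a boolean per row is cheap next to the network read itself.
      using System.Data;
      using System.Data.SqlClient;

      public class CancellableQuery
      {
          public volatile bool CancelRequested;   // set to true from the UI's stop button

          public DataTable Run(string connstring, string query)
          {
              var results = new DataTable();
              using (var conn = new SqlConnection(connstring))
              using (var cmd = new SqlCommand(query, conn))
              {
                  conn.Open();
                  using (SqlDataReader reader = cmd.ExecuteReader())
                  {
                      // Build the columns once from the reader's schema.
                      for (int i = 0; i < reader.FieldCount; i++)
                          results.Columns.Add(reader.GetName(i), reader.GetFieldType(i));

                      while (!CancelRequested && reader.Read())
                      {
                          DataRow row = results.NewRow();
                          for (int i = 0; i < reader.FieldCount; i++)
                              row[i] = reader[i];
                          results.Rows.Add(row);
                      }

                      if (CancelRequested)
                          cmd.Cancel();   // tell the server to abandon the rest of the result set
                  }
              }
              return results;              // contains whatever arrived before the cancel
          }
      }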

    Read the article

  • Questions regarding detouring by modifying the virtual table

    - by Elliott Darfink
    I've been practicing detours using the same approach as Microsoft Detours (replace the first five bytes with a jmp and an address). More recently I've been reading about detouring by modifying the virtual table. I would appreciate it if someone could shed some light on the subject by mentioning a few pros and cons of this method compared to the one previously mentioned. I'd also like to ask about patched vtables and objects on the stack. Consider the following situation:
      // Class definition
      struct Foo
      {
          virtual void Call(void) { std::cout << "FooCall\n"; }
      };

      // If it's GCC, 'this' is passed as the first parameter
      void MyCall(Foo * object)
      {
          std::cout << "MyCall\n";
      }

      // In some function
      Foo * foo = new Foo;   // Allocated on the heap
      Foo foo2;              // Created on the stack

      // Arguments: void ** vtable, uint offset, void * replacement
      PatchVTable(*reinterpret_cast<void***>(foo), 0, MyCall);

      // Call the methods
      foo->Call();   // Outputs: 'MyCall'
      foo2.Call();   // Outputs: 'FooCall'
    In this case foo->Call() ends up calling MyCall(Foo * object), whilst foo2.Call() calls the original function (i.e. the Foo::Call(void) method). This is because the compiler will resolve virtual calls at compile time if it can (correct me if I'm wrong). Does that mean it does not matter whether you patch the virtual table or not, as long as you use objects on the stack (not heap allocated)?

    Read the article

  • Manually wiring up unobtrusive jquery validation client-side without Model/Data Annotations, MVC3

    - by cmorganmcp
    After searching and experimenting for 2 days I relent. What I'd like to do is manually wire up injected html with jquery validation. I'm getting a simple string array back from the server and creating a select with the strings as options. The static fields on the form are validating fine. I've been trying the following: var dates = $("<select id='ShiftDate' data-val='true' data-val-required='Please select a date'>"); dates.append("<option value=''>-Select a Date-</option>"); for (var i = 0; i < data.length; i++) { dates.append("<option value='" + data[i] + "'>" + data[i] + "</option>"); } $("fieldset", addShift).append($("<p>").append("<label for='ShiftDate'>Shift Date</label>\r").append(dates).append("<span class='field-validation-valid' data-valmsg-for='ShiftDate' data-valmsg-replace='true'></span>")); // I tried the code below as well instead of adding the data-val attributes and span manually with no luck dates.rules("add", { required: true, messages: { required: "Please select a date" } }); // Thought this would do it when I came across several posts but it didn't $.validator.unobtrusive.parse(dates.closest("form")); I know I could create a view model ,decorate it with a required attribute, create a SelectList server-side and send that, but it's more of a "how would I do this" situation now. Can anyone shed light on why the above code wouldn't work as I expect? -chad

    Read the article

  • While loop in foreach loop not looping correctly

    - by tominated
    I'm trying to make a very basic PHP ORM for a school project. I have almost everything working, but I'm stuck trying to map results to an array. Here's a snippet of code that will hopefully help my explanation:
      $results = array();
      foreach ($this->columns as $column) {
          $current = array();
          while ($row = mysql_fetch_array($this->results)) {
              $current[] = $row[$column];
              print_r($current);
              echo '<br><br>';
          }
          $results[$column] = $current;
      }
      print_r($results);
      return mysql_fetch_array($this->results);
    This works, but the while loop only produces data for the first column. The print_r($results); shows the following:
      Array
      (
          [testID] => Array
              (
                  [0] => 1
                  [1] => 2
              )
          [testName] => Array
              (
              )
          [testData] => Array
              (
              )
      )
    Can anybody shed some light? Thanks in advance!

    Read the article

  • Can't find momd file: Core Data problems

    - by thekevinscott
    Aw geez! I screwed something up! I'm a Core Data noob working on my first iOS app. After much Stack Overflowing, I'm using this code:
      NSString *path = [[NSBundle mainBundle] pathForResource:@"CoreData" ofType:@"momd"];
      if (!path) {
          path = [[NSBundle mainBundle] pathForResource:@"CoreData" ofType:@"mom"];
      }
      NSAssert(path != nil, @"Unable to find Resource in main bundle");
    CoreData is the name of my app. I tried to put initial data into the app by finding the path to the sqlite file in my iPhone simulator, then going in and inserting into that sqlite file. But at some point I moved the sqlite file (thinking it would create a fresh copy), deleted the app from the simulator, and now the sqlite file is gone. I'm not sure if I left out some part of the process (this was a few hours ago), but the end result is that everything is screwed up. How do I regenerate this sqlite / momd file? "Clean" and "Clean all targets" are grayed out. I'm happy to post the relevant code from my app that would help shed some light on this problem, but there's tons of code relating to Core Data which I don't understand, so I'm not sure what part to post! Any help is greatly appreciated.

    Read the article

  • Saving ntext data from SQL Server to a file directory using ASP

    - by April
    A variety of files (PDFs, images, etc.) are stored in an ntext field on a MS SQL Server. I am not sure what type of data is in this field, other than that it shows question marks and undefined characters; I am assuming it is binary. The script is supposed to iterate through the rows and extract and save these files to a temp directory. "filename" and "contenttype" are given, and "data" is whatever is in the ntext field. I have tried several solutions:
    1)
      data.SaveToFile "/temp/"&filename, 2
    Error: Object required: '????????????????????' ???
    2)
      File.WriteAllBytes "/temp/"&filename, data
    Error: Object required: 'File'. I have no idea how to import this, or the Server object for MapPath. (Cue: what a noob!)
    3)
      Const adTypeBinary = 1
      Const adSaveCreateOverWrite = 2
      Dim BinaryStream
      Set BinaryStream = CreateObject("ADODB.Stream")
      BinaryStream.Type = adTypeBinary
      BinaryStream.Open
      BinaryStream.Write data
      BinaryStream.SaveToFile "C:\temp\" & filename, adSaveCreateOverWrite
    Error: Arguments are of the wrong type, are out of acceptable range, or are in conflict with one another.
    4)
      Response.ContentType = contenttype
      Response.AddHeader "content-disposition","attachment;" & filename
      Response.BinaryWrite data
      Response.End
    This works, but the file should be saved to the server instead of popping up a save-as dialog. I am not sure if there is a way to save the response to a file. Thanks for shedding light on any of these problems!

    Read the article

  • web service slowdown

    - by user238591
    Hi, I have a web service slowdown. My (web) service is written in gSOAP and managed C++. It is not IIS/Apache hosted, but it speaks XML. My client is in .NET. The service's computation time is light (less than 0.1 s to prepare a reply). I expect the service to be smooth and fast and to have good availability. I have about 100 clients, a response time of 1 s is mandatory, and each client makes about one request per minute. Clients check for the web service's presence with a TCP open-port test, so, to avoid possible congestion, I turned gSOAP's KeepAlive off. Up to that point everything ran fine: I barely saw connections in TCPView (Sysinternals). A new synchronisation program now calls the service in a loop. The load is higher, but everything is still processed in less than 30 seconds. However, with Sysinternals TCPView I see that about a thousand connections are in TIME_WAIT. They slow down the service, and it now takes seconds to reply. Could it be that I need to reset the SoapHttpClientProtocol connection? Has anyone else seen TIME_WAIT ghosts with a web service called in a loop?

    Read the article
