Search Results

Search found 9889 results on 396 pages for 'behind the compiler'.

Page 336/396 | < Previous Page | 332 333 334 335 336 337 338 339 340 341 342 343  | Next Page >

  • Does it ever make sense to make a fundamental (non-pointer) parameter const?

    - by Scott Smith
    I recently had an exchange with another C++ developer about the following use of const: void Foo(const int bar); He felt that using const in this way was good practice. I argued that it does nothing for the caller of the function (since a copy of the argument was going to be passed, there is no additional guarantee of safety with regard to overwrite). In addition, doing this prevents the implementer of Foo from modifying their private copy of the argument. So, it both mandates and advertises an implementation detail. Not the end of the world, but certainly not something to be recommended as good practice. I'm curious as to what others think on this issue. Edit: OK, I didn't realize that const-ness of the arguments didn't factor into the signature of the function. So, it is possible to mark the arguments as const in the implementation (.cpp), and not in the header (.h) - and the compiler is fine with that. That being the case, I guess the policy should be the same for making local variables const. One could make the argument that having different looking signatures in the header and source file would confuse others (as it would have confused me). While I try to follow the Principle of Least Astonishment with whatever I write, I guess it's reasonable to expect developers to recognize this as legal and useful.
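
    A minimal sketch of the point made in the edit (hypothetical names, not from the original exchange): top-level const on a by-value parameter does not participate in the function's type, so the declaration can omit it while the definition keeps it to protect the local copy.

        // foo.h (declaration): callers never see the const
        void Foo(int bar);

        // foo.cpp (definition): const applies only to Foo's local copy
        void Foo(const int bar)
        {
            // bar += 1;   // would not compile: bar is const in this body
            (void)bar;     // silence the unused-parameter warning
        }

        int main()
        {
            Foo(42);       // both declarations name the same function
            return 0;
        }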

    Read the article

  • Questions about the Backpropagation Algorithm

    - by Colemangrill
    I have a few questions concerning backpropagation. I'm trying to learn the fundamentals behind neural network theory and wanted to start small, building a simple XOR classifier. I've read a lot of articles and skimmed multiple textbooks, but I can't seem to teach this thing the pattern for XOR. Firstly, I am unclear about the learning model for backpropagation. Here is some pseudo-code to represent how I am trying to train the network. [Let's assume my network is set up properly (i.e. multiple inputs connect to a hidden layer, which connects to an output layer, all wired up properly).] SET guess = getNetworkOutput() // Note this is using a sigmoid activation function. SET error = desiredOutput - guess SET delta = learningConstant * error * sigmoidDerivative(guess) For Each Node in inputNodes For Each Weight in inputNodes[n] inputNodes[n].weight[j] += delta; // At this point, I am assuming the first layer has been trained. // Then I recurse a similar function over the hidden layer and output layer. // The prime difference being that it further divvies up the adjustment delta. I realize this is probably not enough to go on, and I will gladly expound on any part of my implementation. Using the above algorithm, my neural network does get trained, kind of, but not properly. The output is always: XOR 1 1 [smallest number], XOR 0 0 [largest number], XOR 1 0 [medium number], XOR 0 1 [medium number]. I can never train [1,1] and [0,0] to produce the same value. If you have any suggestions, additional resources, articles, blogs, etc. for me to look at, I am very interested in learning more about this topic. Thank you for your assistance, I appreciate it greatly!
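
    For comparison, here is a minimal, self-contained C++ sketch of a 2-2-1 sigmoid network trained on XOR with plain backpropagation; it is not the poster's code, and every name and constant in it is illustrative. The step the pseudo-code above leaves out is visible in the backward pass: each hidden unit gets its own delta, formed from the output delta weighted by that unit's hidden-to-output weight and the sigmoid derivative at the hidden unit, rather than one shared delta applied to every weight. (With only two hidden units the net can occasionally settle into a local minimum, so a different random seed or more epochs may be needed.)

        #include <cmath>
        #include <cstdio>
        #include <cstdlib>

        static double sigmoid(double x) { return 1.0 / (1.0 + std::exp(-x)); }
        static double frand() { return (std::rand() / (double)RAND_MAX) * 2.0 - 1.0; }

        int main() {
            const double in[4][2]  = { {0,0}, {0,1}, {1,0}, {1,1} };
            const double target[4] = { 0, 1, 1, 0 };
            const double eta = 0.5;                      // learning rate

            // wh[h][i]: input->hidden weights, bh[h]: hidden biases,
            // wo[h]:    hidden->output weights, bo:    output bias
            double wh[2][2], bh[2], wo[2], bo = frand();
            for (int h = 0; h < 2; ++h) {
                bh[h] = frand(); wo[h] = frand();
                for (int i = 0; i < 2; ++i) wh[h][i] = frand();
            }

            for (int epoch = 0; epoch < 20000; ++epoch) {
                for (int p = 0; p < 4; ++p) {
                    // forward pass
                    double hid[2];
                    for (int h = 0; h < 2; ++h)
                        hid[h] = sigmoid(wh[h][0]*in[p][0] + wh[h][1]*in[p][1] + bh[h]);
                    double out = sigmoid(wo[0]*hid[0] + wo[1]*hid[1] + bo);

                    // backward pass: output delta, then one delta per hidden unit
                    double dOut = (target[p] - out) * out * (1.0 - out);
                    double dHid[2];
                    for (int h = 0; h < 2; ++h)
                        dHid[h] = dOut * wo[h] * hid[h] * (1.0 - hid[h]);

                    // weight and bias updates
                    for (int h = 0; h < 2; ++h) {
                        wo[h] += eta * dOut * hid[h];
                        bh[h] += eta * dHid[h];
                        for (int i = 0; i < 2; ++i)
                            wh[h][i] += eta * dHid[h] * in[p][i];
                    }
                    bo += eta * dOut;
                }
            }

            for (int p = 0; p < 4; ++p) {
                double hid[2];
                for (int h = 0; h < 2; ++h)
                    hid[h] = sigmoid(wh[h][0]*in[p][0] + wh[h][1]*in[p][1] + bh[h]);
                double out = sigmoid(wo[0]*hid[0] + wo[1]*hid[1] + bo);
                std::printf("%g XOR %g -> %.3f\n", in[p][0], in[p][1], out);
            }
            return 0;
        }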

    Read the article

  • Delphi: How to avoid EIntOverflow underflow when subtracting?

    - by Ian Boyd
    Microsoft already says, in the documentation for GetTickCount, that you should never compare tick counts directly to check if an interval has passed. e.g.: Incorrect (pseudo-code): DWORD endTime = GetTickCount + 10000; //10 s from now ... if (GetTickCount > endTime) break; The above code is bad because it is susceptible to rollover of the tick counter. For example, assume that the clock is near the end of its range: endTime = 0xfffffe00 + 10000 = 0x00002510; //9,488 decimal Then you perform your check: if (GetTickCount > endTime) Which is satisfied immediately, since GetTickCount is larger than endTime: if (0xfffffe01 > 0x00002510) The solution: instead, you should always subtract the two tick counts: DWORD startTime = GetTickCount; ... if (GetTickCount - startTime) > 10000 //if it's been 10 seconds break; Looking at the same math: if (GetTickCount - startTime) > 10000 if (0xfffffe01 - 0xfffffe00) > 10000 if (1 > 10000) Which is all well and good in C/C++, where the compiler behaves a certain way. But what about Delphi? When I perform the same math in Delphi, with overflow checking on ({$Q+}, {$OVERFLOWCHECKS ON}), the subtraction of the two tick counts generates an EIntOverflow exception when the TickCount rolls over: if (0x00000100 - 0xffffff00) > 10000 0x00000100 - 0xffffff00 = 0x00000200 What is the intended solution for this problem? Edit: I've tried to temporarily turn off OVERFLOWCHECKS: {$OVERFLOWCHECKS OFF} delta = GetTickCount - startTime; {$OVERFLOWCHECKS ON} But the subtraction still throws an EIntOverflow exception. Is there a better solution, involving casts and larger intermediate variable types?
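
    For reference, a compilable C++ sketch of the arithmetic the C/C++ solution above relies on (illustrative values only): with an unsigned 32-bit type the subtraction wraps modulo 2^32, so the elapsed count is right even across a rollover. How to express the same thing in Delphi with {$Q+} in force is exactly the open question here; this just demonstrates the intended math.

        #include <cstdint>
        #include <cstdio>

        int main() {
            uint32_t startTime = 0xFFFFFF00u;      // tick counter just before rollover
            uint32_t now       = 0x00000100u;      // tick counter just after rollover
            uint32_t elapsed   = now - startTime;  // wraps modulo 2^32 to 0x200 == 512
            std::printf("elapsed = %u ticks\n", (unsigned)elapsed);
            if (elapsed > 10000u)
                std::printf("10 seconds have passed\n");
            else
                std::printf("still waiting\n");    // printed: only 512 ms have elapsed
            return 0;
        }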

    Read the article

  • submitting the form even if the form is in an invalid state

    - by Abu Hamzah
    I am using ASP.NET Web Forms and the bassistance.de jQuery validation plugin. I have a bunch of fields in the form and it's validating how it's supposed to, but it's still executing my code-behind code. Why is it doing that? Here is my code: form: <script type="text/javascript"> $(document).ready(function() { $("#aspnetForm").validate({ rules: { <%=txtVisitName.UniqueID %>: { maxlength:1, //minlength: 12, required: true } ................. }, messages: { <%=txtVisitName.UniqueID %>: { required: "Enter visit name", minlength: jQuery.format("Enter at least {0} characters.") } ............ }, success: function(label) { label.html("&nbsp;").addClass("checked"); } }); $("#aspnetForm").validate(); }); </script> <div > <h1> Page</h1> <asp:Label runat="server" ID='Label1'>Visit Name:</asp:Label> <asp:TextBox ID="txtVisitName" runat='server'></asp:TextBox> ............ ............... <button id="btnSubmit" name="btnSubmit" type="submit"> Submit</button> </div> and I have an external .js file referenced from my web form page. $(document).ready(function() { $("#btnSubmit").click(function() { SavePage(); }); }); I have a WebMethod in my .cs, and I have a bookmark there and it does hit that line of code even though my page is in an invalid state. How would I fix this? Thanks.

    Read the article

  • Concerned with FooterRow in GridView

    - by ramyatk06
    Hi guys, I have the following GridView. <asp:GridView ID="gvMarks" runat="server" AutoGenerateColumns="false" DataKeyNames="MarkId" Width="80%" onrowdatabound="gvMarks_RowDataBound" ShowFooter="True" onrowcommand="gvMarks_RowCommand"> <Columns> <asp:TemplateField> <HeaderTemplate> SubjectCode </HeaderTemplate> <FooterTemplate> <asp:DropDownList ID="dlSubjectCode" runat="server" width="100px" AutoPostBack="false"></asp:DropDownList> </FooterTemplate> </asp:TemplateField> <asp:TemplateField> <HeaderTemplate> Mark </HeaderTemplate> <FooterTemplate> <asp:TextBox ID="txtInternalMark" runat="server" ></asp:TextBox> </FooterTemplate> </asp:TemplateField> <asp:TemplateField> <HeaderTemplate> Insert </HeaderTemplate> <FooterTemplate> <asp:LinkButton ID="lnkInsert" runat="server" Text="Insert" CommandName="Insert" ></asp:LinkButton> </FooterTemplate> </asp:TemplateField> </Columns> <FooterStyle BackColor="White" ForeColor="#000066" /> <RowStyle ForeColor="#000066" /> <SelectedRowStyle BackColor="#669999" Font-Bold="True" ForeColor="White" /> <PagerStyle BackColor="White" ForeColor="#000066" HorizontalAlign="Left" /> <HeaderStyle BackColor="#006699" Font-Bold="True" ForeColor="White" /> </asp:GridView> If I enter some value in txtInternalMark, I am not getting its value in the code-behind. I am getting the value as "". I am using the following code: if (e.CommandName.Equals("Insert")) { TextBox txtInternalMark = (TextBox)gvMarks.FooterRow.FindControl("txtInternalMark"); lblMessage.Text = txtInternalMark.Text; } Can anybody help me get the value of the textbox in the code-behind?

    Read the article

  • Refactoring an ASP.NET 2.0 app to be more "modern"

    - by Wayne M
    This is a hypothetical scenario. Let's say you've just been hired at a company with a small development team. The company uses an internal CRM/ERP-type system written in .NET 2.0 to manage all of its day-to-day things (let's simplify and say customer accounts and records). The app was written a couple of years ago when .NET 2.0 was just out and uses the following architectural designs: Webforms; a data layer that is a thin wrapper around SqlCommand and calls stored procedures; rudimentary DTO-style business objects that are populated via the sprocs; and a "business logic" layer that acts as a gateway between the webform and the database (i.e. the code-behind calls that layer). Let's say that as more changes and requirements are added to the application, you start to feel that the old architecture is showing its age, and changes are increasingly difficult to make. How would you go about introducing refactoring steps to A) modernize the app (i.e. proper separation of concerns) and B) make sure that the app can readily adapt to change in the organization? IMO the changes would involve: introduce an ORM like Linq to Sql and get rid of the sprocs for CRUD; assuming that you can't just throw out Webforms, introduce the M-V-P pattern to the forms; make sure the gateway classes conform to SRP and the other SOLID principles; and change the logic that is re-used to be web service methods instead of having to reuse code. What are your thoughts? Again, this is a totally hypothetical scenario that many of us have faced in the past, or may end up facing.

    Read the article

  • SharePoint fails to load a C++ DLL on Windows 2008

    - by Nathan
    I have a SharePoint DLL that does some licensing things, and as part of the code it uses an external C++ DLL to get the serial number of the hard disk. When I run this application on Windows Server 2003 it works fine, but on 2008 the whole site (loaded on load) crashes and resets continually. This is not 2008 R2 and is the same in 64 or 32 bits. If I put a Debugger.Break before the DLL execution then I see the code get to the point of the break and then never come back into the DLL again. I do get some debug assertion warnings from within the function, again only in 2008, but I'm not sure this is related. I created a console app that runs the C# DLL, which in turn loads the C++ DLL, and this works perfectly on 2008 (although it does show the assertion errors, but I have suppressed these now). The assertion errors are not in my code but within ICtypes.c and not something I can debug. If I put a breakpoint in the DLL it is never hit, and the debugger says "step in: Stepping over non user code" if I try to debug into the DLL using VS. I have tried wrapping the code used to call the DLL in: SPSecurity.RunWithElevatedPrivileges(delegate() but this also does not help. I have the source code for this DLL so that is not a problem. If I delete the DLL from the directory I get an error about a missing DLL; if I put it back, there is no error or warning, just a complete failure. If I replace this code with a hardcoded string the whole application works fine. Any advice would be much appreciated; I can't understand why it works as a console app yet not when run by SharePoint, with the same user account, on the same machine... This is the code used to call the DLL: [DllImport("idDll.dll", EntryPoint = "GetMachineId", SetLastError = true)] extern static string GetComponentId([MarshalAs(UnmanagedType.LPStr)]String s); public static string GetComponentId() { Debugger.Break(); if (_machine == string.Empty) { string temp = ""; id= ComponentId.GetComponentId(temp); } return id; }

    Read the article

  • Scala path dependent return type from parameter

    - by Rich Oliver
    In the following code, using 2.10.0M3 in Eclipse plugin 2.1.0 for 2.10M3, with the default setting which targets JVM 1.5: class GeomBase[T <: DTypes] { abstract class NewObjs { def newHex(gridR: GridBase, coodI: Cood): gridR.HexRT } class GridBase { selfGrid => type HexRT = HexG with T#HexTr def uniformRect (init: NewObjs) { val hexCood = Cood(2 ,2) val hex: HexRT = init.newHex(selfGrid, hexCood)// won't compile } } } Error message: Description Resource Path Location Type type mismatch; found: GeomBase.this.GridBase#HexG with T#HexTr required: GridBase.this.HexRT (which expands to) GridBase.this.HexG with T#HexTr GeomBase.scala Why does the compiler think the method returns the type projection GridBase#HexG when it should be this specific instance of GridBase? Edit: transferred to a simpler code example in response to comments; now getting a different error message. package rStrat class TestClass { abstract class NewObjs { def newHex(gridR: GridBase): gridR.HexG } class GridBase { selfGrid => def uniformRect (init: NewObjs) { val hex: HexG = init.newHex(this) //error here } class HexG { val test12 = 5 } } } . Error line 11:Description Resource Path Location Type type mismatch; found : gridR.HexG required: GridBase.this.HexG possible cause: missing arguments for method or constructor TestClass.scala /SStrat/src/rStrat line 11 Scala Problem Update: I've switched to 2.10.0M4 and updated the plug-in to the M4 version on a fresh version of Eclipse and switched to JVM 1.6 (and 1.7), but the problems are unchanged.

    Read the article

  • C++ floating point precision loss: 3015/0.00025298219406977296

    - by SigTerm
    The problem: Microsoft Visual C++ 2005 compiler, 32-bit Windows XP SP3, AMD 64 X2 CPU. Code: double a = 3015.0; double b = 0.00025298219406977296; //*((unsigned __int64*)(&a)) == 0x40a78e0000000000 //*((unsigned __int64*)(&b)) == 0x3f30945640000000 double f = a/b;//3015/0.00025298219406977296; The result of the calculation (i.e. "f") is 11917835.000000000 (*((unsigned __int64*)(&f)) == 0x4166bb4160000000) although it should be 11917834.814763514 (i.e. *((unsigned __int64*)(&f)) == 0x4166bb415a128aef). I.e. the fractional part is lost. Unfortunately, I need the fractional part to be correct. Questions: 1) Why does this happen? 2) How can I fix the problem? Additional info: 0) The result is taken directly from the "watch" window (it wasn't printed, and I didn't forget to set printing precision). I also provided a hex dump of the floating point variable, so I'm absolutely sure about the calculation result. 1) The disassembly of f = a/b is: fld qword ptr [a] fdiv qword ptr [b] fstp qword ptr [f] 2) f = 3015/0.00025298219406977296; yields the correct result (f == 11917834.814763514 , *((unsigned __int64*)(&f)) == 0x4166bb415a128aef ), but it looks like in this case the result is simply calculated at compile time: fld qword ptr [__real@4166bb415a128aef (828EA0h)] fstp qword ptr [f] So, how can I fix this problem? P.S. I've found a temporary workaround (I need only the fractional part of the division, so I simply use f = fmod(a,b)/b at the moment), but I still would like to know how to fix this problem properly - double precision is supposed to be 16 decimal digits, so such a calculation isn't supposed to cause problems.
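
    One frequently cited cause for symptoms like this in 32-bit MSVC builds is that some other component loaded into the process (Direct3D is the usual suspect) lowers the x87 precision control word to 24-bit single precision. That is only a guess about the cause here, not a confirmed diagnosis, but it is cheap to test with the MSVC-specific _controlfp_s from <float.h>. The sketch below (illustrative, not the original code) queries the control word and forces 53-bit precision before the division.

        #include <float.h>   // _controlfp_s, _MCW_PC, _PC_24, _PC_53 (MSVC, x86)
        #include <stdio.h>

        int main()
        {
            unsigned int cw = 0;
            _controlfp_s(&cw, 0, 0);              // read the current control word
            if ((cw & _MCW_PC) == _PC_24)
                printf("FPU was in 24-bit (single) precision mode\n");

            _controlfp_s(&cw, _PC_53, _MCW_PC);   // force 53-bit (double) precision

            double a = 3015.0;
            double b = 0.00025298219406977296;
            double f = a / b;
            printf("%.9f\n", f);                  // 11917834.814763514 expected at 53-bit
            return 0;
        }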

    Read the article

  • InputManager ignores cut/copy/paste when initiated from menu

    - by Wallstreet Programmer
    I'm using the InputManager to check if changes to controls are done by the user or by code. This works fine, except when the user uses the context menu for cut/copy/paste. If the user does Ctrl+V in a textbox, InputManager correctly notices it. However, if the paste is done from the context menu of the textbox, the InputManager never fires the PreNotifyInput or PostNotifyInput events. Does anyone know why? Or how to detect these user actions? Below is a working sample. The lower textblock never gets updated when the user uses the cut/copy/paste menu in the above textbox, since PreNotifyInput never fires. XAML: <Window x:Class="InputMgrDemo.Window1" xmlns="http://schemas.microsoft.com/winfx/2006/xaml/presentation" xmlns:x="http://schemas.microsoft.com/winfx/2006/xaml" Height="300" Width="300"> <StackPanel> <TextBox TextChanged="TextBox_TextChanged" /> <TextBlock Name="_text" /> </StackPanel> </Window> Code behind: using System.Windows; using System.Windows.Controls; using System.Windows.Input; namespace InputMgrDemo { public partial class Window1 : Window { public Window1() { InitializeComponent(); InputManager.Current.PreNotifyInput += ((sender, e) => _userInput = true); InputManager.Current.PostNotifyInput += ((sender, args) => _userInput = false); } private void TextBox_TextChanged(object sender, TextChangedEventArgs e) { if (_userInput) { _text.Text = (sender as TextBox).Text; } } private bool _userInput; } }

    Read the article

  • Are you using C++0x today? [closed]

    - by Roger Pate
    This is a question in two parts, the first is the most important and concerns now: Are you following the design and evolution of C++0x? What blogs, newsgroups, committee papers, and other resources do you follow? Even where you're not using any new features, how have they affected your current choices? What new features are you using now, either in production or otherwise? The second part is a follow-up, concerning the new standard once it is final: Do you expect to use it immediately? What are you doing to prepare for C++0x, other than as listed for the previous questions? Obviously, compiler support must be there, but there's still co-workers, ancillary tools, and other factors to consider. What will most affect your adoption? Edit: The original really was too argumentative; however, I'm still interested in the underlying question, so I've tried to clean it up and hopefully make it acceptable. This seems a much better avenue than duplicating—even though some answers responded to the argumentative tone, they still apply to the extent that they addressed the questions, and all answers are community property to be cleaned up as appropriate, too.

    Read the article

  • Rapid taps on an OpenGL ES app introducing input delay

    - by Tim R.
    I am starting out writing a 2D game in OpenGL ES, and I have encountered an odd problem: if I rapidly tap the touchscreen, the input starts lagging behind the display. The more times I tap, the more delay it causes between the input and any indication of that input onscreen. It only happens if I intentionally tap very rapidly, but not from tapping and dragging with any number of fingers. What could be causing this? Excessive details follow: Both accelerometer input and taps are delayed by just tapping. The only events I am responding to are touchesBegan (below) in my EAGLView and accelerometer:(UIAccelerometer *)accelerometer didAccelerate:(UIAcceleration *)acceleration in my Game object. There doesn't seem to be any upper limit to the amount of delay: I've gotten up to 12 seconds of delay by tapping rapidly with five fingers. I have not seen any drops in framerate (it stays constantly at 60 fps) in the OpenGL ES tool in Instruments or by taking 1/the time between updates. Possibly relevant code: - (void) drawView:(id) sender { [game update:allTouches]; [renderer render:game]; } -(void)touchesBegan:(NSSet *)touches withEvent:(UIEvent *)event { allTouches = [event allTouches]; } allTouches is a pointer that gets passed to my Game every update, which passes it to each GameObject in their update methods.

    Read the article

  • different thread accessing MemoryStream

    - by Wayne
    There's a bit of code which writes data to a MemoryStream object directly into its data buffer by calling GetBuffer(). It also uses and updates the Position and SetLength() properties appropriately. This code works 99.9999% of the time. Literally. Only once every so many hundreds of thousands of iterations will it barf. The specific problem is that the memory.Position property suddenly returns zero instead of the appropriate value. However, code was added that checks for the 0 and throws an exception, which includes a log of the MemoryStream properties like Position and Length taken in a separate method. Those return the correct value. Further additions show that when this rare condition occurs, memory.Position only has zero inside this particular method. Okay. Obviously, this must be a threading issue. But this code is well locked. However, the nature of this software is that it's organized by "tasks" with a scheduler, and so any one of several actual O/S threads may run this code at any given time--but never more than one at a time. So it's my guess that ordinarily it so happens that the same thread keeps getting used for this method, and then on a rare occasion a different thread gets used. Then due to compiler optimizations, the different thread never gets the correct value. It gets a "stale" value. Ordinarily in a situation like this I would apply a "volatile" keyword to the variable in question. But those variables are inside the MemoryStream object. Does anyone have any other idea? Or does this mean we have to implement our own MemoryStream object? (Just like we end up having to do with practically every collection in .NET?) It's a shame to have such an awesome platform as .NET and have virtually the entire system useless as-is for seriously parallelized applications. If I'm wrong or you have other ideas, please advise. Sincerely, Wayne

    Read the article

  • My Python program always brings down my Internet connection after several hours of running; how do I debug and fix this problem?

    - by Shane
    I'm writing a Python script that checks/monitors the status of several servers/websites (response time and similar stuff). It's a GUI program and I use a separate thread to check each server/website; the basic structure of each thread is an infinite while loop that requests the site every random time period (15 to 30 seconds), and once there are changes in the website/server, each thread will start a new thread to do a thorough check (requesting more pages and similar stuff). The problem is, my Internet connection always gets blocked/jammed/messed up after several hours of running this script. The situation is: from my script's side I get "urlopen error timed out" each time it requests a page, and from my Firefox browser's side I cannot open any site. But the weird thing is, the moment I close my script my Internet connection comes back on immediately, which means now I can surf any site through my browser, so it must be the script causing all the problems. I've checked the program carefully and even use del to delete any connection once it's used, and still get the same problem. I only use urllib2, urllib, and mechanize to do network requests. Does anybody know why such a thing happens? How do I debug this problem? Is there a tool or something to check my network status once such a situation occurs? It's been bugging me for a while... By the way, I'm behind a VPN; does that have something to do with this problem? I don't think so, because my network always comes back once the script is closed, and the VPN connection never drops (as far as it appears) during the whole process.

    Read the article

  • clang does not compile but g++ does

    - by user1095108
    Can someone help me with this code: #include <type_traits> #include <vector> struct nonsense { }; template <struct nonsense const* ptr, typename R> typename std::enable_if<!std::is_void<R>::value, int>::type fo(void* const) { return 0; } template <struct nonsense const* ptr, typename R> typename std::enable_if<std::is_void<R>::value, int>::type fo(void* const) { return 1; } typedef int (*func_type)(void*); template <std::size_t O> void run_me() { static struct nonsense data; typedef std::pair<char const* const, func_type> pair_type; std::vector<pair_type> v; v.push_back(pair_type{ "a", fo<&data, int> }); v.push_back(pair_type{ "b", fo<&data, void> }); } int main(int, char*[]) { run_me<2>(); return 0; } clang-3.3 does not compile this code, but g++-4.8.1 does. Which of the two compilers is right? Is something wrong with the code, as I suspect? The error reads: a.cpp:32:15: error: no matching constructor for initialization of 'pair_type' (aka 'pair<const char *const, func_type>') v.push_back(pair_type{ "a", fo<&data, int> }); ^ ~~~~~~~~~~~~~~~~~~~~~~~ a.cpp:33:15: error: no matching constructor for initialization of 'pair_type' (aka 'pair<const char *const, func_type>') v.push_back(pair_type{ "b", fo<&data, void> }); ^ ~~~~~~~~~~~~~~~~~~~~~~~~

    Read the article

  • `enable_shared_from_this` has a non-virtual destructor

    - by Shtééf
    I have a pet project with which I experiment with new features of the upcoming C++0x standard. While I have experience with C, I'm fairly new to C++. To train myself into best practices, (besides reading a lot), I have enabled some strict compiler parameters (using GCC 4.4.1): -std=c++0x -Werror -Wall -Winline -Weffc++ -pedantic-errors This has worked fine for me. Until now, I have been able to resolve all obstacles. However, I have a need for enable_shared_from_this, and this is causing me problems. I get the following warning (error, in my case) when compiling my code (probably triggered by -Weffc++): base class ‘class std::enable_shared_from_this<Package>’ has a non-virtual destructor So basically, I'm a bit bugged by this implementation of enable_shared_from_this, because: A destructor of a class that is intended for subclassing should always be virtual, IMHO. The destructor is empty, why have it at all? I can't imagine anyone would want to delete their instance by reference to enable_shared_from_this. But I'm looking for ways to deal with this, so my question is really, is there a proper way to deal with this? And: am I correct in thinking that this destructor is bogus, or is there a real purpose to it?
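
    For what it's worth, a short sketch of the intended usage pattern (illustrative class name, not from the project): the base class is only a mix-in, and objects are owned and destroyed through shared_ptr<Package>, never through a pointer to enable_shared_from_this itself, so the non-virtual destructor does not come into play in normal use. Whether that justifies silencing -Weffc++ here is a judgement call.

        #include <cstdio>
        #include <memory>

        class Package : public std::enable_shared_from_this<Package> {
        public:
            // hand out additional owning references to an already-shared object
            std::shared_ptr<Package> self() { return shared_from_this(); }
            ~Package() { std::puts("~Package"); }
        };

        int main() {
            std::shared_ptr<Package> p(new Package);  // p must own the object first
            std::shared_ptr<Package> q = p->self();   // shares ownership with p
            std::printf("use_count = %ld\n", (long)p.use_count());  // prints 2
            return 0;  // "~Package" printed exactly once, via Package's destructor
        }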

    Read the article

  • Creating an object in the loop

    - by Jacob
    std::vector<double> C(4); for(int i = 0; i < 1000;++i) for(int j = 0; j < 2000; ++j) { C[0] = 1.0; C[1] = 1.0; C[2] = 1.0; C[3] = 1.0; } is much faster than for(int i = 0; i < 1000;++i) for(int j = 0; j < 2000; ++j) { std::vector<double> C(4); C[0] = 1.0; C[1] = 1.0; C[2] = 1.0; C[3] = 1.0; } I realize this happens because std::vector is repeatedly being created and instantiated in the loop, but I was under the impression this would be optimized away. Is it completely wrong to keep variables local in a loop whenever possible? I was under the (perhaps false) impression that this would provide optimization opportunities for the compiler. EDIT: I use VC++2005 (release mode) with full optimization (/Ox)
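
    A common compromise, sketched below (assuming the real loop body only needs the same four slots each pass): declare the vector once outside the hot loops and reuse its storage, which keeps the heap allocation out of the 2,000,000 iterations while still resetting the element values every pass. Whether the per-iteration construction in the second snippet gets optimized away depends on the compiler and allocator, so hoisting is the portable way to get the first snippet's speed with loop-local-looking usage.

        #include <vector>

        void fill_loop() {
            std::vector<double> C(4);          // one allocation, reused below
            for (int i = 0; i < 1000; ++i)
                for (int j = 0; j < 2000; ++j) {
                    C.assign(4, 1.0);          // same effect as the four assignments
                    // ... use C here ...
                }
        }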

    Read the article

  • Problem with inner classes of the same name in Visual C++

    - by starblue
    I have a problem with Visual C++, where apparently inner classes with the same name but in different outer classes are confused. The problem occurs for two layers, where each layer has a listener interface as an inner class. B is a listener of A, and has its own listener in a third layer above it (not shown). The structure of the code looks like this: A.h class A { class Listener { Listener(); virtual ~Listener() = 0; } [...] } B.h class B : public A::Listener { class Listener { Listener(); virtual ~Listener() = 0; } [...] } B.cpp B::Listener::Listener() {} B::Listener::~Listener() {} I get the error B.cpp(49) : error C2509: '{ctor}' : member function not declared in 'B' The C++ compiler for Renesas sh2a has no problem with this, but then it is more liberal than Visual C++ in some other respects, too. If I rename the listener interfaces to have different names the problem goes away, but I'd like to avoid that (the real class names instead of A or B are rather long). Is what I'm doing correct C++, or is the complaint by Visual C++ justified? Is there a way to work around this problem without renaming the listener interfaces?

    Read the article

  • Visual Studio 2008 closes unexpectedly

    - by Jose
    I don't know if I can really get an answer to this question, but it really irks me and I would like to know if someone has an idea how to arrive at an answer. I have a pretty large solution in VS 2008 where maybe every week/every other week, whenever I click Properties to get to the project properties, the IDE closes without warning. After that happens it will close EVERY time I try and view the properties. At that point I try deleting the .suo file, I resize the IDE, I close the tabs within the project, I restore default VS settings (when I'm desperate). Eventually, 20-30 minutes later, I can actually view the properties. I haven't figured out exactly what fixes it; it seems to be different every time. Once it's "fixed" I can't break it again to figure out what "fixed" it. This seems to be project specific, because I can view the properties of other projects while this project is misbehaving. I guess my first question is, does VS log reasons for closing unexpectedly? Can I find out what the offending reason behind this is? The main frustration is that I don't know the cause, nor the cure. Any ideas?

    Read the article

  • Why is it that an int in C++ that isn't initialized (then used) doesn't cause an error?

    - by omizzle
    I am new to C++ (just starting). I come from a Java background and I was trying out the following piece of code that would sum the numbers between 1 and 10 (inclusive) and then print out the sum: /* * File: main.cpp * Author: omarestrella * * Created on June 7, 2010, 8:02 PM */ #include <cstdlib> #include <iostream> using namespace std; int main() { int sum; for(int x = 1; x <= 10; x++) { sum += x; } cout << "The sum is: " << sum << endl; return 0; } When I ran it, it kept printing 32822 for the sum. I knew the answer was supposed to be 55 and realized that it was printing the max value for a short (32767) plus 55. Changing int sum; to int sum = 0; would work (as it should, since the variable needs to be initialized!). Why does this behavior happen, though? Why doesn't the compiler warn you about something like this? I know Java screams at you when something isn't initialized. Thank you.
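
    A sketch of the same program with the accumulator initialized (and a note on diagnostics, which vary by compiler: Visual C++ can flag the original as warning C4700, "uninitialized local variable 'sum' used", and GCC warns via -Wuninitialized, which in older releases only fires when optimization is enabled):

        #include <iostream>

        int main() {
            int sum = 0;                       // explicitly initialized, unlike before
            for (int x = 1; x <= 10; ++x)
                sum += x;
            std::cout << "The sum is: " << sum << std::endl;   // prints 55
            return 0;
        }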

    Read the article

  • IE7 & 8 not firing jQuery click events for elements appended inside a table

    - by Keith
    I have an IE bug that I'm not sure how to fix. Using jQuery I'm dynamically moving a menu to appear on an element on mouseover. My code (simplified) looks something like this: $j = jQuery.noConflict(); $j(document).ready(function() { //do something on the menu clicks $j('div.ico').click(function() { alert($j(this).parent().html()); }); setUpActions('#tableId', '#menuId'); }); //on mouseover set up the actions menu to appear on mouseover function setUpActions(tableSelector, menuSelector) { $j(tableSelector + ' div.test').mouseover(function() { //note that append will move the underlying //DOM element with all events from it's old //parent to the end of this one. $j(this).append($j(menuSelector).show()); }); } This menu doesn't seem to register events correctly for the menu after it's been moved in IE7, IE8 and IE8-as-IE7 (yeah MS, that's really a 'new rendering engine' in IE8, we all believe you). It works as expected in everything else. You can see the behaviour in a basic demo here. In the demo you can see two examples of the issue: The image behind the buttons should change on hover (done with a CSS :hover selector). It works on the first mouseover but then persists. The click event doesn’t fire – however with the dev tools you can manually call it and it is still subscribed. You can see (2) in IE8's dev tools: Open page in IE8 Open dev tools Select "Script" tab and "Console" sub-tab Type: $j('#testFloat div.ico:first').click() to manually call any subscribed events There will be an alert on the page This means that I'm not losing the event subscriptions, they're still there, IE's just not calling them when I click. Does anyone know why this bug occurs (other than just because of IE's venerable engine)? Is there a workaround? Could it be something that I'm doing wrong that just happens to work as expected in everything else?

    Read the article

  • Can't get node.js built on cygwin

    - by mwt
    Following the instructions here: https://github.com/ry/node/wiki/Building-node.js-on-Cygwin-(Windows) I've tried installing on two machines, either of which I'd be happy to get up and running. WinXP On 'make', I get: Build failed: -> task failed <err #2>: {task: libv8.a SConstruct -> libv8.a} According to the instructions, this is caused by having $SHELL set to a Windows style path, but I've set it to /bin/bash and get the same error. Win7 On './configure', I get: $ ./configure Checking for program g++ or c++ : /usr/bin/g++ Checking for program cpp : /usr/bin/cpp Checking for program ar : /usr/bin/ar Checking for program ranlib : /usr/bin/ranlib Checking for g++ : ok Checking for program gcc or cc : /usr/bin/gcc 0 [main] python 1092 C:\bin\python.exe: *** fatal error - unable to remap \\?\C:\lib\python2.6\lib-dynload\_functools.dll to same address as parent: 0x360000 != 0x3E0000 Stack trace: Frame Function Args 002891E8 6102749B (002891E8, 00000000, 00000000, 00000000) 002894D8 6102749B (61177B80, 00008000, 00000000, 61179977) 0028A508 61004AFB (611A136C, 61241CF4, 00360000, 003E0000) End of stack trace 0 [main] python 3536 fork: child 1092 - died waiting for dll loading, errno 11 /Users/Michael/Desktop/node/wscript:177: error: could not configure a c compiler! I've run 'rebaseall' and restarted the machine but still get that error. Edit: Ok, rebaseall was apparently erroring on some mingw stuff, so I edited the rebaseall script to fix that, and now it configures on Win7. The new problem is that it emits the exact same error as my XP machine now when I try to make. This is on tag v0.3.5.

    Read the article

  • What Test Environment Setup do Top Project Committers Use in the Ruby Community?

    - by viatropos
    Today I am going to get as far as I can setting up my testing environment and workflow. I'm looking for practical advice on how to set up the test environment from you guys who are very passionate and versed in Ruby testing. By the end of the day (6am PST?) I would like to be able to: Type one command to run the test suite for ANY project I find on Github. Run autotest for ANY Github project so I can fork and make TESTABLE contributions. Build gems from the ground up with Autotest and Shoulda. For one reason or another, I hardly ever run tests for projects I clone from Github. The major reason is that unless they're using RSpec and have a Rake task to run the tests, I don't see the common pattern behind it all. I have built 3 or 4 gems writing tests with RSpec, and while I find the DSL fun, it's less than ideal because it just adds another layer/language of methods I have to learn and remember. So I'm going with Shoulda. But this isn't a question about which testing framework to choose. So the questions are: What is your (the SO reader and Github project committer's) test environment setup using autotest, so that whenever you git clone a gem, you can run the tests and autotest-develop it if desired? What are the guys who are writing the Paperclip Tests and Authlogic Tests doing? What is their setup? Thanks for the insight. Looking for answers that will make me a more effective tester.

    Read the article

  • ISO C90 forbids mixed declarations and code sscanf

    - by Need4Sleep
    I'm getting a strange error attempting to compile my unit test code. For some reason the compiler treats my sscanf call as a mixed declaration? I don't quite understand; here is the entire error: cc1: warnings being treated as errors /home/brlcad/brlcad/src/libbn/tests/bn_complex.c: In function 'main': /home/brlcad/brlcad/src/libbn/tests/bn_complex.c:53: error: ISO C90 forbids mixed declarations and code make[2]: *** [src/libbn/tests/CMakeFiles/tester_bn_complex.dir/bn_complex.c.o] Error 1 make[1]: *** [src/libbn/tests/CMakeFiles/tester_bn_complex.dir/all] Error 2 make: *** [all] Error 2 int main(int argc, char *argv[]) { double expRe1, expIm2, expSqRe1, expSqIm2; double actRe1, actIm2, actSqRe1, actSqIm2; actRe1 = actIm2 = actSqRe1 = actSqIm2 = expRe1 = expIm2 = expSqRe1 = expSqIm2 = 0.0; bn_complex_t com1,com2; //a struct that holds two doubles if(argc < 5) bu_exit(1, "ERROR: Invalid parameters[%s]\n", argv[0]); sscanf(argv[1], "%lf,%lf", &com1.re, &com1.im); /* Error is HERE */ sscanf(argv[2], "%lf,%lf", &com2.re, &com2.im); sscanf(argv[3], "%lf,%lf", &expRe1, &expIm2); sscanf(argv[4], "%lf,%lf", &expSqRe1, &expSqIm2); test_div(com1, com2, &actRe1, &actIm2); test_sqrt(com1,com2, &actSqRe1, &actSqIm2); if((fabs(actRe1 - expRe1) < 0.00001) || (fabs(actIm2 - expIm2) < 0.00001)){ printf("Division failed...\n"); return 1; } if((fabs(actSqRe1 - expSqRe1) < 0.00001) || (fabs(actSqIm2 - expSqIm2) < 0.00001)){ printf("Square roots failed...\n"); return 1; } return 0; }

    Read the article

  • C: FIFO between threads, writing and reading strings

    - by Yonatan
    Hello once more, dear internet. I'm writing a small program that, among other things, writes a log file of commands received. To do that, I want to use a thread whose only job is to attempt to read from a pipe, while the main thread will write into that pipe whenever it should. Since I don't know the length of each string command, I thought about writing and reading the pointer to the char buf[MAX_MESSAGE_LEN]. Since what I've tried so far doesn't work, I'll post my best effort :P char str[] = "hello log thread 123456789 10 11 12 13 14 15 16 17 18 19\n"; if (pipe(pipe_fd) != 0) return -1; pthread_t log_thread; pthread_create(&log_thread,NULL, log_thread_start, argv[2]); success_write = 0; do { write(pipe_fd[1],(void*)&str,sizeof(char*)); } while (success_write < sizeof(char*)); and the thread does this: char buffer[MAX_MSGLEN]; int success_read; success_read = 0; //while(1) { do { success_read += read(pipe_fd[0],(void*)&buffer, sizeof(char*)); } while (success_read < sizeof(char*)); //} printf("%s",buffer); (Sorry if this doesn't indent, I can't seem to figure out this editor...) Oh, and pipe_fd[2] is a global variable. So, any help with this, either by the way I thought of, or another way I could read strings without knowing the length, would be much appreciated. On a side note, I'm working in Eclipse IDE for C/C++, version 1.2.1, and I can't seem to set up the compiler so it will link the pthread library to my project. I've resorted to writing my own Makefile to make it (pun intended :P) work. Does anyone know what to do? I've looked online, but all I find are solutions that are probably good for an older version, because the tabs and option keys are different. Anyway, thanks a bunch, internet! Yonatan
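
    Not the poster's code, but one common pattern for this, sketched in C++ for brevity (the same idea works with pthreads in plain C): length-prefix each message, i.e. write a fixed-size length first and then the bytes, and have the reader loop until it has read exactly that many bytes. That avoids passing pointers to stack buffers through the pipe and handles commands of any length.

        #include <cstdint>
        #include <cstdio>
        #include <cstring>
        #include <string>
        #include <thread>
        #include <unistd.h>

        static int pipe_fd[2];

        // read exactly len bytes, looping over short reads; false on EOF/error
        static bool read_all(int fd, void* buf, size_t len) {
            char* p = static_cast<char*>(buf);
            while (len > 0) {
                ssize_t n = read(fd, p, len);
                if (n <= 0) return false;
                p += n;
                len -= static_cast<size_t>(n);
            }
            return true;
        }

        static void log_thread() {
            uint32_t len;
            while (read_all(pipe_fd[0], &len, sizeof len)) {        // length first...
                std::string msg(len, '\0');
                if (!read_all(pipe_fd[0], &msg[0], len)) break;     // ...then the payload
                std::fputs(msg.c_str(), stdout);                    // "log" the command
            }
        }

        int main() {
            if (pipe(pipe_fd) != 0) return 1;
            std::thread logger(log_thread);

            const char* cmd = "hello log thread 123456789\n";
            uint32_t len = static_cast<uint32_t>(std::strlen(cmd));
            bool ok = write(pipe_fd[1], &len, sizeof len) == (ssize_t)sizeof len
                   && write(pipe_fd[1], cmd, len) == (ssize_t)len;

            close(pipe_fd[1]);   // EOF on the read end lets the logger thread exit
            logger.join();
            close(pipe_fd[0]);
            return ok ? 0 : 1;
        }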

    Read the article

< Previous Page | 332 333 334 335 336 337 338 339 340 341 342 343  | Next Page >