Search Results

Search found 11190 results on 448 pages for 'bug report'.


  • Need a piece of advice about e-mail automation in an MS Exchange + MS Office environment

    - by be here now
    Hi, guys. I need your help in the following simple situation. I've got an MS Exchange server and some client computers running on XP with Office 2003 installed. And I've got a process I need to automate. Twice a day a known list of people sends an e-mail to a certain mailbox (let's call it the manager's mailbox) - basically, an accomplishment report. After receiving letters from all of these people, the mailbox owner sends an e-mail to another mailbox, meaning that a certain process is done. What I need to do is replace this manager's mailbox with a depersonalized mailbox that will accumulate all the reports and automatically send a message after collecting all of them. I am definitely not in an "oh my God, what should I do?" situation, and currently my imagination shows me a couple of ways to solve this problem, which I'm going to try, and I'm not asking for a ready-made solution. But since I'm not experienced in Office/VBA development, I'd like to ask the opinion of a pro in this area. Can you point me in the right direction, from a best-practices point of view?

    Read the article

  • Do the Python libraries have a natural dependence on the global namespace?

    - by msw
    I first ran into this when trying to determine the relative performance of two generators: t = timeit.repeat('g.get()', setup='g = my_generator()') So I dug into the timeit module and found that the setup and statement are evaluated with their own private, initially empty namespaces so naturally the binding of g never becomes accessible to the g.get() statement. The obvious solution is to wrap them into a class, thus adding to the global namespace. I bumped into this again when attempting, in another project, to use the multiprocessing module to divide a task among workers. I even bundled everything nicely into a class but unfortunately the call pool.apply_async(runmc, arg) fails with a PicklingError because buried inside the work object that runmc instantiates is (effectively) an assignment: self.predicate = lambda x, y: x > y so the whole object can't be (understandably) pickled and whereas: def foo(x, y): return x > y pickle.dumps(foo) is fine, the sequence bar = lambda x, y: x > y yields True from callable(bar) and from type(bar), but it Can't pickle <function <lambda> at 0xb759b764>: it's not found as __main__.<lambda>. I've given only code fragments because I can easily fix these cases by merely pulling them out into module or object level defs. The bug here appears to be in my understanding of the semantics of namespace use in general. If the nature of the language requires that I create more def statements I'll happily do so; I fear that I'm missing an essential concept though. Why is there such a strong reliance on the global namespace? Or, what am I failing to understand? Namespaces are one honking great idea -- let's do more of those!

    Read the article

  • How do I prevent qFatal() from aborting the application?

    - by Dave
    My Qt application uses Q_ASSERT_X, which calls qFatal(), which (by default) aborts the application. That's great for the application, but I'd like to suppress that behavior when unit testing the application. (I'm using the Google Test Framework.) I have my unit tests in a separate project, statically linking to the class I'm testing. The documentation for qFatal() reads:

        Calls the message handler with the fatal message msg. If no message handler has been
        installed, the message is printed to stderr. Under Windows, the message is sent to the
        debugger. If you are using the default message handler this function will abort on Unix
        systems to create a core dump. On Windows, for debug builds, this function will report a
        _CRT_ERROR enabling you to connect a debugger to the application. ... To suppress the
        output at runtime, install your own message handler with qInstallMsgHandler().

    So here's my main.cpp file:

        #include <gtest/gtest.h>
        #include <QApplication>

        void testMessageOutput(QtMsgType type, const char *msg)
        {
            switch (type) {
            case QtDebugMsg:
                fprintf(stderr, "Debug: %s\n", msg);
                break;
            case QtWarningMsg:
                fprintf(stderr, "Warning: %s\n", msg);
                break;
            case QtCriticalMsg:
                fprintf(stderr, "Critical: %s\n", msg);
                break;
            case QtFatalMsg:
                fprintf(stderr, "My Fatal: %s\n", msg);
                break;
            }
        }

        int main(int argc, char **argv)
        {
            qInstallMsgHandler(testMessageOutput);
            testing::InitGoogleTest(&argc, argv);
            return RUN_ALL_TESTS();
        }

    But my application is still stopping at the assert. I can tell that my custom handler is being called, because the output when running my tests is:

        My Fatal: ASSERT failure in MyClass::doSomething: "doSomething()", file myclass.cpp, line 21
        The program has unexpectedly finished.

    What can I do so that my tests keep running even when an assert fails?

    Read the article

  • Clearing Session in Global Application_Error

    - by Zarigani
    Whenever an unhandled exception occurs on our site, I want to:

    - Send a notification email
    - Clear the user's session
    - Send the user to an error page ("Sorry, a problem occurred...")

    The first and last I've had working for a long time, but the second is causing me some issues. My Global.asax.vb includes:

        Sub Application_Error(ByVal sender As Object, ByVal e As EventArgs)
            ' Send exception report
            Dim ex As System.Exception = Nothing
            If HttpContext.Current IsNot Nothing AndAlso HttpContext.Current.Server IsNot Nothing Then
                ex = HttpContext.Current.Server.GetLastError
            End If
            Dim eh As New ErrorHandling(ex)
            eh.SendError()

            ' Clear session
            If HttpContext.Current IsNot Nothing AndAlso HttpContext.Current.Session IsNot Nothing Then
                HttpContext.Current.Session.Clear()
            End If

            ' User will now be sent to the 500 error page (by the CustomError setting in web.config)
        End Sub

    When I run a debug, I can see the session being cleared, but then on the next page the session is back again! I eventually found a reference that suggests that changes to the session will not be saved unless Server.ClearError is called. Unfortunately, if I add this (just below the line that sets "ex"), then the CustomErrors redirect doesn't seem to kick in and I'm left with a blank page. Is there a way around this?
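
    One commonly suggested workaround (sketched below in C# rather than VB, and assuming a hypothetical ~/Error.aspx page) is to call Server.ClearError so the request completes normally - which is what lets the Session.Clear() stick - and then issue the redirect to the error page yourself, since the customErrors section no longer fires once the error has been cleared:

        using System;
        using System.Web;

        public class Global : HttpApplication
        {
            protected void Application_Error(object sender, EventArgs e)
            {
                Exception ex = Server.GetLastError();
                // ... send the notification e-mail for "ex" here ...

                if (Context != null && Context.Session != null)
                    Context.Session.Clear();

                Server.ClearError();                      // let the request end normally so the session change is saved
                Response.Redirect("~/Error.aspx", false); // manual redirect replaces the customErrors redirect
            }
        }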

    Read the article

  • RegQueryValueEx not working with a Release version but working fine with Debug

    - by Nux
    Hi. I'm trying to read some ODBC details from the registry, and for that I use RegQueryValueEx. The problem is that when I compile the release version it simply cannot read any registry values. The code is:

        CString odbcFuns::getOpenedKeyRegValue(HKEY hKey, CString valName)
        {
            CString retStr;
            char *strTmp = (char*)malloc(MAX_DSN_STR_LENGTH * sizeof(char));
            memset(strTmp, 0, MAX_DSN_STR_LENGTH);
            DWORD cbData;
            long rret = RegQueryValueEx(hKey, valName, NULL, NULL, (LPBYTE)strTmp, &cbData);
            if (rret != ERROR_SUCCESS) {
                free(strTmp);
                return CString("?");
            }
            strTmp[cbData] = '\0';
            retStr.Format(_T("%s"), strTmp);
            free(strTmp);
            return retStr;
        }

    I've found a workaround for this - I disabled optimization (/Od) - but it seems strange that I needed to do that. Is there some other way? I use Visual Studio 2005. Maybe it's a bug in VS? Almost forgot - the error code is 2 (as if the key wasn't found).

    Read the article

  • Serialization of Queue type not working

    - by Soham
    Consider this piece of code:

        private Queue Date = new Queue();
        //other declarations

        public DateTime _Date
        {
            get { return (DateTime)Date.Peek(); }
            set { Date.Enqueue(value); }
        }

        //other properties and stuff....

        public void UpdatePosition(...)
        {
            //other code
            IFormatter formatter = new BinaryFormatter();
            Stream Datestream = new MemoryStream();
            formatter.Serialize(Datestream, Date);
            byte[] Datebin = new byte[2048];
            Datestream.Read(Datebin, 0, 2048);
            //Debug-Bug
            Console.WriteLine(Convert.ToString(this._Date));
            Console.WriteLine(BitConverter.ToString(Datebin, 0, 3));
            //other code
        }

    The output of the first WriteLine is perfect; it is just there to check whether the Queue is really initialised, and it is - the right values are stored, etc. (I inserted a value into that Queue; that part of the code is not shown.) But the second WriteLine is not giving the expected answer: it serializes the entire Queue to 00-00-00.
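
    The all-zero output here is almost certainly the stream position: BinaryFormatter.Serialize leaves the MemoryStream positioned at its end, so the following Read copies zero bytes and the pre-zeroed 2048-byte buffer is printed untouched. A minimal, self-contained sketch of the corrected flow (not the poster's class; names invented):

        using System;
        using System.Collections;
        using System.IO;
        using System.Runtime.Serialization.Formatters.Binary;

        class QueueSerializationSketch
        {
            static void Main()
            {
                Queue dates = new Queue();
                dates.Enqueue(DateTime.Now);

                BinaryFormatter formatter = new BinaryFormatter();
                using (MemoryStream stream = new MemoryStream())
                {
                    formatter.Serialize(stream, dates);

                    stream.Position = 0;               // rewind: Serialize left the cursor at the end
                    byte[] payload = stream.ToArray(); // or call Read after the rewind

                    // prints real payload bytes instead of an untouched, all-zero buffer
                    Console.WriteLine(BitConverter.ToString(payload, 0, 8));
                }
            }
        }

    Using MemoryStream.ToArray() also avoids having to guess a fixed 2048-byte buffer size.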

    Read the article

  • Grouping Categorized Data in WPF

    - by VoidDweller
    Here is what I am trying to do.

    Dynamic Category:
    - Columns can be 0 or more.
    - Must contain 1 or more Type Columns.
    - Will only be displayed if any row contains Type Column data associated with it.

    Data Rows:
    - Will be added asynchronously.
    - Will be grouped by a Common Category column.
    - Will add a Dynamic Category if it does not yet exist.
    - Will add a Type Column if it does not yet exist within its appropriate Dynamic Category.

    Platform info: WPF, .NET 3.5 SP1, C#, MVVM.

    I have a few partially functional prototypes, but each has its own major set of problems. Can any of you give me some guidance on this? Envision this nicely styled. :-)

        --------------------------------------------------------------------------
        |[ Common Category ]|[ Dynamic Category 0 ]|[ Dynamic Category N ]|
        --------------------------------------------------------------------------
        |[Header 1]|[Header 2]|[ Type 0 ]|[ Type N ]|[ Type 0 ]|[ Type N ]|
        --------------------------------------------------------------------------
        |[Data 2 Group]                                                          |
        --------------------------------------------------------------------------
        | Data A | Data 2 || Null   | Data 1 || Data 0 | Data 1 ||
        | Data B | Data 2 || Data 0 | Null   || Data 0 | Data 1 ||
        --------------------------------------------------------------------------
        |[Data 1 Group]                                                          |
        --------------------------------------------------------------------------
        | Data C | Data 1 || Null   | Data 1 || Data 0 | Data 1 ||
        | Data D | Data 1 || Null   | Null   || Data 0 | Null   ||
        --------------------------------------------------------------------------

    Edit: Sorting and paging are not necessary. I have looked at nested ListViews and DataGrids, and at dynamically building a Grid. Dynamically building a Grid and leveraging the SharedSizeGroup property seems the most promising strategy, but I am concerned about performance. Would a better approach be to consider this a dynamic report? If so, what should I be looking at? Thanks for your help.

    Read the article

  • UITableViewRowAnimationBottom doesn't work for last row

    - by GendoIkari
    I've come across a very similar question here: "Inserting row to end of table with UITableViewRowAnimationBottom doesn't animate", though no answers have been given. His code was also a little different from mine. I have an extremely simple example, built from the Navigation application template:

        NSMutableArray *items;

        - (void)viewDidLoad {
            [super viewDidLoad];
            items = [[NSMutableArray array] retain];
            self.navigationItem.rightBarButtonItem = [[[UIBarButtonItem alloc]
                initWithBarButtonSystemItem:UIBarButtonSystemItemAdd
                                     target:self
                                     action:@selector(addItem)] autorelease];
        }

        - (void)addItem {
            [items insertObject:@"new" atIndex:0];
            [self.tableView insertRowsAtIndexPaths:[NSArray arrayWithObject:[NSIndexPath indexPathForRow:0 inSection:0]]
                                  withRowAnimation:UITableViewRowAnimationBottom];
        }

        - (NSInteger)tableView:(UITableView *)tableView numberOfRowsInSection:(NSInteger)section {
            return items.count;
        }

        - (UITableViewCell *)tableView:(UITableView *)tableView cellForRowAtIndexPath:(NSIndexPath *)indexPath {
            static NSString *CellIdentifier = @"Cell";
            UITableViewCell *cell = [tableView dequeueReusableCellWithIdentifier:CellIdentifier];
            if (cell == nil) {
                cell = [[[UITableViewCell alloc] initWithStyle:UITableViewCellStyleDefault reuseIdentifier:CellIdentifier] autorelease];
            }
            cell.textLabel.text = [items objectAtIndex:indexPath.row];
            return cell;
        }

        - (void)tableView:(UITableView *)tableView commitEditingStyle:(UITableViewCellEditingStyle)editingStyle forRowAtIndexPath:(NSIndexPath *)indexPath {
            if (editingStyle == UITableViewCellEditingStyleDelete) {
                [items removeObjectAtIndex:indexPath.row];
                [tableView deleteRowsAtIndexPaths:[NSArray arrayWithObject:indexPath]
                                 withRowAnimation:UITableViewRowAnimationBottom];
            }
        }

    The problem is, when I either insert or delete the very last row in the table, the animation doesn't work at all; the row just appears or disappears instantly. This only happens with UITableViewRowAnimationBottom, but that's the animation that makes the most sense for creating or deleting table cells in this way. Is this a bug in Apple's framework? Or does it do this on purpose? Would it make sense to add an extra cell to the count, and then set up this cell so that it looks like it's not there at all, just to get around this behavior?

    Read the article

  • NHibernate - Retrieving Lots of Data Becomes Exponentially Slow

    - by nfplee
    Hi, I have an issue where, when I retrieve lots of data in NHibernate (such as when producing a report), the page becomes exponentially slower the more data it has to retrieve. I found the following article: http://nhforge.org/blogs/nhibernate/archive/2008/10/30/bulk-data-operations-with-nhibernate-s-stateless-sessions.aspx It explains how doing bulk data operations in NHibernate is slow, since the first-level cache grows too large, and how you should use the IStatelessSession instead. The trouble I have is that I don't wish to tie my application to NHibernate, so I've added a wrapper around ISession. I then use Linq as my query mechanism, but IStatelessSession does not support Linq (it may do in NHibernate 3, but the Linq provider is not stable as it stands at the moment). I then read that you could do a clear after so many iterations to clear out the first-level cache. The problem now is that you can't use lazy loading: the Linq provider doesn't allow you to override the mapping defined (or eagerly fetch the additional data), so whenever I grab data which is lazy loaded after I have cleared the session, an exception is thrown. I'm completely lost on what to do now. I like the ease of producing reports with Linq, but the limitations of the inbuilt Linq provider in NHibernate seem to be holding me back. I'd really appreciate it if someone could show me an alternative approach. Thanks
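
    Short of switching to IStatelessSession, one approach that is sometimes used is to process the data in batches with the criteria API, initializing whatever the report touches while the batch is still attached and then clearing the first-level cache between batches, so it never grows unbounded and no lazy load happens after a clear. A rough sketch under assumed entity names (Invoice/Lines and the integer Id key are invented, not from the question):

        using System.Collections.Generic;
        using NHibernate;
        using NHibernate.Criterion;

        public class Invoice
        {
            public virtual int Id { get; set; }
            public virtual IList<InvoiceLine> Lines { get; set; }
        }

        public class InvoiceLine { public virtual int Id { get; set; } }

        public static class ReportLoader
        {
            // Loads Invoice rows in keyset-paged batches (Id > last seen), touches the lazy
            // collection while the batch is still attached, then clears the session so the
            // first-level cache stays small. Already-initialized data remains usable after Clear().
            public static IEnumerable<Invoice> LoadInvoices(ISession session, int batchSize)
            {
                int lastId = 0;
                while (true)
                {
                    IList<Invoice> batch = session.CreateCriteria(typeof(Invoice))
                        .Add(Restrictions.Gt("Id", lastId))
                        .AddOrder(Order.Asc("Id"))
                        .SetMaxResults(batchSize)
                        .List<Invoice>();

                    if (batch.Count == 0)
                        yield break;

                    foreach (Invoice invoice in batch)
                    {
                        NHibernateUtil.Initialize(invoice.Lines); // force-load before the clear
                        lastId = invoice.Id;
                        yield return invoice;
                    }

                    session.Clear(); // detach the batch; the cache cannot keep growing
                }
            }
        }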

    Read the article

  • Enterprise integration of disparate systems

    - by Chris Latta
    We're about to embark on a fairly large integration effort to kill off a bunch of Access and SQL Server databases and get everything into one coherent enterprise system. There are also a number of other systems (accounting, CRM, payroll, MS Exchange) that hold critical data that we need to integrate (use for data validation in other systems), report on and otherwise expose. It is likely that some of these systems will change in the next few years, so we need to isolate our systems to be ready for change. Ideally we would be able to expose our forms in a consistent manner across as many of our systems as possible without having to re-develop them for each system.

    We are currently targeting SharePoint (2007 and soon 2010), Office (2007 and soon 2010 - Word, Excel, PowerPoint and Outlook), Reporting Services, .NET console applications, .NET Windows applications, shell extensions, with the possibility of exposing some functionality on mobile devices (BlackBerries currently, maybe iPhones later) and via our website. We're moving development to Visual Studio 2010 (from 2005) ahead of migrating to SharePoint 2010 and Office 2010. Given that most of our development is presently targeted to the .NET Framework (mostly in C#), it seems logical to stick with this unless there is some compelling reason to switch frameworks/platforms for some aspects. We're thinking of your standard Database - Data Integration layer - Business Objects layer - Web Services (or REST) layer - Client Application, plus doing our own client application with WPF (or something else?) forms that can also be exposed in the MS systems (SharePoint, Office, Windows). So, we don't want much, just everything :)

    Basically we need to isolate ourselves from database and systems changes, create an API that can be used throughout our systems, and then make this functionality available in our client applications. I'm very keen to get pointers from anyone who has tips on how to pull this off. Should we look at the Enterprise Library as a place to start? Is REST with ASP.NET MVC2 a better solution than Web Services for a system like this? Will WPF deliver forms re-use or is there something better?

    Read the article

  • Binary Search Tree can't delete the root

    - by Ali Zahr
    Everything is working fine in this function, but the problem is that I can't delete the root, and I couldn't figure out what the bug is here. I've traced the "else" part; it works fine until the return, but it returns the old value and I don't know why. Please help!

        node *removeNode(node *Root, int key)
        {
            node *tmp = new node;
            if(key > Root->value)
                Root->right = removeNode(Root->right, key);
            else if(key < Root->value)
                Root->left = removeNode(Root->left, key);
            else if(Root->left != NULL && Root->right != NULL)
            {
                node *minNode = findNode(Root->right);
                Root->value = minNode->value;
                Root->right = removeNode(Root->right, Root->value);
            }
            else
            {
                tmp = Root;
                if(Root->left == NULL)
                    Root = Root->right;
                else if(Root->right == NULL)
                    Root = Root->left;
                delete tmp;
            }
            return Root;
        }

    Read the article

  • How to catch unintentional function interpositioning with GCC?

    - by SiegeX
    Reading through my book Expert C Programming, I came across the chapter on function interpositioning and how it can lead to some serious, hard-to-find bugs if done unintentionally. The example given in the book is the following:

        /* my_source.c */
        mktemp() { ... }

        main() {
            mktemp();
            getwd();
        }

        /* libc */
        mktemp() { ... }

        getwd() { ...; mktemp(); ... }

    According to the book, what happens in main() is that mktemp() (a standard C library function) is interposed by the implementation in my_source.c. Although having main() call my implementation of mktemp() is intended behavior, having getwd() (another C library function) also call my implementation of mktemp() is not. Apparently, this example was a real-life bug that existed in SunOS 4.0.3's version of lpr. The book goes on to explain that the fix was to add the keyword static to the definition of mktemp() in my_source.c, although changing the name altogether should have fixed the problem as well. This chapter leaves me with some unresolved questions that I hope you guys could answer:

    - Should our software group adopt the practice of putting the keyword static in front of all functions that we don't want to be exposed?
    - Does GCC have a way to warn about function interposition? We certainly don't ever intend on this happening and I'd like to know about it if it does.
    - Can interposition happen with functions introduced by static libraries?

    Thanks for the help.

    Read the article

  • Strange Access Denied warning when running the simplest C++ program.

    - by DaveJohnston
    I am just starting to learn C++ (coming from a Java background) and I have come across something that I can't explain. I am working through the C++ Primer book and doing the exercises. Every time I get to a new exercise I create a new .cpp file and set it up with the main method (and any includes I think I will need), e.g.:

        #include <list>
        #include <vector>

        int main(int argc, char **args) {
        }

    and just to make sure, I go to the command prompt and compile and run:

        g++ whatever.cpp
        a.exe

    Normally this works just fine and I start working on the exercise, but I just did it and got a strange error. It compiles fine, but when I run it it says Access Denied and AVG pops up telling me that a threat has been detected: 'Trojan Horse Generic 17.CKZT'. I tried compiling again using the Microsoft compiler (cl.exe) and it runs fine. So I went back, added #include <iostream>, compiled using g++ and ran. This time it worked fine. So can anyone tell me why AVG would report an empty main method as a trojan horse, but not if the iostream header is included?

    Read the article

  • MouseListener fired without checking JCheckBox

    - by Morinar
    This one is pretty crazy: I've got an AppSight recording (for those not familiar, it's a recording of what they did, including keyboard/mouse input, network traffic, etc.) of a customer reproducing a bug. Basically, we've got a series of items listed on the screen with JCheckBox-es down the left side. We've got a MouseListener set for the JPanel that looks something like this:

        private MouseAdapter createMouseListener() {
            return new MouseAdapter() {
                public void mousePressed( MouseEvent e ) {
                    if( e.getComponent() instanceof JCheckBox ) {
                        // Do stuff
                    }
                }
            };
        }

    Based on the recording, it appears very strongly that they click just above one of the checkboxes. After that, it's my belief that this listener fired and the "Do stuff" block happened. However, it did NOT check the box. The user then saw that the box was unchecked, so they clicked on it. This caused the "Do stuff" block to fire again, thus undoing what it had done the first time. This time, the box was checked. Therefore, the user thinks that the box is checked, and it looks like it is, but our client thinks that the box is unchecked as it was clicked twice. Is this possible at all? For the life of me, I can't reproduce it or see how it could be possible, but based on the recording and the data the client sent to the server, I can't see any other logical explanation. Any help, thoughts, and/or ideas would be much appreciated.

    Read the article

  • Providing downloads on ASP.net website

    - by Dave
    I need to provide downloads of large files (upwards of 2 GB) on an ASP.NET website. It has been some time since I've done something like this (I've been in the thick-client world for a while now), and I was wondering about current best practices for this. Ideally, I would like:

    - To be able to track download statistics: # of downloads is essential; actual bytes sent would be nice.
    - To provide downloads in a way that "plays nice" with third-party download managers. Many of our users have unreliable internet connections, and being able to resume a download is a must.
    - To allow multiple users to download the same file simultaneously.

    My download files are not security-sensitive, so providing a direct link ("right-click to download...") is a possibility. Is just providing a direct link sufficient, letting IIS handle it, and then using some log analyzer service (any recommendations?) to compile and report the statistics? Or do I need to intercept the download request, store some info in a database, then send a custom Response? Or is there an ASP.NET user control (built-in or third party) that does this? I appreciate all suggestions.
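
    For the statistics-tracking side, one simple option is a custom handler that records the hit and then streams the file from disk. A sketch only (the file location, handler name and DownloadLog store are all placeholders); note that Response.TransmitFile on its own does not honour HTTP Range requests, so resumable downloads would still favour direct links served by IIS or extra range handling:

        using System.IO;
        using System.Web;

        // Placeholder statistics store; in practice this would write to a database or log.
        public static class DownloadLog
        {
            public static void RecordDownload(string fileName) { /* hypothetical */ }
        }

        public class DownloadHandler : IHttpHandler
        {
            public bool IsReusable { get { return true; } }

            public void ProcessRequest(HttpContext context)
            {
                // Path.GetFileName strips any directory part, which blocks "..\" traversal attempts.
                string fileName = Path.GetFileName(context.Request.QueryString["file"] ?? "");
                string path = context.Server.MapPath("~/App_Data/Downloads/" + fileName);

                if (fileName.Length == 0 || !File.Exists(path))
                {
                    context.Response.StatusCode = 404;
                    return;
                }

                DownloadLog.RecordDownload(fileName); // count the download before streaming it

                context.Response.ContentType = "application/octet-stream";
                context.Response.AddHeader("Content-Disposition", "attachment; filename=" + fileName);
                context.Response.AddHeader("Content-Length", new FileInfo(path).Length.ToString());
                context.Response.TransmitFile(path); // streams from disk without buffering it all in memory
            }
        }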

    Read the article

  • WPF progress bar slows serial port communications 10x... how could that be possible?

    - by D_Guidi
    I know that this could look like a dumb question, but here's my problem. I have a worker dialog that "hides" a BackgroundWorker: in a worker thread I do my job, I report the progress in a standard way, and then I show the results in my WPF program. The dialog contains a simple animated GIF and a standard WPF progress bar, and when progress is notified I set the Value property. All looks as usual and works well for any kind of job, like web service calls, DB queries, background elaboration and so on. For my job we also use many "couplers", card readers that read data from a smart card; these are managed with native C code that accesses the serial port (so I don't use the .NET SerialPort object). I have some NUnit tests and I read a sample card in 10 seconds, but using my actual program, under the BackgroundWorker and showing my worker dialog, I need 1.30 minutes to do the SAME job. I struggled with the problem for days until I decided to remove the worker dialog, and without the dialog I get the same performance as the tests! So I investigated, and it's not the dialog, not the animated GIF, but the WPF progress bar! Simply the fact that a progress bar is shown (so: no animation, no Value set called, nothing at all) slows serial port communications. Sounds incredible? I've tested this behavior and it's exactly what happens.
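
    This is not a confirmed diagnosis of the coupler slowdown, but a common contributor in this kind of setup is marshalling a progress update to the UI thread far too often, so the serial I/O loop keeps yielding to progress-bar rendering. One usual mitigation is to throttle the ReportProgress calls; a rough sketch (the read loop and method names are placeholders):

        using System.ComponentModel;

        public static class ProgressThrottling
        {
            // Hypothetical card-read loop: reports progress only when the percentage changes,
            // so the worker marshals to the UI thread at most ~100 times in total instead of
            // once per byte read from the serial port.
            public static void ReadCard(BackgroundWorker worker, int totalBytes)
            {
                int lastPercent = -1;
                for (int i = 0; i < totalBytes; i++)
                {
                    // ... read the next byte from the coupler here ...

                    int percent = (i * 100) / totalBytes;
                    if (percent != lastPercent)
                    {
                        worker.ReportProgress(percent); // requires WorkerReportsProgress = true
                        lastPercent = percent;
                    }
                }
            }
        }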

    Read the article

  • Update text on CCLabelTTF ends in bad access?

    - by TheDeveloper
    I'm doing a little game in Cocos2D and I have a countdown clock. Note: as I am just trying to fix a bug, I am not working on cleanup so the timer can stop, etc. Here is the code I'm using to set up the label and start the timer:

        timer = [CCLabelTTF labelWithString:@"10.0000" fontName:@"Helvetica" fontSize:20];
        timerDisplay = timer;
        timerDisplay.position = ccp(277,310);
        [self addChild:timerDisplay];
        timeLeft = 10;
        timerObject = [NSTimer scheduledTimerWithTimeInterval:0.1 target:self selector:@selector(updateTimer) userInfo:nil repeats:YES];

    Note: timeLeft is a double. This is updateTimer's code:

        -(void)updateTimer {
            NSLog(@"Got Called!");
            timeLeft = timeLeft - 0.1;
            [timer setString:[NSString stringWithFormat:@"%f", timeLeft]];
            timerDisplay = timer;
            timerDisplay.position = ccp(277,310);
            [self removeChild:timerDisplay cleanup:YES];
            //[self addChild:timerDisplay];
            if (timeLeft <= 0) {
                [timerObject invalidate];
            }
        }

    When I run this, I toggle between crashing on this line:

        [timer setString:[NSString stringWithFormat:@"%f", timeLeft]];

    where the green arrow gives Thread 1: EXEC_BAD_ACCESS (code=2, address=0x8), and on this:

        0x197a7ff: movl 16(%edi), %esi

    where the green arrow again gives Thread 1: EXEC_BAD_ACCESS (code=2, address=0x8).

    Read the article

  • Is this a design pattern?

    - by Michel
    Hi all, I have to build some financial data reports, and for making the calculation there are a lot of 'if then' situations: if it's a large client, subtract 10%; if its postal code equals '10101', add 10%; if the day is a Saturday, make a difficult calculation; etc. I once read about this kind of example, and what they did was (I hope I remember well) create a class with some base info and make it possible to add all kinds of calculation objects to it. To put what I remembered in pseudo code:

        Basecalc bc = new Basecalc();
        //put the info in the bc so other objects can do their if
        bc.Add(new Largecustomercalc());
        bc.Add(new PostalcodeCalc());
        bc.Add(new WeekdayCalc());

    Then the bc would run the Calc() methods of all of the added Calc objects. As I type this, I think all the Calc objects must be able to see the Basecalc properties to correctly perform their calculation logic. So all the if's are in the different Calc objects and not ALL in the Basecalc. Does this make sense? I was wondering if this is some kind of design pattern?
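
    What is described is essentially the Strategy pattern, with the calculation objects composed into a small rules pipeline (some people describe it as a rules or specification style). A rough C# sketch with invented names and rules:

        using System;
        using System.Collections.Generic;

        public class CalcContext
        {
            public bool IsLargeCustomer { get; set; }
            public string PostalCode { get; set; }
            public DayOfWeek Day { get; set; }
            public decimal Amount { get; set; }
        }

        public interface ICalcRule
        {
            void Apply(CalcContext ctx);
        }

        public class LargeCustomerRule : ICalcRule
        {
            public void Apply(CalcContext ctx)
            {
                if (ctx.IsLargeCustomer) ctx.Amount *= 0.90m;   // subtract 10%
            }
        }

        public class PostalCodeRule : ICalcRule
        {
            public void Apply(CalcContext ctx)
            {
                if (ctx.PostalCode == "10101") ctx.Amount *= 1.10m;   // add 10%
            }
        }

        public class BaseCalc
        {
            private readonly List<ICalcRule> _rules = new List<ICalcRule>();
            public void Add(ICalcRule rule) { _rules.Add(rule); }

            public decimal Calculate(CalcContext ctx)
            {
                foreach (ICalcRule rule in _rules)
                    rule.Apply(ctx);          // each rule sees the shared context
                return ctx.Amount;
            }
        }

    Because every rule receives the same context object, each "if" lives in its own class, and adding a new business rule means adding a class rather than editing the base calculation.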

    Read the article

  • When is a try catch not a try catch?

    - by Dearmash
    I have a fun issue where, during application shutdown, try/catch blocks are seemingly being ignored in the stack. I don't have a working test project (yet, due to deadline, otherwise I'd totally try to repro this), but consider the following code snippet:

        public static string RunAndPossiblyThrow(int index, bool doThrow)
        {
            try
            {
                return Run(index);
            }
            catch(ApplicationException e)
            {
                if(doThrow)
                    throw;
            }
            return "";
        }

        public static string Run(int index)
        {
            if(_store.Contains(index))
                return _store[index];
            throw new ApplicationException("index not found");
        }

        public static string RunAndIgnoreThrow(int index)
        {
            try
            {
                return Run(index);
            }
            catch(ApplicationException e) { }
            return "";
        }

    During runtime this pattern works famously. We get legacy support for code that relies on exceptions for program control (bad), and we get to move forward and slowly remove exceptions used for program control. However, when shutting down our UI, we see an exception thrown from "Run" even though "doThrow" is false for ALL current uses of "RunAndPossiblyThrow". I've even gone so far as to verify this by modifying the code to look like "RunAndIgnoreThrow", and I still get a crash post UI shutdown. Mr. Eric Lippert, I read your blog daily; I'd sure love to hear it's some known bug and I'm not going crazy.

    EDIT: This is multi-threaded, and I've verified all objects are not modified while being accessed.
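
    One hedged observation, rather than a full explanation of the shutdown crash: a try/catch only sees exceptions thrown on its own call stack, so anything raised on another thread or during finalization at process exit will sail straight past these blocks. Separately, for the stated goal of phasing out exceptions as control flow, the usual replacement is a Try-style lookup; a sketch with a hypothetical store (the question's _store may well be a different collection type):

        using System.Collections.Generic;

        public static class Store
        {
            // Hypothetical backing store, standing in for the question's _store.
            private static readonly Dictionary<int, string> _store = new Dictionary<int, string>();

            // Callers that expect misses use this and never depend on a thrown exception.
            public static bool TryRun(int index, out string value)
            {
                return _store.TryGetValue(index, out value);
            }

            // Drop-in shape for the old RunAndIgnoreThrow callers.
            public static string RunAndIgnoreMiss(int index)
            {
                string value;
                return TryRun(index, out value) ? value : "";
            }
        }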

    Read the article

  • Why do I get this strange output behavior?

    - by WilliamKF
    I have the following program, test.cc:

        #include <iostream>

        unsigned char bogus1[] = {
            // Changing # of periods (0x2e) changes output after periods.
            0x2e, 0x2e, 0x2e, 0x2e
        };

        unsigned int bogus2 = 1816; // Changing this value changes output.

        int main()
        {
            std::clog << bogus1;
        }

    I build it with:

        g++ -g -c -o test.o test.cc; g++ -static-libgcc -o test test.o

    using g++ version 3.4.6. I run it through valgrind and nothing is reported wrong. However, the output has two extra control characters and looks like this: .... That's a control-X and a control-G at the end. If you change the value of bogus2 you get different control characters. If you change the number of periods in the array, the issue goes away or changes. I suspect it is a memory corruption bug in the compiler or iostream package. What is going on here?

    Read the article

  • Has subversion lost some of my revisions in a branch?

    - by BombDefused
    I've been working on my project using a Subversion branch. I've used the branching feature a few times before without issue, until today. I've come to merge back into the trunk, and noticed that not everything from my branch was there. I went back to my project folder, which I've been committing to the branch, and looked at the log messages using TortoiseSVN (the basic command-line log command shows the same). See the attached image. The revision numbers go up incrementally until revision 303 (the last trunk revision was 299); then there are numbers missing. The latest commit, about half an hour ago, was 316, but it doesn't show up in the log for the branch. Trying to commit the files again doesn't do anything. I am the only person committing to this repository at present. The missing revisions do not show up in the log for the trunk project. What's going on here? Is this a bug or am I doing something wrong? Update: the revisions do show in the repo browser (thanks Antonio Perez), but I don't understand why they are not being included in the merge.

    Read the article

  • PL/SQL - Two statements with begin and end run fine separately but not together?

    - by Twiss
    Hi all, just wondering if anyone can help with this. I have two PL/SQL statements for altering tables (adding extra fields), and they are as follows:

        -- Make GC_NAB field for Next Action By Dropdown
        begin
          if 'VARCHAR2' = 'NUMBER' and length('VARCHAR2')>0 and length('')>0 then
            execute immediate 'alter table "SERVICEMAIL6"."ETD_GUESTCARE" add(GC_NAB VARCHAR2(10, ))';
          elsif ('VARCHAR2' = 'NUMBER' and length('VARCHAR2')>0 and length('')=0) or 'VARCHAR2' = 'VARCHAR2' then
            execute immediate 'alter table "SERVICEMAIL6"."ETD_GUESTCARE" add(GC_NAB VARCHAR2(10))';
          else
            execute immediate 'alter table "SERVICEMAIL6"."ETD_GUESTCARE" add(GC_NAB VARCHAR2)';
          end if;
          commit;
        end;

        -- Make GC_NABID field for Next Action By Dropdown
        begin
          if 'NUMBER' = 'NUMBER' and length('NUMBER')>0 and length('')>0 then
            execute immediate 'alter table "SERVICEMAIL6"."ETD_GUESTCARE" add(GC_NABID NUMBER(, ))';
          elsif ('NUMBER' = 'NUMBER' and length('NUMBER')>0 and length('')=0) or 'NUMBER' = 'VARCHAR2' then
            execute immediate 'alter table "SERVICEMAIL6"."ETD_GUESTCARE" add(GC_NABID NUMBER())';
          else
            execute immediate 'alter table "SERVICEMAIL6"."ETD_GUESTCARE" add(GC_NABID NUMBER)';
          end if;
          commit;
        end;

    When I run these two queries separately, no problems. However, when they are run together as shown above, Oracle gives me an error when it starts the second statement:

        Error report:
        ORA-06550: line 15, column 1:
        PLS-00103: Encountered the symbol "BEGIN"
        06550. 00000 - "line %s, column %s:\n%s"
        *Cause:    Usually a PL/SQL compilation error.
        *Action:

    I'm assuming that this means the first statement is not terminated properly... is there anything I should put in between the statements to make it work properly? Thanks in advance everyone!

    Read the article

  • Lightcore IoC is returning the same instance when it should give a new one

    - by Anthony
    I have the following code using the lightcore IoC container. But it fails with "NUnit.Framework.AssertionException: Contained objects are equal", which indicates that the objects that should be transient are not. Is this a bug in lightcore, or am I doing it wrong?

        [Test]
        public void JellybeanDispenserHasNewInstanceEachTimeWithDefault()
        {
            var builder = new ContainerBuilder();
            builder.Register<IJellybeanDispenser, VanillaJellybeanDispenser>();
            builder.Register<SweetVendingMachine>().ControlledBy<TransientLifecycle>();
            builder.Register<SweetShop>();
            builder.DefaultControlledBy<TransientLifecycle>();

            IContainer container = builder.Build();

            SweetShop sweetShop = container.Resolve<SweetShop>();
            SweetShop sweetShop2 = container.Resolve<SweetShop>();

            Assert.IsFalse(ReferenceEquals(sweetShop, sweetShop2), "Root objects are equal");
            Assert.IsFalse(ReferenceEquals(sweetShop.SweetVendingMachine, sweetShop2.SweetVendingMachine), "Contained objects are equal");
            Assert.IsFalse(ReferenceEquals(sweetShop.SweetVendingMachine.JellybeanDispenser, sweetShop2.SweetVendingMachine.JellybeanDispenser), "services are equal");
        }

    PS: I would tag this question with "lightcore", but suddenly my reputation isn't good enough to make a new tag. Huh.

    Read the article

  • How should I deal with floating-point numbers that can get so small that they become zero?

    - by Tristan Havelick
    So I just fixed an interesting bug in the following code, but I'm not sure the approach I took is the best:

        p = 1
        probabilities = [ ... ]  # a (possibly) long list of numbers between 0 and 1
        for wp in probabilities:
            if (wp > 0):
                p *= wp
        # Take the natural log; this crashes when 'probabilities' is long enough that p ends up
        # being zero
        try: result = math.log(p)

    Because the result doesn't need to be exact, I solved this by simply keeping the smallest non-zero value, and using that if p ever becomes 0:

        p = 1
        probabilities = [ ... ]  # a long list of numbers between 0 and 1
        for wp in probabilities:
            if (wp > 0):
                old_p = p
                p *= wp
                if p == 0:
                    # we've gotten so small it's just 0, so go back to the smallest
                    # non-zero value we had
                    p = old_p
                    break
        # Take the natural log; this crashes when 'probabilities' is long enough that p ends up
        # being zero
        try: result = math.log(p)

    This works, but it seems a bit kludgy to me. I don't do a ton of this kind of numerical programming, and I'm not sure if this is the kind of fix people use, or if there is something better I can go for.
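
    The standard remedy for this kind of underflow is to avoid forming the product at all and work in log space, since log(p1*p2*...*pn) = log(p1) + log(p2) + ... + log(pn) and the sum stays comfortably within double range. A rough sketch of the idea (written in C#; it maps one-to-one onto Python's math.log inside the same loop):

        using System;
        using System.Collections.Generic;

        public static class LogSpace
        {
            // Sums log(wp) instead of multiplying the wp values, so the intermediate result
            // can never underflow to zero the way the raw product does.
            public static double LogProbability(IEnumerable<double> probabilities)
            {
                double result = 0.0;
                foreach (double wp in probabilities)
                {
                    if (wp > 0)
                        result += Math.Log(wp);
                }
                return result;
            }
        }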

    Read the article

  • How Best to Replace Ugly Queries and Dynamic PL/SQL with C#?

    - by Mike
    Hi, I write a lot of one-off Oracle SQL queries (in Toad), and sometimes they can get complex, involving lots of unions, joins, and subqueries, and sometimes requiring dynamic SQL. That is, sometimes SQL queries require set based processing along with significant procedural processing. This is what PL/SQL is custom made for, but as a language it does not begin to compare to C#. Now and then I convert a PL/SQL procedure to C#, and am always amazed at how much cleaner and easier to both read and write the C# version is. The C# program might for example construct a SQL query string piece by piece and/or run several queries and process them as needed. The C# version is usually much faster as well, which must mean that I'm not very good at PL/SQL either. I do not currently have access to LINQ. My question is, how best to package all these little C# programs, which are really just mini reports, that is, replacements for ugly SQL queries? Right now I'm actually using NUnit to hold them, and calling each report a [Test], even though they aren't really tests. NUnit just happens to provide a convenient packaging framework.
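
    One lightweight alternative to hosting these one-off reports in NUnit is a tiny console dispatcher keyed by report name: each report stays a small C# method and gains a command-line entry point for free. A sketch with made-up report names (the bodies would hold the query and formatting code):

        using System;
        using System.Collections.Generic;

        public static class ReportRunner
        {
            // Each "report" is just a named delegate; adding one is a single dictionary entry.
            private static readonly Dictionary<string, Action> Reports =
                new Dictionary<string, Action>(StringComparer.OrdinalIgnoreCase)
                {
                    { "late-orders",  () => Console.WriteLine("query + formatting goes here") },
                    { "sales-by-rep", () => Console.WriteLine("another ad-hoc report") },
                };

            public static int Main(string[] args)
            {
                if (args.Length == 1 && Reports.ContainsKey(args[0]))
                {
                    Reports[args[0]]();
                    return 0;
                }

                Console.WriteLine("Usage: ReportRunner <report-name>");
                foreach (string name in Reports.Keys)
                    Console.WriteLine("  " + name);
                return 1;
            }
        }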

    Read the article
