Search Results

Search found 11190 results on 448 pages for 'bug report'.

Page 380/448 | < Previous Page | 376 377 378 379 380 381 382 383 384 385 386 387  | Next Page >

  • SQL Server 2008: Comparing similar records - Need to still display an ID for a record when the JOIN has no matches

    - by aleppke
    I'm writing a SQL Server 2008 report that will compare genetic test results for animals. A genetic test consists of an animalId, a gene and a result. Not all animals will have the same genes tested, but I need to be able to display the results side-by-side for a given set of animals and only include the genes that are present for at least one of the selected animals. My TestResult table has the following data in it:

        animalId  gene  result
        1         a     CC
        1         b     CT
        1         d     TT
        2         a     CT
        2         b     CT
        2         c     TT
        3         a     CT
        3         b     TT
        3         c     CC
        3         d     CC
        3         e     TT

    I need to generate a result set that looks like the following. Note that Animal 3 is not being displayed (the user doesn't want to see its results) and neither are results for Gene "e", since neither Animal 1 nor Animal 2 has a result for that gene:

        SireID  SireResult  CalfID  CalfResult  Gene
        1       CC          2       CT          a
        1       CT          2       CT          b
        1       NULL        2       TT          c
        1       TT          2       NULL        d

    But I can only manage to get this:

        SireID  SireResult  CalfID  CalfResult  Gene
        1       CC          2       CT          a
        1       CT          2       CT          b
        NULL    NULL        2       TT          c
        1       TT          NULL    NULL        d

    This is the query I'm using:

        SELECT sire.animalId AS 'SireID'
              ,sire.result   AS 'SireResult'
              ,calf.animalId AS 'CalfID'
              ,calf.result   AS 'CalfResult'
              ,sire.gene     AS 'Gene'
        FROM (SELECT s.animalId
                    ,s.result
                    ,m1.gene
              FROM (SELECT animalId
                          ,result
                          ,gene
                    FROM TestResult
                    WHERE animalId IN (1)) s
              FULL JOIN (SELECT DISTINCT gene
                         FROM TestResult
                         WHERE animalId IN (1, 2)) m1
                ON s.gene = m1.gene) sire
        FULL JOIN (SELECT c.animalId
                         ,c.result
                         ,m2.gene
                   FROM (SELECT animalId
                               ,result
                               ,gene
                         FROM TestResult
                         WHERE animalId IN (2)) c
                   FULL JOIN (SELECT DISTINCT gene
                              FROM TestResult
                              WHERE animalId IN (1, 2)) m2
                     ON c.gene = m2.gene) calf
          ON sire.gene = calf.gene

    How do I get the SireIDs and CalfIDs to display their values when they don't have a record associated with a particular Gene? I was thinking of using COALESCE but I can't figure out how to specify the correct animalId to pass in. Any help would be appreciated.

  • Objects in JavaScript defined and undefined at the same time (in a Firefox extension)

    - by Alexey Romanov
    I am chasing down a bug in a Firefox extension. I've finally managed to see it for myself (I've only had reports before) and I can't understand how what I saw is possible. One error message from my extension in the Error Console is "gBrowser is not defined". This by itself would be surprising enough, since the overlay is over browser.xul and navigator.xul, and I expect gBrowser to be available from both. Even worse is the actual place where it happens: line 101 of nextplease.js. That is, inside the function isTopLevelDocument, which is only called from onContentLoaded, which is only called from onLoad here:

        gBrowser.addEventListener(this.loadType, function (event) {
            nextplease.loadListener.onContentLoaded(event);
        }, true);

    So gBrowser is defined in onLoad, but somehow undefined in isTopLevelDocument. When I tried to actually use the extension, I got another error: "nextplease is not defined". The interesting thing is that it happened on lines 853 and 857. That is, inside the functions

        nextplease.getNextLink = function () {
            nextplease.getLink(window.content, nextplease.NextPhrasesMap,
                nextplease.NextImagesMap, nextplease.isNextRegExp,
                nextplease.NEXT_SEARCH_TYPE);
        }

        nextplease.getPrevLink = function () {
            nextplease.getLink(window.content, nextplease.PrevPhrasesMap,
                nextplease.PrevImagesMap, nextplease.isPrevRegExp,
                nextplease.PREV_SEARCH_TYPE);
        }

    So nextplease is somehow defined enough to call these functions, but isn't defined inside them. Finally, executing typeof(nextplease) in Execute JS returns "object". Same for gBrowser. How can this happen? Any ideas?

  • Bind SQLiteDataReader to GridView in ASP.NET

    - by Charles Gargent
    Hi, this is all rather new to me, but I have searched for a good while and can't find any idea why I can't get this to work. dr looks like it is populated, but I get this NullReferenceException when I try to bind to the GridView. Thanks.

    Code:

        SQLiteConnection cnn = new SQLiteConnection(@"Data Source=c:\log.db");
        cnn.Open();
        SQLiteCommand cmd = new SQLiteCommand(@"SELECT * FROM evtlog", cnn);
        SQLiteDataReader dr = cmd.ExecuteReader();
        GridView1.DataSource = dr;
        GridView1.DataBind();
        dr.Close();
        cnn.Close();

    Markup:

        <asp:ContentPlaceHolder ID="MainContent" runat="server">
            <asp:GridView ID="GridView1" runat="server">
            </asp:GridView>
        </asp:ContentPlaceHolder>

    Error:

        Object reference not set to an instance of an object.
           at WPKG_Report.SiteMaster.Button1_Click(Object sender, EventArgs e) in C:\Projects\Report\Site.Master.cs:line 32
           at System.Web.UI.WebControls.Button.OnClick(EventArgs e)
           at System.Web.UI.WebControls.Button.RaisePostBackEvent(String eventArgument)
           at System.Web.UI.WebControls.Button.System.Web.UI.IPostBackEventHandler.RaisePostBackEvent(String eventArgument)
           at System.Web.UI.Page.RaisePostBackEvent(IPostBackEventHandler sourceControl, String eventArgument)
           at System.Web.UI.Page.RaisePostBackEvent(NameValueCollection postData)
           at System.Web.UI.Page.ProcessRequestMain(Boolean includeStagesBeforeAsyncPoint, Boolean includeStagesAfterAsyncPoint)
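
    Two things are worth checking here (a sketch, not a confirmed fix): the stack trace says the click handler lives in Site.Master.cs, while the GridView sits in a page's ContentPlaceHolder, so GridView1 may simply be null from the master page's point of view; and binding a live reader can be avoided altogether by loading it into a DataTable first. Something along these lines, where the page class name is illustrative:

        // Sketch: copy the reader into a DataTable, then bind. Connection
        // string and table name are taken from the question; the class name
        // is made up. Run this in the page that owns GridView1.
        using System;
        using System.Data;
        using System.Data.SQLite;

        public partial class ReportPage : System.Web.UI.Page
        {
            protected void Button1_Click(object sender, EventArgs e)
            {
                using (var cnn = new SQLiteConnection(@"Data Source=c:\log.db"))
                using (var cmd = new SQLiteCommand(@"SELECT * FROM evtlog", cnn))
                {
                    cnn.Open();
                    using (SQLiteDataReader dr = cmd.ExecuteReader())
                    {
                        var dt = new DataTable();
                        dt.Load(dr);               // detach the rows from the open reader
                        GridView1.DataSource = dt; // if this still throws, GridView1
                        GridView1.DataBind();      // itself is null, not the data
                    }
                }
            }
        }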

  • Absolute positioned div jumps outside containing div in IE7

    - by user367232
    Having trouble with a couple of display issues in IE7. Firstly, my large text headers display too far up in Internet Explorer (all pages). Secondly, my descriptions on my Portfolio pages end up outside their containing divs. Works great on FF/Chrome/Opera/Safari though! You'll see what I mean: http://bit.ly/a3hUD4 (I've used bitly so my dumb questions don't show up when someone googles my website). I've googled extensively. Not sure if problem number 2 is an overflow issue or an absolute-positioning bug in IE.

    Here's the CSS for the centre div with the jumbo-text titles:

        .column1 {
            padding: 103px 10px 10px 10px;
            float: left;
            width: 500px;
            margin: 0;
        }

    And for the description div on the portfolio page:

        .porttxtbox {
            text-align: right;
            background-image: url(images/porttxtBG.png);
            bottom: 0;
            position: absolute;
            width: 100%;
            padding: 0px;
            margin: 0px;
        }

    And its container div:

        .portimgbox {
            padding: 0px;
            margin: 0px;
            height: 250px;
            width: 480px;
            position: relative;
            border: 5px solid #EAEAEA;
        }

    Thanks in advance!

  • How to stop ejabberd from using mnesia

    - by Eldad Mor
    I'm trying to establish a procedure for restoring my database from a crashed server to a new server. My server is running ejabberd as an XMPP server, and I configured it to use PostgreSQL instead of Mnesia - or so I thought. My procedure goes something like "dump the contents of the original DB, run the new server, restore the contents of the DBs using psql, then run the system". However, when I try running ejabberd again I get a crash:

        =CRASH REPORT==== 3-Dec-2010::22:05:00 ===
          crasher:
            pid: <0.36.0>
            registered_name: []
            exception exit: {bad_return,{{ejabberd_app,start,[normal,[]]},
                                         {'EXIT',"Error reading Mnesia database"}}}
              in function application_master:init/4

    Here I was thinking that my system is running on PostgreSQL, while it seems I was still using Mnesia. I have several questions: How can I make sure Mnesia is not being used? How can I divert all the ejabberd activities to PGSQL? This is the modules part in my ejabberd.cfg file:

        {modules,
         [
          {mod_adhoc,     []},
          {mod_announce,  [{access, announce}]}, % requires mod_adhoc
          {mod_caps,      []},
          {mod_configure, []},                   % requires mod_adhoc
          {mod_ctlextra,  []},
          {mod_disco,     []},
          {mod_irc,       []},
          {mod_last_odbc, []},
          {mod_muc,       [
                           {access, muc},
                           {access_create, muc},
                           {access_persistent, muc},
                           {access_admin, muc_admin},
                           {max_users, 500}
                          ]},
          {mod_offline_odbc, []},
          {mod_privacy_odbc, []},
          {mod_private_odbc, []},
          {mod_pubsub,    [ % requires mod_caps
                           {access_createnode, pubsub_createnode},
                           {plugins, ["default", "pep"]}
                          ]},
          {mod_register,  [
                           {welcome_message, none},
                           {access, register}
                          ]},
          {mod_roster_odbc, []},
          {mod_stats,     []},
          {mod_time,      []},
          {mod_vcard_odbc, []},
          {mod_version,   []}
         ]}.

    What am I missing? I am assuming the crash is due to the Mnesia DB being used by ejabberd, and since it's out of sync with the PGSQL DB, it cannot operate correctly - but maybe I'm totally off track here, and would love some direction.

    EDIT: One problem solved. Since I'm using the Amazon cloud, I needed to hardcode the ERLANG_NODE so it won't be defined by the hostname (which changes on reboot). This got my ejabberd running, but I still wish to stop using Mnesia, and I wonder what part of ejabberd is still using it and how I can find it.

  • Why did this work with Visual C++, but not with gcc?

    - by Carlos Nunez
    I've been working on a senior project for the last several months now, and a major sticking point in our team's development process has been dealing with rifts between Visual C++ and gcc. (Yes, I know we all should have had the same development environment.) Things are about finished up at this point, but I ran into a moderate bug just today that had me wondering whether Visual C++ is easier on newbies (like me) by design. In one of my headers, there is a function that relies on strtok to chop up a string, do some comparisons and return a string with a similar format. It works a little something like the following:

        int main() {
            string a, b, c;
            // Do stuff with a and b.
            c = get_string(a, b);
        }

        string get_string(string a, string b) {
            const char *a_ch, *b_ch;
            a_ch = strtok(a.c_str(), ",");
            b_ch = strtok(b.c_str(), ",");
        }

    strtok is infamous for being great at tokenizing, but equally great at destroying the original string to be tokenized. Thus, when I compiled this with gcc and tried to do anything with a or b, I got unexpected behavior, since the separator used was completely removed from the string. Here's an example in case I'm unclear: if I set a = "Jim,Bob,Mary" and b = "Grace,Soo,Hyun", they would end up as a = "JimBobMary" and b = "GraceSooHyun" instead of staying the same like I wanted. However, when I compiled this under Visual C++, I got back the original strings and the program executed fine. I tried dynamically allocating memory to the strings and copying them the "standard" way, but the only way that worked was using malloc() and free(), which I hear is discouraged in C++. While I'm curious about that, the real question I have is this: Why did the program work when compiled in VC++, but not with gcc? (This is one of many conflicts that I experienced while trying to make the code cross-platform.) Thanks in advance! -Carlos Nunez

  • How do I find a source code position from an address given by a crash in Windows CE

    - by Shane MacLaughlin
    I have a Windows Mobile 4.0 application, written using EVC++ 4.0 SP4 with MFC, that is exhibiting a random occasional crash in the field, e.g. exception 0x80000002 at 00112584. It does not happen under various emulators and simulators, hence is very difficult to trace using a debugger. The crash throws up an address and exception type. Given that I have the PDB, is there any way to track this address back to the source? I can't recompile using VC++ 8 as it doesn't support the Mobile 4 SDK. My guess is that without a stack trace I'm not going to have much joy, as the chances are that the exception may not be in my source. Worth a try all the same.

    Edit: As suggested, I have looked at the address in the context of the .MAP file for the program. This reveals the following:

        Address          Publics by Value                                                                  Rva+Base   Lib:Object
        0001:00000000    ?GetUnduValue@@YANMM@Z                                                            00011000 f 7Par.obj
        ...
        0001:001124b8    ?OnLButtonUp@CGXGridUserDragSelectRangeImp@@UAAHPAVCGXGridCore@@AAVCPoint@@AAI@Z  001234b8 f gxseldrg.obj
        0001:001126d8    ?OnSelDragStart@CGXGridUserDragSelectRangeImp@@UAAHPAVCGXGridCore@@KK@Z           001236d8 f gxseldrg.obj

    This suggests the error occurred during CGXGridUserDragSelectRangeImp::OnLButtonUp(), which seems a bit odd, as I don't think there was a mouse / keyboard / screen button pressed at the time. Could be the stack got fragged before the crash got reported, and I'm wasting my time. I'll recompile with assembler output to try to isolate it to a given line, but I don't hold out much hope :( Does the fact that the map file reports segmented addresses, e.g. 0001:xxxxxxxx, while the crash reports unsegmented addresses, mean I have to carry out some computation to get the map address from the crash address?
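
    On that last question: the "Rva+Base" column is already a flat (unsegmented) address, so if the module loads at the base the map was linked for, the crash address can be compared against that column directly; a computation only comes in if the OS relocates the module (subtract the actual load base, add the preferred one). Below is a sketch of that lookup with made-up numbers - Windows CE's slot-based addressing may add a further offset, so treat this as the general recipe rather than a CE-verified one:

        // Sketch of the address translation; all numeric values here are
        // illustrative, not taken from the real map file or crash report.
        using System;
        using System.Collections.Generic;
        using System.Linq;

        class MapLookup
        {
            static void Main()
            {
                const uint preferredBase = 0x00010000; // base the .MAP was linked at (assumed)
                const uint actualBase    = 0x2C010000; // where the OS actually loaded the module
                const uint crashAddress  = 0x2C1234C0; // address from the exception message

                // "Rva+Base" column entries, sorted ascending
                var symbols = new SortedList<uint, string>
                {
                    { 0x00011000, "GetUnduValue" },
                    { 0x001234b8, "CGXGridUserDragSelectRangeImp::OnLButtonUp" },
                    { 0x001236d8, "CGXGridUserDragSelectRangeImp::OnSelDragStart" },
                };

                // Normalize for relocation, then take the last symbol at or below it
                uint mapAddress = crashAddress - actualBase + preferredBase;
                var hit = symbols.Last(s => s.Key <= mapAddress);
                Console.WriteLine("0x{0:X8} -> {1}", mapAddress, hit.Value);
            }
        }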

  • Why does 12:20 PM parse to 0:20 on the next day?

    - by Hanno Fietz
    I'm using java.text.SimpleDateFormat to parse string representations of date/time values inside an XML document. I'm seeing all times that have an hour value of 12 shifted by 12 hours into the future, i.e. 20 minutes past noon gets parsed to mean 20 minutes past midnight the following day. I wrote a unit test which seems to confirm that the error is made upon parsing (I checked the return values from getTime() with the Linux shell command date). Now I'm wondering: is there a bug in the parse() method? Is there something wrong with the input string? Am I using the wrong format string for the input? The input data is taken from Yahoo's YWeather service. Here's the test and its output:

        public class YWeatherReaderTest {

            public static final String[] rgDateSamples = {
                "Thu, 08 Apr 2010 12:20 PM CEST",
                "Thu, 08 Apr 2010 12:20 AM CEST" };

            public void dateParsing() throws ParseException {
                DateFormat formatter = new SimpleDateFormat("EEE, dd MMM yyyy K:m a z", Locale.US);
                for (String dtsSrc : YWeatherReaderTest.rgDateSamples) {
                    Date dt = formatter.parse(dtsSrc);
                    String dtsDst = formatter.format(dt);
                    System.out.println(dtsSrc);
                    System.out.println(dtsDst);
                    System.out.println();
                }
            }
        }

        Thu, 08 Apr 2010 12:20 PM CEST
        Fri, 09 Apr 2010 0:20 AM CEST

        Thu, 08 Apr 2010 12:20 AM CEST
        Thu, 08 Apr 2010 0:20 PM CEST

    The second output line of the second iteration is slightly weird, because 00:20 isn't PM. The milliseconds value of the Date object, however, corresponds to the (wrong) time of 20 minutes past noon.

  • Trying to use Rhino, getEngineByName("JavaScript") returns null in OpenJDK 7

    - by Yuval
    When I run the following piece of code, the engine variable is set to null when I'm using OpenJDK 7 (java-7-openjdk-i386).

        import javax.script.ScriptEngine;
        import javax.script.ScriptEngineManager;
        import javax.script.ScriptException;

        public class TestRhino {

            /**
             * @param args
             */
            public static void main(String[] args) {
                // TODO Auto-generated method stub
                ScriptEngineManager factory = new ScriptEngineManager();
                ScriptEngine engine = factory.getEngineByName("JavaScript");
                try {
                    System.out.println(engine.eval("1+1"));
                } catch (ScriptException e) {
                    // TODO Auto-generated catch block
                    e.printStackTrace();
                }
            }
        }

    It runs fine with java-6-openjdk and Oracle's jre1.7.0. Any idea why? I'm using Ubuntu 11.10. All JVMs are installed under /usr/lib/jvm. I noticed OpenJDK 7 has a different directory structure. Perhaps something is not installed right?

        $ locate rhino.jar
        /usr/lib/jvm/java-6-openjdk/jre/lib/rhino.jar
        /usr/lib/jvm/java-7-openjdk-common/jre/lib/rhino.jar
        /usr/lib/jvm/java-7-openjdk-i386/jre/lib/rhino.jar

    Edit: Since ScriptEngineManager uses a ServiceProvider to find the available script engines, I snooped around resources.jar's META-INF/services. I noticed that in OpenJDK 6, resources.jar has a META-INF/services/javax.script.ScriptEngineFactory entry which is missing from OpenJDK 7. Any idea why? I suspect this is a bug? Here is the contents of that entry (from OpenJDK 6):

        #script engines supported
        com.sun.script.javascript.RhinoScriptEngineFactory #javascript

    Another edit: Apparently, according to this thread, the code simply isn't there, perhaps because of merging issues between Sun and Mozilla code. I still don't understand why it was present in OpenJDK 6 and not 7. The class com.sun.script.javascript.RhinoScriptEngineFactory exists in 6's rt.jar but not in 7's. If it was not meant to be included, why is there an OpenJDK 7 rhino.jar then, and why is the source still in the OpenJDK source tree (here)?

  • Standardizing a Release/Tools group on a specific language

    - by grahzny
    I'm part of a six-member build and release team for an embedded software company. We also support a lot of developer tools, such as Atlassian's Fisheye, Jira, etc., Perforce, Bugzilla, AnthillPro, and a couple of homebrew tools (like my Django release notes generator). Most of the time, our team just writes little plugins for larger apps (ex: customize workflows in Anthill), long-term utility scripts (package up a release for QA), or things like Perforce triggers (don't let people check into a specific branch unless their change description includes a bug number; authenticate against Active Directory instead of Perforce's internal passwords). That's about the scale of our problems, although we sometimes tackle something slightly more sizable.

    My boss, who is reasonably technical, has asked us to standardize on one or two languages so we can more easily substitute for each other. He's advocating bash scripts and Perl, due to their universality and simplicity. I can see his point - we mostly do "glue", so why not use "glue" languages rather than saddle ourselves with something designed for much larger projects? Since some of the tools we work with are Java-based, we do need to use something that speaks JVM sometimes. (The path of least resistance for these projects is BeanShell and Groovy.)

    I feel a tremendous itch toward language advocacy, but I'm trying to avoid saying "We should use Python 'cause I like it and Perl is gross." Instead, I'm trying to come up with a good approach to defining our problem set: What problems do we solve with scripts? Would we benefit from a library of common functions by our team, or are most of our projects more isolated? What is it reasonable to expect my co-workers to learn? What languages give us the most ease of development and ease of modification? Can you folks suggest some useful ways to approach this problem, both for my own thinking process and to help me facilitate some brainstorming among my coworkers?

  • simpletest - Why does setReturnValue() seem to change behaviour depending on whether the test is run in isolation?

    - by JW
    I am using SimpleTest version 1.0.1 for a unit test. I create a new mock object within a test method and on it I do:

        $MockDbAdaptor->setReturnValue('query', 1);

    Now, when I run this in a standalone unit test, my tested object is happy to see 1 returned when query() is called on the mock db adaptor. However, when this exact same test is run as part of my 'all_tests' TestSuite, the test is failing. This happens because a call to the mock's query() method does not appear to return any value - thus causing my test subject to complain and trigger an unexpected exception that fails the test. So, the behaviour of setReturnValue() seems to change depending on whether the test is run in isolation or not. I can get it to work in both standalone and TestSuite contexts by using this instead:

        $MockDbAdaptor->setReturnValueAt(0, 'query', 1);

    So my immediate problem can be fixed... but it feels like a hack. I thought, if I create a new mock within a test method, then why is the setReturnValue() behaviour getting affected by the context in which the test class instance is run? It feels like a bug.

  • Using PHP SoapClient with Java JAX-WS RI (Webservice)

    - by Kau-Boy
    For a new project, we want to build a web service in Java using JAX-WS RI, and for the web service client we want to use PHP. In a small tutorial about JAX-WS RI I found this example web service:

        package webservice;

        import javax.jws.WebService;
        import javax.jws.soap.SOAPBinding;
        import javax.jws.soap.SOAPBinding.Style;

        @WebService
        @SOAPBinding(style = Style.RPC)
        public class Calculator {
            public long addValues(int val1, int val2) {
                return val1 + val2;
            }
        }

    and for the web server:

        package webservice;

        import javax.xml.ws.Endpoint;
        import webservice.Calculator;

        public class CalculatorServer {
            public static void main(String args[]) {
                Calculator server = new Calculator();
                Endpoint endpoint = Endpoint.publish("http://localhost:8080/calculator", server);
            }
        }

    Starting the server and browsing the WSDL at the URL "http://localhost:8080/calculator?wsdl" works perfectly. But calling the web service from PHP fails. My very simple PHP call looks like this:

        $client = new SoapClient('http://localhost:8080/calculator?wsdl', array('trace' => 1));
        echo 'Sum: '.$client->addValues(4, 5);

    But then I either get a "Fatal error: Maximum execution time of 60 seconds exceeded..." or an "Uncaught SoapFault exception: [WSDL] SOAP-ERROR: Parsing WSDL: Couldn't load from 'http://localhost:8080/calculator?wsdl' ..." exception. I have tested the PHP SoapClient() with other web services and they work without any issues. Is there a known issue with JAX-WS RI in combination with PHP, or is there an error in my web service I didn't see? I have found this bug report, but even updating to PHP 5.3.2 didn't solve the problem. Can anyone tell me what to do? And by the way, my Java version running on Windows 7 x64 is the following:

        java version "1.6.0_17"
        Java(TM) SE Runtime Environment (build 1.6.0_17-b04)
        Java HotSpot(TM) 64-Bit Server VM (build 14.3-b01, mixed mode)

  • Too many false positives when using FxCop.

    - by mark
    Dear ladies and sirs. We are using FxCop and it generates too many false positives for our liking. For instance, if a private method is invoked using reflection, then this method is reported as potentially unused - understandable, and we suppress this warning explicitly using the SuppressMessage attribute. However, FxCop reports the same warning for the methods invoked from that method, which we have already suppressed warnings about. This is stupid and generates too much noise. There are also false reports on member variables used in these methods. Also, there are problems with generic types (I even saw something about it on MS Connect). Anyway, I am wondering if anyone knows whether Microsoft is going to upgrade FxCop, because it seems to have been stuck at version 1.36 for a long time. BTW, we do not use StyleCop, because it is way too picky and we just do not have the time to examine all the zillion messages in order to suppress them all. Besides, the StyleCop report seems to augment, rather than replace, FxCop. Maybe anyone can suggest a good alternative to FxCop? We are using VS2008 Pro. Thanks.
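
    For reference, the suppression pattern for the reflection case looks like the sketch below (the class, method names and justification text are illustrative). Two details matter with FxCop 1.36: the attribute is compiled away unless the CODE_ANALYSIS symbol is defined, and a suppression covers exactly one target, which is why the callees of a suppressed method get reported all over again:

        // Illustrative only - the rule ID is the standard Microsoft.Performance
        // one; the class and method names are made up.
        #define CODE_ANALYSIS
        using System.Diagnostics.CodeAnalysis;

        public class Importer
        {
            [SuppressMessage("Microsoft.Performance", "CA1811:AvoidUncalledPrivateCode",
                Justification = "Invoked via reflection from the plugin loader.")]
            private void LoadViaReflection()
            {
                Helper(); // CA1811 is reported again here - against Helper,
            }             // because the suppression above covers one target only

            private void Helper() { }
        }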

  • Debug NAudio MP3 reading difference?

    - by Conrad Albrecht
    My code using NAudio to read one particular MP3 gets different results than several other commercial apps. Specifically: my NAudio-based code finds ~1.4 sec of silence at the beginning of this MP3 before "audible audio" (a drum pickup) starts, whereas other apps (Windows Media Player, RealPlayer, WavePad) show ~2.5 sec of silence before that same drum pickup. The particular MP3 is "Like A Rolling Stone" downloaded from Amazon.com. I tested several other MP3s and none show any similar difference between my code and other apps. Most MP3s don't start with such a long silence, so I suspect that's the source of the difference.

    Debugging problems:

    - I can't actually find a way to even prove that the other apps are right and NAudio/me is wrong, i.e. to compare block-by-block my code's results to a "known good reference implementation"; therefore I can't even precisely define the "error" I need to debug.
    - Since my code reads thousands of samples during those 1.4 sec with no obvious errors, I can't think how to narrow down where/when in the input stream to look for a bug.
    - The heart of the NAudio code is a P/Invoke call to acmStreamConvert(), which is a Windows "black box" call which I can't think how to error-check.

    Can anyone think of any tricks/techniques to debug this?
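
    One way to at least pin the "error" down to a number (a sketch, assuming Mp3FileReader's default decode to 16-bit PCM; the path and threshold are illustrative): decode the file yourself and print the offset of the first sample above a threshold, then compare that figure against what the other players report for the same file.

        // Sketch: find the first "audible" sample in an MP3 via NAudio.
        using System;
        using NAudio.Wave;

        class FirstAudible
        {
            static void Main()
            {
                using (var reader = new Mp3FileReader(@"C:\music\track.mp3"))
                {
                    var buffer = new byte[reader.WaveFormat.AverageBytesPerSecond];
                    long bytePos = 0;
                    int read;
                    while ((read = reader.Read(buffer, 0, buffer.Length)) > 0)
                    {
                        for (int i = 0; i + 1 < read; i += 2) // 16-bit PCM samples
                        {
                            short sample = BitConverter.ToInt16(buffer, i);
                            if (Math.Abs((int)sample) > 500)  // crude silence threshold
                            {
                                double seconds = (bytePos + i) /
                                    (double)reader.WaveFormat.AverageBytesPerSecond;
                                Console.WriteLine("First audible audio at {0:F3} s", seconds);
                                return;
                            }
                        }
                        bytePos += read;
                    }
                }
            }
        }

    If the offset this prints disagrees with the players by a constant amount for this one file, that constant becomes something concrete to chase through the decode path instead of an impression of "too early".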

  • Automatic web form testing/filling

    - by Polatrite
    I recently became lead on getting an inordinate amount of testing done in a very short period of time. We have many different web forms, using custom (Telerik) controls, that need to be tested for proper data validation and sensible handling of the data. Some of the forms are several pages long, with 30-80 different controls for data entry. I am looking for a software solution (that is free) that would allow me to automate the process of filling in these forms, by designing a script or using a UI. The other requirement is that I can't use any browsers but IE6 (terrible, I know). I have previously used AutoHotkey to great success for automated Windows form testing, since AutoHotkey's API allows you to directly reference controls on the Windows form. However, AutoHotkey does not have similar support for web forms (everything is just one big "InternetExplorer" control). While I would prefer to script some variance in the data to help serialize each test, it's not necessary, as I could go back through and manually edit a field or two (plus "break" whatever control I'm currently testing) to serialize each test. If you've ever seen Spawner: http://forge.mysql.com/projects/project.php?id=214 - it's almost exactly the sort of thing I'm looking for (Spawner generates dummy SQL data, as opposed to dummy web form data) - but I won't be picky; I've got a really short deadline to meet and had this thrust in my lap just today. ;)

    Edit1: One of the challenges of just using AutoHotkey to simulate keyboard input (tabbing through controls) is that some controls don't currently have a tab index (bug), and some controls cause a page reload after modification, resulting in inconsistent control focus (tabbing screwed up). Our application makes heavy use of page reloads to populate fields (select a location, and it auto-populates a city, for example).
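
    Since the constraint is IE specifically, one free route worth a look (a sketch, not something the post confirms works with these Telerik controls): WatiN drives Internet Explorer through its COM interface and addresses fields by name or id rather than by tab order, which sidesteps both the missing tab indexes and the focus loss on postback. All control names and the URL below are made up.

        // Sketch: filling a web form in IE via WatiN (names are hypothetical).
        // WatiN drives the real browser and waits for postbacks to finish,
        // so re-locating the next field after a reload replaces tabbing to it.
        using WatiN.Core;

        class FormFiller
        {
            [System.STAThread] // WatiN's IE automation requires an STA thread
            static void Main()
            {
                using (var browser = new IE("http://intranet/longform.aspx"))
                {
                    browser.TextField(Find.ByName("applicantName")).TypeText("Test User");
                    // this select triggers the reload that populates the city field;
                    // WatiN blocks until the page has settled before the next lookup
                    browser.SelectList(Find.ByName("location")).Select("Springfield");
                    browser.TextField(Find.ByName("notes")).TypeText("Automated run #1");
                    browser.Button(Find.ByValue("Submit")).Click();
                }
            }
        }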

  • Adding a decal using multitexturing on an iPhone

    - by Axis
    I'm trying to overlay one image on top of another onto a simple quad. I set my bottom image as texture unit 0, and then my top image (which has a variable alpha) as texture unit 1. Unit 2 has mode GL_DECAL, which means the bottom texture should show up when the alpha is 0, and the top texture should show when the alpha is 1. But only the top texture shows up and the bottom one doesn't appear at all. It's just white where the bottom texture should show through. glGetError() doesn't report any problems. Any help is appreciated. Thanks!

        glVertexPointer(3, GL_FLOAT, 0, boxVertices);
        glEnableClientState(GL_VERTEX_ARRAY);

        glClientActiveTexture(GL_TEXTURE0);
        glEnableClientState(GL_TEXTURE_COORD_ARRAY);
        glTexCoordPointer(2, GL_FLOAT, 0, boxTextureCoords);

        glClientActiveTexture(GL_TEXTURE1);
        glEnableClientState(GL_TEXTURE_COORD_ARRAY);
        glTexCoordPointer(2, GL_FLOAT, 0, boxTextureCoords);

        glClientActiveTexture(GL_TEXTURE0);
        glEnable(GL_TEXTURE_2D);
        glClientActiveTexture(GL_TEXTURE1);
        glEnable(GL_TEXTURE_2D);

        glClientActiveTexture(GL_TEXTURE0);
        glBindTexture(GL_TEXTURE_2D, one.texture);
        glTexEnvi(GL_TEXTURE_ENV, GL_TEXTURE_ENV_MODE, GL_MODULATE);

        glClientActiveTexture(GL_TEXTURE1);
        glBindTexture(GL_TEXTURE_2D, two.texture);
        glTexEnvi(GL_TEXTURE_ENV, GL_TEXTURE_ENV_MODE, GL_DECAL);

        glDrawArrays(GL_TRIANGLE_FAN, 0, 4);

  • XSP2 crashes serving static images

    - by Dawkins
    Hi, requesting a simple HTML page with a jpg image makes XSP2 crash. If I remove the image from the HTML, then the page is served OK all the time. The version is XSP2 2.0 on Mono 2.6.1; version 2.4.2.2 on the same machine works fine. I have tested it on two different machines, both Windows Vista Business SP1. Has anyone experienced the same? Any clue what the problem can be? Below is the stack trace displayed by the console. (The line in Spanish says "an existing connection was forcibly closed by the remote host".)

    EDIT: Since another user is having the same problem, I have submitted a bug to Novell and created a little zip to reproduce the problem: https://bugzilla.novell.com/show_bug.cgi?id=582162

        Peer unexpectedly closed the connection on write. Closing our end.
        System.IO.IOException: Write failure ---> System.Net.Sockets.SocketException: Se ha forzado la interrupción de una conexión existente por el host remoto.
          at System.Net.Sockets.Socket.Send (System.Byte[] buf, Int32 offset, Int32 size, SocketFlags flags) [0x00000] in <filename unknown>:0
          at System.Net.Sockets.NetworkStream.Write (System.Byte[] buffer, Int32 offset, Int32 size) [0x00000] in <filename unknown>:0
          --- End of inner exception stack trace ---
          at System.Net.Sockets.NetworkStream.Write (System.Byte[] buffer, Int32 offset, Int32 size) [0x00000] in <filename unknown>:0
          at Mono.WebServer.XSPWorker.Write (System.Byte[] buffer, Int32 position, Int32 size) [0x00000] in <filename unknown>:0

        Peer unexpectedly closed the connection on write. Closing our end.
        System.ObjectDisposedException: The object was used after being disposed.
          at System.Net.Sockets.NetworkStream.CheckDisposed () [0x00000] in <filename unknown>:0
          at System.Net.Sockets.NetworkStream.Write (System.Byte[] buffer, Int32 offset, Int32 size) [0x00000] in <filename unknown>:0
          at Mono.WebServer.XSPWorker.Write (System.Byte[] buffer, Int32 position, Int32 size) [0x00000] in <filename unknown>:0

    Thank you.

  • Can't get automated release working with Hudson + Git + Maven Release Plugin

    - by Christopher Maier
    As the title says, I'm trying to get an automated release job working on Hudson. It's a Maven project, and all the code is in Git. Manually, I do the release on my personal machine like so:

        git checkout master
        mvn -B release:prepare release:perform

    This works perfectly. The Maven release plugin properly pushes the release tag to the origin repository, as well as the next commit that bumps the version to the next SNAPSHOT. However, when I run this same Maven job through Hudson (either by creating my own "release" job or by using the M2 Release Plugin) it doesn't work so well. The release tag gets pushed out to the origin repository, and the release gets pushed out to our Nexus repository, but the subsequent commit that bumps the version to the next SNAPSHOT doesn't go out. Furthermore, the "master" branch in the origin repository doesn't get changed at all. I've looked in Hudson's workspace for the job, however, and the version has been updated.

    After looking at the output from the Hudson job, it appears that the Git plugin does not actually check out "master", but rather its SHA1 id. That is, if the "master" branch label points to commit "f6af76f541f1a1719e9835cdb46a183095af6861", Hudson does

        git checkout -f f6af76f541f1a1719e9835cdb46a183095af6861

    instead of

        git checkout -f master

    As a result, the changes that the Maven release plugin is making are not actually on any branch (certainly not on "master"), and these changes don't make it to the origin repository. It runs on the right code, but bookkeeping-wise the changes seem to get lost because no branch label points to them. Has anybody gotten the Hudson + Git + Maven Release Plugin combo to work properly? Is there some additional configuration somewhere I can set to make this happen? Or is this a bug in the Hudson Git plugin? Thanks in advance.

  • WPF validation red border doesn't show if UserControl collapsed first

    - by Creepy Gnome
    There seems to be a bug with WPF in 3.5, and I was hoping someone may have found a workaround. Basically, if you have a custom UserControl that contains a TextBox, and it sits in a Window but is initialized to be Collapsed by default in the XAML or code-behind, then if it fails validation it will not show the red border when you make the control visible - not until it fails again while visible. However, this works correctly when Visibility is set to Hidden; just not when Collapsed. I am already overriding the ErrorTemplate with a style to work around the adornment issue of the red border staying visible when you collapse the control. Below is my full style for the TextBox. If there are any additional changes or additions to make it work correctly with collapsed controls, that would be great.

        <Style TargetType="TextBox">
            <Setter Property="Margin" Value="3" />
            <Setter Property="Validation.ErrorTemplate">
                <Setter.Value>
                    <ControlTemplate>
                        <ControlTemplate.Resources>
                            <BooleanToVisibilityConverter x:Key="converter" />
                        </ControlTemplate.Resources>
                        <DockPanel LastChildFill="True">
                            <Border BorderThickness="2" BorderBrush="Red"
                                    Visibility="{Binding ElementName=placeholder, Mode=OneWay,
                                                 Path=AdornedElement.IsVisible,
                                                 Converter={StaticResource converter}}">
                                <AdornedElementPlaceholder x:Name="placeholder" />
                            </Border>
                        </DockPanel>
                    </ControlTemplate>
                </Setter.Value>
            </Setter>
            <Style.Triggers>
                <Trigger Property="Validation.HasError" Value="true">
                    <Setter Property="ToolTip"
                            Value="{Binding RelativeSource={RelativeSource Self},
                                    Path=(Validation.Errors)[0].ErrorContent}" />
                </Trigger>
            </Style.Triggers>
        </Style>
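
    One workaround that may be worth trying (a sketch under the assumption that the error visual simply never gets drawn while the element is collapsed): re-run validation from code-behind when the control becomes visible, by re-evaluating the binding. The control name "inputTextBox" is illustrative for the TextBox inside the UserControl.

        // Sketch: force validation to re-run when a collapsed control becomes
        // visible, so the error template is applied now that it can render.
        using System.Windows;
        using System.Windows.Controls;
        using System.Windows.Data;

        public partial class MyUserControl : UserControl
        {
            public MyUserControl()
            {
                InitializeComponent();
                IsVisibleChanged += OnIsVisibleChanged;
            }

            private void OnIsVisibleChanged(object sender, DependencyPropertyChangedEventArgs e)
            {
                if (!(bool)e.NewValue) return;
                BindingExpression expr = inputTextBox.GetBindingExpression(TextBox.TextProperty);
                if (expr != null)
                    expr.UpdateSource(); // pushes the value back through validation
            }
        }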

  • Why are Mercurial subrepos behaving as unversioned files in Eclipse AND TortoiseHG

    - by noam
    I am trying to use the subrepo feature of Mercurial, using the Mercurial Eclipse plugin\TortoiseHG. These are the steps I took:

    1. Created an empty dir /root
    2. Cloned all repos that I want to be subrepos inside this folder (/root/sub1, /root/sub2)
    3. Created and added the .hgsub file in the root repo (/root/.hgsub) and put all the mappings of the subrepos in it
    4. Using TortoiseHG, right-clicked on /root and selected "create repository here"
    5. Again with TortoiseHG, selected all the files inside /root and added them to the root repo
    6. Committed the root repo
    7. Pushed the local root repo into an empty repo I have set up on Kiln

    Then I pulled the root repo in Eclipse, using import-mercurial. Now I see that all the subrepos appear as though they are unversioned (no "orange cylinder" icon next to their corresponding folders in the Eclipse file explorer). Furthermore, when I right-click on one of the subrepos, I don't get all the hg commands in the "Team" menu that I usually get with root projects - no "pull", "push" etc. Also, when I made a change to a file in a subrepo and then "committed" the root project, it told me there were no changes found. I see the same behavior also in TortoiseHG: when I am browsing files under /root, the files belonging directly to the root repo have a small icon (a V sign) on them marking that they are version controlled, while the subrepos' folders aren't marked as such. Am I doing something wrong, or is it a bug?

  • Persistent warning message about "initWithDelegate"!

    - by RickiG
    Hi. This is not an actual Xcode error message; it is a warning that has been haunting me for a long time. I have found no way of removing it, and I think I may have overstepped some unwritten naming-convention rule. If I build a class, most often extending NSObject, whose only purpose is to do some task and report back when it has data, I often give it a convenience constructor like "initWithDelegate". The first time I did this in my current project was for a class called ISWebservice, which has a protocol like this:

        @protocol ISWebserviceDelegate
        @optional
        - (void) serviceFailed:(NSError*) error;
        - (void) serviceSuccess:(NSArray*) data;
        @required
        @end

    declared in my ISWebservice.h interface, right below my import statements. I have other classes that use a convenience constructor named "initWithDelegate", e.g. "InternetConnectionLost.h". This class does not, however, have its methods as optional; there are no @optional/@required tags in the declaration, i.e. they are all required. Now my warning pops up every time I instantiate one of these classes with convenience constructors written later than ISWebservice. So when utilizing the "InternetConnectionLost" class, even though the entire class owning the "InternetConnectionLost" object has nothing to do with the "ISWebservice" class - no imports, methods being called, no nothing - the warning goes:

        'ClassOwningInternetConnectionLost' does not implement the 'ISWebserviceDelegate' protocol

    It does not break anything, crash at runtime or do me any harm, but it has begun to bug me as I near release. Also, because several classes use the "initWithDelegate" constructor naming, I have 18 of these warnings in my build results, and I am getting uncertain whether I did something wrong, being fairly new to this language. Hope someone can shed a little light on this warning, thank you :)

  • Accuracy of OpenGL ES Instrument

    - by Rob Jones
    I'm developing a game for the iPhone. I've decided that 30 FPS is plenty, so I've written some code that only allows the app to present the render buffer every 1/30 of a second. When I tried to verify this with Instruments, I got varying information. On an iPod Touch (2009 edition, 32G) it reports 30 FPS for Core Animation Frames Per Second. On an iPhone 3G I get wildly varying results, and not just less than 30 FPS: I see 30 FPS on a regular basis, but it actually seems to hang closer to 36-39. To investigate this anomaly I added my own FPS counter to the app, updated once per second. It stays right at 29 FPS on both devices. So, does anyone have any suggestions as to what might be going on? I expect Instruments to be accurate, so it really concerns me that it appears inaccurate. It makes me think I have a bug somewhere, but I sure can't find it.

  • Pinax TemplateSyntaxError

    - by Spikie
    Hi, I ran into this error while trying to modify the Pinax database model. I am using Eclipse PyDev, and I get this error in PyDev:

        Exception Type: TemplateSyntaxError at /
        Exception Value: Caught an exception while rendering: (1146, "Table 'test1.announcements_announcement' doesn't exist")

    Please, how do I correct this?

    UPDATE: I asked this question and left it unresolved some months back, and wouldn't you know it, I ran into the bug again this week, typed the error message into Google, and hit this page with the question still unanswered - so I think I have to answer it and hope it helps someone with the same problem in the future. The problem is that the SQLite path is out of place, so Django (or in this case Pinax) cannot find it. To resolve that, change the path to SQLite to an absolute one, like this:

        DATABASE_ENGINE = 'sqlite3'  # 'postgresql_psycopg2', 'postgresql', 'mysql', 'sqlite3' or 'ado_mssql'.
        DATABASE_NAME = os.path.join(PROJECT_ROOT, 'dev.db')  # Or path to database file if using sqlite3.
        DATABASE_USER = ''      # Not used with sqlite3.
        DATABASE_PASSWORD = ''  # Not used with sqlite3.
        DATABASE_HOST = ''      # Set to empty string for localhost. Not used with sqlite3.
        DATABASE_PORT = ''      # Set to empty string for default. Not used with sqlite3.

    I hope that helps.

  • Delphi 6 OleServer.pas Invoke memory leak

    - by Mike Davis
    There's a bug in Delphi 6, for which you can find some references online: when you import a TLB, the order of the parameters in an event invocation is reversed. It is reversed once in the imported header and once in TServerEventDispatch.Invoke. You can find more information about it here: http://cc.embarcadero.com/Item/16496

    Somewhat related to this issue, there appears to be a memory leak in TServerEventDispatch.Invoke with a parameter of a Variant of type VarArray (maybe others, but this is the more obvious one I could see). The Invoke code copies the args into a VarArray to be passed to the event handler and then copies the VarArray back to the args after the call; the relevant code is pasted below:

        // Set our array to appropriate length
        SetLength(VarArray, ParamCount);

        // Copy over data
        for I := Low(VarArray) to High(VarArray) do
          VarArray[I] := OleVariant(TDispParams(Params).rgvarg^[I]);

        // Invoke Server proxy class
        if FServer <> nil then FServer.InvokeEvent(DispID, VarArray);

        // Copy data back
        for I := Low(VarArray) to High(VarArray) do
          OleVariant(TDispParams(Params).rgvarg^[I]) := VarArray[I];

        // Clean array
        SetLength(VarArray, 0);

    There are some obvious workarounds in my case: if I skip the copying back in the case of a VarArray parameter, it fixes the leak. To avoid changing the functionality, I thought I should copy the data in the array, instead of the Variant, back to the params, but that can get complicated, since it can hold other Variants, and it seems to me that would need to be done recursively. Since a change in OleServer will have a ripple effect, I want to make sure my change here is strictly correct. Can anyone shed some light on exactly why memory is being leaked here? I can't seem to look up the call stack any lower than TServerEventDispatch.Invoke (why is that?). I imagine that in the process of copying the Variant holding the VarArray back to the param list, it added a reference to the array, thus not allowing it to be released as normal - but that's just a rough guess and I can't track down the code to back it up. Maybe someone with a better understanding of all this could shed some light?

  • Boost causes an invalid block while overloading new/delete operators

    - by user555746
    Hi, I have a problem which appears to be an invalid memory block that occurs during a Boost call to boost::runtime::cla::parser::~parser. When the global delete is called on that object, the C++ runtime asserts on the memory block as invalid:

        dbgdel.cpp(52): /* verify block type */ _ASSERTE(_BLOCK_TYPE_IS_VALID(pHead->nBlockUse));

    An investigation I did revealed that the problem happens because of a global overloading of the new/delete operators. Those overloads are placed in a separate DLL. I discovered that the problem happens only when that DLL is compiled in Release while the main application is compiled in Debug. So I thought that the Release/Debug build flavors might have created a problem like this in Boost/CRT when overloading the new/delete operators. I then tried to explicitly call _malloc_dbg and _free_dbg within the overloads, even in Release mode, but it didn't solve the invalid heap block problem. Any idea what the root cause of the problem is? Is the situation solvable? I should stress that the problem began only when I started to use Boost. Before that, the CRT never complained about any invalid memory block. So could it be an internal Boost bug? Thanks!

< Previous Page | 376 377 378 379 380 381 382 383 384 385 386 387  | Next Page >