Search Results

Search found 8818 results on 353 pages for 'undefined behavior'.


  • named type not used for constructor injection

    - by nmarun
    Hi, I have a simple console application where I have the following setup:

        public interface ILogger
        {
            void Log(string message);
        }

        class NullLogger : ILogger
        {
            private readonly string version;
            public NullLogger() { version = "1.0"; }
            public NullLogger(string v) { version = v; }
            public void Log(string message)
            {
                Console.WriteLine("NULL " + version + " : " + message);
            }
        }

    The configuration details are below:

        <type type="UnityConsole.ILogger, UnityConsole" mapTo="UnityConsole.NullLogger, UnityConsole" />

    My calling code looks as below:

        IUnityContainer container = new UnityContainer();
        UnityConfigurationSection section = (UnityConfigurationSection)ConfigurationManager.GetSection("unity");
        section.Containers.Default.Configure(container);
        ILogger nullLogger = container.Resolve<ILogger>();
        nullLogger.Log("hello");

    This works fine, but once I give a name to this type, something like:

        <type type="UnityConsole.ILogger, UnityConsole" mapTo="UnityConsole.NullLogger, UnityConsole" name="NullLogger" />

    the above calling code no longer works, even if I explicitly register the type using container.RegisterType<ILogger, NullLogger>(). I get the error:

        {"Resolution of the dependency failed, type = \"UnityConsole.ILogger\", name = \"\". Exception message is: The current build operation (build key Build Key[UnityConsole.NullLogger, null]) failed: The parameter v could not be resolved when attempting to call constructor UnityConsole.NullLogger(System.String v). (Strategy type BuildPlanStrategy, index 3)"}

    Why doesn't Unity look into named instances? To get it to work, I'll have to do:

        ILogger nullLogger = container.Resolve<ILogger>("NullLogger");

    Where is this behavior documented? Arun
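    A minimal sketch of how named registrations behave in Unity (the container calls are standard Unity API; a no-name Resolve only ever consults the default, unnamed registration, so a named mapping must be resolved by name):

        IUnityContainer container = new UnityContainer();

        // Named registration: found only by Resolve<ILogger>("NullLogger").
        container.RegisterType<ILogger, NullLogger>("NullLogger");

        // Default registration: found by Resolve<ILogger>() with no name.
        // InjectionConstructor() pins the parameterless constructor so Unity
        // does not try to satisfy NullLogger(string v).
        container.RegisterType<ILogger, NullLogger>(new InjectionConstructor());

        ILogger byName = container.Resolve<ILogger>("NullLogger");
        ILogger byDefault = container.Resolve<ILogger>();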


  • How to check whether a user is logged in to a web application?

    - by Morgan Cheng
    I want to learn the whole details of web application authentication, so I decided to write a CodeIgniter authentication library from scratch. Now I have to make a design decision about how to determine whether a user is logged in.

    Basically, after the user inputs a username & password pair, a cookie is set for the session, and subsequent navigation in the web application does not require the username & password again. The server side checks whether the session cookie is valid to determine whether the current user is logged in. The question is: how do you determine whether a cookie is a valid cookie issued from the server side?

    The simplest way I can imagine is to have the cookie value stored in session state as well, and for each HTTP request compare the value from the cookie with the value from the server session. (Since the CodeIgniter session library stores session variables in cookies, it is not applicable without some tweaking.) This method requires storage on the server side. For a huge web application deployed in multiple datacenters, it is possible that a user inputs a username & password while browsing in one datacenter, then accesses the web application from another datacenter later. The expected behavior is that the user inputs the username & password just once. As a result, all datacenters would have to be able to access the session state, which may not be practical even if the session state is stored in external storage such as a database.

    I tried Google. I logged in to Google through an Asian proxy, which presumably directed me to a datacenter in Asia. Then I switched to a North American proxy, which should direct me to a datacenter in North America. It recognized my login without asking for the username and password again. So, is there any way to determine whether a user is logged in without server-side session state?
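    One stateless approach (a sketch, not anything CodeIgniter ships with; the cookie name and secret are made up for illustration): sign the cookie payload with an HMAC, so any datacenter holding the same secret can validate a login without shared session storage:

        <?php
        // Hypothetical secret shared by every datacenter.
        define('AUTH_SECRET', 'replace-with-a-long-random-key');

        // Issue a cookie of the form "userid|expiry|HMAC(userid|expiry)".
        function issue_auth_cookie($user_id) {
            $expiry  = time() + 3600;
            $payload = $user_id . '|' . $expiry;
            $sig     = hash_hmac('sha256', $payload, AUTH_SECRET);
            setcookie('auth', $payload . '|' . $sig);
        }

        // Validate by recomputing the HMAC and checking the expiry.
        function is_logged_in() {
            if (!isset($_COOKIE['auth'])) return false;
            list($user_id, $expiry, $sig) = explode('|', $_COOKIE['auth'], 3);
            $expected = hash_hmac('sha256', $user_id . '|' . $expiry, AUTH_SECRET);
            return $sig === $expected && time() < (int)$expiry;
        }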


  • CSS Clearing Floats

    - by Frank
    I'm making more of an effort to separate my HTML structure from presentation, but sometimes when I look at the complexity of the hacks or workarounds needed to make things work cross-browser, I'm amazed at the huge collective waste of productive hours that goes into this.

    As I understand it, floats were never meant for creating layouts, but because many layouts need a footer, that's how they're often being used. To clear the floats, you can add an empty div that clears both sides (div class="clear"). That is simple and works cross-browser, but it adds "non-semantic" HTML rather than solving the presentation problem within the CSS. I realize this, but after looking at all of the solutions with their benefits and drawbacks, it seems to make more sense to go with the empty div (predictable behavior across browsers) rather than create separate stylesheets, various CSS hacks and workarounds, etc., which would also need to change as CSS evolves.

    Is it OK to do this as long as you understand what you're doing and why you're doing it? Or is it better to find the CSS workarounds and hacks and separate structure from presentation at all costs, even when the CSS presentation tools provided are not evolved to the point where they can handle such basic layout issues?
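    For reference, the usual CSS-only alternative to the empty div is a "clearfix" rule on the float container. A sketch of the classic generated-content version (the class name is arbitrary):

        .clearfix:after {
            content: ".";        /* generated element does the clearing */
            display: block;
            height: 0;
            clear: both;
            visibility: hidden;
        }
        /* IE6/7 lack :after support; triggering hasLayout works instead: */
        .clearfix {
            zoom: 1;             /* IE-only, non-standard */
        }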


  • What happens to class members when malloc is used instead of new?

    - by Felix
    I'm studying for a final exam and I stumbled upon a curious question that was part of the exam our teacher gave last year to some poor souls. The question goes something like this: Is the following program correct, or not? If it is, write down what the program outputs. If it's not, write down why. The program:

        #include <iostream.h>

        class cls
        {
            int x;
        public:
            cls() { x = 23; }
            int get_x() { return x; }
        };

        int main()
        {
            cls *p1, *p2;
            p1 = new cls;
            p2 = (cls*)malloc(sizeof(cls));
            int x = p1->get_x() + p2->get_x();
            cout << x;
            return 0;
        }

    My first instinct was to answer "the program is not correct, as new should be used instead of malloc". However, after compiling the program and seeing it output 23, I realize that answer might not be right. The problem is that I was expecting p2->get_x() to return some arbitrary number (whatever happened to be in that spot of memory when malloc was called). However, it returned 0. I'm not sure whether this is a coincidence or whether class members are initialized to 0 when the object is malloc-ed.

    Is this behavior (p2->x being 0 after malloc) the default? Should I have expected this? What would your answer to my teacher's question be? (Besides forgetting to #include <stdlib.h> for malloc :P)
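    For contrast, a sketch of how a constructed object comes out of malloc'd memory (reusing the cls class from the question): placement new runs the constructor that malloc skips. Reading the uninitialized member, as the exam program does, is undefined behavior, so the 0 is just whatever the allocator happened to hand back:

        #include <cstdlib>
        #include <new>   // declares placement new

        int main()
        {
            void *raw = std::malloc(sizeof(cls));
            cls *p2 = new (raw) cls;     // constructor runs: p2->get_x() == 23

            int value = p2->get_x();     // well-defined now

            p2->~cls();                  // destroy explicitly before freeing
            std::free(raw);
            return value == 23 ? 0 : 1;
        }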


  • Timer appears to be pausing when screen becomes inactive

    - by elchuppa
    So I have a very simple Android activity that starts a timer when you hit a button:

        Timer timer = new Timer();
        timer.schedule(new TimerTask() {
            @Override
            public void run() {
                doStuff();
            }
        }, 15 * 60 * 1000);

    This worked reasonably well for me when I was testing, but as it turns out, when the screen becomes inactive, so does the timer. I was a bit surprised by this. I understand you need to create a service to have anything running in the background, but I hadn't realized this is required for an activity in the foreground when the phone has turned the screen off due to inactivity. What confuses me is that I think this worked as I expected originally, and only in the last few weeks has the timer been affected by the phone saving power. I could be wrong, though. So basically my questions are: am I seeing expected behavior? Do I need to create all timers as services, or somehow disallow power saving? Thanks for any advice, Patrick
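    A sketch of the usual alternative when the deadline must survive the screen turning off: let AlarmManager wake the device instead of keeping an in-process Timer alive (the receiver class and request code here are made up for illustration):

        // Schedule a wakeup 15 minutes from now.
        AlarmManager am = (AlarmManager) getSystemService(Context.ALARM_SERVICE);
        // DoStuffReceiver is a hypothetical BroadcastReceiver that calls doStuff().
        Intent intent = new Intent(this, DoStuffReceiver.class);
        PendingIntent pi = PendingIntent.getBroadcast(this, 0, intent, 0);
        am.set(AlarmManager.RTC_WAKEUP,
               System.currentTimeMillis() + 15 * 60 * 1000,
               pi);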


  • libcurl (C API) READFUNCTION for HTTP PUT blocking forever

    - by Duane
    I am using libcurl for a RESTful library, and I am having two problems with a PUT message. I am just trying to send a small body like "hello" via PUT.

    My READFUNCTION for PUTs blocks for a very large amount of time (minutes) when I follow the manual at curl.haxx.se and return 0 indicating I have finished the content (on OS X). When I return something > 0, this succeeds much faster (< 1 sec).

    When I run this on my Linux machine (Ubuntu 10.4), the blocking appears to NEVER return when I return 0. If I change the behavior to return the size written, libcurl appends all the data in the HTTP body, sending way more data, and it fails with a "too much data" message from the server.

    My read function is below; any help would be greatly appreciated. I am using libcurl 7.20.1.

        typedef struct {
            void *data;
            int body_size;
            int bytes_remaining;
            int bytes_written;
        } postdata;

        size_t readfunc(void *ptr, size_t size, size_t nmemb, void *stream)
        {
            if (stream) {
                postdata *ud = (postdata *)stream;
                if (ud->bytes_remaining) {
                    if (ud->body_size > size * nmemb) {
                        memcpy(ptr, ud->data + ud->bytes_written, size * nmemb);
                        ud->bytes_written += size + nmemb;
                        ud->bytes_remaining = ud->body_size - size * nmemb;
                        return size * nmemb;
                    } else {
                        memcpy(ptr, ud->data + ud->bytes_written, ud->bytes_remaining);
                        ud->bytes_remaining = 0;
                        return 0;
                    }
                }
            }
        }
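    A sketch of a read callback that terminates cleanly under the documented contract: return the number of bytes actually copied on each call, and 0 exactly once when the body is exhausted. It assumes CURLOPT_INFILESIZE (or CURLOPT_INFILESIZE_LARGE) has been set to the body size, so libcurl also knows where the body ends:

        size_t readfunc(void *ptr, size_t size, size_t nmemb, void *stream)
        {
            postdata *ud = (postdata *)stream;
            size_t room = size * nmemb;

            if (ud == NULL || ud->bytes_remaining <= 0)
                return 0;                     /* end of body: return 0 once */

            size_t chunk = (size_t)ud->bytes_remaining < room
                         ? (size_t)ud->bytes_remaining : room;
            memcpy(ptr, (char *)ud->data + ud->bytes_written, chunk);
            ud->bytes_written   += chunk;
            ud->bytes_remaining -= chunk;
            return chunk;                     /* bytes copied this call */
        }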


  • Zend hostname route doesn't match when it has child routes

    - by talisker
    I am implementing an Admin module which contains the following routes:

        'router' => array(
            'routes' => array(
                'admin' => array(
                    'type' => 'Zend\Mvc\Router\Http\Hostname',
                    'options' => array(
                        'route' => ':subdomain.mydomain.local',
                        'constraints' => array(
                            'subdomain' => 'admin',
                        ),
                        'defaults' => array(
                            'module' => '__NAMESPACE__',
                            'controller' => 'Admin\Controller\Index',
                            'action' => 'index',
                        ),
                    ),
                    'priority' => 9000,
                    'may_terminate' => true,
                    'child_routes' => array(
                        'users' => array(
                            'type' => 'Zend\Mvc\Router\Http\Literal',
                            'options' => array(
                                'route' => '/users',
                                'defaults' => array(
                                    'module' => '__NAMESPACE__',
                                    'controller' => 'Admin\Controller\Users',
                                    'action' => 'index',
                                ),
                            ),
                        ),
                    )
                ),
            ),
        ),

    And this is the home route configuration:

        'home' => array(
            'type' => 'Zend\Mvc\Router\Http\Literal',
            'options' => array(
                'route' => '/',
                'defaults' => array(
                    'controller' => 'Application\Controller\Index',
                    'action' => 'index',
                ),
            ),
        ),

    When I try to access http://admin.mydomain.com, the request always matches the home route, but if I remove all the child routes from the admin route, the behavior is correct and http://admin.mydomain.com matches the admin route. Any idea?


  • Are C functions declared in <c____> headers guaranteed to be in the global namespace as well as std?

    - by Evan Teran
    So this is something that I've always wondered but was never quite sure about, so it is strictly a matter of curiosity, not a real problem. As far as I understand, when you do something like #include <cstdlib>, everything (except macros, of course) is declared in the std:: namespace. Every implementation that I've ever seen does this with something like the following:

        #include <stdlib.h>
        namespace std
        {
            using ::abort;
            // etc....
        }

    which of course has the effect of putting things in both the global namespace and std. Is this behavior guaranteed? Or is it possible that an implementation could put these things in std but not in the global namespace? The only way I can think of to do that would be to have your libstdc++ implement every C function itself, placing them in std directly instead of just including the existing libc headers (because there is no mechanism for removing something from a namespace), which is of course a lot of effort with little to no benefit.

    The essence of my question is: is the following program strictly conforming and guaranteed to work?

        #include <cstdio>

        int main()
        {
            ::printf("hello world\n");
        }
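    For what it's worth, the spelling that works no matter how an implementation answers this question is the std-qualified one; a minimal counterpart to the program above:

        #include <cstdio>

        int main()
        {
            std::printf("hello world\n");  // <cstdio> guarantees std::printf;
                                           // ::printf is only guaranteed by <stdio.h>
        }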


  • Need opinions on LaTeX and ever upgrading

    - by yCalleecharan
    Hi, I've been using LaTeX since 2005 with the TeXLive distribution, and I've been upgrading as each new TeXLive distribution comes out. In recent years I noticed an increase in new packages, updated packages, and in one instance a new package bearing a different name replacing an old one by the same package author. A LaTeX document which relies heavily on packages and which was produced a few years back may start to produce warnings and error messages under present-day LaTeX compilation.

    The primary reason I switched to LaTeX is its reliability and robustness for creating big documents easily, not to mention the adorable typographic quality. With LaTeX one doesn't have to worry about how to open a docx in an old program supporting only doc, for instance. Now, when there are so many continual changes to the packages in a LaTeX distribution, I tend to wonder when this madness will end. Not that having enhanced and new features in packages is bad, but not all updated packages are backward compatible. Eventually one would like to be able to compile, in 10 years' time, a LaTeX file that he/she is working on at present, and not get any compilation warnings or error messages due to the unpredictable behavior of updated packages or due to a package that has been cast off from a LaTeX distribution. If I understand correctly, CTAN does keep a database with all packages from different versions. I would like to know how you LaTeX users handle this issue. Thanks a lot...


  • Using tarantula to test a Rails app

    - by Benjamin Oakes
    I'm using Tarantula to test a Rails app I'm developing. It works pretty well, but I'm getting some strange 404s. After looking into it, Tarantula is following DELETE requests (destroy actions on controllers) throughout my app when it tests. Since Tarantula gets the index action first (and seems to keep a list of unvisited URLs), it eventually tries to follow a link to a resource which it had deleted... and gets a 404. Tarantula is right that the URL doesn't exist anymore (because it deleted the resource itself). However, it flags this as an error, which is hardly the behavior I would expect. I'm basically just using the Rails scaffolding, and this problem is happening. How do I prevent Tarantula from doing this? (Or is there a better way of specifying the links?)

    Update: still searching, but I found a relevant thread here: http://github.com/relevance/tarantula/issues#issue/3. It seems to come from relying on JS too much, in a way (see also http://thelucid.com/2010/03/15/rails-can-we-please-have-a-delete-action-by-default/).
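    One workaround sketch, assuming the allow_404_for hook that Tarantula exposes for exactly this kind of expected 404 (the route pattern is made up; adjust it to your resources):

        # test/integration/tarantula_test.rb
        def test_crawl
          t = tarantula_crawler(self)
          # 404s on these URLs come from resources Tarantula itself deleted,
          # so treat them as acceptable instead of as failures.
          t.allow_404_for %r{/widgets/\d+$}
          t.crawl '/'
        end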


  • Manually changing keyboard orientation for a view that's on top of a camera view

    - by XKR
    I'm basically trying to reproduce the core functionality of the "At Once" app. I have a camera view and another view with a text view on it. I add both views to the window. All is well so far:

        [window addSubview:imagePicker.view];
        [window addSubview:textViewController.view];

    I understand that UIImagePickerController does not support autorotation, so I handle it manually by watching UIDeviceOrientationDidChangeNotifications and applying the necessary transforms to textViewController.view. Now, the problem here is the keyboard. If I do nothing, it just stays in portrait mode. I can get it to rotate by adding the following code to the notification handler:

        [[UIApplication sharedApplication] setStatusBarOrientation:interfaceOrientation];
        [textView resignFirstResponder];
        [textView becomeFirstResponder];

    However, the following simple test produces weird behavior:

    1. Start the app in portrait mode.
    2. Rotate the device 90 degrees clockwise.
    3. Rotate the device 90 degrees counterclockwise (back to the initial position).
    4. Rotate the device 90 degrees clockwise.

    After step 4, instead of the landscape-mode keyboard, the portrait-style keyboard is shown, skewed to fit in the landscape keyboard frame. Perhaps my approach is wrong from the start. I was wondering if anyone has been able to reliably make the keyboard change its orientation in response to setStatusBarOrientation.


  • How is timezone handled in the lifecycle of an ADO.NET + SQL Server DateTime column?

    - by stimpy77
    Using SQL Server 2008. This is a really junior question and I could really use some elaborate information, but the information on Google seems to dance around the topic quite a bit, and it would be nice if there was some detailed elaboration on how this works...

    Let's say I have a datetime column, and in ADO.NET I set it to DateTime.UtcNow.

    1) Does SQL Server store DateTime.UtcNow as-is, or does it offset it again based on the time zone of the machine where the server is installed, and then return it offset-reversed when queried? I think I know that the answer is "of course it stores it without offsetting it again", but I want to be certain.

    So then I query for it and cast it from, say, an IDataReader column to a DateTime. As far as I know, System.DateTime has metadata that internally tracks whether it is a UTC DateTime or a local DateTime, which may or may not cause .ToLocalTime() and .ToUniversalTime() to behave differently depending on this state. So,

    2) Does this casted System.DateTime object already know that it is a UTC DateTime instance, or does it assume that it has been offset?

    Now let's say I don't use UtcNow; I use DateTime.Now when performing an ADO.NET INSERT or UPDATE.

    3) Does ADO.NET pass the offset to SQL Server, and does SQL Server store DateTime.Now with the offset metadata?

    So then I query for it and cast it from, say, an IDataReader column to a DateTime.

    4) Does this casted System.DateTime object already know that it is an offset time, or does it assume that it is UTC?
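    A sketch of the pattern this usually ends up at (assuming a plain datetime column, which stores no offset or kind metadata at all): values come back from the reader with DateTimeKind.Unspecified, so if you always write UtcNow, you re-stamp the kind yourself on the way out. The column name here is made up:

        // SQL Server's datetime carries no time zone information, so the
        // reader returns a value whose Kind is DateTimeKind.Unspecified.
        DateTime raw = reader.GetDateTime(reader.GetOrdinal("CreatedUtc"));

        // Re-stamp as UTC; safe only because the app always stored UtcNow.
        DateTime utc = DateTime.SpecifyKind(raw, DateTimeKind.Utc);

        DateTime local = utc.ToLocalTime();   // now converts as expected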


  • Why does TrimStart trim one char more when asked to trim "PRN.NUL"?

    - by James
    Here is the code:

        namespace TrimTest
        {
            class Program
            {
                static void Main(string[] args)
                {
                    string ToTrim = "PRN.NUL";
                    Console.WriteLine(ToTrim);
                    string Trimmed = ToTrim.TrimStart("PRN.".ToCharArray());
                    Console.WriteLine(Trimmed);

                    ToTrim = "PRN.AUX";
                    Console.WriteLine(ToTrim);
                    Trimmed = ToTrim.TrimStart("PRN.".ToCharArray());
                    Console.WriteLine(Trimmed);

                    ToTrim = "AUX.NUL";
                    Console.WriteLine(ToTrim);
                    Trimmed = ToTrim.TrimStart("AUX.".ToCharArray());
                    Console.WriteLine(Trimmed);
                }
            }
        }

    The output is like this:

        PRN.NUL
        UL
        PRN.AUX
        AUX
        AUX.NUL
        NUL

    As you can see, TrimStart took the N off NUL, but it doesn't do that for the other strings even though they also start with the trimmed characters. I tried with .NET Framework 3.5 and 4.0, and the results are the same. Is there any explanation of what causes this behavior?
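    This follows from TrimStart treating its argument as a set of characters rather than a prefix: it keeps stripping leading characters as long as each one appears anywhere in {'P','R','N','.'}, so after removing "PRN." it also removes the N of "NUL" before stopping at 'U'. "PRN.AUX" survives because 'A' is not in the set. A sketch of a prefix-removal alternative:

        // Removes an exact leading prefix instead of a character set.
        static string TrimPrefix(string value, string prefix)
        {
            return value.StartsWith(prefix, StringComparison.Ordinal)
                ? value.Substring(prefix.Length)
                : value;
        }

        // TrimPrefix("PRN.NUL", "PRN.") == "NUL"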


  • UIImagePickerController crashes on rapid scrolling, slower than photos app

    - by vvanhee
    Most of the time, my image picker works perfectly (iOS 4.2.1). However, if I scroll very rapidly up and down about 4-6 times through my camera roll of about 300 photos, I get a crash. This never happens with the "Photos" app on the same iPhone 3GS. Also, I'm noticing that the stock "Photos" app scrolls much more smoothly than my image picker. Has anyone else noticed this behavior? I'd be interested if others could attempt this in their own apps and see if they crash.

    I don't think it's related to other objects hogging memory on my iPhone, because it's a simple app and this happens right after I start the app. It also doesn't seem to be related to messages sent to released objects or over-releasing of objects in viewDidUnload, based on my crash logs and the fact that the simulator responds well to simulated memory warnings. I think it might be a bug in the internal implementation of UIImagePickerController...

    This is how I start the picker. I've done this multiple ways (including setting a retain property for the UIImagePickerController in my header and releasing on dealloc). This seems to be the best way (crashes least):

        UIImagePickerController *picker = [[UIImagePickerController alloc] init];
        picker.delegate = self;
        picker.sourceType = UIImagePickerControllerSourceTypeSavedPhotosAlbum;
        picker.allowsEditing = YES;
        [self presentModalViewController:picker animated:YES];
        [picker release];

    This is the crashed thread (I get various exception types):

        Exception Type:  SIGSEGV
        Exception Codes: SEGV_ACCERR at 0xfffffffff4faafa4
        Crashed Thread:  8
        ...
        Thread 8 Crashed:
        0 CoreFoundation    0x000494ea -[__NSArrayM replaceObjectAtIndex:withObject:] + 98
        1 PhotoLibrary      0x00008e0f -[PLImageTable _segmentAtIndex:] + 527
        2 PhotoLibrary      0x00008a21 -[PLImageTable _mappedImageDataAtIndex:] + 221
        3 PhotoLibrary      0x0000893f -[PLImageTable dataForEntryAtIndex:] + 15
        4 PhotoLibrary      0x000087e7 PLThumbnailManagerImageDataAtIndex + 35
        5 PhotoLibrary      0x00008413 -[PLThumbnailManager _dataForPhoto:format:width:height:bytesPerRow:dataWidth:dataHeight:imageDataOffset:imageDataFormat:preheat:] + 299
        6 PhotoLibrary      0x000b6c13 __-[PLThumbnailManager preheatImageDataForImages:withFormat:]_block_invoke_1 + 159
        7 libSystem.B.dylib 0x000d6680 _dispatch_call_block_and_release + 20
        8 libSystem.B.dylib 0x000d6ba0 _dispatch_worker_thread2 + 128
        9 libSystem.B.dylib 0x0007b251 _pthread_wqthread + 265


  • Problems with video conversions through the web (local host)

    - by ron-d
    Hello, I get the following errors when I attempt video format conversions called from the localhost:

    - "An invalid media type was specified" for M4V to WMV conversions.
    - "One or more arguments are invalid" for MP4 to WMV conversions.

    Here are the details of the problem. I've written a DLL in C# that accepts videos in the formats AVI, WMV, M4V and MP4 and performs the following actions:

    - Creates a copy of the input video in WMV format.
    - Creates a WAV file of the input video's audio portion.
    - Creates a JPG image from a frame of the input video.

    I attached the DLL to an ASP.NET web project that performs the DLL's actions. When tested through the developer studio, the actions are performed as intended for all formats. When I deploy the web project to be served through the localhost in a web browser, the following behavior takes place:

    - WMV format: all actions performed as intended.
    - AVI format: creates the WMV file – OK; creates the JPG image – OK; creates an empty WAV file – problem.
    - M4V format: creates an empty WAV file – problem; does not create the WMV file – problem; does not create the JPG file – problem; throws the error "An invalid media type was specified".
    - MP4 format: creates an empty WAV file – problem; does not create the WMV file – problem; does not create the JPG file – problem; throws the error "One or more arguments are invalid".

    When I check their security properties, all the files have the same permission access parameters. Can anyone guide me as to how to solve these problems when the web project is called from the localhost? Thank you.


  • Nose2 multiprocess error on Windows7

    - by tt293
    I was looking into nose2 as a way to get around the restriction of having both xunit output and multiprocessing in nose 1.3. However, when always-on is set to False in the [multiprocess] section, I can only get a single process running, while when running with always-on set to True, I get the following error:

        ----------------------------------------------------------------------
        Ran 0 tests in 0.043s

        OK
        Traceback (most recent call last):
          File "C:\dev\testing\Tests\PythonTests\venv\Scripts\nose2-script.py", line 8, in <module>
            load_entry_point('nose2==0.4.7', 'console_scripts', 'nose2')()
          File "C:\dev\testing\Tests\PythonTests\venv\lib\site-packages\nose2-0.4.7-py2.7.egg\nose2\main.py", line 284, in discover
            return main(*args, **kwargs)
          File "C:\dev\testing\Tests\PythonTests\venv\lib\site-packages\nose2-0.4.7-py2.7.egg\nose2\main.py", line 98, in __init__
            super(PluggableTestProgram, self).__init__(**kw)
          File "C:\dev\testing\Tests\PythonTests\venv\lib\site-packages\unittest2-0.5.1-py2.7.egg\unittest2\main.py", line 98, in __init__
            self.runTests()
          File "C:\dev\testing\Tests\PythonTests\venv\lib\site-packages\nose2-0.4.7-py2.7.egg\nose2\main.py", line 260, in runTests
            self.result = runner.run(self.test)
          File "C:\dev\testing\Tests\PythonTests\venv\lib\site-packages\nose2-0.4.7-py2.7.egg\nose2\runner.py", line 53, in run
            executor(test, result)
          File "C:\dev\testing\Tests\PythonTests\venv\lib\site-packages\nose2-0.4.7-py2.7.egg\nose2\plugins\mp.py", line 60, in _runmp
            ready, _, _ = select.select(rdrs, [], [], self.testRunTimeout)
        select.error: (10038, 'An operation was attempted on something that is not a socket')

    This is running Python 2.7.5 (32-bit) on Windows 7 in a virtualenv with six-1.1.0, unittest2-0.5.1 and nose2-0.4.7 (I get the same behavior outside of the venv, so I don't think that is the issue here).


  • C++ Suppress Automatic Initialization and Destruction

    - by Travis G
    How does one suppress the automatic initialization and destruction of a type? While it is wonderful that T buffer[100] automatically initializes all the elements of buffer, and destroys them when they fall out of scope, this is not the behavior I want.

        #include <iostream>

        static int created = 0, destroyed = 0;

        struct S
        {
            S() { ++created; }
            ~S() { ++destroyed; }
        };

        template <typename T, size_t KCount>
        class Array
        {
        private:
            T m_buffer[KCount];

        public:
            Array()
            {
                // some way to suppress the automatic initialization of m_buffer
            }

            ~Array()
            {
                // some way to suppress the automatic destruction of m_buffer
            }
        };

        int main()
        {
            {
                Array<S, 100> arr;
            }
            std::cout << "Created:\t" << created << std::endl;
            std::cout << "Destroyed:\t" << destroyed << std::endl;
            return 0;
        }

    The output of this program is:

        Created:   100
        Destroyed: 100

    I would like it to be:

        Created:   0
        Destroyed: 0

    My only idea is to make m_buffer some trivially constructed and destructed type like char, and then rely on operator[] to wrap the pointer math for me, although this seems like a horribly hacked solution. Another solution would be to use malloc and free, but that gives a level of indirection that I do not want.
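    For what it's worth, that char-buffer idea is essentially the standard technique, minus the alignment concern: keep raw, suitably aligned storage inside the object and construct or destroy elements with placement new only on demand. A sketch using C++11 alignas (pre-C++11 code would need a compiler-specific alignment attribute or an aligned union):

        #include <cstddef>
        #include <new>

        template <typename T, size_t KCount>
        class Array
        {
            // Raw bytes with T's alignment; no T constructors run here.
            alignas(T) unsigned char m_raw[KCount * sizeof(T)];

        public:
            Array() {}    // nothing constructed
            ~Array() {}   // nothing destroyed (caller must destroy() first)

            T& operator[](size_t i)
            {
                return *reinterpret_cast<T*>(m_raw + i * sizeof(T));
            }

            void construct(size_t i) { new (m_raw + i * sizeof(T)) T; }
            void destroy(size_t i)   { (*this)[i].~T(); }
        };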


  • I am confused about how to use @SessionAttributes

    - by yusaku
    I am trying to understand the architecture of Spring MVC. However, I am completely confused by the behavior of @SessionAttributes. Please look at the SampleController below; it handles the POST method with the SuperForm class. Without @SessionAttributes, only the fields of the SuperForm class are bound, as I expected. However, after I put @SessionAttributes on the controller, the handling method binds as SubAForm. Can anybody explain to me what happens in this binding?

        @Controller
        @SessionAttributes("form")
        @RequestMapping(value = "/sample")
        public class SampleController {

            @RequestMapping(method = RequestMethod.GET)
            public String getCreateForm(Model model) {
                model.addAttribute("form", new SubAForm());
                return "sample/input";
            }

            @RequestMapping(method = RequestMethod.POST)
            public String register(@ModelAttribute("form") SuperForm form, Model model) {
                return "sample/input";
            }
        }

        public class SuperForm {
            private Long superId;

            public Long getSuperId() { return superId; }
            public void setSuperId(Long superId) { this.superId = superId; }
        }

        public class SubAForm extends SuperForm {
            private Long subAId;

            public Long getSubAId() { return subAId; }
            public void setSubAId(Long subAId) { this.subAId = subAId; }
        }

        <form:form modelAttribute="form" method="post">
            <fieldset>
                <legend>SUPER FIELD</legend>
                <p>SUPER ID: <form:input path="superId" /></p>
            </fieldset>
            <fieldset>
                <legend>SUB A FIELD</legend>
                <p>SUB A ID: <form:input path="subAId" /></p>
            </fieldset>
            <p><input type="submit" value="register" /></p>
        </form:form>


  • JavaScript: How is "function x() {}" different from "x = function() {}" ?

    - by jleedev
    In the answers to this question, we read that function f() {} defines the name locally, while [var] f = function() {} defines it globally. That makes perfect sense to me, but there's some strange behavior that differs between the two declarations.

    I made an HTML page with the script

        onload = function() { alert("hello"); }

    and it worked as expected. When I changed it to

        function onload() { alert("hello"); }

    nothing happened. (Firefox still fired the event, but WebKit, Opera, and Internet Explorer didn't, although frankly I've no idea which is correct.) In both cases (in all browsers), I could verify that both window.onload and onload were set to the function. In both cases, the global object this is set to the window, and no matter how I write the declaration, the window object receives the property just fine.

    What's going on here? Why does one declaration work differently from the other? Is this a quirk of the JavaScript language, the DOM, or the interaction between the two?
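    A small sketch of the one difference that's easy to verify from script: declarations are hoisted and bound before any statement runs, while an expression assignment happens in document order, which is also the moment the host environment can observe the property change:

        // Both names end up as globals, but at different times:
        alert(typeof byDeclaration); // "function" -- binding hoisted
        alert(typeof byExpression);  // "undefined" -- var hoisted, value not yet

        function byDeclaration() {}
        var byExpression = function () {};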


  • additive texture combiner

    - by ivicaa
    I have a problem which is driving me crazy. Environment: iPhone, OpenGL ES 1.1. Basically I have a simple GL_COMBINE for the vertex color and texture color:

        glColor4f(0.1f, 0.1f, 0.1f, 0);
        glTexEnvi(GL_TEXTURE_ENV, GL_TEXTURE_ENV_MODE, GL_COMBINE);

        glTexEnvi(GL_TEXTURE_ENV, GL_COMBINE_RGB, GL_ADD);
        glTexEnvi(GL_TEXTURE_ENV, GL_SRC0_RGB, GL_PRIMARY_COLOR);
        glTexEnvi(GL_TEXTURE_ENV, GL_OPERAND0_RGB, GL_SRC_COLOR);
        glTexEnvi(GL_TEXTURE_ENV, GL_SRC1_RGB, GL_TEXTURE);
        glTexEnvi(GL_TEXTURE_ENV, GL_OPERAND1_RGB, GL_SRC_COLOR);

        glTexEnvi(GL_TEXTURE_ENV, GL_COMBINE_ALPHA, GL_ADD);
        glTexEnvi(GL_TEXTURE_ENV, GL_SRC0_ALPHA, GL_PRIMARY_COLOR);
        glTexEnvi(GL_TEXTURE_ENV, GL_OPERAND0_ALPHA, GL_SRC_ALPHA);
        glTexEnvi(GL_TEXTURE_ENV, GL_SRC1_ALPHA, GL_TEXTURE);
        glTexEnvi(GL_TEXTURE_ENV, GL_OPERAND1_ALPHA, GL_SRC_ALPHA);

    It should simply compute VertexColorRGBA + TextureRGBA. With alpha everything works fine, but as soon as I change R, G, or B in the glColor4f call, the final alpha is also modified. Does anyone have a hint for this unexpected behavior? Thanks in advance! Ivica
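    If the intent is for the output alpha to come from the texture alone, one thing worth trying (a sketch; combiner state is easy to get subtly wrong, so verify it against the rest of your state setup) is to REPLACE the alpha half of the combiner instead of adding vertex alpha into it:

        // Alpha: take the texture's alpha unchanged, ignoring vertex alpha.
        glTexEnvi(GL_TEXTURE_ENV, GL_COMBINE_ALPHA, GL_REPLACE);
        glTexEnvi(GL_TEXTURE_ENV, GL_SRC0_ALPHA, GL_TEXTURE);
        glTexEnvi(GL_TEXTURE_ENV, GL_OPERAND0_ALPHA, GL_SRC_ALPHA);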


  • jquery dialog is clearing my form fields and I need to return a value from a jquery dialog

    - by Seth
    1.) I have a jQuery dialog that is opened whenever a particular textbox is focused. The dialog's contents are loaded via ajax, and the unique ID of the textbox that was focused is passed in the ajax call (like this):

        $('[name=start_airport[]],[name=finish_airport[]]').click(function() {
            var id = $(this).attr('id');
            if ($('#use_advanced_airport_selector').attr('checked')) {
                $('#advanced_airport_selector').dialog({
                    open: function() {
                        $(this).load('/flight-booker/advanced-airport-selector.php?callerID=' + id);
                    }
                });
                $('#advanced_airport_selector').dialog('open');
            }
        });

    (where advanced_airport_selector is an empty div) THAT PART WORKS FINE. However, when I make my ajax call within my dialog, all my form values are reset! No matter what I do, when that dialog opens, all form values are reset (not just the value of the textbox that was focused). I simply don't understand what would cause this behavior! But that's only issue #1.

    2.) I need to be able to return a value from that dialog box. I am passing the ID in the ajax query so that I can use a jQuery selector to update the caller's value after certain actions are performed within the dialog box. However, I can't actually access that textbox because of DOM errors that I've never come across. It doesn't make any sense! There's way too much code to post, and it's really hard to explain, so sorry if I'm unclear as to what I'm asking.
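    For issue #2, one common pattern (a sketch; the element IDs come from the question, but the .airport-option hook and data attribute are made up for illustration) is to wire the loaded content back to the caller in the load callback, then close the dialog:

        function openAirportSelector(callerId) {
            $('#advanced_airport_selector')
                .load('/flight-booker/advanced-airport-selector.php?callerID=' + callerId,
                    function() {
                        // Hypothetical: loaded markup contains .airport-option links
                        // carrying the chosen code in a data-code attribute.
                        $('#advanced_airport_selector .airport-option').click(function() {
                            $('#' + callerId).val($(this).attr('data-code'));
                            $('#advanced_airport_selector').dialog('close');
                        });
                    })
                .dialog('open');
        }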


  • Fluent NHibernate AutoMap

    - by Markus
    Hi. I have a question regarding the AutoMap XML generation. I have two classes:

        public class User
        {
            virtual public Guid Id { get; private set; }
            virtual public String Name { get; set; }
            virtual public String Email { get; set; }
            virtual public String Password { get; set; }
            virtual public IList<OpenID> OpenIDs { get; set; }
        }

        public class OpenID
        {
            virtual public Guid Id { get; private set; }
            virtual public String Provider { get; set; }
            virtual public String Ticket { get; set; }
            virtual public User User { get; set; }
        }

    The generated XML fragments are, for the User class:

        <bag name="OpenIDs">
          <key>
            <column name="User_Id" />
          </key>
          <one-to-many class="BL_DAL.Entities.OpenID, BL_DAL, Version=1.0.0.0, Culture=neutral, PublicKeyToken=null" />
        </bag>

    and for the OpenID class:

        <many-to-one class="BL_DAL.Entities.User, BL_DAL, Version=1.0.0.0, Culture=neutral, PublicKeyToken=null" name="User">
          <column name="User_id" />
        </many-to-one>

    I don't see the inverse="true" attribute in the User mapping. Is this normal behavior, or did I make a mistake somewhere?
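    As far as I know, automapping does not infer inverse="true" on its own, so the usual fix is an explicit override when building the model; a sketch using the standard Fluent NHibernate override API:

        var model = AutoMap.AssemblyOf<User>()
            .Override<User>(map =>
                // Mark the collection side as inverse: OpenID.User owns the FK.
                map.HasMany(x => x.OpenIDs)
                   .Inverse()
                   .KeyColumn("User_Id"));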


  • Determine whether app is communicating with APNS sandbox or production environment

    - by goldierox
    I have push notifications set up in my app. I'm trying to determine whether the device token I've received from APNS in the application:didRegisterForRemoteNotificationsWithDeviceToken: method came from the sandbox or the production environment. If I can distinguish which environment issued the token, I'll be able to tell my server which environment to send the push notification to. I've tried using the DEBUG macro to determine this, but I've seen some strange behavior with it and don't trust it to be 100% correct:

        #ifdef DEBUG
            BOOL isProd = YES;
        #else
            BOOL isProd = NO;
        #endif

    Ideally, I'd be able to examine the aps-environment entitlement (whose value is Development or Production) in code, but I'm not sure if this is even possible. What's the proper way to determine whether your app is communicating with the APNS sandbox or production environment? I'm assuming that the server needs to know this in the first place; please correct me if this assumption is incorrect.

    Edited: Apple's documentation on Provider Communication with APNS details the difference between communicating with the sandbox and production. However, the documentation doesn't give information on how to be consistent between registering the token (from the iOS client app) and communicating with the server.


  • Is a control's OnInit called even when attaching it during parent's OnPreRender?

    - by Xerion
    My original understanding was that the ASP.NET page lifecycle runs once for all pages and controls under normal circumstances. When I attached a control during a container's OnPreRender, I encountered a situation where the control's OnInit was not called. OK, I considered that a bug in my code and fixed it as such, by attaching the control earlier.

    But just today, I encountered a situation where OnInit for a control seems to be called after the normal OnInit has been done for everyone else. See the stack below. It seems that during the page's PreRender, the control's OnInit is called as it is being dynamically added. So I just want to confirm exactly what ASP.NET's behavior is: does it actually keep track of the lifecycle stage of each control, and upon adding a new control, run it from the very beginning?

        [HttpException (0x80004005): The control collection cannot be modified during DataBind, Init, Load, PreRender or Unload phases.]
        System.Web.UI.ControlCollection.Add(Control child) +8678663
        MyCompany.Web.Controls.SetStartPageWrapper.Initialize()
        MyCompany.Web.Controls.SetStartPageWrapper.OnInit(EventArgs e)
        System.Web.UI.Control.InitRecursive(Control namingContainer) +333
        System.Web.UI.Control.InitRecursive(Control namingContainer) +210
        System.Web.UI.Control.AddedControl(Control control, Int32 index) +198
        System.Web.UI.ControlCollection.Add(Control child) +80
        MyCompany.Web.Controls.PageHeader.OnPreRender(EventArgs e)
        System.Web.UI.Control.PreRenderRecursiveInternal() +80
        System.Web.UI.Control.PreRenderRecursiveInternal() +171
        System.Web.UI.Control.PreRenderRecursiveInternal() +171
        System.Web.UI.Page.ProcessRequestMain(Boolean includeStagesBeforeAsyncPoint, Boolean includeStagesAfterAsyncPoint) +842
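    A minimal sketch of the catch-up behavior in question (assuming a plain Web Forms page; class names are made up): ASP.NET tracks each control's stage, and a control added late is immediately run through its earlier lifecycle events to catch it up to its parent, which is why an OnInit can fire during the page's PreRender:

        public class LateChild : Control
        {
            protected override void OnInit(EventArgs e)
            {
                base.OnInit(e);
                // Fires even though the control is added during PreRender:
                // the framework replays Init/Load on late-added controls.
                Page.Trace.Write("LateChild", "OnInit ran");
            }
        }

        public partial class HostPage : Page
        {
            protected override void OnPreRender(EventArgs e)
            {
                base.OnPreRender(e);
                Controls.Add(new LateChild());  // child's OnInit runs from here
            }
        }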


  • Compiler turning a string& into a basic_string<>&

    - by Shtong
    Hello, I'm coming back to C++ after long years spent on other technologies, and I'm stuck on some weird behavior when calling a method that takes std::string parameters. An example of the call:

        LocalNodeConfiguration *LocalNodeConfiguration::ReadFromFile(std::string & path)
        {
            // ...
            throw configuration_file_error(string("Configuration file empty"), path);
            // ...
        }

    When I compile, I get this (I cropped file names for readability):

        /usr/bin/g++ -g -I/home/shtong/Dev/OmegaNoc/build -I/usr/share/include/boost-1.41.0 -o CMakeFiles/OmegaNocInternals.dir/configuration/localNodeConfiguration.cxx.o -c /home/shtong/Dev/OmegaNoc/source/configuration/localNodeConfiguration.cxx
        .../localNodeConfiguration.cxx: In static member function 'static OmegaNoc::LocalNodeConfiguration* OmegaNoc::LocalNodeConfiguration::ReadFromFile(std::string&)':
        .../localNodeConfiguration.cxx:72: error: no matching function for call to 'OmegaNoc::configuration_file_error::configuration_file_error(std::string, std::basic_string<char, std::char_traits<char>, std::allocator<char> >&)'
        .../configurationManager.hxx:25: note: candidates are: OmegaNoc::configuration_file_error::configuration_file_error(std::string&, std::string&)
        .../configurationManager.hxx:22: note:                 OmegaNoc::configuration_file_error::configuration_file_error(const OmegaNoc::configuration_file_error&)

    So as I understand it, the compiler considers that my path parameter turned into a basic_string at some point, and thus doesn't find the constructor overload I want to use. But I don't really get why this transformation happened. Some searching on the net suggested I use g++, but I was already using it. Any other advice would be appreciated :) Thanks
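    For what it's worth, the error text itself points at a likely cause: std::string is just a typedef for basic_string<char, ...>, and the listed candidate constructor takes non-const std::string& parameters, so the temporary built by string("Configuration file empty") cannot bind to it. A sketch of the usual signature fix (the base class here is an assumption; the question doesn't show the real declaration):

        #include <stdexcept>
        #include <string>

        class configuration_file_error : public std::runtime_error
        {
        public:
            // const references accept temporaries and lvalues alike.
            configuration_file_error(const std::string& msg,
                                     const std::string& path)
                : std::runtime_error(msg + " (" + path + ")") {}
        };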

