Search Results

Search found 6342 results on 254 pages for 'behavior'.

  • UIButton stops responding after going into landscape mode - iPhone

    - by casey
    I've been trying different things for the last few days and I've run out of ideas, so I'm looking for help. The situation is that I'm displaying my in-app purchasing store view after the user taps a button: button pressed, view displayed. The store shows fine. Inside this view I have a few labels with descriptions of the product, and below them the price and a Buy button that triggers the in-app purchase.

    The problem is that when I rotate the phone to landscape, that Buy button no longer responds. Weird. It works fine in portrait. In landscape, touching the button does nothing: it doesn't appear to press down or become selected, it just ignores my touches. But when I rotate back to portrait, or even upside-down portrait, it works fine again.

    Here is the rough structure of my view in IB; all the rotation and layout is set up in IB. I set the autoresizing in IB so that everything looks OK in landscape, and the Buy button expands horizontally a little bit. The only layout manipulation I do in code is setting the content size of the scroll view after loading.

        File Owner (view set to the scrollView)
        / scrollView
        ----/ view
        --------/ label
        --------/ label
        --------/ label
        --------/ label
        --------/ label
        --------/ label
        --------/ label
        --------/ label
        --------/ uibutton (Buy)

    After orientation changes I printed out the userInteractionEnabled property of the scrollView and the button, and both were TRUE in all orientations. Ideas? Or maybe some other way of displaying a buy button that won't become nonfunctional? I've already begun a branch that places the buy button on a toolbar, but I can't seem to get the bar to stay in place while scrolling.

  • Python re module becomes 20 times slower when called with more than 100 different regexes

    - by Wiil
    My problem is about parsing log files and removing the variable parts of each line so that lines can be grouped. For instance:

        s = re.sub(r'(?i)User [_0-9A-z]+ is ', r"User .. is ", s)
        s = re.sub(r'(?i)Message rejected because : (.*?) \(.+\)', r'Message rejected because : \1 (...)', s)

    I have about 120+ matching rules like those above. I found no performance issues while applying 100 different regexes in succession, but a huge slowdown appears when applying 101. The exact same behavior happens when replacing my rule set with:

        for a in range(100):
            s = re.sub(r'(?i)caught here'+str(a)+':.+', r'( ... )', s)

    It got 20 times slower when using range(101) instead:

        # range(100)
        % ./dashlog.py file.bz2
        == Took 2.1 seconds.  ==

        # range(101)
        % ./dashlog.py file.bz2
        == Took 47.6 seconds. ==

    Why is this happening, and is there any known workaround? (It happens on Python 2.6.6/2.7.2, on both Linux and Windows.)
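    A likely explanation, and a minimal sketch of the usual workaround: the re module keeps an internal cache of compiled patterns that holds only a limited number of entries (100 in CPython 2), and once that limit is exceeded the cache is purged, so every subsequent re.sub recompiles its pattern from scratch. Precompiling the patterns once and calling sub on the compiled objects bypasses the cache entirely (the rule list and normalize function below are illustrative, not from the original post):

        import re

        # Compiled once at module load; pattern.sub() never touches
        # the re module's internal pattern cache.
        RULES = [
            (re.compile(r'(?i)User [_0-9A-z]+ is '), r'User .. is '),
            (re.compile(r'(?i)Message rejected because : (.*?) \(.+\)'),
             r'Message rejected because : \1 (...)'),
        ]

        def normalize(line):
            for pattern, replacement in RULES:
                line = pattern.sub(replacement, line)
            return line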

  • Quality of TFS 2008 merged code

    - by paologios
    Does the quality of code merged by TFS 2008 depend on the programming language used? I know merging in Java with Subversion, and merging a branch to its trunk usually does not create many conflicts. Now, in my company, we use VB.NET. When I merge two files, TFS does not always get the code blocks right, e.g. it does not match the right If ... Then / End If lines.

    To give you an example: File 2 is created as a branch of File 1. Both files were changed later; now I'm merging those files and am receiving conflicts. The marked End If lines (1) are detected as corresponding, meaning the added event handler Button1_Click is being deleted. Now I wonder whether this behavior depends on the language (C# vs. VB.NET), or whether other source control solutions are just better than TFS? (And I really liked TFS up to now :) )

    File 1:

        Protected Sub Page_Load(ByVal sender As Object, ByVal e As System.EventArgs) Handles Me.Load
            If Not Page.IsPostBack Then
                Label1.Text = "Hello"
                Label2.Text = "World"
            End If
        End Sub

        Protected Sub Button2_Click(ByVal sender As Object, ByVal e As System.EventArgs) Handles Button2.Click
            ' ....
            If Page.IsValid Then
                Label3.Text = "Hello Button 2"
            End If
            ' ....
        End Sub

    File 2 (branch of File 1):

        Protected Sub Page_Load(ByVal sender As Object, ByVal e As System.EventArgs) Handles Me.Load
            If Not Page.IsPostBack Then
                fillTableFromDatabase()
            End If ' (1)
        End Sub

        Protected Sub Button1_Click(ByVal sender As Object, ByVal e As System.EventArgs) Handles Button1.Click
            ' do something here
        End Sub

        Protected Sub Button2_Click(ByVal sender As Object, ByVal e As System.EventArgs) Handles Button2.Click
            ' ....
            If Page.IsValid Then
            End If ' (1)
            ' ....
        End Sub

  • named type not used for constructor injection

    - by nmarun
    Hi, I have a simple console application with the following setup:

        public interface ILogger
        {
            void Log(string message);
        }

        class NullLogger : ILogger
        {
            private readonly string version;

            public NullLogger()
            {
                version = "1.0";
            }

            public NullLogger(string v)
            {
                version = v;
            }

            public void Log(string message)
            {
                Console.WriteLine("NULL " + version + " : " + message);
            }
        }

    The configuration details are below:

        <type type="UnityConsole.ILogger, UnityConsole" mapTo="UnityConsole.NullLogger, UnityConsole" />

    My calling code looks like this:

        IUnityContainer container = new UnityContainer();
        UnityConfigurationSection section = (UnityConfigurationSection)ConfigurationManager.GetSection("unity");
        section.Containers.Default.Configure(container);
        ILogger nullLogger = container.Resolve<ILogger>();
        nullLogger.Log("hello");

    This works fine, but once I give a name to this type, something like:

        <type type="UnityConsole.ILogger, UnityConsole" mapTo="UnityConsole.NullLogger, UnityConsole" name="NullLogger" />

    the above calling code does not work, even if I explicitly register the type using container.RegisterType<ILogger, NullLogger>(). I get the error:

        Resolution of the dependency failed, type = "UnityConsole.ILogger", name = "".
        Exception message is: The current build operation (build key Build Key[UnityConsole.NullLogger, null]) failed:
        The parameter v could not be resolved when attempting to call constructor UnityConsole.NullLogger(System.String v).
        (Strategy type BuildPlanStrategy, index 3)

    Why doesn't Unity look into named instances? To get it to work, I have to do:

        ILogger nullLogger = container.Resolve<ILogger>("NullLogger");

    Where is this behavior documented? Arun

  • How do you use indent in vim for web development?

    - by Somebody still uses you MS-DOS
    I'm starting to use Linux and Vim at work. I'm beginning to read Vim's documentation and to create my own .vimrc file and such. I'm a web developer working with HTML, XML, CSS, JS, Python, PHP, ZPT, DTML and SQL.

    I would like to have an indent feature like this: for each language/file type, a corresponding indent behavior. So, in JS, writing function test(){|} would turn into

        function test(){
            |
        }

    In PHP, writing <?php function test(){|} would turn into

        <?php
        function test(){
            |
        }

    ...and so on. Writing a function definition in Python, and then creating a for loop statement, would automatically create an indent.

    I'm starting with autoindent, smartindent and cindent, but I'm a little confused about their differences. How does indenting in Vim work? Am I supposed to download plugins for each language? Is the behavior I described possible with existing plugins you're used to, or do I have to create it? I keep seeing people using Vim, and I'm trying to do the same since the machine I'm using is too limited, but I'm afraid I won't be able to get a decent auto-indenting solution on it. (I have used auto-indenting in a small project in Visual Studio, and really liked its approach. Is there a plugin for that?)

  • Manually changing keyboard orientation for a view that's on top of a camera view

    - by XKR
    I'm basically trying to reproduce the core functionality of the "At Once" app. I have a camera view and another view with a text view on it. I add both views to the window, and all is well so far:

        [window addSubview:imagePicker.view];
        [window addSubview:textViewController.view];

    I understand that UIImagePickerController does not support autorotation, so I handle it manually by watching UIDeviceOrientationDidChangeNotifications and applying the necessary transforms to textViewController.view.

    Now, the problem here is the keyboard. If I do nothing, it just stays in portrait mode. I can get it to rotate by adding the following code to the notification handler:

        [[UIApplication sharedApplication] setStatusBarOrientation:interfaceOrientation];
        [textView resignFirstResponder];
        [textView becomeFirstResponder];

    However, the following simple test produces weird behavior:

        1. Start the app in portrait mode.
        2. Rotate the device 90 degrees clockwise.
        3. Rotate the device 90 degrees counterclockwise (back to the initial position).
        4. Rotate the device 90 degrees clockwise.

    After step 4, instead of the landscape-mode keyboard, the portrait-style keyboard is shown, skewed to fit in the landscape keyboard frame. Perhaps my approach is wrong from the start. I was wondering if anyone has been able to reliably make the keyboard change its orientation in response to setStatusBarOrientation.

  • Python base classes share attributes?

    - by tad
    Code in test.py:

        class Base(object):
            def __init__(self, l=[]):
                self.l = l

            def add(self, num):
                self.l.append(num)

            def remove(self, num):
                self.l.remove(num)

        class Derived(Base):
            def __init__(self, l=[]):
                super(Derived, self).__init__(l)

    Python shell session:

        Python 2.6.5 (r265:79063, Apr  1 2010, 05:22:20)
        [GCC 4.4.3 20100316 (prerelease)] on linux2
        Type "help", "copyright", "credits" or "license" for more information.
        >>> import test
        >>> a = test.Derived()
        >>> b = test.Derived()
        >>> a.l
        []
        >>> b.l
        []
        >>> a.add(1)
        >>> a.l
        [1]
        >>> b.l
        [1]
        >>> c = test.Derived()
        >>> c.l
        [1]

    I was expecting "C++-like" behavior, in which each derived object contains its own instance of the base class. Is this still the case? Why does each object appear to share the same list instance?
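    A minimal sketch of the usual explanation and fix (assuming the standard mutable-default-argument pitfall, not anything inheritance-related): the default value l=[] is evaluated exactly once, when each __init__ is defined, so every instance constructed without an argument ends up holding a reference to that single shared list. Using None as the sentinel creates a fresh list per instance:

        class Base(object):
            def __init__(self, l=None):
                # A new list is built on every call, rather than reusing
                # the one default list bound at function-definition time.
                self.l = [] if l is None else l

            def add(self, num):
                self.l.append(num)

            def remove(self, num):
                self.l.remove(num)

        class Derived(Base):
            def __init__(self, l=None):
                super(Derived, self).__init__(l)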

  • Cannot overload function

    - by anio
    So I've got a templatized class, and I want to overload the behavior of a function when I have a specific type, say char. For all other types, let them do their own thing. However, C++ won't let me overload the function. Why can't I overload this function? I really, really do not want to do template specialization, because then I'd have to duplicate the entire class.

    Here is a toy example demonstrating the problem: http://codepad.org/eTgLG932. The same code is posted here for your reading pleasure:

        #include <iostream>
        #include <cstdlib>
        #include <string>

        struct Bar
        {
            std::string blah() { return "blah"; }
        };

        template <typename T>
        struct Foo
        {
        public:
            std::string doX()
            {
                return m_getY(&my_t);
            }

        private:
            std::string m_getY(char* p_msg)
            {
                return std::string(p_msg);
            }

            std::string m_getY(T* p_msg)
            {
                return p_msg->blah();
            }

            T my_t;
        };

        int main(int, char**)
        {
            Foo<char> x;
            Foo<Bar> y;
            std::cout << "x " << x.doX() << std::endl;
            return EXIT_SUCCESS;
        }

    Thank you everyone for your suggestions. Two valid solutions have been presented: I can either specialize the doX method or specialize the m_getY method. At the end of the day I prefer to keep my specializations private rather than public, so I'm accepting Krill's answer.

  • What is the purpose of the Html "no-js" class?

    - by Swader
    I notice that in a lot of template engines, in the HTML5 Boilerplate, in various frameworks and in plain PHP sites, there is a no-js class added to the html element. Why is this done? Is there some default browser behavior that reacts to this class? Why always include it? Does that not render the class itself obsolete, if there is never a case without "no-js" and html can be addressed directly?

    Here is an example from the HTML5 Boilerplate index.html:

        <!--[if lt IE 7 ]> <html lang="en" class="no-js ie6"> <![endif]-->
        <!--[if IE 7 ]>    <html lang="en" class="no-js ie7"> <![endif]-->
        <!--[if IE 8 ]>    <html lang="en" class="no-js ie8"> <![endif]-->
        <!--[if IE 9 ]>    <html lang="en" class="no-js ie9"> <![endif]-->
        <!--[if (gt IE 9)|!(IE)]><!--> <html lang="en" class="no-js"> <!--<![endif]-->

    As you can see, the html element will always have this class. Can someone explain why this is done so often?

  • Are there equivalents to Ruby's method_missing in other languages?

    - by Justin Ethier
    In Ruby, objects have a handy method called method_missing which allows one to handle method calls for methods that have not even been (explicitly) defined:

        Invoked by Ruby when obj is sent a message it cannot handle. symbol is the symbol for the
        method called, and args are any arguments that were passed to it. By default, the interpreter
        raises an error when this method is called. However, it is possible to override the method to
        provide more dynamic behavior. The example below creates a class Roman, which responds to
        methods with names consisting of roman numerals, returning the corresponding integer values.

        class Roman
          def romanToInt(str)
            # ...
          end

          def method_missing(methId)
            str = methId.id2name
            romanToInt(str)
          end
        end

        r = Roman.new
        r.iv      #=> 4
        r.xxiii   #=> 23
        r.mm      #=> 2000

    For example, Ruby on Rails uses this to allow calls to methods such as find_by_my_column_name. My question is: what other languages support an equivalent to method_missing, and how do you implement the equivalent in your code?
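    As one illustration, a rough Python equivalent (a sketch only; the numeral parsing is simplified and does no validation) is __getattr__, which, much like method_missing, is invoked only when normal attribute lookup fails:

        class Roman(object):
            NUMERALS = {'i': 1, 'v': 5, 'x': 10, 'l': 50,
                        'c': 100, 'd': 500, 'm': 1000}

            def roman_to_int(self, s):
                total = 0
                for ch, nxt in zip(s, s[1:] + ' '):
                    value = self.NUMERALS[ch]
                    # Subtractive notation: iv == 4, ix == 9, ...
                    total += -value if value < self.NUMERALS.get(nxt, 0) else value
                return total

            def __getattr__(self, name):
                # Reached only for names not found by ordinary lookup.
                return self.roman_to_int(name)

        r = Roman()
        print(r.iv)     # 4
        print(r.xxiii)  # 23
        print(r.mm)     # 2000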

  • UIImagePickerController crashes on rapid scrolling, slower than photos app

    - by vvanhee
    Most of the time, my image picker works perfectly (iOS 4.2.1). However, if I scroll very rapidly up and down about 4-6 times through my camera roll of about 300 photos, I get a crash. This never happens with the stock Photos app on the same iPhone 3GS. Also, I'm noticing that the stock Photos app scrolls much more smoothly than my image picker. Has anyone else noticed this behavior? I'd be interested if others could attempt this in their own apps and see if they crash.

    I don't think it's related to other objects hogging memory on my iPhone, because it's a simple app and this happens right after I start it. Based on my crash logs, and the fact that the simulator responds well to simulated memory warnings, it also doesn't seem to be related to messages sent to released objects or to over-releasing objects in viewDidUnload. I think it might be a bug in the internal implementation of UIImagePickerController.

    This is how I start the picker. I've done this multiple ways (including setting a retain property for the UIImagePickerController in my header and releasing on dealloc); this seems to be the best way (crashes least):

        UIImagePickerController *picker = [[UIImagePickerController alloc] init];
        picker.delegate = self;
        picker.sourceType = UIImagePickerControllerSourceTypeSavedPhotosAlbum;
        picker.allowsEditing = YES;
        [self presentModalViewController:picker animated:YES];
        [picker release];

    This is the crashed thread (I get various exception types):

        Exception Type:  SIGSEGV
        Exception Codes: SEGV_ACCERR at 0xfffffffff4faafa4
        Crashed Thread:  8
        ...
        Thread 8 Crashed:
        0   CoreFoundation    0x000494ea -[__NSArrayM replaceObjectAtIndex:withObject:] + 98
        1   PhotoLibrary      0x00008e0f -[PLImageTable _segmentAtIndex:] + 527
        2   PhotoLibrary      0x00008a21 -[PLImageTable _mappedImageDataAtIndex:] + 221
        3   PhotoLibrary      0x0000893f -[PLImageTable dataForEntryAtIndex:] + 15
        4   PhotoLibrary      0x000087e7 PLThumbnailManagerImageDataAtIndex + 35
        5   PhotoLibrary      0x00008413 -[PLThumbnailManager _dataForPhoto:format:width:height:bytesPerRow:dataWidth:dataHeight:imageDataOffset:imageDataFormat:preheat:] + 299
        6   PhotoLibrary      0x000b6c13 __-[PLThumbnailManager preheatImageDataForImages:withFormat:]_block_invoke_1 + 159
        7   libSystem.B.dylib 0x000d6680 _dispatch_call_block_and_release + 20
        8   libSystem.B.dylib 0x000d6ba0 _dispatch_worker_thread2 + 128
        9   libSystem.B.dylib 0x0007b251 _pthread_wqthread + 265

  • Pre-formatting text to prevent reflowing

    - by mattjn
    I've written a fairly simple script that takes elements (in this case, <p> elements are the main concern) and types their contents out like a typewriter, one character at a time. The problem is that as it types, when it reaches the edge of the container mid-word, it reflows the text and the word jumps to the next line (like word wrap in any text editor). This is, of course, expected behavior; however, I would like to pre-format the text so that this does not happen.

    I figure that inserting <br> before each word that would wrap is the best solution, but I'm not quite sure of the best way to do that while supporting all font sizes and container widths and keeping any HTML tags intact. I figure something involving a hidden <span> element, adding text to it gradually and checking its width against the container width, might be on the right track, but I'm not quite sure how to actually put this together. Any help, or suggestions of better methods, would be appreciated.

  • Need opinions on LaTeX and ever upgrading

    - by yCalleecharan
    Hi, I've been using LaTeX since 2005 with the TeXLive distribution, and I've upgraded as each new TeXLive distribution came out. In recent years I've noticed an increase in new packages and updated packages, and in one instance a new package bearing a different name replacing an old one by the same package author. A LaTeX document that relies heavily on packages and was produced a few years back may start to produce warnings and error messages under a present-day LaTeX compilation.

    The primary reason I switched to LaTeX is its reliability and robustness for creating big documents easily, not to mention the adorable typographic quality. With LaTeX, one doesn't have to worry about how to open a docx in an old program supporting only doc, for instance. Now, when there is so much continual change in the packages of a LaTeX distribution, I tend to wonder when this madness will end. Not that enhanced and new features in packages are bad, but not all updated packages are backward compatible. Eventually, one would like to be able to compile a LaTeX file one is working on at present in 10 years' time, without getting compilation warnings or error messages due to the unpredictable behavior of updated packages, or due to a package that has been cast off from a LaTeX distribution.

    If I understand correctly, CTAN does keep a database with all packages from different versions. I would like to know how you LaTeX users handle this issue. Thanks a lot...

  • Cannot pass null to server using jQuery AJAX. Value received at the server is the string "null".

    - by Tom
    I am converting a JavaScript/PHP/Ajax application to jQuery to ensure compatibility with browsers other than Firefox. I am having trouble passing true, false, and null values using jQuery's ajax function.

    JavaScript code:

        $.ajax({
            url: <server_url>,
            dataType: 'json',
            type: 'POST',
            success: receiveAjaxMessage,
            data: {
                valueTrue: true,
                valueFalse: false,
                valueNull: null
            }
        });

    PHP code:

        var_dump($_POST);

    Server output:

        array(3) {
          ["valueTrue"]=> string(4) "true"
          ["valueFalse"]=> string(5) "false"
          ["valueNull"]=> string(4) "null"
        }

    The problem is that the null, true, and false values are being converted to strings. The JavaScript Ajax code currently in use passes null, true, and false correctly, but only works in Firefox. Does anyone know how to solve this problem using jQuery?

    Here is some working code (not using jQuery) to compare with the non-working code given above.

    JavaScript code:

        ajaxPort.send(<server_url>, {
            valueTrue: true,
            valueFalse: false,
            valueNull: null
        });

    PHP code:

        var_dump(json_decode(file_get_contents('php://input'), true));

    Server output:

        array(3) {
          ["valueTrue"]=> bool(true)
          ["valueFalse"]=> bool(false)
          ["valueNull"]=> NULL
        }

    Note that the null, true, and false values are correctly received. Note also that in the second method the $_POST array is not used in the PHP code. I think this is the key to the problem, but I cannot find a way to replicate this behavior using jQuery.

  • Wicket application + Apache + mod_jk - AJP queues are filling up!

    - by nojyarg
    Dear community, we have a Wicket-based Java application deployed on a production server cluster, using Apache (2.2.3) with mod_jk (1.2.30) as the load-balancing component with sticky sessions, and JBoss 5 as the application container for the Java application.

    We inconsistently see an issue in our production environment where the AJP queues between Apache and JBoss (as shown in the JMX console) fill up with requests to the point where the application server no longer accepts any new requests. When looking at all the system components involved (overall traffic, database load, database process list, load on all clustered application server nodes), nothing points towards a capacity issue that would explain why calls are stalled in the AJP queue; all systems appear sufficiently idle. So far, our only remedy is to restart the application servers and the load balancer, which only occasionally clears the AJP queues.

    We are trying to figure out why the queues fill up to the point that no calls get returned to the end user, although the system is not under high load. Has anyone else experienced similar problems? Are there other system metrics we should monitor that could explain the queuing behavior? Is this potentially a mod_jk issue? If so, is it advisable to swap mod_jk for mod_cluster to resolve the issue? Any advice is highly appreciated. If I can provide additional information for the sake of troubleshooting, I would be more than willing to do so. /Ben

  • additive texture combiner

    - by ivicaa
    I have a problem which is driving me crazy. Environment: iPhone, OpenGL ES 1.1.

    Basically, I have a simple GL_COMBINE that adds the vertex color and the texture color:

        glColor4f(0.1f, 0.1f, 0.1f, 0);

        glTexEnvi(GL_TEXTURE_ENV, GL_TEXTURE_ENV_MODE, GL_COMBINE);

        glTexEnvi(GL_TEXTURE_ENV, GL_COMBINE_RGB, GL_ADD);
        glTexEnvi(GL_TEXTURE_ENV, GL_SRC0_RGB, GL_PRIMARY_COLOR);
        glTexEnvi(GL_TEXTURE_ENV, GL_OPERAND0_RGB, GL_SRC_COLOR);
        glTexEnvi(GL_TEXTURE_ENV, GL_SRC1_RGB, GL_TEXTURE);
        glTexEnvi(GL_TEXTURE_ENV, GL_OPERAND1_RGB, GL_SRC_COLOR);

        glTexEnvi(GL_TEXTURE_ENV, GL_COMBINE_ALPHA, GL_ADD);
        glTexEnvi(GL_TEXTURE_ENV, GL_SRC0_ALPHA, GL_PRIMARY_COLOR);
        glTexEnvi(GL_TEXTURE_ENV, GL_OPERAND0_ALPHA, GL_SRC_ALPHA);
        glTexEnvi(GL_TEXTURE_ENV, GL_SRC1_ALPHA, GL_TEXTURE);
        glTexEnvi(GL_TEXTURE_ENV, GL_OPERAND1_ALPHA, GL_SRC_ALPHA);

    It should simply compute VertexColorRGBA + TextureRGBA. The alpha works fine, but as soon as I change R, G, or B in the glColor4f call, the final alpha is also modified. Does anyone have a hint about this unexpected behavior? Thanks in advance! Ivica

  • C++ Suppress Automatic Initialization and Destruction

    - by Travis G
    How does one suppress the automatic initialization and destruction of a type? While it is wonderful that T buffer[100] automatically initializes all the elements of buffer and destroys them when they fall out of scope, this is not the behavior I want.

        #include <cstddef>  // for size_t
        #include <iostream>

        static int created = 0, destroyed = 0;

        struct S
        {
            S() { ++created; }
            ~S() { ++destroyed; }
        };

        template <typename T, size_t KCount>
        class Array
        {
        private:
            T m_buffer[KCount];

        public:
            Array()
            {
                // some way to suppress the automatic initialization of m_buffer
            }

            ~Array()
            {
                // some way to suppress the automatic destruction of m_buffer
            }
        };

        int main()
        {
            {
                Array<S, 100> arr;
            }
            std::cout << "Created:\t" << created << std::endl;
            std::cout << "Destroyed:\t" << destroyed << std::endl;
            return 0;
        }

    The output of this program is:

        Created:   100
        Destroyed: 100

    I would like it to be:

        Created:   0
        Destroyed: 0

    My only idea is to make m_buffer some trivially constructed and destructed type like char, and then rely on operator[] to wrap the pointer math for me, although this seems like a horribly hacked solution. Another solution would be to use malloc and free, but that introduces a level of indirection that I do not want.

  • Are C functions declared in <c____> headers guaranteed to be in the global namespace as well as std?

    - by Evan Teran
    So this is something I've always wondered but was never quite sure about, so it is strictly a matter of curiosity, not a real problem. As far as I understand, when you do something like

        #include <cstdlib>

    everything (except macros, of course) is declared in the std:: namespace. Every implementation that I've ever seen does this by doing something like the following:

        #include <stdlib.h>

        namespace std {
            using ::abort;
            // etc....
        }

    which of course has the effect of putting things in both the global namespace and std. Is this behavior guaranteed? Or is it possible that an implementation could put these things in std but not in the global namespace? The only way I can think of to do that would be to have your libstdc++ implement every C function itself, placing them in std directly instead of just including the existing libc headers (because there is no mechanism for removing something from a namespace), which is of course a lot of effort for little to no benefit.

    The essence of my question is: is the following program strictly conforming and guaranteed to work?

        #include <cstdio>

        int main()
        {
            ::printf("hello world\n");
        }

  • What are the default return values for operator< and operator[] in C++ (Visual Studio 6)?

    - by DustOff
    I've inherited a large Visual Studio 6 C++ project that needs to be migrated to VS2005. Some of the classes define operator< and operator[], but don't specify return types in the declarations. VS6 allows this, but VS2005 does not. I am aware that the C standard specifies that the default return type for normal functions is int, and I assumed VS6 might have been following that, but would this apply to C++ operators as well? Or could VS6 figure out the return type on its own?

    For example, the code defines a custom string class like this:

        class String
        {
            char arr[16];
        public:
            operator<(const String& other) { return something1 < something2; }
            operator[](int index) { return arr[index]; }
        };

    Would VS6 have simply made both return types int, or would it have been smart enough to figure out that operator[] should return a char and operator< should return a bool (and not convert both results to int all the time)?

    Of course I have to add return types to make this code compile under VS2005, but I want to make sure to specify the same types as before, so as not to immediately change program behavior (we're going for compatibility at the moment; we'll standardize things later).

  • Fluent NHibernate AutoMap

    - by Markus
    Hi. I have a question regarding the AutoMap XML generation. I have two classes:

        public class User
        {
            virtual public Guid Id { get; private set; }
            virtual public String Name { get; set; }
            virtual public String Email { get; set; }
            virtual public String Password { get; set; }
            virtual public IList<OpenID> OpenIDs { get; set; }
        }

        public class OpenID
        {
            virtual public Guid Id { get; private set; }
            virtual public String Provider { get; set; }
            virtual public String Ticket { get; set; }
            virtual public User User { get; set; }
        }

    The generated XML mappings are as follows.

    For the User class:

        <bag name="OpenIDs">
          <key>
            <column name="User_Id" />
          </key>
          <one-to-many class="BL_DAL.Entities.OpenID, BL_DAL, Version=1.0.0.0, Culture=neutral, PublicKeyToken=null" />
        </bag>

    For the OpenID class:

        <many-to-one class="BL_DAL.Entities.User, BL_DAL, Version=1.0.0.0, Culture=neutral, PublicKeyToken=null" name="User">
          <column name="User_id" />
        </many-to-one>

    I don't see the inverse="true" attribute on the User mapping. Is this normal behavior, or did I make a mistake somewhere?

  • TCP Socket.Connect is generating false positives

    - by Mark
    I'm experiencing really weird behavior with the Socket.Connect method in C#. I am attempting a TCP Socket.Connect to a valid IP but a closed port, and the method continues as if I had successfully connected. When I sniffed the packets, I saw that the app was receiving RST packets from the remote machine. Yet from the tracing in place, it is clear that the Connect method is not throwing an exception. Any ideas what might be causing this? The code that is running is basically this:

        IPEndPoint iep = new IPEndPoint(System.Net.IPAddress.Parse(m_ipAddress), m_port);
        Socket tcpSocket = new Socket(AddressFamily.InterNetwork, SocketType.Stream, ProtocolType.Tcp);
        tcpSocket.Connect(iep);

    To add to the mystery: when running this code in a stand-alone console application, the result is as expected, and the Connect method throws an exception. However, when running it in our Windows service deployment, the Connect method does not throw an exception.

    Edit, in response to Mystere Man's answer: How would the exception be swallowed? I have a Trace.WriteLine right above the Connect call and a Trace.WriteLine right under it (not shown in the code sample for readability), and I know that both traces are run. I also have a try/catch around the whole thing which also does a Trace.WriteLine, and I don't see that in the log files anywhere. I have also enabled the internal socket tracing as you suggested; I don't see any exceptions, only what appear to be successful connections. I am trying to identify differences between the Windows service app and the diagnostic console app I made, but I am running out of ideas. Thanks

  • JPA entitylisteners and @embeddable

    - by seanizer
    I have a class hierarchy of JPA entities that all inherit from a BaseEntity class:

        @MappedSuperclass
        @EntityListeners({ ValidatorListener.class })
        public abstract class BaseEntity implements Serializable {
            // other stuff
        }

    I want all entities that implement a given interface to be validated automatically on persist and/or update. Here's what I've got. My ValidatorListener:

        public class ValidatorListener {

            private enum Type {
                PERSIST, UPDATE
            }

            @PrePersist
            public void checkPersist(final Object entity) {
                if (entity instanceof Validateable) {
                    this.check((Validateable) entity, Type.PERSIST);
                }
            }

            @PreUpdate
            public void checkUpdate(final Object entity) {
                if (entity instanceof Validateable) {
                    this.check((Validateable) entity, Type.UPDATE);
                }
            }

            private void check(final Validateable entity, final Type persist) {
                switch (persist) {
                case PERSIST:
                    if (entity instanceof Persist) {
                        ((Persist) entity).persist();
                    }
                    if (entity instanceof PersistOrUpdate) {
                        ((PersistOrUpdate) entity).persistOrUpdate();
                    }
                    break;
                case UPDATE:
                    if (entity instanceof Update) {
                        ((Update) entity).update();
                    }
                    if (entity instanceof PersistOrUpdate) {
                        ((PersistOrUpdate) entity).persistOrUpdate();
                    }
                    break;
                default:
                    break;
                }
            }
        }

    And here's the Validateable interface it checks against (the outer interface is just a marker; the inner ones contain the methods):

        public interface Validateable {

            interface Persist extends Validateable {
                void persist();
            }

            interface PersistOrUpdate extends Validateable {
                void persistOrUpdate();
            }

            interface Update extends Validateable {
                void update();
            }
        }

    All of this works. However, I would like to extend this behavior to Embeddable classes. I know two solutions:

    1. Call the validation method of the embeddable object manually from the entity validation method:

        public void persistOrUpdate() {
            // validate my own properties first,
            // then manually validate the embeddable property:
            myEmbeddable.persistOrUpdate();
            // this works, but I'd like something that I don't have to call manually
        }

    2. Use reflection, checking all properties to see whether their type implements one of the interface types. This would work, but it's not pretty.

    Is there a more elegant solution?

  • Problems with video conversions through the web (local host)

    - by ron-d
    Hello, I get the following errors when I attempt video format conversions called from the local host:

        "An invalid media type was specified" for M4V to WMV conversions.
        "One or more arguments are invalid" for MP4 to WMV conversions.

    Here are the details of the problems. I've written a DLL in C# that accepts videos in the formats AVI, WMV, M4V and MP4 and performs the following actions:

        1. Creates a copy of the input video in WMV format.
        2. Creates a WAV file of the audio portion of the input video.
        3. Creates a JPG image from a frame of the input video.

    I attached the DLL to an ASP.NET web project that performs the DLL actions. When tested through the development environment, the actions are performed as intended for all formats. When I deploy the web project to be served through the local host in a web browser, the following behavior takes place:

        WMV format: all actions performed as intended.

        AVI format:
            Creates WMV file - OK
            Creates JPG image - OK
            Creates empty WAV file - problem

        M4V format:
            Creates empty WAV file - problem
            Does not create WMV file - problem
            Does not create JPG file - problem
            Throws the error "An invalid media type was specified"

        MP4 format:
            Creates empty WAV file - problem
            Does not create WMV file - problem
            Does not create JPG file - problem
            Throws the error "One or more arguments are invalid"

    When I check their security properties, all the files have the same permission access parameters. Can anyone guide me as to how to solve these problems when the web project is called from the local host? Thank you.

  • Best way to associate data files with particular tests in RSpec / Ruby

    - by Bill T
    For my RSpec tests, I would like to automatically associate data files with each test. To clarify: if my tests each require an XML file as input data, and then some XPath statements to validate the responses they get back, I would like to externalize the XML and XPath as files and have the testing framework easily associate them with the particular test being run, by using the unique ID of the test as the file name.

    I tried to get this behavior, but the solution isn't very clean. I wrote a helper method that takes the value of "description" and combines it with __FILE__ to create a unique identifier, which is set into a global variable that other utilities can access. The unique identifier is used to associate the data files I need. I have to call this helper method as the first line of every test, which is ugly. I have an RSpec example that looks like this:

        describe "Basic functions of this server I'm testing" do
          it "should give me back a response" do
            # Sets a global var to: "my_tests_spec.rb_should_give_me_back_a_response"
            TestHelper::who_am_i __FILE__, description
            ...
          end
        end

    Is there some better, cleaner, slicker way to get a unique ID for each test that I could use to associate data files? Perhaps something built into RSpec I'm unaware of? Thank you, -Bill

  • Mixing menuItem.setIntent with onOptionsItemSelected doesn't work

    - by superjos
    While extending a sample Android activity that fires other activities from its menu, I came to have some menu items handled within onOptionsItemSelected, and some menu items (that just fired intents) handled by calling setIntent within onCreateOptionsMenu. Basically something like:

        @Override
        public boolean onCreateOptionsMenu(Menu menu) {
            super.onCreateOptionsMenu(menu);
            menu.add(0, MENU_ID_1, Menu.NONE, R.string.menu_text_1);
            menu.add(0, MENU_ID_2, Menu.NONE, R.string.menu_text_2);
            menu.add(0, MENU_ID_3, Menu.NONE, R.string.menu_text_3)
                .setIntent(new Intent(this, MyActivity_3.class));
            return true;
        }

        @Override
        public boolean onOptionsItemSelected(MenuItem item) {
            super.onOptionsItemSelected(item);
            switch (item.getItemId()) {
            case (MENU_ID_1):
                // Process menu command 1 ...
                return true;
            case (MENU_ID_2):
                // Process menu command 2 ...
                // E.g. also fire Intent for MyActivity_2
                return true;
            default:
                return false;
            }
        }

    Apparently, in this situation the Intent set on MENU_ID_3 is never fired, or at any rate the related activity is never started. The Android javadoc at some point says that if you set an intent on a menu item "and nothing else handles the item, then the default behavior will be to" start the activity with the intent. What does "and nothing else handles the item" actually mean? Is it enough to return false from onOptionsItemSelected? I also tried not calling super.onOptionsItemSelected(item) at the beginning and only invoking it in the default switch case, but I had the same results. Does anyone have any suggestions? Does Android allow mixing the two types of handling? Thanks for your time, everyone.
