Search Results

Search found 12222 results on 489 pages for 'initial context'.


  • How to get the set of beans that are to be created in Spring?

    - by cyborg
    So here's the scenario: I have a Spring XML configuration with some lazy beans, some non-lazy beans, and some beans that depend on other beans. Eventually Spring will resolve all of this so that only the beans that are meant to be created are created. The question: how can I programmatically tell what this set is? Calling context.getBean(name) initializes the bean, and BeanDefinition.isLazyInit() only tells me how I defined the bean. Any other ideas?

    ETA: In DefaultListableBeanFactory:

        public void preInstantiateSingletons() throws BeansException {
            if (this.logger.isInfoEnabled()) {
                this.logger.info("Pre-instantiating singletons in " + this);
            }
            synchronized (this.beanDefinitionMap) {
                for (Iterator it = this.beanDefinitionNames.iterator(); it.hasNext();) {
                    String beanName = (String) it.next();
                    RootBeanDefinition bd = getMergedLocalBeanDefinition(beanName);
                    if (!bd.isAbstract() && bd.isSingleton() && !bd.isLazyInit()) {
                        if (isFactoryBean(beanName)) {
                            FactoryBean factory = (FactoryBean) getBean(FACTORY_BEAN_PREFIX + beanName);
                            if (factory instanceof SmartFactoryBean && ((SmartFactoryBean) factory).isEagerInit()) {
                                getBean(beanName);
                            }
                        } else {
                            getBean(beanName);
                        }
                    }
                }
            }
        }

    This is where the set of eagerly instantiated beans is built, and while initializing that set, any beans it references that are not in it will also be created. From looking through the source it does not look like there's going to be any easy way to answer my question.
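    One way to estimate that set without triggering instantiation is to walk the bean definitions the same way preInstantiateSingletons() does, but without ever calling getBean(). A minimal sketch against a ConfigurableListableBeanFactory (it only predicts the directly eager singletons; anything pulled in purely as a dependency of those would still require walking each definition's references):

        import java.util.ArrayList;
        import java.util.List;
        import org.springframework.beans.factory.config.BeanDefinition;
        import org.springframework.beans.factory.config.ConfigurableListableBeanFactory;

        public class EagerBeanLister {

            // Names of beans that preInstantiateSingletons() would create directly,
            // computed from metadata alone (no beans are instantiated here).
            public static List<String> eagerSingletons(ConfigurableListableBeanFactory bf) {
                List<String> names = new ArrayList<String>();
                for (String name : bf.getBeanDefinitionNames()) {
                    BeanDefinition bd = bf.getBeanDefinition(name);
                    if (!bd.isAbstract() && bd.isSingleton() && !bd.isLazyInit()) {
                        names.add(name);
                    }
                }
                return names;
            }
        }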

  • pyOpenSSL and the WantReadError

    - by directedition
    I have a socket server that I am trying to move over to SSL on Python 2.5, but I've run into a snag with pyOpenSSL. I can't find any good tutorials on using it, so I'm operating largely on guesses. Here is how my server sets up the socket:

        ctx = SSL.Context(SSL.SSLv23_METHOD)
        ctx.use_privatekey_file("mykey.pem")
        ctx.use_certificate_file("mycert.pem")
        sock = SSL.Connection(ctx, socket.socket(socket.AF_INET, socket.SOCK_STREAM))
        sock.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
        addr = ('', int(8081))
        sock.bind(addr)
        sock.listen(5)

    Here is how it accepts clients:

        sock.setblocking(0)
        while True:
            if len(select([sock], [], [], 0.25)[0]):
                client_sock, client_addr = sock.accept()
                client = ClientGen(client_sock)

    And here is how it sends/receives from the connected sockets:

        while True:
            (r, w, e) = select.select([sock], [sock], [], 0.25)
            if len(r):
                bytes = sock.recv(1024)
            if len(w):
                n_bytes = sock.send(self.message)

    It's compacted, but you get the general idea. The problem is, once the send/receive loop starts, it dies right away, before anything has been sent or received (that I can see, anyway):

        Traceback (most recent call last):
          File "ClientGen.py", line 50, in networkLoop
            n_bytes = sock.send(self.message)
        WantReadError

    The manual's description of 'WantReadError' is very vague, saying it can come from just about anywhere. What am I doing wrong?
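    For context on the error itself (my gloss, not from the original post): on a non-blocking connection, pyOpenSSL raises WantReadError when the TLS layer needs more incoming bytes before the requested operation can finish, e.g. when the handshake is still in progress as the first send() happens. The usual pattern is to treat it as "try again after select() says the socket is readable", roughly:

        from OpenSSL import SSL

        try:
            n_bytes = sock.send(message)
        except SSL.WantReadError:
            # The TLS layer needs to read (e.g. mid-handshake or renegotiation)
            # before this operation can complete; wait for readability and retry.
            pass
        except SSL.WantWriteError:
            # Same idea in the other direction: wait for writability and retry.
            pass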

  • How to use db4o IObjectContainer in a web application? (Container lifetime?)

    - by driis
    I am evaluating db4o for persistence for an ASP.NET MVC project. I am wondering how I should use the IObjectContainer in a web context with regards to object lifetime. As I see it, I can do one of the following:

        1. Create the IObjectContainer at application startup and keep the same instance for the entire application lifetime.
        2. Create one IObjectContainer per request.
        3. Start a server, and get a client IObjectContainer for each database interaction.

    What are the implications of these options, in terms of performance and concurrency? Since the database is locked when an IObjectContainer is opened, I am pretty sure that option 2 would get me some problems with concurrency - would this also be the case for option 1? As I understand it, if I retrieve an object from an IObjectContainer, it must be saved by the same IObjectContainer instance in order for db4o to identify it as being the same object. Therefore, if I choose option 3, I would have to retrieve the original object, make the necessary changes (copy data from a modified object), and then store it using the same IObjectContainer. Is this true?
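    For the third option, a sketch of the in-process client/server shape (this assumes the classic Db4oFactory API; Pilot is a placeholder class, and port 0 keeps the server embedded rather than networked):

        using System.Collections.Generic;
        using Db4objects.Db4o;

        // Opened once, e.g. in Application_Start, and kept for the app lifetime.
        IObjectServer server = Db4oFactory.OpenServer(dbPath, 0);

        // Per request / unit of work: a lightweight client container.
        using (IObjectContainer client = server.OpenClient())
        {
            IList<Pilot> pilots = client.Query<Pilot>(p => p.Name == "Kimi");
            Pilot pilot = pilots[0];
            pilot.Points += 10;
            client.Store(pilot);   // stored via the same container that loaded it
        }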

  • Factorial function - design and test.

    - by lukas
    I'm trying to nail down some interview questions, so I started with a simple one: design the factorial function. This function is a leaf (no dependencies - easily testable), so I made it static inside a helper class.

        public static class MathHelper
        {
            public static int Factorial(int n)
            {
                Debug.Assert(n >= 0);
                if (n < 0)
                {
                    throw new ArgumentException("n cannot be lower than 0");
                }

                Debug.Assert(n <= 12);
                if (n > 12)
                {
                    throw new OverflowException("Overflow occurs above 12 factorial");
                }

                // by definition
                if (n == 0)
                {
                    return 1;
                }

                int factorialOfN = 1;
                for (int i = 1; i <= n; ++i)
                {
                    //checked
                    //{
                    factorialOfN *= i;
                    //}
                }
                return factorialOfN;
            }
        }

    Testing:

        [TestMethod]
        [ExpectedException(typeof(OverflowException))]
        public void Overflow()
        {
            int temp = FactorialHelper.MathHelper.Factorial(40);
        }

        [TestMethod]
        public void ZeroTest()
        {
            int factorialOfZero = FactorialHelper.MathHelper.Factorial(0);
            Assert.AreEqual(1, factorialOfZero);
        }

        [TestMethod]
        public void FactorialOf5()
        {
            int factOf5 = FactorialHelper.MathHelper.Factorial(5);
            Assert.AreEqual(120, factOf5);
        }

        [TestMethod]
        [ExpectedException(typeof(ArgumentException))]
        public void NegativeTest()
        {
            int factOfMinus5 = FactorialHelper.MathHelper.Factorial(-5);
        }

    I have a few questions: Is it correct? (I hope so ;) ) Does it throw the right exceptions? Should I use a checked context, or is this trick (n > 12) OK? Is it better to use uint instead of checking for negative values? Future improvements: overloads for long, decimal, BigInteger, or maybe a generic method? Thank you
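    On the checked-context question, a sketch of the alternative (my variant, not the original code): let checked arithmetic detect the overflow instead of hard-coding the 12 limit, which also keeps the method correct if the return type is ever widened:

        public static int Factorial(int n)
        {
            if (n < 0)
            {
                throw new ArgumentException("n cannot be lower than 0");
            }

            int result = 1;
            for (int i = 2; i <= n; ++i)
            {
                checked
                {
                    result *= i;   // throws OverflowException at 13! instead of wrapping
                }
            }
            return result;
        }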

  • Weird onclick behavior of images on home screen widget

    - by kknight
    I wrote a home screen widget with one image on it. When the image is clicked, the browser is opened to a URL. Generally, it is working. But a weird thing is that if I click the background first and then click the picture, the browser will not open; only when I click the picture a second time does the browser open. The steps to reproduce are:

        1. Click on the home screen widget background.
        2. Click on the image on the home screen. The browser is not opened.
        3. Click on the image again. The browser is opened.

    If I don't click on the background first, the image reacts to clicks just fine, i.e. the browser opens the first time the image is clicked. The widget XML file is as below:

        <RelativeLayout xmlns:android="http://schemas.android.com/apk/res/android"
            android:id="@+id/widget"
            android:layout_width="320dip"
            android:layout_height="200dip"
            android:background="@drawable/prt_base" >
            <ImageView
                android:id="@+id/picture1"
                android:layout_width="134dip"
                android:layout_height="102dip"
                android:layout_marginLeft="62dip"
                android:layout_marginTop="6dip"
                android:scaleType="center"
                android:src="@drawable/picture1" />
        </RelativeLayout>

    The code to set the onClick handler on the picture1 ImageView is as below:

        defineIntent = new Intent(Intent.ACTION_VIEW, Uri.parse("http://www.google.com"));
        pendingIntent = PendingIntent.getActivity(context, 0 /* no requestCode */,
                defineIntent, 0 /* no flags */);
        updateViews.setOnClickPendingIntent(picId, pendingIntent);

    Anyone know what's wrong? Thanks.
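    One thing worth trying (a guess, not a confirmed fix for this exact symptom): the system caches PendingIntents that match by intent, and passing FLAG_UPDATE_CURRENT makes it refresh a matching cached PendingIntent instead of reusing it unchanged:

        // Sketch: same setup as above, but ask the system to update any
        // cached PendingIntent that matches, rather than serving it stale.
        defineIntent = new Intent(Intent.ACTION_VIEW, Uri.parse("http://www.google.com"));
        pendingIntent = PendingIntent.getActivity(context,
                0 /* requestCode */, defineIntent,
                PendingIntent.FLAG_UPDATE_CURRENT);
        updateViews.setOnClickPendingIntent(picId, pendingIntent);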

  • Using CGContextDrawTiledImage at different zooms causes massive memory growth

    - by Jacques
    I'm working on an app where there's a view in a zoomable UIScrollView. When the user zooms in or out, I redraw the view that's in the UIScrollView so it stays nice and sharp. That view has a background image that I draw with CGContextDrawTiledImage. I noticed that memory usage grows every time I switch to a new zoom level. It looks like CGContextDrawTiledImage keeps a cache somewhere of the image scaled to different sizes. So, if I go from 1.0 to 1.1x zoom, memory use grows. Going back to 1.0 doesn't cause it to grow, but then going to 1.05 and then 1.2 causes it to grow twice. Back to 1.1 and no growth. Of course, the zoom level is under user control, so I don't have control over how many zoom levels occur. Right now my background image is kind of massive (512x512), so this causes memory usage to grow very quickly. It doesn't show up as a memory leak in Instruments, just additional allocations that never get freed. I've tried to find a way to free the cache that appears to be being created, but no luck. It doesn't seem to respond to low memory warnings, for example. I also tried setting the view's backgroundColor to a UIColor created with colorWithPatternImage, but that doesn't work because I'm doing the scaling by changing the graphics context's CTM, not by setting the view's transform. Any ideas on how to keep memory usage from blowing up?
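    If the cache really is keyed per scaled size, one workaround sketch is to tile by hand with CGContextDrawImage, which draws the CGImage directly without building a per-zoom tile cache (tileImage and ctx are assumed to be the background CGImageRef and the current context):

        CGRect bounds = CGContextGetClipBoundingBox(ctx);
        size_t tw = CGImageGetWidth(tileImage);
        size_t th = CGImageGetHeight(tileImage);
        for (CGFloat y = CGRectGetMinY(bounds); y < CGRectGetMaxY(bounds); y += th) {
            for (CGFloat x = CGRectGetMinX(bounds); x < CGRectGetMaxX(bounds); x += tw) {
                // Draw each tile explicitly; nothing is cached between zoom levels.
                CGContextDrawImage(ctx, CGRectMake(x, y, tw, th), tileImage);
            }
        }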

  • Canvas Check before submission

    - by smokinguns
    I have a page where a user can draw on the canvas and save the image to a file on the server. The canvas has a default black background. Is there a way to check whether the user has drawn anything on the canvas before submitting the data URL representation of the canvas from the toDataURL() function? That way, if the user doesn't draw anything (leaving a blank canvas with a black background), the image won't be created on the server. Should I loop through each and every pixel of the canvas to determine this? Here is what I'm doing currently:

        var currentPixels = context.getImageData(0, 0, 600, 400);
        for (var y = 0; y < currentPixels.height; y += 1) {
            for (var x = 0; x < currentPixels.width; x += 1) {
                for (var c = 0; c < 3; c += 1) {
                    var i = (y * currentPixels.width + x) * 4 + c;
                    if (currentPixels.data[i] != 0) break;
                }
            }
        }
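    An alternative sketch that avoids the pixel loop: paint a second canvas with the same black background and compare the two data URLs; if they match, nothing was drawn (isCanvasBlank and the submit() call are illustrative names, not part of the original page):

        // Returns true if the user's canvas is identical to an untouched one.
        function isCanvasBlank(canvas) {
            var blank = document.createElement('canvas');
            blank.width = canvas.width;
            blank.height = canvas.height;
            var ctx = blank.getContext('2d');
            ctx.fillStyle = '#000';                       // same default background
            ctx.fillRect(0, 0, blank.width, blank.height);
            return canvas.toDataURL() === blank.toDataURL();
        }

        if (!isCanvasBlank(canvas)) {
            submit(canvas.toDataURL());                   // only upload real drawings
        }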

  • Selecting an option with given value

    - by Maven
    I am trying to select a particular option from a select list depending on the value. I have the following markup:

        <select name="class" id="class">
            <option value="1">679460ED-0B15-4ED9-B3C8-A8C276DF1C82</option>
            <option value="2">B99BF873-7DF0-4E7F-95FF-3F1FD1A26139</option>
            <option value="3">1DCD5AD7-F57C-414</option>
            <option value="4">6B0170AA-F044-4F9C-8BB8-31A51E452CE4</option>
            <option value="5">C6A8B</option>
            <option value="6">1BBD6FA4-335A-4D8F-8681-DFED317B8052</option>
            <option value="7">727D71AB-F7D1-4B83-9D6D-6BEEAAB</option>
            <option value="8">BC4DE8A2-C864-4C7C-B83C-EE2450AF11B1</option>
            <option value="9">AIR CONDITIONING SYSTEM</option>
            <option value="10">POWER GENERATION SYSTEM</option>
        </select>

        <script>
            selectThisValue('#class', 3);
        </script>

    in .js:

        function selectThisValue(element, value) {
            console.log(value);
            var elem = $(element + ' option[value=' + value + ']');
            console.log(elem);
            elem.attr("selected", "selected");
        }

    Results for console.log are as follows:

        3
        [prevObject: i.fn.i.init[1], context: document, selector: "#class option[value=3]", jquery: "1.10.2", constructor: function…]

    But this is not working; no errors are given, but nothing happens either. Please help me identify where I am going wrong.
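    Setting the selected attribute often fails to move the selection once the control has been interacted with; letting jQuery pick the option by value (or setting the selected property rather than the attribute) is the more reliable route - a sketch:

        function selectThisValue(element, value) {
            $(element).val(value);   // selects <option value="3"> in #class
            // or, equivalently:
            // $(element + ' option[value="' + value + '"]').prop('selected', true);
        }

        selectThisValue('#class', 3);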

  • Entity Framework LINQ Query using Custom C# Class Method - Once yes, once no - because executing on the client or in SQL?

    - by BrooklynDev
    I have two Entity Framework 4 LINQ queries I wrote that make use of a custom class method; one works and one does not. The custom method is:

        public static DateTime GetLastReadToDate(string fbaUsername, Discussion discussion)
        {
            return (discussion.DiscussionUserReads
                .Where(dur => dur.User.aspnet_User.UserName == fbaUsername)
                .FirstOrDefault()
                ?? new DiscussionUserRead { ReadToDate = DateTime.Now.AddYears(-99) }).ReadToDate;
        }

    The LINQ query that works calls a from after a from, the equivalent of SelectMany():

        from g in oc.Users.Where(u => u.aspnet_User.UserName == fbaUsername).First().Groups
        from d in g.Discussions
        select new
        {
            UnReadPostCount = d.Posts.Where(p => p.CreatedDate >
                DiscussionRepository.GetLastReadToDate(fbaUsername, p.Discussion)).Count()
        };

    The query that does not work is more like a regular select:

        from d in oc.Discussions
        where d.Group.Name == "Student"
        select new
        {
            UnReadPostCount = d.Posts.Where(p => p.CreatedDate >
                DiscussionRepository.GetLastReadToDate(fbaUsername, p.Discussion)).Count(),
        };

    The error I get is:

        LINQ to Entities does not recognize the method 'System.DateTime GetLastReadToDate(System.String, Discussion)' method, and this method cannot be translated into a store expression.

    My question is: why am I able to use my custom GetLastReadToDate() method in the first query and not the second? I suppose this has something to do with what gets executed on the db server and what gets executed on the client? These queries seem to use GetLastReadToDate() very similarly, so I'm wondering why it works for the first and not the second - and most importantly, whether there's a way to factor common query syntax like what's in GetLastReadToDate() into a separate location to be reused in several different LINQ queries. Please note all these queries share the same object context.
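    The difference is consistent with where each query crosses from LINQ to Entities into LINQ to Objects (my reading): the first query calls .First() on Users, which materializes an entity, so the iteration over Groups/Discussions and the projection run in memory, where a C# method can execute; the second stays a single IQueryable that EF must translate to SQL, and the provider cannot translate GetLastReadToDate(). One way to make that boundary explicit - a sketch, assuming the Posts navigation property gets loaded (lazy loading or a prior Include):

        var unread = oc.Discussions
            .Where(d => d.Group.Name == "Student")
            .AsEnumerable()   // everything below this line runs as LINQ to Objects
            .Select(d => new
            {
                UnReadPostCount = d.Posts.Count(p => p.CreatedDate >
                    DiscussionRepository.GetLastReadToDate(fbaUsername, p.Discussion))
            })
            .ToList();

    The trade-off is that the filtered discussions (and their posts) are pulled into memory before the count runs.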

  • Battery drains even with app off screen, could it be Location Services doing it?

    - by John Jorsett
    I run my app, which uses GPS and Bluetooth, then hit the back button so it goes off screen. I verified via LogCat that the app's onDestroy was called. onDestroy removes the location listeners and shuts down my app's Bluetooth service. I look at the phone 8 hours later and half the battery charge has been consumed, with my app responsible according to the phone's Battery Use screen. If I use the phone's Settings menu to Force Stop the app, this doesn't occur. So my question is: do I need to do something more than remove the listeners to stop Location Services from consuming power? That's the only thing I can think of that would be draining the battery to that degree when the app is supposedly dormant. Here's my onStart() where I turn on the location-related stuff and Bluetooth:

        @Override
        public void onStart() {
            super.onStart();
            if (D_GEN) Log.d(TAG, "MainActivity onStart, adding location listeners");

            // If BT is not on, request that it be enabled.
            // setupBluetooth() will then be called during onActivityResult
            if (!mBluetoothAdapter.isEnabled()) {
                Intent enableIntent = new Intent(BluetoothAdapter.ACTION_REQUEST_ENABLE);
                startActivityForResult(enableIntent, REQUEST_ENABLE_BT);
            // Otherwise, set up the Bluetooth session
            } else {
                if (mBluetoothService == null) setupBluetooth();
            }

            // Define listeners that respond to location updates
            mLocationManager = (LocationManager) this.getSystemService(Context.LOCATION_SERVICE);
            mLocationManager.requestLocationUpdates(LocationManager.GPS_PROVIDER,
                    GPS_UPDATE_INTERVAL, 0, this);
            mLocationManager.addGpsStatusListener(this);
            mLocationManager.addNmeaListener(this);
        }

    And here's my onDestroy() where I remove them:

        public void onDestroy() {
            super.onDestroy();
            if (D_GEN) Log.d(TAG, "MainActivity onDestroy, removing update listeners");

            // Remove the location updates
            if (mLocationManager != null) {
                mLocationManager.removeUpdates(this);
                mLocationManager.removeGpsStatusListener(this);
                mLocationManager.removeNmeaListener(this);
            }
            if (D_GEN) Log.d(TAG, "MainActivity onDestroy, finished removing update listeners");

            if (D_GEN) Log.d(TAG, "MainActivity onDestroy, stopping Bluetooth");
            stopBluetooth();
            if (D_GEN) Log.d(TAG, "MainActivity onDestroy finished");
        }
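    One asymmetry worth flagging in the code above (a sketch of a possible fix, not a confirmed diagnosis): the listeners are registered in onStart() but only removed in onDestroy(). Pairing registration with removal in onStop() guarantees GPS is released whenever the activity leaves the foreground, even on paths where onDestroy() runs late or not at all:

        @Override
        protected void onStop() {
            super.onStop();
            // Mirror onStart(): release GPS as soon as the activity leaves the
            // screen, instead of waiting for onDestroy(), which the framework
            // does not guarantee to call promptly (or at all).
            if (mLocationManager != null) {
                mLocationManager.removeUpdates(this);
                mLocationManager.removeGpsStatusListener(this);
                mLocationManager.removeNmeaListener(this);
            }
        }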

  • threading in c#

    - by I__
    I am using this code:

        private void Form1_Load(object sender, EventArgs e)
        {
        }

        private void serialPort1_DataReceived(object sender, System.IO.Ports.SerialDataReceivedEventArgs e)
        {
            string response = serialPort1.ReadLine();
            this.BeginInvoke(new MethodInvoker(
                () => textBox1.AppendText(response + "\r\n")
            ));
        }

        ThreadStart myThreadDelegate = new ThreadStart(ThreadWork.DoWork);
        Thread myThread = new Thread(myThreadDelegate);
        myThread.Start();

    but am getting lots of errors:

        Error 2: The type or namespace name 'ThreadStart' could not be found (are you missing a using directive or an assembly reference?)
        Error 3: The name 'ThreadWork' does not exist in the current context
        Error 4: The type or namespace name 'Thread' could not be found (are you missing a using directive or an assembly reference?)
        Error 5: A field initializer cannot reference the non-static field, method, or property 'WindowsFormsApplication1.Form1.myThreadDelegate'

    What am I doing wrong?
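    Reading the errors together (a sketch of one way to resolve them, with ThreadWork as a stand-in for the class missing from the copied snippet): errors 2 and 4 point to a missing using System.Threading; directive, error 3 to the missing ThreadWork class, and error 5 arises because the thread statements sit at class level, where only field initializers are allowed, so they need to move inside a method:

        using System.Threading;   // brings Thread and ThreadStart into scope

        public partial class Form1 : Form
        {
            private void Form1_Load(object sender, EventArgs e)
            {
                // Statements must live inside a method, not at class level.
                Thread myThread = new Thread(new ThreadStart(ThreadWork.DoWork));
                myThread.IsBackground = true;   // don't keep the app alive on close
                myThread.Start();
            }
        }

        // Stand-in for the class the copied snippet assumed exists.
        static class ThreadWork
        {
            public static void DoWork()
            {
                // long-running work here
            }
        }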

  • How to transform a production to LL(1) grammar for a list separated by a semicolon?

    - by Subb
    Hi, I'm reading this introductory book on parsing (which is pretty good, btw) and one of the exercises is to "build a parser for your favorite language." Since I don't want to die today, I thought I could do a parser for something relatively simple, i.e. a simplified CSS. Note: the book teaches you how to write an LL(1) parser using the recursive-descent algorithm. So, as a sub-exercise, I am building the grammar from what I know of CSS. But I'm stuck on a production that I can't transform into LL(1):

        //EBNF
        block = "{", declaration, {";", declaration}, [";"], "}"

        //BNF
        <block> ::= "{" <declaration> "}"
        <declaration> ::= <single-declaration> <opt-end>
                        | <single-declaration> ";" <declaration>
        <opt-end> ::= "" | ";"

    This describes a CSS block. Valid blocks can have the forms:

        { property : value }
        { property : value; }
        { property : value; property : value }
        { property : value; property : value; }
        ...

    The problem is the optional ";" at the end, because it overlaps with the starting character of {";", declaration}: when my parser meets a semicolon in this context, it doesn't know what to do. The book talks about this problem, but in its example the semicolon is mandatory, so the rule can be modified like this:

        block = "{", declaration, ";", {declaration, ";"}, "}"

    So, is it possible to achieve what I'm trying to do using an LL(1) parser?
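    For what it's worth, the production can be left-factored so that every choice is decided by a single token of lookahead - a sketch in the same EBNF style (the nonterminal names tail and opt-decl are mine):

        block    = "{", declaration, tail, "}"
        tail     = "" | ";", opt-decl
        opt-decl = "" | declaration, tail

    After a declaration, the parser consumes ";" if one is present; then "}" selects the empty alternative of opt-decl (the trailing-semicolon case), while any token that can start a declaration selects the other, so the ";" versus "}" conflict never arises.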

  • Three most critical programming concepts

    - by Todd
    I know this has probably been asked in one form or another, but I wanted to pose it once again within the context of my situation (and probably that of others here @ SO). I made a career change to software engineering some time ago without having an undergrad or grad degree in CS. I've supplemented my undergrad and grad studies in business with programming courses (VB, Java, C, C#) but never performed academic coursework in the other related disciplines (algorithms, design patterns, discrete math, etc.) - mostly self-study. I know there are several of you who have either performed interviews and/or made hiring decisions. Given recent trends in demand, what would you say are the three most essential CS concepts that a developer should have a solid grasp of outside of language syntax? For example, I've seen blog posts of the "Absolute minimum X that every programmer must know" variety - that's what I'm looking for. Again, if it's truly a redundancy, please feel free to close; my feelings won't be hurt. (Closest ones I could find were http://stackoverflow.com/questions/164048/basic-programming-algorithmic-concepts, which was geared towards a true beginner, and http://stackoverflow.com/questions/648595/essential-areas-of-knowledge, which I didn't feel was concrete enough.) Thanks in advance all! T.

  • Show iPad keyboard on select, focus or always (jQuery)

    - by Ryan
    I have a web app that uses jQuery to replace the RETURN key with TAB, so that when a user presses return the form is not submitted but rather the cursor moves to the next text field. This works in all browsers but only half works on the iPad: the next field is highlighted, but the keyboard is hidden. How can I keep the keyboard visible or force it somehow? Here's my code (thanks to http://thinksimply.com/blog/jquery-enter-tab):

        function checkForEnter(event) {
            if (event.keyCode == 13) {
                currentBoxNumber = textboxes.index(this);
                if (textboxes[currentBoxNumber + 1] != null) {
                    nextBox = textboxes[currentBoxNumber + 1];
                    nextBox.focus();
                    nextBox.select();
                    event.preventDefault();
                    return false;
                }
            }
        }

        Drupal.behaviors.formFields = function (context) {
            $('input[type="text"]').focus(function () {
                $(this).removeClass("idleField").addClass("focusField");
            });
            $('input[type="text"]').blur(function () {
                $(this).removeClass("focusField").addClass("idleField");
            });

            // replaces the enter/return key function with tab
            textboxes = $("input.form-text");
            if ($.browser.mozilla) {
                $(textboxes).keypress(checkForEnter);
            } else {
                $(textboxes).keydown(checkForEnter);
            }
        };

  • Proper QUuid usage in Qt? (7-Zip DLL usage problems (QLibrary, QUuid GUID conversion, interfaces))

    - by whipsnap
    Hi, I'm trying to write a program that would use the 7-Zip DLL for reading files from inside archive files (7z, zip etc). Here's where I am so far:

        #include <QtCore/QCoreApplication>
        #include <QLibrary>
        #include <QUuid>
        #include <iostream>

        using namespace std;

        #include "7z910/CPP/7zip/Archive/IArchive.h"
        #include "7z910/CPP/7zip/IStream.h"
        #include "MyCom.h"

        // {23170F69-40C1-278A-1000-000110070000}
        QUuid CLSID_CFormat7z(0x23170F69, 0x40C1, 0x278A,
                              0x10, 0x00, 0x00, 0x01, 0x10, 0x07, 0x00, 0x00);

        typedef int (*CreateObjectFunc)(
            const GUID *clsID,
            const GUID *interfaceID,
            void **outObject);

        void readFileInArchive()
        {
            QLibrary myLib("7z.dll");
            CreateObjectFunc myFunction = (CreateObjectFunc)myLib.resolve("CreateObject");
            if (myFunction == 0) {
                // (error handling here; this part of the snippet was mangled in the original post)
            }

            CMyComPtr<IOutArchive> outArchive;
            myFunction(&CLSID_CFormat7z, &IID_IOutArchive, (void **)&outArchive);
        }

        int main(int argc, char *argv[])
        {
            QCoreApplication a(argc, argv);
            readFileInArchive();
            return a.exec();
        }

    Trying to build that in Qt Creator leads to the following error:

        cannot convert 'QUuid*' to 'const GUID*' in argument passing

    How should QUuid be correctly used in this context? Also, being a C++ and Qt newbie, I haven't yet quite grasped templates or interfaces, so overall I'm having trouble getting through these first steps. If someone could give tips or even example code on how, for example, an image file could be extracted from a ZIP file (to be shown in a Qt GUI later on), I would highly appreciate it. My main goal at the moment is to write a program with a GUI for selecting archive files containing image files (PNG, JPG etc) and displaying those files one at a time in the GUI. A Qt-based CDisplayEx, in short.
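    On Windows builds, QUuid declares a conversion operator to GUID, but there is no QUuid* to const GUID* pointer conversion, so one way through the compile error is to materialize a real GUID lvalue first and pass its address - a sketch (assuming IID_IOutArchive is already a GUID from the 7-Zip headers):

        // QUuid::operator GUID() converts the value; take the address of the result.
        GUID clsidFormat7z = CLSID_CFormat7z;

        CMyComPtr<IOutArchive> outArchive;
        myFunction(&clsidFormat7z, &IID_IOutArchive, (void **)&outArchive);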

  • Specifying SOAP Headers for a Zend_Soap Service

    - by Stephen
    I have a generally straightforward web service that I've written (converting code to ZF from a Java implementation of the same service, trying to maintain the same WSDL structure as much as possible). The service loads a PHP class rather than individual functions; the class contains three different functions. Everything seems to be working just fine, except that I can't figure out how to specify that a given function parameter should be passed as a SOAP header. I've not seen any mention of SOAP headers in the server context, only how to pass header parameters from a client to a server. In addition to the standard parameters for the function that would be sent in the SOAP body and detailed in the docblock, I would like to specify two parameters (a username and password) that would be sent in a SOAP header. I have to assume this is possible, but I haven't been able to find anything online, nor have I had any responses to a similar post on Zend's forum. Is there something that can be added in the docblock area to specify a parameter as a header (maybe in a similar fashion to using WebParam?)? Any suggestions/examples on how to get this accomplished would be greatly appreciated!
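    One avenue worth exploring (a sketch; AuthHeader and its fields are assumptions about the WSDL, not Zend-documented behavior): Zend_Soap_Server sits on top of PHP's native SoapServer, which dispatches an incoming SOAP header to a method on the handler class named after the header element, if such a method exists. That keeps the credentials out of the body-function signatures entirely:

        class MyService
        {
            private $username;
            private $password;

            // Invoked automatically for an <AuthHeader> SOAP header,
            // before the body operation is called.
            public function AuthHeader($header)
            {
                $this->username = $header->username;
                $this->password = $header->password;
            }

            public function doWork($param)
            {
                // $this->username / $this->password are available here
            }
        }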

  • Scope of the c++ using directive

    - by ThomasMcLeod
    From section 7.3.4.2 of the C++11 standard:

        A using-directive specifies that the names in the nominated namespace can be used in the scope in which the using-directive appears after the using-directive. During unqualified name lookup (3.4.1), the names appear as if they were declared in the nearest enclosing namespace which contains both the using-directive and the nominated namespace. [ Note: In this context, "contains" means "contains directly or indirectly". - end note ]

    What do the second and third sentences mean, exactly? Please give an example. Here is the code I am attempting to understand:

        namespace A {
            int i = 7;
        }

        namespace B {
            using namespace A;
            int i = i + 11;
        }

        int main(int argc, char * argv[]) {
            std::cout << A::i << " " << B::i << std::endl;
            return 0;
        }

    It prints "7 7" and not "7 18" as I would expect. Sorry for the typo - the program actually prints "7 11".
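    An annotated sketch of one way to read that output (my gloss, not part of the quoted standard text): the using-directive makes A::i visible as if declared in the global namespace, the nearest namespace enclosing both A and B; but a name declared directly in B hides that during unqualified lookup, and B::i is already declared by the point of its own initializer:

        namespace B {
            using namespace A;
            // B::i is in scope by the time its initializer runs, so the 'i' on
            // the right names B::i, not A::i. As a static, B::i is first
            // zero-initialized, so the dynamic initializer computes 0 + 11.
            int i = i + 11;   // hence "7 11"
        }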

  • C# XNA: What can cause SpriteBatch.End() to throw a NRE?

    - by Rosarch
    I don't understand what I'm doing wrong here:

        public void Draw(GameTime gameTime) // in ScreenManager
        {
            SpriteBatch.Begin(SpriteBlendMode.AlphaBlend);
            for (int i = 0; i < Screens.Count; i++)
            {
                if (Screens[i].State == Screen.ScreenState.HIDDEN)
                    continue;
                Screens[i].Draw(gameTime);
            }
            SpriteBatch.End(); // null ref exception
        }

    SpriteBatch itself is not null. Some more context:

        public class MasterEngine : Microsoft.Xna.Framework.Game
        {
            public MasterEngine()
            {
                graphicsDeviceManager = new GraphicsDeviceManager(this);
                Components.Add(new GamerServicesComponent(this));
                // ...
                spriteBatch = new SpriteBatch(graphicsDeviceManager.GraphicsDevice);
                screenManager = new ScreenManager(assets, gameEngine,
                    graphicsDeviceManager.GraphicsDevice, spriteBatch);
            }

            //...

            protected override void Draw(GameTime gameTime)
            {
                screenManager.Draw(gameTime); // calls the problematic method
                base.Draw(gameTime);
            }
        }

    Am I failing to initialize something properly?

    UPDATE: As an experiment, I tried adding this to the constructor of MasterEngine:

        spriteBatch = new SpriteBatch(graphicsDeviceManager.GraphicsDevice);
        spriteBatch.Begin();
        spriteBatch.DrawString(assets.GetAsset<SpriteFont>("calibri"), "ftw", new Vector2(), Color.White);
        spriteBatch.End();

    This does not cause an NRE. Hmm....

    UPDATE 2: This does cause an NRE:

        protected override void Draw(GameTime gameTime)
        {
            spriteBatch.Begin();
            spriteBatch.End(); // boned here
            //screenManager.Draw(gameTime);
            base.Draw(gameTime);
        }
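    For what it's worth, a common cause of late null references around XNA sprite batches (a sketch, not a confirmed diagnosis for this code): in a Game subclass the GraphicsDevice only comes into existence during initialization, so a device reference taken in the constructor can be null or stale, and the conventional place to construct the SpriteBatch is LoadContent():

        public class MasterEngine : Microsoft.Xna.Framework.Game
        {
            private GraphicsDeviceManager graphicsDeviceManager;
            private SpriteBatch spriteBatch;

            public MasterEngine()
            {
                graphicsDeviceManager = new GraphicsDeviceManager(this);
                // Don't touch graphicsDeviceManager.GraphicsDevice here - the
                // device is created later, during the game's initialization.
            }

            protected override void LoadContent()
            {
                // The device exists by now, so the batch binds to a real device.
                spriteBatch = new SpriteBatch(GraphicsDevice);
            }
        }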

  • How do I change the viewport of a window in win32?

    - by Colen
    Hi, I have a window with child windows inside it. The child windows take up about 1000 pixels of vertical space. However, our users don't always have 1000 pixels of vertical space available - they might have as little as 500 or 600 pixels. I want to be able to display this window at a height of 500 pixels and have the user scroll up and down to see the full contents. The window should always be 500 pixels high, but the view within it should change. Assume I can add a scroll bar somewhere so the user can choose which part of the window he wants to see. Windows will normally paint the window contents from height 0 to height 500; how do I tell it instead to "paint from height 250 to height 750", for example? I know that I can set the viewport with functions like SetViewportOrgEx etc., but those functions require a device context - when do I call them if I want them to be "permanent"? Do I call them when I get the WM_PAINT message from Windows, or at some other time? And which functions from that family do I want to use? Thanks.
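    The usual pattern is that the shift never needs to be "permanent": keep a scroll position (updated from WM_VSCROLL) and apply it with SetViewportOrgEx at the start of every WM_PAINT - a sketch (scrollPos, HandleScroll, and DrawContents are placeholder names):

        case WM_VSCROLL:
            scrollPos = HandleScroll(hwnd, wParam, scrollPos);  // update the position
            InvalidateRect(hwnd, NULL, TRUE);                   // request a repaint
            return 0;

        case WM_PAINT: {
            PAINTSTRUCT ps;
            HDC hdc = BeginPaint(hwnd, &ps);
            // Shift the origin so drawing at y=0 lands at -scrollPos on screen;
            // the painting code keeps using its original 0..1000 coordinates.
            SetViewportOrgEx(hdc, 0, -scrollPos, NULL);
            DrawContents(hdc);
            EndPaint(hwnd, &ps);
            return 0;
        }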

  • How would a user stay logged in to a REST-based website?

    - by unforgiven3
    A year or so ago I asked this question: Can you help me understand this? "Common REST Mistakes: Sessions are irrelevant". My question was essentially this: okay, I get that HTTP authentication is done automatically on every message - but how? Is the username/password sent with every request? Doesn't that just increase the attack surface area? I feel like I'm missing part of the puzzle.

    The answers I received made perfect sense in the context of a mobile (iPhone, Android, WP7) app - when talking to a REST service, the app would just send user credentials along with each request. That worked great for me. But now I would like to better understand how one would secure a REST-like website, like StackOverflow itself or something like Reddit. How would things work if the user were logged in via a web browser instead of via an iPhone app?

    What happens when a user logs in? Are the credentials saved in the browser somehow? How would the browser know what credentials to send with subsequent REST requests? What if it's a JavaScript call to a web service? How would the JavaScript call include user credentials?

    I'll be quite frank: my understanding of security when it comes to websites is pretty limited. I enjoyed working with REST services from an app perspective, but now I want to try and build a website based on REST principles, and I'm finding myself pretty lost. If there is anything in the above question that is unclear that you'd like me to clarify, please leave a comment and I'll address it.
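    To make the JavaScript case concrete (an illustration of one option, HTTP Basic, not a recommendation): the script holds the credentials - in memory, a cookie, or a token store, which is a separate design decision - and attaches them to every call, so the server stays sessionless:

        var xhr = new XMLHttpRequest();
        xhr.open('GET', '/api/questions', true);
        // Credentials ride along on each request; no server-side session exists.
        xhr.setRequestHeader('Authorization', 'Basic ' + btoa(user + ':' + pass));
        xhr.onload = function () { /* handle the response */ };
        xhr.send();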

  • Should we point to an NSManagedObject entity with weak instead of strong pointer?

    - by Jim Thio
    I think that because an NSManagedObject is managed by the managed object context, the pointer should be weak. Yet it often goes back to nil in my case:

        for (CategoryNearby *CN in sorted) {
            //[arrayOfItems addObject:[NSString stringWithFormat:@"%@ - %d", CN.name, [CN.order intValue]]];
            NearbyShortcutTVC *tvc = [[NearbyShortcutTVC alloc] init];
            tvc.categoryNearby = CN;
            // tvc.titleString = [NSString stringWithFormat:@"%@", CN.name];
            // tvc.displayed = CN.displayed;
            [arrayOfItemsLocal addObject:tvc];
            //CN
            PO(tvc);
            PO(tvc.categoryNearby);
            while (false);
        }
        self.arrayOfItems = arrayOfItemsLocal;
        PO(self.categoriesNearbyInArrayOfItems);
        [self.tableViewa reloadData];

    Yet somewhere down the line, tvc.categoryNearby becomes nil. I do not know how, when, or where it becomes nil. How do I debug this? Or should the reference be strong instead? This is the interface of NearbyShortcutTVC, by the way:

        @interface NearbyShortcutTVC : BGBaseTableViewCell {
        }

        @property (weak, nonatomic) CategoryNearby *categoryNearby;

        @end

    To make sure that we're talking about the same objects, I printed all the memory addresses in the NSArray. They're exactly the same objects, but somehow the categoryNearby property of each object is magically set to null somewhere:

        self.categoriesNearbyInArrayOfItems: (
            0x883bfe0, 0x8b6d420, 0x8b6f9f0, 0x8b71de0, 0xb073f90, 0xb061a10,
            0xb06a880, 0x8b74940, 0x8b77110, 0x8b794e0, 0x8b7bf40, 0x8b7cef0,
            0x8b7f4b0, 0x8b81a30, 0x88622d0, 0x8864e60, 0xb05c9a0
        )

        self.categoriesNearbyInArrayOfItems: (
            0x883bfe0, 0x8b6d420, 0x8b6f9f0, 0x8b71de0, 0xb073f90, 0xb061a10,
            0xb06a880, 0x8b74940, 0x8b77110, 0x8b794e0, 0x8b7bf40, 0x8b7cef0,
            0x8b7f4b0, 0x8b81a30, 0x88622d0, 0x8864e60, 0xb05c9a0
        )
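    A note on the premise (my reading, hedged): by default an NSManagedObjectContext does not keep strong references to its registered objects - only pending unsaved changes do - so a weak pointer is free to go nil as soon as nothing else retains the object. If the cell is meant to keep its category alive, a strong reference is the simpler contract:

        @interface NearbyShortcutTVC : BGBaseTableViewCell

        // strong: the cell keeps its managed object alive; the context alone won't.
        @property (strong, nonatomic) CategoryNearby *categoryNearby;

        @end

    Alternatively, [managedObjectContext setRetainsRegisteredObjects:YES] makes the context itself hold its objects strongly, at the cost of a larger memory footprint.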

  • Shaping EF LINQ Query Results Using Multi-Table Includes

    - by sisdog
    I have a simple LINQ EF query below using the method syntax. I'm using my Include statement to join four tables: Event and Doc are the two main tables, EventDoc is a many-to-many link table, and DocUsage is a lookup table. My challenge is that I'd like to shape my results by only selecting specific columns from each of the four tables. But the compiler is giving me the following error:

        'System.Data.Objects.DataClasses.EntityCollection<EventDoc>' does not contain a definition for 'Doc' and no extension method 'Doc' accepting a first argument of type 'System.Data.Objects.DataClasses.EntityCollection<EventDoc>' could be found.

    I'm sure this is something easy but I'm not figuring it out. I haven't been able to find an example of someone using a multi-table Include while also shaping the projection. Thx, Mark

        var qry = context.Event
            .Include("EventDoc.Doc.DocUsage")
            .Select(n => new
            {
                n.EventDate,
                n.EventDoc.Doc.Filename,  //<=COMPILER ERROR HERE
                n.EventDoc.Doc.DocUsage.Usage
            })
            .ToList();

        EventDoc ed;
        Doc d = ed.Doc;           //<=NO COMPILER ERROR SO I KNOW MY MODEL'S CORRECT
        DocUsage du = d.DocUsage;
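    The error is consistent with Event.EventDoc being a collection (the many-to-many side): a single EventDoc has a Doc, but the collection does not, so the projection has to flatten it rather than dot through it - a sketch using SelectMany (with a shaped projection, EF derives the joins from the Select, so the Include string isn't needed):

        var qry = context.Event
            .SelectMany(ev => ev.EventDoc, (ev, ed) => new
            {
                ev.EventDate,
                ed.Doc.Filename,         // ed is a single link row, so .Doc works
                ed.Doc.DocUsage.Usage
            })
            .ToList();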

  • Basic question in XSL regarding preceding text

    - by Rachel
    I am new to XSL and I have a basic question on the context of using preceding text. My template matches on the text node. I am iterating over an XML file, and within my for-each loop I am trying to take the preceding text of the text node. Unfortunately preceding::text() is not working when I use it within the for-each loop. I want to use it within the for-each loop - how can I do it?

        <xsl:template match="text()">
            <xsl:variable name="this" as="text()" select="."/>
            <xsl:for-each select="$input[@id = generate-id(current())]">
                <xsl:variable name="preText" as="xsd:integer"
                    select="sum(preceding::text()[. >> //*[@id=@name]]/string-length(.))"/>
                ...
                ...
            </xsl:for-each>
        </xsl:template>
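    Inside xsl:for-each the context item changes, so preceding:: is evaluated from whatever $input selects, not from the matched text node. Since that node is already captured in $this, anchoring the axis there preserves the original meaning - a sketch (XSLT 2.0, as the as= attributes imply; the extra predicate is dropped here for brevity):

        <xsl:for-each select="$input[@id = generate-id(current())]">
            <!-- preceding:: walks back from $this (the matched text node),
                 regardless of the current for-each context item -->
            <xsl:variable name="preText" as="xsd:integer"
                select="sum($this/preceding::text()/string-length(.))"/>
            ...
        </xsl:for-each>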

  • Is there any point in using a volatile long?

    - by Adamski
    I occasionally use a volatile instance variable in cases where I have two threads reading from / writing to it and don't want the overhead (or potential deadlock risk) of taking out a lock; for example, a timer thread periodically updating an int ID that is exposed as a getter on some class:

        public class MyClass {
            private volatile int id;

            public MyClass() {
                ScheduledExecutorService execService = Executors.newScheduledThreadPool(1);
                execService.scheduleAtFixedRate(new Runnable() {
                    public void run() {
                        ++id;
                    }
                }, 0L, 30L, TimeUnit.SECONDS);
            }

            public int getId() {
                return id;
            }
        }

    My question: given that the JLS only guarantees that 32-bit reads will be atomic, is there any point in ever using a volatile long (i.e. 64-bit)?

    Caveat: please do not reply saying that using volatile over synchronized is a case of premature optimisation; I am well aware of how / when to use synchronized, but there are cases where volatile is preferable. For example, when defining a Spring bean for use in a single-threaded application I tend to favour volatile instance variables, as there is no guarantee that the Spring context will initialise each bean's properties in the main thread.
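    One point against the premise (my reading of JLS 17.7, stated hedged): non-volatile long and double are the only fields whose reads and writes may be split into two 32-bit halves, and declaring the field volatile is exactly what restores single-operation atomicity, on top of the usual visibility guarantee - so a volatile long is the case where volatile buys the most:

        public class Sequencer {
            // Non-volatile: a reader may observe one 32-bit half of an old value
            // and one half of a new one. volatile: each 64-bit read/write is
            // atomic, and every write is visible to subsequent reads.
            private volatile long id;

            public long getId() {
                return id;
            }
        }

    Note that ++id remains a read-modify-write even on a volatile field, so with more than one writer an AtomicLong would be the safer tool.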

  • android - how to cache an image from a remote site

    - by Lynnooi
    Hi, can anyone please provide an example of how to save an image I fetch from a website into a cache? I tried to include the following function in my code and call it once I run the activity:

        public void getRemoteImage(String imageUrl) {
            imageUrl = "http://marga.mobile9.com/download/thumb/295/sexylady7_xo6npovn.jpg";
            URL aURL = null;
            URLConnection conn = null;
            Bitmap bmp = null;
            CacheResult cache_result = CacheManager.getCacheFile(imageUrl, new HashMap());
            if (cache_result == null) {
                try {
                    aURL = new URL(imageUrl);
                    conn = aURL.openConnection();
                    conn.connect();
                    InputStream is = conn.getInputStream();
                    cache_result = new CacheManager.CacheResult();
                    CacheManager.saveCacheFile(imageUrl, cache_result);
                } catch (Exception e) {
                    //return null;
                }
            }
            bmp = BitmapFactory.decodeStream(cache_result.getInputStream());
            Toast.makeText(context, "Please work.. namo namo namo", Toast.LENGTH_SHORT).show();
            //return bmp;
        }

    However, I got a NullPointerException. Can someone please help me with it, as I'm quite new to Android?
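    For context, android.webkit.CacheManager is really the browser's cache, and getCacheFile returning null for a URL the WebView never loaded would explain the NullPointerException at getInputStream(). A simple alternative sketch that caches to the app's own cache directory instead (cacheBitmap is a hypothetical helper name):

        import java.io.*;
        import java.net.URL;
        import android.content.Context;
        import android.graphics.Bitmap;
        import android.graphics.BitmapFactory;

        public static Bitmap cacheBitmap(Context context, String imageUrl) throws IOException {
            // One file per URL inside the app-private cache directory.
            File cached = new File(context.getCacheDir(), String.valueOf(imageUrl.hashCode()));
            if (!cached.exists()) {
                InputStream in = new URL(imageUrl).openStream();
                OutputStream out = new FileOutputStream(cached);
                byte[] buf = new byte[8192];
                for (int n; (n = in.read(buf)) != -1; ) {
                    out.write(buf, 0, n);
                }
                out.close();
                in.close();
            }
            return BitmapFactory.decodeFile(cached.getAbsolutePath());
        }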
