Search Results


  • Systems design question: DB connection management in load-balanced n-tier

    - by aoven
    I'm wondering about the best approach to designing a DB connection manager for a load-balanced n-tier system. Classic n-tier looks like this:

        Client -> BusinessServer -> DBServer

    A load-balancing solution, as I see it, would then look like this:

                            +--> ...              +--+
                            +--> BusinessServer --+--+--> SessionServer --+
        Client -> Gateway --+--> BusinessServer --+--|                    +--> DBServer
                            +--> BusinessServer --+--+--------------------+
                            +--> ...              +--+

    As pictured, the business server component is load-balanced across multiple instances, with a hardware gateway distributing the load among them. The session server probably needs to sit outside the load-balancing array, because it manages state, which mustn't be duplicated.

    Barring any major errors in the design so far, what is the best way to implement DB connection management? I've come up with a couple of options, but there may be others I'm not aware of:

    1. Introduce a new Broker component between the DBServer and the other components and let it handle the DB connections. The upside is that all connections can be managed from a single point, which is very convenient. The downside is that the system gains an additional single point of failure, and because every DB-bound request must pass through it, it is also a bottleneck.

    2. Move DB connection management into the BusinessServer and SessionServer components and let each handle its own connections. The upside is that there is no additional single point of failure or bottleneck. The downside is that there is no control over possible conflicts and deadlocks beyond what the DBServer itself can provide.

    What else can be done? FWIW: the technology is .NET, but none of the vendor-specific stacks are used (e.g. no WCF, MSMQ or the like).
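
    For what it's worth, option 2 is lighter than it may sound on .NET, because ADO.NET already pools connections per connection string behind the scenes. A minimal sketch (all names hypothetical) of what "connection management" inside a BusinessServer then reduces to:

        using System.Data.SqlClient;

        // Each unit of work opens late and disposes early; the pool does the rest.
        public class OrderRepository
        {
            private readonly string connectionString;

            public OrderRepository(string connectionString)
            {
                this.connectionString = connectionString;
            }

            public int CountOrders()
            {
                using (SqlConnection conn = new SqlConnection(connectionString))
                using (SqlCommand cmd = new SqlCommand("SELECT COUNT(*) FROM Orders", conn))
                {
                    conn.Open(); // taken from the pool, returned to it on Dispose
                    return (int)cmd.ExecuteScalar();
                }
            }
        }

    This doesn't arbitrate cross-instance conflicts any more than option 2 does, but it removes most of the motivation for a hand-rolled broker.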


  • Python: circular imports needed for type checking

    - by phild
    First of all: I know there are already many questions and answers on the topic of circular imports. The answer is more or less: "Design your module/class structure properly and you will not need circular imports". That is true. I tried very hard to come up with a proper design for my current project, and in my opinion I succeeded. But my specific problem is the following: I need a type check in a module that is already imported by the module containing the class to check against. That throws an import error. Like so:

    foo.py:

        from bar import Bar

        class Foo(object):
            def __init__(self):
                self.__bar = Bar(self)

    bar.py:

        from foo import Foo

        class Bar(object):
            def __init__(self, arg_instance_of_foo):
                if not isinstance(arg_instance_of_foo, Foo):
                    raise TypeError()

    Solution 1: If I modify it to check the type by string comparison, it works. But I don't really like this solution (string comparison is relatively expensive for a simple type check, and breaks silently when it comes to refactoring).

    bar_modified.py:

        class Bar(object):
            def __init__(self, arg_instance_of_foo):
                if not arg_instance_of_foo.__class__.__name__ == "Foo":
                    raise TypeError()

    Solution 2: I could pack the two classes into one module. But my project has lots of different classes like the "Bar" example, and I want to separate them into different module files.

    Since neither of my own two solutions is an option for me: does anyone have a nicer solution to this problem?
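
    A common way out (a sketch of the standard workaround, not from the original post) is to defer the import to call time, so bar.py no longer needs foo at import time:

        # bar.py
        class Bar(object):
            def __init__(self, arg_instance_of_foo):
                from foo import Foo  # resolved lazily, after both modules are loaded
                if not isinstance(arg_instance_of_foo, Foo):
                    raise TypeError()

    The import statement runs on every call but is cheap after the first time, since Python caches loaded modules in sys.modules.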


  • Constraining to parent container with MouseDragElementBehavior

    - by anonymous
    Hi all, I have a question about constraining a control's drag-and-drop movement to its parent canvas. I tried using the ConstrainToParentBounds property on the MouseDragElementBehavior; however, with it enabled the drag must be done really slowly, or the movement of the control gets choppy or stops altogether. So I am attempting to implement my own boundary constraints, and I seem to be running into difficulty. I am still using the MouseDragElementBehavior, but am trying to supplement it by also handling the MouseLeftButtonDown, MouseMove and MouseLeftButtonUp events. I know these are firing (they haven't been swallowed by the MouseDragElementBehavior), as I have tested them by other means. My current code looks like this:

        private void Control_MouseMove(object sender, MouseEventArgs e)
        {
            MyControl mc = (MyControl)sender;
            Canvas canvas = (Canvas)mc.Parent;

            GeneralTransform ct = canvas.TransformToVisual(Application.Current.RootVisual as UIElement);
            Point canvas_offset = ct.Transform(new Point(0, 0));
            double canvasTop = canvas_offset.Y;
            double canvasLeft = canvas_offset.X;

            GeneralTransform gt = mc.TransformToVisual(Application.Current.RootVisual as UIElement);
            Point offset = gt.Transform(new Point(0, 0));
            double controlTop = offset.Y;
            double controlLeft = offset.X;

            if (isMouseCaptured)
            {
                if (controlTop < canvasTop)
                {
                    mc.Opacity = 1; // to test whether the condition is met; seems to indicate it is
                    mc.SetValue(Canvas.TopProperty, canvasTop);
                }
                if (controlLeft < canvasLeft)
                {
                    mc.Opacity = 1;
                    mc.SetValue(Canvas.LeftProperty, canvasLeft); // note: the original set Canvas.TopProperty here
                }
            }
        }

    (I realize there is nothing there yet for the right/bottom edges.) I've tried a bunch of different things at this point and none of them give the desired result; the control's movement is still not constrained to the canvas. Any help/pointers would be greatly appreciated. Thanks!
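
    One thing that may be biting here (an observation plus a sketch, not from the original post): Canvas.Top and Canvas.Left are relative to the parent canvas, while the transformed offsets above are in root-visual coordinates. Converting to canvas-relative values first and then clamping on all four sides would look something like:

        // Position of the control relative to its parent canvas.
        double relLeft = controlLeft - canvasLeft;
        double relTop = controlTop - canvasTop;

        // Clamp into [0, canvas size - control size] on both axes.
        double maxLeft = canvas.ActualWidth - mc.ActualWidth;
        double maxTop = canvas.ActualHeight - mc.ActualHeight;
        mc.SetValue(Canvas.LeftProperty, Math.Min(Math.Max(relLeft, 0), maxLeft));
        mc.SetValue(Canvas.TopProperty, Math.Min(Math.Max(relTop, 0), maxTop));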


  • LinqToSql Select to a class then do more queries

    - by fyjham
    I have a LINQ query with multiple joins, and I want to pass the result around as an IQueryable<T> and apply additional filters in other methods. The problem is that I can't work out how to pass around the anonymous (var) type while keeping it strongly typed, and if I project into my own class (e.g. .Select((a, b) => new MyClass(a, b))), I get errors when I add later Where clauses, because my class has no translation into SQL. Is there any way I can do one of the following:

    1. Make my class map to SQL?
    2. Make the anonymous type implement an interface (so I can pass it around as that)?
    3. Something I haven't thought of that will solve my issue?

    Example:

        public void Main()
        {
            using (DBDataContext context = new DBDataContext())
            {
                var result = context.TableAs.Join(
                    context.TableBs,
                    a => a.BID,
                    b => b.ID,
                    (a, b) => new { A = a, B = b });

                result = addNeedValue(result, 4);
            }
        }

        private ???? addNeedValue(???? result, int value)
        {
            return result.Where(r => r.A.Value == value);
        }

    PS: I know that in my example I could flatten the function out easily, but in the real thing it would be an absolute mess if I tried.
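
    The usual fix for option 1 (a sketch; PairAB and the table names are illustrative) is to project into a named class with an object initializer rather than a constructor. LINQ to SQL can translate member-initializer expressions, so later Where clauses still compose into the SQL:

        public class PairAB
        {
            public TableA A { get; set; }
            public TableB B { get; set; }
        }

        public IQueryable<PairAB> BuildQuery(DBDataContext context)
        {
            return context.TableAs.Join(
                context.TableBs,
                a => a.BID,
                b => b.ID,
                (a, b) => new PairAB { A = a, B = b }); // initializer, not new PairAB(a, b)
        }

        private IQueryable<PairAB> addNeedValue(IQueryable<PairAB> result, int value)
        {
            return result.Where(r => r.A.Value == value); // still translates to SQL
        }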


  • How to extract block of XML from a log file on Linux

    - by dragonmantank
    I have a log file that looks like the following:

        2010-05-12 12:23:45 Some sort of log entry
        2010-05-12 01:45:12 Request XML:
        <RootTag>
            <Element>Value</Element>
            <Element>Another Value</Element>
        </RootTag>
        2010-05-12 01:45:32 Response XML:
        <ResponseRoot>
            <Element>Value</Element>
        </ResponseRoot>
        2010-05-12 01:45:49 Another log entry

    What I want to do is extract the Request and Response XML (and ultimately dump them into their own single files). I had a similar parser that used egrep, but there the XML was all on one line, not spread over multiple lines as above. The log files are also somewhat large, hitting 500-600 MB per log. Smaller logs I would read in via a PHP script and use regex matching, but the amount of memory required for such a large file would more than likely kill the script. Is there an easy way, using the built-in tools on a Linux box (CentOS in this case), to extract multiple lines, or am I going to have to bite the bullet and use Perl or PHP to read in the entire file to extract it?
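
    Both awk and sed handle this in a single streaming pass, so file size is not a concern. A sketch in awk (assumes every ordinary log line starts with a timestamp, as above, and writes the request blocks to one file):

        awk '
            /Request XML:/ { capture = 1; next }                 # start capturing after this line
            capture && /^[0-9][0-9][0-9][0-9]-/ { capture = 0 }  # next timestamped line ends the block
            capture { print }
        ' application.log > requests.xml

    Swapping the first pattern for /Response XML:/ (or branching on both) splits out the responses the same way.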


  • Do complex JOINs cause high coupling and maintenance problems?

    - by ashkan.kh.nazary
    Our project has ~40 tables with complex relations. A colleague believes in using long join queries, which forces me to learn about tables outside of my module, but I think I should not concern myself with tables not directly related to my module, and should instead use data-access functions (written by those responsible for other modules) when I need data from them. Let me clarify: I am responsible for the ContactVendor module, which enables customers to contact the vendor and start a conversation about some specific product. The Products module has its own complex tables and relations, with functions that encapsulate the details (for example i18n, activation, product availability, etc.).

    Now I need to show the product title of the product related to some conversation between the vendor and a customer. I can either write a long query that retrieves the product info along with the conversation stuff in one shot (which forces me to learn about the Product tables), OR I can pass the relevant product_id to the get_product_info(int) function. The first approach is obviously demanding and introduces many things I normally consider faults in programming. The problem with the second approach is the countless mini-queries these access functions cause: performance is a concern when a loop fetches product titles for 100 products using functions that each run a separate query. So I'm stuck between "don't code to the implementation, code to the interface" and performance. What is the right way of doing things?

    UPDATE: I'm especially concerned about possible future modifications to those tables outside of my module. What if the Products module decides to change the way it does things, or for some reason modifies the schema? Other modules would then break or malfunction until the change is integrated into them. The usual ripple-effect problem.
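
    A common middle road (a sketch, not from the original question, written in PHP; function and column names are hypothetical) is for the Products module to expose a batch accessor next to get_product_info(int), so the 100-title page costs one query while the schema stays encapsulated:

        <?php
        // Returns array(product_id => title) for all requested ids in one query.
        function get_product_titles(array $product_ids) {
            $placeholders = implode(',', array_fill(0, count($product_ids), '?'));
            $sql = "SELECT product_id, title FROM products WHERE product_id IN ($placeholders)";
            $stmt = db()->prepare($sql);   // db(): assumed PDO connection helper
            $stmt->execute(array_values($product_ids));
            $titles = array();
            foreach ($stmt->fetchAll(PDO::FETCH_ASSOC) as $row) {
                $titles[$row['product_id']] = $row['title'];
            }
            return $titles;
        }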


  • Execute binary from memory in C# .NET with a binary protected by third-party software

    - by NoobTom
    I have the following scenario:

    1. I have a C# application.exe.
    2. I pack application.exe inside TheMida, an anti-piracy/anti-reverse-engineering tool.
    3. I encrypt application.exe with AES-256 (I wrote my own AES encryption/decryption and it is working).

    Now, when I want to execute my application, I do the following: decrypt application.exe in memory, then execute it with the following code:

        BinaryReader br = new BinaryReader(decOutput);
        byte[] bin = br.ReadBytes(Convert.ToInt32(decOutput.Length));
        decOutput.Close();
        br.Close();

        // load the bytes into an Assembly
        Assembly a = Assembly.Load(bin);

        // search for the entry point
        MethodInfo method = a.EntryPoint;
        if (method != null)
        {
            // the entry point (Main) is static, so no instance is needed;
            // the original code called a.CreateInstance(method.Name) here,
            // which creates nothing useful for a static Main
            object[] args = method.GetParameters().Length == 0
                ? null
                : new object[] { new string[0] };
            method.Invoke(null, args);
        }

    The application does not execute correctly. The problem, I think, is that Assembly.Load can only execute managed (.NET) images. Since I packed my application.exe inside TheMida, the bytes being loaded are no longer a managed image, and this does not work. Is there a workaround for this situation? Any suggestion? Thank you in advance.


  • How would you like computer science classes to be taught?

    - by aaa
    Hello, I am a graduate student now, and hopefully someday I will teach. My interests are C++, Python, embedded languages, and scientific computing. Meanwhile I daydream about how I would teach. I was not quite happy with my undergraduate university, as I found many computer science classes lacking. So I would like to ask you: if you were a student, how would you like your computer science classes to be taught? I understand it is a very subjective question, but nevertheless I think it's important to know what people want. Some specific points I am interested in:

    - Should computer languages be taught explicitly, or should students be required to pick up languages on their own?
    - What is better for learning: tests, projects, some sort of take-home exam?
    - How do you think class time should be used? Theory, introduction, explanations, etc.?
    - Do you think group projects are important?
    - How much about computer architecture do you want to learn in a computer science class (not necessarily an assembler class)?
    - Should a particular operating system/editor be mandated or encouraged?

    Thanks.

    UPDATE: Thank you for your comments. The question has been closed because it is a discussion question rather than Q&A. If you know an appropriate website for discussions of this sort with a low noise ratio, please let me know.


  • C# app running as either Windows Form or as Console Application

    - by Aeolien
    I am looking to have one of my Windows Forms applications also be runnable from the command line. In preparation, I have separated the logic into its own class, apart from the Form. Now I am stuck trying to get the application to switch back and forth based on the presence of command-line arguments. Here is the code for the main class:

        static class Program
        {
            /// <summary>
            /// The main entry point for the application.
            /// </summary>
            [STAThread]
            static void Main()
            {
                string[] args = Environment.GetCommandLineArgs();
                if (args.Length > 1) // element 0 is the exe path, by default
                {
                    CommandLineWork(args);
                    return;
                }
                Application.EnableVisualStyles();
                Application.SetCompatibleTextRenderingDefault(false);
                Application.Run(new Form1());
            }

            private static void CommandLineWork(string[] args)
            {
                Console.WriteLine("It works!");
                Console.ReadLine();
            }
        }

    where Form1 is my form and the "It works!" string is just a placeholder for the actual logic. Right now, when running this from within Visual Studio (with command-line arguments), the phrase "It works!" is printed to the Output window. However, when running the /bin/Debug/Program.exe file (or /Release for that matter), the application crashes. Am I going about this the right way? Would it make more sense (i.e. take less developer time) to have my logic class be a DLL that gets loaded by two separate applications? Or is there something entirely different that I'm not aware of? Thanks in advance!
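
    A WinForms project (OutputType WinExe) has no console attached when launched outside Visual Studio, so Console.WriteLine goes nowhere and Console.ReadLine can misbehave. One common workaround (a sketch; behavior differs slightly between launching from cmd and from Explorer) is to attach to the parent process's console for the command-line path:

        using System;
        using System.Runtime.InteropServices;
        using System.Windows.Forms;

        static class Program
        {
            [DllImport("kernel32.dll")]
            private static extern bool AttachConsole(int dwProcessId);
            private const int ATTACH_PARENT_PROCESS = -1;

            [STAThread]
            static void Main(string[] args) // args excludes the exe path, unlike GetCommandLineArgs()
            {
                if (args.Length > 0)
                {
                    AttachConsole(ATTACH_PARENT_PROCESS); // reuse the launching cmd.exe's console
                    CommandLineWork(args);
                    return;
                }
                Application.EnableVisualStyles();
                Application.SetCompatibleTextRenderingDefault(false);
                Application.Run(new Form1());
            }

            private static void CommandLineWork(string[] args)
            {
                Console.WriteLine("It works!");
            }
        }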


  • iPhone: Leak with UIWebView loading Office documents. Any ideas how to avoid it?

    - by Thomas Tempelmann
    While there are already quite a few posts about leaks around UIWebView, mine is a bit more special, I believe, and thus deserves its own post. I see a reproducible, large leak every time I load an Office document such as a Word or Excel file. For instance, every time I display a 180 KB .doc file, I get a 100 KB leak. This happens on both the simulator and an actual device, running OS 3.1.3. The leak is not visible with the Leaks instrument, only by looking at the malloc instances via the ObjectAlloc instrument. (The original post included a screenshot of the Instruments trace here.)

    I've also made a demo project, UIWebView-Leak.zip, so you can verify this yourself. To see the leak, use the ObjectAlloc instrument, switch to the view where you see individual allocation objects, and sort by size so that you see the large ones in a group. Then view an Office document a few times and find the Malloc objects that stay "Live" even after the actual UIWebView has been freed.

    Is this a known bug? Or is there any way I can avoid these leaks? I.e., have you successfully shown Office documents on an iPhone without getting such leaks? Note: I've now reported this as a bug to Apple, too (ID 7950594). I am still waiting for someone (including Apple) to confirm this as a true leak, or to show why it isn't (i.e. that I am doing something wrong or making wrong assumptions).


  • FQL query using stream table doesn't accept app access token

    - by tougher
    I've searched Stack Overflow all day long to find an answer, without luck, so here we go. I'm trying to fetch data from the stream table like this:

    FQL:

        SELECT post_id, message, created_time FROM stream WHERE source_id = 131559313586863

    URL:

        http://graph.facebook.com/fql?q=SELECT+post_id%2C+message%2C+created_time+FROM+stream+WHERE+source_id+%3D+131559313586863&access_token=10669xxxxx74470|PF-7GSdBx0Nxxxxxkdi1KwSQG-w

    But I get a 400 Bad Request response with the error message "An access token is required to request this resource.". I'm fetching an application access token with this URL:

        https://graph.facebook.com/oauth/access_token?client_id=FACEBOOK_APP_ID&client_secret=FACEBOOK_APP_SECRET&grant_type=client_credentials

    Facebook states in its blog post that "You will need to pass a valid app or user access token to access this functionality.", where "functionality" refers to /feed and /posts (the stream table). Furthermore, the wiki tells the same story about using the stream table: "From June 3 2011 a token is required to query this table. You can use any application or user token to make the query.". Does anyone see my hopefully obvious flaw?

    Please note: the profile in the FQL query is public. I need this to run userless through a cron job; no user interaction is possible. The request works if I replace the app access token with my own user token from https://developers.facebook.com/tools/explorer.


  • PHP/MySQL - Working with two databases, one shared and one local to each application instance

    - by Extrakun
    The situation: using an off-the-shelf PHP application, I have to add a new module for extra functionality. Today, it was made known that eventually four different instances of the application are to be deployed, but the data from the new functionality is to be shared among those 4 instances. Each instance should still have its own database for users, content, etc. So:

    - The data for the new functionality goes into a 'shared' database.
    - The data for the application (user login, content, uploads) goes into a 'local' database.

    To make things more complex, the new module I am writing will fetch data from the local DB and the shared DB at the same time. A rewrite of the base application would take too long; I only have control over the new module which I am writing.

    The ideal solution: is there a way to encapsulate the two databases under one name in MySQL? I do not wish to switch DB connections or specifically name the DB to query from inside my SQL statements. The application uses a DB wrapper, so I am able to change it somehow so I can transparently read/write to two different DBs. What is the best way to handle this problem?
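
    If the 'shared' and 'local' databases live on the same MySQL server, one connection can address both: MySQL lets a query qualify tables with the schema name. A sketch (table names hypothetical), which the DB wrapper could emit by prefixing the shared tables:

        SELECT u.username, s.score
        FROM local_db.users AS u
        JOIN shared_db.leaderboard AS s ON s.user_id = u.id;

    The connecting MySQL user needs privileges on both schemas; if the databases end up on different servers, this no longer applies and the module has to hold two connections after all.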


  • How to get an enum value from an assembly using late binding in C#

    - by tetranz
    Hello, I have a C# 3.0 WinForms application which occasionally needs to control Excel through automation. This works nicely with normal early binding, but I've had problems when people don't have Excel installed yet still want to use my app, except for the Excel part. Late binding seems to be a solution to this. Late binding is rather tedious in C# 3, but I'm not doing anything particularly difficult; I'm following http://support.microsoft.com/kb/302902 as a starter and it's working out well. My question is: how can I use an enum by name? E.g., how can I use reflection to get the value of Microsoft.Office.Interop.Excel.XlFileFormat.xlTextWindows so that I can use it in an InvokeMember call? I know the easiest way is probably to create my own local enum with the same "magic" integer value, but it would be nicer to be able to access it by name. The docs often don't list the value, so to get it I would need a little early-bound test app that can tell me the value. Thanks.
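
    A sketch of the name-based lookup (assumes the Excel primary interop assembly is resolvable at runtime; the partial-name load below may need the full display name on some machines):

        using System;
        using System.Reflection;

        static class ExcelEnums
        {
            public static object GetXlFileFormat(string name)
            {
                Assembly interop = Assembly.Load("Microsoft.Office.Interop.Excel");
                Type fileFormat = interop.GetType("Microsoft.Office.Interop.Excel.XlFileFormat");
                // Enum.Parse works on a late-bound enum Type, so no magic numbers:
                return Enum.Parse(fileFormat, name);
            }
        }

    Usage would be object xlTextWindows = ExcelEnums.GetXlFileFormat("xlTextWindows"); and the value can go straight into the object[] arguments of an InvokeMember("SaveAs", ...) call. The catch: on machines without Excel (the very case motivating late binding), the interop assembly may be missing too, in which case mirroring the documented values in a local enum remains the pragmatic fallback.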


  • Using game of life or other virtual environment for artificial (intelligence) life simulation? [closed]

    - by Berlin Brown
    One of my interests in AI focuses not so much on data but more on biological computing. This includes neural networks, mapping the brain, cellular automata, virtual life and environments. One exciting project in this vein develops a virtual environment for bots to evolve in: "Polyworld is a cross-platform (Linux, Mac OS X) program written by Larry Yaeger to evolve Artificial Intelligence through natural selection and evolutionary algorithms." (http://en.wikipedia.org/wiki/Polyworld) Polyworld is a promising project for studying virtual life, but it is still far from creating an "intelligent autonomous" agent.

    Here is my question: in theory, what parameters would you use to create an AI environment? Possibly a brain environment? Possibly multiple self-contained life organisms that each have their own "brain" or life structures?

    I would like to put a spin on the Game of Life simulation. What if you have a 64x64 Game of Life grid, but instead of one grid, you have N grids? The N grids are your "life force": if all of the Game of Life entities die in a particular grid, then that entire grid dies, and a group of grids makes up a life form.

    I don't have an immediate goal. First, I want to simulate an environment, visualize what is going on in it with OpenGL, and see if the environment has any interesting properties. I then want to add "scarce resources" and see if the AI environment can manage resources adequately.


  • Technical choices in unmarshaling hash-consed data

    - by Pascal Cuoq
    There seems to be quite a bit of folklore knowledge floating about in restricted circles about the pitfalls of hash-consing combined with marshaling/unmarshaling of data. I am looking for citable references to these tidbits.

    For instance, someone once pointed me to the library aterm and mentioned that the authors had clearly thought about this, and that the representation on disk is bottom-up (children of a node come before the node itself in the data stream). This is indeed the right way to do things when you need to re-share each node (with a possibly identical node already in memory). This re-sharing pass needs to be done bottom-up, so the unmarshaling itself might as well be too, so that everything can be done in a single pass.

    I am in the process of describing difficulties encountered in our own context, and the solutions we found. I would appreciate any citable reference to this kind of folklore knowledge. Some people have obviously encountered the problems before (the aterm library is only one example), but I didn't find anything in writing. Even the little piece of information I have about aterm is hearsay. I am not worried that it's unreliable (you can't make this up), but "personal communication" and "look how it's done in the source code" are considered poor form in citations.

    I have enough references on hash-consing alone. I am only interested in references where it interferes with other aspects of programming, such as marshaling or distribution.


  • Project Management and Scheduling Techniques

    - by Alec Smart
    Hello, I know this is probably the nth project-management question, but I am trying to move my team onto a more robust project-management technique. I'm wondering what the best technique to use is. I know that probably no single technique is best, but which are the most popular? Planning poker? Evidence-Based Scheduling? COCOMO? Agile? Scrum? XP? Which one should I use?

    Also, suppose I use EBS: wouldn't it be too time-consuming to break every single activity down into fine-grained tasks? E.g., "Design" is a goal; what kind of fine-grained tasks would I have under it? Is this a waste of time, i.e. dividing work into so many micro-parts? Usually when I give my programmers a task, I follow up every week, and they complete quite a lot of what is assigned to them (the tasks are very broad, e.g. "module X"). Is EBS worth it? Are there any white papers on it, so that I can implement it on my own (instead of using FogBugz)? Most of my projects are web-based. Thank you for your time.


  • 'Bank Switching' Sprites on old NES applications

    - by Jeffrey Kern
    I'm currently writing, in C#, what could basically be called my own interpretation of the NES hardware, for an old-school-looking game that I'm developing. I've fired up FCE and have been observing how the NES displays and renders graphics. In a nutshell, the NES could hold two bitmaps' worth of graphical information, each with dimensions of 128x128. These are called the PPU tables: one was for BG tiles and the other was for sprites. Data had to be in this memory for it to be drawn on screen. Now, if a game had more graphical data than these two banks, it could write portions of new information into these banks (overwriting what was there) at the end of each frame, and use it from the next frame onward.

    So, in old games, how did the programmers 'bank switch'? I mean, within the level design, how did they know which graphic set to load? I've noticed that Mega Man 2 bank-switches when the screen programmatically scrolls from one portion of the stage to the next. But how did they store this information in the level: which sprites to copy over into the PPU tables, and where to write them? Another example would be hitting pause in MM2: BG tiles get overwritten during pause, and then get restored when the player unpauses. How did they remember which tiles they replaced and how to restore them?

    If I were lazy, I could just make one huge static bitmap and grab values that way, but I'm forcing myself to honor these limits to create a more authentic experience. I've read the amazing guide on how M.C. Kids was made, and I'm trying to be barebones about how I program this game. It still boggles my mind what these programmers accomplished with what they had.

    EDIT: The only solution I can think of is to hold separate tables stating which tiles should be in the PPU at which time, but I think that would be a huge memory cost that the NES couldn't handle.
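
    For what it's worth, the usual trick is that nothing needs to be "remembered" at all: the tile data lives in CHR ROM and never changes, so level metadata only has to name bank indices, and restoring after pause is just copying the level's banks back in. A sketch of that idea in C# (all names hypothetical, for the emulated-PPU approach described above):

        using System;

        class PpuBankManager
        {
            private readonly byte[][] chrBanks; // every tile bank, as in CHR ROM
            private readonly byte[] bgTable = new byte[4096];     // 128x128 BG pattern table
            private readonly byte[] spriteTable = new byte[4096]; // 128x128 sprite pattern table

            public PpuBankManager(byte[][] chrBanks)
            {
                this.chrBanks = chrBanks;
            }

            // Each screen's metadata stores just two indices, not per-tile lists.
            public void EnterScreen(int bgBank, int spriteBank)
            {
                Array.Copy(chrBanks[bgBank], bgTable, bgTable.Length);
                Array.Copy(chrBanks[spriteBank], spriteTable, spriteTable.Length);
            }

            // Pause screens work the same way: load the pause bank on pause,
            // then re-run EnterScreen for the current screen on unpause.
        }

    Since the "ROM" copy is immutable, the lookup table is a couple of bytes per screen rather than the per-tile bookkeeping feared in the EDIT.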


  • Save jQuery object without losing its bindings

    - by Ahmad Satiri
    Hi, I have objects created using jQuery, where each object has its own bindings:

        function closeButton(oAny) {
            var div = create_div(); // my helper that creates a <div>
            $(div).attr("id", "btn_" + $(oAny).attr("id"));
            var my_parent = this;
            $(div).html("<img src='" + my_parent._base_url + "/assets/images/close.gif'>");
            $(div).click(function () {
                alert("do some action here");
            });
            return div;
        }

        var myObject = WindowObject();
        var btn = closeButton(myObject);
        $(myObject).append(btn);
        $("body").append(myObject);
        // at this point the button works as expected

        // save to array for future use
        ObjectCollections[0] = myObject;

        // remove
        $(myObject).remove();
        $("body").append(ObjectCollections[0]);
        // at this point the button no longer works

    The first time, I can show my object and the close button works as I expected. But if I save myObject to any variable for future use, it loses its bindings. Has anybody tried to do this? Is there a workaround, or is it definitely a bad idea? Thanks for answering my question.
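
    The likely culprit (an observation plus a sketch, not from the original post): jQuery's .remove() strips the element's event handlers and data as it removes it from the DOM. .detach() (available since jQuery 1.4) removes the element but keeps handlers and data, so the saved reference stays live:

        // save to array for future use, keeping the click binding intact
        ObjectCollections[0] = $(myObject).detach();

        // ... later ...
        $("body").append(ObjectCollections[0]); // the close button still works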


  • Spring.NET proxy factory with ProxyTargetType: must properties be virtual?

    - by Vince
    Hi all, I'm creating a Spring.NET proxy in code by using a ProxyFactory object with ProxyTargetType set to true, to get a proxy over a complex object that has no interface. Proxying seems OK until I call a method on the object. The method references a public property, and if this property is not virtual, its value is null. This doesn't happen if I use Spring.Aop.Framework.AutoProxy.InheritanceBasedAopConfigurer in the Spring config file, but in this case I can't, because the Spring context doesn't own this object. Is this behavior normal, or is there a tweak to achieve what I want (proxying an object's virtual methods without having to make its properties virtual)? Note that I have tried the factory.AutoDetectInterfaces and factory.ProxyTargetAttributes values, but they don't help. My proxy creation code:

        public static T CreateMethodCallStatProxy<T>()
        {
            // Proxy factory
            ProxyFactory factory = new ProxyFactory();
            factory.AddAdvice(new CallMonitorTrackerAdvice());
            factory.ProxyTargetType = true;

            // Create instance
            factory.Target = Activator.CreateInstance<T>();

            // Get proxy
            T proxiedClass = (T)factory.GetProxy();
            return proxiedClass;
        }

    Thanks for your help.


  • OOP - Handling Automated Instances of a Class - PHP

    - by dscher
    This is a topic that, as a beginner to PHP and programming, sort of perplexes me. I'm building a stock-market website and want users to add their own stocks. I can clearly see the benefit of having each stock be a class instance with all the methods of a class. What I am stumped on is the best way to give that instance a name when I instantiate it. If I have:

        class Stock {
            // ... doing stuff ...
        }

    what is the best way to give my instances of it a name? Obviously I can write:

        $newStock = new Stock();
        $newStock->getPrice();

    or whatever, but if a user adds a stock via the app, where does the name of that instance come from? I guess there is little harm in always creating a new instance with $newStock = new Stock() and then storing it in the DB, which leads me to my next question! What would be the best way to retrieve 20 user stocks (for example) into instances of class Stock? Do I need to instantiate 20 new instances of class Stock every time the user logs in, or is there something I'm missing? I hope someone answers this, and more importantly, I hope a bunch of people answer this and it somehow helps someone else who is having a hard time wrapping their head around what probably leads to a really elegant solution. Thanks guys!
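
    Instances don't need individual variable names at all; a collection keyed by something meaningful (the ticker symbol, say) does the naming. A sketch of the login-time hydration (table and column names hypothetical, and it assumes a Stock constructor that takes the row's fields):

        <?php
        // Build array('GOOG' => Stock, 'AAPL' => Stock, ...) for one user.
        function loadUserStocks(PDO $db, $userId) {
            $stmt = $db->prepare('SELECT symbol, shares FROM user_stocks WHERE user_id = ?');
            $stmt->execute(array($userId));
            $stocks = array();
            foreach ($stmt->fetchAll(PDO::FETCH_ASSOC) as $row) {
                $stocks[$row['symbol']] = new Stock($row['symbol'], $row['shares']);
            }
            return $stocks;
        }

    Creating 20 objects per login is cheap; object construction is not the cost to optimize here, the query is.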


  • Dynamically adding controls from an Event after Page_Init

    - by GenericTypeTea
    Might seem like a daft title as you shouldn't add dynamic controls after Page_Init if you want to maintain ViewState, but I couldn't think of a better way of explaining the problem. I have a class similar to the following: public class WebCustomForm : WebControl, IScriptControl { internal CustomRender Content { get { object content = this.Page.Session[this.SESSION_CONTENT_TRACKER]; return content as CustomRender; } private set { this.Page.Session[this.SESSION_CONTENT_TRACKER] = value; } } } CustomRender is an abstract class that implements ITemplate that I use to self-contain a CustomForms module I'm in the middle of writing. On the Page_Init of the page that holds the WebCustomForm, I Initialise the control by passing the relevant Ids to it. Then on the overridden OnInit method of the WebCustomForm I call the Instantiate the CustomRender control that's currently active: if (this.Content != null) { this.Content.InstantiateIn(this); } The problem is that my CustomRender controls need the ability to change the CustomRender control of the WebCustomForm. But when the events that fire on the CustomRender fire, the Page_Init event has obviously already gone off. So, my question is, how can I change the content of the WebCustomForm from a dynamically added control within it? The way I see it, I have two options: I separate the CustomRender controls out into their own stand alone control and basically have an aspx page per control and handle the events myself on the page (although I was hoping to just make a control I drop on the page and forget about) I don't use events and just keep requesting the current page, but with different Request Parameters Or I go back to the drawing board with any better suggetions anyone can give me.


  • OpenGL Calls Lock/Freeze

    - by Necrolis
    I am using some dell workstations(running WinXP Pro SP 2 & DeepFreeze) for development, but something was recenlty loaded onto these machines that prevents any opengl call(the call locks) from completing(and I know the code works as I have tested it on 'clean' machines, I also tested with simple opengl apps generated by dev-cpp, which will also lock on the dell machines). I have tried to debug my own apps to see where exactly the gl calls freeze, but there is some global system hook on ZwQueryInformationProcess that messes up calls to ZwQueryInformationThread(used by ExitThread), preventing me from debugging at all(it causes the debugger, OllyDBG, to go into an access violation reporting loop or the program to crash if the exception is passed along). the hook: ntdll.ZwQueryInformationProcess 7C90D7E0 B8 9A000000 MOV EAX,9A 7C90D7E5 BA 0003FE7F MOV EDX,7FFE0300 7C90D7EA FF12 CALL DWORD PTR DS:[EDX] 7C90D7EC - E9 0F28448D JMP 09D50000 7C90D7F1 9B WAIT 7C90D7F2 0000 ADD BYTE PTR DS:[EAX],AL 7C90D7F4 00BA 0003FE7F ADD BYTE PTR DS:[EDX+7FFE0300],BH 7C90D7FA FF12 CALL DWORD PTR DS:[EDX] 7C90D7FC C2 1400 RETN 14 7C90D7FF 90 NOP ntdll.ZwQueryInformationToken 7C90D800 B8 9C000000 MOV EAX,9C the messed up function + call: ntdll.ZwQueryInformationThread 7C90D7F0 8D9B 000000BA LEA EBX,DWORD PTR DS:[EBX+BA000000] 7C90D7F6 0003 ADD BYTE PTR DS:[EBX],AL 7C90D7F8 FE ??? ; Unknown command 7C90D7F9 7F FF JG SHORT ntdll.7C90D7FA 7C90D7FB 12C2 ADC AL,DL 7C90D7FD 14 00 ADC AL,0 7C90D7FF 90 NOP ntdll.ZwQueryInformationToken 7C90D800 B8 9C000000 MOV EAX,9C So firstly, anyone know what if anything would lead to OpenGL calls cause an infinite lock,and if there are any ways around it? and what would be creating such a hook in kernal memory ? Update: After some more fiddling, I have discovered a few more kernal hooks, a lot of them are used to nullify data returned by system information calls(such as the remote debugging port), I also managed to find out the what ever is doing this is using madchook.dll(by madshi) to do this, this dll is also injected into every running process(these seem to be some anti debugging code). Also, on the OpenGL side, it seems Direct X is fine/unaffected(I ran one of the DX 9 demo's without problems), so could one of these kernal hooks somehow affect OpenGL?


  • What would you write in a constitution (law book) for a programmers' country?

    - by Developer Art
    Now that we have our great place to talk about professional matters and socialize online (SO), I believe the next logical step is to found our own country! I invite you all to participate and pool your available resources. We will buy an island, or better, a group of islands, so that we can establish states like .NET Territories, Java Land, Linux Republic, etc. We will build a society of programmers (girls, we need you too). On the organizational side, we're going to need a constitution or a law book. I suggest we write it together; I make it a wiki, as it should be a cooperative effort. I'll open the work.

    Section 1. Programmers' rights.
    - Every citizen has a right to an Internet connection 24/7.
    - Every citizen can freely choose their field of interest.

    Section 2. Programmers' obligations.
    - Every citizen must embrace the changing nature of the profession and constantly educate himself.

    Section 3. Law enforcement.
    - Code duplication, where avoidable, is punished by limiting bandwidth to 64 Kbit for a period of one week.
    - Using ugly hacks instead of refactoring code is punished by cutting the Internet connection for a period of one month.
    - Usage of technologies older than 5/10 years is punished by restricting web access to sites last updated 5/10 years ago, for a period of one month.

    Please feel free to modify and extend the list. We'll need to have it ready before we proceed formally with the founding of the country. A purchase fund will be established shortly. Everyone is invited to participate.


  • What is the preferred way to update database schemas in multiple production environments

    - by rmarimon
    I am about to install some 20 servers, in multiple locations, with the same web application, each connected to its own local database. I will be updating the web applications remotely (perhaps using Debian's package manager) and I'm sure I will eventually need to update the database schemas as well. Since each server could eventually be running a different release of the web application, I need a way to apply the incremental changes to each server.

    I'm thinking of something like this. Let's start with database.schema.1 as the original release of the database, and assume this number increases with each new version of the schema. I could eventually end up with database.schema.17 as the current release; for a new installation, that would be the schema to install. It seems I would then need consecutive translations, like database.translation.1.2, which converts database.schema.1 into database.schema.2, database.translation.2.3 to convert from 2 to 3, and so on up to 17. Whenever I change a schema I need to alter the database, but I may also need to run a script to update the data, which might be done in SQL but might require an external non-SQL script.

    What is the appropriate way to organize all these files? What is an automatic way to apply the upgrades to a schema? And where do I store the current version number of the schema?
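
    A widely used pattern (a sketch; names are illustrative) answers the last question first: store the version number in the database itself, and make each translation a numbered migration script that ends by bumping it. The updater then reads the current version and applies every script from current+1 to latest, in order:

        -- Created once, in every database:
        CREATE TABLE schema_version (version INT NOT NULL);
        INSERT INTO schema_version (version) VALUES (1);

        -- migrations/002.sql: everything needed to go from version 1 to 2,
        -- data fix-ups included, with the bump as the last statement.
        ALTER TABLE customer ADD COLUMN country_code CHAR(2) NOT NULL DEFAULT 'US';
        UPDATE customer SET country_code = 'US' WHERE country_code = '';
        UPDATE schema_version SET version = 2;

    Steps that can't be expressed in SQL slot into the same numbered sequence as external scripts (e.g. migrations/005.sh), so the updater only ever needs the ordered directory listing plus the version table.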


  • Wrapping ASPX user control commands in a transaction

    - by Hans Gruber
    I'm working on a heavily dynamic and configurable CMS system, so many pages are composed of a dynamically loaded set of user controls. To enable loose coupling between containers (pages) and children (user controls), all user controls are responsible for their own persistence. Each user control is wired up to its data/service-layer dependencies via IoC. They also implement an IPersistable interface, which allows the containing .aspx page to issue a Save command to its children without knowledge of the number or exact nature of these user controls. Note: what follows is only pseudo-code:

        public class MyUserControl : IPersistable, IValidatable
        {
            public void Save()
            {
                throw new NotImplementedException();
            }

            public bool IsValid()
            {
                throw new NotImplementedException();
            }
        }

        public partial class MyPage
        {
            public void btnSave_Click(object sender, EventArgs e)
            {
                foreach (IValidatable control in Controls)
                {
                    if (!control.IsValid())
                    {
                        throw new Exception("error");
                    }
                }
                foreach (IPersistable control in Controls)
                {
                    control.Save(); // Save throws on failure
                }
            }
        }

    I'm thinking of using declarative transactions from the System.EnterpriseServices namespace to wrap btnSave_Click in a transaction in case of an exception, but I'm not sure how this might be achieved, or what pitfalls such an approach has.
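
    Unless COM+ features are specifically needed, System.Transactions is usually the lighter-weight way to get this. A sketch (assumes each control's Save opens its ADO.NET connection inside the scope, so it enlists in the ambient transaction automatically):

        using System;
        using System.Transactions;

        public void btnSave_Click(object sender, EventArgs e)
        {
            using (TransactionScope scope = new TransactionScope())
            {
                foreach (IPersistable control in Controls) // as in the pseudo-code above
                {
                    control.Save(); // any exception skips Complete(), rolling everything back
                }
                scope.Complete(); // commit only if every Save succeeded
            }
        }

    One design note: if different controls talk to different databases, the scope escalates to a distributed transaction via MSDTC, which is worth knowing before committing to this approach.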

