Search Results

Search found 266636 results on 10666 pages for 'back stack'.


  • Crash using WscRegisterForChanges.

    - by user335126
    I'm trying to use the WscRegisterForChanges function from C++ on Windows 7. Documentation is located here: http://msdn.microsoft.com/en-us/library/bb432507(v=VS.85).aspx My problem is that even though the callback executes properly, the code crashes when it gets to the end of the callback's execution. Here's the code in question. It's very simple, so I'm not sure why it's crashing:

        #include <windows.h>
        #include <wscapi.h>
        #include <stdio.h>

        void SecurityCenterChangeOccurred(void *param)
        {
            printf("Change occurred!\n");
        }

        int main()
        {
            HRESULT result = S_OK;
            HANDLE callbackRegistration = NULL;
            result = WscRegisterForChanges(
                NULL,
                &callbackRegistration,
                (LPTHREAD_START_ROUTINE)SecurityCenterChangeOccurred,
                NULL);
            while(1)
            {
                Sleep(100);
            }
            return 0;
        }

    My call stack looks like this when the crash occurs:

        00faf6e8()
        ntdll.dll!_TppWorkerThread@4() + 0x1293 bytes
        kernel32.dll!@BaseThreadInitThunk@12() + 0x12 bytes
        ntdll.dll!___RtlUserThreadStart@8() + 0x27 bytes
        ntdll.dll!__RtlUserThreadStart@8() + 0x1b bytes

    If I add ExitThread(0); to the end of SecurityCenterChangeOccurred, I get an error and the following trace (so I don't think I should be using ExitThread):

        Unhandled exception at 0x7799852b (ntdll.dll) in WscRegisterForChangesCrash.exe: 0xC000071C: An invalid thread, handle %p, is specified for this operation. Possibly, a threadpool worker thread was specified.

        ntdll.dll!_TpCheckTerminateWorker@4() + 0x3ca2f bytes
        ntdll.dll!_RtlExitUserThread@4() + 0x30 bytes
        WscRegisterForChangesCrash.exe!SecurityCenterChangeOccurred(void * param=0x00000000) Line 8 + 0xa bytes C++
        wscapi.dll!WorkItemWrapper() + 0x19 bytes
        ntdll.dll!_RtlpTpWorkCallback@8() + 0xdf bytes
        ntdll.dll!_TppWorkerThread@4() + 0x1293 bytes
        kernel32.dll!@BaseThreadInitThunk@12() + 0x12 bytes
        ntdll.dll!___RtlUserThreadStart@8() + 0x27 bytes
        ntdll.dll!__RtlUserThreadStart@8() + 0x1b bytes

    Does anyone have any ideas why this might be happening? To trigger the crash, run the program and turn the firewall on or off.
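
    One detail worth noting, as a sketch rather than a confirmed fix: LPTHREAD_START_ROUTINE is declared as DWORD WINAPI (*)(LPVOID), so casting a cdecl function returning void to that type means the thread-pool worker and the callback disagree about calling convention and return value on x86, which can corrupt the stack exactly when the callback returns. A minimal variant that avoids the cast entirely (everything other than the WSC API call itself is illustrative):

        // Sketch only: give the callback the exact LPTHREAD_START_ROUTINE signature
        // so no cast is needed; return normally instead of calling ExitThread,
        // because the callback runs on a threadpool worker thread.
        #include <windows.h>
        #include <wscapi.h>
        #include <stdio.h>

        DWORD WINAPI SecurityCenterChangeOccurred(LPVOID param)
        {
            (void)param;
            printf("Change occurred!\n");
            return 0;
        }

        int main()
        {
            HANDLE callbackRegistration = NULL;
            HRESULT result = WscRegisterForChanges(
                NULL, &callbackRegistration,
                SecurityCenterChangeOccurred, NULL);
            if (FAILED(result))
                return 1;
            Sleep(INFINITE);   // wait for notifications; link against wscapi.lib
            return 0;
        }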


  • x86 Assembly: Before Making a System Call on Linux Should You Save All Registers?

    - by mudge
    I have the code below that opens a file, reads it into a buffer, and then closes the file. The close system call requires that the file descriptor number be in the ebx register. The ebx register gets the file descriptor number before the read system call is made. My question is: should I save the ebx register on the stack or somewhere else before I make the read system call (could int 80h trash the ebx register?), and then restore the ebx register for the close system call? Or is the code below fine and safe as it is? I have run the code and it works; I'm just not sure whether it is generally considered good assembly practice, because I don't save the ebx register before the int 80h read call.

        ;; open up the input file
        mov eax,5              ; open file system call number
        mov ebx,[esp+8]        ; null terminated string file name, first command line parameter
        mov ecx,0o             ; access type: O_RDONLY
        int 80h                ; file handle or negative error number put in eax
        test eax,eax
        js Error               ; test sign flag (SF) for negative number which signals error

        ;; read in the full input file
        mov ebx,eax            ; assign input file descriptor
        mov eax,3              ; read system call number
        mov ecx,InputBuff      ; buffer to read into
        mov edx,INPUT_BUFF_LEN ; total bytes to read
        int 80h
        test eax,eax
        js Error               ; if eax is negative then error
        jz Error               ; if no bytes were read then error
        add eax,InputBuff      ; add size of input to the beginning of InputBuff location
        mov [InputEnd],eax     ; assign address of end of input

        ;; close the input file
        ;; file descriptor is already in ebx
        mov eax,6              ; close file system call number
        int 80h


  • Structure map and generics (in XML config)

    - by James D
    Hi, I'm using the latest StructureMap (2.5.4.264), and I need to define some instances in the XML configuration for StructureMap using generics. However, I get the following 103 error:

        Unhandled Exception: StructureMap.Exceptions.StructureMapConfigurationException:
        StructureMap configuration failures:
        Error: 103
        Source: Requested PluginType MyTest.ITest`1[[MyTest.Test,MyTest]] configured in Xml cannot be found
        Could not create a Type for 'MyTest.ITest`1[[MyTest.Test,MyTest]]'
        System.ApplicationException: Could not create a Type for 'MyTest.ITest`1[[MyTest.Test,MyTest]]'
        ---> System.TypeLoadException: Could not load type 'MyTest.ITest`1' from assembly 'StructureMap, Version=2.5.4.264, Culture=neutral, PublicKeyToken=e60ad81abae3c223'.
           at System.RuntimeTypeHandle._GetTypeByName(String name, Boolean throwOnError, Boolean ignoreCase, Boolean reflectionOnly, StackCrawlMark& stackMark, Boolean loadTypeFromPartialName)
           at System.RuntimeTypeHandle.GetTypeByName(String name, Boolean throwOnError, Boolean ignoreCase, Boolean reflectionOnly, StackCrawlMark& stackMark)
           at System.RuntimeType.PrivateGetType(String typeName, Boolean throwOnError, Boolean ignoreCase, Boolean reflectionOnly, StackCrawlMark& stackMark)
           at System.Type.GetType(String typeName, Boolean throwOnError)
           at StructureMap.Graph.TypePath.FindType()
           --- End of inner exception stack trace ---
           at StructureMap.Graph.TypePath.FindType()
           at StructureMap.Configuration.GraphBuilder.ConfigureFamily(TypePath pluginTypePath, Action`1 action)

    A simple reproduction of the code is as follows:

        public interface ITest<T> { }
        public class Test { }
        public class Concrete : ITest<Test> { }

    Which I then wish to define in the XML configuration something like this:

        <DefaultInstance
            PluginType="MyTest.ITest`1[[MyTest.Test,MyTest]],MyTest"
            PluggedType="MyTest.Concrete,MyTest"
            Scope="Singleton" />

    I've been racking my brain, but I can't see what I'm doing wrong - I've used Type.GetType to verify that the type string is actually valid, which it is. Anyone have any ideas? Thanks!


  • Populate two column grid with databinding?

    - by Richard
    How do I populate a two-column grid with objects from my observable collection? I've tried to achieve this effect by using the toolkit's WrapPanel, but the items just stack.

        <toolkit:WrapPanel Margin="5,0,0,0" Width="400">
            <ItemsControl ItemsSource="{Binding Trips}">
                <ItemsControl.ItemTemplate>
                    <DataTemplate>
                        <StackPanel Height="236" Width="182">
                            <Button Style="{StaticResource VasttrafikButtonTrip}">
                                <StackPanel Width="152" Height="140">
                                    <TextBlock Text="{Binding FromName}" />
                                    <TextBlock FontFamily="Segoe WP Semibold" Text="till" />
                                    <TextBlock Text="{Binding ToName}" />
                                </StackPanel>
                            </Button>
                            <TextBlock HorizontalAlignment="Left" Width="160" FontSize="16" FontWeight="ExtraBlack" Text="{Binding TravelTimeText}" />
                            <TextBlock HorizontalAlignment="Left" Width="160" Margin="0,-6,0,0" FontSize="16" Text="{Binding TransferCountText}" />
                        </StackPanel>
                    </DataTemplate>
                </ItemsControl.ItemTemplate>
            </ItemsControl>
        </toolkit:WrapPanel>


  • Inline function v. Macro in C -- What's the Overhead (Memory/Speed)?

    - by Jason R. Mick
    I searched Stack Overflow for the pros and cons of function-like macros vs. inline functions. I found the following discussion: Pros and Cons of Different macro function / inline methods in C ...but it didn't answer my primary burning question. Namely, what is the overhead in C of using a macro function (with variables, possibly other function calls) vs. an inline function, in terms of memory usage and execution speed? Are there any compiler-dependent differences in overhead? I have both icc and gcc at my disposal. The code snippet I'm modularizing is:

        double AttractiveTerm = pow(SigmaSquared/RadialDistanceSquared,3);
        double RepulsiveTerm = AttractiveTerm * AttractiveTerm;
        EnergyContribution += 4 * Epsilon * (RepulsiveTerm - AttractiveTerm);

    My reason for turning it into an inline function/macro is so I can drop it into a C file and then conditionally compile other similar, but slightly different, functions/macros. For example:

        double AttractiveTerm = pow(SigmaSquared/RadialDistanceSquared,3);
        double RepulsiveTerm = pow(SigmaSquared/RadialDistanceSquared,9);
        EnergyContribution += 4 * Epsilon * (RepulsiveTerm - AttractiveTerm);

    (note the difference in the second line...) This function is central to my code and gets called thousands of times per step in my program, and my program performs millions of steps. Thus I want the LEAST overhead possible, which is why I'm spending time worrying about the overhead of inlining vs. transforming the code into a macro. Based on the prior discussion I already realize the other pros/cons (type independence and the errors that result from it) of macros... but what I want to know most, and don't currently know, is the PERFORMANCE. I know some of you C veterans will have great insight for me!!
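
    For comparison, here is a sketch (hypothetical names, not taken from the post above) of the same computation written both ways. With optimization enabled, a static inline function and a function-like macro usually compile to identical machine code; the practical differences are argument re-evaluation and type checking rather than raw speed:

        /* Sketch: the same contribution as a macro and as a static inline function.
           The macro's arguments are textually substituted, so an argument with side
           effects would be evaluated more than once; the inline function evaluates
           each argument exactly once and gives the compiler real types to check. */
        #include <math.h>

        #define ENERGY_CONTRIB(sigma2, r2, eps) (4.0 * (eps) * (pow((sigma2) / (r2), 6) - pow((sigma2) / (r2), 3)))

        static inline double energy_contrib(double sigma2, double r2, double eps)
        {
            double attractive = pow(sigma2 / r2, 3);
            double repulsive  = attractive * attractive;   /* same as pow(..., 6) */
            return 4.0 * eps * (repulsive - attractive);
        }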


  • Advanced All In One .NET Framework (should I go for a software factory?)

    - by alfredo dobrekk
    Hi, I'm starting a new project that would basically take input from users and save it to a database, across about 30 screens, and I would like to find a framework that will provide the maximum number of these features out of the box:

        - .NET, C#, Windows Forms
        - unit testing
        - continuous integration
        - logging
        - screens with lists, combo boxes, text boxes, add, delete, save, cancel that are easy to update when you add a property to your classes or a field to your database
        - auto-completion on controls to help the user find their way
        - use of an ORM like NHibernate
        - easy multithreading and display of wait screens for the user
        - easy undo/redo
        - tabbed child windows
        - search forms
        - the ability to grant access to some functionality according to user profiles
        - MVP/MVVM or whatever design patterns
        - either code generation from the database to C# classes, or generation of the database schema from C# classes
        - some kind of database versioning/upgrade to easily update the database when I release patches to the application once in production
        - automatic control resizing
        - code metrics analysis
        - some code generator I can use against my entities that would generate a rough form I can rearrange afterwards
        - a code documentation generator
        - ...

    At this point I have 3 options:

        1. Build from scratch on top of the CLR :(
        2. Find the functionality among several open source frameworks and use them as a stack for the infrastructure
        3. Find a "software factory"

    I know it's a lot, but I really would like to build upon existing code so I can focus on business rules. What open source tools would you use to achieve this?


  • Android Signal 11 (SIGSEGV)

    - by Naturjoghurt
    I read many posts here and on other sites but cannot find the problem causing my error. I use an AsyncTask because I want to easily manipulate the UI thread before and after execution. In doInBackground I create a ThreadPoolExecutor and execute Runnables. If I only execute one Runnable with the executor, there is no problem, but if I execute another Runnable I get the following error:

        06-26 18:00:42.288: A/libc(25073): Fatal signal 11 (SIGSEGV) at 0x7f486162 (code=1), thread 25106 (pool-1-thread-2)
        06-26 18:00:42.304: D/dalvikvm(25073): GC_CONCURRENT freed 119K, 2% free 8908K/9056K, paused 4ms+4ms, total 45ms
        06-26 18:00:42.327: I/System.out(25073): In Check All with Prefix: a and Length: 4
        06-26 18:00:42.390: I/DEBUG(126): *** *** *** *** *** *** *** *** *** *** *** *** *** *** *** ***
        06-26 18:00:42.390: I/DEBUG(126): Build fingerprint: 'google/yakju/maguro:4.2.2/JDQ39/573038:user/release-keys'
        06-26 18:00:42.390: I/DEBUG(126): Revision: '9'
        06-26 18:00:42.390: I/DEBUG(126): pid: 25073, tid: 25106, name: pool-1-thread-2 >>> de.uni_duesseldorf.cn.distributed_computing2 <<<
        06-26 18:00:42.390: I/DEBUG(126): signal 11 (SIGSEGV), code 1 (SEGV_MAPERR), fault addr 7f486162
        ...
        06-26 18:00:42.538: I/DEBUG(126): memory map around fault addr 7f486162:
        06-26 18:00:42.538: I/DEBUG(126): 60292000-60391000
        06-26 18:00:42.538: I/DEBUG(126): (no map for address)
        06-26 18:00:42.538: I/DEBUG(126): bed14000-bed35000 [stack]

    I set up the ThreadPoolExecutor like this:

        // numberOfPackages: number of Runnables to be executed
        public void initializeThreadPoolExecutor(int numberOfPackages) {
            int corePoolSize = Runtime.getRuntime().availableProcessors();
            int maxPoolSize = numberOfPackages;
            long keepAliveTime = 60;
            final BlockingQueue workingQueue = new LinkedBlockingQueue();
            executor = new ThreadPoolExecutor(corePoolSize, maxPoolSize, keepAliveTime, TimeUnit.SECONDS, workingQueue);
        }

    I have no clue why it fails when starting the second thread. Maybe memory leaks? Any help appreciated. Thanks in advance.


  • Over Optimistic Daily Productivity

    - by Dan Revell
    I'm a junior developer and have been working since I graduated last summer, so it's coming up to a year now. I have an issue that is starting to get to me. Every night I think back to what I did that day, feel bad that I didn't get as much done as I would have liked, and then tick off in my head all the things I'll get done the following day. Come the end of the following day, I haven't gotten through half of what I wanted to.

    This over-optimism is what I'm suffering from. Might it be just because I'm relatively new to the profession and am not aware of how long things will actually take me? The work might be quick to think through in my head, but all sorts of time sinks can bleed away the hours. If it isn't that, then perhaps it's the technology stack I'm working on. SharePoint isn't the easiest thing to develop for, and it's certainly something I came into not knowing a whole lot about.

    If it's because I'm not yet skilled enough to predict how long things will take me, is this trait of over-optimistic predictions universal to the profession? I'd appreciate any input from those experienced in working with younger developers and those who might have suffered from this themselves.

    [EDIT] Perhaps I worded the question badly. I'm interested in just general day-to-day work rather than overall project completion estimation.


  • Macro compilation problem

    - by wildfly
    I was given a primitive task: to find out (and put in cl) how many numbers in an array are bigger than the following one (meaning if (arr[i] > arr[i+1]) count++;), but I'm having problems because it has to be a macro. I am getting errors from TASM. Can someone give me a pointer?

        SortA macro a, l
            LOCAL noes
            irp reg, <si,di,bx>
              push reg
            endm
            xor bx,bx
            xor si,si
            rept l-1          ;; also tried rept 3 : won't compile
              mov bl,a[si]
              inc si
              cmp bl,arr[si]
              jb noes
              inc di
        noes: add di,0
            endm
            mov cx,di
            irp reg2, <bx,di,si>
              pop reg2
            endm
        endm

        dseg segment
            arr db 10,9,8,7
            len = 4
        dseg ends

        sseg segment stack
            dw 100 dup (?)
        sseg ends

        cseg segment
            assume ds:dseg, ss:sseg, cs:cseg
        start:
            mov ax, dseg
            mov ds,ax
            sortA arr,len
        cseg ends
        end start

    Errors:

        Assembling file: sorta.asm
        **Error** sorta.asm(51) REPT(4) Expecting pointer type
        **Error** sorta.asm(51) REPT(6) Symbol already different kind: NOES
        **Error** sorta.asm(51) REPT(10) Expecting pointer type
        **Error** sorta.asm(51) REPT(12) Symbol already different kind: NOES
        **Error** sorta.asm(51) REPT(16) Expecting pointer type
        **Error** sorta.asm(51) REPT(18) Symbol already different kind: NOES
        Error messages: 6


  • What changed in the DataGrid that means it won't work anymore?

    - by Jeff Yates
    I have a Silverlight app with a DataGrid containing some custom columns, and all was working well. Then I updated to the Silverlight 3 tools for VS 2008 SP1 and rebuilt it. Now it has the following problems:

        1. Rows aren't added when the collection is modified. The ItemsSource property is (and always has been) set to an ObservableCollection instance, which notifies when its contents change. This worked fine in Silverlight 2. However, in Silverlight 3, to get this working at all, I now have to null and then re-set ItemsSource - this seems like I'm hiding a bigger issue, but I can't work out what that might be.
        2. I cannot select a row or a cell anymore. If I'm lucky, I can select one whole row before it stops working.
        3. I can't edit anything. I suspect this is related to the previous point.

    I'll post some source when I am able, but first I have to strip it down to the bare minimum. In the meantime, I was hoping someone might have some idea of what may be going on here. My gut feeling on the second two points is that my bindings are no longer working, but that's just a guess, and if it is the case, I have no idea which ones. Thanks for any help anyone might be able to provide.

    Update: So, I finally reduced my problem down to a simple works/doesn't-work comparison. The problem seems to occur if I override Equals in my element type. As soon as I do that, something strange happens in the ObservableCollection that contains that type, it seems, and my application breaks. To make it more interesting, there is a check to make sure that duplicate items don't even get close to being added to the collection. I don't exactly know why ObservableCollection needs to compare equality when inserting items (the stack trace indicates it is using IndexAt), but this seems to cause the issue. So, any thoughts?


  • ServiceStack Razor view with request and response DTOs

    - by user7398
    I'm having a go with the Razor functionality in ServiceStack. I have a Razor .cshtml view working for one of my response DTOs. I need to access some values from the request DTO in the Razor view that have been filled in from fields in the REST route, so I can construct a URL to put into the response HTML page and also label some form labels. Is there any way of doing this? I don't want to duplicate the property from the request DTO into the response DTO just for this HTML view. Because I'm trying to emulate an existing REST service of another product, I do not want to emit extra data just for the HTML view.

    The route, for example: http://localhost/rest/{Name}/details/{Id}

    And the view, for example:

        @inherits ViewPage
        @{
            ViewBag.Title = "todo title";
            Layout = "HtmlReport";
        }
        this needs to come from the request dto NOT @Model
        <a href="/rest/@Model.Name">link to user</a>
        <a href="/rest/@Model.Name/details/@Model.Id">link to user details</a>


  • How to model in J2EE / JEE?

    - by Harry
    Let's say I have decided to go with the J(2)EE stack for my enterprise application. Now, for domain modelling (or: for designing the M of MVC), which APIs can I safely assume and use directly, and which should I stay away from... say, via a layer of abstraction?

    For example: should I go ahead and litter my model with calls to the Hibernate/JPA API? Or should I build an abstraction... a persistence layer of my own, to avoid hard-coding against these two specific persistence APIs? Why I ask this: a few years ago there was the Kodo API, which got superseded by Hibernate. If one had designed a persistence layer and coded the rest of the model against that layer (instead of littering the model with calls to a specific vendor API), it would have allowed one to (relatively) easily switch from Kodo to Hibernate to xyz.

    Also, is it recommended to make aggressive use of the *QL provided by your persistence vendor in your domain model? I'm not aware of any real-world issues (like performance, scalability, portability, etc.) arising out of heavy use of an HQL-like language. Why I ask this: I would like to avoid, as much as possible, writing custom code when the same could be accomplished via a query language that is more portable than SQL.

    Sorry, but I'm a complete newbie to this area. Where could I find more info on this topic? Many thanks in advance. /HS


  • Timing related crash when unloading a DLL?

    - by fbrereto
    I know I'm grasping at straws here, but this one is a mystery... any pointers or help would be most welcome, so I'm appealing to those more intelligent than I:

    We have a crash exhibited in our release binaries only. The crash takes place as the binary is bringing itself down and terminating the sub-libraries upon which it depends. Its reproducibility depends on the machine: some machines are 100% reliable in reproducing the crash, some don't exhibit the issue at all, and some are in between. The crash is deep within one of the sub-libraries, and there is a good likelihood the stack is corrupt by the time the rubble can be brought into a debugger (MSVC 2008 SP1) to be examined.

    Running the binary under the debugger prevents the bug from happening, as does remote debugging, as does (of all things) connecting to the machine via VNC. We have tried installing the Microsoft Driver Development Kit, and doing so also squelches the bug.

    What would be the next best place to look? What tools would be best in this circumstance? Does it sound like a race condition, or something else?


  • Why is CoRegisterClassObject creating two extra threads?

    - by Stijn Sanders
    I'm trying to fix a problem that only recently started happening on a number of machines on a VPN. They each run a client application I wrote that exposes a COM automation object. For some strange reason I haven't been able to discover yet, one thread in the application takes up all of the available CPU time, slowing other operations on the machine.

    In observing the application's strange behaviour, I've noticed it's the third thread started, and if I debug on my machine I notice that the first call to CoRegisterClassObject creates two extra threads. If the second of these two threads is the one that gets into an infinite loop, I'm not at all sure how to fix this. Where could I check next to find out what's wrong? Could it have been started by one of the recent patches rolled out by Microsoft this last 'patch Tuesday'?

    I had a go with Process Explorer to extract a stack trace of the thread:

        ntoskrnl.exe!ExReleaseResourceLite+0x1a3
        ntoskrnl.exe!PsGetContextThread+0x329
        WLDAP32.dll!Ordinal325+0x1231
        WLDAP32.dll!Ordinal325+0x129e
        WLDAP32.dll!Ordinal325+0x1178
        ntdll.dll!LdrInitializeThunk+0x24
        ntdll.dll!LdrShutdownThread+0xe9
        kernel32.dll!ExitThread+0x3e
        kernel32.dll!FreeLibraryAndExitThread+0x1e
        ole32.dll!StringFromGUID2+0x65d
        kernel32.dll!GetModuleFileNameA+0x1ba


  • Is it faulty to use memcache together with a PHP web browser game in this way?

    - by Crowye
    Background: We are currently working on a strategy web browser game based on PHP, HTML and JavaScript. The plan is to have 10,000+ users playing within the same world. Currently we are using memcached to:

        - store JSON static data and language files
        - store changeable serialized PHP class objects (such as armies, inventories, unit containers, buildings, etc.)

    In the back we have a MySQL server running and holding all the game data as well. When an object is loaded through our ObjectLoader, it loads in this order:

        1. check a static hashmap in the script for the object
        2. check memcache, if it has already been loaded into it
        3. otherwise load from the database, and save it into memcache and the static temp hashmap

    We have built the whole game using a class-object-oriented approach where functionality is always implemented between objects. Because of this we think we have managed to get a nice structure, and with the help of memcached we have achieved good request times from client to server when interacting with the game.

    I'm aware that memcache is not synchronized, and also is not commonly used for holding a full game in memory. In the beginning, after a server startup, the load times when loading objects into memcache for the first time will be high, but after the server has been online for a while and most loads come from memcache, the loads will be well reduced.

    Currently we are saving changed objects into memcache and the database at the same time. Earlier we had an idea to save objects into the db only after a certain time or at intervals, but due to the risk of inconsistency if memcache or the server went down, we skipped it for now.

    Client requests to the server often return an object's status in simple JSON format without changing the object, which in turn is represented in the browser visually with images and JavaScript. But from time to time, depending on when an object was last updated, the server updates them with new information (e.g. a build queue holding planned buildings: time progress is increased, and/or the planned-queue-items array has changed).

    Questions:

        1. Do you see how this could work, or are we walking in blindness here?
        2. Do you expect us to have a lot of inconsistency issues if someone loads and updates a memcache object while someone else does the same?
        3. Is it even doable to do it the way we have done it? It seems to be working fine at the moment, but so far we have only had 4 people online at the same time.
        4. Is some other cache program a better fit for this class-object approach than memcached?
        5. Are there any other tips you have for this situation?

    UPDATE: Since it is simply a "normal webpage" (no applet, Flash, etc.), we are implementing the game so that the server is the only one holding a "real game state"... the state of the different JavaScript objects on the client is more like an approximate version of the server's game state. From time to time, and before you do certain important things, the client's visual state is updated to the server's state (e.g. the client thinks it can afford a barracks, asks the server to build a barracks, the server updates current resources according to income data on the server and then tries to build the barracks or returns an error message, and then sends the current server state of resources and buildings back to the client). It is not a fast-paced game like a real strategy game. It's more like a quite slow game with 3-4 months of play time, where buildings can take from one minute up to several days to complete.


  • How to make Spring load a JDBC Driver BEFORE initializing Hibernate's SessionFactory?

    - by Bill_BsB
    I'm developing a Spring (2.5.6) + Hibernate (3.2.6) web application to connect to a custom database. For that I have a custom JDBC driver and a custom Hibernate dialect. I know for sure that these custom classes work (hard-coded stuff in my unit tests). The problem, I guess, is with the order in which things get loaded by Spring. Basically:

        1. Custom database initializes
        2. Spring loads beans from web.xml
        3. Spring loads servlet beans (applicationContext.xml)
        4. Hibernate kicks in: shows the version and all the properties correctly loaded
        5. Hibernate's HbmBinder runs (maps all my classes)
        6. LocalSessionFactoryBean - Building new Hibernate SessionFactory
        7. DriverManagerConnectionProvider - using driver: MyCustomJDBCDriver at CustomDBURL
        8. I get a SQLException: No suitable driver found for CustomDBURL
        9. Hibernate loads the custom dialect
        10. My CustomJDBCDriver finally gets registered with DriverManager (log messages)
        11. SettingsFactory runs
        12. SchemaExport runs (hbm2ddl)
        13. I get a SQLException: No suitable driver found for CustomDBURL (again?!)

    The application gets deployed successfully, but there are no tables in my custom database. Things that I have tried so far:

        - Different techniques for passing the Hibernate properties: embedded in the 'sessionFactory' bean, or loaded from a hibernate.properties file. Nothing worked, but I didn't try a hibernate.cfg.xml file or a dataSource bean yet.
        - MyCustomJDBCDriver has a static initializer block that registers itself with the DriverManager.
        - Different combinations of lazy initialization (lazy-init="true") of the Spring beans, but nothing worked.

    My custom JDBC driver should be the first thing to be loaded - not sure if by Spring, but...! Can anyone give me a solution for this, or maybe a hint for what else I could try? I can provide more details (huge stack traces, for instance) if that helps. Thanks in advance.


  • Silverlight 4 webclient authentication - anyone have this working yet?

    - by Toran Billups
    So one of the best parts about the new Silverlight 4 beta is that they finally implemented the big missing feature of the networking stack - network credentials! In the code below I have a working request set up, but for some reason I get a "security error" when the request comes back - is this because twitter.com rejected my API call, or is it something that I'm missing in code? It might be good to point out that when I watch this code execute via Fiddler, it shows that the XML file for cross-domain access is pulled down successfully, but that is the last request shown by Fiddler...

        public void RequestTimelineFromTwitterAPI()
        {
            WebRequest.RegisterPrefix("https://", System.Net.Browser.WebRequestCreator.ClientHttp);
            WebClient myService = new WebClient();
            myService.AllowReadStreamBuffering = true;
            myService.UseDefaultCredentials = false;
            myService.Credentials = new NetworkCredential("username", "password");
            myService.UseDefaultCredentials = false;
            myService.OpenReadCompleted += new OpenReadCompletedEventHandler(TimelineRequestCompleted);
            myService.OpenReadAsync(new Uri("https://twitter.com/statuses/friends_timeline.xml"));
        }

        public void TimelineRequestCompleted(object sender, System.Net.OpenReadCompletedEventArgs e)
        {
            // anytime I query for e.Result I get a security error
        }


  • Git rebase and semi-tracked per-developer config files.

    - by dougkiwi
    This is my first SO question, and I'm new-ish to Git as well.

    Background: I am supposed to be the version control guru for Git in my group of about 8 developers. As I don't have a lot of Git experience, this is exciting. I decided we need a shared repository that would be the authoritative master for the production code and the main meeting point for the development code. As we work for a corporation, we really do need to show an authoritative source for the production code at least. I have instructed the developers to use pull --rebase when pulling from the shared repository, and then push the commits that they want to share.

    We have been running into problems with a particular type of file. One of these files, which I currently assume is typical of the problem, is called web.config. We want a version-controlled master web.config for devs to clone, but each dev may make minor edits to this file that they wish to save locally but not share. The problem is this: how do I tell Git not to consider local changes or commits to this file to be relevant for rebasing and pushing?

    Gitignore does not seem to solve the problem, but maybe that's because I put web.config into .gitignore too late? In some simple situations we have stacked local changes, rebased, pushed, and popped the stack, but that doesn't seem to work all of the time. I haven't picked up the pattern quite yet. The published documentation on pull --rebase tends to deal with simpler situations.

    Or do I have the wrong idea entirely? Are we misusing Git?

    Dougkiwi


  • Hosting simple Python scripts in a container to handle concurrency, configuration, caching, etc.

    - by Justin Grant
    My first real-world Python project is to write a simple framework (or re-use/adapt an existing one) which can wrap small Python scripts (which are used to gather custom data for a monitoring tool) in a "container" that handles boilerplate tasks like:

        - fetching a script's configuration from a file (keeping that info up to date if the file changes, and handling decryption of sensitive config data)
        - running multiple instances of the same script in different threads instead of spinning up a new process for each one
        - exposing an API for caching expensive data and storing persistent state from one script invocation to the next

    Today, script authors must handle the issues above themselves, which usually means that most script authors don't handle them correctly, causing bugs and performance problems. In addition to avoiding bugs, we want a solution which lowers the bar to create and maintain scripts, especially given that many script authors may not be trained programmers.

    Below are examples of the API I've been thinking of, and which I'm looking to get your feedback about. A scripter would need to write a single method which takes (as input) the configuration that the script needs to do its job, and either returns a Python object or calls a method to stream back data in chunks. Optionally, a scripter could supply methods to handle startup and/or shutdown tasks.

    HTTP-fetching script example (in pseudocode, omitting the actual data-fetching details to focus on the container's API):

        def run(config, context, cache):
            results = http_library_call(config.url, config.http_method, config.username, config.password, ...)
            return { html: results.html, status_code: results.status, headers: results.response_headers }

        def init(config, context, cache):
            config.max_threads = 20                  # up to 20 URLs at one time (per process)
            config.max_processes = 3                 # launch up to 3 concurrent processes
            config.keepalive = 1200                  # keep process alive for 10 mins without another call
            config.process_recycle.requests = 1000   # restart the process every 1000 requests (to avoid leaks)
            config.kill_timeout = 600                # kill the process if any call lasts longer than 10 minutes

    A database-data-fetching script example might look like this (in pseudocode):

        def run(config, context, cache):
            expensive = context.cache["something_expensive"]
            for record in db_library_call(expensive, context.checkpoint, config.connection_string):
                context.log(record, "logDate")   # log all properties, optionally specify name of timestamp property
                last_date = record["logDate"]
            context.checkpoint = last_date       # persistent checkpoint, used next time through

        def init(config, context, cache):
            cache["something_expensive"] = get_expensive_thing()

        def shutdown(config, context, cache):
            expensive = cache["something_expensive"]
            expensive.release_me()

    Is this API appropriately "pythonic", or are there things I should do to make this more natural to the Python scripter? (I'm more familiar with building C++/C#/Java APIs, so I suspect I'm missing useful Python idioms.)

    Specific questions:

        - Is it natural to pass a "config" object into a method and ask the callee to set various configuration options? Or is there another preferred way to do this?
        - When a callee needs to stream data back to its caller, is a method like context.log() (see above) appropriate, or should I be using yield instead? (yield seems natural, but I worry it'd be over the heads of most scripters)
        - My approach requires scripts to define functions with predefined names (e.g. "run", "init", "shutdown"). Is this a good way to do it? If not, what other mechanism would be more natural?
        - I'm passing the same config, context, cache parameters into every method. Would it be better to use a single "context" parameter instead? Would it be better to use global variables instead?

    Finally, are there existing libraries you'd recommend to make this kind of simple "script-running container" easier to write?


  • FluentNHibernate error -- "Invalid object name"

    - by goober
    I'm attempting to do the most simple of mappings with FluentNHibernate & Sql2005. Basically, I have a database table called "sv_Categories". I'd like to add a category, setting the ID automatically, and adding the userid and title supplied.

    Database table layout:

        CategoryID -- int -- not-null, primary key, auto-incrementing
        UserID -- uniqueidentifier -- not null
        Title -- varchar(50) -- not null

    Simple. My SessionFactory code (which works, as far as I can tell):

        _SessionFactory = Fluently.Configure().Database(
            MsSqlConfiguration.MsSql2005
                .ConnectionString(c => c.FromConnectionStringWithKey("SVTest")))
            .Mappings(x => x.FluentMappings.AddFromAssemblyOf<CategoryMap>())
            .BuildSessionFactory();

    My ClassMap code:

        public class CategoryMap : ClassMap<Category>
        {
            public CategoryMap()
            {
                Id(x => x.ID).Column("CategoryID").Unique();
                Map(x => x.Title).Column("Title").Not.Nullable();
                Map(x => x.UserID).Column("UserID").Not.Nullable();
            }
        }

    My Class code:

        public class Category
        {
            public virtual int ID { get; private set; }
            public virtual string Title { get; set; }
            public virtual Guid UserID { get; set; }

            public Category()
            {
                // do nothing
            }
        }

    And the page where I save the object:

        public void Add(Category catToAdd)
        {
            using (ISession session = SessionProvider.GetSession())
            {
                using (ITransaction Transaction = session.BeginTransaction())
                {
                    session.Save(catToAdd);
                    Transaction.Commit();
                }
            }
        }

    I receive the error:

        Invalid object name 'Category'.
        Description: An unhandled exception occurred during the execution of the current web request. Please review the stack trace for more information about the error and where it originated in the code.
        Exception Details: System.Data.SqlClient.SqlException: Invalid object name 'Category'.

    I think it might be that I haven't told the CategoryMap class to use the "sv_Categories" table, but I'm not sure how to do that. Any help would be appreciated. Thanks!


  • How do C++ compilers actually pass reference parameters?

    - by T.E.D.
    This question came about as a result of some mixed-language programming. I had a Fortran routine I wanted to call from C++ code. Fortran passes all its parameters by reference (unless you tell it otherwise). So I thought I'd be clever (bad start right there) and, in my C++ code, declare the Fortran routine something like this:

        extern "C" void FORTRAN_ROUTINE (unsigned & flag);

    This code worked for a while but (of course, right when I needed to leave) suddenly started blowing up on a return call - a clear indication of a munged call stack. Another engineer came behind me and fixed the problem, declaring that the routine had to be defined in C++ as

        extern "C" void FORTRAN_ROUTINE (unsigned * flag);

    I'd accept that, except for two things. One is that it seems rather counter-intuitive for the compiler not to pass reference parameters by reference, and I can find no documentation anywhere that says that. The other is that he changed a whole raft of other code at the same time, so it theoretically could have been another change that fixed whatever the issue was.

    So the question is, how does C++ actually pass reference parameters? Is it perhaps free to do copy-in, copy-out for small values or something? In other words, are reference parameters utterly useless in mixed-language programming? I'd like to know so I don't make this same code-killing mistake ever again.
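
    As a sketch of the distinction being asked about (the routine name and body below are placeholders, not a real Fortran binding): most C++ ABIs do pass a reference as the address of the object, but that is an implementation detail the language standard leaves unspecified, whereas a pointer parameter spells out the "pass an address" contract explicitly, which is why it is the usual choice at an extern "C" boundary.

        // Sketch: the pointer form of the declaration at a C/Fortran-style boundary.
        // The stand-in definition exists only so the example compiles on its own;
        // in the real program the symbol would come from the Fortran object file.
        extern "C" void fortran_routine(unsigned *flag);   // explicit address-passing contract

        extern "C" void fortran_routine(unsigned *flag)    // hypothetical stand-in for the Fortran routine
        {
            *flag = 42;   // the callee writes through the address it receives
        }

        int main()
        {
            unsigned flag = 0;
            fortran_routine(&flag);   // the address taken is visible at the call site
            return flag == 42 ? 0 : 1;
        }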


  • Scalable Full Text Search With Per User Result Ordering

    - by jeremy
    What options exist for creating a scalable full text search where results need to be sorted on a per-user basis? This is for PHP/MySQL (Symfony/Doctrine as well, if relevant).

    In our case, we have a database of workouts that have been performed by users. The workouts that a user has done before should appear at the top of the results. The more frequently they've done a workout, the higher it should appear in search matches. If it helps, you can assume we know the number of times a user has done a workout in advance.

    Possible solutions:

        - Sphinx: use Sphinx to implement full text search, and do all the querying and sorting in MySQL. This seems promising (and there's a Symfony plugin!) but I don't know much about it.
        - Lucene: use Lucene to perform full text search and put the users' completions into the query, as is suggested in this Stack Overflow thread. Alternatively, use Lucene to retrieve the results and then reorder them in PHP. However, both approaches seem clunky and potentially unscalable, as a user may have completed hundreds of workouts.
        - MySQL: no native full text support (InnoDB), so we'd have to use LIKE or REGEX, which isn't scalable.


  • Determining where in the code an error came from - iPhone

    - by Robert Eisinger
    I'm used to Java programming, where an error is thrown and it tells you at what line the error was thrown and from which file. But with Objective-C in Xcode, I can never tell where the error comes from. How can I figure out where the error came from? Here is an example of a crash error:

        2011-01-04 10:36:31.645 TestGA[69958:207] *** Terminating app due to uncaught exception 'NSRangeException', reason: '*** -[NSMutableArray objectAtIndex:]: index 0 beyond bounds for empty array'
        *** Call stack at first throw:
        (
            0   CoreFoundation      0x01121be9 __exceptionPreprocess + 185
            1   libobjc.A.dylib     0x012765c2 objc_exception_throw + 47
            2   CoreFoundation      0x011176e5 -[__NSArrayM objectAtIndex:] + 261
            3   TestGA              0x000548d8 -[S7GraphView drawRect:] + 5763
            4   UIKit               0x003e16eb -[UIView(CALayerDelegate) drawLayer:inContext:] + 426
            5   QuartzCore          0x00ec89e9 -[CALayer drawInContext:] + 143
            6   QuartzCore          0x00ec85ef _ZL16backing_callbackP9CGContextPv + 85
            7   QuartzCore          0x00ec7dea CABackingStoreUpdate + 2246
            8   QuartzCore          0x00ec7134 -[CALayer _display] + 1085
            9   QuartzCore          0x00ec6be4 CALayerDisplayIfNeeded + 231
            10  QuartzCore          0x00eb938b _ZN2CA7Context18commit_transactionEPNS_11TransactionE + 325
            11  QuartzCore          0x00eb90d0 _ZN2CA11Transaction6commitEv + 292
            12  QuartzCore          0x00ee97d5 _ZN2CA11Transaction17observer_callbackEP19__CFRunLoopObservermPv + 99
            13  CoreFoundation      0x01102fbb __CFRUNLOOP_IS_CALLING_OUT_TO_AN_OBSERVER_CALLBACK_FUNCTION__ + 27
            14  CoreFoundation      0x010980e7 __CFRunLoopDoObservers + 295
            15  CoreFoundation      0x01060bd7 __CFRunLoopRun + 1575
            16  CoreFoundation      0x01060240 CFRunLoopRunSpecific + 208
            17  CoreFoundation      0x01060161 CFRunLoopRunInMode + 97
            18  GraphicsServices    0x01932268 GSEventRunModal + 217
            19  GraphicsServices    0x0193232d GSEventRun + 115
            20  UIKit               0x003b842e UIApplicationMain + 1160
            21  TestGA              0x00001cd8 main + 102
            22  TestGA              0x00001c69 start + 53
            23  ???                 0x00000001 0x0 + 1

    So, from looking at this, where is the error coming from, and from which class is it coming?


  • Segmentation fault on returning from main (very short and simple code, no arrays or pointers)

    - by Gábor Kovács
    I've been wondering why the following trivial code produces a segmentation fault when returning from main():

        // Produces "Error while dumping state (probably corrupted stack); Segmentation fault"
        #include <iostream>
        #include <fstream>
        #include <vector>
        using namespace std;

        class Test
        {
            vector<int> numbers;
        };

        int main()
        {
            Test a;
            ifstream infile;
            cout << "Last statement..." << endl; // this gets executed
            return 0;
        }

    Interestingly, 1) if only one of the two variables is declared, I don't get the error, 2) if I declare a vector variable instead of an object with a vector member, everything's fine, and 3) if I declare an ofstream instead of an ifstream, again, everything works fine. Something appears to be wrong with this specific combination... Could this be a compiler bug? I use gcc version 3.4.4 with Cygwin. Thanks for the tips in advance. Gábor


  • What version-control system is most trivial to set up and use for toy projects?

    - by Norman Ramsey
    I teach the third required intro course in a CS department. One of my homework assignments asks students to speed up code they have written for a previous assignment. Factor-of-ten speedups are routine; factors of 100 or 1000 are not unheard of. (For a factor-of-1000 speedup you have to have made rookie mistakes with malloc().) Programs are improved by a sequence of small changes. I ask students to record and describe each change and the resulting improvement.

    While you're improving a program it is also possible to break it. Wouldn't it be nice to back out? You can see where I'm going with this: my students would benefit enormously from version control. But there are some caveats:

        - Our computing environment is locked down. Anything that depends on a central repository is suspect.
        - Our students are incredibly overloaded. Not just classes but jobs, sports, music, you name it. For them to use a new tool it has to be incredibly easy and have obvious benefits.
        - Our students do most work in pairs. Getting bits back and forth between accounts is problematic. Could this problem also be solved by distributed version control?
        - Complexity is the enemy. I know setting up a CVS repository is too baffling - I myself still have trouble because I only do it once a year. I'm told SVN is even harder.

    Here are my comments on existing systems:

        - I think central version control (CVS or SVN) is ruled out because our students don't have the administrative privileges needed to make a repository that they can share with one other student. (We are stuck with Unix file permissions.) Also, setup on CVS or SVN is too hard.
        - darcs is very easy to set up, but it's not obvious how you share things. darcs send (to send patches by email) seems promising, but it's not clear how to set it up.
        - The introductory documentation for git is not for beginners. Like CVS setup, it's something I myself have trouble with.

    I'm soliciting suggestions for what source control to use with beginning students. I suspect we can find resources to put a thin veneer over an existing system and to simplify existing documentation. We probably don't have resources to write new documentation.

    So, what's really easy to set up, commit, revert, and share changes with a partner, but does not have to be easy to merge or to work at scale? A key constraint is that programming pairs have to be able to share work with each other and only each other, and pairs change every week. Our infrastructure is Linux, Solaris, and Windows with a NetApp filer. I doubt my IT staff wants to create a Unix group for each pair of students. Is there an easier solution I've overlooked?

    (Thanks for the accepted answer, which beats the others on account of its excellent reference to Git Magic as well as the helpful comments.)

