Search Results

Search found 22300 results on 892 pages for 'half bit'.


  • Calling a non-activeX DLL in a VB6 application in Vista/Win7

    - by user1490330
    I have a VB6 app that utilizes a non-ActiveX DLL (non-registering). It's declared via the classic Public Declare Function "Function Name" Lib "Library.DLL" syntax. On my dev machine (XP) it works fine, but when I deploy to Vista or Win7 I'm constantly greeted with a Run Time Error 48 - File Not Found for the DLL in question. I have tried copying that DLL to every directory I can think of, including every environment path on the test machine and the app path too. These are all 32-bit test environments so it's not a SysWow64 issue. Possibly throwing a wrench into the mix is the fact that the application in question is an Outlook COM Addin. I managed to install VB6 on Win7 and was able to run a tiny sample app that utilizes this DLL (outside of the Outlook process), so I know it works PROVIDED the DLL is located in the App path. If I call App.Path from my DLL when I run it on the test environment it shows, unsurprisingly, my installation directory; however, the DLL is there. I tried turning off UAC. I tried making the App.Path directory permissions open to everyone, still no dice.

    Read the article

  • Can't understand sessions in Rails

    - by ciss
    Hello everyone. Please don't be hard on me for my misunderstanding; sessions are very new to me, and I have some problems. I have read a lot of information about sessions, and especially Rails sessions, but it hasn't given me the right picture of how they work. Did I understand correctly: when a user sends a (GET) request to the server, the server creates a new session (and stores it in a file on the hard drive, keyed by a session id), and the session id is a randomly generated number? So, the server creates a new session (and stores the session on disk), and after this the server sends the answer back to the client and sets session_id in the cookies? I debugged some params and got these results: debug(session): {:_csrf_token=>"jeONIfNxFmnpDn/xt6I0icNK1m3EB3CzT9KMntNk7KU=", :session_id=>"06c5628155efaa6446582c491499af6d", "flash"=>{}} debug(cookies): {"remember_user_token"=>"1::3GFRFyXb83lffzwPDPQd", "_blog_session"=>"BAh7CDoQX2NzcmZfdG9rZW4iMWplT05JZk54Rm1ucERuL3h0NkkwaWNOSzFtM0VCM0N6VDlLTW50Tms3S1U9Og9zZXNzaW9uX2lkIiUwNmM1NjI4MTU1ZWZhYTY0NDY1ODJjNDkxNDk5YWY2ZCIKZmxhc2hJQzonQWN0aW9uQ29udHJvbGxlcjo6Rmxhc2g6OkZsYXNoSGFzaHsABjoKQHVzZWR7AA==--348c88b594e98f4bf6389d94383134fbe9b03095"} I know that _csrf_token helps prevent CSRF, and session_id is the id of the session, which is stored on the hard drive (by default), but what is _blog_session in the cookies? Also, remember_user_token contains my id (1::*), but what is the second part? Sorry for these basic questions; I know I could easily use one of the nice auth plugins (authlogic/clearance/devise), but I want to fully understand sessions. Thank you. (Also sorry for my English, it is not my native language.)
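    A note on what _blog_session is: it is the Rails cookie session store, i.e. the serialized session itself, Base64-encoded and signed so the client cannot tamper with it; the part after "--" is an HMAC digest of the payload. The exact encoding varies by Rails version, so treat the following only as the shape of the idea, sketched in Python with pickle standing in for Ruby's Marshal and an illustrative secret:

      import base64, hashlib, hmac, pickle

      SECRET = b"app-secret"  # illustrative; Rails reads the real secret from the app config

      def write_cookie(session):
          payload = base64.b64encode(pickle.dumps(session))             # serialize, then encode
          digest = hmac.new(SECRET, payload, hashlib.sha1).hexdigest()  # sign the payload
          return payload + b"--" + digest.encode()                      # "<base64>--<signature>"

      def read_cookie(cookie):
          payload, digest = cookie.rsplit(b"--", 1)
          expected = hmac.new(SECRET, payload, hashlib.sha1).hexdigest().encode()
          if not hmac.compare_digest(digest, expected):
              raise ValueError("session cookie was tampered with")
          return pickle.loads(base64.b64decode(payload))

      cookie = write_cookie({"session_id": "06c5628155efaa6446582c491499af6d", "flash": {}})
      print(read_cookie(cookie))  # round-trips back to the original session dict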

    Read the article

  • Jquery toggle event is messing with checkbox value

    - by John McCollum
    Hi all, I'm using jQuery's toggle event to do some stuff when a user clicks a checkbox, like this: $('input#myId').toggle( function(){ //do stuff }, function(){ //do other stuff } ); The problem is that the checkbox isn't being ticked when I click on it. (All the stuff I've put into the toggle event is working properly.) I've tried the following: $('input#myId').attr('checked', 'checked'); and $(this).attr('checked', 'checked'); and even simply return true; But nothing is working. Can anyone tell me where I'm going wrong? Edit - thanks to all who replied. Dreas' answer very nearly worked for me, except for the part that checked the attribute. This works perfectly (although it's a bit hacky): $('input#myInput').change(function () { if(!$(this).hasClass("checked")) { //do stuff if the checkbox isn't checked $(this).addClass("checked"); return; } //do stuff if the checkbox is checked $(this).removeClass('checked'); }); Thanks again to all who replied.

    Read the article

  • How to manage maintenance/bug-fix branches in Subversion when third-party installers are involved?

    - by Mike Spross
    We have a suite of related products written in VB6, with some C# and VB.NET projects, and all the source is kept in a single Subversion repository. We haven't been using branches in Subversion (although we do tag releases now), and simply do all development in trunk, creating new releases when the trunk is stable enough. This causes no end of grief when we release a new version, issues are found with it, and we have already begun working on new features or major changes to the trunk. In the past, we would address this in one of two ways, depending on the severity of the issues and how stable we thought the trunk was: Hurry to stabilize the trunk, fix the issues, and then release a maintenance update based on the HEAD revision, but this had the side effect of releases that fixed the bugs but introduced new issues because of half-finished features or bugfixes that were in trunk. Make customers wait until the next official release, which is usually a few months. We want to change our policies to better deal with this situation. I was considering creating a "maintenance branch" in Subversion whenever I tag an official release. Then, new development would continue in trunk, and I can periodically merge specific fixes from trunk into the maintenance branch, and create a maintenance release when enough fixes are accumulated, while we continue to work on the next major update in parallel. I know we could also have a more stable trunk and create a branch for new updates instead, but keeping current development in trunk seems simpler to me. The major problem is that while we can easily branch the source code from a release tag and recompile it to get the binaries for that release, I'm not sure how to handle the setup and installer projects. We use QSetup to create all of our setup programs, and right now when we need to modify a setup project, we just edit the project file in-place (all the setup projects and any dependencies that we don't compile ourselves are stored on a separate server, and we make sure to always compile the setup projects on that machine only). However, since we may add or remove files to the setup as our code changes, there is no guarantee that today's setup projects will work with yesterday's source code. I was going to put all the QSetup projects in Subversion to deal with this, but I see some problems with this approach. I want the creation of setup programs to be as automated as possible, and at the very least, I want a separate build machine where I can build the release that I want (grabbing the code from Subversion first), grab the setup project for that release from Subversion, recompile the setup, and then copy the setup to another place on the network for QA testing and eventual release to customers. However, when someone needs to change a setup project (to add a new dependency that trunk now requires or to make other changes), there is a problem. If they treat it like a source file and check it out on their own machine to edit it, they won't be able to add files to the project unless they first copy the files they need to add to the build machine (so they are available to other developers), then copy all the other dependencies from the build machine to their machine, making sure to match the folder structure exactly. The issue here is that QSetup uses absolute paths for any files added to a setup project. 
However, this means installing a bunch of setup dependencies onto development machines, which seems messy (and which could destabilize the development environment if someone accidentally runs the setup project on their machine). Also, how do we manage third-party dependencies? For example, if the current maintenance branch used MSXML 3.0 and the trunk now requires MSXML 4.0, we can't go back and create a maintenance release if we have already replaced the MSXML library on the build machine with the latest version (assuming both versions have the same filename). The only solution I can think of is to either put all the third-party dependencies in Subversion along with the source code, or to make sure we put different library versions in separate folders (i.e. C:\Setup\Dependencies\MSXML\v3.0 and C:\Setup\Dependencies\MSXML\v4.0). Is one way "better" or more common than the other? Are there any best practices for dealing with this situation? Basically, if we release v2.0 of our software, we want to be able to release v2.0.1, v2.0.2, and v2.0.3 while we work on v2.1, but the whole setup/installation project and setup dependency issue is making this more complicated than the typical "just create a branch in Subversion and recompile as needed" answer.

    Read the article

  • Open closed principle problem

    - by Marcus
    Hi, I'm trying to apply OCP to a code snippet I have that in its current state is really smelly, but I feel I'm not getting all the way to the end. Current code: public abstract class SomeObject {} public class SpecificObject1 : SomeObject {} public class SpecificObject2 : SomeObject {} // Smelly code public class Model { public void Store(SomeObject someObject) { if (someObject is SpecificObject1) {} else if (someObject is SpecificObject2) {} } } That is really ugly, so my new approach looks like this: // Not so smelly code public class Model { public void Store(SomeObject someObject) { throw new Exception("Not allowed!"); } public void Store(SpecificObject1 someObject) {} public void Store(SpecificObject2 someObject) {} } When a new SomeObject type comes along I must implement how that specific object is stored, and this will break OCP because I need to alter the Model class. Moving the store logic into SomeObject also feels wrong because then I would violate SRP (?): in this case SomeObject is almost like a DTO, and it's not its responsibility to know how to store itself. If a new implementation of SomeObject comes along whose store implementation is missing, I will get a runtime error due to the exception in the Store method of the Model class, which also feels like a code smell. This matters because the calling code works with an IEnumerable<SomeObject> sequence; I will not know the specific types of the sequence objects. I can't seem to grasp the OCP concept. Does anyone have any concrete examples or links that are a bit more than just some Car/Fruit example?
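    One common way to keep Model closed for modification, sketched here in Python rather than C# purely to show the shape, is to move the per-type store logic into a registry that is filled in from outside Model; adding a new SomeObject type then means registering a new handler, not editing Model. This is an illustration of the idea, not the poster's code:

      class Model:
          _handlers = {}  # maps a type to its store function; populated from outside Model

          @classmethod
          def register(cls, obj_type):
              def decorator(fn):
                  cls._handlers[obj_type] = fn
                  return fn
              return decorator

          def store(self, obj):
              handler = self._handlers.get(type(obj))
              if handler is None:
                  raise TypeError("no store handler registered for %s" % type(obj).__name__)
              handler(self, obj)

      class SpecificObject1: pass
      class SpecificObject2: pass

      @Model.register(SpecificObject1)
      def store_specific1(model, obj):
          print("storing SpecificObject1")  # persistence details for this type go here

      @Model.register(SpecificObject2)
      def store_specific2(model, obj):
          print("storing SpecificObject2")

      Model().store(SpecificObject1())

    The unknown-type failure is still a runtime error, but it now lives in one place, and the DTO-like objects never learn how to store themselves, so SRP is preserved.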

    Read the article

  • Why are (almost) all the on-line games written in ActionScript (Flash) not Java?

    - by MasterPeter
    I absolutely love good defender games (e.g. Gemcraft, Protector: Reclaiming the Throne) as they can be intellectually quite challenging; it's like playing chess but with a little less thinking and a bit more action. Sadly, there are not that many good ones out there, so I thought I would create one myself and share it with the rest of the world by making it available on-line. I have never worked with ActionScript, but when it comes to on-line games, this is the main choice. I have tried to find a decent 2D game in the form of a Java applet but to no avail. Why is this so? I could write the game, most comfortably, in Delphi for Win32, but then people would need to download the executable, which could deter some from downloading it, and it would also only work on Windows. I am also familiar with Java, having worked with it for the last four years or so, although I don't have much experience with games programming. Should I not be deterred by the fact that all online games are written in Flash and create my defender game as a Java applet, or should I consider learning ActionScript and games development for the ActionScript Virtual Machine? (AS3 looks very much like Java... but still, it's an entirely new technology to me and I might never use it professionally.) Could you, please, just answer the question in the title? Why Flash, not Java applets? Is it only 'politics'?

    Read the article

  • Git is not using the first editor in my $PATH

    - by GuillaumeA
    I am using OS X 10.8, and I used brew to install a more recent version of emacs than the one shipped with OS X. The newer emacs binary is installed in /usr/local/bin (24.2.1), and the old "shipped-with-osx" one in /usr/bin (22.1.1). I updated my $PATH env variable by prepending /usr/local/bin to it. It works fine in my shell (i.e. typing emacs runs the 24.2.1 version), but when git opens the editor, the emacs version is 22.1.1. Isn't git supposed to use $PATH to find the editor I want to use? Additional information: $ type -a emacs emacs is /usr/local/bin/emacs emacs is /usr/bin/emacs emacs is /usr/local/bin/emacs $ env PATH=/usr/local/bin:/usr/local/sbin:/usr/bin:/bin:/usr/sbin:/sbin:/usr/local/bin SHELL=/bin/zsh PAGER=most EDITOR=emacs -nw _=/usr/bin/env Please note that I'd prefer not to set the absolute path of my editor directly in my git conf, as I use this conf across multiple systems. EDIT: Here's a bit of my .zshrc: # Mac OS X if [ `uname` = "Darwin" ]; then # Brew binaries PATH="/usr/local/bin":"/usr/local/sbin":$PATH else # Everyone else (Linux) # snip fi So, yes, I could add a line export EDITOR='/usr/local/bin/emacs -nw' in the first if, but I'd like to understand why git is not using my PATH variable :)

    Read the article

  • Help dealing with data dependency between two registration forms

    - by franko75
    I have a tricky issue here with the registration of both a user and his/her pet. Both the user and the pet are treated as separate entities and both require separate registration forms. However, the user's pet has to be linked to the user via a foreign key in the database. The process is basically that when a new user joins the site, firstly they register their pet, then they register themselves. The reason for this order is to check their pet's eligibility for the site (there are some criteria to be met) first, instead of getting the user to sign up only to then find out their pet is ineligible. It is this ordering of the form submissions which is causing me a bit of a headache, as follows... The site is being developed with an MVC framework, and the user registration process is managed via a method in the User_form controller, while the pet registration process is managed via a method in the Pet_form controller. The pet registration form happens first, and the pet data can be saved without the owner_id at this stage, with the user id possibly being added (e.g. by retrieving the pet's id from the session) following user registration. However, doing it this way could potentially result in redundant data, where pet records would be created in the database, but if the user doesn't actually register themselves too, then the pets will be ownerless records in the DB. The other option is to serialize the new pet's data at the pet registration stage and not save it to the DB until the user fills out their registration form. Once the user is created, I can pass the serialised data AND the owner_id to a method in the Pet model which can update the DB. However, I also need to set the newly created $pet to $this->pet, which I then access for a sequence of other related forms. Should I just set the session variable in the model method? Then in the Pet controller constructor, do a check for a pet stored in the session and, if yes, assign it to $this->pet... If this makes any sense to anybody and you have some advice, I'd be grateful to hear it!

    Read the article

  • PHP modifying and combining array

    - by Industrial
    Hi everyone, I have a bit of an array headache going on. The function does what I want, but since I am not yet too well acquainted with PHP's array/looping functions, my question is whether there's any part of this function that could be improved from a performance perspective. I tried to be as complete as possible in my descriptions of each stage of the function, which, briefly described, prefixes all keys in an array, fills up any empty/non-valid keys with '' and removes the prefixes before returning the array: $var = myFunction ( array('key1', 'key2', 'key3', '111') ); function myFunction ($keys) { $prefix = 'prefix_'; $keyCount = count($keys); // Prefix each key and remove old keys for($i=0;$i<$keyCount; $i++){ $keys[] = $prefix.$keys[$i]; unset($keys[$i]); } // output: array('prefix_key1', 'prefix_key2', 'prefix_key3', 'prefix_111') // Get all keys from memcached. Only returns valid keys $items = $this->memcache->get($keys); // output: array('prefix_key1' => 'value1', 'prefix_key2' => 'value2', 'prefix_key3' => 'value3') // note: key 111 was not found in memcache. // Fill up any keys that are not valid/empty from memcache $return = $items + array_fill_keys($keys, ''); // output: array('prefix_key1' => 'value1', 'prefix_key2' => 'value2', 'prefix_key3' => 'value3', 'prefix_111' => '') // Remove the prefixes from each result before returning the array to the application foreach ($return as $k => $v) { $expl = explode($prefix, $k); $return[$expl[1]] = $v; unset($return[$k]); } // output: array('key1' => 'value1', 'key2' => 'value2', 'key3' => 'value3', '111' => '') return $return; } Thanks a lot!
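    The same prefix/fetch/fill/strip pipeline can be written without the index bookkeeping; a compact sketch of the equivalent logic in Python, where cache.get_multi is a hypothetical stand-in for $this->memcache->get:

      def fetch_with_prefix(keys, cache, prefix="prefix_"):
          prefixed = [prefix + k for k in keys]       # 'key1' -> 'prefix_key1'
          found = cache.get_multi(prefixed)           # only returns the keys that exist
          # fill misses with '' and strip the prefix again, all in one pass
          return {k: found.get(prefix + k, "") for k in keys}

    In PHP terms that suggests replacing the first for loop with a single array_map call and building the final result keyed by the original names in one pass, avoiding the unset() calls entirely.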

    Read the article

  • IndexOutOfRangeException when a stream is a multiple of the buffer size

    - by dnord
    I don't have a lot of experience with streams and buffers, but I'm having to do it for a project, and I'm stuck on an exception being thrown when the stream I'm reading is a multiple of the buffer size I've chosen. Let me show you: My code starts by reading bufferSize (100, let's say) bytes from the stream: numberOfBytesRead = DataReader.GetBytes(0, index, output, 0, bufferSize); Then I loop in a while loop: while (numberOfBytesRead == bufferSize) { BufferWriter.Write(output); BufferWriter.Flush(); index += bufferSize; numberOfBytesRead = DataReader.GetBytes(0, index, output, 0, bufferSize); } ... and, once we get to a non-bufferSize read, we know we've hit the end of the stream and can move on. But if the bufferSize is 100, and the stream is 200, we'll read positions 0-99, 100-199, and then the attempt to read 200-299 errors out. I'd like it if it returned 0, but it throws an error. What I'm doing to handle that is, well, a try-catch: catch (System.IndexOutOfRangeException) numberOfBytesRead = 0; ...which ends the loop, and successfully finishes the thing, but we all know I don't want to control code flow with error handling. Is there a better (more standard?) way to handle stream reading when the stream length is unknown? This seems like a small wrinkle in a fairly reasonable strategy for reading streams, but I just don't know if I've got it wrong or what. The specifics of this (which I've cleaned up a little bit for posting) are a MySqlDataReader hitting a LARGEBLOB column. It's working whenever the buffer is larger than the number of returned bytes, or when the number of returned bytes is not a multiple of bufferSize, because in that case we don't throw an IndexOutOfRangeException.
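    For comparison, the usual chunked-read loop terminates on an empty read rather than on a short one, so a stream whose length is an exact multiple of the chunk size simply ends with one final read that returns nothing. A sketch of that shape in Python, with plain file objects standing in for the reader and writer:

      def copy_stream(src, dst, chunk_size=100):
          total = 0
          while True:
              chunk = src.read(chunk_size)   # returns b"" once the stream is exhausted
              if not chunk:
                  break                      # an exact multiple lands here cleanly
              dst.write(chunk)
              total += len(chunk)
          return total

    Whether GetBytes can be made to behave like that (e.g. by asking for the total length up front and never reading past it) is a separate question, but restructuring the loop so the exit condition is "nothing left" rather than "short read" is the standard pattern.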

    Read the article

  • PHP in_array() can't even match a single character. Strict is set to true.

    - by solefald
    I've seen a million of these threads here and read through every single one, plus spent some serious time Googling. Nothing complicated. All I have to do is check if a single character in a loop matches my alphabet array. print_r($alphabet); // all 26 letters Array ( [0] => a [1] => b [2] => c ... [23] => x [24] => y [25] => z ) print_r($emptyLetters); // dynamic array. Array ( [0] => b [1] => s ) foreach($alphabet as $letter) { echo $letter . '<br />'; // Correctly prints out every letter from $alphabet. if(in_array($letter, $emptyLetters, true)): // $strict is set // do something endif; } What the hell is going on??? I do not understand what I am doing wrong... I tried every combination and option possible, but for some reason even array_search() isn't working...
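    One thing worth ruling out before blaming in_array(): values that merely look equal, for example trailing whitespace or newlines picked up when $emptyLetters was built, will fail a strict comparison. A quick way to expose that kind of problem, sketched in Python because the idea is language-neutral (the junk values are hypothetical):

      empty_letters = ["b ", "s\n"]              # imagine these came from user input or a file
      print("b" in empty_letters)                # False: "b " is not "b"
      print([repr(x) for x in empty_letters])    # repr() makes the hidden characters visible
      cleaned = [x.strip() for x in empty_letters]
      print("b" in cleaned)                      # True once the values are trimmed

    The PHP equivalents would be var_dump() and strlen() to inspect the stored values, and trim() before the comparison.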

    Read the article

  • Maintenance tool for Application Database

    - by Thierry
    Hello! Does anybody know of a good tool which helps with maintaining the database of an application? I'm working on an application which uses a database (Microsoft SQL Server). When a development requires changing something in the database (e.g., structure, data migration...), we create a script (a Transact-SQL script) and add it to our revision control tool (Subversion - that tool also contains our code). Each script must add a line to a log table to keep a trace of all the scripts that have been run against a database. In order to build a database for our application, one needs to run all the scripts ordered by their creation date. I'm not really happy with this technique, notably because it makes application migration a bit hard. If we want to install a new version of the application somewhere, e.g., migrate from version 1.3 to 2.1, we must get all the scripts between these two versions, then run them and ensure that everything is done in a transaction... Of course we could build home-made tools to help, but I wonder if some tools already exist to do that kind of job.
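    The mechanism described above (a log table plus scripts applied in order) is small enough to automate; a minimal sketch of such a runner in Python, with SQLite standing in for SQL Server purely to show the shape:

      import pathlib, sqlite3

      def apply_pending_scripts(db_path, script_dir):
          conn = sqlite3.connect(db_path)
          conn.execute("CREATE TABLE IF NOT EXISTS applied_script ("
                       "name TEXT PRIMARY KEY, applied_at TEXT DEFAULT CURRENT_TIMESTAMP)")
          already = {name for (name,) in conn.execute("SELECT name FROM applied_script")}
          # file names carry the ordering, e.g. 2010-05-01_add_invoice_table.sql
          for script in sorted(pathlib.Path(script_dir).glob("*.sql")):
              if script.name in already:
                  continue
              conn.executescript(script.read_text())
              conn.execute("INSERT INTO applied_script (name) VALUES (?)", (script.name,))
              conn.commit()
          conn.close()

    Migrating from 1.3 to 2.1 then just means pointing the runner at the target database: only the scripts it has not yet logged are applied.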

    Read the article

  • Weird compile-time behavior when trying to use primitive type in generics

    - by polygenelubricants
    import java.lang.reflect.Array; public class PrimitiveArrayGeneric { static <T> T[] genericArrayNewInstance(Class<T> componentType) { return (T[]) Array.newInstance(componentType, 0); } public static void main(String args[]) { int[] intArray; Integer[] integerArray; intArray = (int[]) Array.newInstance(int.class, 0); // Okay! integerArray = genericArrayNewInstance(Integer.class); // Okay! intArray = genericArrayNewInstance(int.class); // Compile time error: // cannot convert from Integer[] to int[] integerArray = genericArrayNewInstance(int.class); // Run time error: // ClassCastException: [I cannot be cast to [Ljava.lang.Object; } } I'm trying to fully understand how generics work in Java. Things get a bit weird for me in the 3rd assignment in the above snippet: the compiler is complaining that Integer[] cannot be converted to int[]. The statement is 100% true, of course, but I'm wondering WHY the compiler is making this complaint. If you comment out that line, and follow the compiler's "suggestion" as in the 4th assignment, the compiler is actually satisfied!!! NOW the code compiles just fine! Which is crazy, of course, since, as the run-time behavior suggests, int[] cannot be converted to Object[] (which is what T[] is type-erased into at run time). So my question is: why is the compiler "suggesting" that I assign to Integer[] instead for the 3rd assignment? How does the compiler reason to arrive at that (erroneous!) conclusion?

    Read the article

  • How do I implement a collection in Scala 2.8?

    - by Simon Reinhardt
    In trying to write an API I'm struggling with Scala's collections in 2.8(.0-beta1). Basically what I need is to write something that: adds functionality to immutable sets of a certain type where all methods like filter and map return a collection of the same type without having to override everything (which is why I went for 2.8 in the first place) where all collections you gain through those methods are constructed with the same parameters the original collection had (similar to how SortedSet hands through an ordering via implicits) which is still a trait in itself, independent of any set implementations. Additionally, I want to define a default implementation, for example based on a HashSet. The companion object of the trait might use this default implementation. I'm not sure yet if I need the full power of builder factories to map my collection type to other collection types. I read the paper on the redesign of the collections API, but it seems like things have changed a bit since then and I'm missing some details in there. I've also dug through the collections source code, but I'm not sure it's very consistent yet. Ideally what I'd like to see is either a hands-on tutorial that tells me step-by-step just the bits that I need, or an extensive description of all the details so I can judge for myself which bits I need. I liked the chapter on object equality in "Programming in Scala". :-) But I appreciate any pointers to documentation or examples that help me understand the new collections design better.

    Read the article

  • Sorted queue with dropping out elements

    - by ffriend
    I have a list of jobs and a queue of workers waiting for these jobs. All the jobs are the same, but the workers are different and sorted by their ability to perform the job. That is, the first person can do this job best of all, the second does it just a little bit worse, and so on. A job is always assigned to the person with the highest skill from those who are free at that moment. When a person is assigned a job, he drops out of the queue for some time. But when he is done, he gets back to his position. So, for example, at some moment in time the worker queue looks like: [x, x, .83, x, .7, .63, .55, .54, .48, ...] where x's stand for missing workers and the numbers show the skill level of the remaining workers. When there's a new job, it is assigned to the 3rd worker as the one with the highest skill among the available workers. So the next moment the queue looks like: [x, x, x, x, .7, .63, .55, .54, .48, ...] Let's say that at this moment worker #2 finishes his job and gets back to the list: [x, .91, x, x, .7, .63, .55, .54, .48, ...] I hope the process is completely clear now. My question is what algorithm and data structure to use to implement quick search for and deletion of a worker, and insertion back to his position. For the moment the best approach I can see is to use a Fibonacci heap, which has amortized O(log n) for deleting the minimal element (assigning a job and deleting a worker from the queue) and O(1) for inserting him back, which is pretty good. But is there an even better algorithm/data structure that takes into account the fact that the elements are already sorted and only drop out of the queue from time to time?
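    For reference, because the skill values never change, a plain binary heap keyed on skill already gives O(log n) for both taking the best free worker and putting one back; a sketch in Python (heapq is a min-heap, so skills are negated):

      import heapq

      class WorkerPool:
          def __init__(self, skills):                  # skills[i] = skill of worker i
              self.free = [(-s, i) for i, s in enumerate(skills)]
              heapq.heapify(self.free)

          def assign(self):                            # O(log n): most skilled free worker
              neg_skill, worker = heapq.heappop(self.free)
              return worker

          def release(self, worker, skill):            # O(log n): worker rejoins the pool
              heapq.heappush(self.free, (-skill, worker))

      pool = WorkerPool([0.95, 0.91, 0.83, 0.77, 0.70])
      w = pool.assign()        # worker 0, the most skilled one currently free
      pool.release(w, 0.95)

    An alternative that leans on the fixed ordering is to keep one availability flag per position and ask for the lowest free index, which a balanced tree or an indexed bitset can also answer in O(log n).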

    Read the article

  • WPF C# Client/Server announcement system

    - by manemawanna
    I'm currently in the process of creating an announcement system at my place of work. The role of this system will be to replace all-users email, due to people misusing it and generally abusing the facility. The system will consist of: Web Portal: Will allow staff to enter any important announcements (this will be restricted via AD). SQL Server 2k5 DB: Will hold the announcements along with records of staff members and whether they've read the announcements etc. Front End: Created in WPF & C# and nearly complete; it will display the announcements to the users. Web Page: The client will contact this every so often, and it will return an XML file for the client to read. However, my boss has now shifted the goal posts and would like the announcements to appear to the user as soon as they are written to the database, rather than waiting on the client to contact the web page. So now I'm a bit unsure as to how to go about this. One idea I have is to create a small server application to monitor for new announcements and then contact the clients to inform them to approach the website for the information they need. But I'm just looking to see if there's a better or more efficient way to do this, or if someone else has a more appropriate idea or suggestion.

    Read the article

  • XML File as Excel file.

    - by FrustratedWithFormsDesigner
    I have a number of reports that I run against my database that need to eventually go to the end-users as Excel spreadsheets. Initially, I was creating text reports, but the steps to convert the text to a spreadsheet were a bit cumbersome. There were too many steps to import text into the spreadsheet, and multi-line text rows were imported as individual rows in Excel (which was incorrect). Currently, I am generating simple XML and saving the file with an ".xls" extension. This works better, but there is still the problem of Excel prompting the user with an XML import dialogue every time they open the file, and then having to save a new file if they add notes or change the layout of the file (which they almost certainly will be doing). Sample "xls" file: <?xml version="1.0" standalone="yes"?> <report_rows> <row> <NAME>Test Data</NAME> <COUNT>345</COUNT> </row> <!-- many more row elements... --> </report_rows> Is there any way to add markup to the file to hint to Excel how it should import and handle the file? Ideally, the end user should be able to open and save the file like any other spreadsheet they create directly from Excel. Is this even possible? UPDATE: We are running Office 2003 here. UPDATE: The XML is generated from a sqlplus script, no option to use C#/.NET here.
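    One approach that reportedly avoids the import dialogue in Excel 2003 is emitting the markup in Excel's own "XML Spreadsheet 2003" (SpreadsheetML) shape, including the mso-application processing instruction that tells Excel to open the file directly. A rough sketch, generated from Python here only to show the markup; the element names are from memory, so compare against a file saved as "XML Spreadsheet" from Excel itself:

      rows = [("Test Data", 345), ("More Data", 12)]   # stand-ins for the report rows
      parts = [
          '<?xml version="1.0"?>',
          '<?mso-application progid="Excel.Sheet"?>',  # the hint Excel looks for
          '<Workbook xmlns="urn:schemas-microsoft-com:office:spreadsheet"',
          '          xmlns:ss="urn:schemas-microsoft-com:office:spreadsheet">',
          ' <Worksheet ss:Name="Report">',
          '  <Table>',
      ]
      for name, count in rows:
          parts += [
              '   <Row>',
              '    <Cell><Data ss:Type="String">%s</Data></Cell>' % name,
              '    <Cell><Data ss:Type="Number">%d</Data></Cell>' % count,
              '   </Row>',
          ]
      parts += ['  </Table>', ' </Worksheet>', '</Workbook>']
      with open("report.xml", "w") as f:
          f.write("\n".join(parts))

    Since the real report comes from a sqlplus script, the same tags can be emitted straight from the SELECT statements; the Python above is only there to show the shape of the document.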

    Read the article

  • filling colors on a map - PHP

    - by jeremy
    I am trying to determine how to fill colors onto a map - such as the "Risk" board game map. I've done this before with HTML tables, by pulling an HTML color code from a SQL table and then just using it to fill the cell with the color I want. But for a non-square map, I'm not sure where to look. I have created a very simple two-color map - it's white with black borders. My desired result is having the 'regions' on the map shaded with a color, based on data in a SQL table (just like the "fill" button in Paint). This looks like what I need: http://php.net/manual/en/function.imagefilltoborder.php and now... how to define the borders... At the moment I have tried nothing, because the question was: how do I have PHP fill parts of an image? I have tried making an image in Paint, and then scratching my head wondering how to fill parts of it. Having stumbled upon a link, let me focus this a bit more: It appears that with imagefilltoborder I can put an image on my server, perhaps one that looks like a black and white version of the RISK map - black borders and white everything else. Some questions: Is it correct that the 'border' variable should use the color of my border (whatever value black is) so that the code can "see" where the border is? Is it correct that I'll just need to figure out X,Y coords to begin the fill? Does this work if I have 10 different spots to fill on the map? Can I use varying colors from code or pulled from SQL to assign different colors to those 10 spots, and use 10 different X,Y coords to get them all?
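    The imagefilltoborder idea generalizes exactly along the lines of the questions above: one seed coordinate inside each region, one fill color per region, and the black border color as the stop condition. A sketch of the same operation using Python/Pillow, with made-up seed points and colors standing in for the values that would come from SQL:

      from PIL import Image, ImageDraw

      region_fills = {                        # hypothetical: region -> ((seed x, y), RGB fill)
          "region_a": ((120, 80),  (200, 40, 40)),
          "region_b": ((310, 150), (40, 160, 220)),
      }

      img = Image.open("map.png").convert("RGB")   # the white map with black borders
      for name, (seed, color) in region_fills.items():
          # the fill spreads out from the seed until it hits border-colored pixels
          ImageDraw.floodfill(img, xy=seed, value=color, border=(0, 0, 0))
      img.save("map_filled.png")

    So, answering the bullet questions in spirit: yes, the border argument is the border color, yes, each region needs its own starting coordinate somewhere inside it, and the loop handles as many regions and colors as the data supplies.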

    Read the article

  • EF4 + STE: Reattaching via a WCF Service? Using a new objectcontext each and every time?

    - by Martin
    Hi there, I am planning to use WCF (not RIA) in conjunction with Entity Framework 4 and STEs (self-tracking entities). If I understand this correctly, my WCF service should return an entity or collection of entities (using List, for example, and not IQueryable) to the client (in my case Silverlight). The client can then change the entity or update it. At this point I believe it is self-tracking? This is where I get a bit confused, as there are a lot of reported problems with STEs not tracking... Anyway... Then, to update, I just need to send the entity back to my WCF service via another method to do the update. Should I be creating a new ObjectContext every time? In every method? If I am creating a new ObjectContext every time in every method of my WCF service, then don't I need to re-attach the STE to the ObjectContext? So basically this alone wouldn't work? using(var ctx = new MyContext()) { ctx.Orders.ApplyChanges(order); ctx.SaveChanges(); } Or should I be creating the ObjectContext once in the constructor of the WCF service, so that one call and every additional call using the same WCF instance uses the same ObjectContext? I could create and destroy the WCF service in each method call from the client - hence creating, in effect, a new ObjectContext each time. I understand that it isn't a good idea to keep the ObjectContext alive for very long. Any insight or information would be greatly appreciated. Thanks.

    Read the article

  • no instance of overloaded function getline c++

    - by Dave
    I'm a bit confused as to what I have wrong in my code that is causing this error. I have a function which loads the game settings, but it doesn't like my getline. Also, I should mention these are the headers I have included for it: #include <fstream> #include <cctype> #include <map> #include <iostream> #include <string> #include <algorithm> #include <vector> using namespace std; This is what I have: std::map<string, string> loadSettings(std::string file){ ifstream file(file); string line; std::map<string, string> config; while(std::getline(file, line)) { int pos = line.find('='); if(pos != string::npos) { string key = line.substr(0, pos); string value = line.substr(pos + 1); config[trim(key)] = trim(value); } } return (config); } The function is called like this from my main.cpp: //load settings for game std::map<string, string> config = loadSettings("settings.txt"); //load theme for game std::map<string, string> theme = loadSettings("theme.txt"); Where did I go wrong? Please help! The error: settings.h(61): error C2784: 'std::basic_istream<_Elem,_Traits> &std::getline(std::basic_istream<_Elem,_Traits> &&,std::basic_string<_Elem,_Traits,_Alloc> &)' : could not deduce template argument for 'std::basic_istream<_Elem,_Traits> &&' from 'std::string'

    Read the article

  • declarative_authorization permissions on roles

    - by William
    Hey all, I'm trying to add authorization to a rather large app that already exists, but I have to obfuscate the details a bit. Here's the background: In our app we have a number of roles that are hierarchical, roughly like this: BasicUser -> SuperUser -> Admin -> SuperAdmin For authorization each User model instance has an attribute 'role' which corresponds to the above. We have a RESTful controller "Users" that is namespaced under Backoffice. So in short it's Backoffice::UsersController. class Backoffice::UsersController < ApplicationController filter_access_to :all #... RESTful actions + some others end So here's the problem: We want users to be able to edit other users, but ONLY users who have a 'smaller' role than they themselves have. I've created the following in authorization_rules.rb authorization do role :basic_user do has_permission_on :backoffice_users, :to => :index end role :super_user do includes :basic_user has_permission_on :backoffice_users, :to => :edit do if_attribute :role => is_in { %w(basic_user) } end end role :admin do includes :super_user end role :super_admin do includes :admin end end And unfortunately that's as far as I got; the rule doesn't seem to get applied. If I comment the rule out, nobody can edit. If I leave the rule in, you can edit everybody. I've also tried a couple of variations on the if_attribute: if_attribute :role => is { 'basic_user' } if_attribute :role => 'basic_user' and they have the same effect. Does anybody have any suggestions?

    Read the article

  • How to keep Windows from paging a block of memory

    - by photo_tom
    We are working on a Vista/Windows 7 application that will be running in 64-bit mode using VS2008/C++. We will need to cache hundreds of 2-3 MB blobs of data in RAM for performance reasons, up to some memory limit. Our usage profile is such that we cannot read the data in fast enough if it is all on the disk. Cached memory usage will be larger than 1 GB. For this to work well, we need to ensure that Windows does not page this memory out, as that would defeat the purpose of why we are doing this. I've done a fair amount of research and cannot find documentation that states exactly how to do this. I've seen several references that imply memory-mapped files work this way. Is there an expert who can clarify this for me? I'm aware there are other programs that we could adapt to do this, for example splitting the blobs and loading them into memcache or in-memory databases, but they all have too many problems with performance or code complexity. Suggestions?
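    The Win32 calls usually cited for pinning pages are VirtualLock plus SetProcessWorkingSetSize (the default lock quota is small, so the working set has to be raised first); whether locking gigabytes of cache is wise is a separate question. A rough ctypes sketch of the two calls, in Python only for brevity since the same APIs would be called from C++, with illustrative sizes:

      import ctypes

      kernel32 = ctypes.WinDLL("kernel32", use_last_error=True)

      # Raise the working-set bounds so the lock below is allowed to succeed.
      kernel32.SetProcessWorkingSetSize(
          kernel32.GetCurrentProcess(),
          ctypes.c_size_t(1_500_000_000),   # minimum working set (illustrative)
          ctypes.c_size_t(2_000_000_000),   # maximum working set (illustrative)
      )

      blob = ctypes.create_string_buffer(2 * 1024 * 1024)   # one cached 2 MB blob
      if not kernel32.VirtualLock(blob, ctypes.c_size_t(ctypes.sizeof(blob))):
          raise ctypes.WinError(ctypes.get_last_error())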

    Read the article

  • ResXResourceWriter Chops off end of file

    - by Aaron Salazar
    I'm trying to create a .resx file from a dictionary of strings. Luckily the .Net framework has a ResXResourceWriter class. The only problem is that when I create the file using the code below, the last few lines of my generated resx file are missing. I checked my Dictionary and it contains all the string pairs that I expect. public void ExportToResx(Dictionary<string,string> resourceDictionary, string fileName) { var writer = new ResXResourceWriter(fileName); foreach (var resource in resourceDictionary) { writer.AddResource(resource.Key, resource.Value); } } Unfortunately, it is a little difficult to show the entire resx file since it has 2198 (should have 2222) lines but here is the last little bit: ... 2195 <data name="LangAlign_ReportIssue" xml:space="preserve"> 2196 <value>Report an Issue</value> 2197 </data> 2198 <data name="LangAlign_Return BTW, notice that the file cuts off right at the end of that "n" in "LangAlign_Return". The string should read "LangAlign_ReturnToWorkspace". The file should also end at line 2222.

    Read the article

  • JQuery with css3 keydown keyCode = 37 and 39

    - by rayrule
    I have tested both ways, jQuery animation and CSS3 transition, and CSS3 is a little bit faster. But I have a problem with the following code: $(document).keydown(function(e){ if (e.keyCode == 39) { var DocHeight = $(document).height(); $('.container').css("margin-top","-="+DocHeight) } }); If I hit keyCode 39 (right arrow) twice, then my transition goes into outer space. Does anyone have a solution for this? "Outer space" is maybe not the correct word, but the problem is: if I hit the arrow key twice, I'll get the last request; in other words, an animation is started, and another animation starts from a position that I don't want. Example: on hit #1, margin-top is at 0px and goes to 1024px. But when I hit it twice, the margin-top is at 23px, and it stops at 1047px. This is not what I want. It has to stop at 1024px. I hope so.

    Read the article

  • Memory management of objects returned by methods (iOS / Objective-C)

    - by iOSNewb
    I am learning Objective-C and iOS programming through the terrific iTunesU course posted by Stanford (http://www.stanford.edu/class/cs193p/cgi-bin/drupal/) Assignment 2 is to create a calculator with variable buttons. The chain of commands (e.g. 3+x-y) is stored in a NSMutableArray as "anExpression", and then we sub in random values for x and y based on an NSDictionary to get a solution. This part of the assignment is tripping me up: The final two [methods] “convert” anExpression to/from a property list: + (id)propertyListForExpression:(id)anExpression; + (id)expressionForPropertyList:(id)propertyList; You’ll remember from lecture that a property list is just any combination of NSArray, NSDictionary, NSString, NSNumber, etc., so why do we even need this method since anExpression is already a property list? (Since the expressions we build are NSMutableArrays that contain only NSString and NSNumber objects, they are, indeed, already property lists.) Well, because the caller of our API has no idea that anExpression is a property list. That’s an internal implementation detail we have chosen not to expose to callers. Even so, you may think, the implementation of these two methods is easy because anExpression is already a property list so we can just return the argument right back, right? Well, yes and no. The memory management on this one is a bit tricky. We’ll leave it up to you to figure out. Give it your best shot. Obviously, I am missing something with respect to memory management because I don't see why I can't just return the passed arguments right back. Thanks in advance for any answers!

    Read the article
