Search Results

Search found 16914 results on 677 pages for 'single threaded'.


  • NSTableView Drag and Drop not working

    - by macatomy
    Hi, I'm trying to set up very basic drag and drop for my NSTableView. The table view has a single column (with a custom cell). The column is bound to an NSArrayController, and the array controller's content array is bound to an NSArray on my controller object. The data displays fine in the table. I connected the dataSource and delegate outlets of the table view to my controller object, and then implemented these methods:

        - (BOOL)tableView:(NSTableView *)aTableView writeRowsWithIndexes:(NSIndexSet *)rowIndexes toPasteboard:(NSPasteboard *)pboard {
            NSLog(@"dragging");
            return YES;
        }

        - (NSDragOperation)tableView:(NSTableView *)tv validateDrop:(id <NSDraggingInfo>)info proposedRow:(NSInteger)row proposedDropOperation:(NSTableViewDropOperation)op {
            return NSDragOperationEvery;
        }

        - (BOOL)tableView:(NSTableView *)aTableView acceptDrop:(id <NSDraggingInfo>)info row:(NSInteger)row dropOperation:(NSTableViewDropOperation)operation {
            return YES;
        }

    I also registered the drag types in -awakeFromNib:

        #define MyDragType @"MyDragType"

        - (void)awakeFromNib {
            [super awakeFromNib];
            [_myTable registerForDraggedTypes:[NSArray arrayWithObjects:MyDragType, nil]];
        }

    The problem is that the -tableView:writeRowsWithIndexes:toPasteboard: method is never called. I've looked at a bunch of examples and I can't figure out anything I'm doing wrong. Could the problem be that I'm using a custom cell? Is there something I'm supposed to override in the cell subclass to enable this functionality?

    EDIT: Confirmed. Switching the custom cell for a regular NSTextFieldCell made dragging work. Now, how do I make drag and drop work with my custom cell?

    Read the article

  • postgresql table for storing automation test results

    - by Martin
    I am building an automation test suite which runs on multiple machines, all reporting their status to a PostgreSQL database. We will run a number of automated tests, storing the following information for each:

      - test ID (a GUID)
      - test name
      - test description
      - status (running, done, waiting to be run)
      - progress (%)
      - start time of test
      - end time of test
      - test result
      - latest screenshot of the running test (updated every 30 seconds)

    The number of tests isn't huge (say a few thousand), and each machine (say, 50 of them) has a service which checks the database and figures out if it's time to start a new automated test on that machine. How should I organize my SQL table to store all the information? Is a single table with a column per attribute the way to go? If in the future I need to add attributes but want to keep compatibility with the old database format (i.e. I may not want to delete and create a new table with more columns), how should I proceed? Should the new attributes just be in a different table?

    I'm also thinking of replicating the database. In case of failure, I don't mind if the latest screenshots aren't backed up on the slave database. Should I just store the screenshots in their own table to simplify the replication? Thanks!
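
    At this scale a single table with one column per attribute is a reasonable starting point. Below is a minimal sketch of such a layout; the table and column names are assumptions rather than anything from the question, and the frequently-updated screenshot is split into its own table so it can be left out of replication and so new attributes can be added alongside it later.

        -- Minimal sketch of a single-table layout (all names are assumptions).
        CREATE TABLE test_run (
            test_id      uuid PRIMARY KEY,
            name         text NOT NULL,
            description  text,
            status       text NOT NULL CHECK (status IN ('waiting', 'running', 'done')),
            progress     smallint CHECK (progress BETWEEN 0 AND 100),
            started_at   timestamptz,
            ended_at     timestamptz,
            result       text
        );

        -- The screenshot changes every 30 seconds; keeping it in a separate table makes it
        -- easy to exclude from replication without touching the main results table.
        CREATE TABLE test_run_screenshot (
            test_id      uuid PRIMARY KEY REFERENCES test_run (test_id),
            captured_at  timestamptz NOT NULL,
            image        bytea NOT NULL
        );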

    Read the article

  • SqlBackup from my client app

    - by Robbert Dam
    I have a client app that runs on several machines and connects to a single SQL database. In the client app, the user has the possibility to use a local SQL (CE) database or connect to a remote SQL database. There is also a box where a backup location can be set. For the backup of the remote SQL database I use the following code:

        var bdi = new BackupDeviceItem(backupFile, DeviceType.File);
        var backup = new Backup { Database = "AppDb", Initialize = true };
        backup.Devices.Add(bdi);
        var server = new Server(connection);
        backup.SqlBackup(server);

    When developing my application, I wasn't aware that the given backupFile is written on the machine where the SQL Server runs, and not on the client machine! What I want is to "dump" all the data from my database to a local file. (Why? Because users may enter a network location in the "backup location" box, and backing up SQL immediately to a network location fails. So I need to write locally first and then copy to the network in that case.)

    Is there an alternative to the method above?

    Read the article

  • Passing custom Python objects to nosetests

    - by Rob
    I am attempting to re-organize our test libraries for automation, and nose seems really promising. My question is: what is the best strategy for passing Python objects into nose tests?

    Our tests are organized in a testlib with a bunch of modules that exercise different types of request operations. Something like this:

        testlib
        \-testmoda
        \-testmodb
        \-testmodc

    In some cases a test module (e.g. testmoda) is nothing but test_something(), test_something2() functions, while in other cases we have a TestModB class in testmodb with test_anotherthing1(), test_anotherthing2() functions. The cool thing is that nose easily finds both. Most of those test functions are request-factory stuff that can easily share a single connection to our server farm. Thus we do a lot of test_something1(cnn), TestModB.test_anotherthing2(cnn), etc.

    Currently we don't use nose; instead we have a hodge-podge of homegrown driver scripts with hard-coded lists of tests to execute. Each of those driver scripts creates its own connection object. Maintaining those scripts and the connection minutiae is painful. I'd like to take free advantage of nose's beautiful discovery functionality, passing in a connection object of my choosing.

    Thanks in advance! Rob

    P.S. The connection objects are not pickle-able. :(
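
    One nose-friendly approach is a package-level fixture: nose runs setup_package()/teardown_package() from the package's __init__.py once per run, so the shared connection can live there and the tests read it from the package. A minimal sketch, assuming a hypothetical make_connection() factory standing in for however the current driver scripts build the (unpicklable) connection:

        # testlib/__init__.py -- a sketch; make_connection() is a placeholder.
        connection = None

        def setup_package():
            """nose calls this once before any test in the package runs."""
            global connection
            connection = make_connection()

        def teardown_package():
            """...and this once after the last test in the package finishes."""
            connection.close()

        # testlib/testmoda.py -- tests import the package (not the name) so they see
        # the value that setup_package() assigned at run time.
        import testlib

        def test_something1():
            assert testlib.connection is not None   # use the shared connection here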

    Read the article

  • Multicore solr on Ubuntu 10.04 working for anyone?

    - by coleifer
    Following instructions from the two sites below, I've installed Tomcat 6 and Solr 1.4:

    http://gist.github.com/204638
    https://wiki.fourkitchens.com/display/TECH/Solr+1.4+on+Ubuntu+9.10+and+CentOS+5

    I have successfully got it up and running on a server running 9.04 with multicore support, but on the 10.04 box I can't seem to get it to work. I am able to reach localhost:xxxx/solr/ on the 10.04 box and see a single link to the Solr Admin, but following the link takes me to a 404 page with the following output:

        /solr/admin/
        HTTP Status 404 - missing core name in path
        The requested resource (missing core name in path) is not available

    I am also unable to access /solr/site1/ as I would expect - it similarly returns a 404.

        <!-- from /var/solr/solr.xml, site dirs exist -->
        <cores adminPath="/admin/cores">
            <core name="site1" instanceDir="site1" />
            <core name="site2" instanceDir="site2" />
        </cores>

        <!-- from /etc/tomcat6/Catalina/localhost/solr.xml -->
        <Context docBase="/var/solr/solr.war" debug="0" privileged="true" allowLinking="true" crossContext="true">
            <Environment name="solr/home" type="java.lang.String" value="/var/solr" override="true" />
        </Context>

    Read the article

  • Do servlet containers prevent web applications from causing each other interference and how do they do it?

    - by chrisbunney
    I know that a servlet container, such as Apache Tomcat, runs in a single instance of the JVM, which means all of its servlets will run in the same process. I also know that the architecture of the servlet container means each web application exists in its own context, which suggests it is isolated from other web applications.

    Accepting that each web application is isolated, I would expect that you could create 2 copies of an identical web application, change the names and context paths of each (as well as any other relevant configuration), and run them in parallel without one affecting the other. The answers to this question appear to support this view.

    However, a colleague disagrees based on their experience of attempting just that. They took a web application and tried to run 2 separate instances (with different names etc.) in the same servlet container, and experienced issues with the 2 instances conflicting (I'm unable to elaborate more as I wasn't involved in that work). Based on this, they argue that since the web applications run in the same process space, they can't be isolated, and things such as class attributes would end up being inadvertently shared. This answer appears to suggest the same thing.

    The two views don't seem to be compatible, so I ask you:

      - Do servlet containers prevent web applications deployed to the same container from conflicting with each other?
      - If yes, how do they do this?
      - If no, why does interference occur?
      - And finally, under what circumstances could separate web applications conflict and cause each other interference - perhaps scenarios involving resources on the file system, native code, or database connections?
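
    The usual isolation mechanism is one classloader per web application: the same class loaded by two different webapp classloaders is two distinct Class objects with distinct static fields, so interference tends to come from state that lives above the webapp loaders (a JAR in the container's shared/common lib, JVM-wide singletons such as system properties, native libraries, or an external resource like a database). A minimal sketch of the classloader point, assuming a hypothetical Counter class with a static field compiled into webapp-classes/ (that directory and class are my own illustration, not part of the question):

        // IsolationSketch.java -- illustrates per-"webapp" classloader isolation.
        import java.net.URL;
        import java.net.URLClassLoader;

        public class IsolationSketch {
            public static void main(String[] args) throws Exception {
                // Hypothetical directory containing a compiled Counter class with a static int field.
                URL[] classpath = { new java.io.File("webapp-classes/").toURI().toURL() };

                // Two "webapps": each gets its own loader with no shared application-level parent.
                ClassLoader appA = new URLClassLoader(classpath, null);
                ClassLoader appB = new URLClassLoader(classpath, null);

                Class<?> counterA = appA.loadClass("Counter");
                Class<?> counterB = appB.loadClass("Counter");

                // Same bytecode, but two distinct classes -- and therefore two copies of any static state.
                System.out.println(counterA == counterB);                                    // false
                System.out.println(counterA.getClassLoader() == counterB.getClassLoader());  // false
            }
        }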

    Read the article

  • Code Golf: Shortest Turing-complete interpreter.

    - by ilya n.
    I've just tried to create the smallest possible language interpreter. Would you like to join and try?

    Rules of the game:

      - You should specify a programming language you're interpreting. If it's a language you invented, it should come with a list of commands in the comments.
      - Your code should start with an example program and data assigned to your code and data variables.
      - Your code should end with output of your result. It's preferable that there are debug statements at every intermediate step.
      - Your code should be runnable as written.
      - You can assume that data are 0s and 1s (int, string or boolean, your choice) and output is a single bit.
      - The language should be Turing-complete in the sense that for any algorithm written on a standard model, such as a Turing machine, Markov chains, or similar of your choice, it's reasonably obvious (or explained) how to write a program that, after being executed by your interpreter, performs the algorithm.
      - The length of the code is defined as the length of the code after removal of the input part, output part, debug statements and unnecessary whitespace. Please add the resulting code and its length to the post.
      - You can't use functions that make the compiler execute code for you, such as eval(), exec() or similar.

    This is a Community Wiki, meaning neither the question nor answers get reputation points from votes. But vote anyway!
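
    For illustration, a minimal sketch of the kind of entry these rules describe: a Brainfuck interpreter in Python, with the program and data hard-coded and the result printed at the end. The example program and its output are my own, not from the question.

        # A tiny Brainfuck interpreter (Brainfuck is Turing-complete).
        code = "+[->+<]>."    # example program: moves a 1 from cell 0 to cell 1 and outputs it
        data = []             # no input needed for this example

        tape, ptr, pc, out = [0] * 30000, 0, 0, []
        stack, jump = [], {}
        for i, c in enumerate(code):          # pre-compute matching bracket positions
            if c == '[':
                stack.append(i)
            elif c == ']':
                j = stack.pop()
                jump[i], jump[j] = j, i

        while pc < len(code):
            c = code[pc]
            if c == '>': ptr += 1
            elif c == '<': ptr -= 1
            elif c == '+': tape[ptr] = (tape[ptr] + 1) % 256
            elif c == '-': tape[ptr] = (tape[ptr] - 1) % 256
            elif c == '.': out.append(tape[ptr])
            elif c == ',': tape[ptr] = data.pop(0) if data else 0
            elif c == '[' and tape[ptr] == 0: pc = jump[pc]   # skip the loop body
            elif c == ']' and tape[ptr] != 0: pc = jump[pc]   # repeat the loop body
            pc += 1

        print(out)   # [1]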

    Read the article

  • Use the repository pattern when using PLINQO generated data?

    - by Chad
    I'm "upgrading" an MVC app. Previously, the DAL was a part of the Model, as a series of repositories (based on the entity name) using standard LINQ to SQL queries. Now, it's a separate project and is generated using PLINQO.

    Since PLINQO generates query extensions based on the properties of the entity, I started using them directly in my controller... and eliminated the repositories altogether. It's working fine; this is more a question to draw upon your experience: should I continue down this path, or should I rebuild the repositories (using PLINQO as the DAL within the repository files)?

    One benefit of just using the PLINQO-generated data context is that when I need DB access, I just make one reference to the data context. Under the repository pattern, I had to reference each repository when I needed data access, sometimes needing to reference multiple repositories in a single controller.

    The big benefit I saw in the repositories was aptly named query methods (i.e. FindAllProductsByCategoryId(int id), etc.). With the PLINQO code, it's _db.Product.ByCatId(int id) - which isn't too bad either.

    I like both, but where it gets hairier is when the query uses predicates. I can roll that up into the repository query method. But with the PLINQO code, it would be something like _db.Product.Where(x => x.CatId == 1 && x.OrderId == 1); I'm not so sure I like having code like that in my controllers. What's your take on this?

    Read the article

  • WordPress conditional statements

    - by codedude
    I am using this code in WordPress to display different content when different pages are loaded. I have 5 pages on the site called Home, Bio, Work, Contact and Notes. The Notes page is being used as a blog. Here is the code I am using:

        <?php if (is_page('contact')) { ?>
            get in touch with me
        <?php } elseif (is_single()) { ?>
            a note from me
        <?php } elseif (is_page('notes')) { ?>
            the notes of me
        <?php } else { ?>
            the <?php the_title(); ?> of me
        <?php } ?>

    So if it is the contact page, it displays "get in touch with me", and if it is a single blog post page it displays "a note from me". However, this is where I have a problem. The next statement should display "the notes of me" when it is on the Notes page. However, this does not happen. Instead it shows the default content which is in the "else" statement. Any idea why this is happening?
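
    A likely explanation, assuming Notes is assigned as the posts page under Settings > Reading: WordPress treats the posts page specially, so is_page('notes') is false there and is_home() is the conditional that matches it. A sketch of the adjusted branch:

        <?php if (is_page('contact')) { ?>
            get in touch with me
        <?php } elseif (is_single()) { ?>
            a note from me
        <?php } elseif (is_home()) { // the blog/posts page is matched by is_home(), not is_page() ?>
            the notes of me
        <?php } else { ?>
            the <?php the_title(); ?> of me
        <?php } ?>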

    Read the article

  • Declaration of arrays before "normal" variables in c?

    - by bjarkef
    Hi. We are currently developing an application for an MSP430 MCU, and are running into some weird problems. We discovered that declaring arrays within a scope, after the declaration of "normal" variables, sometimes causes what seems to be undefined behavior. Like this:

        foo(int a, int *b);

        int main(void)
        {
            int x = 2;
            int arr[5];

            foo(x, arr);
            return 0;
        }

    foo is sometimes passed a pointer as its second argument that does not point to the arr array. We verify this by single-stepping through the program, and see that the value of the arr variable in the main scope is not the same as the value of the b pointer variable in the foo scope. And no, this is not really reproducible; we have just observed this behavior once in a while. Changing the example seems to solve the problem, like this:

        foo(int a, int *b);

        int main(void)
        {
            int arr[5];
            int x = 2;

            foo(x, arr);
            return 0;
        }

    Does anybody have any input or hints as to why we experience this behavior? Or similar experiences? The MSP430 programming guide specifies that code should conform to the ANSI C89 spec, so I was wondering if it says that arrays have to be declared before non-array variables? Any input on this would be appreciated.

    Read the article

  • Ember model is gone when I use the renderTemplate hook

    - by Mickael Caruso
    I have a single template - editPerson.hbs:

        <form role="form">
            FirstName: {{input type="text" value=model.firstName }} <br/>
            LastName: {{input type="text" value=model.lastName }}
        </form>

    I want to render this template when the user wants to edit an existing person or create a new person. So, I set up routes:

        App.Router.map(function(){
            this.route("createPerson", { path: "/person/new" });
            this.route("editPerson", { path: "/person/:id" });
            // other routes not shown for brevity
        });

    So, I define two routes - one for create and one for edit:

        App.CreatePersonRoute = Ember.Route.extend({
            renderTemplate: function(){
                this.render("editPerson", { controller: "editPerson" });
            },
            model: function(){
                return { firstName: "John", lastName: "Smith" };
            }
        });

        App.EditPersonRoute = Ember.Route.extend({
            model: function(id){
                return { firstName: "John Existing", lastName: "Smith Existing" };
            }
        });

    So, I hard-code the models. I'm concerned about the createPerson route. I'm telling it to render the editPerson template and to use the editPerson controller (which I don't show because I don't think it matters - but I made one, though). When I use renderTemplate, I lose the model John Smith, which in turn won't display in the editPerson template on the web page. Why?

    I "fixed" this by creating a separate and identical (to editPerson.hbs) createPerson.hbs, and removing the renderTemplate hook in CreatePerson. It works as expected, but I find it somewhat troubling to have a separate and identical template for the edit and create cases. I looked everywhere for how to properly do this, and I found no answers.

    Read the article

  • Text Parsing - My Parser Skipping commands

    - by The.Anti.9
    I'm trying to parse text formatting. I want to mark inline code, much like SO does, with backticks (`). The rule is supposed to be that if you want to use a backtick inside of an inline code element, you should use double backticks around the inline code, like this:

        `` mark inline code with backticks ( ` ) ``

    My parser seems to skip over the double backticks completely for some reason. Here's the code for the function that does the inline code parsing:

        private string ParseInlineCode(string input)
        {
            for (int i = 0; i < input.Length; i++)
            {
                if (input[i] == '`' && input[i - 1] != '\\')
                {
                    if (input[i + 1] == '`')
                    {
                        string str = ReadToCharacter('`', i + 2, input);
                        while (input[i + str.Length + 2] != '`')
                        {
                            str += ReadToCharacter('`', i + str.Length + 3, input);
                        }
                        string tbr = "``" + str + "``";
                        str = str.Replace("&", "&amp;");
                        str = str.Replace("<", "&lt;");
                        str = str.Replace(">", "&gt;");
                        input = input.Replace(tbr, "<code>" + str + "</code>");
                        i += str.Length + 13;
                    }
                    else
                    {
                        string str = ReadToCharacter('`', i + 1, input);
                        input = input.Replace("`" + str + "`", "<code>" + str + "</code>");
                        i += str.Length + 13;
                    }
                }
            }
            return input;
        }

    If I use single backticks around something, it wraps it in the <code> tags correctly.

    Read the article

  • Using Templated Helpers in MVC 2.0: how can I use the name of the property that I'm rendering inside the template?

    - by Andrey Tagaew
    Hi. I'm reviewing the new features of ASP.NET MVC 2.0. During the review I found the use of templated helpers really interesting. As described, the primary reason for using them is to provide a common way of specifying how certain data types should be rendered. Now I want to use this approach in my project for the DateTime data type. My project was written for MVC 1.0, so generating the editbox currently looks like this:

        <%= Html.TextBox("BirthDate", Model.BirthDate, new { maxlength = 10, size = 10, @class = "BirthDate-date" })%>
        <script type="text/javascript">
            $(document).ready(function() {
                $(".BirthDate-date").datepicker({
                    showOn: 'button',
                    buttonImage: '<%=Url.Content("~/images/i_calendar.gif") %>',
                    buttonImageOnly: true
                });
            });
        </script>

    Now I want to use a templated helper, so I want to get the above code once I type:

        <%=Html.EditorFor(f=>f.BirthDate) %>

    According to the manual, I created a DateTime.ascx partial view inside the Shared/EditorTemplates folder. I put the code above in there and got stuck on a problem: how can I pass the name of the property that I'm rendering with the template helper? As you can see from my example, I really need it, since I'm using the name of the property to specify the data value and the parameter name that will be sent during the POST request. I'm also using it to generate the class name for the JS calendar building.

    I tried to remove my partial view for the template helper and let MVC generate its default behavior. Here is what it generated for me:

        <input type="text" value="04/29/2010" name="LoanApplicationDays" id="LoanApplicationDays" class="text-box single-line">

    As you can see, it used the name of the property for the "name" and "id" attributes. This example leads me to presume that the template helper knows about the name of the property, so there should be some way to use it in a custom implementation. Thanks for your help!
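
    A sketch of how this is typically handled inside an MVC 2 editor template (my own illustration, not from the question): ViewData.TemplateInfo carries the field prefix for the property being rendered, and helpers called with an empty name pick it up automatically, so the property name never has to be hard-coded.

        <%@ Control Language="C#" Inherits="System.Web.Mvc.ViewUserControl<System.DateTime>" %>
        <%-- Views/Shared/EditorTemplates/DateTime.ascx -- a sketch --%>
        <%  // TemplateInfo knows which property is being rendered; "" means "the current field",
            // so GetFullHtmlFieldName("") would give the posted name (e.g. "BirthDate") and
            // GetFullHtmlFieldId("") its id-safe variant.
            string fieldId = ViewData.TemplateInfo.GetFullHtmlFieldId(""); %>
        <%= Html.TextBox("", Model, new { maxlength = 10, size = 10, @class = fieldId + "-date" }) %>
        <script type="text/javascript">
            $(document).ready(function () {
                $(".<%= fieldId %>-date").datepicker({
                    showOn: 'button',
                    buttonImage: '<%= Url.Content("~/images/i_calendar.gif") %>',
                    buttonImageOnly: true
                });
            });
        </script>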

    Read the article

  • Microsoft OLE DB Provider for SQL Server error '80040e14' Could not find stored procedure

    - by BBlake
    I am migrating a classic ASP web app to new servers. The database back end is migrating from SQL Server 2000 to SQL Server 2008, and the app is moving from Win2000 x86 to Win2003R2 x64. I am getting the above error on every single stored procedure call within the application. I have verified:

      - Yes, the SQL user is set up, using correct username and password.
      - Yes, the SQL user has execute permissions on the stored procedures in the database.
      - Yes, I have updated the TypeLib references to the new UUID.
      - Yes, I have logged into the database via SSMS with the SQL user id and it can see and execute the stored procedures just fine in SSMS, but not from the web app.
      - Yes, the SQL user has the database set as its default database.

    The most frustrating thing is it works fine on the DEV server, but not on the production server. I have gone through every IIS setting 5 or 6 times and the web app is set up precisely the same in both environments. The only difference is the database server name in the connection string (DEV vs prod).

    EDIT: I have also tried pointing the prod web box at the dev database server and get the same error so I'm fairly sure the issue isn't on the database side.

    Read the article

  • Duck type testing with C# 4 for dynamic objects.

    - by Tracker1
    I want to have a simple duck-typing example in C# using dynamic objects. It would seem to me that a dynamic object should have HasValue/HasProperty/HasMethod methods with a single string parameter for the name of the value, property, or method you are looking for before trying to run against it. I'm trying to avoid try/catch blocks, and deeper reflection if possible. It just seems to be a common practice for duck typing in dynamic languages (JS, Ruby, Python etc.) to test for a property/method before trying to use it, then fall back to a default, or throw a controlled exception.

    The example below is basically what I want to accomplish. If the methods described above don't exist, does anyone have premade extension methods for dynamic that will do this?

    Example: In JavaScript I can test for a method on an object fairly easily.

        // JavaScript
        function quack(duck) {
            if (duck && typeof duck.quack === "function") {
                return duck.quack();
            }
            return null; // nothing to return, not a duck
        }

    How would I do the same in C#?

        // C# 4
        dynamic Quack(dynamic duck)
        {
            // how do I test that the duck is not null,
            // and has a quack method?
            // if it doesn't quack, return null
        }
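
    There is no built-in HasMethod on dynamic, so the usual options are reflection for ordinary CLR types and IDynamicMetaObjectProvider.GetDynamicMemberNames() for DLR types such as ExpandoObject. A rough sketch combining the two (the Duck class and its behaviour are my own illustration, not an established API):

        using System;
        using System.Dynamic;
        using System.Linq;
        using System.Linq.Expressions;

        static class Duck
        {
            // A sketch, not a complete solution: overloads, indexers and data members
            // holding delegates are not distinguished here.
            public static bool HasMethod(object obj, string name)
            {
                if (obj == null) return false;

                var dyn = obj as IDynamicMetaObjectProvider;
                if (dyn != null)
                {
                    // DLR objects can report the member names they currently expose.
                    var names = dyn.GetMetaObject(Expression.Constant(obj)).GetDynamicMemberNames();
                    return names.Contains(name);
                }

                // Plain CLR objects: fall back to reflection.
                return obj.GetType().GetMethod(name) != null;
            }
        }

        // usage: if (Duck.HasMethod(duck, "Quack")) { result = ((dynamic)duck).Quack(); }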

    Read the article

  • Map element position in data file to class property

    - by Augusto
    I need to read/write files following a format provided by a third-party specification. The specification itself is pretty simple: it gives the position and the size of the data that will be saved in the file. For example:

        Position  Size  Description
        --------------------------------------------------
        0001      10    Device serial number
        0011      02    Hour
        0013      02    Minute
        0015      02    Second
        0017      02    Day
        0019      02    Month
        0021      02    Year

    The list is very long - it has about 400 elements - but lots of them can be combined. For example, hour, minute, second, day, month and year can be combined into a single DateTime object. I've split the elements into about 4 categories and created separate classes for holding the data. So, instead of a big structure representing the data, I have some smaller classes. I've also created different classes for reading and writing the data.

    The problem is: how do I map the positions in the file to the object properties, so that I don't need to repeat the values in the reading/writing classes? I could use some custom attributes and retrieve them via reflection, but since the code will be running on devices with small memory and processors, it would be nice to find another way. My current read code looks like this:

        public void Read()
        {
            DataFile dataFile = new DataFile();

            // the arguments are: position, size
            dataFile.SerialNumber = ReadLong(1, 10);
            //...
        }

    Any ideas on this one? Thanks!
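
    One reflection-free sketch (my own illustration, not from the question): keep the layout in a single static table of positions, sizes and setter delegates, so the offsets live in exactly one place and the reader and writer both walk the same table. The DataFile members shown are assumed, not taken from the real class.

        using System;
        using System.Collections.Generic;

        class DataFile
        {
            public long SerialNumber;
            public int Hour;
            // ... remaining members
        }

        class FieldMap
        {
            public int Position;
            public int Size;
            public Action<DataFile, long> Assign;   // how to store the decoded value
        }

        static class DataFileLayout
        {
            // The only place positions and sizes are written down; Read() just iterates this list.
            public static readonly List<FieldMap> Fields = new List<FieldMap>
            {
                new FieldMap { Position = 1,  Size = 10, Assign = (d, v) => d.SerialNumber = v },
                new FieldMap { Position = 11, Size = 2,  Assign = (d, v) => d.Hour = (int)v },
                // ... remaining entries, or grouped entries for combined values such as DateTime
            };
        }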

    Read the article

  • CFML error with Application.cfc page

    - by tibin mathew
    Hi, I have a problem with my CFML website. I have used the code below in my Application.cfc file to connect to the DSN, but whenever I put this on my server I get an error - I can't browse even a single test.cfm page. Is there any mistake in that code, any syntax error or something like that, or will it be some problem with the DSN?

        <cfset this.name = "0307de6.netsolhost.com">
        <cfset this.sessionmanagement = true>
        <cfset this.loginstorage="session">
        <cfset this.sessiontimeout = CreateTimeSpan(0,0,30,0)>
        <cfset this.applicationtimeout = CreateTimeSpan(2,0,0,0)>

        <cffunction name="onApplicationStart">
            <cfscript>
                application.DSN = "hirerodsn";
                application.dbUserName = "myusr";
                application.dbPassword = "myd69!";
            </cfscript>
        </cffunction>

        <cffunction name="onRequestStart">
            <cfscript>
                request.DSN = "hirerodsn";
                request.dbUserName = "myusr";
                request.dbPassword = "myd69!";
            </cfscript>
        </cffunction>

    Please, anyone, help me.

    Read the article

  • DNS-Based Environment Determination

    - by zvolkov
    Found the following here. The question is: where can I find more details on how exactly to implement this on Windows? Any guide or how-to, anybody? Or maybe you can provide your invaluable suggestions? Specifically, how do I make it so that "all QA servers would first resolve entries in qa.example.com first and then if that lookup failed they would try example.com"? (I'm a dev, not a DNS specialist, but our IT support has refused to help on this.)

        Use DNS Based Environment Determination for your servers. Do this by initially splitting your
        top level domain into a number of sub domains depending on their function, and then creating
        DNS Service Names in each of the sub domains pointing to the relevant server for that service.
        Based on the list above we would then have:

          * clientdb.prod.example.com for Production
          * clientdb.perf.example.com for Performance Testing
          * clientdb.qa.example.com for QA
          * clientdb.dev.example.com for Development

        Servers then resolve entries in their relevant sub domain by function. That is, all QA servers
        would first resolve entries in qa.example.com first and then if that lookup failed they would
        try example.com. This allows you to have a single configuration entry for your client database
        hostname (clientdb) that would resolve correctly in all environments. This technique has the
        added advantage of still having global services defined in a common top level domain.

    Here's one related (but not equivalent) SO question: http://stackoverflow.com/questions/774490/dns-resolving-based-on-client-ip

    This seems to be related to providing "split horizon" DNS service. Reading that, I see that I will probably need a separate DNS server for each environment. Is this true, or does Windows support some form of "tagging" the records to be visible depending on the requestor's IP?

    Also, cross-posted on ServerFault.

    Read the article

  • How do I correctly use Unity to pass a ConnectionString to my repository classes?

    - by GenericTypeTea
    I've literally just started using the Unity Application Block's dependency injection library from Microsoft, and I've come unstuck. This is my IoC class that handles the instantiation of my concrete classes to their interface types (so I don't have to keep calling Resolve on the IoC container each time I want a repository in my controller):

        public class IoC
        {
            public static void Intialise(UnityConfigurationSection section, string connectionString)
            {
                _connectionString = connectionString;
                _container = new UnityContainer();
                section.Configure(_container);
            }

            private static IUnityContainer _container;
            private static string _connectionString;

            public static IMovementRepository MovementRepository
            {
                get { return _container.Resolve<IMovementRepository>(); }
            }
        }

    So, the idea is that from my controller I can just do the following:

        _repository = IoC.MovementRepository;

    I am currently getting the error:

        Exception is: InvalidOperationException - The type String cannot be constructed. You must configure the container to supply this value.

    Now, I'm assuming this is because my mapped concrete implementation requires a single string parameter for its constructor. The concrete class is as follows:

        public sealed class MovementRepository : Repository, IMovementRepository
        {
            public MovementRepository(string connectionString) : base(connectionString) { }
        }

    Which inherits from:

        public abstract class Repository
        {
            public Repository(string connectionString)
            {
                _connectionString = connectionString;
            }

            public virtual string ConnectionString
            {
                get { return _connectionString; }
            }

            private readonly string _connectionString;
        }

    Now, am I doing this the correct way? Should I not have a constructor in my concrete implementation of a loosely coupled type? I.e. should I remove the constructor and just make the ConnectionString property a get/set, so I can do the following?

        public static IMovementRepository MovementRepository
        {
            get
            {
                return _container.Resolve<IMovementRepository>(
                    new ParameterOverrides
                    {
                        { "ConnectionString", _connectionString }
                    }.OnType<IMovementRepository>()
                );
            }
        }

    So, I basically wish to know how to get my connection string to my concrete type in a way that matches the IoC rules and keeps my controller and concrete repositories loosely coupled, so I can easily change the data source at a later date.
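
    One common way to handle this (a sketch, not necessarily how this particular app should be configured): keep the constructor and tell Unity at registration time what value to supply for it, for example with an InjectionConstructor, so Resolve needs no per-call overrides.

        // Inside Intialise(), after creating the container (requires: using Microsoft.Practices.Unity;)
        _container = new UnityContainer();
        _container.RegisterType<IMovementRepository, MovementRepository>(
            new InjectionConstructor(connectionString));   // supply the ctor argument once, up front

        // Resolution stays a one-liner and the repository keeps its constructor:
        var repository = _container.Resolve<IMovementRepository>();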

    Read the article

  • Multiple user roles in Ruby on Rails

    - by aguynamedloren
    I am building an inventory management application with four different user types: admin, employee, manufacturer, transporter. I haven't started coding yet, but this is what I'm thinking. Manufacturers and transporters are related through a has_many :through many-to-many association with products, as follows:

        class Manufacturer < ActiveRecord::Base
          has_many :products
          has_many :transporters, :through => :products
        end

        class Product < ActiveRecord::Base
          belongs_to :manufacturer
          belongs_to :transporter
        end

        class Transporter < ActiveRecord::Base
          has_many :products
          has_many :manufacturers, :through => :products
        end

    All four user types will be able to log in, but they will have different permissions and views, etc. I don't think I can put them in the same table (Users), however, because they will have different requirements, i.e. vendors and manufacturers must have a billing address and contact info (through validations), but admins and employees should not have these fields. If possible, I would like to have a single login screen as opposed to 4 different screens. I'm not asking for the exact code to build this, but I'm having trouble determining the best way to make it happen. Any ideas would be greatly appreciated - thanks!
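
    One common sketch for this (my own illustration, not from the question): keep a single users table for authentication with a role column, and make the role-specific validations conditional - or use single-table inheritance with one subclass per role - which preserves the single login screen. The role names and fields below are assumptions.

        class User < ActiveRecord::Base
          ROLES = %w[admin employee manufacturer transporter]

          validates_inclusion_of :role, :in => ROLES

          # Only the externally-facing roles need billing/contact details.
          validates_presence_of :billing_address, :contact_info,
            :if => lambda { |u| %w[manufacturer transporter].include?(u.role) }

          def admin?
            role == 'admin'
          end
        end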

    Read the article

  • How do I use a modalViewController Identically in Two Controllers?

    - by Theory
    I'm using the Three20 TTMessageController in my app. I've figured out how to use it, adding on a bunch of other stuff (including TTMessageControllerDelegate methods and ABPeoplePickerNavigationControllerDelegate methods). It works great for me, after a bit of a struggle to figure it out. The trouble I'm having now is a design issue: I want to use it identically in two different places, including with the same delegate methods. My current approach is that I've put all the code into a single class inheriting from NSObject, called ComposerProxy, and I'm just having the two controllers that use it use the proxy, like so:

        ComposerProxy *proxy = [[ComposerProxy alloc] initWithController:self];
        [proxy go];

    The go method constructs the TTMessageController, configures it, adds it to a UINavigationController, and presents it:

        [self.controller presentModalViewController: navController animated: YES];

    This works great, as I have all my code nicely encapsulated in ComposerProxy and I need only the above two lines anywhere I want to use it. The downside, though, is that I can't dealloc the proxy variable without getting crashes. I can't autorelease it, either: same problem.

    So I'm wondering if my proxy approach is a poor one. How does one normally encapsulate a bunch of behaviors like this without requiring a lot of duplicate code in the classes that use it? Do I need to add a delegate class to my ComposerProxy and make the controller responsible for dismissing the modal view controller in a hypothetical composerDidFinish method or some such? Many TIA!

    Read the article

  • Why would some $_POST variables go missing for a form with PHP?

    - by Chad Johnson
    Sometimes, multiple times a day in fact, users of my web application are submitting a certain form which has about a dozen form fields, half of which are hidden fields, and half of the $_POST data is simply not present in the processing script. Note that the fields that are not present are at the very bottom of the form. I know this because this raises a fatal error, and an email is dispatched to me which includes the post data. And of course, neither I nor any of the developers on my team can reproduce the problem. Flash is involved in the process, as I'm using a library called Uploadify to display a progress meter. Here is the flow... does anyone have ANY ideas at all why some of the post data would be getting wiped out?

      1. User visits the edit screen for a page in the CMS I am using.
      2. The record id for the page is put into a form as a hidden value.
      3. User clicks the Uploadify browse button and selects a file (only single file selection is allowed).
      4. User clicks the Submit button for my form.
      5. jQuery intercepts the form submit action, triggers Uploadify to start uploading, and returns false for the submit action (manually cancelling the form submit event so that Uploadify can take over).
      6. Uploadify uploads to a custom process script.
      7. Uploadify finishes uploading and triggers the Javascript completion callback.
      8. The Javascript callback calls $('#myForm').submit() to submit the form.

    This happens on multiple browsers (Firefox 3.5, 3.6, Safari, Internet Explorer 7, 8) and multiple platforms (Mac OS 10.5, 10.6 and Windows XP, 7).

    Read the article

  • Merging: hg/git vs. svn

    - by stmax
    I often read that hg (and git and...) are better at merging than svn, but I have never seen practical examples of where hg/git can merge something where svn fails (or where svn needs manual intervention). Could you post a few step-by-step lists of branch/modify/commit/...-operations that show where svn would fail while hg/git happily moves on? Practical, not highly exceptional cases, please...

    Some background: we have a few dozen developers working on projects using svn, with each project (or group of similar projects) in its own repo. We know how to apply release- and feature-branches, so we don't run into problems very often (i.e. we've been there, but we've learned to overcome Joel's problems of "one programmer causing trauma to the whole team" or "needing six developers for two weeks to reintegrate a branch"). We have release-branches that are very stable and only used to apply bugfixes. We have trunks that should be stable enough to be able to create a release within one week. And we have feature-branches that single developers or groups of developers can work on. Yes, they are deleted after reintegration so they don't clutter up the repository. ;)

    So I'm still trying to find the advantages of hg/git over svn. I'd love to get some hands-on experience, but there aren't any bigger projects we could move to hg/git yet, so I'm stuck with playing with small artificial projects that only contain a few made-up files. And I'm looking for a few cases where you can feel the impressive power of hg/git, since so far I have often read about them but failed to find them myself.
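
    For concreteness, here is a sketch of one commonly cited case - a rename on one branch combined with an edit of the same file on another - expressed as git operations. The file and branch names are made up, and the svn comparison in the comments reflects the commonly reported behaviour rather than a run reproduced here.

        # A sketch of the classic rename-vs-edit merge in git.
        git init merge-demo && cd merge-demo
        echo "hello" > greeting.txt
        git add greeting.txt
        git commit -m "initial"

        git checkout -b rename-branch          # branch 1: rename the file
        git mv greeting.txt salutation.txt
        git commit -m "rename greeting.txt"

        git checkout -b edit-branch HEAD~1     # branch 2: starts from the initial commit
        echo "hello world" > greeting.txt      # edit the file under its old name
        git commit -am "edit greeting.txt"

        git merge rename-branch                # git detects the rename and applies the edit
                                               # to salutation.txt; the analogous svn merge
                                               # (rename recorded as copy+delete) is usually
                                               # reported as a tree conflict to fix by hand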

    Read the article

  • Users in database server or database tables

    - by Batcat
    Hi all, I came across an interesting issue about client-server application design. We have a browser-based management application with many users, so obviously we have a user management module within it. I have always thought that having a user table in the database to keep all the login details was good enough. However, a senior developer said user management should be done in the database server layer, and that the design is poor otherwise. What he meant was: if a user wants to use the application, then an account should be created for them in the user table AND on the database server as well. So if I have 50 users using my application, then I should have 50 database server logins.

    I personally think having just one database server account for this database is enough: grant that account the privileges needed for all the operations the application performs. The users that interact with the application should have their accounts created and managed in the user table, as they are more related to the application layer. I don't see, and don't agree, that there is a need to create a database server account for every user created in the application's user table. A single database server user should be enough to handle all the queries sent by the application.

    I'd really like to hear some suggestions/opinions on whether I'm missing something - performance or security issues? Thank you very much.

    Read the article

  • help with Firefox extension

    - by Johnny Grass
    I'm writing a Firefox extension that creates a socket server which will output the active tab's URL when a client makes a connection to it. I have the following code in my javascript file:

        var serverSocket;

        function startServer()
        {
            var listener = {
                onSocketAccepted : function(socket, transport) {
                    try {
                        var outputString = gBrowser.currentURI.spec + "\n";
                        var stream = transport.openOutputStream(0,0,0);
                        stream.write(outputString,outputString.length);
                        stream.close();
                    } catch(ex2){ dump("::"+ex2); }
                },
                onStopListening : function(socket, status){}
            };

            try {
                serverSocket = Components.classes["@mozilla.org/network/server-socket;1"]
                    .createInstance(Components.interfaces.nsIServerSocket);
                serverSocket.init(7055,true,-1);
                serverSocket.asyncListen(listener);
            } catch(ex){ dump(ex); }

            document.getElementById("status").value = "Started";
        }

        startServer();

    As it is, it works for multiple tabs in a single window. If I open multiple windows, it ignores the additional windows. I think it is creating a server socket for each window, but since they are using the same port, the additional sockets fail to initialize. I need it to create a server socket when the browser launches and continue running when I close the windows (Mac OS X). As it is, when I close a window but Firefox remains running, the socket closes and I have to restart Firefox to get it up and running. How do I go about that?

    Read the article
