Search Results

Search found 19878 results on 796 pages for 'multiple dispatch'.


  • Importing Conditionally Compiled Functions From a Perl Module

    - by Robert S. Barnes
    I have a set of logging and debugging functions which I want to use across multiple modules / objects. I'd like to be able to turn them on / off globally using a command line switch. The following code does this, however, I would like to be able to omit the package name and keep everything in a single file. This is related to two previous questions I asked, here and here. #! /usr/bin/perl -w use strict; use Getopt::Long; { package LogFuncs; use threads; use Time::HiRes qw( gettimeofday ); # provide tcpdump style time stamp sub _gf_time { my ( $seconds, $microseconds ) = gettimeofday(); my @time = localtime($seconds); return sprintf( "%02d:%02d:%02d.%06ld", $time[2], $time[1], $time[0], $microseconds ); } sub logerr; sub compile { my %params = @_; *logerr = $params{do_logging} ? sub { my $msg = shift; warn _gf_time() . " Thread " . threads->tid() . ": $msg\n"; } : sub { }; } } { package FooObj; sub new { my $class = shift; bless {}, $class; }; sub foo_work { my $self = shift; # do some foo work LogFuncs::logerr($self); } } { package BarObj; sub new { my $class = shift; my $data = { fooObj => FooObj->new() }; bless $data, $class; } sub bar_work { my $self = shift; $self->{fooObj}->foo_work(); LogFuncs::logerr($self); } } my $do_logging = 0; GetOptions( "do_logging" => \$do_logging, ); LogFuncs::compile(do_logging => $do_logging); my $bar = BarObj->new(); LogFuncs::logerr("Created $bar"); $bar->bar_work();
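
    The question keeps everything in one file already; the remaining wish is to drop the LogFuncs:: qualifier. One hedged way to do that (a minimal, untested sketch, not necessarily the asker's intent) is to skip the nested package entirely and let compile() install logerr() into the current package via a glob assignment, so callers just write logerr(...). The thread id and the FooObj/BarObj packages are omitted here for brevity:

      #!/usr/bin/perl -w
      use strict;
      use Getopt::Long;
      use Time::HiRes qw( gettimeofday );

      sub logerr;    # forward declaration so bare logerr(...) calls compile cleanly

      # provide tcpdump style time stamp
      sub _gf_time {
          my ( $seconds, $microseconds ) = gettimeofday();
          my @time = localtime($seconds);
          return sprintf( "%02d:%02d:%02d.%06d", $time[2], $time[1], $time[0], $microseconds );
      }

      sub compile {
          my %params = @_;
          no warnings 'redefine';
          # installs the chosen implementation into *main::logerr
          *logerr = $params{do_logging}
              ? sub { my $msg = shift; warn _gf_time() . ": $msg\n"; }
              : sub { };
      }

      my $do_logging = 0;
      GetOptions( "do_logging" => \$do_logging );
      compile( do_logging => $do_logging );
      logerr("logging enabled");    # no-op unless --do_logging was passed

    FooObj and BarObj would then call plain logerr($self) instead of LogFuncs::logerr($self).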

    Read the article

  • Trying to change variables in a singleton using a method

    - by Johnny Cox
    I am trying to use a singleton to store variables that will be used across multiple view controllers. I need to be able to get the variables and also set them. How do I call a method in a singleton to change the variables stored in the singleton. total+=1079; [var setTotal:total]; where var is a static Singleton *var = nil; I need to update the total and send to the setTotal method inside the singleton. But when I do this the setTotal method never gets accessed. The get methods work but the setTotal method does not. Please let me know what should. Below is some of my source code // // Singleton.m // Rolo // // Created by on 6/28/12. // Copyright (c) 2012 Johnny Cox. All rights reserved. // #import "Singleton.h" @implementation Singleton @synthesize total,tax,final; #pragma mark Singleton Methods + (Singleton *)sharedManager { static Singleton *sharedInstance = nil; static dispatch_once_t onceToken; dispatch_once(&onceToken, ^{ sharedInstance = [[Singleton alloc] init]; // Do any other initialisation stuff here }); return sharedInstance; } +(void) setTotal:(double) tot { Singleton *shared = [Singleton sharedManager]; shared.total = tot; NSLog(@"hello"); } +(double) getTotal { Singleton *shared = [Singleton sharedManager]; NSLog(@"%f",shared.total); return shared.total; } +(double) getTax { Singleton *shared = [Singleton sharedManager]; NSLog(@"%f",shared.tax); return shared.tax; } @end // // Singleton.h // Rolo // // Created by on 6/28/12. // Copyright (c) 2012 Johnny Cox. All rights reserved. // #import <Foundation/Foundation.h> @interface Singleton : NSObject @property (nonatomic, assign) double total; @property (nonatomic, assign) double tax; @property (nonatomic, assign) double final; + (id)sharedManager; +(double) getTotal; +(void) setTotal; +(double) getTax; @end
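
    Two details in the quoted code may explain why setTotal: is never reached: the header declares +(void) setTotal; with no parameter, so it does not match the +(void) setTotal:(double)tot implementation, and the call site sends the message to var, a static Singleton pointer initialised to nil (messages to nil are silently ignored). A hedged sketch of a matching declaration and call, assuming the class-method style is kept:

      // Singleton.h -- declaration must carry the same selector as the implementation
      + (void)setTotal:(double)tot;

      // Call site -- class methods are sent to the class itself, not to an instance pointer
      total += 1079;
      [Singleton setTotal:total];
      NSLog(@"total is now %f", [Singleton getTotal]);

    Alternatively, the class-level getters and setters could be dropped entirely and callers could use [Singleton sharedManager].total directly, since total is already a property.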

    Read the article

  • Java language features which have no equivalent in C#

    - by jthg
    Having mostly worked with C#, I tend to think in terms of C# features which aren't available in Java. After working extensively with Java over the last year, I've started to discover Java features that I wish were in C#. Below is a list of the ones that I'm aware of. Can anyone think of other Java language features which a person with a C# background may not realize exists? The articles http://www.25hoursaday.com/CsharpVsJava.html and http://en.wikipedia.org/wiki/Comparison_of_Java_and_C_Sharp give a very extensive list of differences between Java and C#, but I wonder whether I missed anything in the (very) long articles. I can also think of one feature (covariant return type) which I didn't see mentioned in either article. Please limit answers to language or core library features which can't be effectively implemented by your own custom code or third party libraries. Covariant return type - a method can be overridden by a method which returns a more specific type. Useful when implementing an interface or extending a class and you want a method to override a base method, but return a type more specific to your class. Enums are classes - an enum is a full class in java, rather than a wrapper around a primitive like in .Net. Java allows you to define fields and methods on an enum. Anonymous inner classes - define an anonymous class which implements a method. Although most of the use cases for this in Java are covered by delegates in .Net, there are some cases in which you really need to pass multiple callbacks as a group. It would be nice to have the choice of using an anonymous inner class. Checked exceptions - I can see how this is useful in the context of common designs used with Java applications, but my experience with .Net has put me in a habit of using exceptions only for unrecoverable conditions. I.E. exceptions indicate a bug in the application and are only caught for the purpose of logging. I haven't quite come around to the idea of using exceptions for normal program flow. strictfp - Ensures strict floating point arithmetic. I'm not sure what kind of applications would find this useful. fields in interfaces - It's possible to declare fields in interfaces. I've never used this. static imports - Allows one to use the static methods of a class without qualifying it with the class name. I just realized today that this feature exists. It sounds like a nice convenience.
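
    To make the first bullet concrete for C# readers, here is a tiny illustration (class names invented for the example) of a covariant return type, plus a static import; both compile on Java 5 and later:

      // Covariant return: an override may narrow the declared return type.
      class Animal {
          Animal reproduce() { return new Animal(); }
      }

      class Sheep extends Animal {
          @Override
          Sheep reproduce() { return new Sheep(); }   // Sheep is-an Animal, so this is allowed
      }

      // Static import: call Math.max without the class qualifier.
      // import static java.lang.Math.max;
      // int larger = max(2, 3);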

    Read the article

  • Nested Sortable JQuery list doesn't work in IE while it does in FF

    - by Ben Fransen
    Hello everybody, while I use this site quite often as a resource for jQuery-related problems, I can't seem to find an answer this time, so here is my first post. At work I'm developing an information system for generating MS Word documents. One of the modules I'm working on defines a default chapter selection for the table of contents and selects texts for the given chapters. A user can submit new chapters, add them as child chapters to a parent if necessary, flag them as 'adopt in TOC' and link a default text (from another module) to the chapter. My chapter list is retrieved recursively from a MySQL table and could look something like this:

      <ul class="sortableChapterlist">
        <li>1</li>
        <li>2
          <ul class="sortableChapterlist">
            <li>2.1</li>
            <li>2.2</li>
            <li>2.3
              <ul class="sortableChapterlist">
                <li>2.3.1</li>
                <li>2.3.2</li>
              </ul>
            </li>
          </ul>
        </li>
      </ul>

    In Firefox the code works like a charm, but IE (7) doesn't seem to like it that much. I'm only able to drag the main chapters around correctly. When attempting to drag a subchapter, no matter its level, the corresponding main chapter lifts up with some of its children, never all of them. This is the jQuery code I'm using to accomplish the task:

      $(function(){
        $(".sortableChapterlist").sortable({
          opacity: 0.7,
          helper: 'clone',
          cursor: 'move'
        });
        $(".sortableChapterlist").selectable();
        $(".sortableChapterlist").disableSelection();
      });

    Does anybody have some ideas about this? I'm guessing IE somehow trips over the repeated class reference "chapter_list" in combination with jQuery trying to handle the draggable/sortable. Kind regards from Holland, Ben Fransen
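
    For what it's worth, jQuery UI's sortable has a connectWith option for linking several lists and an items option to restrict dragging to direct children; whether this cures the IE7 behaviour above is untested, but the usual setup for nested lists looks roughly like this (selectors taken from the markup above):

      $(function () {
        $(".sortableChapterlist").sortable({
          connectWith: ".sortableChapterlist",  // allow items to move between the nested lists
          items: "> li",                        // drag only direct children of each list
          opacity: 0.7,
          helper: "clone",
          cursor: "move"
        });
        $(".sortableChapterlist").disableSelection();
      });

    Calling selectable() and sortable() on the same elements is also worth reconsidering, since the two interactions can fight over the same mouse events.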

    Read the article

  • Filtering a dropdown in Angular IE11 issue

    - by Brian S.
    I have a requirement for a select html element that can be duplicated multiple times on a page. The options for these select elements all come from a master list. All of the select elements can only show all of the items in the master list that have not been selected in any of the other select elements unless they just were duplicated. So I wrote a custom filter to do this in Angular and it seems to work just fine provided you are not using IE11. In IE when you select a new item from a duplicated select element, it seems to select the option after the one you selected even though the model still has the correct one set. I realize this sounds convoluted, so I created a jFiddle example. Using IE 11 try these steps: Select Bender Click the duplicate link Select Fry Notice that the one that is selected is Leela but the model still has Fry (id:2) as the one selected Now if you do the same thing in Chrome everything works as expected. Can anyone tell me how I might get around this or what I might be doing wrong? Here is the relevant Angular code: myapp.controller('Ctrl', function ($scope) { $scope.selectedIds = [{}]; $scope.allIds = [{ name: 'Bender', value: 1}, {name: 'Fry', value: 2}, {name: 'Leela', value: 3 }]; $scope.dupDropDown = function(currentDD) { var newDD = angular.copy(currentDD); $scope.selectedIds.push(newDD); } }); angular.module('appFilters',[]).filter('ddlFilter', function () { return function (allIds, currentItem, selectedIds) { //console.log(currentItem); var listToReturn = allIds.filter(function (anIdFromMasterList) { if (currentItem.id == anIdFromMasterList.value) return true; var areThereAny = selectedIds.some(function (aSelectedId) { return aSelectedId.id == anIdFromMasterList.value; }); return !areThereAny; }); return listToReturn; } }); And here is the relevant HTML <div ng-repeat="aSelection in selectedIds "> <a href="#" ng-click="dupDropDown(aSelection)">Duplicate</a> <select ng-model="aSelection.id" ng-options="a.value as a.name for a in allIds | ddlFilter:aSelection:selectedIds"> <option value="">--Select--</option> </select> </div>

    Read the article

  • How Do I Prevent Rails From Treating Updated Nested Attributes Differently From New Nested Attribute

    - by James
    I am using rails3 beta3 and couchdb via couchrest. I am not using active record. I want to add multiple "Sections" to a "Guide" and add and remove sections dynamically via a little javascript. I have looked at all the screencasts by Ryan Bates and they have helped immensely. The only difference is that I want to save all the sections as an array of sections instead of individual sections. Basically like this: "sections" => [{"title" => "Foo1", "content" => "Bar1"}, {"title" => "Foo2", "content" => "Bar2"}] So, basically I need the params hash to look like that when the form is submitted. When I create my form I am doing the following: <%= form_for @guide, :url => { :action => "create" } do |f| %> <%= render :partial => 'section', :collection => @guide.sections %> <%= f.submit "Save" %> <% end %> And my section partial looks like this: <%= fields_for "sections[]", section do |guide_section_form| %> <%= guide_section_form.text_field :section_title %> <%= guide_section_form.text_area :content, :rows => 3 %> <% end %> Ok, so when I create the guide with sections, it is working perfectly as I would like. The params hash is giving me a sections array just like I would want. The problem comes when I want edit guide/sections and save them again because rails is inserting the id of the guide in the id and name of each form field, which is screwing up the params hash on form submission. Just to be clear, here is the raw form output for a new resource: <input type="text" size="30" name="sections[][section_title]" id="sections__section_title"> <textarea rows="3" name="sections[][content]" id="sections__content" cols="40"></textarea> And here is what it looks like when editing an existing resource: <input type="text" value="Foo1" size="30" name="sections[cd2f2759895b5ae6cb7946def0b321f1][section_title]" id="sections_cd2f2759895b5ae6cb7946def0b321f1_section_title"> <textarea rows="3" name="sections[cd2f2759895b5ae6cb7946def0b321f1][content]" id="sections_cd2f2759895b5ae6cb7946def0b321f1_content" cols="40">Bar1</textarea> How do I force rails to always use the new resource behavior and not automatically add the id to the name and value. Do I have to create a custom form builder? Is there some other trick I can do to prevent rails from putting the id of the guide in there? I have tried a bunch of stuff and nothing is working. Thanks in advance!

    Read the article

  • How do I prevent the concurrent execution of a javascript function?

    - by RyanV
    I am making a ticker similar to the "From the AP" one at The Huffington Post, using jQuery. The ticker rotates through a ul, either by user command (clicking an arrow) or by an auto-scroll. Each list-item is display:none by default. It is revealed by the addition of a "showHeadline" class which is display:list-item. HTML for the UL Looks like this: <ul class="news" id="news"> <li class="tickerTitle showHeadline">Test Entry</li> <li class="tickerTitle">Test Entry2</li> <li class="tickerTitle">Test Entry3</li> </ul> When the user clicks the right arrow, or the auto-scroll setTimeout goes off, it runs a tickForward() function: function tickForward(){ var $active = $('#news li.showHeadline'); var $next = $active.next(); if($next.length==0) $next = $('#news li:first'); $active.stop(true, true); $active.fadeOut('slow', function() {$active.removeClass('showHeadline');}); setTimeout(function(){$next.fadeIn('slow', function(){$next.addClass('showHeadline');})}, 1000); if(isPaused == true){ } else{ startScroll() } }; This is heavily inspired by Jon Raasch's A Simple jQuery Slideshow. Basically, find what's visible, what should be visible next, make the visible thing fade and remove the class that marks it as visible, then fade in the next thing and add the class that makes it visible. Now, everything is hunky-dory if the auto-scroll is running, kicking off tickForward() once every three seconds. But if the user clicks the arrow button repeatedly, it creates two negative conditions: Rather than advance quickly through the list for just the number of clicks made, it continues scrolling at a faster-than-normal rate indefinitely. It can produce a situation where two (or more) list items are given the .showHeadline class, so there's overlap on the list. I can see these happening (especially #2) because the tickForward() function can run concurrently with itself, producing different sets of $active and $next. So I think my question is: What would be the best way to prevent concurrent execution of the tickForward() method? Some things I have tried or considered: Setting a Flag: When tickForward() runs, it sets an isRunning flag to true, and sets it back to false right before it ends. The logic for the event handler is set to only call tickForward() if isRunning is false. I tried a simple implementation of this, and isRunning never appeared to be changed. The jQuery queue(): I think it would be useful to queue up the tickForward() commands, so if you clicked it five times quickly, it would still run as commanded but wouldn't run concurrently. However, in my cursory reading on the subject, it appears that a queue has to be attached to the object its queue applies to, and since my tickForward() method affects multiple lis, I don't know where I'd attach it.
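
    For option 1 (the flag), the usual pattern is to refuse to re-enter while an animation is running and to clear the flag only inside the callback of the last animation, so it stays set for the whole fade sequence; clearing the pending auto-scroll timeout on every manual click also removes the "scrolls faster forever" effect. A hedged sketch (isPaused and the 3-second interval are taken from or assumed from the post):

      var isTicking = false;
      var scrollTimer = null;

      function tickForward() {
          if (isTicking) return;          // ignore clicks while a transition is in flight
          isTicking = true;
          clearTimeout(scrollTimer);      // cancel any pending auto-scroll tick

          var $active = $('#news li.showHeadline');
          var $next = $active.next();
          if ($next.length === 0) $next = $('#news li:first');

          $active.fadeOut('slow', function () {
              $active.removeClass('showHeadline');
              $next.fadeIn('slow', function () {
                  $next.addClass('showHeadline');
                  isTicking = false;      // only now accept the next tick
                  if (!isPaused) scrollTimer = setTimeout(tickForward, 3000);
              });
          });
      }

    If the earlier flag attempt "never appeared to be changed", a likely cause is that the flag was declared inside tickForward(), so every call got a fresh copy; it has to live in an outer scope shared by all calls, as above.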

    Read the article

  • Paypal IPN Handler for Cart

    - by kimmothy16
    Hey everyone! I'm using a Paypal add to cart button for a simple ecommerce website. I had a Paypal IPN handler written in PHP that was successfully working for 'Buy it Now' buttons and purchasing one item at a time. I would run a database query each time to update the store's inventory to reflect the purchase. Now I'm upgrading this store to a 'cart' version so people could check out with multiple items at a time. Would anyone be able to tell me, in general, how my IPN handler would need to be altered to accommodate this? I'm unsure of what a response from Paypal looks like for a cart purchase as opposed to a buy it now purchase. Thanks, any help or examples of working IPN cart scripts would be very appreciated! My current code is below.. // Paypal POSTs HTML FORM variables to this page // we must post all the variables back to paypal exactly unchanged and add an extra parameter cmd with value _notify-validate // initialise a variable with the requried cmd parameter $req = 'cmd=_notify-validate'; // go through each of the POSTed vars and add them to the variable foreach ($_POST as $key => $value) { $value = urlencode(stripslashes($value)); $req .= "&$key=$value"; } // post back to PayPal system to validate $header .= "POST /cgi-bin/webscr HTTP/1.0\r\n"; $header .= "Content-Type: application/x-www-form-urlencoded\r\n"; $header .= "Content-Length: " . strlen($req) . "\r\n\r\n"; // In a live application send it back to www.paypal.com // but during development you will want to uswe the paypal sandbox // comment out one of the following lines $fp = fsockopen ('www.sandbox.paypal.com', 80, $errno, $errstr, 30); //$fp = fsockopen ('www.paypal.com', 80, $errno, $errstr, 30); // or use port 443 for an SSL connection //$fp = fsockopen ('ssl://www.paypal.com', 443, $errno, $errstr, 30); if (!$fp) { // HTTP ERROR } else { fputs ($fp, $header . $req); while (!feof($fp)) { $res = fgets ($fp, 1024); if (strcmp ($res, "VERIFIED") == 0) { $item_name = stripslashes($_POST['item_name']); $item_number = $_POST['item_number']; $item_id = $_POST['custom']; $payment_status = $_POST['payment_status']; $payment_amount = $_POST['mc_gross']; //full amount of payment. payment_gross in US $payment_currency = $_POST['mc_currency']; $txn_id = $_POST['txn_id']; //unique transaction id $receiver_email = $_POST['receiver_email']; $payer_email = $_POST['payer_email']; $size = $_POST['option_selection1']; $item_id = $_POST['item_id']; $business = $_POST['business']; if ($payment_status == 'Completed') { // UPDATE THE DATABASE }
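
    As far as I know (please verify against PayPal's current IPN documentation), a cart checkout posts the line items back as numbered variables -- num_cart_items plus item_name1, item_number1, quantity1, and so on -- rather than the single item_name/item_number fields a Buy Now button sends, so the inventory update becomes a loop over those indices. A hedged sketch of how the VERIFIED branch might change:

      if (strcmp($res, "VERIFIED") == 0 && $_POST['payment_status'] == 'Completed') {
          // Cart IPNs number each line item; these variable names should be
          // double-checked against PayPal's IPN variable reference.
          $num_items = isset($_POST['num_cart_items']) ? (int) $_POST['num_cart_items'] : 1;

          for ($i = 1; $i <= $num_items; $i++) {
              $item_name   = isset($_POST["item_name$i"])   ? stripslashes($_POST["item_name$i"]) : '';
              $item_number = isset($_POST["item_number$i"]) ? $_POST["item_number$i"] : '';
              $quantity    = isset($_POST["quantity$i"])    ? (int) $_POST["quantity$i"] : 1;

              // UPDATE THE DATABASE for this line item,
              // e.g. decrement stock for $item_number by $quantity.
          }
      }

    The transaction-level fields (txn_id, mc_gross, payer_email, etc.) still arrive once per IPN, so the existing reads for those can stay as they are.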

    Read the article

  • Silverlight ViewBase in separate assembly - possible?

    - by Mark
    I have all my views in a project inheriting from a ViewBase class that inherits from UserControl. In my XAML I reference it thus: <f:ViewBase x:Class="Forte.UI.Modules.Configure.Views.AddNewEmployeeView" xmlns:f="clr-namespace:Forte.UI.Modules.Configure.Views" xmlns="http://schemas.microsoft.com/winfx/2006/xaml/presentation" xmlns:x="http://schemas.microsoft.com/winfx/2006/xaml" It works fine. Now I have moved the ViewBase to another project (so I can refernce it from multiple projects) so I reference it like: <f:ViewBase x:Class="Forte.UI.Modules.Configure.Views.AddNewEmployeeView" xmlns:f="clr-namespace:Forte.UI.Modules.Common.Views;assembly=Forte.UI.Modules.Common" xmlns="http://schemas.microsoft.com/winfx/2006/xaml/presentation" xmlns:x="http://schemas.microsoft.com/winfx/2006/xaml" This works fine when I run from the IDE but when I run the same sln from MSBuild it gives a warning: "H:\dev\ExternalCopy\Code\UI\Modules\Configure\Forte.UI.Modules.Configure.csproj" (default target) (10:12) - (ValidateXaml target) - H:\dev\ExternalCopy\Code\UI\Modules\Configure\Views\AddNewEmployee\AddNewEmployeeView.xaml(1,2,1,2): warning : The tag 'ViewBase' does not exist in XML namespace 'clr-namespace:Forte.UI.Modules.Common.Views;assembly=Forte.UI.Modules.Common'. Then fails with: "H:\dev\ExternalCopy\Code\UI\Modules\Configure\Forte.UI.Modules.Configure.csproj" (default target) (10:12) - (ValidateXaml target) - C:\Program Files\MSBuild\Microsoft\Silverlight\v3.0\Microsoft.Silverlight.Common.targets(210,9): error MSB4018: The "ValidateXaml" task failed unexpectedly.\r C:\Program Files\MSBuild\Microsoft\Silverlight\v3.0\Microsoft.Silverlight.Common.targets(210,9): er ror MSB4018: System.NullReferenceException: Object reference not set to an instance of an object.\r C:\Program Files\MSBuild\Microsoft\Silverlight\v3.0\Microsoft.Silverlight.Common.targets(210,9): er ror MSB4018: at MS.MarkupCompiler.ValidationPass.ValidateXaml(String fileName, Assembly[] assemb lies, Assembly callingAssembly, TaskLoggingHelper log, Boolean shouldThrow)\r C:\Program Files\MSBuild\Microsoft\Silverlight\v3.0\Microsoft.Silverlight.Common.targets(210,9): er ror MSB4018: at Microsoft.Silverlight.Build.Tasks.ValidateXaml.XamlValidator.Execute()\r C:\Program Files\MSBuild\Microsoft\Silverlight\v3.0\Microsoft.Silverlight.Common.targets(210,9): er ror MSB4018: at Microsoft.Silverlight.Build.Tasks.ValidateXaml.XamlValidator.Execute()\r C:\Program Files\MSBuild\Microsoft\Silverlight\v3.0\Microsoft.Silverlight.Common.targets(210,9): er ror MSB4018: at Microsoft.Silverlight.Build.Tasks.ValidateXaml.Execute()\r C:\Program Files\MSBuild\Microsoft\Silverlight\v3.0\Microsoft.Silverlight.Common.targets(210,9): er ror MSB4018: at Microsoft.Build.BuildEngine.TaskEngine.ExecuteInstantiatedTask(EngineProxy engin eProxy, ItemBucket bucket, TaskExecutionMode howToExecuteTask, ITask task, Boolean& taskResult) Any ideas what might be causing this behaviour? Using Silverlight 3 Here is a cut down version of the MSBuild file that fails to build the sln that builds fine in the IDE (sorry couldn't get it to format here): <?xml version="1.0" encoding="utf-8" ?> <Project xmlns="http://schemas.microsoft.com/developer/msbuild/2003" DefaultTargets="Compile"> <ItemGroup> <ProjectToBuild Include="..\UI\Forte.UI.sln"> <Properties>Configuration=Debug</Properties> </ProjectToBuild> </ItemGroup> <Target Name="Compile"> <MSBuild Projects="@(ProjectToBuild)"></MSBuild> </Target> </Project> Thanks for any help!

    Read the article

  • Using XMLDecoder to cast Encoded XML to List<>

    - by Ender
    I am writing an application that reads in a large number of basic user details in the following format; once read in it then allows the user to search for a user's details using their email: NAME ROLE EMAIL --------------------------------------------------- Joe Bloggs Manager [email protected] John Smith Consultant [email protected] Alan Wright Tester [email protected] ... The problem I am suffering is that I need to store a large number of details of all people that have worked at the company. The file containing these details will be written on a yearly basis simply for reporting purposes, but the program will need to be able to access these details quickly. The way I aim to access these files is to have a program that asks the user for the name of the unique email of the member of staff and for the program to then return the name and the role from that line of the file. I've played around with text files, but am struggling with how I would handle multiple columns of data when it comes to searching this large file. What is the best format to store such data in? A text file? XML? The size doesn't bother me, but I'd like to be able to search it as quickly as possible. The file will need to contain a lot of entries, probably over the 10K mark over time. EDIT: I've decided to go with the XML serialisation method. I've managed to get the code for Encoding working perfectly, but the Decoding code below does not work. XMLDecoder d = new XMLDecoder( new BufferedInputStream(new FileInputStream("data.xml"))); List<Employee> list = (List<Employee>) d.readObject(); d.close(); for(Employee x : list) { if(x.getEmail().equals(userInput)) { // do stuff } } When the program hits List<Employee> list = (List<Employee>) d.readObject(); an exception is thrown claiming that "Employee cannot be cast to java.util.List". I've added a bounty to this and anyone that can help me solve this problem once and for all will get lots of lovely points. EDIT 2: I've looked a bit more into the problem and have come across Serialization as a potential answer. If anyone can look into this for me as I've no experience with Serialization or Deserialization I'd be very grateful. It can provide an Object with no problems whatsoever, but I really need to return it in the same format as it went in (List). EDIT 3: Ugh, this problem is really starting to drive me crazy and to be honest I'm starting to think that it's an unsolvable problem. If possible, could someone take a look at the code and help provide a solution for me?
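
    The "Employee cannot be cast to java.util.List" message usually means the first object in data.xml is not the List itself (for example, each Employee was written with its own writeObject call, or an array was written). Below is a hedged sketch of a round trip where the whole list is written as one object; it assumes Employee is a JavaBean with a public no-argument constructor and getters/setters, which is what XMLEncoder records, and EmployeeStore is just an illustrative wrapper class:

      import java.beans.XMLDecoder;
      import java.beans.XMLEncoder;
      import java.io.*;
      import java.util.ArrayList;
      import java.util.List;

      public class EmployeeStore {
          public static void main(String[] args) throws IOException {
              List<Employee> employees = new ArrayList<Employee>();
              // ... populate the list ...

              // Write the entire list with a single writeObject call.
              XMLEncoder enc = new XMLEncoder(
                      new BufferedOutputStream(new FileOutputStream("data.xml")));
              enc.writeObject(employees);
              enc.close();

              // The first readObject() then returns that same list.
              XMLDecoder dec = new XMLDecoder(
                      new BufferedInputStream(new FileInputStream("data.xml")));
              @SuppressWarnings("unchecked")
              List<Employee> loaded = (List<Employee>) dec.readObject();
              dec.close();

              for (Employee x : loaded) {
                  // if (x.getEmail().equals(userInput)) { ... }
              }
          }
      }

    For 10K+ records that are mostly looked up by email, loading the list once and dropping it into a HashMap<String, Employee> keyed by email would also make the searches effectively instant.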

    Read the article

  • C# and NpgsqlDataAdapter returning a single string instead of a data table

    - by tme321
    I have a postgresql db and a C# application to access it. I'm having a strange error with values I return from a NpgsqlDataAdapter.Fill command into a DataSet. I've got this code: NpgsqlCommand n = new NpgsqlCommand(); n.Connection = connector; // a class member NpgsqlConnection DataSet ds = new DataSet(); DataTable dt = new DataTable(); // DBTablesRef are just constants declared for // the db table names and columns ArrayList cols = new ArrayList(); cols.Add(DBTablesRef.all); //all is just * ArrayList idCol = new ArrayList(); idCol.Add(DBTablesRef.revIssID); ArrayList idVal = new ArrayList(); idVal.Add(idNum); // a function parameter // Select builder and Where builder are just small // functions that return an sql statement based // on the parameters. n is passed to the where // builder because the builder uses named // parameters and sets them in the NpgsqlCommand // passed in String select = SelectBuilder(DBTablesRef.revTableName, cols) + WhereBuilder(n,idCol, idVal); n.CommandText = select; try { NpgsqlDataAdapter da = new NpgsqlDataAdapter(n); ds.Reset(); // filling DataSet with result from NpgsqlDataAdapter da.Fill(ds); // C# DataSet takes multiple tables, but only the first is used here dt = ds.Tables[0]; } catch (Exception e) { Console.WriteLine(e.ToString()); } So my problem is this: the above code works perfectly, just like I want it to. However, if instead of doing a select on all (*) if I try to name individual columns to return from the query I get the information I asked for, but rather than being split up into seperate entries in the data table I get a string in the first index of the data table that looked something like: "(0,5,false,Bob Smith,7)" And the data is correct, I would be expecting 0, then 5, then a boolean, then some text etc. But I would (obviously) prefer it to not be returned as just one big string. Anyone know why if I do a select on * I get a datatable as expected, but if I do a select on specific columns I get a data table with one entry that is the string of the values I'm asking for?
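
    That "(0,5,false,Bob Smith,7)" shape is what PostgreSQL produces when the column list ends up wrapped in parentheses: (col1, col2, ...) is a row constructor, so the server returns one composite column instead of separate columns, and Npgsql hands it back as a single string. It is worth checking what SelectBuilder emits when it is given named columns. A minimal illustration (table and column names invented):

      -- Comes back as ONE composite column, e.g. "(1,Bob Smith)"
      SELECT (id, name) FROM employees;

      -- Comes back as two ordinary columns, which is what the DataTable code expects
      SELECT id, name FROM employees;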

    Read the article

  • Project References DLL version hell

    - by Mr Shoubs
    We're having problems getting Visual Studio to pick up the latest version of a DLL from one of our projects. We have multiple class library projects (e.g. BusinessLogic, ReportData) and a number of web services, and each has a reference to a Connectivity DLL we've written (this reference to the Connectivity DLL is the problem). We always point references at the DLL in the bin/debug folder (which is where we always build to for any given project), and all custom DLL references have CopyLocal = True and SpecificVersion = False. ReportData has a reference to BusinessLogic (which also has a reference to Connectivity - I don't see why this should cause a problem, but it seemed worth mentioning).

    The weird thing is, when you click "Add Reference", browse to Connectivity/bin/debug and hover the mouse over the DLL file, the correct (latest) version is shown (version and file version are always incremented together), but when you click OK, a previous version number is pulled through. Even the current project's debug folder (where Copy Local would put the DLL after compiling) shows the latest version number. Nowhere can I find the previous version of the DLL outside of Visual Studio, yet that project's references list the old version - even though the path is correct. I'm at a loss as to where it might be getting the old versions from, or even why it wants that one. This is possibly the most frustrating problem I have ever come across. Does anyone know how to ensure the latest version is pulled through (preferably automatically or on compile)?

    EDIT: Although it's not exactly the scenario I'm dealing with, I was reading this article and it mentions the CLR ignoring revision numbers. Understandable (even though this hasn't been a problem before - we're on revision 39), so I updated the build number; that still didn't work. In a vain attempt I thought I would bump the minor version number and see if that made any difference. I'm not saying this is the answer, as I have to check quite a few things first, but on the face of it, this seems to have solved my problem...

    Further edit: In other class libraries this seems to have solved the problem, but in a test Windows application it still pulls a previous version through :( If I increment the minor version number again, the same problem comes back and I am left with the wrong version being pulled through.

    Further edit: I created an entirely new project, added a reference and still had the exact same problem. This suggests the problem is restricted to the project I am referencing. Wish I knew why! Has anyone had this problem before, and do you know how to get around it? HELP!

    Read the article

  • Modeling distribution of performance measurements

    - by peterchen
    How would you mathematically model the distribution of repeated real life performance measurements - "Real life" meaning you are not just looping over the code in question, but it is just a short snippet within a large application running in a typical user scenario? My experience shows that you usually have a peak around the average execution time that can be modeled adequately with a Gaussian distribution. In addition, there's a "long tail" containing outliers - often with a multiple of the average time. (The behavior is understandable considering the factors contributing to first execution penalty). My goal is to model aggregate values that reasonably reflect this, and can be calculated from aggregate values (like for the Gaussian, calculate mu and sigma from N, sum of values and sum of squares). In other terms, number of repetitions is unlimited, but memory and calculation requirements should be minimized. A normal Gaussian distribution can't model the long tail appropriately and will have the average biased strongly even by a very small percentage of outliers. I am looking for ideas, especially if this has been attempted/analysed before. I've checked various distributions models, and I think I could work out something, but my statistics is rusty and I might end up with an overblown solution. Oh, a complete shrink-wrapped solution would be fine, too ;) Other aspects / ideas: Sometimes you get "two humps" distributions, which would be acceptable in my scenario with a single mu/sigma covering both, but ideally would be identified separately. Extrapolating this, another approach would be a "floating probability density calculation" that uses only a limited buffer and adjusts automatically to the range (due to the long tail, bins may not be spaced evenly) - haven't found anything, but with some assumptions about the distribution it should be possible in principle. Why (since it was asked) - For a complex process we need to make guarantees such as "only 0.1% of runs exceed a limit of 3 seconds, and the average processing time is 2.8 seconds". The performance of an isolated piece of code can be very different from a normal run-time environment involving varying levels of disk and network access, background services, scheduled events that occur within a day, etc. This can be solved trivially by accumulating all data. However, to accumulate this data in production, the data produced needs to be limited. For analysis of isolated pieces of code, a gaussian deviation plus first run penalty is ok. That doesn't work anymore for the distributions found above. [edit] I've already got very good answers (and finally - maybe - some time to work on this). I'm starting a bounty to look for more input / ideas.
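
    One concrete candidate that covers the long tail with the same bookkeeping as the Gaussian is a log-normal fit: keep N, the sum of ln x and the sum of (ln x)^2, and estimate (this is a sketch of one possible model, not a claim that it fits every workload; the "two humps" case would still need a mixture):

      mu_hat     = (1/N) * sum( ln x_i )
      sigma2_hat = (1/N) * sum( (ln x_i)^2 ) - mu_hat^2

      P(X > t)   = 1 - Phi( (ln t - mu_hat) / sqrt(sigma2_hat) )

    where Phi is the standard normal CDF. Guarantee-style statements ("only 0.1% of runs exceed 3 seconds") then come straight from the tail formula, and the memory cost stays at three accumulators per measured code path. Keeping the plain sums of x and x^2 alongside still gives the ordinary mean for reporting.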

    Read the article

  • How can I easily maintain a cross-file JavaScript Library Development Environment

    - by John
    I have been developing a new JavaScript application which is rapidly growing in size. My entire JavaScript Application has been encapsulated inside a single function, in a single file, in a way like this: (function(){ var uniqueApplication = window.uniqueApplication = function(opts){ if (opts.featureOne) { this.featureOne = new featureOne(opts.featureOne); } if (opts.featureTwo) { this.featureTwo = new featureTwo(opts.featureTwo); } if (opts.featureThree) { this.featureThree = new featureThree(opts.featureThree); } }; var featureOne = function(options) { this.options = options; }; featureOne.prototype.myFeatureBehavior = function() { //Lots of Behaviors }; var featureTwo = function(options) { this.options = options; }; featureTwo.prototype.myFeatureBehavior = function() { //Lots of Behaviors }; var featureThree = function(options) { this.options = options; }; featureThree.prototype.myFeatureBehavior = function() { //Lots of Behaviors }; })(); In the same file after the anonymous function and execution I do something like this: (function(){ var instanceOfApplication = new uniqueApplication({ featureOne:"dataSource", featureTwo:"drawingCanvas", featureThree:3540 }); })(); Before uploading this software online I pass my JavaScript file, and all it's dependencies, into Google Closure Compiler, using just the default Compression, and then I have one nice JavaScript file ready to go online for production. This technique has worked marvelously for me - as it has created only one global footprint in the DOM and has given me a very flexible framework to grow each additional feature of the application. However - I am reaching the point where I'd really rather not keep this entire application inside one JavaScript file. I'd like to move from having one large uniqueApplication.js file during development to having a separate file for each feature in the application, featureOne.js - featureTwo.js - featureThree.js Once I have completed offline development testing, I would then like to use something, perhaps Google Closure Compiler, to combine all of these files together - however I want these files to all be compiled inside of that scope, as they are when I have them inside one file - and I would like for them to remain in the same scope during offline testing too. I see that Google Closure Compiler supports an argument for passing in modules but I haven't really been able to find a whole lot of information on doing something like this. Anybody have any idea how this could be accomplished - or any suggestions on a development practice for writing a single JavaScript Library across multiple files that still only leaves one footprint on the DOM?
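
    One pattern that keeps a single global while splitting development across files is to have every feature file augment one shared namespace object; each file is then valid on its own (so offline testing can just load the files in order with separate script tags), and the build step can pass all of them to Closure Compiler, which accepts multiple inputs in order. The trade-off is that the constructors hang off the one namespace object instead of being private to a single closure. A rough sketch using the names from the post:

      // featureOne.js -- featureTwo.js and featureThree.js follow the same shape
      window.uniqueApplication = window.uniqueApplication || {};
      (function (app) {
          // everything in here stays private to this file
          function FeatureOne(options) {
              this.options = options;
          }
          FeatureOne.prototype.myFeatureBehavior = function () {
              // lots of behaviors
          };
          app.FeatureOne = FeatureOne;   // expose only what other files need
      })(window.uniqueApplication);

      // main.js -- wires the features together; still only one global in the DOM
      (function (app) {
          var instance = new app.FeatureOne({ dataSource: "dataSource" });
      })(window.uniqueApplication);

    If everything truly must share one function scope, the alternative is a build script that wraps the concatenated feature files in a single (function(){ ... })() before compilation, at the cost of not being able to test the unconcatenated files directly.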

    Read the article

  • Singleton: How should it be used

    - by Loki Astari
    Edit: From another question I provided an answer that has links to a lot of questions/answers about singeltons: More info about singletons here: So I have read the thread Singletons: good design or a crutch? And the argument still rages. I see Singletons as a Design Pattern (good and bad). The problem with Singleton is not the Pattern but rather the users (sorry everybody). Everybody and their father thinks they can implement one correctly (and from the many interviews I have done, most people can't). Also because everybody thinks they can implement a correct Singleton they abuse the Pattern and use it in situations that are not appropriate (replacing global variables with Singletons!). So the main questions that need to be answered are: When should you use a Singleton How do you implement a Singleton correctly My hope for this article is that we can collect together in a single place (rather than having to google and search multiple sites) an authoritative source of when (and then how) to use a Singleton correctly. Also appropriate would be a list of Anti-Usages and common bad implementations explaining why they fail to work and for good implementations their weaknesses. So get the ball rolling: I will hold my hand up and say this is what I use but probably has problems. I like "Scott Myers" handling of the subject in his books "Effective C++" Good Situations to use Singletons (not many): Logging frameworks Thread recycling pools /* * C++ Singleton * Limitation: Single Threaded Design * See: http://www.aristeia.com/Papers/DDJ_Jul_Aug_2004_revised.pdf * For problems associated with locking in multi threaded applications * * Limitation: * If you use this Singleton (A) within a destructor of another Singleton (B) * This Singleton (A) must be fully constructed before the constructor of (B) * is called. */ class MySingleton { private: // Private Constructor MySingleton(); // Stop the compiler generating methods of copy the object MySingleton(MySingleton const& copy); // Not Implemented MySingleton& operator=(MySingleton const& copy); // Not Implemented public: static MySingleton& getInstance() { // The only instance // Guaranteed to be lazy initialized // Guaranteed that it will be destroyed correctly static MySingleton instance; return instance; } }; OK. Lets get some criticism and other implementations together. :-)

    Read the article

  • Problem with Keyboard hook proc

    - by Steve
    The background: My form has a TWebBrowser. I want to close the form with ESC but the TWebBrowser eats the keystrokes - so I decided to go with a keyboard hook. The problem is that the Form can be open in multiple instances at the same time. No matter what I do, in some situations, if there are two instances open of my form, closing one of them closes the other as well. I've attached some sample code. Any ideas on what causes the issue? var EmailDetailsForm: TEmailDetailsForm; KeyboardHook: HHook; implementation function KeyboardHookProc(Code: Integer; wParam, lParam: LongInt): LongInt; stdcall; var hWnd: THandle; I: Integer; F: TForm; begin if Code < 0 then Result := CallNextHookEx(KeyboardHook, Code, wParam, lParam) else begin case wParam of VK_ESCAPE: if (lParam and $80000000) <> $00000000 then begin hWnd := GetForegroundWindow; for I := 0 to Screen.FormCount - 1 do begin F := Screen.Forms[I]; if F.Handle = hWnd then if F is TEmailDetailsForm then begin PostMessage(hWnd, WM_CLOSE, 0, 0); Result := HC_SKIP; break; end; end; //for end; //if else Result := CallNextHookEx(KeyboardHook, Code, wParam, lParam); end; //case end; //if end; function TEmailDetailsForm.CheckInstance: Boolean; var I, J: Integer; F: TForm; begin Result := false; J := 0; for I := 0 to Screen.FormCount - 1 do begin F := Screen.Forms[I]; if F is TEmailDetailsForm then begin J := J + 1; if J = 2 then begin Result := true; break; end; end; end; end; procedure TEmailDetailsForm.FormCreate(Sender: TObject); begin if not CheckInstance then KeyboardHook := SetWindowsHookEx(WH_KEYBOARD, @KeyboardHookProc, 0, GetCurrentThreadId()); end; procedure TEmailDetailsForm.FormDestroy(Sender: TObject); begin if not CheckInstance then UnHookWindowsHookEx(KeyboardHook); end;

    Read the article

  • Data denormalization and C# objects DB serialization

    - by Robert Koritnik
    I'm using a DB table with various different entities. This means that I can't have an arbitrary number of fields in it to save all kinds of different entities. I want instead save just the most important fields (dates, reference IDs - kind of foreign key to various other tables, most important text fields etc.) and an additional text field where I'd like to store more complete object data. the most obvious solution would be to use XML strings and store those. The second most obvious choice would be JSON, that usually shorter and probably also faster to serialize/deserialize... And is probably also faster. But is it really? My objects also wouldn't need to be strictly serializable, because JsonSerializer is usually able to serialize anything. Even anonymous objects, that may as well be used here. What would be the most optimal solution to solve this problem? Additional info My DB is highly normalised and I'm using Entity Framework, but for the purpose of having external super-fast fulltext search functionality I'm sacrificing a bit DB denormalisation. Just for the info I'm using SphinxSE on top of MySql. Sphinx would return row IDs that I would use to fast query my index optimised conglomerate table to get most important data from it much much faster than querying multiple tables all over my DB. My table would have columns like: RowID (auto increment) EntityID (of the actual entity - but not directly related because this would have to point to different tables) EntityType (so I would be able to get the actual entity if needed) DateAdded (record timestamp when it's been added into this table) Title Metadata (serialized data related to particular entity type) This table would be indexed with SPHINX indexer. When I would search for data using this indexer I would provide a series of EntityIDs and a limit date. Indexer would have to return a very limited paged amount of RowIDs ordered by DateAdded (descending). I would then just join these RowIDs to my table and get relevant results. So this won't actually be full text search but a filtering search. Getting RowIDs would be very fast this way and getting results back from the table would be much faster than comparing EntityIDs and DateAdded comparisons even though they would be properly indexed.

    Read the article

  • Accurately display upload progress in Silverlight upload

    - by Matt
    I'm trying to debug a file upload / download issue I'm having. I've got a Silverlight file uploader, and to transmit the files I make use of the HttpWebRequest class. So I create a connection to my file upload handler on the server and begin transmitting. While a file uploads I keep track of total bytes written to the RequestStream so I can figure out a percentage. Now working at home I've got a rather slow connection, and I think Silverlight, or the browser, is lying to me. It seems that my upload progress logic is inaccurate. When I do multiple file uploads (24 images of 3-6mb big in my testing), the logic reports the files finish uploading but I believe that it only reflects the progress of written bytes to the RequestStream, not the actual amount of bytes uploaded. What is the most accurate way to measure upload progress. Here's the logic I'm using. public void Upload() { if( _TargetFile != null ) { Status = FileUploadStatus.Uploading; Abort = false; long diff = _TargetFile.Length - BytesUploaded; UriBuilder ub = new UriBuilder( App.siteUrl + "upload.ashx" ); bool complete = diff <= ChunkSize; ub.Query = string.Format( "{3}name={0}&StartByte={1}&Complete={2}", fileName, BytesUploaded, complete, string.IsNullOrEmpty( ub.Query ) ? "" : ub.Query.Remove( 0, 1 ) + "&" ); HttpWebRequest webrequest = ( HttpWebRequest ) WebRequest.Create( ub.Uri ); webrequest.Method = "POST"; webrequest.BeginGetRequestStream( WriteCallback, webrequest ); } } private void WriteCallback( IAsyncResult asynchronousResult ) { HttpWebRequest webrequest = ( HttpWebRequest ) asynchronousResult.AsyncState; // End the operation. Stream requestStream = webrequest.EndGetRequestStream( asynchronousResult ); byte[] buffer = new Byte[ 4096 ]; int bytesRead = 0; int tempTotal = 0; Stream fileStream = _TargetFile.OpenRead(); fileStream.Position = BytesUploaded; while( ( bytesRead = fileStream.Read( buffer, 0, buffer.Length ) ) != 0 && tempTotal + bytesRead < ChunkSize && !Abort ) { requestStream.Write( buffer, 0, bytesRead ); requestStream.Flush(); BytesUploaded += bytesRead; tempTotal += bytesRead; int percent = ( int ) ( ( BytesUploaded / ( double ) _TargetFile.Length ) * 100 ); UploadPercent = percent; if( UploadProgressChanged != null ) { UploadProgressChangedEventArgs args = new UploadProgressChangedEventArgs( percent, bytesRead, BytesUploaded, _TargetFile.Length, _TargetFile.Name ); SmartDispatcher.BeginInvoke( () => UploadProgressChanged( this, args ) ); } } //} // only close the stream if it came from the file, don't close resizestream so we don't have to resize it over again. fileStream.Close(); requestStream.Close(); webrequest.BeginGetResponse( ReadCallback, webrequest ); }

    Read the article

  • Create a screen like the iPhone home screen with Scrollview and Buttons

    - by Anthony Chan
    Hi, I'm working on a project and need to create a screen similar to the iPhone home screen: A scrollview with multiple pages A bunch of icons When not in edit mode, swipe through different pages (even I started the touch on an icon) When not in edit mode, tap an icon to do something When in edit mode, drag the icon to swap places, and even swap to different pages When in edit mode, tap an icon to remove it Previously I read from several forums that I have to subclass UIScrollview in order to have touch input for the UIViews on top of it. So I subclassed it overriding the methods to handle touches: - (void)touchesBegan:(NSSet *)touches withEvent:(UIEvent *)event { //If not dragging, send event to next responder if (!self.dragging) [self.nextResponder touchesBegan:touches withEvent:event]; else [super touchesBegan:touches withEvent:event]; } In general I've override the touchesBegan:, touchesMoved: and touchesEnded: methods similarly. Then in the view controller, I added to following code: - (void)touchesBegan:(NSSet *)touches withEvent:(UIEvent *)event { UITouch *touch = [touches anyObject]; UIView *hitView = (UIView *)touch.view; if ([hitView isKindOfClass:[UIView class]]) { [hitView doSomething]; NSLog(@"touchesBegan"); } } - (void)touchesMoved:(NSSet *)touches withEvent:(UIEvent *)event { // Some codes to move the icons NSLog(@"touchesMoved"); } - (void)touchesEnded:(NSSet *)touches withEvent:(UIEvent *)event { NSLog(@"touchesEnded"); } When I run the app, I have the touchesBegan method detected correctly. However, when I tried to drag the icon, the icon just moved a tiny bit and then the page started to scroll. In console, it logged with 2 or 3 "touchesMoved" message only. However, I learned from another project that it should logged tonnes of "touchesMoved" message as long as I'm still dragging on the screen. (I'm suspecting I have the delaysContentTouches set to YES, so it delays a little bit when I tried to drag the icons. After that minor delay, it sends to signal back to the scrollview to scroll through the page. Please correct me if I'm wrong.) So if any help on the code to perform the above tasks would be greatly appreciated. I've stuck in this place for nearly a week with no hope. Thanks a lot.

    Read the article

  • Objective-c - How to serialize audio file into small packets that can be played?

    - by vfn
    Hi there, So, I would like to get a sound file and convert it in packets, and send it to another computer. I would like that the other computer be able to play the packets as they arrive. I am using AVAudioPlayer to try to play this packets, but I couldn't find a proper way to serialize the data on the peer1 that the peer2 can play. The scenario is, peer1 has a audio file, split the audio file in many small packets, put them on a NSData and send them to peer2. Peer 2 receive the packets and play one by one, as they arrive. Does anyone have know how to do this? or even if it is possible? EDIT: Here it is some piece of code to illustrate what I would like to achieve. // This code is part of the peer1, the one who sends the data - (void)sendData { int packetId = 0; NSString *soundFilePath = [[NSBundle mainBundle] pathForResource:@"myAudioFile" ofType:@"wav"]; NSData *soundData = [[NSData alloc] initWithContentsOfFile:soundFilePath]; NSMutableArray *arraySoundData = [[NSMutableArray alloc] init]; // Spliting the audio in 2 pieces // This is only an illustration // The idea is to split the data into multiple pieces // dependin on the size of the file to be sent NSRange soundRange; soundRange.length = [soundData length]/2; soundRange.location = 0; [arraySoundData addObject:[soundData subdataWithRange:soundRange]]; soundRange.length = [soundData length]/2; soundRange.location = [soundData length]/2; [arraySoundData addObject:[soundData subdataWithRange:soundRange]]; for (int i=0; i // This is the code on peer2 that would receive an play the piece of audio on each packet - (void) receiveData:(NSData *)data { NSKeyedUnarchiver* unarchiver = [[NSKeyedUnarchiver alloc] initForReadingWithData:data]; if ([unarchiver containsValueForKey:PACKET_ID]) NSLog(@"DECODED PACKET_ID: %i", [unarchiver decodeIntForKey:PACKET_ID]); if ([unarchiver containsValueForKey:PACKET_SOUND_DATA]) { NSLog(@"DECODED sound"); NSData *sound = (NSData *)[unarchiver decodeObjectForKey:PACKET_SOUND_DATA]; if (sound == nil) { NSLog(@"sound is nil!"); } else { NSLog(@"sound is not nil!"); AVAudioPlayer *audioPlayer = [AVAudioPlayer alloc]; if ([audioPlayer initWithData:sound error:nil]) { [audioPlayer prepareToPlay]; [audioPlayer play]; } else { [audioPlayer release]; NSLog(@"Player couldn't load data"); } } } [unarchiver release]; } So, here is what I am trying to achieve...so, what I really need to know is how to create the packets, so peer2 can play the audio. It would be a kind of streaming. Yes, for now I am not worried about the order that the packet are received or played...I only need to get the sound sliced and them be able to play each piece, each slice, without need to wait for the whole file be received by peer2. Thanks!

    Read the article

  • Better way to "find parent" and if statements

    - by Luke Abell
    I can't figure out why this isn't working: $(document).ready(function() { if ($('.checkarea.unchecked').length) { $(this).parent().parent().parent().parent().parent().parent().parent().removeClass('checked').addClass('unchecked'); } else { $(this).parent().parent().parent().parent().parent().parent().parent().removeClass('unchecked').addClass('checked'); } }); Here's a screenshot of the HTML structure: http://cloud.lukeabell.com/JV9N (Updated with correct screenshot) Also, there has to be a better way to find the parent of the item (there are multiple of these elements on the page, so I need it to only effect the one that is unchecked) Here's some other code that is involved that might be important: $('.toggle-open-area').click(function() { if($(this).parent().parent().parent().parent().parent().parent().parent().hasClass('open')) { $(this).parent().parent().parent().parent().parent().parent().parent().removeClass('open').addClass('closed'); } else { $(this).parent().parent().parent().parent().parent().parent().parent().removeClass('closed').addClass('open'); } }); $('.checkarea').click(function() { if($(this).hasClass('unchecked')) { $(this).removeClass('unchecked').addClass('checked'); $(this).parent().parent().parent().parent().parent().parent().parent().removeClass('open').addClass('closed'); } else { $(this).removeClass('checked').addClass('unchecked'); $(this).parent().parent().parent().parent().parent().parent().parent().removeClass('closed').addClass('open'); } }); (Very open to improvements for that section as well) Thank you so much! Here's a link to where this is all happening: http://linkedin.guidemytech.com/sign-up-for-linkedin-step-2-set-up-linkedin-student/ Update: I've improved the code from the comments, but still having issues with that first section not working. $(document).ready(function() { if ($('.checkarea.unchecked').length) { $(this).parents('.whole-step').removeClass('checked').addClass('unchecked'); } else { $(this).parents('.whole-step').removeClass('unchecked').addClass('checked'); } }); -- $('.toggle-open-area').click(function() { if($(this).parent().parent().parent().parent().parent().parent().parent().hasClass('open')) { $(this).parents('.whole-step').removeClass('open').addClass('closed'); } else { $(this).parents('.whole-step').removeClass('closed').addClass('open'); } }); $('.toggle-open-area').click(function() { $(this).toggleClass('unchecked checked'); $(this).closest(selector).toggleClass('open closed'); }); $('.checkarea').click(function() { if($(this).hasClass('unchecked')) { $(this).removeClass('unchecked').addClass('checked'); $(this).parent().parent().parent().parent().parent().parent().parent().removeClass('open').addClass('closed'); } else { $(this).removeClass('checked').addClass('unchecked'); $(this).parent().parent().parent().parent().parent().parent().parent().removeClass('closed').addClass('open'); } });
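
    Two observations, offered as a hedged sketch rather than a definitive fix. First, inside $(document).ready the value of $(this) is the document, not a .checkarea, which is one likely reason the first block does nothing; iterating the elements explicitly avoids that. Second, the long .parent() chains (and the parents('.whole-step') version) collapse into .closest('.whole-step'), and the paired hasClass/if-else blocks collapse into toggleClass with two class names. Assuming '.whole-step' really is the wrapper class shown in the screenshot:

      $(document).ready(function () {
          // Sync each step's wrapper with its own checkarea state on load.
          $('.checkarea').each(function () {
              var checked = $(this).hasClass('checked');
              $(this).closest('.whole-step')
                     .toggleClass('checked', checked)
                     .toggleClass('unchecked', !checked);
          });
      });

      $('.toggle-open-area').click(function () {
          $(this).closest('.whole-step').toggleClass('open closed');
      });

      $('.checkarea').click(function () {
          var $step = $(this).closest('.whole-step');
          if ($(this).hasClass('unchecked')) {
              $(this).removeClass('unchecked').addClass('checked');
              $step.removeClass('open').addClass('closed');
          } else {
              $(this).removeClass('checked').addClass('unchecked');
              $step.removeClass('closed').addClass('open');
          }
      });

    Note that toggleClass('open closed') only behaves as a swap if each wrapper starts with exactly one of the two classes; if that isn't guaranteed, the explicit removeClass/addClass form used for .checkarea is the safer one.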

    Read the article

  • How should I provide access to this custom DAL?

    - by Casey
    I'm writing a custom DAL (VB.NET) for an ordering system project. I'd like to explain how it is coded now, and receive some alternate ideas to make coding against the DAL easier/more readable. The DAL is part of an n-tier (not n-layer) application, where each tier is in it's own assembly/DLL. The DAL consists of several classes that have specific behavior. For instance, there is an Order class that is responsible for retrieving and saving orders. Most of the classes have only two methods, a "Get" and a "Save," with multiple overloads for each. These classes are marked as Friend and are only visible to the DAL (which is in it's own assembly). In most cases, the DAL returns what I will call a "Data Object." This object is a class that contains only data and validation, and is located in a common assembly that both the BLL and DAL can read. To provide public access to the DAL, I currently have a static (module) class that has many shared members. A simplified version looks something like this: Public Class DAL Private Sub New End Sub Public Shared Function GetOrder(OrderID as String) as OrderData Dim OrderGetter as New OrderClass Return OrderGetter.GetOrder(OrderID) End Function End Class Friend Class OrderClass Friend Function GetOrder(OrderID as string) as OrderData End Function End Class The BLL would call for an order like this: DAL.GetOrder("123456") As you can imagine, this gets cumbersome very quickly. I'm mainly interested in structuring access to the DAL so that Intellisense is very intuitive. As it stands now, there are too many methods/functions in the DAL class with similar names. One idea I had is to break down the DAL into nested classes: Public Class DAL Private Sub New End Sub Public Class Orders Private Sub New End Sub Public Shared Function Get(OrderID as string) as OrderData End Function End Class End Class So the BLL would call like this: DAL.Orders.Get("12345") This cleans it up a bit, but it leaves a lot of classes that only have references to other classes, which I don't like for some reason. Without resorting to passing DB specific instructions (like where clauses) from BLL to DAL, what is the best or most common practice for providing a single point of access for the DAL?

    Read the article

  • PHP Doctrine frustration: loading models doesn't work..?

    - by ropstah
    I'm almost losing it, i really hope someone can help me out! I'm using Doctrine with CodeIgniter. Everything is setup correctly and works until I generate the classes and view the website. Fatal error: Class 'BaseObjecten' not found in /var/www/vhosts/domain.com/application/models/Objecten.php on line 13 I'm using the following bootstrapper (as CodeIgniter plugin): <?php // system/application/plugins/doctrine_pi.php // load Doctrine library require_once BASEPATH . '/plugins/Doctrine/lib/Doctrine.php'; // load database configuration from CodeIgniter require_once APPPATH.'/config/database.php'; // this will allow Doctrine to load Model classes automatically spl_autoload_register(array('Doctrine', 'autoload')); // we load our database connections into Doctrine_Manager // this loop allows us to use multiple connections later on foreach ($db as $connection_name => $db_values) { // first we must convert to dsn format $dsn = $db[$connection_name]['dbdriver'] . '://' . $db[$connection_name]['username'] . ':' . $db[$connection_name]['password']. '@' . $db[$connection_name]['hostname'] . '/' . $db[$connection_name]['database']; Doctrine_Manager::connection($dsn,$connection_name); } // CodeIgniter's Model class needs to be loaded require_once BASEPATH.'/libraries/Model.php'; // telling Doctrine where our models are located Doctrine::loadModels(APPPATH.'/models'); // (OPTIONAL) CONFIGURATION BELOW // this will allow us to use "mutators" Doctrine_Manager::getInstance()->setAttribute( Doctrine::ATTR_AUTO_ACCESSOR_OVERRIDE, true); // this sets all table columns to notnull and unsigned (for ints) by default Doctrine_Manager::getInstance()->setAttribute( Doctrine::ATTR_DEFAULT_COLUMN_OPTIONS, array('notnull' => true, 'unsigned' => true)); // set the default primary key to be named 'id', integer, 4 bytes Doctrine_Manager::getInstance()->setAttribute( Doctrine::ATTR_DEFAULT_IDENTIFIER_OPTIONS, array('name' => 'id', 'type' => 'integer', 'length' => 4)); ?> Anyone? p.s. I also tried adding the following right after // (OPTIONAL CONFIGURATION) Doctrine_Manager::getInstance()->setAttribute(Doctrine::ATTR_MODEL_LOADING, Doctrine::MODEL_LOADING_CONSERVATIVE); spl_autoload_register(array('Doctrine', 'modelsAutoload'));

    Read the article

  • Git for Websites / post-receive / Separation of Test and Production Sites

    - by Walt W
    Hi all, I'm using Git to manage my website's source code and deployment, and currently have the test and live sites running on the same box. Following this resource http://toroid.org/ams/git-website-howto originally, I came up with the following post-receive hook script to differentiate between pushes to my live site and pushes to my test site: while read ref do #echo "Ref updated:" #echo $ref -- would print something like example at top of file result=`echo $ref | gawk -F' ' '{ print $3 }'` if [ $result != "" ]; then echo "Branch found: " echo $result case $result in refs/heads/master ) git --work-tree=c:/temp/BLAH checkout -f master echo "Updated master" ;; refs/heads/testbranch ) git --work-tree=c:/temp/BLAH2 checkout -f testbranch echo "Updated testbranch" ;; * ) echo "No update known for $result" ;; esac fi done echo "Post-receive updates complete" However, I have doubts that this is actually safe :) I'm by no means a Git expert, but I am guessing that Git probably keeps track of the current checked-out branch head, and this approach probably has the potential to confuse it to no end. So a few questions: IS this safe? Would a better approach be to have my base repository be the test site repository (with corresponding working directory), and then have that repository push changes to a new live site repository, which has a corresponding working directory to the live site base? This would also allow me to move the production to a different server and keep the deployment chain intact. Is there something I'm missing? Is there a different, clean way to differentiate between test and production deployments when using Git for managing websites? As an additional note in light of Vi's answer, is there a good way to do this that would handle deletions without mucking with the file system much? Thank you, -Walt PS - The script I came up with for the multiple repos (and am using unless I hear better) is as follows: sitename=`basename \`pwd\`` while read ref do #echo "Ref updated:" #echo $ref -- would print something like example at top of file result=`echo $ref | gawk -F' ' '{ print $3 }'` if [ $result != "" ]; then echo "Branch found: " echo $result case $result in refs/heads/master ) git checkout -q -f master if [ $? -eq 0 ]; then echo "Test Site checked out properly" else echo "Failed to checkout test site!" fi ;; refs/heads/live-site ) git push -q ../Live/$sitename live-site:master if [ $? -eq 0 ]; then echo "Live Site received updates properly" else echo "Failed to push updates to Live Site" fi ;; * ) echo "No update known for $result" ;; esac fi done echo "Post-receive updates complete" And then the repo in ../Live/$sitename (these are "bare" repos with working trees added after init) has the basic post-receive: git checkout -f if [ $? -eq 0 ]; then echo "Live site `basename \`pwd\`` checked out successfully" else echo "Live site failed to checkout" fi
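
    Independent of the one-repo-vs-two question, a small note on the hook itself: git writes one line per updated ref to post-receive's stdin in the form "oldrev newrev refname", so read can split the fields directly and the gawk call (and the unquoted [ $result != "" ] test, which errors out when the value is empty) can go away. A trimmed sketch of the dispatch part, with the work-tree paths and branch names from the post:

      #!/bin/sh
      while read oldrev newrev refname
      do
          case "$refname" in
              refs/heads/master)
                  git --work-tree=c:/temp/BLAH checkout -f master
                  echo "Updated master"
                  ;;
              refs/heads/testbranch)
                  git --work-tree=c:/temp/BLAH2 checkout -f testbranch
                  echo "Updated testbranch"
                  ;;
              *)
                  echo "No update known for $refname"
                  ;;
          esac
      done
      echo "Post-receive updates complete"

    The same shape works for the two-repository variant: the testbranch arm just becomes a git push to the live repository, as in the script at the end of the question.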

    Read the article

  • Fixed point math in C#?

    - by x4000
    Hi there, I was wondering if anyone here knows of any good resources for fixed point math in c#? I've seen things like this (http://2ddev.72dpiarmy.com/viewtopic.php?id=156) and this (http://stackoverflow.com/questions/79677/whats-the-best-way-to-do-fixed-point-math), and a number of discussions about whether decimal is really fixed point or actually floating point (update: responders have confirmed that it's definitely floating point), but I haven't seen a solid C# library for things like calculating cosine and sine. My needs are simple -- I need the basic operators, plus cosine, sine, arctan2, PI... I think that's about it. Maybe sqrt. I'm programming a 2D RTS game, which I have largely working, but the unit movement when using floating-point math (doubles) has very small inaccuracies over time (10-30 minutes) across multiple machines, leading to desyncs. This is presently only between a 32 bit OS and a 64 bit OS, all the 32 bit machines seem to stay in sync without issue, which is what makes me think this is a floating point issue. I was aware from this as a possible issue from the outset, and so have limited my use of non-integer position math as much as possible, but for smooth diagonal movement at varying speeds I'm calculating the angle between points in radians, then getting the x and y components of movement with sin and cos. That's the main issue. I'm also doing some calculations for line segment intersections, line-circle intersections, circle-rect intersections, etc, that also probably need to move from floating-point to fixed-point to avoid cross-machine issues. If there's something open source in Java or VB or another comparable language, I could probably convert the code for my uses. The main priority for me is accuracy, although I'd like as little speed loss over present performance as possible. This whole fixed point math thing is very new to me, and I'm surprised by how little practical information on it there is on google -- most stuff seems to be either theory or dense C++ header files. Anything you could do to point me in the right direction is much appreciated; if I can get this working, I plan to open-source the math functions I put together so that there will be a resource for other C# programmers out there. UPDATE: I could definitely make a cosine/sine lookup table work for my purposes, but I don't think that would work for arctan2, since I'd need to generate a table with about 64,000x64,000 entries (yikes). If you know any programmatic explanations of efficient ways to calculate things like arctan2, that would be awesome. My math background is all right, but the advanced formulas and traditional math notation are very difficult for me to translate into code.
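
    For the "basic operators" part, a common layout is a signed 32-bit int with 16 fractional bits (Q16.16): addition and subtraction are ordinary int operations, and multiply/divide go through a 64-bit intermediate so nothing is lost before shifting back. A minimal, untested sketch (the struct name and API are invented; sin/cos/atan2 would be built on top of this, e.g. a sine table of a few thousand entries plus symmetry, and an octant-reduced atan table rather than a 64K x 64K one):

      public struct Fixed
      {
          public const int FractionBits = 16;
          public const int One = 1 << FractionBits;    // 65536 represents 1.0

          public int Raw;                               // value scaled by 65536

          public static Fixed FromInt(int v)       { return new Fixed { Raw = v << FractionBits }; }
          public static Fixed FromDouble(double v) { return new Fixed { Raw = (int)(v * One) }; }   // load-time only
          public double ToDouble()                 { return (double)Raw / One; }

          public static Fixed operator +(Fixed a, Fixed b) { return new Fixed { Raw = a.Raw + b.Raw }; }
          public static Fixed operator -(Fixed a, Fixed b) { return new Fixed { Raw = a.Raw - b.Raw }; }

          public static Fixed operator *(Fixed a, Fixed b)
          {
              // 64-bit intermediate keeps the full product before dropping the extra 16 bits
              return new Fixed { Raw = (int)(((long)a.Raw * b.Raw) >> FractionBits) };
          }

          public static Fixed operator /(Fixed a, Fixed b)
          {
              return new Fixed { Raw = (int)(((long)a.Raw << FractionBits) / b.Raw) };
          }
      }

    Because every operation stays in integer arithmetic, the results are bit-identical across 32-bit and 64-bit machines, which is the property the lockstep simulation actually needs; doubles should then only appear in rendering and never feed back into game state.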

    Read the article
