Search Results

Search found 8379 results on 336 pages for 'article'.


  • WPF TabControl - how to preserve control state within tab items (MVVM pattern)

    - by Tim Coulter
    I am a newcomer to WPF, attempting to build a project that follows the recommendations of Josh Smith's excellent article describing the Model-View-ViewModel design pattern. Using Josh's sample code as a base, I have created a simple application that contains a number of "workspaces", each represented by a tab in a TabControl. In my application, a workspace is a document editor that allows a hierarchical document to be manipulated via a TreeView control.

    Although I have succeeded in opening multiple workspaces and viewing their document content in the bound TreeView control, I find that the TreeView "forgets" its state when switching between tabs. For example, if the TreeView in Tab1 is partially expanded, it will be shown as fully collapsed after switching to Tab2 and returning to Tab1. This behaviour appears to apply to all aspects of control state for all controls.

    After some experimentation, I have realized that I can preserve state within a TabItem by explicitly binding each control state property to a dedicated property on the underlying ViewModel. However, this seems like a lot of additional work, when I simply want all my controls to remember their state when switching between workspaces. I assume I am missing something simple, but I am not sure where to look for the answer. Any guidance would be much appreciated. Thanks, Tim

    Update: As requested, I will attempt to post some code that demonstrates this problem. However, since the data that underlies the TreeView is complex, I will post a simplified example that exhibits the same symptoms. Here is the XAML from the main window:

        <TabControl IsSynchronizedWithCurrentItem="True" ItemsSource="{Binding Path=Docs}">
            <TabControl.ItemTemplate>
                <DataTemplate>
                    <ContentPresenter Content="{Binding Path=Name}" />
                </DataTemplate>
            </TabControl.ItemTemplate>
            <TabControl.ContentTemplate>
                <DataTemplate>
                    <view:DocumentView />
                </DataTemplate>
            </TabControl.ContentTemplate>
        </TabControl>

    The above XAML correctly binds to an ObservableCollection of DocumentViewModel, whereby each member is presented via a DocumentView. For the simplicity of this example, I have removed the TreeView (mentioned above) from the DocumentView and replaced it with a TabControl containing 3 fixed tabs:

        <TabControl>
            <TabItem Header="A" />
            <TabItem Header="B" />
            <TabItem Header="C" />
        </TabControl>

    In this scenario, there is no binding between the DocumentView and the DocumentViewModel. When the code is run, the inner TabControl is unable to remember its selection when the outer TabControl is switched. However, if I explicitly bind the inner TabControl's SelectedIndex property ...

        <TabControl SelectedIndex="{Binding Path=SelectedDocumentIndex}">
            <TabItem Header="A" />
            <TabItem Header="B" />
            <TabItem Header="C" />
        </TabControl>

    ... to a corresponding dummy property on the DocumentViewModel ...

        public int SelectedDocumentIndex { get; set; }

    ... the inner tab is able to remember its selection. I understand that I can effectively solve my problem by applying this technique to every visual property of every control, but I am hoping there is a more elegant solution.
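
    For later readers, here is a minimal sketch of the per-property technique the question describes. The TreeNodeViewModel name and its members are illustrative, not taken from Josh Smith's article: each piece of view state (expansion, selection) lives on the view model, so it survives the tab's visual tree being rebuilt, and a TreeViewItem style would bind IsExpanded and IsSelected to these properties.

        using System.Collections.ObjectModel;
        using System.ComponentModel;

        public class TreeNodeViewModel : INotifyPropertyChanged
        {
            private bool _isExpanded;
            private bool _isSelected;

            public TreeNodeViewModel()
            {
                Children = new ObservableCollection<TreeNodeViewModel>();
            }

            public string Name { get; set; }
            public ObservableCollection<TreeNodeViewModel> Children { get; private set; }

            // Bound (via a TreeViewItem style) to TreeViewItem.IsExpanded so the
            // expansion state survives the view being regenerated on a tab switch.
            public bool IsExpanded
            {
                get { return _isExpanded; }
                set { _isExpanded = value; OnPropertyChanged("IsExpanded"); }
            }

            public bool IsSelected
            {
                get { return _isSelected; }
                set { _isSelected = value; OnPropertyChanged("IsSelected"); }
            }

            public event PropertyChangedEventHandler PropertyChanged;

            private void OnPropertyChanged(string name)
            {
                PropertyChangedEventHandler handler = PropertyChanged;
                if (handler != null)
                    handler(this, new PropertyChangedEventArgs(name));
            }
        }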

  • WCF GZip Compression Request/Response Processing

    - by IanT8
    How do I get a WCF client to process server responses which have been GZipped or Deflated by IIS? On IIS, I've followed the instructions here on how to make IIS 6 gzip all responses (where the request contained "Accept-Encoding: gzip, deflate") emitted by .svc WCF services. On the client, I've followed the instructions here and here on how to inject this header into the web request: "Accept-Encoding: gzip, deflate". Fiddler2 shows the response is binary and not plain old XML. The client crashes with an exception which basically says there's no XML header, which of course is true. In my IClientMessageInspector, the app crashes before AfterReceiveReply is called.

    Some further notes: (1) I can't change the WCF service or client as they are supplied by a 3rd party. I can however attach behaviors and/or message inspectors via configuration if this is the right direction to take. (2) I don't want to compress/uncompress just the soap body, but the entire message. Any ideas/solutions?

    * SOLVED *

    It was not possible to write a WCF extension to achieve these goals. Instead I followed this CodeProject article which advocates a helper class:

        public class CompressibleHttpRequestCreator : IWebRequestCreate
        {
            public CompressibleHttpRequestCreator() { }

            WebRequest IWebRequestCreate.Create(Uri uri)
            {
                HttpWebRequest httpWebRequest = Activator.CreateInstance(typeof(HttpWebRequest),
                    BindingFlags.CreateInstance | BindingFlags.Public |
                    BindingFlags.NonPublic | BindingFlags.Instance,
                    null, new object[] { uri, null }, null) as HttpWebRequest;

                if (httpWebRequest == null)
                {
                    return null;
                }

                httpWebRequest.AutomaticDecompression =
                    DecompressionMethods.GZip | DecompressionMethods.Deflate;

                return httpWebRequest;
            }
        }

    and also, an addition to the application configuration file:

        <configuration>
          <system.net>
            <webRequestModules>
              <remove prefix="http:"/>
              <add prefix="http:" type="Pajocomo.Net.CompressibleHttpRequestCreator, Pajocomo" />
            </webRequestModules>
          </system.net>
        </configuration>

    What seems to be happening is that WCF eventually asks some factory or other deep down in System.Net to provide an HttpWebRequest instance, and we provide the helper that will be asked to create the required instance. In the WCF client configuration file, a simple basicHttpBinding is all that is required, without the need for any custom extensions. When the application runs, the client HTTP request contains the header "Accept-Encoding: gzip, deflate", the server returns a gzipped web response, and the client transparently decompresses the HTTP response before handing it over to WCF.

    When I tried to apply this technique to Web Services I found that it did NOT work. Although the helper class was executed in the same way as when used by the WCF client, the HTTP request did not contain the "Accept-Encoding: ..." header. To make this work for Web Services, I had to edit the Web Proxy class and add this method:

        protected override System.Net.WebRequest GetWebRequest(Uri uri)
        {
            System.Net.HttpWebRequest rq = (System.Net.HttpWebRequest)base.GetWebRequest(uri);
            rq.AutomaticDecompression = DecompressionMethods.GZip | DecompressionMethods.Deflate;
            return rq;
        }

    Note that it did not matter whether the CompressibleHttpRequestCreator and the webRequestModules block from the application config file were present or not. For web services, only overriding GetWebRequest in the Web Service Proxy worked.
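
    As an aside, if editing the application config file is not possible, the same hookup can, as far as I know, be done in code: WebRequest.RegisterPrefix is the programmatic equivalent of the webRequestModules entry. A minimal sketch using the helper class above (the bootstrap class name is mine):

        using System;
        using System.Net;

        public static class CompressionBootstrap
        {
            // Call once at application startup, before any channel or proxy is created.
            public static void Register()
            {
                // RegisterPrefix returns false when a creator is already bound to
                // the prefix; the config file's <remove>/<add> pair sidesteps this.
                bool registered = WebRequest.RegisterPrefix(
                    "http:", new CompressibleHttpRequestCreator());

                if (!registered)
                    throw new InvalidOperationException(
                        "The http: prefix already has a registered IWebRequestCreate.");
            }
        }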

  • FluentNHibernate Unit Of Work / Repository Design Pattern Questions

    - by Echiban
    Hi all, I think I am at an impasse here. I have an application I built from scratch using FluentNHibernate (ORM) / SQLite (file db). I have decided to implement the Unit of Work and Repository design patterns. I am at a point where I need to think about the end game, which will start as a WPF windows app (using MVVM) and eventually implement web services / ASP.NET as the UI. Now I have already created domain objects (entities) for the ORM, and I don't know how I should use them outside of the ORM. Questions about this include:

      - Should I use ORM entity objects directly as models in MVVM? If yes, do I put business logic (such as certain values must be positive and be greater than another property) in those entity objects? It is certainly the simpler approach, and the one I am leaning toward right now. However, will there be gotchas that would trash this plan?
      - If the answer above is no, do I then create a new set of classes to implement business logic and use those as models in MVVM? How would I deal with the transition between model objects and entity objects? I guess a type converter implementation would work well here.

    Now, I followed this well-written article to implement the Unit of Work pattern. However, due to the fact that I am using FluentNHibernate instead of NHibernate, I had to bastardize the implementation of UnitOfWorkFactory. Here's my implementation:

        using System;
        using FluentNHibernate.Cfg;
        using FluentNHibernate.Cfg.Db;
        using NHibernate;
        using NHibernate.Cfg;
        using NHibernate.Tool.hbm2ddl;

        namespace ELau.BlindsManagement.Business
        {
            public class UnitOfWorkFactory : IUnitOfWorkFactory
            {
                private static readonly string DbFilename;
                private static Configuration _configuration;
                private static ISession _currentSession;
                private ISessionFactory _sessionFactory;

                static UnitOfWorkFactory()
                {
                    // arbitrary default filename
                    DbFilename = "defaultBlindsDb.db3";
                }

                internal UnitOfWorkFactory() { }

                #region IUnitOfWorkFactory Members

                public ISession CurrentSession
                {
                    get
                    {
                        if (_currentSession == null)
                        {
                            throw new InvalidOperationException(ExceptionStringTable.Generic_NotInUnitOfWork);
                        }
                        return _currentSession;
                    }
                    set { _currentSession = value; }
                }

                public ISessionFactory SessionFactory
                {
                    get
                    {
                        if (_sessionFactory == null)
                        {
                            _sessionFactory = BuildSessionFactory();
                        }
                        return _sessionFactory;
                    }
                }

                public Configuration Configuration
                {
                    get
                    {
                        if (_configuration == null)
                        {
                            Fluently.Configure().ExposeConfiguration(c => _configuration = c);
                        }
                        return _configuration;
                    }
                }

                public IUnitOfWork Create()
                {
                    ISession session = CreateSession();
                    session.FlushMode = FlushMode.Commit;
                    _currentSession = session;
                    return new UnitOfWorkImplementor(this, session);
                }

                public void DisposeUnitOfWork(UnitOfWorkImplementor adapter)
                {
                    CurrentSession = null;
                    UnitOfWork.DisposeUnitOfWork(adapter);
                }

                #endregion

                public ISession CreateSession()
                {
                    return SessionFactory.OpenSession();
                }

                public IStatelessSession CreateStatelessSession()
                {
                    return SessionFactory.OpenStatelessSession();
                }

                private static ISessionFactory BuildSessionFactory()
                {
                    ISessionFactory result = Fluently.Configure()
                        .Database(SQLiteConfiguration.Standard.UsingFile(DbFilename))
                        .Mappings(m => m.FluentMappings.AddFromAssemblyOf<UnitOfWorkFactory>())
                        .ExposeConfiguration(BuildSchema)
                        .BuildSessionFactory();
                    return result;
                }

                private static void BuildSchema(Configuration config)
                {
                    // this NHibernate tool takes a configuration (with mapping info in)
                    // and exports a database schema from it
                    _configuration = config;
                    new SchemaExport(_configuration).Create(false, true);
                }
            }
        }

    I know that this implementation is flawed, because a few tests pass when run individually, but when all tests are run, it fails for some unknown reason. Whoever wants to help me out with this one, given its complexity, please contact me by private message. I am willing to send some $$$ by PayPal to someone who can address the issue and provide a solid explanation. I am new to ORM, so any assistance is appreciated.
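
    One hedged guess at the "pass individually, fail together" symptom: the posted factory keeps _configuration and _currentSession in static fields, and static state is shared by every test in the process, so one test's leftovers leak into the next. The self-contained toy below reproduces that failure mode; all names in it are illustrative, not from the project.

        using System;

        static class StaticHolder
        {
            public static string CurrentSession; // shared across all "tests"
        }

        class Program
        {
            static void TestA()
            {
                StaticHolder.CurrentSession = "A's session"; // never cleaned up
            }

            static void TestB()
            {
                // TestB assumes a fresh world: no session yet.
                if (StaticHolder.CurrentSession != null)
                    throw new Exception("TestB failed: leftover state from TestA");
            }

            static void Main()
            {
                TestA();
                TestB(); // throws when run after TestA, passes when run alone
                Console.WriteLine("all tests passed");
            }
        }

    If that is indeed the cause, making those fields instance members, or explicitly resetting them in each test's setup, is the usual fix.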

  • Paying great programmers more than average programmers

    - by Kelly French
    It's fairly well recognized that some programmers are up to 10 times more productive than others. Joel mentions this topic on his blog, and there is a whole blog devoted to the idea of the "10x productive programmer":

        In years since the original study, the general finding that "There are order-of-magnitude differences among programmers" has been confirmed by many other studies of professional programmers (Curtis 1981, Mills 1983, DeMarco and Lister 1985, Curtis et al. 1986, Card 1987, Boehm and Papaccio 1988, Valett and McGarry 1989, Boehm et al. 2000).

    Fred Brooks mentions the wide range in the quality of designers in his "No Silver Bullet" article:

        The differences are not minor -- they are rather like the differences between Salieri and Mozart. Study after study shows that the very best designers produce structures that are faster, smaller, simpler, cleaner, and produced with less effort. The differences between the great and the average approach an order of magnitude.

    The study that Brooks cites is: H. Sackman, W.J. Erikson, and E.E. Grant, "Exploratory Experimental Studies Comparing Online and Offline Programming Performance," Communications of the ACM, Vol. 11, No. 1 (January 1968), pp. 3-11.

    The way programmers are paid by employers these days makes it almost impossible to pay the great programmers a large multiple of the entry-level salary. When the starting salary for a just-graduated entry-level programmer (we'll call him Asok, from Dilbert) is $40K, even if the top programmer (we'll call him Linus) makes $120K, that is only a multiple of 3. I'd be willing to bet that Linus does much more than 3 times what Asok does, so why wouldn't we expect him to get paid more as well? Here is a quote from Stroustrup:

        "The companies are complaining because they are hurting. They can't produce quality products as cheaply, as reliably, and as quickly as they would like. They correctly see a shortage of good developers as a part of the problem. What they generally don't see is that inserting a good developer into a culture designed to constrain semi-skilled programmers from doing harm is pointless, because the rules/culture will constrain the new developer from doing anything significantly new and better."

    This leads to two questions. I'm excluding self-employed programmers and contractors. If you disagree, that's fine, but please include your rationale. It might be that the self-employed or contract programmers are where you find the top-10 earners, but please provide an explanation/story/rationale along with any anecdotes.

    [EDIT] I thought up some other areas in which talent/ability affects pay:

      - financial traders (commodities, stock, derivatives, etc.)
      - designers (fashion, interior decorators, architects, etc.)
      - professionals (doctors, lawyers, accountants, etc.)
      - sales

    Questions:

      - Why aren't the top 1% of programmers paid like A-list movie stars?
      - What would the industry be like if we did pay the "smart and gets things done" programmers 6, 8, or 10 times what an intern makes?

    [Footnote: I posted this question after submitting it to the Stack Overflow podcast. It was included in episode 77 and I've written more about it as a Codewright's Tale post, 'Of Rockstars and Bricklayers'.]

    Epilogue: It's probably unfair to exclude contractors and the self-employed. One aspect of the highest earners in other fields is that they are free agents. The competition for their skills is what drives up their earning power. This means they cannot be interchangeable or otherwise treated as a plug-and-play resource. I liked the example in one answer of a major league baseball team trying to field two first-basemen.

    Also, something that Joel mentioned in the Stack Overflow podcast (#77): there are natural dynamics that shrink any extreme performance/pay range between the highs and the lows. One is the peer pressure within organizations to pay within a given range; another is the likelihood that the high performer will realize their undercompensation and seek greener pastures.

  • WCF Data Service BeginSaveChanges not saving changes in Silverlight app

    - by Enigmativity
    I'm having a hell of a time getting WCF Data Services to work within Silverlight. I'm using the VS2010 RC. I've struggled with the cross-domain issue requiring the use of clientaccesspolicy.xml & crossdomain.xml files in the web server root folder, but I just couldn't get this to work. I've resorted to putting both the Silverlight web app & the WCF Data Service in the same project to get past this issue, but any advice here would be good.

    But now that I can actually see my data coming from the database and being displayed in a data grid within Silverlight, I thought my troubles were over -- but no. I can edit the data and the in-memory entity is changing, but when I call BeginSaveChanges (with the appropriate async EndSaveChanges call) I get no errors, but no data updates in the database. Here's my WCF Data Services code:

        public class MyDataService : DataService<MyEntities>
        {
            public static void InitializeService(DataServiceConfiguration config)
            {
                config.SetEntitySetAccessRule("*", EntitySetRights.All);
                config.SetServiceOperationAccessRule("*", ServiceOperationRights.All);
                config.DataServiceBehavior.MaxProtocolVersion = DataServiceProtocolVersion.V2;
            }

            protected override void OnStartProcessingRequest(ProcessRequestArgs args)
            {
                base.OnStartProcessingRequest(args);
                HttpContext context = HttpContext.Current;
                HttpCachePolicy c = HttpContext.Current.Response.Cache;
                c.SetCacheability(HttpCacheability.ServerAndPrivate);
                c.SetExpires(HttpContext.Current.Timestamp.AddSeconds(60));
                c.VaryByHeaders["Accept"] = true;
                c.VaryByHeaders["Accept-Charset"] = true;
                c.VaryByHeaders["Accept-Encoding"] = true;
                c.VaryByParams["*"] = true;
            }
        }

    I've pinched the OnStartProcessingRequest code from Scott Hanselman's article "Creating an OData API for StackOverflow including XML and JSON in 30 minutes". Here's the code from my Silverlight app:

        private MyEntities _wcfDataServicesEntities;
        private CollectionViewSource _customersViewSource;
        private ObservableCollection<Customer> _customers;

        private void UserControl_Loaded(object sender, RoutedEventArgs e)
        {
            if (!System.ComponentModel.DesignerProperties.GetIsInDesignMode(this))
            {
                _wcfDataServicesEntities = new MyEntities(new Uri("http://localhost:7156/MyDataService.svc/"));
                _customersViewSource = this.Resources["customersViewSource"] as CollectionViewSource;

                DataServiceQuery<Customer> query = _wcfDataServicesEntities.Customer;
                query.BeginExecute(result =>
                {
                    _customers = new ObservableCollection<Customer>();
                    Array.ForEach(query.EndExecute(result).ToArray(), _customers.Add);
                    Dispatcher.BeginInvoke(() =>
                    {
                        _customersViewSource.Source = _customers;
                    });
                }, null);
            }
        }

        private void button1_Click(object sender, RoutedEventArgs e)
        {
            _wcfDataServicesEntities.BeginSaveChanges(r =>
            {
                var response = _wcfDataServicesEntities.EndSaveChanges(r);
                string[] results = new[]
                {
                    response.BatchStatusCode.ToString(),
                    response.IsBatchResponse.ToString()
                };
                _customers[0].FinAssistCompanyName = String.Join("|", results);
            }, null);
        }

    The response string I get back data-binds to my grid OK and shows "-1|False". My intent is to get a proof-of-concept working here and then do the appropriate separation of concerns to turn this into a simple line-of-business app. I've spent hours and hours on this. I'm being driven insane. Any ideas how to get this working?
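
    One hedged guess that fits the symptoms: as far as I know, DataServiceContext does not track property changes on its own, so an edited entity must be flagged with UpdateObject before saving; otherwise BeginSaveChanges has an empty changeset and "succeeds" without touching the database. A sketch of the click handler with that one addition (for brevity it flags every loaded customer; in practice only the edited ones need it):

        private void button1_Click(object sender, RoutedEventArgs e)
        {
            // Mark entities as modified so SaveChanges actually sends them.
            foreach (Customer customer in _customers)
            {
                _wcfDataServicesEntities.UpdateObject(customer);
            }

            _wcfDataServicesEntities.BeginSaveChanges(r =>
            {
                var response = _wcfDataServicesEntities.EndSaveChanges(r);
                // Inspect response here as before.
            }, null);
        }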

  • cmake doesn't work in windows XP

    - by Runner
    I'm new to cmake; I just installed it and am following this article: http://www.cs.swarthmore.edu/~adanner/tips/cmake.php

        D:\Works\c\cmake\build>cmake ..
        -- Building for: NMake Makefiles
        CMake Warning at CMakeLists.txt:2 (project):
          To use the NMake generator, cmake must be run from a shell that can use
          the compiler cl from the command line. This environment does not contain
          INCLUDE, LIB, or LIBPATH, and these must be set for the cl compiler to work.
        -- The C compiler identification is unknown
        -- The CXX compiler identification is unknown
        CMake Warning at D:/Tools/CMake 2.8/share/cmake-2.8/Modules/Platform/Windows-cl.cmake:32 (ENABLE_LANGUAGE):
          To use the NMake generator, cmake must be run from a shell that can use
          the compiler cl from the command line. This environment does not contain
          INCLUDE, LIB, or LIBPATH, and these must be set for the cl compiler to work.
        Call Stack (most recent call first):
          D:/Tools/CMake 2.8/share/cmake-2.8/Modules/CMakeCInformation.cmake:58 (INCLUDE)
          CMakeLists.txt:2 (project)
        CMake Error: your RC compiler: "CMAKE_RC_COMPILER-NOTFOUND" was not found.
        Please set CMAKE_RC_COMPILER to a valid compiler path or name.
        -- Check for CL compiler version
        -- Check for CL compiler version - failed
        -- Check if this is a free VC compiler
        -- Check if this is a free VC compiler - yes
        -- Using FREE VC TOOLS, NO DEBUG available
        -- Check for working C compiler: cl
        CMake Warning at CMakeLists.txt:2 (PROJECT):
          To use the NMake generator, cmake must be run from a shell that can use
          the compiler cl from the command line. This environment does not contain
          INCLUDE, LIB, or LIBPATH, and these must be set for the cl compiler to work.
        CMake Warning at D:/Tools/CMake 2.8/share/cmake-2.8/Modules/Platform/Windows-cl.cmake:32 (ENABLE_LANGUAGE):
          To use the NMake generator, cmake must be run from a shell that can use
          the compiler cl from the command line. This environment does not contain
          INCLUDE, LIB, or LIBPATH, and these must be set for the cl compiler to work.
        Call Stack (most recent call first):
          D:/Tools/CMake 2.8/share/cmake-2.8/Modules/CMakeCInformation.cmake:58 (INCLUDE)
          CMakeLists.txt:2 (PROJECT)
        CMake Error at D:/Tools/CMake 2.8/share/cmake-2.8/Modules/CMakeRCInformation.cmake:22 (GET_FILENAME_COMPONENT):
          get_filename_component called with incorrect number of arguments
        Call Stack (most recent call first):
          D:/Tools/CMake 2.8/share/cmake-2.8/Modules/Platform/Windows-cl.cmake:32 (ENABLE_LANGUAGE)
          D:/Tools/CMake 2.8/share/cmake-2.8/Modules/CMakeCInformation.cmake:58 (INCLUDE)
          CMakeLists.txt:2 (PROJECT)
        CMake Error: CMAKE_RC_COMPILER not set, after EnableLanguage
        CMake Error: your C compiler: "cl" was not found. Please set CMAKE_C_COMPILER to a valid compiler path or name.
        CMake Error: Internal CMake error, TryCompile configure of cmake failed
        -- Check for working C compiler: cl -- broken
        CMake Error at D:/Tools/CMake 2.8/share/cmake-2.8/Modules/CMakeTestCCompiler.cmake:52 (MESSAGE):
          The C compiler "cl" is not able to compile a simple test program.
          It fails with the following output:
          CMake will not be able to correctly generate this project.
        Call Stack (most recent call first):
          CMakeLists.txt:2 (project)
        CMake Error: your C compiler: "cl" was not found. Please set CMAKE_C_COMPILER to a valid compiler path or name.
        CMake Error: your CXX compiler: "cl" was not found. Please set CMAKE_CXX_COMPILER to a valid compiler path or name.
        -- Configuring incomplete, errors occurred!

    What are the complete requirements to use cmake successfully on Windows XP? I've already installed Visual Studio under D:\Tools\Microsoft Visual Studio 9.0.

  • Perl Scripts Seem to be Caching?

    - by Ben Liyanage
    I'm running a Perl script via mod_perl with Apache. I am trying to parse parameters in a RESTful fashion (aka GET www.domain.com/rest.pl/Object/ID). If I specify the ID like this: GET www.domain.com/rest.pl/Object/1234, I will get object 1234 back as a result, as expected. However, if I specify an incorrect, hacked URL like this: GET www.domain.com/rest.pl/Object/123, I will also get back object 1234.

    I am pretty sure that the issue in this question is happening, so I packaged up my method and invoked it from my core script out of the package. Even after that I am still seeing the threads returning seemingly cached data. One of the recommendations in the aforementioned article is to replace my with our. My impression from reading up on our vs. my was that our is global and my is local. My assumption is that the local variable gets reinitialized each time. That said, as in my code example below, I am resetting the variables each time with new values.

    The Apache Perl configuration is set up like this:

        PerlModule ModPerl::Registry
        <Directory /var/www/html/perl/>
            SetHandler perl-script
            PerlHandler ModPerl::Registry
            Options +ExecCGI
        </Directory>

    Here is the Perl script getting invoked directly:

        #!/usr/bin/perl -w
        use strict;
        use warnings;
        use Rest;

        Rest::init();

    Here is my package Rest. This file contains various functions for handling the REST requests.

        #!/usr/bin/perl -w
        package Rest;
        use strict;

        # Other useful packages declared here.

        our @EXPORT = qw( init );
        our $q = CGI->new;

        sub GET($$) {
            my ($path, $code) = @_;
            return unless $q->request_method eq 'GET' or $q->request_method eq 'HEAD';
            return unless $q->path_info =~ $path;
            $code->();
            exit;
        }

        sub init() {
            eval {
                GET qr{^/ZonesByCustomer/(.+)$} => sub {
                    Rest::Zone->ZonesByCustomer($1);
                }
            }
        }

        # packages must return 1
        1;

    As a side note, I don't completely understand how this eval statement works, or what string is getting parsed by the qr command. This script was largely taken from this example for setting up a REST-based web service with Perl. I suspect it is getting pulled from that magical $_ variable, but I'm not sure if it is, or what it is getting populated with (clearly the query string or URL, but it only seems to be part of it?).

    Here is my package Rest::Zone, which will contain the meat of the functional actions. I'm leaving out the specifics of how I'm manipulating the data because it is working, and leaving this as an abstract stub.

        #!/usr/bin/perl -w
        package Rest::Zone;

        sub ZonesByCustomer($) {
            my ($className, $ObjectID) = @_;
            my $q = CGI->new;
            print $q->header(-status => 200, -type => 'text/xml', expires => 'now',
                -Cache_control => 'no-cache', -Pragma => 'no-cache');
            my $objectXML = "<xml><object>" . $ObjectID . "</object></xml>";
            print $objectXML;
        }

        # packages must return 1
        1;

    What is going on here? Why do I keep getting stale or cached values?

  • Migrate from MySQL to PostgreSQL on Linux (Kubuntu)

    - by Dave Jarvis
    A long time ago in a galaxy far, far away...

    Trying to migrate a database from MySQL to PostgreSQL. All the documentation I have read covers, in great detail, how to migrate the structure. I have found very little documentation on migrating the data. The schema has 13 tables (which have been migrated successfully) and 9 GB of data.

        MySQL version: 5.1.x
        PostgreSQL version: 8.4.x

    I want to use the R programming language to analyze the data using SQL select statements; PostgreSQL has PL/R, but MySQL has nothing (as far as I can tell).

    A New Hope

    Create the database location (/var has insufficient space; I also dislike having the PostgreSQL version number everywhere -- upgrading would break scripts!):

        sudo mkdir -p /home/postgres/main
        sudo cp -Rp /var/lib/postgresql/8.4/main /home/postgres
        sudo chown -R postgres.postgres /home/postgres
        sudo chmod -R 700 /home/postgres
        sudo usermod -d /home/postgres/ postgres

    All good to here. Next, restart the server and configure the database using these installation instructions:

        sudo apt-get install postgresql pgadmin3
        sudo /etc/init.d/postgresql-8.4 stop
        sudo vi /etc/postgresql/8.4/main/postgresql.conf

    Change data_directory to /home/postgres/main, then:

        sudo /etc/init.d/postgresql-8.4 start
        sudo -u postgres psql postgres
        \password postgres
        sudo -u postgres createdb climate
        pgadmin3

    Use pgadmin3 to configure the database and create a schema.

    The episode continues in a remote shell known as bash, with both databases running, and the installation of a set of tools with a rather unusual logo: SQL Fairy.

        perl Makefile.PL
        sudo make install
        sudo apt-get install perl-doc    # strangely, it is not called perldoc
        perldoc SQL::Translator::Manual

    Extract a PostgreSQL-friendly DDL and all the MySQL data:

        sqlt -f DBI --dsn dbi:mysql:climate --db-user user --db-password password -t PostgreSQL > climate-pg-ddl.sql
        mysqldump --skip-add-locks --complete-insert --no-create-db --no-create-info --quick --result-file="climate-my.sql" --databases climate --skip-comments -u root -p

    The Database Strikes Back

    Recreate the structure in PostgreSQL as follows:

      - pgadmin3 (switch to it)
      - Click the "Execute arbitrary SQL queries" icon
      - Open climate-pg-ddl.sql
      - Search for TABLE " and replace with TABLE climate." (insert the schema name climate)
      - Search for on " and replace with on climate." (insert the schema name climate)
      - Press F5 to execute

    This results in: Query returned successfully with no result in 122 ms.

    Replies of the Jedi

    At this point I am stumped.

      - Where do I go from here (what are the steps) to convert climate-my.sql to climate-pg.sql so that they can be executed against PostgreSQL?
      - How do I make sure the indexes are copied over correctly (to maintain referential integrity; I don't have constraints at the moment, to ease the transition)?
      - How do I ensure that adding new rows in PostgreSQL will start enumerating from the index of the last row inserted (and not conflict with an existing primary key from the sequence)?
      - How do you ensure the schema name comes through when transforming the data from MySQL to PostgreSQL inserts?

    Resources

    A fair bit of information was needed to get this far:

      - https://help.ubuntu.com/community/PostgreSQL
      - http://articles.sitepoint.com/article/site-mysql-postgresql-1
      - http://wiki.postgresql.org/wiki/Converting_from_other_Databases_to_PostgreSQL#MySQL
      - http://pgfoundry.org/frs/shownotes.php?release_id=810
      - http://sqlfairy.sourceforge.net/

    Thank you!

  • Bootstrap Backbone Marionette Modal

    - by Greg Pagendam-Turner
    I'm trying to create a dialog in Backbone and Marionette based on this article: http://lostechies.com/derickbailey/2012/04/17/managing-a-modal-dialog-with-backbone-and-marionette/

    I have a fiddle here: http://jsfiddle.net/netroworx/ywKSG/

    HTML:

        <script type="text/template" id="edit-dialog">
            <div class="modal-header">
                <button type="button" class="close" data-dismiss="modal">×</button>
                <h3 id="actionTitle">Create a New Action</h3>
            </div>
            <div class="modal-body">
                <input type="hidden" id="actionId" name="actionId" />
                <table>
                    <tbody>
                        <tr>
                            <td>Goal: </td>
                            <td>
                                <input type="text" id="goal" name="goal" >
                                <input type="hidden" id="goalid" name="goalid" >
                                <a tabindex="-1" title="Show All Items" class="ui-button ui-widget ui-state-default ui-button-icon-only ui-corner-right ui-combobox-toggle ui-state-hover" role="button" aria-disabled="false">
                                    <span class="ui-button-icon-primary ui-icon ui-icon-triangle-1-s"></span><span class="ui-button-text"></span>
                                </a>
                            </td>
                        </tr>
                        <tr>
                            <td>Action name: </td>
                            <td>
                                <input type="text" id="actionName" name="actionName">
                            </td>
                        </tr>
                        <tr>
                            <td>Target date:</td>
                            <td>
                                <input type="text" id="actionTargetDate" name="actionTargetDate"/>
                            </td>
                        </tr>
                        <tr id="actionActualDateRow">
                            <td>Actual date:</td>
                            <td>
                                <input type="text" id="actionActualDate" name="actionActualDate"/>
                            </td>
                        </tr>
                    </tbody>
                </table>
            </div>
            <div class="modal-footer">
                <a href="#" class="btn" data-dismiss="modal">Close</a>
                <a href='#' class="btn btn-primary" id="actionActionLabel">Create Action</a>
            </div>
        </script>

        <div id="modal"></div>
        <a href="#" id="showModal">Show Modal</a>

    Javascript:

        var ActionEditView = Backbone.Marionette.ItemView.extend({
            template: "#edit-dialog"
        });

        function showModal() {
            var view = new ActionEditView();
            view.render();
            var $modalEl = $("#modal");
            $modalEl.html(view.el);
            $modalEl.modal();
        }

        $('#showModal').click(showModal);

    When I click on the Show Modal link, the HTML pane goes dark as expected and the dialog content is displayed, but on the background layer. Why is this happening?

  • Mac 10.6 Universal Binary scipy: cephes/specfun "_aswfa_" symbol not found

    - by Markus
    Hi folks, I can't get scipy to function in 32-bit mode when compiled as an i386/x86_64 universal binary and executed on my 64-bit 10.6.2 MacPro1,1.

    My python setup

    With the help of this answer, I built a 32/64-bit intel universal binary of python 2.6.4 with the intention of using the arch command to select between the architectures. (I managed to make some universal binaries of a few libraries I wanted using lipo.) That all works. I then installed scipy according to the instructions in hyperjeff's article, only with a more up-to-date numpy (1.4.0) and skipping the bit about moving numpy aside briefly during the installation of scipy. Now, everything except scipy seems to be working as far as I can tell, and I can indeed select between 32- and 64-bit mode using arch -i386 python and arch -x86_64 python.

    The error

    Scipy complains in 32-bit mode:

        $ arch -x86_64 python -c "import scipy.interpolate; print 'success'"
        success
        $ arch -i386 python -c "import scipy.interpolate; print 'success'"
        Traceback (most recent call last):
          File "<string>", line 1, in <module>
          File "/Library/Frameworks/Python.framework/Versions/2.6/lib/python2.6/site-packages/scipy/interpolate/__init__.py", line 7, in <module>
            from interpolate import *
          File "/Library/Frameworks/Python.framework/Versions/2.6/lib/python2.6/site-packages/scipy/interpolate/interpolate.py", line 13, in <module>
            import scipy.special as spec
          File "/Library/Frameworks/Python.framework/Versions/2.6/lib/python2.6/site-packages/scipy/special/__init__.py", line 8, in <module>
            from basic import *
          File "/Library/Frameworks/Python.framework/Versions/2.6/lib/python2.6/site-packages/scipy/special/basic.py", line 8, in <module>
            from _cephes import *
        ImportError: dlopen(/Library/Frameworks/Python.framework/Versions/2.6/lib/python2.6/site-packages/scipy/special/_cephes.so, 2): Symbol not found: _aswfa_
          Referenced from: /Library/Frameworks/Python.framework/Versions/2.6/lib/python2.6/site-packages/scipy/special/_cephes.so
          Expected in: flat namespace in /Library/Frameworks/Python.framework/Versions/2.6/lib/python2.6/site-packages/scipy/special/_cephes.so

    Attempt at tracking down the problem

    It looks like scipy.interpolate imports something called _cephes, which looks for a symbol called _aswfa_ but can't find it in 32-bit mode. Browsing through scipy's source, I find an ASWFA subroutine in specfun.f. The only scipy product file with a similar name is specfun.so, but both that and _cephes.so appear to be universal binaries:

        $ cd /Library/Frameworks/Python.framework/Versions/2.6/lib/python2.6/site-packages/scipy/special/
        $ file _cephes.so specfun.so
        _cephes.so: Mach-O universal binary with 2 architectures
        _cephes.so (for architecture i386):     Mach-O bundle i386
        _cephes.so (for architecture x86_64):   Mach-O 64-bit bundle x86_64
        specfun.so: Mach-O universal binary with 2 architectures
        specfun.so (for architecture i386):     Mach-O bundle i386
        specfun.so (for architecture x86_64):   Mach-O 64-bit bundle x86_64

    Ho hum. I'm stuck. Things I may try, but haven't figured out how to yet, include compiling specfun.so myself manually, somehow. I would imagine that scipy isn't broken for all 32-bit machines, so I guess something is wrong with the way I've installed it, but I can't figure out what. I don't really expect a full answer given my fairly unique (?) setup, but if anyone has any clues that might point me in the right direction, they'd be greatly appreciated.

    (edit) More details to address questions:

    I'm using gfortran (GNU Fortran from GCC 4.2.1, Apple Inc. build 5646). Python 2.6.4 was installed more or less like so:

        cd /tmp
        curl -O http://www.python.org/ftp/python/2.6.4/Python-2.6.4.tar.bz2
        tar xf Python-2.6.4.tar.bz2
        cd Python-2.6.4
        # Now replace buggy pythonw.c file with one that supports the "arch" command:
        curl http://bugs.python.org/file14949/pythonw.c | sed s/2.7/2.6/ > Mac/Tools/pythonw.c
        ./configure --enable-framework=/Library/Frameworks --enable-universalsdk=/ --with-universal-archs=intel
        make -j4
        sudo make frameworkinstall

    Scipy 0.7.1 was installed pretty much as described here, but it boils down to a simple sudo python setup.py install.

    It would indeed appear that the symbol is undefined in the i386 architecture if you look at the _cephes library with nm, as suggested by David Cournapeau:

        $ nm -arch x86_64 /Library/Frameworks/Python.framework/Versions/2.6/lib/python2.6/site-packages/scipy/special/_cephes.so | grep _aswfa_
        00000000000d4950 T _aswfa_
        000000000011e4b0 d _oblate_aswfa_data
        000000000011e510 d _oblate_aswfa_nocv_data
        (snip)
        $ nm -arch i386 /Library/Frameworks/Python.framework/Versions/2.6/lib/python2.6/site-packages/scipy/special/_cephes.so | grep _aswfa_
                 U _aswfa_
        0002e96c d _oblate_aswfa_data
        0002e99c d _oblate_aswfa_nocv_data
        (snip)

    however, I can't yet explain its absence.

  • How to Zip one IEnumerable with itself

    - by wageoghe
    I am implementing some math algorithms based on lists of points, like Distance, Area, Centroid, etc., just like in this post: http://stackoverflow.com/questions/2227828/find-the-distance-required-to-navigate-a-list-of-points-using-linq

    That post describes how to calculate the total distance of a sequence of points (taken in order) by essentially zipping the sequence "with itself", generating the sequence for Zip by offsetting the start position of the original IEnumerable by 1. So, given the Zip extension in .NET 4.0, assuming Point for the point type and a reasonable Distance formula, you can make calls like this to generate a sequence of distances from one point to the next and then sum the distances:

        var distances = points.Zip(points.Skip(1), Distance);
        double totalDistance = distances.Sum();

    Area and Centroid calculations are similar in that they need to iterate over the sequence, processing each pair of points (points[i] and points[i+1]). I thought of making a generic IEnumerable extension suitable for implementing these (and possibly other) algorithms that operate over sequences, taking two items at a time (points[0] and points[1], points[1] and points[2], ..., points[n-1] and points[n] -- or is it n-2 and n-1 ...) and applying a function. My generic iterator would have a similar signature to Zip, but it would not receive a second sequence to zip with, as it is really just going to zip with itself. My first try looks like this:

        public static IEnumerable<TResult> ZipMyself<TSequence, TResult>(
            this IEnumerable<TSequence> seq,
            Func<TSequence, TSequence, TResult> resultSelector)
        {
            return seq.Zip(seq.Skip(1), resultSelector);
        }

    With my generic iterator in place, I can write functions like this:

        public static double Length(this IEnumerable<Point> points)
        {
            return points.ZipMyself(Distance).Sum();
        }

    and call it like this:

        double d = points.Length();

    and

        double GreensTheorem(Point p1, Point p2)
        {
            return p1.X * p2.Y - p1.Y * p2.X;
        }

        public static double SignedArea(this IEnumerable<Point> points)
        {
            return points.ZipMyself(GreensTheorem).Sum() / 2.0;
        }

        public static double Area(this IEnumerable<Point> points)
        {
            return Math.Abs(points.SignedArea());
        }

        public static bool IsClockwise(this IEnumerable<Point> points)
        {
            return SignedArea(points) < 0;
        }

    and call them like this:

        double a = points.Area();
        bool isClockwise = points.IsClockwise();

    In this case, is there any reason NOT to implement "ZipMyself" in terms of Zip and Skip(1)? Is there already something in LINQ that automates this (zipping a list with itself) -- not that it needs to be made that much easier ;-)

    Also, is there a better name for the extension that might reflect that it is a well-known pattern (if, indeed, it is a well-known pattern)?

    Had a link here for a StackOverflow question about area calculation (it is question 2432428) and a link to the Wikipedia article on Centroid; just go to Wikipedia and search for Centroid if interested. Just starting out, so I don't have enough rep to post more than one link.
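
    One concrete answer to the "any reason NOT to" question, offered as a hedged sketch rather than a definitive recommendation: Zip(seq, seq.Skip(1)) enumerates the source twice, which can matter when the sequence is lazily generated, expensive to produce, or only safe to enumerate once. A single-pass variant keeps one enumerator; the name Pairwise is a suggestion, not an existing LINQ operator.

        using System;
        using System.Collections.Generic;

        public static class PairwiseExtensions
        {
            public static IEnumerable<TResult> Pairwise<TSource, TResult>(
                this IEnumerable<TSource> source,
                Func<TSource, TSource, TResult> resultSelector)
            {
                using (IEnumerator<TSource> e = source.GetEnumerator())
                {
                    if (!e.MoveNext())
                        yield break;                  // empty sequence: no pairs

                    TSource previous = e.Current;
                    while (e.MoveNext())
                    {
                        // Yield (item[i], item[i+1]) while walking the source once.
                        yield return resultSelector(previous, e.Current);
                        previous = e.Current;
                    }
                }
            }
        }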

  • Game loop and time tracking

    - by David Brown
    Maybe I'm just an idiot, but I've been trying to implement a game loop all day and it's just not clicking. I've read literally every article I could find on Google, but the problem is that they all use different timing mechanisms, which makes them difficult to apply to my particular situation (some use milliseconds, others use ticks, etc.).

    Basically, I have a Clock object that updates each time the game loop executes.

        internal class Clock
        {
            public static long Timestamp
            {
                get { return Stopwatch.GetTimestamp(); }
            }

            public static long Frequency
            {
                get { return Stopwatch.Frequency; }
            }

            private long _startTime;
            private long _lastTime;
            private TimeSpan _totalTime;
            private TimeSpan _elapsedTime;

            /// <summary>
            /// The amount of time that has passed since the first step.
            /// </summary>
            public TimeSpan TotalTime
            {
                get { return _totalTime; }
            }

            /// <summary>
            /// The amount of time that has passed since the last step.
            /// </summary>
            public TimeSpan ElapsedTime
            {
                get { return _elapsedTime; }
            }

            public Clock()
            {
                Reset();
            }

            public void Reset()
            {
                _startTime = Timestamp;
                _lastTime = 0;
                _totalTime = TimeSpan.Zero;
                _elapsedTime = TimeSpan.Zero;
            }

            public void Tick()
            {
                long currentTime = Timestamp;

                if (_lastTime == 0)
                    _lastTime = currentTime;

                _totalTime = TimestampToTimeSpan(currentTime - _startTime);
                _elapsedTime = TimestampToTimeSpan(currentTime - _lastTime);
                _lastTime = currentTime;
            }

            public static TimeSpan TimestampToTimeSpan(long timestamp)
            {
                return TimeSpan.FromTicks(
                    (timestamp * TimeSpan.TicksPerSecond) / Frequency);
            }
        }

    I based most of that on the XNA GameClock, but it's greatly simplified. Then, I have a Time class which holds various times that the Update and Draw methods need to know.

        public class Time
        {
            public TimeSpan ElapsedVirtualTime { get; internal set; }
            public TimeSpan ElapsedRealTime { get; internal set; }
            public TimeSpan TotalVirtualTime { get; internal set; }
            public TimeSpan TotalRealTime { get; internal set; }

            internal Time() { }

            internal Time(TimeSpan elapsedVirtualTime, TimeSpan elapsedRealTime,
                TimeSpan totalVirtualTime, TimeSpan totalRealTime)
            {
                ElapsedVirtualTime = elapsedVirtualTime;
                ElapsedRealTime = elapsedRealTime;
                TotalVirtualTime = totalVirtualTime;
                TotalRealTime = totalRealTime;
            }
        }

    My main class keeps a single instance of Time, which it should constantly update during the game loop. So far, I have this:

        private static void Loop()
        {
            do
            {
                Clock.Tick();

                Time.TotalRealTime = Clock.TotalTime;
                Time.ElapsedRealTime = Clock.ElapsedTime;

                InternalUpdate(Time);
                InternalDraw(Time);
            } while (!_exitRequested);
        }

    The real-time properties of the Time class turn out great. Now I'd like to get a proper update/draw loop working so that the state is updated a variable number of times per frame, but at a fixed timestep. At the same time, Time.TotalVirtualTime and Time.ElapsedVirtualTime should be updated accordingly. In addition, I intend for this to support multiplayer in the future, in case that makes any difference to the design of the game loop.

    Any tips or examples on how I could go about implementing this (aside from links to articles)?
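
    Not an authoritative answer, but the usual shape of what the question asks for is the fixed-timestep accumulator loop. Here is a minimal sketch built directly on the Loop above; the FixedStep and _accumulator fields are additions of mine, everything else reuses the post's own names:

        // Update runs zero or more times per frame at a fixed dt;
        // Draw runs exactly once per frame.
        private static readonly TimeSpan FixedStep = TimeSpan.FromSeconds(1.0 / 60.0);
        private static TimeSpan _accumulator = TimeSpan.Zero;

        private static void Loop()
        {
            do
            {
                Clock.Tick();

                Time.TotalRealTime = Clock.TotalTime;
                Time.ElapsedRealTime = Clock.ElapsedTime;

                // Bank the real time that has passed, then spend it in fixed slices.
                _accumulator += Clock.ElapsedTime;
                while (_accumulator >= FixedStep)
                {
                    Time.TotalVirtualTime += FixedStep;
                    Time.ElapsedVirtualTime = FixedStep;
                    InternalUpdate(Time);
                    _accumulator -= FixedStep;
                }

                InternalDraw(Time);
            } while (!_exitRequested);
        }

    The leftover fraction in _accumulator carries into the next frame, which is what keeps the virtual clock honest regardless of frame rate; it is also the piece that would later let the render step interpolate between the last two update states.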

  • Facebook require_login() in iFrame App

    - by LapKom
    Hi, I have a serious problem with an iframe application. I need to use many external JS libraries and other dynamic stuff, so an FBML application can't be done. When I call require_login() I get the application install dialog when the app is not already installed, which is OK. But then, after authorization, the application enters an endless redirect loop with parameters like auth_token, installed, and so on. Yesterday I managed to fix this, but today it's broken again... What the heck is happening with FB? It's driving me crazy to find a solution; none of the ones found on the net seems to be working. So far I have tried:

      - http://abhirama.wordpress.com/2010/03/07/facebook-iframe-xfbml-app/ (7th March 2010!)
      - http://forum.developers.facebook.com/viewtopic.php?pid=156092
      - http://www.keywordintellect.com/facebook-development/how-to-set-up-a-facebook-iframe-application-in-php-in-5-minutes/
      - http://www.markdeepwell.com/2010/02/validating-a-facebook-session-within-an-iframe/
      - http://forum.developers.facebook.com/viewtopic.php?pid=210449
      - http://www.ajaxlines.com/ajax/stuff/article/facebook_fbml_rendering_in_iframe_application.php
      - http://www.aratide.com/php/solving-the-break-out-issue-in-iframe-facebook-applications/

    None of the above worked... According to those and some FB docs:

      - http://wiki.developers.facebook.com/index.php/FB_RequireFeatures
      - http://wiki.developers.facebook.com/index.php/Cross_Domain_Communication_Channel

    my example test files look as follows:

        <?php
        //Link in library.
        require_once '../application/vendor/Facebook/facebook.php';

        //Authentication Keys
        $appapikey = 'XXXX';
        $appsecret = 'XXXX';

        //Construct the class
        $facebook = new Facebook($appapikey, $appsecret);

        //Require login
        $user_id = $facebook->require_login();
        ?>
        <html xmlns="http://www.w3.org/1999/xhtml" xmlns:fb="http://www.facebook.com/2008/fbml">
        <head>
            <title></title>
        </head>
        <body>
            <script src="http://static.ak.facebook.com/js/api_lib/v0.4/FeatureLoader.js.php" type="text/javascript"></script>
            This is you: <fb:name uid="<?php echo $user_id?>"></fb:name>
            <?php var_dump($facebook->$this->facebook->api_client->friends_get())?>
            <script type="text/javascript">
                FB_RequireFeatures(["XFBML"], function(){
                    FB.Facebook.init("<?=$appapikey?>", "xd_receiver.html");
                });
            </script>
        </body>
        </html>

    And the cross-domain file xd_receiver.html is:

        <!doctype html public "-//w3c//dtd xhtml 1.0 strict//en" "http://www.w3.org/tr/xhtml1/dtd/xhtml1-strict.dtd">
        <html xmlns="http://www.w3.org/1999/xhtml" >
        <head>
            <title>cross-domain receiver page</title>
        </head>
        <body>
            <script src="http://static.ak.facebook.com/js/api_lib/v0.4/XdCommReceiver.js" type="text/javascript"></script>
        </body>
        </html>

    How do I get it working? I'm using the Kohana framework to do this and have already replaced header('Location') with url::redirect() in the Facebook PHP library.

  • How can I bind the nested viewmodels to properties of a control

    - by Robert
    I used Microsoft's Chart control from the WPF Toolkit to write my own chart control. I blogged about it here. My chart control stacks the y-axes in the chart on top of each other. As you can read in the article, this all works quite well. Now I want to create a viewmodel that controls the data and axes in the chart. So far I'm able to add axes to the chart and show them in the chart. But I have a problem when I try to add the line series, because it has a DependentAxis and an InDependentAxis property, and I don't know how to assign the proper x-axis and y-axis controls to it.

    Below you see part of the LineSeriesViewModel. It has nested XAxisViewModel and YAxisViewModel properties.

        public class LineSeriesViewModel : ViewModelBase, IChartComponent
        {
            XAxisViewModel _xAxis;

            public XAxisViewModel XAxis
            {
                get { return _xAxis; }
                set
                {
                    _xAxis = value;
                    RaisePropertyChanged(() => XAxis);
                }
            }

            // The YAxis property looks the same.
        }

    The viewmodels all have their own datatemplate. The XAML code looks like this:

        <UserControl.Resources>
            <DataTemplate x:Key="xAxisTemplate" DataType="{x:Type l:YAxisViewModel}">
                <chart:LinearAxis x:Name="yAxis" Orientation="Y" Location="Left"
                                  Minimum="0" Maximum="10" IsHitTestVisible="False" Width="50" />
            </DataTemplate>
            <DataTemplate x:Key="yAxisTemplate" DataType="{x:Type l:XAxisViewModel}">
                <chart:LinearAxis x:Name="xAxis" Orientation="X" Location="Bottom"
                                  Minimum="0" Maximum="100" IsHitTestVisible="False" Height="50" />
            </DataTemplate>
            <DataTemplate DataType="{x:Type l:LineSeriesViewModel}">
                <!-- Binding doesn't work on the DependentAxis and IndependentAxis! -->
                <!-- YAxis, XAxis and Series are properties of the LineSeriesViewModel -->
                <l:FastLineSeries DependentAxis="{Binding Path=YAxis}"
                                  IndependentAxis="{Binding Path=XAxis}"
                                  ItemsSource="{Binding Path=Series}"/>
            </DataTemplate>
            <Style TargetType="ItemsControl">
                <Setter Property="ItemsPanel">
                    <Setter.Value>
                        <ItemsPanelTemplate>
                            <!-- My stacked chart control -->
                            <l:StackedPanel x:Name="stackedPanel" Width="Auto" Height="Auto"
                                            Background="LightBlue">
                            </l:StackedPanel>
                        </ItemsPanelTemplate>
                    </Setter.Value>
                </Setter>
            </Style>
        </UserControl.Resources>

        <Grid HorizontalAlignment="Stretch" VerticalAlignment="Stretch" ClipToBounds="True">
            <!-- View is an ObservableCollection of all axes and series -->
            <ItemsControl x:Name="chartItems" ItemsSource="{Binding Path=View}" Focusable="False">
            </ItemsControl>
        </Grid>

    This code works quite well. When I add axes they get drawn. But the DependentAxis and InDependentAxis of the line series control stay null, so the series doesn't get drawn. How can I bind the nested viewmodels to the properties of a control?

  • How to configure a GridView CommandField to trigger Full Page Update in UpdatePanel

    - by Frinavale
    I have 2 user controls on my page. One is used for searching, and the other is used for editing (along with a few other things).

    The user control that provides the search functionality uses a GridView to display the search results. This GridView has a CommandField used for editing (ShowEditButton="true"). I would like to place the GridView into an UpdatePanel so that paging through the search results will be smooth. The thing is that when the user clicks the Edit link (the CommandField) I need to perform a full page postback so that the search user control can be hidden and the edit user control can be displayed.

    Edit: the reason I need to do a full page postback is because the edit user control is outside of the UpdatePanel that my GridView is in. It is not only outside the UpdatePanel, but it's in a completely different user control.

    I have no idea how to add the CommandField as a full page postback trigger to the UpdatePanel. The PostBackTrigger (which is used to indicate the controls that cause a full page postback) takes a ControlID as a parameter; however, the CommandField does not have an ID... and you can see why I'm having a problem with this.

    Update on what else I've tried to solve the problem:

    I took a new approach. In this new approach, I used a TemplateField instead of a CommandField. I placed a LinkButton control in the TemplateField and gave it a name. During the GridView's RowDataBound event I retrieved the LinkButton control and added it to the UpdatePanel's triggers. This is the ASP markup for the UpdatePanel and the GridView:

        <asp:UpdatePanel ID="SearchResultsUpdateSection" runat="server">
            <ContentTemplate>
                <asp:GridView ID="SearchResultsGrid" runat="server" AllowPaging="true" AutoGenerateColumns="false">
                    <Columns>
                        <asp:TemplateField>
                            <HeaderTemplate></HeaderTemplate>
                            <ItemTemplate>
                                <asp:LinkButton ID="Edit" runat="server" Text="Edit"></asp:LinkButton>
                            </ItemTemplate>
                        </asp:TemplateField>
                        <asp:BoundField ......
                    </Columns>
                </asp:GridView>
            </ContentTemplate>
        </asp:UpdatePanel>

    In my VB.NET code, I implemented a function that handles the GridView's RowDataBound event. In this method I find the LinkButton for the row being bound, create a PostBackTrigger for the LinkButton, and add it to the UpdatePanel's triggers. This means that a PostBackTrigger is created for every "Edit" LinkButton in the GridView. Edit: this did not create a PostBackTrigger for every "Edit" LinkButton, because the ID is the same for all LinkButtons in the GridView.

        Private Sub SearchResultsGrid_RowDataBound(ByVal sender As Object, ByVal e As System.Web.UI.WebControls.GridViewRowEventArgs) Handles SearchResultsGrid.RowDataBound
            If e.Row.RowType = DataControlRowType.Header Then
                'I am doing stuff here that does not pertain to the problem
            Else
                Dim editLink As LinkButton = CType(e.Row.FindControl("Edit"), LinkButton)
                If editLink IsNot Nothing Then
                    Dim fullPageTrigger As New PostBackTrigger
                    fullPageTrigger.ControlID = editLink.ID
                    SearchResultsUpdateSection.Triggers.Add(fullPageTrigger)
                End If
            End If
        End Sub

    And instead of handling the GridView's RowEditing event for editing purposes, I use the RowCommand instead:

        Private Sub SearchResultsGrid_RowCommand(ByVal sender As Object, ByVal e As System.Web.UI.WebControls.GridViewCommandEventArgs) Handles SearchResultsGrid.RowCommand
            RaiseEvent EditRecord(Me, New EventArgs())
        End Sub

    This new approach didn't work at all, because all of the LinkButtons in the GridView have the same ID. After reading the MSDN article on the UpdatePanel.Triggers property, I have the impression that triggers can only be defined declaratively. This would mean that anything I did in the VB code wouldn't work.

    Any advice would be greatly appreciated. Thanks, -Frinny
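
    A hedged suggestion for later readers (shown in C# although the post's code-behind is VB.NET; the call is the same in both): ScriptManager.RegisterPostBackControl takes the control instance rather than a ControlID string, so it works for template-generated buttons where a declarative PostBackTrigger cannot. A minimal sketch:

        // Wired up via OnRowDataBound="SearchResultsGrid_RowDataBound" in the markup.
        protected void SearchResultsGrid_RowDataBound(object sender, GridViewRowEventArgs e)
        {
            if (e.Row.RowType != DataControlRowType.DataRow)
                return;

            LinkButton editLink = (LinkButton)e.Row.FindControl("Edit");
            if (editLink != null)
            {
                // Opt this one control out of partial rendering: clicking it
                // now triggers a full page postback, even inside the UpdatePanel.
                ScriptManager.GetCurrent(Page).RegisterPostBackControl(editLink);
            }
        }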

  • C# struct and NuSOAP(php)

    - by opx
    Hello. I'm trying to build a client in C# that talks to a remote (PHP) server over SOAP using the NuSOAP library. I'm using a struct/object that will contain the user info of some user:

        public struct UserProfile
        {
            public string username;
            public string password;
            public string email;
            public string site;
            public string signature;
            public int age;
            public int points;
        }

    And this is the PHP code:

        $server->wsdl->addComplexType(
            'UserProfile',
            'complexType',
            'struct',
            'all',
            '',
            array(
                'username' => array('name' => 'username', 'type' => 'xsd:string'),
                'password' => array('name' => 'password', 'type' => 'xsd:string'),
                'email' => array('name' => 'email', 'type' => 'xsd:string'),
                'site' => array('name' => 'site', 'type' => 'xsd:string'),
                'signature' => array('name' => 'signature', 'type' => 'xsd:string'),
                'age' => array('name' => 'age', 'type' => 'xsd:int'),
                'points' => array('name' => 'username', 'type' => 'xsd:int'),
            )
        );

        $server->wsdl->addComplexType(
            'UserProfileArray',
            'complexType',
            'array',
            '',
            'SOAP-ENC:Array',
            array(),
            array(array('ref' => 'SOAP-ENC:arrayType', 'wsdl:arrayType' => 'tns:UserProfile[]')),
            'tns:UserProfile'
        );

        $server->register("getUserProfile",
            array(),
            array('return' => 'tns:UserProfileArray'),
            $namespace,
            false,
            'rpc',
            false,
            'Get the user profile object'
        );

        function getUserProfile(){
            $profile['username'] = "user";
            $profile['password'] = "pass";
            $profile['email'] = "usern@ame";
            $profile['site'] = "u.com";
            $profile['signature'] = "usucsdckme";
            $profile['age'] = 111;
            $profile['points'] = time() / 2444;
            return $profile;
        }

    Now, I already have a working login function, and I want to get the info about the logged-in user, but I don't know how to obtain it. This is what I'm using to get the user info:

        string user = txtUser.Text;
        string pass = txtPass.Text;
        SimpleService.SimpleService service = new SimpleService.SimpleService();
        if (service.login(user, pass))
        {
            // logged in
        }

        SoapApp.SimpleService.UserProfile[] profiles = service.getUserProfile(); // THIS LINE GIVES ME AN EXCEPTION
        MessageBox.Show(profiles[0].username + "--" + profiles[0].points);

    The getUserProfile() function produces an error:

        System.Web.Services.Protocols.SoapException was unhandled
        Message="unable to serialize result"
        Source="System.Web.Services"

    or I get something like a "can't parse xml" error. The article I used for this was: http://www.sanity-free.org/125/php_webservices_and_csharp_dotnet_soap_clients.html

    The difference between what they are doing and what I'm trying to do is that I only want to get one object returned instead of multiple 'MySoapObjects'. I hope someone is familiar with this and could help me; thanks in advance! Regards, opx

  • Extending XHTML

    - by Daniel Schaffer
    I'm playing around with writing a jQuery plugin that uses an attribute to define form validation behavior (yes, I'm aware there's already a validation plugin; this is as much a learning exercise as something I'll be using). Ideally, I'd like to have something like this:

    Example 1 - input:

        <input id="name" type="text" v:onvalidate="return this.value.length > 0;" />

    Example 2 - wrapper:

        <div v:onvalidate="return $(this).find('[value]').length > 0;">
            <input id="field1" type="text" />
            <input id="field2" type="text" />
            <input id="field3" type="text" />
        </div>

    Example 3 - predefined:

        <input id="name" type="text" v:validation="not empty" />

    The goal here is to allow my jQuery code to figure out which elements need to be validated (this is already done) and still have the markup be valid XHTML, which is what I'm having a problem with. I'm fairly sure this will require a combination of both DTD and XML Schema, but I'm not really quite sure how exactly to execute. Based on this article, I've created the following DTD:

        <!ENTITY % XHTML1-formvalidation1 PUBLIC
            "-//W3C//DTD XHTML 1.1 +FormValidation 1.0//EN"
            "http://new.dandoes.net/DTD/FormValidation1.dtd" >
        %XHTML1-formvalidation1;

        <!ENTITY % Inlspecial.extra "%div.qname; " >

        <!ENTITY % xhmtl-model.mod SYSTEM "formvalidation-model-1.mod" >

        <!ENTITY % xhtml11.dtd PUBLIC
            "-//W3C//DTD XHTML 1.1//EN"
            "http://www.w3.org/TR/xhtml11/DTD/xhtml11.dtd" >
        %xhtml11.dtd;

    And here is "formvalidation-model-1":

        <!ATTLIST %div.qname;
            %onvalidation CDATA #IMPLIED
            %XHTML1-formvalidation1.xmlns.extra.attrib;
        >

    I've never done DTD before, so I'm not even really exactly sure what I'm doing. When I run my page through the W3 XHTML validator, I get 80+ errors, because it's getting duplicate definitions of all the XHTML elements. Am I at least on the right track? Any suggestions?

    EDIT: I removed this section from my custom DTD, because it turned out that it was actually self-referencing, and the code I got the template from was really for combining two DTDs into one, not appending specific items to one:

        <!ENTITY % XHTML1-formvalidation1 PUBLIC
            "-//W3C//DTD XHTML 1.1 +FormValidation 1.0//EN"
            "http://new.dandoes.net/DTD/FormValidation1.dtd" >
        %XHTML1-formvalidation1;

    I also removed this, because it wasn't validating and didn't seem to be doing anything:

        <!ENTITY % Inlspecial.extra "%div.qname; " >

    Additionally, I decided that since I'm only adding a handful of additional items, the separate-files model recommended by W3 doesn't really seem that helpful, so I've put everything into the DTD file, the content of which is now this:

        <!ATTLIST div onvalidate CDATA #IMPLIED>

        <!ENTITY % xhtml11.dtd PUBLIC
            "-//W3C//DTD XHTML 1.1//EN"
            "http://www.w3.org/TR/xhtml11/DTD/xhtml11.dtd" >
        %xhtml11.dtd;

    So now I'm not getting any DTD-related validation errors, but the onvalidate attribute still is not valid.

    Update: I've ditched the DTD and added a schema: http://schema.dandoes.net/FormValidation/1.0.xsd

    Using v:onvalidate appears to validate in Visual Studio, but the W3C service still doesn't like it. Here's a page where I'm using it so you can look at the source: http://new.dandoes.net/auth

    And here's the link to the W3C validation result: http://validator.w3.org/check?uri=http://new.dandoes.net/auth&charset=(detect+automatically)&doctype=Inline&group=0

    Is this about as close as I'll be able to get with this, or am I still doing something wrong?

    Read the article

  • NHibernate, VS 2010

- by Andrew
(Translated from Russian; the original text arrived mis-encoded, so only the cleaned-up translation is reproduced here.) Hello, ANRY! Just recently, during my practical work at university, I came across NHibernate. I read your article "Hello NHibernate!" and took on implementing something like a store: that is, goods, clients, orders. Accordingly, I created 4 tables in MSSQL 2010: Goods (id_tovara, name, price), Client (id_klienta, name, surname), Order (id_zakaza, id_klienta, cost), and Order Line (id_stroki, id_zakaza, id_tovara, quantity). Accordingly, I created 4 classes: Product, Customer, Order, Order Line. My question is this: do I need to create 4 mapping files, or can I get away with just one? Also, on Debug I get the following error: "Could not compile the mapping document: Sklad.products.hbm.xml", while "Build" completes normally, with no errors. What might the problem be, and how can I solve it? Regards, Andrew. P.S. If it's not too much trouble, please reply by e-mail: [email protected]
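On the mapping-file question: NHibernate accepts either one <class> element per .hbm.xml file or several classes in a single file, so both layouts work. The "Could not compile the mapping document" error at runtime (with a clean build) usually means NHibernate found the file but could not parse it, because the XML is malformed, names a class or assembly it cannot resolve, or the file was not compiled in as an Embedded Resource; the inner exception carries the specifics. A minimal C# configuration sketch, assuming classic hbm.xml mappings and an assembly name of "Sklad" inferred from the resource prefix in the error message:

using NHibernate;
using NHibernate.Cfg;

class SessionFactoryBuilder
{
    static ISessionFactory Build()
    {
        Configuration cfg = new Configuration();
        cfg.Configure(); // reads hibernate.cfg.xml / app.config settings

        // Picks up every *.hbm.xml compiled into the "Sklad" assembly;
        // each mapping file must have Build Action = Embedded Resource.
        cfg.AddAssembly("Sklad");

        return cfg.BuildSessionFactory();
    }
}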

    Read the article

  • More Outlook VSTO help...

    - by Jerry
I posted an article here (http://stackoverflow.com/questions/1095195/how-do-i-set-permissions-on-my-vsto-outlook-add-in) and I was able to build my installer. I thought that once the installer was built, everything would work fine. I was wrong. It works on about half of the PCs I've run the installer on. My problem is that the other half doesn't work. I'm trying to install an add-in for Outlook 2003. I've even gone so far as to create the steps manually by using a batch file. Nothing seems to work on these PCs, and I can't find a common denominator that I can rule in or out that will make the VSTO add-in work. Here is the batch file I am using. What am I doing (or not doing) wrong with this? I could really use a VSTO expert's help. Thanks!!!!

EDIT: I've changed the batch file and registry settings to reflect recent updates to them. I've also attached the error text that comes from the PCs that don't work.

@echo off
echo Installing Visual Studio for Office Runtime (SE 2005)...
..\VSTO\vstor.exe

echo Creating Directories...
mkdir "c:\program files\Project Archiver"

echo Installing Add-In...
echo Copying files...
xcopy /Y *.dll "c:\program files\Project Archiver"
xcopy /Y *.manifest "c:\program files\Project Archiver"

echo Setting Security...
"C:\Windows\Microsoft.NET\Framework\v2.0.50727\caspol.exe" -polchgprompt off
"C:\Windows\Microsoft.NET\Framework\v2.0.50727\caspol.exe" -u -ag All_Code -url "c:\program files\Project Archiver\ProjectArchiver.dll" FullTrust -n "Project Archiver" -d "Outlook plugin for archiving"
"C:\Windows\Microsoft.NET\Framework\v2.0.50727\caspol.exe" -u -ag All_Code -url "c:\program files\Project Archiver\Microsoft.Office.Interop.SmartTags.dll" FullTrust -n "Project Archiver" -d "Outlook plugin for archiving"
"C:\Windows\Microsoft.NET\Framework\v2.0.50727\caspol.exe" -polchgprompt on

echo Loading Registry Values...
"c:\program files\Project Archiver\VSTO_settings.reg"

echo "That should do it."
pause

I took the Registry settings (mentioned in the batch file above) straight from a PC where this application worked.
The VSTO registry settings I am using are:

Windows Registry Editor Version 5.00

[HKEY_CLASSES_ROOT\ProjectArchiver\CLSID]
@="{27830B8D-F7A1-4945-AC4A-47661B9ED36D}"

[HKEY_CLASSES_ROOT\CLSID\{27830B8D-F7A1-4945-AC4A-47661B9ED36D}]
@="ProjectArchiver -- an addin created with VSTO technology"

[HKEY_CLASSES_ROOT\CLSID\{27830B8D-F7A1-4945-AC4A-47661B9ED36D}\InprocServer32]
@=hex(2):25,00,43,00,6f,00,6d,00,6d,00,6f,00,6e,00,50,00,72,00,6f,00,67,00,72,\
  00,61,00,6d,00,46,00,69,00,6c,00,65,00,73,00,25,00,5c,00,4d,00,69,00,63,00,\
  72,00,6f,00,73,00,6f,00,66,00,74,00,20,00,53,00,68,00,61,00,72,00,65,00,64,\
  00,5c,00,56,00,53,00,54,00,4f,00,5c,00,38,00,2e,00,30,00,5c,00,41,00,64,00,\
  64,00,69,00,6e,00,4c,00,6f,00,61,00,64,00,65,00,72,00,2e,00,64,00,6c,00,6c,\
  00,00,00
"ManifestName"="ProjectArchiver.dll.manifest"
"ThreadingModel"="Both"
"ManifestLocation"="C:\\Program Files\\Project Archiver\\"

[HKEY_CLASSES_ROOT\CLSID\{27830B8D-F7A1-4945-AC4A-47661B9ED36D}\ProgID]
@="ProjectArchiver"

[HKEY_CLASSES_ROOT\CLSID\{27830B8D-F7A1-4945-AC4A-47661B9ED36D}\Programmable]

[HKEY_CLASSES_ROOT\CLSID\{27830B8D-F7A1-4945-AC4A-47661B9ED36D}\VersionIndependentProgID]
@="ProjectArchiver"

[HKEY_CLASSES_ROOT\ProjectArchiver]
@=""

[HKEY_LOCAL_MACHINE\Software\Classes\CLSID\{27830B8D-F7A1-4945-AC4A-47661B9ED36D}]
@="ProjectArchiver -- an addin created with VSTO technology"

[HKEY_LOCAL_MACHINE\Software\Classes\CLSID\{27830B8D-F7A1-4945-AC4A-47661B9ED36D}\InprocServer32]
@=hex(2):25,00,43,00,6f,00,6d,00,6d,00,6f,00,6e,00,50,00,72,00,6f,00,67,00,72,\
  00,61,00,6d,00,46,00,69,00,6c,00,65,00,73,00,25,00,5c,00,4d,00,69,00,
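When an add-in loads on some machines and not others, one quick way to narrow things down is to dump the relevant registry state on a failing PC and diff it against a working one. A hedged C# diagnostic sketch, using only the standard Microsoft.Win32 API and the CLSID from the .reg file above:

using System;
using Microsoft.Win32;

class AddinRegCheck
{
    const string Clsid = "{27830B8D-F7A1-4945-AC4A-47661B9ED36D}";

    static void Main()
    {
        // The .reg file writes under both hives; check both.
        Dump(Registry.ClassesRoot, @"CLSID\" + Clsid + @"\InprocServer32");
        Dump(Registry.LocalMachine, @"Software\Classes\CLSID\" + Clsid + @"\InprocServer32");
    }

    static void Dump(RegistryKey root, string subkey)
    {
        using (RegistryKey key = root.OpenSubKey(subkey))
        {
            if (key == null)
            {
                Console.WriteLine("MISSING: {0}\\{1}", root.Name, subkey);
                return;
            }
            foreach (string name in key.GetValueNames())
                Console.WriteLine("{0}\\{1} [{2}] = {3}",
                    root.Name, subkey, name, key.GetValue(name));
        }
    }
}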

    Read the article

  • Why C# doesn't implement indexed properties?

    - by Thomas Levesque
I know, I know... Eric Lippert's answer to this kind of question is usually something like "because it wasn't worth the cost of designing, implementing, testing and documenting it". But still, I'd like a better explanation... I was reading this blog post about new C# 4 features, and in the section about COM interop, the following part caught my attention:

By the way, this code uses one more new feature: indexed properties (take a closer look at those square brackets after Range.) But this feature is available only for COM interop; you cannot create your own indexed properties in C# 4.0.

OK, but why? I already knew and regretted that it wasn't possible to create indexed properties in C#, but this sentence made me think again about it. I can see several good reasons to implement it:

- the CLR supports it (for instance, PropertyInfo.GetValue has an index parameter), so it's a pity we can't take advantage of it in C#
- it is supported for COM interop, as shown in the article (using dynamic dispatch)
- it is implemented in VB.NET
- it is already possible to create indexers, i.e. to apply an index to the object itself, so it would probably be no big deal to extend the idea to properties, keeping the same syntax and just replacing this with a property name

It would allow one to write this kind of thing:

public class Foo
{
    private string[] _values = new string[3];

    public string Values[int index]
    {
        get { return _values[index]; }
        set { _values[index] = value; }
    }
}

Currently the only workaround that I know of is to create an inner class (ValuesCollection for instance) that implements an indexer, and change the Values property so that it returns an instance of that inner class. This is very easy to do, but annoying... So perhaps the compiler could do it for us! An option would be to generate an inner class that implements the indexer, and expose it through a public generic interface:

// interface defined in the namespace System
public interface IIndexer<TIndex, TValue>
{
    TValue this[TIndex index] { get; set; }
}

public class Foo
{
    private string[] _values = new string[3];

    private class <>c__DisplayClass1 : IIndexer<int, string>
    {
        private Foo _foo;
        public <>c__DisplayClass1(Foo foo)
        {
            _foo = foo;
        }

        public string this[int index]
        {
            get { return _foo._values[index]; }
            set { _foo._values[index] = value; }
        }
    }

    private IIndexer<int, string> <>f__valuesIndexer;

    public IIndexer<int, string> Values
    {
        get
        {
            if (<>f__valuesIndexer == null)
                <>f__valuesIndexer = new <>c__DisplayClass1(this);
            return <>f__valuesIndexer;
        }
    }
}

But of course, in that case the property would actually return an IIndexer<int, string>, and wouldn't really be an indexed property... It would be better to generate a real CLR indexed property. What do you think? Would you like to see this feature in C#? If not, why?
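For reference, a minimal compilable sketch of the workaround the post alludes to, exposing a small indexer object from a property; the name ValuesCollection comes from the post, while the implementation details are assumptions:

public class Foo
{
    private readonly string[] _values = new string[3];
    private ValuesCollection _valuesCollection;

    public ValuesCollection Values
    {
        get { return _valuesCollection ?? (_valuesCollection = new ValuesCollection(this)); }
    }

    // Inner helper whose indexer stands in for the missing indexed property.
    public class ValuesCollection
    {
        private readonly Foo _owner;
        internal ValuesCollection(Foo owner) { _owner = owner; }

        public string this[int index]
        {
            get { return _owner._values[index]; }
            set { _owner._values[index] = value; }
        }
    }
}

Usage then reads almost like the proposed syntax: foo.Values[0] = "hello"; the extra cost is one small allocation and the boilerplate the post would like the compiler to generate.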

    Read the article

  • Databinding to ObservableCollection in a different UserControl - how to preserve current selections?

    - by Dave
Scope of question expanded on 2010-03-25: I ended up figuring out my problem, but here's a new problem that came up as a result of solving the original question, because I want to be able to award the bounty to someone!!! Once I figured out my problem, I soon found out that when the ObservableCollection updates, the databound ComboBox has its contents repopulated, but most of the selections have been blanked out. I assume that in this case, MVVM is going to make it difficult for me to remember the last selected item. I have an idea, but it seems a little nasty. I'll award the bounty to whomever comes up with a nice solution for this!

Question re-written on 2010-03-24: I have two UserControls, where one is a dialog that has a TabControl, and the other is one that appears within said TabControl. I'll just call them CandyDialog and CandyNameViewer for simplicity's sake. There's also a data management class called Tracker that manages information storage, which for all intents and purposes just exposes a public property that is an ObservableCollection. I display the CandyNameViewer in CandyDialog via code-behind, like this:

private void CandyDialog_Loaded(object sender, RoutedEventArgs e)
{
    _candyviewer = new CandyViewer();
    _candyviewer.DataContext = _tracker;
    candy_tab.Content = _candyviewer;
}

The CandyViewer's XAML looks like this (edited for kaxaml):

<Page xmlns="http://schemas.microsoft.com/winfx/2006/xaml/presentation" xmlns:x="http://schemas.microsoft.com/winfx/2006/xaml">
  <Page.Resources>
    <DataTemplate x:Key="CandyItemTemplate">
      <Grid>
        <Grid.ColumnDefinitions>
          <ColumnDefinition Width="120"></ColumnDefinition>
          <ColumnDefinition Width="150"></ColumnDefinition>
        </Grid.ColumnDefinitions>
        <TextBox Grid.Column="0" Text="{Binding CandyName}" Margin="3"></TextBox>
        <!-- just binding to DataContext ends up using InventoryItem as parent, so we need to get to the UserControl -->
        <ComboBox Grid.Column="1" SelectedItem="{Binding SelectedCandy, Mode=TwoWay}" ItemsSource="{Binding RelativeSource={RelativeSource FindAncestor, AncestorType={x:Type UserControl}}, Path=DataContext.CandyNames}" Margin="3"></ComboBox>
      </Grid>
    </DataTemplate>
  </Page.Resources>
  <Grid>
    <ListBox DockPanel.Dock="Top" ItemsSource="{Binding CandyBoxContents, Mode=TwoWay}" ItemTemplate="{StaticResource CandyItemTemplate}" />
  </Grid>
</Page>

Now everything works fine when the controls are loaded. As long as CandyNames is populated first, and then the consumer UserControl is displayed, all of the names are there. I obviously don't get any errors in the Output window or anything like that. The issue I have is that when the ObservableCollection is modified from the model, those changes are not reflected in the consumer UserControl! I've never had this problem before; all of my previous uses of ObservableCollection updated fine, although in those cases I wasn't databinding across assemblies. Although I am currently only adding and removing candy names to/from the ObservableCollection, at a later date I will likely also allow renaming from the model side. Is there something I did wrong? Is there a good way to actually debug this? Reed Copsey indicates here that inter-UserControl databinding is possible. Unfortunately, my favorite Bea Stollnitz article on WPF databinding debugging doesn't suggest anything that I could use for this particular problem.
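On the bounty question (selections blanking out when the collection repopulates), one common MVVM approach is to capture the selection before mutating the collection and re-assert it afterwards, so the two-way SelectedItem binding is pushed back to the remembered value. A minimal sketch under that assumption; class and member names here are stand-ins, not the poster's actual code:

using System.Collections.Generic;
using System.Collections.ObjectModel;
using System.ComponentModel;
using System.Linq;

public class CandyViewModel : INotifyPropertyChanged
{
    private string _selectedCandy;

    public ObservableCollection<string> CandyNames { get; private set; }

    public CandyViewModel()
    {
        CandyNames = new ObservableCollection<string>();
    }

    public string SelectedCandy
    {
        get { return _selectedCandy; }
        set
        {
            if (_selectedCandy == value) return;
            _selectedCandy = value;
            OnPropertyChanged("SelectedCandy");
        }
    }

    // Repopulating the collection makes the bound ComboBox drop its
    // SelectedItem; capture the selection first and re-assert it after.
    public void RefreshCandyNames(IEnumerable<string> newNames)
    {
        string remembered = _selectedCandy;
        CandyNames.Clear();
        foreach (string name in newNames)
            CandyNames.Add(name);
        SelectedCandy = CandyNames.FirstOrDefault(n => n == remembered);
    }

    public event PropertyChangedEventHandler PropertyChanged;
    private void OnPropertyChanged(string propertyName)
    {
        PropertyChangedEventHandler handler = PropertyChanged;
        if (handler != null)
            handler(this, new PropertyChangedEventArgs(propertyName));
    }
}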

    Read the article

  • Code Golf: Finite-state machine!

    - by Adam Matan
Finite state machine

A deterministic finite state machine is a simple computation model, widely used as an introduction to automata theory in basic CS courses. It is a simple model, equivalent to regular expressions, which determines whether a certain input string is Accepted or Rejected. Leaving some formalities aside, a run of a finite state machine is composed of:

- alphabet, a set of characters.
- states, usually visualized as circles. One of the states must be the start state. Some of the states might be accepting, usually visualized as double circles.
- transitions, usually visualized as directed arcs between states, are directed links between states associated with an alphabet letter.
- input string, a list of alphabet characters.

A run on the machine begins at the starting state. Each letter of the input string is read; if there is a transition between the current state and another state which corresponds to the letter, the current state is changed to the new state. After the last letter was read, if the current state is an accepting state, the input string is accepted. If the last state was not an accepting state, or a letter had no corresponding arc from a state during the run, the input string is rejected. Note: this short description is far from being a full, formal definition of an FSM; Wikipedia's fine article is a great introduction to the subject.

Example

For example, the following machine tells if a binary number, read from left to right, has an even number of 0s:

- The alphabet is the set {0,1}.
- The states are S1 and S2.
- The transitions are (S1, 0) -> S2, (S1, 1) -> S1, (S2, 0) -> S1 and (S2, 1) -> S2.
- The input string is any binary number, including an empty string.

The rules: Implement an FSM in a language of your choice.

Input

The FSM should accept the following input:

<States> List of states, separated by spaces. The first state in the list is the start state. Accepting states begin with a capital letter.
<transitions> One or more lines. Each line is a three-tuple: (origin state, letter, destination state)
<input word> Zero or more characters, followed by a newline.

For example, the aforementioned machine with 1001010 as an input string would be written as:

S1 s2
S1 0 s2
S1 1 S1
s2 0 S1
s2 1 s2
1001010

Output

The FSM's run, written as <state> <letter> -> <state>, followed by the final verdict. The output for the example input would be:

S1 1 -> S1
S1 0 -> s2
s2 0 -> S1
S1 1 -> S1
S1 0 -> s2
s2 1 -> s2
s2 0 -> S1
ACCEPT

For the empty input '':

S1
ACCEPT

For 101:

S1 1 -> S1
S1 0 -> s2
s2 1 -> s2
REJECT

For '10X':

S1 1 -> S1
S1 0 -> s2
s2 X
REJECT

Prize: A nice bounty will be given to the most elegant and short solution.

Reference implementation: A reference Python implementation will be published soon.
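The promised reference implementation is Python and not reproduced here; as a stopgap, here is a straightforward (deliberately non-golfed) C# sketch of the transition-tracing logic. It assumes input arrives on standard input exactly as specified; corner-case formatting, such as echoing the start state for an empty input, follows the examples only loosely:

using System;
using System.Collections.Generic;

class Fsm
{
    static void Main()
    {
        // First line: states; the first is the start state,
        // and capitalized states are accepting.
        string[] states = Console.ReadLine().Split(' ');
        string current = states[0];

        // Middle lines: transitions "origin letter destination";
        // the first non-three-token line is the input word.
        var transitions = new Dictionary<string, string>();
        string line, word = "";
        while ((line = Console.ReadLine()) != null)
        {
            string[] parts = line.Split(' ');
            if (parts.Length == 3)
                transitions[parts[0] + " " + parts[1]] = parts[2];
            else
            {
                word = line;
                break;
            }
        }

        bool rejected = false;
        foreach (char letter in word)
        {
            string key = current + " " + letter;
            string next;
            if (!transitions.TryGetValue(key, out next))
            {
                // No arc for this letter: print the dead end and reject.
                Console.WriteLine("{0} {1}", current, letter);
                rejected = true;
                break;
            }
            Console.WriteLine("{0} {1} -> {2}", current, letter, next);
            current = next;
        }

        if (rejected || !char.IsUpper(current[0]))
            Console.WriteLine("REJECT");
        else
            Console.WriteLine("ACCEPT");
    }
}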

    Read the article

  • Drawing a TextBox in an extended Glass Frame w/o WPF

    - by Lazlo
I am trying to draw a TextBox on the extended glass frame of my form. I won't describe this technique, it's well known. Here's an example for those who haven't heard of it: http://www.danielmoth.com/Blog/Vista-Glass-In-C.aspx

The thing is, it is complex to draw over this glass frame. Since black is considered to be the zero-alpha color, anything black disappears. There are apparently ways of countering this problem: complex shapes drawn with GDI+ are not affected by this alpha issue. For example, this code can be used to draw a Label on glass (note: GraphicsPath is used instead of DrawString in order to get around the horrible ClearType problem):

public class GlassLabel : Control
{
    public GlassLabel()
    {
        this.BackColor = Color.Black;
    }

    protected override void OnPaint(PaintEventArgs e)
    {
        GraphicsPath font = new GraphicsPath();
        font.AddString(
            this.Text,
            this.Font.FontFamily,
            (int)this.Font.Style,
            this.Font.Size,
            Point.Empty,
            StringFormat.GenericDefault);
        e.Graphics.SmoothingMode = SmoothingMode.HighQuality;
        e.Graphics.FillPath(new SolidBrush(this.ForeColor), font);
    }
}

Similarly, such an approach can be used to create a container on the glass area. Note the use of the polygons instead of the rectangle - when using the rectangle, its black parts are considered as alpha.

public class GlassPanel : Panel
{
    public GlassPanel()
    {
        this.BackColor = Color.Black;
    }

    protected override void OnPaint(PaintEventArgs e)
    {
        Point[] area = new Point[]
        {
            new Point(0, 1),
            new Point(1, 0),
            new Point(this.Width - 2, 0),
            new Point(this.Width - 1, 1),
            new Point(this.Width - 1, this.Height - 2),
            new Point(this.Width - 2, this.Height - 1),
            new Point(1, this.Height - 1),
            new Point(0, this.Height - 2)
        };
        Point[] inArea = new Point[]
        {
            new Point(1, 1),
            new Point(this.Width - 1, 1),
            new Point(this.Width - 1, this.Height - 1),
            new Point(this.Width - 1, this.Height - 1),
            new Point(1, this.Height - 1)
        };
        e.Graphics.FillPolygon(new SolidBrush(Color.FromArgb(240, 240, 240)), inArea);
        e.Graphics.DrawPolygon(new Pen(Color.FromArgb(55, 0, 0, 0)), area);
        base.OnPaint(e);
    }
}

Now my problem is: how can I draw a TextBox? After lots of Googling, I came up with the following solutions:

1. Overriding the TextBox's OnPaint method. This is possible, although I could not get it to work properly. It should involve painting some magic things I don't know how to do yet.
2. Making my own custom TextBox, perhaps on a TextBoxBase. If anyone has good, valid and working examples, and thinks this could be a good overall solution, please tell me.
3. Using BufferedPaintSetAlpha (http://msdn.microsoft.com/en-us/library/ms649805.aspx). The downside of this method may be that the corners of the textbox might look odd, but I can live with that. If anyone knows how to implement that method properly from a Graphics object, please tell me. I personally don't, but this seems the best solution so far. To be honest, I found a great C++ article, but I am way too lazy to convert it: http://weblogs.asp.net/kennykerr/archive/2007/01/23/controls-and-the-desktop-window-manager.aspx

Note: If I ever succeed with the BufferedPaint methods, I swear to s/o that I will make a simple DLL with all the common Windows Forms controls drawable on glass.
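For option 3, the relevant API is the buffered-paint family in uxtheme.dll. A hedged P/Invoke sketch of the declarations involved; signatures are simplified to IntPtr where the native side takes optional struct pointers, and the usual BufferedPaintInit/BufferedPaintUnInit bookkeeping is omitted, so treat this as a starting point rather than drop-in code:

using System;
using System.Runtime.InteropServices;

static class BufferedPaintNative
{
    [StructLayout(LayoutKind.Sequential)]
    public struct RECT { public int Left, Top, Right, Bottom; }

    // dwFormat is BP_BUFFERFORMAT: 0 = compatible bitmap, 1 = DIB, 2 = top-down DIB.
    [DllImport("uxtheme.dll")]
    public static extern IntPtr BeginBufferedPaint(
        IntPtr hdcTarget, ref RECT prcTarget, int dwFormat,
        IntPtr pPaintParams, out IntPtr phdc);

    // Pass IntPtr.Zero for prc to apply the alpha to the whole buffer.
    [DllImport("uxtheme.dll")]
    public static extern int BufferedPaintSetAlpha(
        IntPtr hBufferedPaint, IntPtr prc, byte alpha);

    [DllImport("uxtheme.dll")]
    public static extern int EndBufferedPaint(
        IntPtr hBufferedPaint, bool fUpdateTarget);
}

The rough flow is: obtain the HDC for the control, call BeginBufferedPaint over its client rectangle, let the control paint into the returned buffer HDC, call BufferedPaintSetAlpha(buffer, IntPtr.Zero, 255) to force the pixels opaque so black text survives on glass, then EndBufferedPaint with fUpdateTarget = true.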

    Read the article

  • Does anyone know how to appropriately deal with user timezones in rails 2.3?

    - by Amazing Jay
We're building a Rails app that needs to display dates (and more importantly, calculate them) in multiple timezones. Can anyone point me towards how to work with user timezones in Rails 2.3 (.5 or .8)?

The most inclusive article I've seen detailing how user time zones are supposed to work is here: http://wiki.rubyonrails.org/howtos/time-zones, although it is unclear when this was written or for what version of Rails. Specifically, it states that:

"Time.zone - The time zone that is actually used for display purposes. This may be set manually to override config.time_zone on a per-request basis."

Key terms being "display purposes" and "per-request basis". Locally on my machine, this is true. However, on production, neither is true. Setting Time.zone persists past the end of the request (to all subsequent requests) and also affects the way AR saves to the DB (basically treating any date as if it were already in UTC even when it's not), thus saving completely inappropriate values. We run Ruby Enterprise Edition on production with Passenger. If this is my problem, do we need to switch to JRuby or something else?

To illustrate the problem, I put the following actions in my ApplicationController right now:

def test
  p_time = Time.now.utc
  s_time = Time.utc(p_time.year, p_time.month, p_time.day, p_time.hour)
  logger.error "TIME.ZONE" + Time.zone.inspect
  logger.error ENV['TZ'].inspect
  logger.error p_time.inspect
  logger.error s_time.inspect
  jl = JunkLead.create!
  jl.date_at = s_time
  logger.error s_time.inspect
  logger.error jl.date_at.inspect
  jl.save!
  logger.error s_time.inspect
  logger.error jl.date_at.inspect
  render :nothing => true, :status => 200
end

def test2
  Time.zone = 'Mountain Time (US & Canada)'
  logger.error "TIME.ZONE" + Time.zone.inspect
  logger.error ENV['TZ'].inspect
  render :nothing => true, :status => 200
end

def test3
  Time.zone = 'UTC'
  logger.error "TIME.ZONE" + Time.zone.inspect
  logger.error ENV['TZ'].inspect
  render :nothing => true, :status => 200
end

and they yield the following:

Processing ApplicationController#test (for 98.202.196.203 at 2010-12-24 22:15:50) [GET]
TIME.ZONE#<ActiveSupport::TimeZone:0x2c57a68 @tzinfo=#<TZInfo::DataTimezone: Etc/UTC>, @name="UTC", @utc_offset=0>
nil
Fri Dec 24 22:15:50 UTC 2010
Fri Dec 24 22:00:00 UTC 2010
Fri Dec 24 22:00:00 UTC 2010
Fri, 24 Dec 2010 22:00:00 UTC +00:00
Fri Dec 24 22:00:00 UTC 2010
Fri, 24 Dec 2010 22:00:00 UTC +00:00
Completed in 21ms (View: 0, DB: 4) | 200 OK [http://www.dealsthatmatter.com/test]

Processing ApplicationController#test2 (for 98.202.196.203 at 2010-12-24 22:15:53) [GET]
TIME.ZONE#<ActiveSupport::TimeZone:0x2c580a8 @tzinfo=#<TZInfo::DataTimezone: America/Denver>, @name="Mountain Time (US & Canada)", @utc_offset=-25200>
nil
Completed in 143ms (View: 1, DB: 3) | 200 OK [http://www.dealsthatmatter.com/test2]

Processing ApplicationController#test (for 98.202.196.203 at 2010-12-24 22:15:59) [GET]
TIME.ZONE#<ActiveSupport::TimeZone:0x2c580a8 @tzinfo=#<TZInfo::DataTimezone: America/Denver>, @name="Mountain Time (US & Canada)", @utc_offset=-25200>
nil
Fri Dec 24 22:15:59 UTC 2010
Fri Dec 24 22:00:00 UTC 2010
Fri Dec 24 22:00:00 UTC 2010
Fri, 24 Dec 2010 15:00:00 MST -07:00
Fri Dec 24 22:00:00 UTC 2010
Fri, 24 Dec 2010 15:00:00 MST -07:00
Completed in 20ms (View: 0, DB: 4) | 200 OK [http://www.dealsthatmatter.com/test]

Processing ApplicationController#test3 (for 98.202.196.203 at 2010-12-24 22:16:03) [GET]
TIME.ZONE#<ActiveSupport::TimeZone:0x2c57a68 @tzinfo=#<TZInfo::DataTimezone: Etc/UTC>, @name="UTC", @utc_offset=0>
nil
Completed in 17ms (View: 0, DB: 2) | 200 OK [http://www.dealsthatmatter.com/test3]

Processing ApplicationController#test (for 98.202.196.203 at 2010-12-24 22:16:04) [GET]
TIME.ZONE#<ActiveSupport::TimeZone:0x2c57a68 @tzinfo=#<TZInfo::DataTimezone: Etc/UTC>, @name="UTC", @utc_offset=0>
nil
Fri Dec 24 22:16:05 UTC 2010
Fri Dec 24 22:00:00 UTC 2010
Fri Dec 24 22:00:00 UTC 2010
Fri, 24 Dec 2010 22:00:00 UTC +00:00
Fri Dec 24 22:00:00 UTC 2010
Fri, 24 Dec 2010 22:00:00 UTC +00:00
Completed in 151ms (View: 0, DB: 4) | 200 OK [http://www.dealsthatmatter.com/test]

It should be clear above that the 2nd call to /test shows Time.zone set to Mountain, even though it shouldn't. Additionally, checking the database reveals that the test action, when run after test2, saved a JunkLead record with a date of 2010-12-22 15:00:00, which is clearly wrong.

    Read the article

  • What's the most unsound program you've had to maintain?

    - by Robert Rossney
I periodically am called upon to do maintenance work on a system that was built by a real rocket surgeon. There's so much wrong with it that it's hard to know where to start. No, wait, I'll start at the beginning: in the early days of the project, the designer was told that the system would need to scale, and he'd read that a source of scalability problems was traffic between the application and database servers, so he made sure to minimize this traffic. How? By putting all of the application logic in SQL Server stored procedures. Seriously.

The great bulk of the application functions by the HTML front end formulating XML messages. When the middle tier receives an XML message, it uses the document element's tag name as the name of the stored procedure it should call, and calls the SP, passing it the entire XML message as a parameter. It takes the XML message that the SP returns and returns it directly back to the front end. There is no other logic in the application tier.

(There was some code in the middle tier to validate the incoming XML messages against a library of schemas. But I removed it, after ascertaining that 1) only a small handful of messages had corresponding schemas, 2) the messages didn't actually conform to these schemas, and 3) after validating the messages, if any errors were encountered, the method discarded them. "This fuse box is a real time-saver - it comes from the factory with pennies pre-installed!")

I've seen software that does the wrong thing before. Lots of it. I've written quite a bit. But I've never seen anything like the steely-eyed determination to do the wrong thing, at every possible turn, that's embodied in the design and programming of this system. Well, at least he went with what he knew, right? Um. Apparently, what he knew was Access. And he didn't really understand Access. Or databases.

Here's a common pattern in this code:

SELECT @TestCodeID FROM TestCode WHERE TestCode = @TestCode
SELECT @CountryID FROM Country WHERE CountryAbbr = @CountryAbbr

SELECT Invoice.*, TestCode.*, Country.*
FROM Invoice
JOIN TestCode ON Invoice.TestCodeID = TestCode.ID
JOIN Country ON Invoice.CountryID = Country.ID
WHERE Invoice.TestCodeID = @TestCodeID
AND Invoice.CountryID = @CountryID

Okay, fine. You don't trust the query optimizer either. But how about this? (Originally, I was going to post this in "What's the best comment in source code you have ever encountered?" but I realized that there was so much more to write about than just this one comment, and things just got out of hand.) At the end of many of the utility stored procedures, you'll see code that looks like the following:

-- Fix NULLs
SET @TargetValue = ISNULL(@TargetValue, -9999)

Yes, that code is doing exactly what you can't allow yourself to believe it's doing lest you be driven mad. If the variable contains a NULL, he's alerting the caller by changing its value to -9999. Here's how this number is commonly used:

-- Get target value
EXEC ap_GetTargetValue @Param1, @Param2, OUTPUT @TargetValue

-- Check target value for NULL value
IF @TargetValue = -9999
...

Really. For another dimension of this system, see the article on thedailywtf.com entitled I Think I'll Call Them "Transactions". I'm not making any of this up. I swear. I'm often reminded, when I work on this system, of Wolfgang Pauli's famous response to a student: "That isn't right. It isn't even wrong."

This can't really be the very worst program ever. It's definitely the worst one I've worked

    Read the article

< Previous Page | 322 323 324 325 326 327 328 329 330 331 332 333  | Next Page >