Search Results

Search found 26774 results on 1071 pages for 'distributed development'.


  • Removing NSLog breaks compiler

    - by DVG
    Okay, so this is weird. I have this code:

        - (void)tableView:(UITableView *)tableView didSelectRowAtIndexPath:(NSIndexPath *)indexPath {
            switch (indexPath.row) {
                case 1:
                    NSLog(@"Platform Cell Selected");
                    AddGamePlatformSelectionViewController *platformVC = [[AddGamePlatformSelectionViewController alloc] initWithNibName:@"AddGamePlatformSelectionViewController" bundle:nil];
                    platformVC.context = context;
                    platformVC.game = newGame;
                    [self.navigationController pushViewController:platformVC animated:YES];
                    [platformVC release];
                    break;
                default:
                    break;
            }
        }

    which works fine. When I remove the NSLog statement, like so (the rest of the method unchanged):

        case 1:
            //NSLog(@"Platform Cell Selected");
            AddGamePlatformSelectionViewController *platformVC = [[AddGamePlatformSelectionViewController alloc] initWithNibName:@"AddGamePlatformSelectionViewController" bundle:nil];
            ...

    I get the following compiler errors:

        /Users/DVG/Development/iPhone/Backlog/Classes/AddGameTableViewController.m:102: error: expected expression before 'AddGamePlatformSelectionViewController'
        /Users/DVG/Development/iPhone/Backlog/Classes/AddGameTableViewController.m:103: error: 'platformVC' undeclared (first use in this function)

    If I just edit out the two // for commenting out that line, everything works swimmingly.
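    For what it's worth, the usual explanation here is C's grammar rather than anything NSLog-specific: a case label must be followed by a statement, and in C a declaration is not a statement, so once the NSLog call is commented out the declaration of platformVC sits directly after the label and the compiler rejects it. A minimal sketch of the common fix, giving the case its own braced block (everything else as posted above):

        - (void)tableView:(UITableView *)tableView didSelectRowAtIndexPath:(NSIndexPath *)indexPath {
            switch (indexPath.row) {
                case 1: {  // the braces make the declaration legal here, no NSLog needed
                    AddGamePlatformSelectionViewController *platformVC =
                        [[AddGamePlatformSelectionViewController alloc]
                            initWithNibName:@"AddGamePlatformSelectionViewController" bundle:nil];
                    platformVC.context = context;
                    platformVC.game = newGame;
                    [self.navigationController pushViewController:platformVC animated:YES];
                    [platformVC release];
                    break;
                }
                default:
                    break;
            }
        }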


  • delivery mechanism, Rational ClearCase

    - by kadaba
    Hi all, we came up with a stream structure for the Rational ClearCase UCM model and recently migrated the code base into the new setup. We had three separate physical code bases. Migration was done this way: we moved the production code first and created a baseline, then the UAT code and created a baseline, and then the development code and created a baseline. As of now, the integration stream has the latest baseline, which is the development baseline. We also have two other streams, for prd and UAT, from which releases are made into the respective environments. Now I have my dev stream. I create an activity and make some changes, and I need to promote these changes into the UAT environment. If I deliver the changes to the integration stream, the merge is done, but onto a development baseline. I do not want to rebase UAT onto it, since many development apps would get rebased into UAT, which is not desired. How do I promote changes to the UAT environment (UAT stream)? Kindly advise.
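    One avenue worth testing in a sandbox project (hedged: the stream and project VOB selectors below are invented, and UCM deliver policies can forbid this): ClearCase UCM's deliver operation can name an alternate target stream, so an activity could be delivered from the dev stream straight to the UAT stream without passing through the integration stream's development baseline:

        cleartool deliver -stream dev_stream@/vobs/my_pvob -target uat_stream@/vobs/my_pvob

    Whether such inter-stream delivery is permitted depends on the project's deliver policy settings, so verify with a throwaway activity first.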


  • ASP.NET "Object reference not set..." error

    - by Roman
    Hi, I have a website written in ASP.NET. We have a development machine and a deployment server. The site works great on the development machine, but when it is transferred (using a simple FTP upload) it exhibits strange behavior: it starts working just fine, but after a while it stops working and throws "Exception: Object reference not set to an instance of an object." The odd part is that the absolute path of the website on the development machine is different from the one on the deployment server (and why should they be similar?), yet the exact error is:

        Exception: Object reference not set to an instance of an object.
        at SOMEPROJECT_Objects.Player..ctor(Int32 PlayerID) in C:\inetpub\wwwroot\SOMEPROJECTSolution\ALLPROJECT\SOMEPROJECT_Objects\Player.cs:line 123
        at SOMEPROJECT_GameLayer.M_Game.PlayerActiveGame(Int32 PlayerID) in C:\inetpub\wwwroot\SOMEPROJECTSolution\ALLPROJECT\SOMEPROJECT_GameLayer\M_Game.cs:line 85
        at Web.getsms.Page_Load(Object sender, EventArgs e) in C:\inetpub\wwwroot\SOMEPROJECTSolution\ALLPROJECT\SOMEPROJECT-sms\Web\getsms.aspx.cs:line 59

    The paths it reports are the paths on the DEVELOPMENT machine, whereas the site now resides on the deployment server. Any ideas why this happens would be appreciated. Thanks, Roman


  • Msdtc Transaction

    - by Shimjith
    I am using a linked server for a distributed transaction. Example:

        Alter Proc [dbo].[usp_Select_TransferingDatasFromServerCheckingforExample]
            @RserverName varchar(100), ----- Server name
            @RUserid     varchar(100), ----- Server user id
            @RPass       varchar(100), ----- Server password
            @DbName      varchar(100)  ----- Server database
        As
        Set nocount on
        Set Xact_abort on

        Declare @user    varchar(100)
        Declare @userID  varchar(100)
        Declare @Db      varchar(100)
        Declare @Lserver varchar(100)

        Select @Lserver = @@servername
        Select @userID  = suser_name()
        Select @User    = user

        Exec('if exists(Select 1 From [Master].[' + @user + '].[sysservers]
                        where srvname = ''' + @RserverName + ''')
              begin
                  Exec sp_droplinkedsrvlogin ''' + @RserverName + ''',''' + @userID + '''
                  exec sp_dropserver ''' + @RserverName + '''
              end')

        set @RserverName = '[' + @RserverName + ']'

        BEGIN TRY
            BEGIN TRANSACTION

            declare @ColumnList varchar(max)
            set @ColumnList = null

            select @ColumnList = case when @ColumnList is not null
                                      then @ColumnList + ',' + quotename(name)
                                      else quotename(name)
                                 end
            from syscolumns
            where id = object_id('bditm')
            order by colid

            set identity_insert Bditm on
            exec ('Insert Into Bditm (' + @ColumnList + ')
                   Select * From ' + @RserverName + '.' + @DbName + '.' + @user + '.Bditm')
            set identity_insert Bditm off

            Commit
            Select 1
        End try
        Begin catch
            if (@@ERROR <> 0)
            Begin
                if @@trancount > 0
                Begin
                    Rollback transaction
                    Select 0
                END
            End
        End Catch

        set @RserverName = replace(replace(@RserverName, '[', ''), ']', '')
        Exec sp_droplinkedsrvlogin @RserverName, @userID
        Exec sp_dropserver @RserverName

    This is the error that occurred:

        The Microsoft Distributed Transaction Coordinator (MS DTC) has cancelled the distributed transaction.


  • Is this a good job description? What title would you give this position?

    - by Zack Peterson
    Department: Information Technology
    Reports To: Chief Information Officer

    Purpose: Company's ________________ is specifically engaged in the development of World Wide Web applications and distributed network applications. This person is concerned with all facets of the software development process and specializes in software product management. He or she contributes to projects in an application architect role and also performs individual programming tasks.

    Essential Duties & Responsibilities: This person is involved in all aspects of the software development process, such as:
    - Participation in software product definitions, including requirements analysis and specification
    - Development and refinement of simulations or prototypes to confirm requirements
    - Feasibility and cost-benefit analysis, including the choice of architecture and framework
    - Application and database design
    - Implementation (e.g. installation, configuration, customization, integration, data migration)
    - Authoring of documentation needed by users and partners
    - Testing, including defining/supporting acceptance testing and gathering feedback from pre-release testers
    - Participation in software release and post-release activities, including support for product launch evangelism (e.g. developing demonstrations and/or samples) and subsequent product build/release cycles
    - Maintenance

    Qualifications:
    - Bachelor's degree in computer science or software engineering
    - Several years of professional programming experience
    - Proficiency in the general technology of the World Wide Web: Hypertext Transfer Protocol (HTTP), Hypertext Markup Language (HTML), JavaScript, Cascading Style Sheets (CSS)
    - Proficiency in the following principles, practices, and techniques: accessibility, interoperability, usability, security (especially prevention of SQL injection and cross-site scripting (XSS) attacks), object-oriented programming (e.g. encapsulation, inheritance, modularity, polymorphism), relational database design (e.g. normalization, orthogonality), search engine optimization (SEO), and Asynchronous JavaScript and XML (AJAX)
    - Proficiency in the following specific technologies utilized by Company: C# or Visual Basic .NET, ADO.NET (including the ADO.NET Entity Framework), ASP.NET (including the ASP.NET MVC Framework), Windows Presentation Foundation (WPF), Language Integrated Query (LINQ), Extensible Application Markup Language (XAML), jQuery, Transact-SQL (T-SQL), Microsoft Visual Studio, Microsoft Internet Information Services (IIS), Microsoft SQL Server, and Adobe Photoshop


  • Ruby on Rails 2.3.5: Populating my prod and devel database with data (migration or fixture?)

    - by randombits
    I need to populate my production database app with data in particular tables, before anyone ever even touches the application. This data would also be required in development mode, since it's needed for testing against. Fixtures are normally the way to go for test data, but what's the "best practice" for Ruby on Rails to ship this data to the live database upon db creation as well? Ultimately this is a two-part question, I suppose.

    1) What's the best way to load test data into my database for development (roughly 1,000 items)? Is it through a migration or through fixtures? The reason this answer differs from the question below is that in development there are certain fields in the tables that I'd like to make random; in production, these fields would all start with the same value of 0.

    2) What's the best way to bootstrap a production db with the live data I need in it? Is this also through a migration or a fixture? I think the answer is to seed, as described here: http://lptf.blogspot.com/2009/09/seed-data-in-rails-234.html but I need a way to seed for development and seed for production. Also, why bother using fixtures if seeding is available? When does one seed and when does one use fixtures?
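    For reference, the seed mechanism from the linked post (available in Rails 2.3.4 and later) can cover both halves of the question from one file: db/seeds.rb, loaded with rake db:seed, can branch on the environment. A minimal sketch; the model and attribute names here are made up for illustration:

        # db/seeds.rb
        1000.times do |i|
          Item.create!(
            :name    => "Item #{i}",
            # random values in development, a fixed 0 in production
            :counter => Rails.env.production? ? 0 : rand(100)
          )
        end

    Run rake db:seed after db:create / db:migrate in each environment. Fixtures then keep their original job: per-test data loaded and rolled back by the test framework, rather than baseline data every environment needs.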


  • Error when loading YAML config files in Rails

    - by ZelluX
    I am configuring Rails with MongoDB, and ran into a strange problem when parsing the config/mongo.yml file. config/mongo.yml is generated by executing script/rails generate mongo_mapper:config, and it looks like the following:

        defaults: &defaults
          host: 127.0.0.1
          port: 27017

        development:
          <<: *defaults
          database: tc_web_development

        test:
          <<: *defaults
          database: tc_web_test

    From the config file we can see that the development and test objects should both have a database field. But when it is parsed and loaded in config/initializers/mongo.rb:

        config = YAML::load(File.read(Rails.root.join('config/mongo.yml')))
        puts config.inspect
        MongoMapper.setup(config, Rails.env)

    the strange thing comes: the output of puts config.inspect is

        {"defaults"=>{"host"=>"127.0.0.1", "port"=>27017}, "development"=>{"host"=>"127.0.0.1", "port"=>27017}, "test"=>{"host"=>"127.0.0.1", "port"=>27017}}

    which does not contain the database attribute. But when I execute the same statements in a plain ruby console instead of the rails console, mongo.yml is parsed the right way:

        {"defaults"=>{"host"=>"127.0.0.1", "port"=>27017}, "development"=>{"host"=>"127.0.0.1", "port"=>27017, "database"=>"tc_web_development"}, "test"=>{"host"=>"127.0.0.1", "port"=>27017, "database"=>"tc_web_test"}}

    I am wondering what may be the cause of this problem. Any ideas? Thanks.
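    A hunch worth ruling out (an assumption, not a confirmed diagnosis): Ruby 1.9 ships two YAML engines, Syck and Psych, and they have not always treated a merge key combined with sibling keys identically. If Rails ends up selecting a different engine than the plain ruby console does, that would produce exactly this kind of divergence. A quick check to run in both consoles:

        require 'yaml'
        puts YAML::ENGINE.yamler   # => "syck" or "psych" on Ruby 1.9

    If the two report different engines, forcing one (e.g. YAML::ENGINE.yamler = 'syck') before the initializer runs should make the outputs match.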


  • Visual Studio 2010: very slow web applications debugging!

    - by micha12
    I recently installed Visual Studio 2010 (Ultimate edition, the final version released in April) and found that debugging a web application has become very slow (2-3 times slower than in Visual Studio 2008)! I took the same web application and compared the time it takes to load one of its pages in VS 2008 and VS 2010, using two approaches: 1) debugging under the ASP.NET Development Server (by pressing the "Start" button), and 2) using the ASP.NET Development Server without debugging (via the "View in Browser" menu command). I got the following results for Visual Studio 2008 and 2010:

    1) ASP.NET Development Server without debugging ("View in Browser"): the speed of page loading is the same in VS 2008 and 2010.

    2) Debugging under the ASP.NET Development Server ("Start" button): in VS 2010 the page takes longer to load than in VS 2008; VS 2010 debugging is 2-3 times slower than VS 2008.

    3) At the same time, when debugging a web application in VS 2008, the page loads in the same time as with the "View in Browser" command alone. That is, VS 2008 debugging does not introduce any overhead to page loading in the web browser!

    I wanted to make sure other people have the same problem with slow debugging of web applications in VS 2010. Can this issue be solved by any means? BTW, I am using Windows XP SP3. Thank you.


  • cheap way to scale a rails application

    - by VP
    I have an application that is becoming big, but until now it hasn't given me good revenue. That means short money to reinvest in it. In this scenario, I found a way to make a "cheap distributed Rails" deployment. I've got 4 VPSes, all of them on the same physical server. I added a load-balance server running HAProxy in one dedicated VPS; there I pointed the virtual IP address my domain name is associated with. Behind this HAProxy I have two more VPSes running my Rails app, Passenger and memcache. Both app servers are looking at the same database server, my 4th VPS. So for $44/month I mounted a distributed environment. It won't be my final choice, but now that the budget is short, is this a good way to deploy a Rails application? Any pros or cons? Is it worth my $44/month?


  • Best approach to design a service oriented system

    - by Gustavo Paulillo
    Thinking about service orientation, our team is involved in new application designs. We consist of a group of 4 developers and a manager (who knows something about programming and distributed systems), each one having his own opinion on service design. It is a distributed system: a user interface (web app) accesses the services on a dedicated server (inside the firewall) to reach the business logic operations. We have two main approaches, listed below:

    Modular services: many modules, each one consisting of a service (WCF). Example: namespaces SystemX.DebtService, SystemX.CreditService, SystemX.SimulatorService.

    Unique service: all the business logic is centralized in a unique service. Example: SystemX.OperationService. The web app calls the same service for all operations.

    In your opinion, what's best? Or is another approach better for this scenario?
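    For concreteness, the two shapes might look like this as WCF contracts (a sketch only; the operation names and signatures are invented, not taken from the actual system):

        using System.ServiceModel;

        // Approach 1: one contract per module, each hosted as its own service.
        [ServiceContract]
        public interface IDebtService
        {
            [OperationContract]
            decimal CalculateDebt(int customerId);    // hypothetical operation
        }

        [ServiceContract]
        public interface ICreditService
        {
            [OperationContract]
            decimal CalculateCredit(int customerId);  // hypothetical operation
        }

        // Approach 2: a single facade contract the web app calls for everything.
        [ServiceContract]
        public interface IOperationService
        {
            [OperationContract]
            decimal CalculateDebt(int customerId);

            [OperationContract]
            decimal CalculateCredit(int customerId);
        }

    A common middle ground is to keep the modular contracts internally and expose a thin facade endpoint, so the web app sees one address while each module can still evolve and be deployed on its own.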


  • Corrupt UTF-8 Characters with PHP 5.2.10 and MySQL 5.0.81

    - by jkndrkn
    We have an application hosted on both a local development server and a live site. We are experiencing UTF-8 corruption issues and are looking to figure out how to resolve them. The system is run using symfony 1.0 with Propel. On our development server we are running PHP 5.2.0 and MySQL 5.0.32; we do not experience corrupted UTF-8 characters there. On our live site, PHP 5.2.10 and MySQL 5.0.81 are running. On that server, certain characters such as ô´ and S are corrupted once they are stored in the database. The corrupted characters show up as either question marks or approximations of the original character with adjacent question marks.

    Examples of corruption:
    - Uncorrupted: ô´  Corrupted: ô?
    - Uncorrupted: S   Corrupted: ?

    We are currently using the following techniques on both the development and live servers:
    - Executing the following queries prior to execution of any other queries:
        SET NAMES 'utf8' COLLATE 'utf8_unicode_ci'
        SET CHARSET 'utf8'
    - Setting the <meta> Content-Type value to:
        <meta http-equiv="Content-Type" content="text/html; charset=utf-8" />
    - Adding the following to our .htaccess file:
        AddDefaultCharset utf-8
    - Using mb_* (multibyte) PHP functions where necessary.
    - Being sure to set database columns to use utf8_unicode_ci collation.

    These techniques are sufficient for our development site, but do not work on the live site. On the live site I've also tried adding mysql_set_encoding('ut8', $mysql_connection) but this does not help either. I have found some evidence that newer versions of PHP and MySQL mishandle UTF-8 character encodings.
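    One detail that may matter on the live site (hedged: this assumes the stock ext/mysql driver): PHP's built-in call is mysql_set_charset(), not mysql_set_encoding(), and the charset name is "utf8", so it is worth ruling out both the function name and the "ut8" spelling used above. A minimal sketch:

        <?php
        // Force the client connection charset with the stock API
        // (available since PHP 5.2.3, so the live server qualifies).
        $conn = mysql_connect('localhost', $user, $pass);
        mysql_set_charset('utf8', $conn);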


  • Silverlight vs Flex

    - by 1kevgriff
    My company develops several types of applications. A lot of our business comes from doing multimedia-type apps, typically done in Flash. However, that side of the house is now starting to migrate towards Flex development. Most of our other development is done using .NET. I'm trying to make a push towards doing Silverlight development instead, since it would take better advantage of the .NET developers on staff. I prefer the Silverlight platform over the Flex platform for the simple fact that Silverlight is all .NET code. We have more .NET developers on staff than Flash/Flex developers, and most of our Flash/Flex developers are graphic artists (not real programmers). The only reason they are pushing towards Flex right now is that it seems like the logical step up from Flash. I've done development using both, and I honestly believe Silverlight is easier to work with. But I'm trying to convince people who are only Flash developers. So here's my question: if I'm going to go into a meeting to praise Silverlight, why would a company want to go with Silverlight instead of Flex? Other than the obvious "not everyone has Silverlight", what are the pros and cons of each?


  • How to compare two file structures in PHP?

    - by OM The Eternity
    I have a function which gives me the complete file structure up to n levels:

        function getDirectory($path = '.', $ignore = '') {
            $dirTree = array();
            $dirTreeTemp = array();
            $ignore[] = '.';
            $ignore[] = '..';

            $dh = @opendir($path);
            while (false !== ($file = readdir($dh))) {
                if (!in_array($file, $ignore)) {
                    if (!is_dir("$path/$file")) {
                        // record file and directory name with their modification times
                        $stat = stat("$path/$file");
                        $statdir = stat("$path");
                        $dirTree["$path"][] = $file . " === " . date('Y-m-d H:i:s', $stat['mtime'])
                            . " Directory == " . $path . " === " . date('Y-m-d H:i:s', $statdir['mtime']);
                    } else {
                        $dirTreeTemp = getDirectory("$path/$file", $ignore);
                        if (is_array($dirTreeTemp)) $dirTree = array_merge($dirTree, $dirTreeTemp);
                    }
                }
            }
            closedir($dh);
            return $dirTree;
        }

        $ignore = array('.htaccess', 'error_log', 'cgi-bin', 'php.ini', '.ftpquota');

        // function call
        $dirTree = getDirectory('.', $ignore);

        // print the file structure array
        print_r($dirTree);

    Now, my requirement: I have two sites.

    1. The development/test site, where I test all changes.
    2. The production site, where I finally post all the changes once tested on the development site.

    For example, suppose I have tested an image upload on the development/test site and found it appropriate to publish to the production site. I will then transfer the development/test DB details to the production DB completely, but I also want to compare the file structures so the corresponding image file gets transferred to the production folder. There could also be a situation where I update an image by editing it and uploading it under the same name; in that case the image file would already be present on production, which rules out simple "file_exists" logic. For these situations, how can I compare the two file structures so the synchronization gets done as per the requirement?
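    One way to make the comparison tractable (a sketch, not the final logic: the paths are hypothetical and the output format differs from getDirectory() above) is to flatten each tree into a "relative path => mtime" map, then diff the maps; files that are new or newer on the test site are the ones to copy across:

        <?php
        // Build a "relative path => modification time" map for one tree.
        function fileMap($root) {
            $map = array();
            $it = new RecursiveIteratorIterator(
                new RecursiveDirectoryIterator($root)
            );
            foreach ($it as $file) {
                if ($file->isFile()) {
                    $rel = substr($file->getPathname(), strlen($root) + 1);
                    $map[$rel] = $file->getMTime();
                }
            }
            return $map;
        }

        $dev  = fileMap('/path/to/dev');    // hypothetical roots
        $prod = fileMap('/path/to/prod');

        // New or newer on dev => needs to be pushed to production,
        // which also covers re-uploads under the same file name.
        $changed = array();
        foreach ($dev as $rel => $mtime) {
            if (!isset($prod[$rel]) || $prod[$rel] < $mtime) {
                $changed[] = $rel;
            }
        }
        print_r($changed);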


  • What is branched in a repository?

    - by Peter M
    OK, I hope this ends up sounding like a reasonable question. From what I understand of Subversion, if you have a repo that contains multiple projects, you can branch individual projects within that repo (see the SVN Red book - Using Branches). However, what I don't quite follow is what happens when you create a branch in one of the distributed systems (Git, Hg, Bazaar - I don't think it matters which one). Can you branch just a sub-directory of the repo, or when you create the branch are you branching the entire repo? This question is part of a larger one that I posted on Super User (choice and setup of version control) and has come about as I am trying to figure out how best to version-control a large hierarchical layout of independent projects. It may be that for distributed systems what I would like to do is best handled by a sub-project mechanism of some sort, but again, that is something I am not clear on, although I have heard the term mentioned in regard to git.


  • Breaking dependencies when you can't make changes to other files?

    - by codemuncher
    I'm doing some stealth agile development on a project. The lead programmer sees unit testing, refactoring, etc. as a waste of resources, and there is no way to convince him otherwise. His philosophy is "if it ain't broke, don't fix it", and I understand his point of view. He's been working on the project for over a decade and knows the code inside and out. I'm not looking to debate development practices. I'm new to the project and have been tasked with adding a new feature. I've worked on legacy projects before and used agile development practices with good results, but those teams were more receptive to the idea and weren't afraid of making changes to code. I've been told I can use whatever development methodology I want, but I have to limit my changes to only those necessary to add the feature. I'm using TDD for the new classes I'm writing, but I keep running into roadblocks caused by the liberal use of global variables and the high coupling in the classes I need to interact with. Normally I'd start extracting interfaces for these classes and make their dependence on the global variables explicit by injecting them as constructor arguments or public properties. I could argue that those changes are necessary, but considering the lead never had to make them, I doubt he would see it my way. What techniques can I use to break these dependencies without ruffling the lead developer's feathers? I've made some headway using:

    - Extract Interface (for the new classes I'm creating)
    - Extend and override the wayward classes with test stubs (luckily, most methods are public virtual)

    But these two can only get me so far.
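    As an illustration of that direction (hedged: every type name below is invented, since the real classes aren't shown), the new feature can depend on a small interface while a thin adapter wraps the legacy class and its globals, so no existing file has to change:

        // The seam the new, test-driven code depends on.
        public interface IPricingService
        {
            decimal GetPrice(int productId);
        }

        // Thin adapter over the untouched legacy code and its globals.
        public class PricingServiceAdapter : IPricingService
        {
            public decimal GetPrice(int productId)
            {
                return LegacyPricing.GetPrice(productId); // hypothetical legacy global
            }
        }

        public class NewFeature
        {
            private readonly IPricingService _pricing;

            // Injected dependency: a stub in tests, the adapter in production.
            public NewFeature(IPricingService pricing)
            {
                _pricing = pricing;
            }
        }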


  • Will TFS 2010 support non-contiguous merging?

    - by steve_d
    I know that merging non-contiguous changesets at once may not be a good idea. However, there is at least one situation in which merging non-contiguous changesets is (probably) not going to break anything: when there are no intervening changes on the affected individual files. (At least, it wouldn't break any worse than a series of cherry-picked merges, each checked in; and at least this way you would discover breakage before checking in.)

    For instance, let's say you have a Main and a Development branch. They start out identical (e.g. after a release) and have two files, foo.cs and bar.cs. Alice makes a change in Development\foo.cs and checks it in as changeset #1001. Bob makes a change in Development\bar.cs and checks it in as #1002. Alice makes another change to Development\foo.cs and checks it in as #1003.

    Now, in theory, we could merge both changes #1001 and #1003 from dev to main in a single operation. If we try to merge at the branch level, dev-to-main, we will have to do it as two operations. In this simple, contrived example it's easy enough to merge the one file, but in the real world, where many files would be involved, it's not so simple. Non-contiguous merging is one of the reasons given for why "merge by work item" is not implemented in TFS.
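    For reference, the two-operation workaround reads like this from the command line (a sketch: the server paths are invented, the changeset numbers are from the example above):

        tf merge /version:C1001~C1001 $/Project/Development $/Project/Main /recursive
        tf merge /version:C1003~C1003 $/Project/Development $/Project/Main /recursive

    Because #1002 touched a different file, neither merge should conflict, but they still arrive as two operations and two candidate check-ins rather than the single non-contiguous merge described above.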


  • A question of long-running and disruptive branches

    - by Matt Enright
    We are about to begin prototyping a new application that will share some existing infrastructure assemblies with an existing application, and will also involve a significant subset of the existing domain model. Parts of the domain model will likely undergo some serious changes for this new application. The endgame, once the new application has been fully specified and is launch-ready, is to re-unify the models of the two applications (as well as share a database, link functionality, etc.), but for the duration of development and prototyping we will be using a separate database so that we can change things without worrying about impact on development or use of the existing application. Since it is a prototype, there will be a pretty long window during which serious changes or re-architecting can occur as product management experiments with different workflows, different customer bases are surveyed, and we try to keep up. We have already made a Subversion branch, so as to not impact concurrent development on the mature application, and are toying with two potential ways of moving forward:

    1. Use the svn branch as the sole mechanism of separation. Make our changes to the existing domain models, and evaluate their impact on the existing application (and make the requisite changes to ProjectA) once we have established that our long-running side branch is stable enough for re-entry to trunk.

    2. "Fork" the shared code (temporarily): copy ProjectA.Entities to NewProject.Entities, and treat all of the NewProject code as self-contained. When all of the perturbations around the model have died down and we feel satisfied, manually re-integrate the changes (as granular or sweeping as warranted) back into ProjectA.Entities, updating ProjectA to use the improved models at each step (this can take place either before or after the Subversion merge has occurred). The Subversion merge would then not handle recombination of any of the heavy changes here.

    Note: the "fork" method only applies to the code we see significant changes in store for, and whose modification would break ProjectA. Shared infrastructure stuff, for example, we would just modify in place (on our branch) and let the merge sort out. Development is hard, go shopping. Naturally, after not coming to an agreement, we're turning it over to the oracle of power that is SO. Any experience with any of these methods, pain points to watch out for, or something new entirely?


  • How to go about signing text in a verifiable way from within ruby in a simple yet strong & portable

    - by roja
    Guys, I have been looking for a portable method to digitally sign arbitrary text which can be placed in a document and distributed while maintaining its verifiable origin. Here is an example:

        a = 'some text'
        a.sign(<private key>)   # => <some signature in ASCII format>

    The contents of a can now be distributed freely. If a receiver wants to check the validity of said text, they can do the following:

        b = 'some text'
        b.valid(<public key>, <signature supplied with text>)   # => true/false

    Is there any library out there that already offers this kind of functionality? Ruby's standard library contains SHA hashing code, so at least there is a portable way to perform the hashing, but from that point on I am struggling to find anything which fits the purpose. Kind regards, Roja
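    For what it's worth, the standard library's openssl bindings already cover this, just without the String monkey-patching imagined above. A minimal sketch signing with RSA over SHA-256 and Base64-encoding the result so it stays ASCII-safe for distribution:

        require 'openssl'
        require 'base64'

        key = OpenSSL::PKey::RSA.new(2048)   # in practice, load the private key from a PEM file

        text      = 'some text'
        signature = Base64.encode64(key.sign(OpenSSL::Digest::SHA256.new, text))

        # The receiver needs only the public half to verify:
        public_key = key.public_key
        public_key.verify(OpenSSL::Digest::SHA256.new,
                          Base64.decode64(signature),
                          text)              # => true; false if text or signature was altered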


  • Weird Rails database errors

    - by Jason Swett
    I've had some trouble getting my Rails app to connect to PostgreSQL, so I decided to just say screw it and use SQLite for now. (I'm using the tutorial here: http://guides.rubyonrails.org/getting_started.html) I started a BRAND NEW, fresh Rails app from this tutorial. When I visit my app in the browser after deleting public/index.html, I get this the first time:

        Please install the pg adapter: `gem install activerecord-pg-adapter` (no such file to load -- active_record/connection_adapters/pg_adapter)

    That's odd to me because I'm not mentioning PostgreSQL anywhere. Here's my databases.yml:

        # SQLite version 3.x
        #   gem install sqlite3-ruby (not necessary on OS X Leopard)
        development:
          adapter: sqlite3
          database: db/development.sqlite3
          pool: 5
          timeout: 5000

        # Warning: The database defined as "test" will be erased and
        # re-generated from your development database when you run "rake".
        # Do not set this db to the same as development or production.
        test:
          adapter: sqlite3
          database: db/test.sqlite3
          pool: 5
          timeout: 5000

        production:
          adapter: sqlite3
          database: db/production.sqlite3
          pool: 5
          timeout: 5000

    To make things more confusing, I only get that "pg adapter" error on the first load. For every subsequent page request, I get this error:

        ActiveRecord::ConnectionNotEstablished

    So even though I removed all mention of PostgreSQL, I'm still getting errors. What could be going on?


  • Export from a standalone database to an embedded database.

    - by jdana
    I have a two-part application, where there is a central database that is edited, and then at certain times the data is released and distributed as its own application. I would like to use a standalone database for the central database (MySQL, Postgres, Oracle, SQL Server, etc.) and then have a reliable export to an embedded database (probably SQLite) for distribution. What tools/processes are available for such an export, or is it a practice to be avoided?

    EDIT: A couple of additional pieces of information. The distributed application should be able to run without having to connect to another server (e.g. your spellchecker still works even if you don't have internet), and I don't want to install a full DB server just for read-only access to the data.
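    Absent a dedicated tool, a release-time export can be scripted directly against the two connections. A rough sketch (assumptions: a MySQL central database reachable through the MySQLdb driver, target tables already created in the SQLite file, and table names invented for illustration):

        import sqlite3
        import MySQLdb  # assumes the MySQL-python driver is installed

        src = MySQLdb.connect(host='localhost', user='app', passwd='secret', db='central')
        dst = sqlite3.connect('release.db')

        for table in ('products', 'categories'):  # hypothetical tables
            cur = src.cursor()
            cur.execute('SELECT * FROM ' + table)
            rows = cur.fetchall()
            placeholders = ','.join('?' * len(cur.description))
            dst.executemany(
                'INSERT INTO %s VALUES (%s)' % (table, placeholders), rows)

        dst.commit()
        dst.close()
        src.close()

    Shipping the resulting .db file with the application keeps the offline requirement intact: SQLite needs no server, so the spellchecker-style constraint is satisfied without installing anything alongside the app.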


  • Is there any Application Server Frameworks for other languages/platforms than JavaEE and .NET?

    - by Jonas
    I'm a CS student and have little experience with the enterprise software industry. When I read about enterprise software platforms, I mostly read about these two:

    - Java Enterprise Edition (Java EE)
    - .NET and Windows Communication Foundation (WCF)

    By "enterprise software platforms" I mean frameworks and application servers that support the same characteristics as Java EE and WCF:

    "[JavaEE] provide functionality to deploy fault-tolerant, distributed, multi-tier Java software, based largely on modular components running on an application server."

    "WCF is designed in accordance with service oriented architecture principles to support distributed computing where services are consumed by consumers. Clients can consume multiple services and services can be consumed by multiple clients. Services are loosely coupled to each other."

    Are there any alternatives to these two "enterprise software platforms"? Aren't other programming languages used to any great extent in this problem area? E.g. why aren't there any popular application servers for C++/Qt?


  • TFS, G.I. Joe and Under-doing

    If I were to rank the most consistently irritating parts of my work day, using TFS would come in first by a wide margin. Even repeated network outages this week seem like a pleasant reprieve from this monolithic beast. This is not a reflexive anti-Microsoft feeling; that attitude just wouldn't work for a consultant who does .NET development. It is also not an utter dismissal of TFS as worthless; I've seen people use it effectively on several projects. So why?

    I'll start with a laundry list of shortcomings: an out-of-the-box UI for work items that is insultingly bad, a source control system that is confoundingly fragile when handling merges, folder renames and long file names, the arcane XML wizardry necessary to customize a template, and a build system that adds an extra layer of oddness on top of msbuild. I'm sure my legion of readers will soon point out to me how I can work around all these issues, how this is fixed in TFS 2010 or with this add-in, and how once you have everything set up, you're fine. And they'd be right; any one of these problems could be worked around.

    If not dirty laundry, what else? I thought about it for a while, and came to the conclusion that TFS is so irritating to me because it represents a vision of software development that I find unappealing. To expand upon this, let's start with some wisdom from those great PSAs at the end of the G.I. Joe cartoons of the 80s: "Now you know, and knowing is half the battle." In software development, I'd go further and say knowing is more than half the battle. Understanding the dimensions of the problem you are trying to solve, the needs of the users, and the value that your software can provide are more than half the battle. Implementation of this understanding is not easy, but it is not even possible without this knowledge.

    Assuming we have a fixed amount of time and mental energy for any project, why does this spell trouble for TFS? If you think about what TFS is doing, it's offering you a huge array of options to track the day-to-day implementation of your project, from tasks, to code churn, to test coverage. All valuable metrics, but only in exchange for valuable time to get it all working. In addition, when you have a shiny toy like TFS, the temptation is to feel obligated to use it. So the push from TFS is to encourage a project manager and team to focus on process and metrics around process. You can get great visibility, and graphs to show your project stakeholders, but none of that is important if you are not implementing the right product. Not just unimportant: these activities can be harmful, as they drain your time and sap your creativity away from the rest of the project.

    To be more concrete, let's suppose your organization has invested the time to create a template for your projects and trained people in how to use it, so there is no longer a big investment of time for each project to get up and running. First, I'd challenge whether that template could be specific enough to be full-featured and still applicable to any project. Second, the very existence of this template would be an indication to a project manager that the success of their project was somehow directly related to fitting management of that project into this format. Again, while the capabilities are wonderful, the mirage is there: just get everything into TFS and your project will run smoothly.

    I'll close the loop on this first topic by proposing a thought experiment. Think of the projects you've worked on. How many times have you been chagrined to discover you've implemented the wrong feature, misunderstood how a feature should work, or just plain spent too much time on a screen that nobody uses? That sounds like a really worthwhile area to invest time in improving. How about going back to these projects and thinking about how many times you wished you had optimized the state change flow of your tasks, or been embarrassed not to have a code churn report linked back to the latest changeset?

    With thanks to the Real American Heroes, I'll move on to a more current influence: that of the developers at 37signals, and their philosophy towards software development. This philosophy, fully detailed in the books Getting Real and Rework, is a vision of software that under-does the competition. This is software that is deliberately limited in functionality in order to concentrate fully on making sure every feature that is there is awesome and needed. Why is this relevant? Well, in one of those fun-seeming paradoxes in life, constraints can be a spark for creativity. Think of Twitter, the small screen of an iPhone, the limitations of HTML for applications, the low memory limits of older or embedded systems. As long as there is some freedom within those constraints, amazing things emerge.

    For project management, some of the most respected people in the industry recommend using just index cards, pens and tape. They argue that with change the constant in software development, your process should be as limited (yet rigorous) as possible. Looking at TFS, this is not a system designed to under-do anybody. It is a big jumble of components and options, with every feature you could think of. Predictably, this means many basic functions are hard to use. For task management, many people just use an Excel spreadsheet linked up to TFS. Not a stirring endorsement of the tooling there. TFS as a whole would be far more appealing to me if there was less of it, but better. I'd cut 50% of the features to make the other half really amaze and inspire me.

    And that's really the heart of the matter. TFS has great promise and I want to believe it can work better. But ultimately it focuses your attention on a lot of stuff that doesn't really matter, and then clamps down your creativity in a mess of forms and dialogs obscuring what does.

    --- Relevant Links ---

    All those great G.I. Joe PSAs are on YouTube, including lots of mashed-up versions. A simple Google search will get you on the right track.


  • September Independent Oracle User Group (IOUG) Regional Events:

    - by Mandy Ho
    September 5, 2012 – Denver, CO: Oracle 11g Database Upgrade Seminar
    Join Roy Swonger, Senior Director of software development at Oracle, to learn about upgrading to Oracle Database 11g. Topics include:
    - All the required preparatory steps
    - Database upgrade strategies
    - Post-upgrade performance analysis
    - Helpful tips and common pitfalls to watch out for
    http://www.oracle.com/webapps/events/ns/EventsDetail.jsp?p_eventId=152242&src=7598177&src=7598177&Act=4

    September 6, 2012 – Salt Lake City, UT: Fall Symposium 2012
    Plan to join us for our annual fall event on Sept 6. The day will be filled with learning and networking, with tracks focused on Applications, APEX, BI, Development and DBA topics. This event is free for UTOUG members to attend, but please register.
    http://www.utoug.org/apex/f?p=972:2:6686308836668467::::P2_EVENT_ID:121

    September 6, 2012 – Portland, OR: Oracle Database Security Workshop
    Part of Oracle's hands-on workshop series focused on providing defense-in-depth solutions to secure data at the source, reduce risk and simplify compliance. This is a one-day hands-on session for IT Managers, IT Security Architects and Oracle DBAs who are looking for solutions to address their information protection, privacy, and accountability challenges within their Oracle database environment. Most security programs offered today fail to adequately address database security. Customers continue to be challenged to secure information against loss and protect the integrity of sensitive information like critical financial data, personally identifiable information (PII) and credit card data for PCI compliance.
    http://nwoug.org/content.aspx?page_id=87&club_id=165905&item_id=241082

    September 11, 2012 – Montreal, QC: APEXposed!
    For APEX aficionados: join ODTUG in Montreal, September 11-12 for APEXposed! Topics will include Dynamic Actions, Plug-ins, Tuning, and Building Mobile Apps. The cost is $399 US and early registration ends August 15th. For more information: http://www.odtugapextraining.com

    September 11, 2012 – Philadelphia, PA: Big Data & What are we still doing wrong, with Tom Kyte
    Tom Kyte is a Senior Technical Architect in Oracle's Server Technology Division. Tom is the Tom behind the AskTom column in Oracle Magazine and is also the author of Expert Oracle Database Architecture (Apress, 2005/2009), among other books.
    Abstract, Big Data: The term "big data" draws a lot of attention, but behind the hype there's a simple story. For decades, companies have been making business decisions based on transactional data stored in relational databases. However, beyond that critical data is a potential treasure trove of less structured data: weblogs, social media, email, sensors, and photographs that can be mined for useful information. This presentation will take a look at what Big Data is and means, and Oracle's strategy for handling it.
    Abstract, What are we still doing wrong?: I've given many best practices presentations in the last 10 years. I've given many worst practices presentations in the last 10 years. I've seen some things change over the last ten years and many other things stay exactly the same. In this talk we'll be taking a look at the good and the bad: what we do right and what we continue to do wrong over and over again. We'll look at why "Why" is probably the right initial answer to most any question. We'll look at how we get to "Know what we Know", and why that can be both a help and a hindrance. We'll peek at "Best Practices" and tie them into what I term "Worst Practices". In short, a talk on the good and the bad.
    http://ioug.itconvergence.com/pls/apex/f?p=207:27:3669516430980563::NO

    September 12, 2012 – New York, NY: NYOUG Fall General Meeting
    "Trends in Database Administration and Why the Future of Database Administration is the Vdba"
    http://www.nyoug.org/upcoming_events.htm#General_Meeting1

    September 21, 2012 – Cleveland, OH: Oracle Database 11g for Developers: What You Need to Know (also billed as Oracle Database 11g New Features for Developers)
    Attendees are introduced to the new and improved features of Oracle 11g (both Oracle 11g R1 and Oracle 11g R2) that directly impact application development. Special emphasis is placed on features that reduce development time, make development simpler, improve performance, or speed deployment. Specific topics include: new SQL functions, virtual columns, result caching, XML improvements, pivot statements, JDBC improvements, and PL/SQL enhancements such as compound triggers.
    http://www.neooug.org/

    September 24, 2012 – Ottawa, ON: Introduction to Oracle Spatial
    The free Oracle Locator functionality, and the Oracle Spatial option which dramatically extends Locator, are very useful but poorly understood capabilities of the database. In the afternoon we will extend into additional areas selected from: storage and performance; answering business problems with spatial queries; using Oracle Maps in OBIEE; an overview and capabilities of Oracle Topology; under the covers with GeoCoding.
    http://www.oug-ottawa.org/pls/htmldb/f?p=327:27:4209274028390246::NO


  • How I do VCS

    - by Wes McClure
    After years of dabbling with different version control systems and techniques, I wanted to share some of what I like and dislike in a few blog posts. To start this out, I want to talk about how I use VCS in a team environment. These come in a series of tips or best practices that I try to follow. Note: this list is subject to change in the future.

    Always use some form of version control for all aspects of software development.
    - Development is an evolution. Looking back at where we were is an invaluable asset in that process. This includes data schemas and documentation.
    - Reverting / reapplying changes is absolutely critical for efficient development.
    - The tools I use: Code: Hg (preferred), SVN. Database: TSqlMigrations. Documents: sometimes in the code repository, also SharePoint with versioning.

    Always tag a commit (changeset) with comments.
    - This is a quick way to describe to someone else (or your future self) what the changeset entails.
    - Be brief but courteous. One or two sentences about the task, not the actual changes.
    - Use precommit hooks or set up the central repository to reject changes without comments.

    Link changesets to documentation.
    - If your project management system integrates with version control, or has a way to externally reference stories, tasks, etc., then leave a reference in the commit. This helps locate more information about the commit and/or related changesets.
    - It's best to have a precommit hook or system that requires this information, otherwise it's easy to forget.

    Ability to work offline is required, including commits and history.
    - Yes, this requires a DVCS locally, but it doesn't require the central repository to be a DVCS. I prefer to use either Git or Hg, but if it isn't possible to migrate the central repository, it's still possible for a developer to push / pull changes to that repository from a local Hg or Git repository.

    Never lock resources (files) in a central repository… Rude!
    - We have merge tools for a reason. Merging sucked a long time ago; it doesn't anymore… stop locking files! This is unproductive, rude and annoying to other team members.

    Always review everything in your commit.
    - Never ever commit a set of files without reviewing the changes in each.
    - Never add a file without asking yourself, deep down inside, does this belong?
    - If you leave to make changes during a review, start the review over when you come back. Never assume you didn't touch a file; double check.
    - This is another reason why you want to avoid large, infrequent commits.

    Requirements for tools:
    - Quickly show pending changes for the entire repository.
    - Default action for a resource with pending changes is a diff.
    - Pluggable diff & merge tool.
    - Produce a unified diff or a diff of all changes. This is helpful to bulk-review changes instead of opening each file.

    The central repository is not your own personal dump yard.
    - Breaking this rule is a sure-fire way to get the F bomb dropped in front of your name, multiple times.
    - If you turn on Visual Studio's commit-on-closing-studio option, I will personally break your fingers. By the way, the person(s) in charge of this feature should be fired and never be allowed near programming, ever again.

    Commit (integrate) to the central repository / branch frequently.
    - I try to do this before leaving each day, especially without a DVCS. One never knows when they might need to work from remote the following day.

    Never commit commented-out code.
    - If it isn't needed anymore, delete it! If you aren't sure if it might be useful in the future, delete it! This is why we have history.
    - If you don't know why it's commented out, figure it out and then either uncomment it or delete it.

    Don't commit build artifacts, user preferences and temporary files.
    - Build artifacts do not belong in VCS; everything in them is present in the code. (ie: bin\*, obj\*, *.dll, *.exe)
    - User preferences are your settings. Stop overriding my preferences files! (ie: *.suo and *.user files)
    - Most tools allow you to ignore certain files, and Hg/Git allow you to version this as an ignore file. Set this up as a first step when creating a new repository! (A sample is sketched in the P.S. below.)

    Be polite when merging unresolved conflicts.
    - Count to 10, cuss, grab a stress ball and realize it's not a big deal. Actually, it's an opportunity to learn that someone else is working in the same area, and you might want to communicate with them.
    - Following the other rules, especially committing frequently, will reduce the likelihood of this.
    - Suck it up; we all have to deal with this unintended consequence at times. Just be careful and GET FAMILIAR with your merge tool. It's really not as scary as you think. I personally prefer KDiff3 as its merging capabilities rock.
    - Don't blindly merge and then blindly commit your changes; this is rude and unprofessional. Make sure you understand why the conflict occurred and which parts of the code you want to keep. Apply scrutiny when you commit a manual merge: review the diff! Make sure you test the changes (build and run automated tests).

    Become intimate with your version control system and the tools you use with it.
    - Avoid trial and error as much as is possible; sit down and test the tool out, read some tutorials, etc. Create test repositories and walk through common scenarios.
    - Find the most efficient way to do your work. These tools will be used repetitively, so inefficiencies will add up. Sometimes this involves a mix of tools, both GUI and CLI. I like a combination of both Tortoise Hg and the hg CLI to get the job done efficiently.

    Always tag releases.
    - Create a way to find a given release, whether this be in comments or an explicit tag / branch. This should be readily discoverable.
    - Create release branches to patch bugs, and then merge the changes back to other development branch(es).

    If using feature branches, strive for periodic integrations.
    - Feature branches often cause forked code that becomes irreconcilable. Strive to re-integrate somewhat frequently with the branch this code will ultimately be merged into. This will avoid merge conflicts in the future.
    - Feature branches are best when they are mutually exclusive of active development in other branches.

    Use and abuse local commits, at least one per task in a story.
    - This builds a trail of changes in your local repository that can be pushed to a central repository when the story is complete.

    Never commit a broken build or failing tests to the central repository.
    - It's ok for a local commit to break the build and/or tests. In fact, I encourage this if it helps group the changes more logically. This is one of the main reasons I got excited about DVCS: when I wanted more than one changeset for a set of pending changes but some files could be grouped into both changesets (like solution file / project file changes).
    - If you have more than a dozen outstanding changed resources, there should probably be more than one commit involved. Exceptions when maintaining code bases that require shotgun surgery; in this case, it's a design smell :)

    Don't version sensitive information.
    - Especially usernames / passwords.

    There is one area where I haven't found a solution I like yet: versioning 3rd party libraries and/or code. I really dislike keeping any assemblies in the repository, but it seems to be a common practice for external libraries. Please feel free to share your ideas about this below.

    -Wes
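    P.S. Since the ignore-file tip above is one of the first things to set up on a new repository, here is a small sample .hgignore in the spirit of the patterns listed (a sketch; Hg accepts both glob and regexp syntax):

        # .hgignore at the repository root
        syntax: glob

        bin/*
        obj/*
        *.dll
        *.exe
        *.suo
        *.user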

