Search Results

Search found 10827 results on 434 pages for 'helper functions'.

  • The 80 column limit, still useful?

    - by Tim Post
    Related: While coding, how many columns do you format for? Is there a valid reason for enforcing a maximum width of 80 characters in a code file, this day and age?

    I mostly use C, but this question is language agnostic. It's also subjective, so I'll tag it as such. Many individual projects set their own coding standards, a guide to adjust your coding style to, and many enforce an 80-column limit on code. The idea is that if someone is stuck with a dumb 80 x 25 terminal, you shouldn't force their editor of choice to wrap your lines, and you shouldn't force them to turn off wrapping. Both private and open source projects usually have some style guidelines.

    My question is, in this day and age, is that requirement more of a pest than a helper? Does anyone still log in via the local console with no framebuffer and actually edit code? If so, how often, and why can't you use SSH? I help to manage a few open source projects and was considering extending this limit to 110 columns, but I wanted to get feedback first. So, any feedback is appreciated. I can see the need to make certain OUTPUT of programs (e.g. a --help / -h display) 80 columns or less, but I really don't see the need to force people to break code under 110 columns long into two lines when it's easier to read on one. I can also see the case for adhering to an 80-column limit if you're writing code for microcontrollers that have to be serviced in the field with a god-knows-what terminal emulator. Beyond that, what are your thoughts?

    Edit: This is not an exact duplicate. I am asking very specific questions, such as how many people are actually still using such a display. I am also not asking "what is a good column limit"; I'm proposing one and hoping to gather feedback. Beyond that, I'm also citing cases where the 80-column limit is still a good idea. I don't want a guide to my own "c-style"; I'm hoping to adjust standards for several projects. If the duplicate in question had answered all of my questions, I would not have posted this one :) That will teach me to mention it next time.

    Edit 2: question |= COMMUNITY_WIKI

  • Vacuum spread in a tile-based space game (like in Faster Than Light game)

    - by Reeze
    I've a space game with a tilemap that looks like this (simplified), viewed from the top (like in SimCity 1). 0 = room space, 1 = some kind of wall, 8 = a "lock" between rooms:

        public int[,] _layer = new int[,]
        {
            { 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0},
            { 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0},
            { 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0},
            { 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0},
            { 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0},
            { 1, 1, 8, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0},
            { 0, 1, 0, 1, 1, 1, 1, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0},
            { 0, 1, 0, 8, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0},
            { 0, 1, 1, 1, 1, 1, 1, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0},
            { 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0},
            { 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0},
        };

    Each tile contains an Air property (100 = air, 0 = vacuum). I made a little helper method to fetch the tiles around a tile with vacuum (to "suck air"):

        Point[] GetNearCells(Point cell, int distance = 1, bool diag = false)
        {
            Point topCell   = new Point(cell.X, cell.Y - distance);
            Point botCell   = new Point(cell.X, cell.Y + distance);
            Point leftCell  = new Point(cell.X - distance, cell.Y);
            Point rightCell = new Point(cell.X + distance, cell.Y);

            if (!diag)
            {
                return new[] { topCell, botCell, leftCell, rightCell };
            }

            // diagonal neighbours
            Point topLeftCell  = new Point(cell.X - distance, cell.Y - distance);
            Point topRightCell = new Point(cell.X + distance, cell.Y - distance);
            Point botLeftCell  = new Point(cell.X - distance, cell.Y + distance);
            Point botRightCell = new Point(cell.X + distance, cell.Y + distance);

            return new[] { topCell, botCell, leftCell, rightCell,
                           topLeftCell, topRightCell, botLeftCell, botRightCell };
        }

    What is the best practice to fill rooms with vacuum (decrease air) from some point? Should I use some kind of water flow? Thank you for any help!
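    One common approach (and roughly how FTL appears to behave) is to treat vacuum spread as a breadth-first flood fill outward from the breached tile, draining a little air per tick so rooms empty gradually instead of instantly. Below is a minimal C# sketch of one tick of that idea. It is an illustration, not a definitive implementation: the air[,] grid, the SpreadVacuum name and the drain rate of 10 are all invented here, and it reuses the GetNearCells helper above. It assumes using System.Collections.Generic; and a Point struct with value equality (the XNA one qualifies).

        // One "tick" of vacuum spread, starting from a hull breach.
        void SpreadVacuum(int[,] layer, int[,] air, Point breach)
        {
            var queue = new Queue<Point>();
            var seen = new HashSet<Point> { breach };
            queue.Enqueue(breach);

            while (queue.Count > 0)
            {
                Point cell = queue.Dequeue();

                // Drain gradually instead of zeroing at once, so the
                // connected rooms empty over several ticks.
                air[cell.Y, cell.X] = Math.Max(0, air[cell.Y, cell.X] - 10);

                foreach (Point n in GetNearCells(cell))
                {
                    if (n.X < 0 || n.Y < 0 ||
                        n.Y >= layer.GetLength(0) || n.X >= layer.GetLength(1))
                        continue;                       // off the map
                    if (layer[n.Y, n.X] == 1)
                        continue;                       // walls block airflow
                    // a closed lock (8) could also 'continue' here
                    if (!seen.Contains(n) && air[n.Y, n.X] > air[cell.Y, cell.X])
                    {
                        seen.Add(n);                    // only pull from tiles
                        queue.Enqueue(n);               // that still hold more air
                    }
                }
            }
        }

    Running this once per simulation tick until the affected tiles reach 0 gives the room-by-room drain; a full diffusion model (averaging air between neighbours, like water flow) also works but is usually overkill for this effect.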

  • CodeIgniter Form Validation

    - by mmcglynn
    I am trying to set up validation on a simple contact form that is created using the form helper. No validation at all occurs. What is wrong? In the code below, the "good" keyword always shows, regardless of what is entered into the form, and the saved values set via set_value() are never shown.

    Controller:

        // Contact
        function contact()
        {
            $data['pageTitle'] = "Contact";
            $data['bodyId'] = "contact";

            $this->load->library('form_validation');
            $config_rules = array('email' => 'required', 'message' => 'required');
            $this->form_validation->set_rules($config_rules);

            if ($this->form_validation->run() == FALSE)
            {
                echo "bad";
                $data['include'] = "v_contact";
                $this->load->view('v_template', $data);
            }
            else
            {
                echo "good";
                $data['include'] = "v_contact";
                $this->load->view('v_template', $data);
            }
        }

    View:

        echo validation_errors();
        echo form_open('events/contact');

        // name
        echo form_label('Name', 'name');
        $data = array(
            'name'      => 'name',
            'id'        => 'name',
            'maxlength' => '64',
            'size'      => '40',
            'value'     => set_value('name')
        );
        echo form_input($data) . "\n<br />";

        // email address
        echo form_label('Email Address', 'email');
        $data = array(
            'name'      => 'email',
            'id'        => 'email',
            'maxlength' => '64',
            'size'      => '40',
            'value'     => set_value('email')
        );
        echo form_input($data) . "\n<br />";

        // message
        echo form_label('Message', 'message');
        $data = array(
            'name'  => 'message',
            'id'    => 'message',
            'rows'  => '8',
            'cols'  => '35',
            'value' => set_value('message')
        );
        echo form_textarea($data) . "\n<br />";

        echo form_submit('mysubmit', 'Send Message');
        echo form_close();
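    If this is CodeIgniter's stock Form Validation library, the likely culprit is the shape of the rules array: set_rules() expects either one call per field or an array of arrays with 'field', 'label' and 'rules' keys. A flat field => rules map registers no rules at all, so nothing is ever validated and set_value() has nothing to repopulate from. A sketch of both accepted forms (the valid_email rule is an optional extra, not from the original post):

        // Either one set_rules() call per field...
        $this->form_validation->set_rules('email', 'Email', 'required|valid_email');
        $this->form_validation->set_rules('message', 'Message', 'required');

        // ...or the array form, which must look exactly like this:
        $config_rules = array(
            array('field' => 'email',   'label' => 'Email',   'rules' => 'required'),
            array('field' => 'message', 'label' => 'Message', 'rules' => 'required'),
        );
        $this->form_validation->set_rules($config_rules);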

  • How Should I Generate Trade Statistics For CouchDB/Rails3 Application?

    - by James
    My Problem: I am trying to develop a web application for currency traders. The application allows traders to enter or upload information about their trades, and I want to calculate a wide variety of statistics based on what the user entered. Now, normally I would use a relational database for this, but I have two requirements that don't fit well with a relational database, so I am attempting to use couchdb. Those two requirements are: 1) primarily, I have a companion desktop application that users will be able to work with and replicate to the site using couchdb's awesome replication feature, and 2) I would like to allow users to define their own custom things to track about trades and generate results based off of what they enter. The schemaless nature of couch seems perfect here, but it may end up being harder than it sounds. (I already know couch requires you to define views in advance and such, so I was just planning on sticking all the custom attributes in an array, emitting the array in the view, and processing further from there.)

    What I Am Doing: Right now I am just emitting each trade in couch keyed by each user's system, and querying with the key of the system to get an array of trades per system. Simple. I am not currently using a reduce function to calculate any stats, because I couldn't figure out how to get everything I need without getting a reduce overflow error. Here is an example of the rows that are getting emitted from couch:

        {"total_rows":134,"offset":0,"rows":[
          {"id":"5b1dcd47221e160d8721feee4ccc64be",
           "key":["80e40ba2fa43589d57ec3f1d19db41e6","2010/05/14 04:32:37 +0000"],
           "value":null,
           "doc":{
             "_id":"5b1dcd47221e160d8721feee4ccc64be",
             "_rev":"1-bc9fe763e2637694df47d6f5efb58e5b",
             "couchrest-type":"Trade",
             "system":"80e40ba2fa43589d57ec3f1d19db41e6",
             "pair":"EUR/USD",
             "direction":"Buy",
             "entry":12600,
             "exit":12700,
             "stop_loss":12500,
             "profit_target":12700,
             "status":"Closed",
             "slug":"101332132375",
             "custom_tracking":[{"name":"signal","value":"Pin Bar"}],
             "updated_at":"2010/05/14 04:32:37 +0000",
             "created_at":"2010/05/14 04:32:37 +0000",
             "result":100}}
        ]}

    In my Rails 3 controller I am basically just populating an array of trades such as the one above and then extracting the relevant data into smaller arrays that I can compute my statistics on. Here is my show action for the page where I want to display the stats and all the trades:

        def show
          @trades = Trade.by_system(:startkey => [@system.id], :endkey => [@system.id, Time.now])
          @trades.each do |trade|
            if trade.result > 0
              @winning_trades << trade.result
            elsif trade.result < 0
              @losing_trades << trade.result
            else
              @breakeven_trades << trade.result
            end

            if trade.direction == "Buy"
              @long_trades << trade.result
            else
              @short_trades << trade.result
            end

            if trade["custom_tracking"] != nil
              @custom_tracking << {"result" => trade.result, "variables" => trade["custom_tracking"]}
            end
          end
        end

    I am omitting some other stuff that is going on, but that is the gist of what I am doing. Then I am calculating stuff in the view layer to produce some results:

        <% winning_long_trades = @long_trades.reject {|trade| trade <= 0 } %>
        <% winning_short_trades = @short_trades.reject {|trade| trade <= 0 } %>
        <ul>
          <li>Total Trades: <%= @trades.count %></li>
          <li>Winners: <%= @winning_trades.size %></li>
          <li>Biggest Winner (Pips): <%= @winning_trades.max %></li>
          <li>Average Win (Pips): <%= @winning_trades.sum/@winning_trades.size %></li>
          <li>Losers: <%= @losing_trades.size %></li>
          <li>Biggest Loser (Pips): <%= @losing_trades.min %></li>
          <li>Average Loss (Pips): <%= @losing_trades.sum/@losing_trades.size %></li>
          <li>Breakeven Trades: <%= @breakeven_trades.size %></li>
          <li>Long Trades: <%= @long_trades.size %></li>
          <li>Winning Long Trades: <%= winning_long_trades.size %></li>
          <li>Short Trades: <%= @short_trades.size %></li>
          <li>Winning Short Trades: <%= winning_short_trades.size %></li>
          <li>Total Pips: <%= @winning_trades.sum + @losing_trades.sum %></li>
          <li>Win Rate (%): <%= @winning_trades.size/@trades.count.to_f * 100 %></li>
        </ul>

    This produces the following results, which aside from a few things is exactly what I want:

        Total Trades: 134
        Winners: 70
        Biggest Winner (Pips): 1488
        Average Win (Pips): 440
        Losers: 58
        Biggest Loser (Pips): -516
        Average Loss (Pips): -225
        Breakeven Trades: 6
        Long Trades: 125
        Winning Long Trades: 67
        Short Trades: 9
        Winning Short Trades: 3
        Total Pips: 17819
        Win Rate (%): 52.23880597014925

    What I Am Wondering -- Finally, The Actual Questions: I am starting to get really skeptical of how well this method will work when a user has 5,000 trades instead of just 134 like in this example. I anticipate most users will only have somewhere under 200 per year, but some users may have a couple thousand trades per year -- probably no more than 5,000. It seems to work OK now, but the page load times are already getting a tad high for my tastes (about 800ms to generate the page according to the Rails logs, with about 250ms of that spent in the view layer). I will end up caching this page, I am sure, but I still need to regenerate the page each time a trade is updated, and I can't afford to have this be too slow. So:

    Is doing something similar here possible with a straight couchdb reduce function? I am assuming handing this off to couch would help with larger data sets. I couldn't figure out how, but I suppose that doesn't mean it isn't possible. If possible, any hints will be helpful.

    Could I use a list function if a reduce were not available due to reduce constraints? Are couchdb list functions suitable for this type of calculation? Does anyone have any idea of whether or not list functions perform well? Any hints on what one would look like for the type of calculations I am trying to achieve?

    I thought about other options, such as running the calculations at the time each trade is saved (or nightly if I had to) and saving the results to a statistics doc that I could then query, so that all the processing is done ahead of time. I would like this to be the last resort, because then I can't filter trades by time period dynamically like I would really like to. (I want to have a slider that a user can drag to show only trades from that time period, using startkey and endkey in couchdb if I can.)

    If I should continue running the calculations inside the Rails app at page-view time, what can I do to improve my current implementation? I am new to Rails, couch, and programming in general, and I am sure I could be doing something better here. Do I need to create an array for each stat, or is there a better way to do that? I guess I would really like some advice on how to tackle this problem. I want to keep the page generation time minimal, since I anticipate these being some of the highest-trafficked pages. My gut is that I will need to offload the statistics calculation to couch, or run the stats in advance of when they are called, but I am not sure.

    Lastly: as I mentioned above, one of the primary reasons for using couch is to allow users to define their own things to track per trade. Getting the data into couch is no problem, but how would I be able to take the custom_tracking array and find how many winning trades there are for each named tracking attribute? If anyone can give me any hints on the possibility of doing this, that would be great. Thanks a bunch. I would really appreciate any help. Willing to fork out some $$$ if someone wants to take on the problem for me. (Don't know if that is allowed on Stack Overflow or not.)
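    On the first question: a plain reduce can handle this as long as it returns a small, fixed-size accumulator -- the reduce-overflow error comes from reduce output that grows with the input. A sketch of a map/reduce pair under that constraint; the document fields follow the example row above, everything else is illustrative, not a drop-in design doc:

        // map: one row per trade, keyed for range queries per system
        function (doc) {
          if (doc['couchrest-type'] === 'Trade') {
            emit([doc.system, doc.created_at], doc.result);
          }
        }

        // reduce: a bounded stats object, safe under rereduce
        function (keys, values, rereduce) {
          var s = { count: 0, wins: 0, losses: 0, sum: 0, max: null, min: null };
          for (var i = 0; i < values.length; i++) {
            var v = values[i];
            if (rereduce) {
              s.count += v.count; s.wins += v.wins; s.losses += v.losses;
              s.sum += v.sum;
              if (s.max === null || v.max > s.max) s.max = v.max;
              if (s.min === null || v.min < s.min) s.min = v.min;
            } else {
              s.count += 1;
              if (v > 0) s.wins += 1; else if (v < 0) s.losses += 1;
              s.sum += v;
              if (s.max === null || v > s.max) s.max = v;
              if (s.min === null || v < s.min) s.min = v;
            }
          }
          return s;
        }

    Querying that view with the same startkey/endkey pair already used in the show action returns the aggregated stats for an arbitrary time slice in one request, which is exactly what the slider idea needs. For the custom-tracking question, the same pattern extends: emit one extra row per custom_tracking entry, e.g. emit([doc.system, t.name, t.value], doc.result), and reuse the same reduce to get win counts per named attribute.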

  • Is there some way to make variables like $a and $b in regard to strict?

    - by Axeman
    In light of Michael Carman's comment, I have decided to rewrite the question. Note that 11 comments appear before this edit, and give credence to Michael's observation that I did not write the question in a way that made it clear what I was asking.

    Question: What is the standard--or cleanest--way to fake the special status that $a and $b have in regard to strict by simply importing a module?

    First of all, some setup. The following works:

        #!/bin/perl
        use strict;

        print "\$a=$a\n";
        print "\$b=$b\n";

    If I add one more line:

        print "\$c=$c\n";

    I get an error at compile time, which means that none of my dazzling print code gets to run. If I comment out use strict; it runs fine. Outside of strictures, $a and $b are mainly special in that sort passes the two values to be compared under those names:

        my @reverse_order = sort { $b <=> $a } @unsorted;

    Thus the main functional difference about $a and $b--even though Perl "knows their names"--is that you'd better know this when you sort, or when you use some of the functions in List::Util. It's only when you use strict that $a and $b become special variables in a whole new way: they are the only variables that strict will pass over without complaining that they are not declared.

    Now, I like strict, but it strikes me that if TIMTOWTDI (There Is More Than One Way To Do It) is Rule #1 in Perl, this is not very TIMTOWTDI. It says that $a and $b are special and that's it. If you want to use variables you don't have to declare, $a and $b are your guys. If you want to have three variables by adding $c, suddenly there's a whole other way to do it. Never mind that in manipulating hashes $k and $v might make more sense:

        my %starts_upper_1_to_25
            = skim { $k =~ m/^\p{IsUpper}/ && ( 1 <= $v && $v <= 25 ) } %my_hash;

    Now, I use and I like strict. But I just want $k and $v to be visible to skim for the most compact syntax. And I'd like them to be visible simply by:

        use Hash::Helper qw<skim>;

    I'm not asking this question to know how to black-magic it. My "answer" below should let you know that I know enough Perl to be dangerous. I'm asking if there is a way to make strict accept other variables, or what the cleanest solution is. The answer could well be no. If that's the case, it simply does not seem very TIMTOWTDI.
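    For what it's worth, one conventional mechanism exists: strict 'vars' exempts not only $a and $b but any variable that has been imported, so a helper module can export its own package globals $k and $v and have its functions localize them -- the same exemption English.pm relies on. Hash::Helper and skim don't exist on CPAN as far as I know; this is a sketch of the mechanism, not a finished module:

        package Hash::Helper;
        use strict;
        use warnings;
        use Exporter 'import';

        # Imported package variables pass strict 'vars' in the caller.
        our @EXPORT_OK = qw(skim $k $v);
        our ($k, $v);

        # skim BLOCK, HASH: keep entries for which BLOCK (seeing $k/$v) is true
        sub skim (&\%) {
            my ($block, $hash) = @_;
            my %kept;
            while (my ($key, $value) = each %$hash) {
                local ($k, $v) = ($key, $value);   # visible inside the block
                $kept{$key} = $value if $block->();
            }
            return %kept;
        }

        1;

    The caller then writes use Hash::Helper qw(skim $k $v); and the skim { ... } %my_hash syntax works under strict. The one wart is that $k and $v must appear in the import list, so this falls a little short of the pure use Hash::Helper qw<skim>; wish.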

  • Adding Information in SQLite

    - by Cam
    Hi all, I am having trouble with my Android app when adding information into SQLite. I am relatively new to Java/SQLite, and though I have followed a lot of tutorials on SQLite and have been able to get the example code to run, I am unable to get tables created and data imported when running my own app. I have included my code in two Java files: Questions (the main program) and QuestionData (a helper class that represents the database).

    Questions.java:

        public class Questions extends Activity {
            private QuestionData questions;

            @Override
            public void onCreate(Bundle savedInstanceState) {
                super.onCreate(savedInstanceState);
                setContentView(R.layout.quiztest);
                questions = new QuestionData(this);
                try {
                    Cursor cursor = getQuestions();
                    showQuestions(cursor);
                } finally {
                    questions.close();
                }
            }

            private Cursor getQuestions() {
                // Select query
                String loadQuestions = "SELECT * FROM questionlist";
                SQLiteDatabase db = questions.getReadableDatabase();
                Cursor cursor = db.rawQuery(loadQuestions, null);
                startManagingCursor(cursor);
                return cursor;
            }

            private void showQuestions(Cursor cursor) {
                // Collect string values from the query and display them.
                // This part of the code is working fine when there is data present.
            }
        }

    QuestionData.java:

        public class QuestionData extends SQLiteOpenHelper {
            private static final String DATABASE_NAME = "TriviaQuiz.db";
            private static final int DATABASE_VERSION = 2;
            private static final String TABLE_NAME = "questionlist"; // referenced in onUpgrade()

            public QuestionData(Context ctx) {
                super(ctx, DATABASE_NAME, null, DATABASE_VERSION);
            }

            @Override
            public void onCreate(SQLiteDatabase db) {
                db.execSQL("CREATE TABLE questionlist (_id INTEGER PRIMARY KEY AUTOINCREMENT, QID TEXT, QQuestion TEXT, QAnswer TEXT, QOption1 TEXT, QOption2 TEXT, QOption3 TEXT, QCategoryTagLvl1 TEXT, QCategoryTagLvl2 TEXT, QOptionalTag1 TEXT, QOptionalTag2 TEXT, QOptionalTag3 TEXT, QOptionalTag4 TEXT, QOptionalTag5 TEXT, QTimePeriod TEXT, QDifficultyRating TEXT, QGenderBias TEXT, QAgeBias TEXT, QRegion TEXT, QWikiLink TEXT, QValidationLink1 TEXT, QValidationLink2 TEXT, QHint TEXT, QLastValidation TEXT, QNotes TEXT, QMultimediaType TEXT, QMultimediaLink TEXT, QLastAsked TEXT);");
                db.execSQL("INSERT INTO questionlist (_id, QID, QQuestion, QAnswer, QOption1, QOption2, QOption3, QCategoryTagLvl1, QCategoryTagLvl2, QOptionalTag1, QOptionalTag2, QOptionalTag3, QOptionalTag4, QOptionalTag5, QTimePeriod, QDifficultyRating, QGenderBias, QAgeBias, QRegion, QWikiLink, QValidationLink1, QValidationLink2, QHint, QLastValidation, QNotes, QMultimediaType, QMultimediaLink, QLastAsked) " +
                    "VALUES (null,'Q00001','Example','Ans1','Q1','Q2','Q3','Q4','','','','','','','','','','','','','','','','','','','','')");
            }

            @Override
            public void onUpgrade(SQLiteDatabase db, int oldVersion, int newVersion) {
                db.execSQL("DROP TABLE IF EXISTS " + TABLE_NAME);
                onCreate(db);
            }
        }

    Any suggestions at all would be great. I have tried debugging, which suggests that the database does not exist. Thanks in advance for your assistance.
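    One thing worth checking (a guess based on the symptoms, since the code itself looks plausible): SQLiteOpenHelper only calls onCreate() the very first time the database file is created. If an earlier run of the app already created TriviaQuiz.db before the CREATE TABLE code was in place, neither onCreate() nor onUpgrade() will run again until the version number changes, and "table does not exist" failures persist. Bumping the version, or deleting the stale file once during development, forces the schema code to run:

        // Raising the version triggers onUpgrade(), which drops and recreates:
        private static final int DATABASE_VERSION = 3; // was 2

        // Alternatively, delete the stale file once during development:
        // ctx.deleteDatabase("TriviaQuiz.db");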

  • Better way to write an object generator for an RAII template class?

    - by Dan
    I would like to write an object generator for a templated RAII class -- basically a function template that constructs an object using type deduction of its parameters, so the types don't have to be specified explicitly. The problem I foresee is that the helper function that takes care of type deduction for me is going to return the object by value, which will result in a premature call to the RAII destructor when the copy is made. Perhaps C++0x move semantics could help, but that's not an option for me. Has anyone seen this problem before who has a good solution?

    This is what I have:

        template<typename T, typename U, typename V>
        class FooAdder
        {
        private:
            typedef OtherThing<T, U, V> Thing;
            Thing &thing_;
            int a_;
            // many other members
        public:
            FooAdder(Thing &thing, int a);
            ~FooAdder();
            void foo(T t, U u);
            void bar(V v);
        };

    The gist is that OtherThing has a horrible interface, and FooAdder is supposed to make it easier to use. The intended use is roughly like this:

        FooAdder(myThing, 2)
            .foo(3, 4)
            .foo(5, 6)
            .bar(7)
            .foo(8, 9);

    The FooAdder constructor initializes some internal data structures, the foo and bar methods populate those data structures, and the ~FooAdder dtor wraps things up and calls a method on thing_, taking care of all the nastiness.

    That would work fine if FooAdder weren't a template. But since it is, I would need to put the types in, more like this:

        FooAdder<Abc, Def, Ghi>(myThing, 2) ...

    That's annoying, because the types can be inferred from myThing. So I would prefer to create a templated object generator, similar to std::make_pair, that will do the type deduction for me. Something like this:

        template<typename T, typename U, typename V>
        FooAdder<T, U, V> AddFoo(OtherThing<T, U, V> &thing, int a)
        {
            return FooAdder<T, U, V>(thing, a);
        }

    That seems problematic: because it returns by value, the stack temporary will be destructed, which will cause the RAII dtor to run prematurely. One thought I had was to give FooAdder a copy ctor with move semantics, kinda like std::auto_ptr. But I would like to do this without dynamic memory allocation, so I thought the copy ctor could set a flag within FooAdder indicating that the dtor shouldn't do the wrap-up. Like this:

        FooAdder(FooAdder &rhs)  // note: rhs is not const
            : thing_(rhs.thing_)
            , a_(rhs.a_)
            // etc... lots of other members, annoying.
            , moved(false)
        {
            rhs.moved = true;
        }

        ~FooAdder()
        {
            if (!moved) {
                // do whatever it would have done
            }
        }

    Seems clunky. Anyone got a better way?
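    One way to trim the clunkiness (a C++98-friendly sketch; the names are invented): keep the hand-off flag in a tiny base class with a mutable member. The compiler-generated copy constructor of the derived class then copies every member and the base automatically, so the long hand-written initializer list disappears, and because the base's copy ctor takes a const reference it also binds to the temporary returned from the object generator:

        class MoveFlag
        {
            mutable bool owner_;
        public:
            MoveFlag() : owner_(true) {}
            // Copying transfers the wrap-up duty, auto_ptr style, but
            // works on const sources thanks to `mutable`.
            MoveFlag(MoveFlag const &rhs) : owner_(true) { rhs.owner_ = false; }
            bool owns() const { return owner_; }
        };

        template<typename T, typename U, typename V>
        class FooAdder : private MoveFlag
        {
            // members exactly as before; no user-written copy ctor needed
        public:
            ~FooAdder()
            {
                if (owns()) {
                    // wrap-up call on thing_ goes here
                }
            }
        };

        template<typename T, typename U, typename V>
        FooAdder<T, U, V> AddFoo(OtherThing<T, U, V> &thing, int a)
        {
            return FooAdder<T, U, V>(thing, a);
        }

    In the intended chained use, AddFoo(myThing, 2).foo(3, 4)... is one full expression, so even if the compiler does not elide the copy the wrap-up still runs exactly once, at the end of the statement.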

  • jQuery: load refuses to get dynamic content in IE6

    - by user260157
    jQuery refuses to load my dynamic content in IE6. Everything works fine in Firefox and Safari; only IE6 is being a pain. When I try it with a plain HTML file containing <p>Hello World</p>, it works properly. But when loading a PHP file it doesn't work! As you can see, the code does multiple things:

        <script type="text/javascript">
        // When the document is ready, set up our sortable with its inherent function(s)
        $(document).ready(function() {

            // Sort list & amend in database
            function sortTableMenuAndReload() {
                var order = $('#menuList').sortable('serialize');
                $.post("PLUGINS/SortableMenu/process-sortable.php", order);
                $("#menuList").load("PLUGINS/SortableMenu/sortableMenu_ajax.php");
            }

            function sortTableOrder() {
                var order = $('#menuList').sortable('serialize');
                $.post("PLUGINS/SortableMenu/process-sortable.php", order);
            }

            function sortTableOrderAndRemove(removeID) {
                $('#listItem_' + removeID).remove();
                var order = $('#menuList').sortable('serialize');
                $.post("PLUGINS/SortableMenu/process-sortable.php", order);
                $("#menuList").load("PLUGINS/SortableMenu/sortableMenu_ajax.php");
            }

            $("#menuList > li > .remove").live('click', function () {
                var removeID = $(this).attr('id');
                $.ajax({
                    type: 'post',
                    url: 'PLUGINS/SortableMenu/removeLine.php',
                    data: 'id=' + removeID,
                    success: sortTableOrderAndRemove(removeID)
                });
            });

            $("#menuList > li > .publish").live('click', function () {
                var publishID = $(this).attr('id');
                $.ajax({
                    type: 'post',
                    url: 'PLUGINS/SortableMenu/publishLine.php',
                    data: 'id=' + publishID,
                    success: sortTableOrder
                });
            });

            $('#new_documents > li').draggable({
                addClasses: false,
                helper: 'clone',
                connectToSortable: '#menuList'
            });

            $("#menuList").droppable({
                addClasses: false,
                drop: function() {
                    var clone = $("#menuList > li#newArticleTYPE1");
                    $(clone).attr("id", "listItem_newArticleTYPE1");
                }
            });

            $("#menuList").sortable({
                opacity: 0.6,
                handle: '.handle, .remove',
                update: sortTableMenuAndReload
            });
        });
        </script>
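    Since the static HTML file loads but the PHP response doesn't, one likely culprit (an assumption -- the post doesn't confirm it) is IE6's aggressive caching of GET requests: .load() can keep being served the first stale response for a dynamic URL. Disabling jQuery's ajax cache, or appending a unique query parameter, is the usual workaround:

        // Disable ajax caching globally (jQuery appends a timestamp parameter):
        $.ajaxSetup({ cache: false });

        // Or bust the cache on the individual call:
        $("#menuList").load("PLUGINS/SortableMenu/sortableMenu_ajax.php?nocache=" + new Date().getTime());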

  • IList<T> and IReadOnlyList<T>

    - by Safak Gür
    My problem is that I have a method that can take as a parameter any collection that:

    - Has a Count property
    - Has an integer indexer (get-only)

    And I don't know what type this parameter should be. I would have chosen IList<T> before .NET 4.5, since there was no other indexable collection interface, and arrays implement it, which is a big plus. But .NET 4.5 introduces the new IReadOnlyList<T> interface and I want my method to support that, too. How can I write this method to support both IList<T> and IReadOnlyList<T> without violating basic principles like DRY? Can I convert IList<T> to IReadOnlyList<T> somehow in an overload? What is the way to go here?

    Edit: Daniel's answer gave me some pretty good ideas. I guess I'll go with this:

        public void Do<T>(IList<T> collection)
        {
            DoInternal(collection, collection.Count, i => collection[i]);
        }

        public void Do<T>(IReadOnlyList<T> collection)
        {
            DoInternal(collection, collection.Count, i => collection[i]);
        }

        private void DoInternal<T>(IEnumerable<T> collection, int count, Func<int, T> indexer)
        {
            // Stuff
        }

    Or I could just accept an IReadOnlyList<T> and provide a helper like this:

        public static class CollectionEx
        {
            public static IReadOnlyList<T> AsReadOnly<T>(this IList<T> collection)
            {
                if (collection == null)
                    throw new ArgumentNullException("collection");

                return new ReadOnlyWrapper<T>(collection);
            }

            private sealed class ReadOnlyWrapper<T> : IReadOnlyList<T>
            {
                private readonly IList<T> _Source;

                public int Count { get { return _Source.Count; } }
                public T this[int index] { get { return _Source[index]; } }

                public ReadOnlyWrapper(IList<T> source) { _Source = source; }

                public IEnumerator<T> GetEnumerator() { return _Source.GetEnumerator(); }
                IEnumerator IEnumerable.GetEnumerator() { return GetEnumerator(); }
            }
        }

    Then I could call Do(array.AsReadOnly()).

  • capistrano initial deployment

    - by Richard G
    I'm trying to set up Capistrano to deploy to an AWS box. This is the first time I've tried to set this up, so please bear with me. Could someone take a look at this and let me know if you can solve this error? Below are the deploy.rb file and its output when it runs.

        set :application, "apparel1"
        set :repository, "git://github.com/rgilling/GroceryRun.git"
        set :scm, :git
        set :user, "ubuntu"
        set :scm_passphrase, "pre5ence"
        # Or: `accurev`, `bzr`, `cvs`, `darcs`, `git`, `mercurial`, `perforce`, `subversion` or `none`

        ssh_options[:keys] = ["/Users/rgilling/Documents/Projects/Apparel1/abesakey.pem"]
        ssh_options[:forward_agent] = true

        set :location, "ec2-107-22-27-42.compute-1.amazonaws.com"
        role :web, location                   # Your HTTP server, Apache/etc
        role :app, location                   # This may be the same as your `Web` server
        role :db,  location, :primary => true # This is where Rails migrations will run

        set :deploy_to, "/var/www/#{application}"
        set :deploy_via, :remote_cache
        set :use_sudo, true

        # if you want to clean up old releases on each deploy uncomment this:
        # after "deploy:restart", "deploy:cleanup"

        # if you're still using the script/reaper helper you will need
        # these http://github.com/rails/irs_process_scripts

        # If you are using Passenger mod_rails uncomment this:
        namespace :deploy do
          task :start do ; end
          task :stop do ; end
          task :restart, :roles => :app, :except => { :no_release => true } do
            run "#{try_sudo} touch #{File.join(current_path,'tmp','restart.txt')}"
          end
        end

    The execution then results in this permission error. I think I've set up the SSH etc. correctly...

        updating the cached checkout on all servers
        executing locally: "git ls-remote git://github.com/rgilling/GroceryRun.git HEAD"
        command finished in 1294ms
        * executing "if [ -d /var/www/apparel1/shared/cached-copy ]; then cd /var/www/apparel1/shared/cached-copy && git fetch -q origin && git fetch --tags -q origin && git reset -q --hard f35dc5868b52649eea86816d536d5db8c915856e && git clean -q -d -x -f; else git clone -q git://github.com/rgilling/GroceryRun.git /var/www/apparel1/shared/cached-copy && cd /var/www/apparel1/shared/cached-copy && git checkout -q -b deploy f35dc5868b52649eea86816d536d5db8c915856e; fi"
        servers: ["ec2-107-22-27-42.compute-1.amazonaws.com"]
        [ec2-107-22-27-42.compute-1.amazonaws.com] executing command
        ** [ec2-107-22-27-42.compute-1.amazonaws.com :: err] error: cannot open .git/FETCH_HEAD: Permission denied
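    The last line says the ubuntu user cannot write inside /var/www/apparel1/shared/cached-copy. A likely cause (an assumption, since the directory listing isn't shown) is that the deploy tree was created as root, e.g. by an earlier sudo'd run. Giving the deploy user ownership of the tree usually clears this up:

        # on the server, once:
        sudo mkdir -p /var/www/apparel1
        sudo chown -R ubuntu:ubuntu /var/www/apparel1

    After that, cap deploy:setup can recreate the shared directories, and the failing git fetch should run as ubuntu without needing sudo.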

  • ImageView place at center on click in gallery view

    - by TGMCians
    I used a Gallery view in which I place multiple ImageViews dynamically, but on click the clicked ImageView is moved to the center. A second question: how do I make the first ImageView start from the left of the screen? I do not want the position to change until the user scrolls horizontally with a finger. Is there any way to achieve this? Please help with this.

        private class ImageAdapter extends BaseAdapter {
            public ImageAdapter() {
                // To set blank at bottom and make visible
                TextView textView = (TextView) findViewById(R.id.textView2);
                textView.setVisibility(View.VISIBLE);
                // To set the visibility of the gallery
                myGallery.setVisibility(View.VISIBLE);
            }

            public int getCount() {
                return ProductItemArray.Image_URL.length;
            }

            public Object getItem(int position) {
                return null;
            }

            public long getItemId(int position) {
                return 0;
            }

            public View getView(int position, View arg1, ViewGroup arg2) {
                ImageView bottomImageView = new ImageView(context);
                if (Helper.isTablet(context))
                    bottomImageView.setLayoutParams(new Gallery.LayoutParams(
                            VirtualMirrorActivity.convertDpToPixel(100, context),
                            VirtualMirrorActivity.convertDpToPixel(100, context)));
                else
                    bottomImageView.setLayoutParams(new Gallery.LayoutParams(
                            VirtualMirrorActivity.convertDpToPixel(80, context),
                            VirtualMirrorActivity.convertDpToPixel(80, context)));
                UrlImageViewHelper.setUrlDrawable(bottomImageView, ProductItemArray.Image_URL[position]);
                bottomImageView.setBackgroundResource(R.layout.border);
                return bottomImageView;
            }
        }

        myGallery.setAdapter(new ImageAdapter());
        myGallery.setSelection(1);
        myGallery.setOnItemClickListener(new OnItemClickListener() {
            @Override
            public void onItemClick(AdapterView<?> parent, View view, final int position, long arg3) {
                linearLayout.removeView(frameImageView);
                Thread newThread = new Thread(new Runnable() {
                    public void run() {
                        URL url_1 = null;
                        try {
                            isAlreadyExistInWishlist = false;
                            VMProductListPaging.productUrl = ProductItemArray.Image_small_URL[position];
                            VMProductListPaging.productId = ProductItemArray.productId[position];
                            VMProductListPaging.productName = ProductItemArray.product_Name[position];
                            url_1 = new URL(ProductItemArray.Image_small_URL[position]);
                            bmp = BitmapFactory.decodeStream(url_1.openConnection().getInputStream());
                            isExecuted = true;
                            bitmapHandler.sendMessage(bitmapHandler.obtainMessage());
                        } catch (Exception e) {
                            //Toast.makeText(context,"Sorry!! This link appears to be broken",Toast.LENGTH_LONG).show();
                        }
                    }
                });
                newThread.start();
            }
        });

    Layout.xml:

        <Gallery
            android:id="@+id/galleryView"
            android:layout_width="fill_parent"
            android:layout_height="wrap_content"
            android:spacing="5dp"
            android:layout_below="@+id/sendPhoto"
            android:layout_marginTop="10dp"
            android:visibility="gone"/>
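    Gallery always scrolls its selected item to the center; that behaviour can't simply be switched off. A workaround many people use (a suggestion, not from the post) is to replace the Gallery with a HorizontalScrollView wrapping a horizontal LinearLayout: children stay put on click, the strip starts at the left edge, and clicks are handled with ordinary OnClickListeners. A minimal layout sketch, with invented ids:

        <!-- Hypothetical replacement for the Gallery -->
        <HorizontalScrollView
            android:id="@+id/galleryScroller"
            android:layout_width="fill_parent"
            android:layout_height="wrap_content"
            android:layout_below="@+id/sendPhoto"
            android:layout_marginTop="10dp"
            android:scrollbars="none">

            <LinearLayout
                android:id="@+id/imageStrip"
                android:orientation="horizontal"
                android:layout_width="wrap_content"
                android:layout_height="wrap_content" />

        </HorizontalScrollView>

    Each ImageView built in getView() would instead be added to imageStrip with addView() and given a plain OnClickListener, since the adapter machinery is no longer involved.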

  • Can SQLite file copied successfully on the data folder of an unrooted android device ?

    - by student
    I know that in order to browse the data folder on a device, it needs to be rooted. However, if I just want to copy the database from my assets folder to the data folder on my device, will the copying process work on an unrooted phone? The following is my database helper class. From logcat, I can verify that the method calls to copyDataBase(), createDataBase() and openDataBase() return successfully. However, I get the error message android.database.sqlite.SQLiteException: no such table: TABLE_NAME when my application executes rawQuery. I suspect the database file is not copied successfully (I cannot be sure, as I do not have access to the data folder), yet the call to copyDataBase() does not throw any exception. What could it be? Thanks.

    PS: My device is still unrooted; I hope that is not the main cause of the error.

        public DatabaseHelper(Context context) {
            super(context, DB_NAME, null, 1);
            this.myContext = context;
        }

        public void createDataBase() throws IOException {
            boolean dbExist = checkDataBase();
            String s = new Boolean(dbExist).toString();
            Log.d("dbExist", s);
            if (dbExist) {
                // do nothing - database already exists
                Log.d("createdatabase", "DB exists so do nothing");
            } else {
                this.getReadableDatabase();
                try {
                    copyDataBase();
                    Log.d("copydatabase", "Successful return from method call!");
                } catch (IOException e) {
                    throw new Error("Error copying database");
                }
            }
        }

        private boolean checkDataBase() {
            File dbFile = new File(DB_PATH + DB_NAME);
            return dbFile.exists();
        }

        private void copyDataBase() throws IOException {
            // Open your local db as the input stream
            InputStream myInput = myContext.getAssets().open(DB_NAME);
            Log.d("copydatabase", "InputStream successful!");

            // Path to the just created empty db
            String outFileName = DB_PATH + DB_NAME;

            // Open the empty db as the output stream
            OutputStream myOutput = new FileOutputStream(outFileName);

            // Transfer bytes from the input file to the output file
            byte[] buffer = new byte[1024];
            int length;
            while ((length = myInput.read(buffer)) > 0) {
                myOutput.write(buffer, 0, length);
            }

            // Close the streams
            myOutput.flush();
            myOutput.close();
            myInput.close();
        }

        public void openDataBase() throws SQLException {
            // Open the database
            String myPath = DB_PATH + DB_NAME;
            myDataBase = SQLiteDatabase.openDatabase(myPath, null, SQLiteDatabase.OPEN_READONLY);
        }

        /* @Override
        public synchronized void close() {
            if (myDataBase != null)
                myDataBase.close();
            super.close();
        } */

        public void close() {
            // NOTE: openHelper must now be a member of CallDataHelper;
            // you currently have it as a local in your constructor
            if (myDataBase != null) {
                myDataBase.close();
            }
        }

        @Override
        public void onCreate(SQLiteDatabase db) {
        }

        @Override
        public void onUpgrade(SQLiteDatabase db, int oldVersion, int newVersion) {
        }
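    To the title question: no root is needed. An application can always write inside its own /data/data/<package>/databases directory, which is exactly where this copy goes. The likelier culprit (a guess from the symptoms) is that getReadableDatabase() keeps an open handle on the freshly created empty database while copyDataBase() overwrites the file underneath it. Closing the helper before the copy is a commonly needed tweak:

        public void createDataBase() throws IOException {
            if (!checkDataBase()) {
                this.getReadableDatabase(); // creates an empty db at DB_PATH
                this.close();               // release the handle before overwriting
                copyDataBase();
            }
        }

    Incidentally, the exception names the table literally as TABLE_NAME; if the failing rawQuery is built from a constant, make sure the constant is concatenated into the SQL rather than quoted inside the string.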

  • best alternative to in-definition initialization of static class members? (for SVN keywords)

    - by Jeff
    I'm storing expanded SVN keyword literals for .cpp files in 'static char const *const' class members and want to store the .h descriptions as similarly as possible. In short, I need to guarantee single instantiation of a static member (presumably in a .cpp file) to an auto-generated non-integer literal living in a potentially shared .h file. Unfortunately the language makes no attempt to resolve multiple instantiations resulting from assignments made outside class definitions, and explicitly forbids non-integer inits inside class definitions. My best attempt (using static-wrapping internal classes) is not too dirty, but I'd really like to do better. Does anyone have a way to template the wrapper below, or an altogether superior approach?

        // Foo.h: class with .h/.cpp SVN info stored and logged statically
        class Foo
        {
            static Logger const verLog;
            struct hInfoWrap;
        public:
            static hInfoWrap const hInfo;
            static char const *const cInfo;
        };

        // Would like to eliminate this per-class boilerplate.
        struct Foo::hInfoWrap
        {
            hInfoWrap() : text("$Id$") { }
            char const *const text;
        };

        ...

        // Foo.cpp: static inits called here
        Foo::hInfoWrap const Foo::hInfo;
        char const *const Foo::cInfo = "$Id$";
        Logger const Foo::verLog(Foo::cInfo, Foo::hInfo.text);

        ...

        // Helper.h: output on construction, with no subsequent activity or stored fields
        class Logger
        {
        public:
            Logger(char const *info1, char const *info2)
            {
                cout << info1 << endl << info2 << endl;
            }
        };

    Is there a way to get around the static linkage address issue for templating the hInfoWrap class on string literals? Extern char pointers assigned outside class definitions are linguistically valid but fail in essentially the same manner as direct member initializations. I get why the language shirks the whole resolution issue, but it'd be very convenient if an inverted extern member qualifier were provided, where the definition code was visible in class definitions to any caller but only actually invoked at the point of a single special declaration elsewhere. Anyway, I digress. What's the best solution for the language we've got, template or otherwise? Thanks!
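    Since string literals can't be used as template arguments (pre-C++11 they have no linkage usable as a non-type parameter), the usual fallback is a macro, which also keeps each "$Id$" literal physically inside the file SVN should expand it for. A sketch along those lines; the macro names are invented, and access specifiers are simplified:

        // In a shared helper header:
        #define SVN_INFO_DECLARE(IdLiteral)              \
            struct hInfoWrap {                           \
                hInfoWrap() : text(IdLiteral) { }        \
                char const *const text;                  \
            };                                           \
            static hInfoWrap const hInfo;                \
            static char const *const cInfo;              \
            static Logger const verLog;

        #define SVN_INFO_DEFINE(Class, IdLiteral)        \
            Class::hInfoWrap const Class::hInfo;         \
            char const *const Class::cInfo = IdLiteral;  \
            Logger const Class::verLog(Class::cInfo, Class::hInfo.text);

        // Foo.h -- this literal expands to Foo.h's own Id:
        class Foo
        {
            SVN_INFO_DECLARE("$Id$")
        public:
            // ...
        };

        // Foo.cpp -- and this one to Foo.cpp's:
        SVN_INFO_DEFINE(Foo, "$Id$")

    The key point is that keyword expansion is purely textual per file, so the "$Id$" arguments must be written at each use site; only the surrounding boilerplate moves into the shared header.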

  • WebDav issue with Mac OS X 10.5.3 onwards

    - by svnr
    Hi, we upgraded to Mac OS X 10.5.3 and are getting a problem when uploading files (PUT) to a WebDAV server (the server is Apache running on a Windows environment). When we drag and drop onto a WebDAV folder using Finder we get a -36 error. Looking at the stack trace of the web server, the problem is due to an INVALID CRLF, or sometimes the error below; both stacks point to an error when copying the stream. Googling suggests it is because the Mac changed the Transfer-Encoding to 'chunked':

        ClientAbortException: java.net.SocketException: Software caused connection abort: socket write error
            at org.apache.catalina.connector.OutputBuffer.realWriteBytes(OutputBuffer.java:366)
            at org.apache.tomcat.util.buf.ByteChunk.flushBuffer(ByteChunk.java:433)
            at org.apache.tomcat.util.buf.ByteChunk.append(ByteChunk.java:348)
            at org.apache.catalina.connector.OutputBuffer.writeBytes(OutputBuffer.java:392)
            at org.apache.catalina.connector.OutputBuffer.write(OutputBuffer.java:381)
            at org.apache.catalina.connector.CoyoteOutputStream.write(CoyoteOutputStream.java:88)
            at org.apache.commons.io.CopyUtils.copy(CopyUtils.java:200)
            at com.artesia.webdav.action.helper.ResponseWriterHelper.writeFileContentResponse(ResponseWriterHelper.java:206)
            at com.artesia.webdav.action.GetMethodAction.executeWebDavMethod(GetMethodAction.java:147)
            at com.artesia.webdav.action.BaseWebDavMethodAction.execute(BaseWebDavMethodAction.java:257)
            at com.artesia.webdav.action.BaseWebDavAction.execute(BaseWebDavAction.java:92)
            at org.apache.struts.action.RequestProcessor.processActionPerform(RequestProcessor.java:484)
            at org.apache.struts.action.RequestProcessor.process(RequestProcessor.java:274)
            at org.apache.struts.action.ActionServlet.process(ActionServlet.java:1482)
            at org.apache.struts.action.ActionServlet.doGet(ActionServlet.java:507)
            at javax.servlet.http.HttpServlet.service(HttpServlet.java:697)
            at com.artesia.webdav.web.WebDavActionServlet.service(WebDavActionServlet.java:93)
            at javax.servlet.http.HttpServlet.service(HttpServlet.java:810)
            at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:252)
            at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:173)
            at org.apache.catalina.core.ApplicationDispatcher.invoke(ApplicationDispatcher.java:672)
            at org.apache.catalina.core.ApplicationDispatcher.processRequest(ApplicationDispatcher.java:463)
            at org.apache.catalina.core.ApplicationDispatcher.doForward(ApplicationDispatcher.java:398)
            at org.apache.catalina.core.ApplicationDispatcher.forward(ApplicationDispatcher.java:301)
            at org.apache.struts.action.RequestProcessor.doForward(RequestProcessor.java:1069)
            at org.apache.struts.action.RequestProcessor.processForwardConfig(RequestProcessor.java:455)
            at org.apache.struts.action.RequestProcessor.process(RequestProcessor.java:279)
            at org.apache.struts.action.ActionServlet.process(ActionServlet.java:1482)
            at org.apache.struts.action.ActionServlet.doGet(ActionServlet.java:507)
            at javax.servlet.http.HttpServlet.service(HttpServlet.java:697)
            at com.artesia.webdav.web.WebDavActionServlet.service(WebDavActionServlet.java:93)
            at javax.servlet.http.HttpServlet.service(HttpServlet.java:810)
            at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:252)
            at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:173)
            at org.apache.catalina.core.ApplicationDispatcher.invoke(ApplicationDispatcher.java:672)
            at org.apache.catalina.core.ApplicationDispatcher.processRequest(ApplicationDispatcher.java:463)
            at org.apache.catalina.core.ApplicationDispatcher.doForward(ApplicationDispatcher.java:398)
            at org.apache.catalina.core.ApplicationDispatcher.forward(ApplicationDispatcher.java:301)
            at com.artesia.webdav.web.BaseWebDavServlet.forward(BaseWebDavServlet.java:91)
            at com.artesia.webdav.web.BaseWebDavServlet.service(BaseWebDavServlet.java:83)
            at javax.servlet.http.HttpServlet.service(HttpServlet.java:810)
            at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:252)
            at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:173)
            at com.artesia.webdav.action.RequestFilter.doFilter(RequestFilter.java:46)
            at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:202)
            at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:173)
            at com.artesia.webdav.web.WebDavAuthenticationFilter.doFilter(WebDavAuthenticationFilter.java:463)
            at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:202)
            at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:173)
            at com.artesia.webdav.web.MacSessionHackFilter.doFilter(MacSessionHackFilter.java:111)
            at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:202)
            at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:173)
            at org.jboss.web.tomcat.filters.ReplyHeaderFilter.doFilter(ReplyHeaderFilter.java:96)
            at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:202)
            at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:173)
            at org.apache.catalina.core.StandardWrapperValve.invoke(StandardWrapperValve.java:213)
            at org.apache.catalina.core.StandardContextValve.invoke(StandardContextValve.java:178)
            at org.jboss.web.tomcat.security.SecurityAssociationValve.invoke(SecurityAssociationValve.java:175)
            at org.jboss.web.tomcat.security.JaccContextValve.invoke(JaccContextValve.java:74)
            at org.apache.catalina.core.StandardHostValve.invoke(StandardHostValve.java:126)
            at org.apache.catalina.valves.ErrorReportValve.invoke(ErrorReportValve.java:105)
            at org.apache.catalina.core.StandardEngineValve.invoke(StandardEngineValve.java:107)
            at org.apache.catalina.connector.CoyoteAdapter.service(CoyoteAdapter.java:148)
            at org.apache.coyote.http11.Http11Processor.process(Http11Processor.java:869)
            at org.apache.coyote.http11.Http11BaseProtocol$Http11ConnectionHandler.processConnection(Http11BaseProtocol.java:664)
            at org.apache.tomcat.util.net.PoolTcpEndpoint.processSocket(PoolTcpEndpoint.java:527)
            at org.apache.tomcat.util.net.MasterSlaveWorkerThread.run(MasterSlaveWorkerThread.java:112)
            at java.lang.Thread.run(Thread.java:595)
        Caused by: java.net.SocketException: Software caused connection abort: socket write error
            at java.net.SocketOutputStream.socketWrite0(Native Method)
            at java.net.SocketOutputStream.socketWrite(SocketOutputStream.java:92)
            at java.net.SocketOutputStream.write(SocketOutputStream.java:136)
            at org.apache.coyote.http11.InternalOutputBuffer.realWriteBytes(InternalOutputBuffer.java:746)
            at org.apache.tomcat.util.buf.ByteChunk.flushBuffer(ByteChunk.java:433)
            at org.apache.tomcat.util.buf.ByteChunk.append(ByteChunk.java:348)
            at org.apache.coyote.http11.InternalOutputBuffer$OutputStreamOutputBuffer.doWrite(InternalOutputBuffer.java:769)
            at org.apache.coyote.http11.filters.IdentityOutputFilter.doWrite(IdentityOutputFilter.java:117)
            at org.apache.coyote.http11.InternalOutputBuffer.doWrite(InternalOutputBuffer.java:579)
            at org.apache.coyote.Response.doWrite(Response.java:559)
            at org.apache.catalina.connector.OutputBuffer.realWriteBytes(OutputBuffer.java:361)

  • No device file for partition on logical volume (Linux LVM)

    - by Brian
    I created a logical volume (scandata) containing a single ext3 partition. It is the only logical volume in its volume group (case4t). Said volume group is comprised of three physical volumes, which are three primary partitions on a single block device (/dev/sdb). When I created it, I could mount the partition via the block device /dev/mapper/case4t-scandatap1. Since the last reboot the aforementioned block device file has disappeared. It may be of note -- I'm not sure -- that my superior (a college professor) had prompted this reboot by running sudo chmod -R [his name] /usr/bin, which obliterated all suid in its path, preventing both of us from sudo-ing. That issue has been (temporarily) rectified. Now I'll cut the chatter and get started with the terminal dumps:

        $ sudo pvs; sudo vgs; sudo lvs
        Logging initialised at Sat Jan 8 11:42:34 2011
        Set umask to 0077
        Scanning for physical volume names
          PV         VG     Fmt  Attr PSize   PFree
          /dev/sdb1  case4t lvm2 a-   819.32G    0
          /dev/sdb2  case4t lvm2 a-   866.40G    0
          /dev/sdb3  case4t lvm2 a-    47.09G    0
        Wiping internal VG cache
        Logging initialised at Sat Jan 8 11:42:34 2011
        Set umask to 0077
        Finding all volume groups
        Finding volume group "case4t"
          VG     #PV #LV #SN Attr   VSize VFree
          case4t   3   1   0 wz--n- 1.69T    0
        Wiping internal VG cache
        Logging initialised at Sat Jan 8 11:42:34 2011
        Set umask to 0077
        Finding all logical volumes
          LV       VG     Attr   LSize Origin Snap% Move Log Copy% Convert
          scandata case4t -wi-a- 1.69T
        Wiping internal VG cache

        $ sudo vgchange -a y
        Logging initialised at Sat Jan 8 11:43:14 2011
        Set umask to 0077
        Finding all volume groups
        Finding volume group "case4t"
        1 logical volume(s) in volume group "case4t" already active
        1 existing logical volume(s) in volume group "case4t" monitored
        Found volume group "case4t"
        Activated logical volumes in volume group "case4t"
        1 logical volume(s) in volume group "case4t" now active
        Wiping internal VG cache

        $ ls /dev | grep case4t
        case4t

        $ ls /dev/mapper
        case4t-scandata
        control

        $ sudo fdisk -l /dev/case4t/scandata
        Disk /dev/case4t/scandata: 1860.5 GB, 1860584865792 bytes
        255 heads, 63 sectors/track, 226203 cylinders
        Units = cylinders of 16065 * 512 = 8225280 bytes
        Disk identifier: 0x00049bf5

                       Device Boot  Start     End       Blocks      Id  System
        /dev/case4t/scandata1           1     226203    1816975566  83  Linux

        $ sudo parted /dev/case4t/scandata print
        Model: Linux device-mapper (linear) (dm)
        Disk /dev/mapper/case4t-scandata: 1861GB
        Sector size (logical/physical): 512B/512B
        Partition Table: msdos

        Number  Start   End     Size    Type     File system  Flags
         1      32.3kB  1861GB  1861GB  primary  ext3

        $ sudo fdisk -l /dev/sdb
        Disk /dev/sdb: 1860.5 GB, 1860593254400 bytes
        255 heads, 63 sectors/track, 226204 cylinders
        Units = cylinders of 16065 * 512 = 8225280 bytes
        Disk identifier: 0x00000081

           Device Boot  Start     End       Blocks      Id  System
        /dev/sdb1           1     106955    859116006   83  Linux
        /dev/sdb2      113103     226204    908491815   83  Linux
        /dev/sdb3      106956     113102    49375777+   83  Linux

        Partition table entries are not in disk order

        $ sudo parted /dev/sdb print
        Model: DELL PERC 6/i (scsi)
        Disk /dev/sdb: 1861GB
        Sector size (logical/physical): 512B/512B
        Partition Table: msdos

        Number  Start   End     Size    Type     File system  Flags
         1      32.3kB  880GB   880GB   primary  reiserfs
         3      880GB   930GB   50.6GB  primary
         2      930GB   1861GB  930GB   primary

    I find it a bit strange that partition one above is said to be reiserfs -- or, if it matters, it was previously reiserfs, but LVM recognizes it as a PV.

    To reiterate, neither /dev/mapper/case4t-scandatap1 (which I had used previously) nor /dev/case4t/scandata1 (as printed by fdisk) exists. And /dev/case4t/scandata (no partition number) cannot be mounted:

        $ sudo mount -t ext3 /dev/case4t/scandata /mnt/new
        mount: wrong fs type, bad option, bad superblock on /dev/mapper/case4t-scandata,
               missing codepage or helper program, or other error
               In some cases useful info is found in syslog - try
               dmesg | tail or so

    All I get in syslog is:

        [170059.538137] VFS: Can't find ext3 filesystem on dev dm-0.

    Thanks in advance for any help you can offer,
    Brian

    P.S. I am on Ubuntu GNU/Linux 2.6.28-11-server (Jaunty) (out of date, I know -- that's on the laundry list).
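    If the problem is simply that the partition mapping inside the LV is no longer created at boot, kpartx (from the multipath-tools package) can recreate it by hand -- a suggestion based on the symptoms, not a confirmed diagnosis. Depending on the version, the mapping appears as case4t-scandata1 or case4t-scandatap1:

        $ sudo kpartx -av /dev/mapper/case4t-scandata   # add partition mappings
        $ ls /dev/mapper                                # look for the p1 device
        $ sudo mount -t ext3 /dev/mapper/case4t-scandatap1 /mnt/new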

  • Rails Passenger Nginx cannot load such file -- bundler

    - by Stuart
    I have set up Rails, Passenger, nginx, and PostgreSQL on Ubuntu Server 12.04 LTS. Upon trying to access the application/website, however, I am greeted with an error page saying that the application could not be started because a source file is missing. Error message: cannot load such file -- bundler.
    My nginx config (/opt/nginx/conf/nginx.conf):
        user railsapp;
        worker_processes 1;
        events { worker_connections 1024; }
        http {
            include mime.types;
            default_type application/octet-stream;
            sendfile on;
            keepalive_timeout 65;
            passenger_root /home/railsapp/.rvm/gems/ruby-1.9.3-p194/gems/passenger-3.0.14;
            passenger_ruby /home/railsapp/.rvm/rubies/ruby-1.9.3-p194/bin/ruby;
            server {
                listen 80;
                server_name fitness_schedules.local;
                root /home/railsapp/fitness_schedules/public;
                passenger_enabled on;
                rack_env development;
            }
        }
    Here is the error message:
        A source file that the application requires is missing. It is possible that you didn't upload your application files correctly. Please check whether all your application files are uploaded. A required library may not be installed. Please install all libraries that this application requires. Further information about the error may have been written to the application's log file. Please check it in order to analyse the problem.
        Error message: cannot load such file -- bundler
        Exception class: LoadError
        Application root: /home/railsapp/fitness_schedules
    Here is the backtrace from the error page that is presented by nginx:
        # File Line Location
        0 /home/railsapp/.rvm/rubies/ruby-1.9.3-p194/lib/ruby/site_ruby/1.9.1/rubygems/custom_require.rb 36 in `require'
        1 /home/railsapp/.rvm/rubies/ruby-1.9.3-p194/lib/ruby/site_ruby/1.9.1/rubygems/custom_require.rb 36 in `require'
        2 /home/railsapp/.rvm/gems/ruby-1.9.3-p194/gems/passenger-3.0.14/lib/phusion_passenger/utils.rb 325 in `prepare_app_process'
        3 /home/railsapp/.rvm/gems/ruby-1.9.3-p194/gems/passenger-3.0.14/lib/phusion_passenger/rack/application_spawner.rb 156 in `block in initialize_server'
        4 /home/railsapp/.rvm/gems/ruby-1.9.3-p194/gems/passenger-3.0.14/lib/phusion_passenger/utils.rb 563 in `report_app_init_status'
        5 /home/railsapp/.rvm/gems/ruby-1.9.3-p194/gems/passenger-3.0.14/lib/phusion_passenger/rack/application_spawner.rb 154 in `initialize_server'
        6 /home/railsapp/.rvm/gems/ruby-1.9.3-p194/gems/passenger-3.0.14/lib/phusion_passenger/abstract_server.rb 204 in `start_synchronously'
        7 /home/railsapp/.rvm/gems/ruby-1.9.3-p194/gems/passenger-3.0.14/lib/phusion_passenger/abstract_server.rb 180 in `start'
        8 /home/railsapp/.rvm/gems/ruby-1.9.3-p194/gems/passenger-3.0.14/lib/phusion_passenger/rack/application_spawner.rb 129 in `start'
        9 /home/railsapp/.rvm/gems/ruby-1.9.3-p194/gems/passenger-3.0.14/lib/phusion_passenger/spawn_manager.rb 253 in `block (2 levels) in spawn_rack_application'
        10 /home/railsapp/.rvm/gems/ruby-1.9.3-p194/gems/passenger-3.0.14/lib/phusion_passenger/abstract_server_collection.rb 132 in `lookup_or_add'
        11 /home/railsapp/.rvm/gems/ruby-1.9.3-p194/gems/passenger-3.0.14/lib/phusion_passenger/spawn_manager.rb 246 in `block in spawn_rack_application'
        12 /home/railsapp/.rvm/gems/ruby-1.9.3-p194/gems/passenger-3.0.14/lib/phusion_passenger/abstract_server_collection.rb 82 in `block in synchronize'
        13 <internal:prelude> 10 in `synchronize'
        14 /home/railsapp/.rvm/gems/ruby-1.9.3-p194/gems/passenger-3.0.14/lib/phusion_passenger/abstract_server_collection.rb 79 in `synchronize'
        15 /home/railsapp/.rvm/gems/ruby-1.9.3-p194/gems/passenger-3.0.14/lib/phusion_passenger/spawn_manager.rb 244 in `spawn_rack_application'
        16 /home/railsapp/.rvm/gems/ruby-1.9.3-p194/gems/passenger-3.0.14/lib/phusion_passenger/spawn_manager.rb 137 in `spawn_application'
        17 /home/railsapp/.rvm/gems/ruby-1.9.3-p194/gems/passenger-3.0.14/lib/phusion_passenger/spawn_manager.rb 275 in `handle_spawn_application'
        18 /home/railsapp/.rvm/gems/ruby-1.9.3-p194/gems/passenger-3.0.14/lib/phusion_passenger/abstract_server.rb 357 in `server_main_loop'
        19 /home/railsapp/.rvm/gems/ruby-1.9.3-p194/gems/passenger-3.0.14/lib/phusion_passenger/abstract_server.rb 206 in `start_synchronously'
        20 /home/railsapp/.rvm/gems/ruby-1.9.3-p194/gems/passenger-3.0.14/helper-scripts/passenger-spawn-server 99 in `<main>'
    In ~/fitness_schedules/log there are only development and test logs; there is no production log.
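    For what it's worth, this error usually just means the bundler gem is missing from the gemset of the exact ruby that passenger_ruby points at. A sketch of the commonly suggested fix, assuming the default gemset of ruby-1.9.3-p194 under the railsapp user and the /opt/nginx layout shown in the config above:
        # as the railsapp user, install bundler into the ruby Passenger uses
        su - railsapp
        rvm use ruby-1.9.3-p194
        gem install bundler
        cd ~/fitness_schedules && bundle install
        # then reload nginx so Passenger picks it up
        sudo /opt/nginx/sbin/nginx -s reload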

    Read the article

  • How can I recover an ext4 filesystem corrupted after a fsck?

    - by Regan
    I have an ext4 filesystem on luks over software raid5. The filesystem was operating "just fine" for several years when I was beginning to run out of space. I had a 9T volume on 6x2T drives. I began upgrading to 3T drives by doing the mdadm fail, remove, add, rebuild, repeat process until I had a larger array. I then grew the luks container, and then when I unmounted and tried to resize2fs I was given the message that the filesystem was dirty and needed e2fsck. Without thinking I just did e2fsck -y /dev/mapper/candybox and it began spewing all kinds of "inode being removed" type messages (can't remember exactly). I killed e2fsck and tried to remount the filesystem to back up the data I was concerned about. When trying to mount at this point I get:
        # mount /dev/mapper/candybox /candybox
        mount: wrong fs type, bad option, bad superblock on /dev/mapper/candybox,
        missing codepage or helper program, or other error
        In some cases useful info is found in syslog - try dmesg | tail or so
    Looking back at my older logs I noticed the filesystem was giving this error each time the machine booted:
        kernel: [79137.275531] EXT4-fs (dm-2): warning: mounting fs with errors, running e2fsck is recommended
    So shame on me for not paying attention :( I then tried to mount using every backup superblock (one after another) and each attempt left this in my log:
        EXT4-fs (dm-2): ext4_check_descriptors: Checksum for group 0 failed (26534!=65440)
        EXT4-fs (dm-2): ext4_check_descriptors: Checksum for group 1 failed (38021!=36729)
        EXT4-fs (dm-2): ext4_check_descriptors: Checksum for group 2 failed (18336!=39845)
        ...
        EXT4-fs (dm-2): ext4_check_descriptors: Checksum for group 11911 failed (28743!=44098)
        BUG: soft lockup - CPU#0 stuck for 23s! [mount:2939]
    Attempts to restart e2fsck result in:
        # e2fsck /dev/mapper/candybox
        e2fsck 1.41.14 (22-Dec-2010)
        e2fsck: Group descriptors look bad... trying backup blocks...
        candy: recovering journal
        e2fsck: unable to set superblock flags on candy
    At this point, I decided it best to order some more drives and make an image using ddrescue. Now, two weeks later, I have an image of the luks partition in a .img file:
        # ls -lh
        total 14T
        -rw-r--r-- 1 root root 14T Oct 25 01:57 candybox.img
        -rw-r--r-- 1 root root 271 Oct 20 14:32 candybox.logfile
    After numerous attempts using everything I could find online, I could not coerce e2fsck to do anything on the image, so I used mkfs.ext4 -L candy candybox.img -m 0 -S and I was able to mount the dirty filesystem readonly without the journal and recover 960G of data. It gave all kinds of errors about various directories not existing and so forth, but I was able to get some stuff, which gave me some hope! I then ran e2fsck again and it had to recreate the root inode and gave a massive list of corrections to group counts; I accepted the root inode creation and said no to everything else, leaving a completely empty filesystem. I re-ran it again and said yes to all questions, with the same result, but now a "clean" but empty filesystem. extundelete gives me 0 recoverable inodes found. And now I'm stuck again; I can't come up with any method other than dropping to something like photorec, which will give me an absolute mess given how large the filesystem was. I'm willing to re-copy the image from the original array and start over, if I can get any suggestions or ideas on a way to get more of my files back.
    I wish I could give more detailed logs of the commands that have run, but the output has long since scrolled past, except for what gets logged to syslog, and my memory is not as detailed due to the timeframe this has occurred over. Any help is greatly appreciated!
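    For anyone hitting the same wall: before reaching for destructive steps like mkfs with -S, it can be worth enumerating the backup superblock locations and handing them to e2fsck explicitly, always against a copy of the rescued image. A sketch, assuming a 4K block size; the mke2fs -n call is a dry run that only prints the layout it would have created (it may warn that the target is not a block device):
        # work on a copy of the rescued image, never the only copy
        cp candybox.img candybox-work.img
        # print (without writing) where the backup superblocks should live
        mke2fs -n candybox-work.img
        # point e2fsck at one of the reported backups, e.g. 32768 for 4K blocks
        e2fsck -b 32768 -B 4096 candybox-work.img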

    Read the article

  • Installing EclipseFP on Mac OS X

    - by Dom Kennedy
    I am trying to install EclipseFP. I'm running OS X Mavericks. I've tried following both the official installation instructions and the advice in this answer on SU, but I'm still having the same problem. I can get the plugin itself installed painlessly using Help -> Install New Software..., but when I restart and switch to the Haskell perspective, things start to go wrong. The installation instructions tell me that I should receive a prompt to install BuildWrapper and Scion Browser. I do not receive this prompt. Furthermore, if I create a new Haskell project, my code has no syntax highlighting, and the Hoogle search feature does not appear to do anything. It's clear that the plugin is not set up correctly yet. I've tried running cabal update in Terminal, but this does not change anything. After several attempts going round in circles with this on Eclipse Juno, I uninstalled Eclipse and the Haskell Platform and performed a clean install of Eclipse Luna and the latest Haskell Platform. However, the problems are persisting.
    I've tried going into Preferences to see if I could sort any of this out manually. My GHC installation seems to be correctly referenced under Preferences -> Haskell -> Haskell Implementations. Under Haskell -> Helper executables, there are areas for configuring the options of both BuildWrapper and Scion Browser. At present, both are blank. I tried clicking the Install from Hackage... button beside each of them with no success; I receive an error message saying
        Expected executable <workspace>/.metadata/.plugins/net.sf.eclipsefp.haskell.ui/sandbox/.cabal-sandbox/bin/buildwrapper not found!
    (replace buildwrapper with scion-browser and the message is the same). The Eclipse console displays the following exception after doing the above with BuildWrapper:
        src/Language/Haskell/BuildWrapper/GHCStorage.hs:313:32: Not in scope: data constructor 'MatchGroup'
        cabal.real: Error: some packages failed to install: buildwrapper-0.7.4 failed during the building phase. The exception was: ExitFailure 1
    and after doing it for Scion-Browser:
        zip-archive-0.2.3.4 (reinstall) changes: text-1.1.0.0 -> 0.11.3.1
        pandoc-1.12.3.3 (latest: 1.13) -http-conduit (new version)
        Graphalyze-0.14.1.0 (reinstall) changes: pandoc-1.12.4.2 -> 1.12.3.3, text-1.1.0.0 -> 0.11.3.1
        cabal.real: The following packages are likely to be broken by the reinstalls:
        pandoc-1.12.4.2
        unordered-containers-0.2.4.0
        aeson-0.7.0.4
        scientific-0.2.0.2
        case-insensitive-1.1.0.3
        HTTP-4000.2.10
        Use --force-reinstalls if you want to install anyway.
    After receiving similar results on previous attempts, I've tried using --force-reinstalls and ended up at more dead ends. I am at a loss as to what is wrong and how to solve this. Apologies if any of this information is irrelevant; I'm just not really sure what is important and what isn't at this point. Any help anyone could provide me with would be greatly appreciated.
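    In case it is useful to anyone answering: the "not found" message suggests EclipseFP's sandboxed build of the two helper executables is failing, so a workaround that is sometimes suggested is to build them with plain cabal outside the sandbox and point the Helper executables preferences at the results. A sketch, assuming executables land in the default ~/.cabal/bin, and noting that the same MatchGroup build failure can recur if the buildwrapper release does not support the installed GHC version:
        cabal update
        cabal install buildwrapper scion-browser
        # if the builds succeed, enter ~/.cabal/bin/buildwrapper and
        # ~/.cabal/bin/scion-browser manually under
        # Preferences -> Haskell -> Helper executables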

    Read the article

  • ASP.NET MVC 3 Hosting :: New Features in ASP.NET MVC 3

    - by mbridge
    Razor View Engine
    The Razor view engine is a new view engine option for ASP.NET MVC that supports the Razor templating syntax. The Razor syntax is a streamlined approach to HTML templating, designed with the goal of being a code-driven, minimalist templating approach that builds on existing C#, VB.NET and HTML knowledge. The result of this approach is that Razor views are very lean and do not contain unnecessary constructs that get in the way of you and your code. ASP.NET MVC 3 Preview 1 only supports C# Razor views, which use the .cshtml file extension. VB.NET support will be enabled in later releases of ASP.NET MVC 3. For more information and examples, see Introducing “Razor” – a new view engine for ASP.NET on Scott Guthrie’s blog. (A short illustrative sketch follows at the end of this entry.)
    Dynamic View and ViewModel Properties
    A new dynamic View property is available in views, which provides access to the ViewData object using a simpler syntax. For example, imagine two items are added to the ViewData dictionary in the Index controller action using code like the following:
        public ActionResult Index() {
            ViewData["Title"] = "The Title";
            ViewData["Message"] = "Hello World!";
            return View();
        }
    Those properties can be accessed in the Index view using code like this:
        <h2>@View.Title</h2>
        <p>@View.Message</p>
    There is also a new dynamic ViewModel property in the Controller class that lets you add items to the ViewData dictionary using a simpler syntax. Using the previous controller example, the two values added to the ViewData dictionary can be rewritten using the following code:
        public ActionResult Index() {
            ViewModel.Title = "The Title";
            ViewModel.Message = "Hello World!";
            return View();
        }
    "Add View" Dialog Box Supports Multiple View Engines
    The Add View dialog box in Visual Studio includes extensibility hooks that allow it to support multiple view engines.
    Service Location and Dependency Injection Support
    ASP.NET MVC 3 introduces improved support for applying Dependency Injection (DI) via Inversion of Control (IoC) containers. ASP.NET MVC 3 Preview 1 provides the following hooks for locating services and injecting dependencies:
    - Creating controller factories.
    - Creating controllers and setting dependencies.
    - Setting dependencies on view pages for both the Web Form view engine and the Razor view engine (for types that derive from ViewPage, ViewUserControl, ViewMasterPage, WebViewPage).
    - Setting dependencies on action filters.
    Using a Dependency Injection container is not required in order for ASP.NET MVC 3 to function properly.
    Global Filters
    ASP.NET MVC 3 allows you to register filters that apply globally to all controller action methods. Adding a filter to the global filters collection ensures that the filter runs for all controller requests. To register an action filter globally, you can make the following call in the Application_Start method in the Global.asax file:
        GlobalFilters.Filters.Add(new MyActionFilter());
    The source of global action filters is abstracted by the new IFilterProvider interface, which can be registered manually or by using Dependency Injection. This allows you to provide your own source of action filters and choose at run time whether to apply a filter to an action in a particular request.
    New JsonValueProviderFactory Class
    The new JsonValueProviderFactory class allows action methods to receive JSON-encoded data and model-bind it to an action-method parameter. This is useful in scenarios such as client templating. Client templates enable you to format and display a single data item or set of data items by using a fragment of HTML. ASP.NET MVC 3 lets you connect client templates easily with an action method that both returns and receives JSON data.
    Support for .NET Framework 4 Validation Attributes and IValidatableObject
    The ValidationAttribute class was improved in the .NET Framework 4 to enable richer support for validation. When you write a custom validation attribute, you can use a new IsValid overload that provides a ValidationContext instance. This instance provides information about the current validation context, such as what object is being validated. This change enables scenarios such as validating the current value based on another property of the model. The following example shows a sample custom attribute that ensures that the value of PropertyOne is always larger than the value of PropertyTwo:
        public class CompareValidationAttribute : ValidationAttribute {
            protected override ValidationResult IsValid(object value, ValidationContext validationContext) {
                var model = validationContext.ObjectInstance as SomeModel;
                if (model.PropertyOne > model.PropertyTwo) {
                    return ValidationResult.Success;
                }
                return new ValidationResult("PropertyOne must be larger than PropertyTwo");
            }
        }
    Validation in ASP.NET MVC also supports the .NET Framework 4 IValidatableObject interface. This interface allows your model to perform model-level validation, as in the following example:
        public class SomeModel : IValidatableObject {
            public int PropertyOne { get; set; }
            public int PropertyTwo { get; set; }
            public IEnumerable<ValidationResult> Validate(ValidationContext validationContext) {
                if (PropertyOne <= PropertyTwo) {
                    yield return new ValidationResult("PropertyOne must be larger than PropertyTwo");
                }
            }
        }
    New IClientValidatable Interface
    The new IClientValidatable interface allows the validation framework to discover at run time whether a validator has support for client validation. This interface is designed to be independent of the underlying implementation; therefore, where you implement the interface depends on the validation framework in use. For example, for the default data annotations-based validator, the interface would be applied on the validation attribute.
    Support for .NET Framework 4 Metadata Attributes
    ASP.NET MVC 3 now supports .NET Framework 4 metadata attributes such as DisplayAttribute.
    New IMetadataAware Interface
    The new IMetadataAware interface allows you to write attributes that simplify how you can contribute to the ModelMetadata creation process. Before this interface was available, you needed to write a custom metadata provider in order to have an attribute provide extra metadata. This interface is consumed by the AssociatedMetadataProvider class, so support for the IMetadataAware interface is automatically inherited by all classes that derive from that class (notably, the DataAnnotationsModelMetadataProvider class).
    New Action Result Types
    In ASP.NET MVC 3, the Controller class includes two new action result types and corresponding helper methods.
    HttpNotFoundResult Action
    The new HttpNotFoundResult action result is used to indicate that a resource requested by the current URL was not found. The status code is 404. This class derives from HttpStatusCodeResult. The Controller class includes an HttpNotFound method that returns an instance of this action result type, as shown in the following example:
        public ActionResult List(int id) {
            if (id < 0) {
                return HttpNotFound();
            }
            return View();
        }
    HttpStatusCodeResult Action
    The new HttpStatusCodeResult action result is used to set the response status code and description.
    Permanent Redirect
    The HttpRedirectResult class has a new Boolean Permanent property that is used to indicate whether a permanent redirect should occur. A permanent redirect uses the HTTP 301 status code. Corresponding to this change, the Controller class now has several methods for performing permanent redirects:
    - RedirectPermanent
    - RedirectToRoutePermanent
    - RedirectToActionPermanent
    These methods return an instance of HttpRedirectResult with the Permanent property set to true.
    Breaking Changes
    The order of execution for exception filters has changed for exception filters that have the same Order value. In ASP.NET MVC 2 and earlier, exception filters on the controller with the same Order as those on an action method were executed before the exception filters on the action method. This would typically be the case when exception filters were applied without a specified Order value. In MVC 3, this order has been reversed in order to allow the most specific exception handler to execute first. As in earlier versions, if the Order property is explicitly specified, the filters are run in the specified order.
    Known Issues
    When you are editing a Razor view (CSHTML file), the Go To Controller menu item in Visual Studio will not be available, and there are no code snippets.
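    The Razor section above describes the syntax without showing markup. Purely as an illustrative sketch (not taken from the original article), a Razor view of that era, using the dynamic View property from the controller examples above, would look something like this, with @ switching between markup and C#:
        @* Index.cshtml - minimal Razor view sketch *@
        <html>
            <body>
                <h2>@View.Title</h2>
                @if (View.Message != null) {
                    <p>@View.Message</p>
                }
            </body>
        </html>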

    Read the article

  • TFS 2010 Build Custom Activity for Merging Assemblies

    - by Jakob Ehn
    *** The sample build process template discussed in this post is available for download from here: http://cid-ee034c9f620cd58d.office.live.com/self.aspx/BlogSamples/ILMerge.xaml ***
    In my previous post I talked about library builds that we use to build and replicate dependencies between applications in TFS. This is typically used for common libraries and tools that several other applications need to reference. When the libraries grow in size over time, so does the number of assemblies. So all solutions that use the common library must reference all the assemblies that they need, and if we for example do a refactoring and extract some code into a new assembly, all the clients must update their references to reflect these changes, otherwise it won't compile. To improve on this, we use a tool from Microsoft Research called ILMerge (Download from here). It can be used to merge several assemblies into one assembly that contains all types. If you haven't used this tool before, you should check it out.
    Previously I have implemented this in builds using a simple batch file that contains the full command, something like this:
        "%ProgramFiles(x86)%\microsoft\ilmerge\ilmerge.exe" /target:library /attr:ClassLibrary1.bl.dll /out:MyNewLibrary.dll ClassLibrary1.dll ClassLibrary2.dll ClassLibrary3.dll
    This merges 3 assemblies (ClassLibrary1, 2 and 3) into a new assembly called MyNewLibrary.dll. It will copy the attributes (file version, product version etc.) from ClassLibrary1.dll, using the /attr switch. For more info on the ILMerge command line tool, see the above link.
    This approach works, but requires a little bit too much knowledge for the developers creating builds, therefore I have implemented a custom activity that wraps the use of ILMerge. This makes it much simpler to set up a new build definition and have the build automatically do the merging. The usage of the activity is then implemented as part of the Library Build process template mentioned in the previous post. For this article I have just created a simple build process template that only performs the ILMerge operation.
    Below is the code for the custom activity. To make it compile, you need to reference the ILMerge.exe assembly.
        /// <summary>
        /// Activity for merging a list of assemblies into one, using ILMerge
        /// </summary>
        public sealed class ILMergeActivity : BaseCodeActivity
        {
            /// <summary>
            /// A list of file paths to the assemblies that should be merged
            /// </summary>
            [RequiredArgument]
            public InArgument<IEnumerable<string>> InputAssemblies { get; set; }

            /// <summary>
            /// Full path to the generated assembly
            /// </summary>
            [RequiredArgument]
            public InArgument<string> OutputFile { get; set; }

            /// <summary>
            /// Which input assembly the attributes for the generated assembly should be copied from.
            /// Optional. If not specified, the first input assembly will be used
            /// </summary>
            public InArgument<string> AttributeFile { get; set; }

            /// <summary>
            /// Kind of assembly to generate, dll or exe
            /// </summary>
            public InArgument<TargetKindEnum> TargetKind { get; set; }

            // If your activity returns a value, derive from CodeActivity<TResult>
            // and return the value from the Execute method.
            protected override void Execute(CodeActivityContext context)
            {
                string message = InputAssemblies.Get(context).Aggregate("", (current, assembly) => current + (assembly + " "));
                TrackMessage(context, "Merging " + message + " into " + OutputFile.Get(context));

                ILMerge m = new ILMerge();
                m.SetInputAssemblies(InputAssemblies.Get(context).ToArray());
                m.TargetKind = TargetKind.Get(context) == TargetKindEnum.Dll ? ILMerge.Kind.Dll : ILMerge.Kind.Exe;
                m.OutputFile = OutputFile.Get(context);
                m.AttributeFile = !String.IsNullOrEmpty(AttributeFile.Get(context))
                    ? AttributeFile.Get(context)
                    : InputAssemblies.Get(context).First();
                m.SetTargetPlatform(RuntimeEnvironment.GetSystemVersion().Substring(0, 2), RuntimeEnvironment.GetRuntimeDirectory());
                m.Merge();

                TrackMessage(context, "Generated " + m.OutputFile);
            }
        }

        [Browsable(true)]
        public enum TargetKindEnum
        {
            Dll,
            Exe
        }
    NB: The activity inherits from a BaseCodeActivity class, an internal helper class that contains some methods and properties useful for most custom activities. In this case, it uses the TrackMessage method for writing to the build log. You either need to remove the TrackMessage method calls, or implement this yourself (which is not very hard; a sketch follows at the end of this entry).
    The custom activity has the following input arguments:
    - InputAssemblies: A list with the (full) paths of the assemblies to merge
    - OutputFile: The name of the resulting merged assembly
    - AttributeFile: Which assembly to use as the template for the attributes of the merged assembly. This argument is optional; if left blank, the first assembly in the input list is used
    - TargetKind: Decides what type of assembly to create, can be either a dll or an exe
    Of course, there are more switches to ILMerge.exe, and these can be exposed as input arguments as well if you need them.
    To show how the custom activity can be used, I have attached a build process template (see link at the top of this post) that merges the output of the projects being built (CommonLibrary.dll and CommonLibrary2.dll) into a merged assembly (NewLibrary.dll). The Assemblies To Merge custom process parameter is passed into a FindMatchingFiles activity to locate all assemblies that are located in the BinariesDirectory folder after the compilation has been performed by Team Build. The complete sequence of activities that performs the merge operation sits at the end of the Try, Compile, Test and Associate… sequence: it splits the AssembliesToMerge parameter, appends the full path (using the BinariesDirectory variable), and then enumerates the matching files using the FindMatchingFiles activity. When running the build, it merges the two assemblies into a new one, and the merged assembly (and associated pdb file) is copied to the drop location together with the rest of the assemblies.
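    As mentioned in the NB above, TrackMessage comes from an internal helper base class that is not included in the post. A minimal sketch of what such a helper might look like, assuming a reference to Microsoft.TeamFoundation.Build.Workflow (where TrackBuildMessage is an extension method on CodeActivityContext); treat this as illustrative, not the author's actual class:
        using System.Activities;
        using Microsoft.TeamFoundation.Build.Workflow.Activities;

        public abstract class BaseCodeActivity : CodeActivity
        {
            // Writes a message to the TFS build log at normal verbosity.
            protected void TrackMessage(CodeActivityContext context, string message)
            {
                context.TrackBuildMessage(message, BuildMessageImportance.Normal);
            }
        }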

    Read the article

  • WatiN screenshot saver

    - by Brian Schroer
    In addition to my automated unit, system and integration tests for ASP.NET projects, I like to give my customers something pretty that they can look at and visually see that the web site is behaving properly. I use the Gallio test runner to produce a pretty HTML report, and WatiN (Web Application Testing In .NET) to test the UI and create screenshots. I have a couple of issues with WatiN's "CaptureWebPageToFile" method, though:
    - It blew up the first (and only) time I tried it, possibly because…
    - It scrolls down to capture the entire web page (I tried it on a very long page), and I usually don't need that
    Also, sometimes I don't need a picture of the whole browser window - I just want a picture of the element that I'm testing (for example, proving that a button has the correct caption). I wrote a WatiN screenshot saver helper class with these methods:
    - SaveBrowserWindowScreenshot(WatiN.Core.IE ie) / SaveBrowserWindowScreenshot(WatiN.Core.Element element) saves a screenshot of the browser window
    - SaveBrowserWindowScreenshotWithHighlight(WatiN.Core.Element element) saves a screenshot of the browser window, with the specified element scrolled into view and highlighted
    - SaveElementScreenshot(WatiN.Core.Element element) saves a picture of only the specified element
    The element highlighting improves on the built-in WatiN method (which just gives the element a yellow background, and makes the element pretty much unreadable when you have a light foreground color) by adding the ability to specify a HighlightCssClassName that points to a style in your site's stylesheet. This code is specifically for testing with Internet Explorer ('cause that's what I have to test with at work), but you're welcome to take it and do with it what you want…
        using System;
        using System.Drawing;
        using System.Drawing.Imaging;
        using System.IO;
        using System.Reflection;
        using System.Runtime.InteropServices;
        using System.Text;
        using System.Threading;
        using SHDocVw;
        using WatiN.Core;
        using mshtml;

        namespace BrianSchroer.TestHelpers
        {
            public static class WatinScreenshotSaver
            {
                public static void SaveBrowserWindowScreenshotWithHighlight(Element element, string screenshotName)
                {
                    HighlightElement(element, true);
                    SaveBrowserWindowScreenshot(element, screenshotName);
                    HighlightElement(element, false);
                }

                public static void SaveBrowserWindowScreenshotWithHighlight(Element element)
                {
                    HighlightElement(element, true);
                    SaveBrowserWindowScreenshot(element);
                    HighlightElement(element, false);
                }

                public static void SaveBrowserWindowScreenshot(Element element, string screenshotName)
                {
                    SaveScreenshot(GetIe(element), screenshotName, SaveBitmapForCallbackArgs);
                }

                public static void SaveBrowserWindowScreenshot(Element element)
                {
                    SaveScreenshot(GetIe(element), null, SaveBitmapForCallbackArgs);
                }

                public static void SaveBrowserWindowScreenshot(IE ie, string screenshotName)
                {
                    SaveScreenshot(ie, screenshotName, SaveBitmapForCallbackArgs);
                }

                public static void SaveBrowserWindowScreenshot(IE ie)
                {
                    SaveScreenshot(ie, null, SaveBitmapForCallbackArgs);
                }

                public static void SaveElementScreenshot(Element element, string screenshotName)
                {
                    // TODO: Figure out how to get browser window "chrome" size and not have to go to full screen:
                    var iex = (InternetExplorerClass)GetIe(element).InternetExplorer;
                    bool fullScreen = iex.FullScreen;
                    if (!fullScreen) iex.FullScreen = true;
                    ScrollIntoView(element);
                    SaveScreenshot(GetIe(element), screenshotName, args => SaveElementBitmapForCallbackArgs(element, args));
                    iex.FullScreen = fullScreen;
                }

                public static void SaveElementScreenshot(Element element)
                {
                    SaveElementScreenshot(element, null);
                }

                private static void SaveScreenshot(IE browser, string screenshotName, Action<ScreenshotCallbackArgs> screenshotCallback)
                {
                    string fileName = string.Format("{0:000}{1}{2}.jpg", ++_screenshotCount, (string.IsNullOrEmpty(screenshotName)) ? "" : " ", screenshotName);
                    string path = Path.Combine(ScreenshotDirectoryName, fileName);
                    Console.WriteLine();
                    // Gallio HTML-encodes the following display, but I have a utility program to
                    // remove the "HTML===" and "===HTML" and un-encode the rest to show images in the Gallio report:
                    Console.WriteLine("HTML===<div><b>{0}:</br></b><img src=\"{1}\" /></div>===HTML", screenshotName, new Uri(path).AbsoluteUri);
                    MakeBrowserWindowTopmost(browser);
                    try
                    {
                        var args = new ScreenshotCallbackArgs
                        {
                            InternetExplorerClass = (InternetExplorerClass)browser.InternetExplorer,
                            ScreenshotPath = path
                        };
                        Thread.Sleep(100);
                        screenshotCallback(args);
                    }
                    catch (Exception ex)
                    {
                        Console.WriteLine(ex.Message);
                    }
                }

                public static void HighlightElement(Element element, bool doHighlight)
                {
                    if (!element.Exists) return;
                    if (string.IsNullOrEmpty(HighlightCssClassName))
                    {
                        element.Highlight(doHighlight);
                        return;
                    }
                    string jsRef = element.GetJavascriptElementReference();
                    if (string.IsNullOrEmpty(jsRef)) return;
                    var sb = new StringBuilder("try { ");
                    sb.AppendFormat(" {0}.scrollIntoView(false);", jsRef);
                    string format = (doHighlight) ? "{0}.className += ' {1}'" : "{0}.className = {0}.className.replace(' {1}', '')";
                    sb.AppendFormat(" " + format + ";", jsRef, HighlightCssClassName);
                    sb.Append("} catch(e) {}");
                    string script = sb.ToString();
                    GetIe(element).RunScript(script);
                }

                public static void ScrollIntoView(Element element)
                {
                    string jsRef = element.GetJavascriptElementReference();
                    if (string.IsNullOrEmpty(jsRef)) return;
                    var sb = new StringBuilder("try { ");
                    sb.AppendFormat(" {0}.scrollIntoView(false);", jsRef);
                    sb.Append("} catch(e) {}");
                    string script = sb.ToString();
                    GetIe(element).RunScript(script);
                }

                public static void MakeBrowserWindowTopmost(IE ie)
                {
                    ie.BringToFront();
                    SetWindowPos(ie.hWnd, HWND_TOPMOST, 0, 0, 0, 0, TOPMOST_FLAGS);
                }

                public static string HighlightCssClassName { get; set; }

                private static int _screenshotCount;
                private static string _screenshotDirectoryName;

                public static string ScreenshotDirectoryName
                {
                    get
                    {
                        if (_screenshotDirectoryName == null)
                        {
                            var asm = Assembly.GetAssembly(typeof(WatinScreenshotSaver));
                            var uri = new Uri(asm.CodeBase);
                            var fileInfo = new FileInfo(uri.LocalPath);
                            string directoryName = fileInfo.DirectoryName;
                            _screenshotDirectoryName = Path.Combine(directoryName, string.Format("Screenshots_{0:yyyyMMddHHmm}", DateTime.Now));
                            Console.WriteLine("Screenshot folder: {0}", _screenshotDirectoryName);
                            Directory.CreateDirectory(_screenshotDirectoryName);
                        }
                        return _screenshotDirectoryName;
                    }
                    set
                    {
                        _screenshotDirectoryName = value;
                        _screenshotCount = 0;
                    }
                }

                [DllImport("user32.dll")]
                [return: MarshalAs(UnmanagedType.Bool)]
                private static extern bool SetWindowPos(IntPtr hWnd, IntPtr hWndInsertAfter, int X, int Y, int cx, int cy, uint uFlags);

                private static readonly IntPtr HWND_TOPMOST = new IntPtr(-1);
                private const UInt32 SWP_NOSIZE = 0x0001;
                private const UInt32 SWP_NOMOVE = 0x0002;
                private const UInt32 TOPMOST_FLAGS = SWP_NOMOVE | SWP_NOSIZE;

                private static IE GetIe(Element element)
                {
                    if (element == null) return null;
                    var container = element.DomContainer;
                    while (container as IE == null) container = container.DomContainer;
                    return (IE)container;
                }

                private static void SaveBitmapForCallbackArgs(ScreenshotCallbackArgs args)
                {
                    InternetExplorerClass iex = args.InternetExplorerClass;
                    SaveBitmap(args.ScreenshotPath, iex.Left, iex.Top, iex.Width, iex.Height);
                }

                private static void SaveElementBitmapForCallbackArgs(Element element, ScreenshotCallbackArgs args)
                {
                    InternetExplorerClass iex = args.InternetExplorerClass;
                    Rectangle bounds = GetElementBounds(element);
                    SaveBitmap(args.ScreenshotPath, iex.Left + bounds.Left, iex.Top + bounds.Top, bounds.Width, bounds.Height);
                }

                /// <summary>
                /// This method is used instead of element.NativeElement.GetElementBounds because that
                /// method has a bug (http://sourceforge.net/tracker/?func=detail&aid=2994660&group_id=167632&atid=843727).
                /// </summary>
                private static Rectangle GetElementBounds(Element element)
                {
                    var ieElem = element.NativeElement as WatiN.Core.Native.InternetExplorer.IEElement;
                    IHTMLElement elem = ieElem.AsHtmlElement;
                    int left = elem.offsetLeft;
                    int top = elem.offsetTop;
                    for (IHTMLElement parent = elem.offsetParent; parent != null; parent = parent.offsetParent)
                    {
                        left += parent.offsetLeft;
                        top += parent.offsetTop;
                    }
                    return new Rectangle(left, top, elem.offsetWidth, elem.offsetHeight);
                }

                private static void SaveBitmap(string path, int left, int top, int width, int height)
                {
                    using (var bitmap = new Bitmap(width, height))
                    {
                        using (Graphics g = Graphics.FromImage(bitmap))
                        {
                            g.CopyFromScreen(new Point(left, top), Point.Empty, new Size(width, height));
                        }
                        bitmap.Save(path, ImageFormat.Jpeg);
                    }
                }

                private class ScreenshotCallbackArgs
                {
                    public InternetExplorerClass InternetExplorerClass { get; set; }
                    public string ScreenshotPath { get; set; }
                }
            }
        }
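    For context, a typical call site in a test might look like the following (a sketch, not from the original post; the URL, element ID, and CSS class name are placeholders):
        using (var ie = new IE("http://localhost/mysite"))            // hypothetical site under test
        {
            // point the highlighter at a style defined in the site's stylesheet
            WatinScreenshotSaver.HighlightCssClassName = "test-highlight";

            var saveButton = ie.Button(Find.ById("saveButton"));      // hypothetical element
            WatinScreenshotSaver.SaveBrowserWindowScreenshotWithHighlight(saveButton, "save button caption");
        }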

    Read the article

  • Creating packages in code - Package Configurations

    Continuing my theme of building various types of packages in code, this example shows how to build a package with package configurations. Incidentally it shows you how to add a variable, and a connection too. It covers the five most common configurations:
    - Configuration File
    - Indirect Configuration File
    - SQL Server
    - Indirect SQL Server
    - Environment Variable
    For a general overview try the SQL Server Books Online Package Configurations topic. The sample uses a simple helper function ApplyConfig to create or update a configuration, although in the example we will only ever create. The most useful knowledge is the configuration string (Configuration.ConfigurationString) that you need to set for each configuration type:
    - Configuration File: The full path and file name of an XML configuration file. The file can contain one or more configurations, each of which includes the target path and new value to set.
    - Indirect Configuration File: An environment variable, the value of which contains the full path and file name of an XML configuration file as per the Configuration File type described above.
    - SQL Server: A three-part configuration string, with each part being quote-delimited and separated by a semi-colon. The first part is the connection manager name. The connection tells you which server and database to look for the configuration table. The second part is the name of the configuration table. The table is of a standard format; use the Package Configuration Wizard to help create an example, or see the sample script files below. The table contains one or more rows or configuration items, each with a target path and new value. The third and final part is the optional filter name. A configuration table can contain multiple configurations, and the filter is a literal value that can be used to group items together and act as a filter clause when configurations are being read. If you do not need a filter, just leave the value empty.
    - Indirect SQL Server: An environment variable, the value of which is the three-part configuration string as per the SQL Server type described above.
    - Environment Variable: An environment variable, the value of which is the value to set in the package. This is slightly different to the other examples as the configuration definition in the package also includes the target information. In our ApplyConfig function this is the only example that actually supplies a target value for the Configuration.PackagePath property. The path is an XPath-style path for the target property, \Package.Variables[User::Variable].Properties[Value], with the object being our variable called Variable, and the property to set being the Value property of that variable object.
    The sample code creates the package, adds a variable and connection manager, enables configurations, and then adds our example configurations. The package is then saved to disk, useful for checking the package and testing, before finally executing, just to prove it is valid. There are some external resources used here, namely some environment variables and a table; see below for more details.
        namespace Konesans.Dts.Samples
        {
            using System;
            using Microsoft.SqlServer.Dts.Runtime;

            public class PackageConfigurations
            {
                public void CreatePackage()
                {
                    // Create a new package
                    Package package = new Package();
                    package.Name = "ConfigurationSample";

                    // Add a variable, the target for our configurations
                    package.Variables.Add("Variable", false, "User", 0);

                    // Add a connection, for SQL configurations
                    // Add the SQL OLE-DB connection
                    ConnectionManager connectionManagerOleDb = package.Connections.Add("OLEDB");
                    connectionManagerOleDb.Name = "SQLConnection";
                    connectionManagerOleDb.ConnectionString = "Provider=SQLOLEDB.1;Data Source=(local);Initial Catalog=master;Integrated Security=SSPI;";

                    // Add our example configurations, first must enable package setting
                    package.EnableConfigurations = true;

                    // Direct configuration file, see sample file
                    this.ApplyConfig(package, "Configuration File", DTSConfigurationType.ConfigFile, "C:\\Temp\\XmlConfig.dtsConfig", string.Empty);

                    // Indirect configuration file, the environment variable XmlConfigFileEnvironmentVariable
                    // contains the path to the configuration file, e.g. C:\Temp\XmlConfig.dtsConfig
                    this.ApplyConfig(package, "Indirect Configuration File", DTSConfigurationType.IConfigFile, "XmlConfigFileEnvironmentVariable", string.Empty);

                    // Direct SQL Server configuration, uses the SQLConnection package connection to read
                    // configurations from the [dbo].[SSIS Configurations] table, with a filter of "SampleFilter"
                    this.ApplyConfig(package, "SQL Server", DTSConfigurationType.SqlServer, "\"SQLConnection\";\"[dbo].[SSIS Configurations]\";\"SampleFilter\";", string.Empty);

                    // Indirect SQL Server configuration, the environment variable "SQLServerEnvironmentVariable"
                    // contains the configuration string e.g. "SQLConnection";"[dbo].[SSIS Configurations]";"SampleFilter";
                    this.ApplyConfig(package, "Indirect SQL Server", DTSConfigurationType.ISqlServer, "SQLServerEnvironmentVariable", string.Empty);

                    // Direct environment variable, the value of the EnvironmentVariable environment variable is
                    // applied to the target property, the value of the "User::Variable" package variable
                    this.ApplyConfig(package, "EnvironmentVariable", DTSConfigurationType.EnvVariable, "EnvironmentVariable", "\\Package.Variables[User::Variable].Properties[Value]");

        #if DEBUG
                    // Save package to disk, DEBUG only
                    new Application().SaveToXml(String.Format(@"C:\Temp\{0}.dtsx", package.Name), package, null);
                    Console.WriteLine(@"C:\Temp\{0}.dtsx", package.Name);
        #endif

                    // Execute package
                    package.Execute();

                    // Basic check for warnings
                    foreach (DtsWarning warning in package.Warnings)
                    {
                        Console.WriteLine("WarningCode : {0}", warning.WarningCode);
                        Console.WriteLine(" SubComponent : {0}", warning.SubComponent);
                        Console.WriteLine(" Description : {0}", warning.Description);
                        Console.WriteLine();
                    }

                    // Basic check for errors
                    foreach (DtsError error in package.Errors)
                    {
                        Console.WriteLine("ErrorCode : {0}", error.ErrorCode);
                        Console.WriteLine(" SubComponent : {0}", error.SubComponent);
                        Console.WriteLine(" Description : {0}", error.Description);
                        Console.WriteLine();
                    }

                    package.Dispose();
                }

                /// <summary>
                /// Add or update a package configuration.
                /// </summary>
                /// <param name="package">The package.</param>
                /// <param name="name">The configuration name.</param>
                /// <param name="type">The type of configuration</param>
                /// <param name="setting">The configuration setting.</param>
                /// <param name="target">The target of the configuration, leave blank if not required.</param>
                internal void ApplyConfig(Package package, string name, DTSConfigurationType type, string setting, string target)
                {
                    Configurations configurations = package.Configurations;
                    Configuration configuration;

                    if (configurations.Contains(name))
                    {
                        configuration = configurations[name];
                    }
                    else
                    {
                        configuration = configurations.Add();
                    }

                    configuration.Name = name;
                    configuration.ConfigurationType = type;
                    configuration.ConfigurationString = setting;
                    configuration.PackagePath = target;
                }
            }
        }
    The following table lists the environment variables required for the full example to work, along with some sample values:
    - EnvironmentVariable = 1
    - SQLServerEnvironmentVariable = "SQLConnection";"[dbo].[SSIS Configurations]";"SampleFilter";
    - XmlConfigFileEnvironmentVariable = C:\Temp\XmlConfig.dtsConfig
    Sample code, package and configuration file: ConfigurationApplication.cs, ConfigurationSample.dtsx, XmlConfig.dtsConfig
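    The sample XmlConfig.dtsConfig itself is only linked, not shown. For reference, a minimal sketch of what such a file typically contains; the header attributes are abbreviated with "..." placeholders, so treat this as illustrative rather than the exact linked file:
        <?xml version="1.0"?>
        <DTSConfiguration>
            <DTSConfigurationHeading>
                <DTSConfigurationFileInfo GeneratedBy="..." GeneratedFromPackageName="ConfigurationSample" GeneratedFromPackageID="..." GeneratedDate="..." />
            </DTSConfigurationHeading>
            <Configuration ConfiguredType="Property" Path="\Package.Variables[User::Variable].Properties[Value]" ValueType="Int32">
                <ConfiguredValue>1</ConfiguredValue>
            </Configuration>
        </DTSConfiguration>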

    Read the article

  • Michael Crump's notes for 70-563 PRO – Designing and Developing Windows Applications using .NET Framework 3.5

    - by mbcrump
    TIME TO GO PRO! These are my notes for 70-563 PRO – Designing and Developing Windows Applications using .NET Framework 3.5. I created them using several resources (various certification web sites, msdn, the official ms 70-548 book). The reason that I created this review is because a) I am taking the exam, b) MS did not create a book for this exam - use the (MS 70-548) book, and c) to make sure I am familiar with each topic before the exam. I hope that it provides a good start for your own notes, and that someone finds this useful. At least, it will give you a starting point of what to expect to know on the PRO exam. Also, for those wondering, the PRO exam contains very little code. It is basically all theory.
    1. Validation Controls - How to prevent users from entering invalid data on forms. (MaskedTextBox control and RegEx)
    2. ServiceController - used to start and control the behavior of existing services.
    3. User Feedback (know winforms Status Bar, Tool Tips, Color, Error Provider, Context-Sensitive and Accessibility)
    4. Specific (derived) exceptions must be handled before general (base class) exceptions. By moving the exception handling for the base type Exception to after the exception handling of ArgumentNullException, all ArgumentNullException exceptions thrown by the Helper method will be caught and logged correctly. (See the sketch at the end of this entry.)
    5. A heartbeat method is a method exposed by a Web service that allows external applications to check on the status of the service.
    6. New users must master key tasks quickly. Giving these tasks context and appropriate detail will help. However, advanced users will demand quicker paths. Shortcuts, accelerators, or toolbar buttons will speed things along for the advanced user.
    7. MSBuild uses project files to instruct the build engine what to build and how to build it. MSBuild project files are XML files that adhere to the MSBuild XML schema. The MSBuild project files contain complete file, build action, and dependency information for each individual project.
    8. Evaluating whether or not to fix a bug involves a triage process. You must identify the bug's impact, set the priority, categorize it, and assign a developer. Many times the person doing the triage work will assign the bug to a developer for further investigation. In fact, the workflow for the bug work item inside of Team System supports this step. Developers are often asked to assess the impact of a given bug. This assessment helps the person doing the triage make a decision on how to proceed. When assessing the impact of a bug, you should consider time and resources to fix it, bug risk, and impacts of the bug.
    9. In large projects it is generally impossible and unfeasible to fix all bugs because of the impact on schedule and budget.
    10. Code reviews should be conducted by a technical lead or a technical peer.
    11. Testing Applications
    12. WCF Services - application state
    13. SQL Server 2005 / 2008 Express Edition - reliable storage of data / Microsoft SQL Server 3.5 Compact Database - used for client computers to retrieve and save data from a shared location.
    14. SQL Server 2008 Compact Edition - used for minimum possible memory and can synchronize data with a corporate SQL Server 2008 Database. Supports offline use and minimum dependency on external components.
    15. MDI and SDI Forms (specifically IsMDIContainer)
    16. GUID - in the case of data warehousing, it is important to define unique keys.
    17. Encrypting / Securing Data
    18. Understanding of Isolated Storage / proper location to store items
    19. LINQ to SQL
    20. Multithreaded access
    21. ADO.NET Entity Framework model
    22. Marshal.ReleaseComObject
    23. Common User Interface Layout (ComboBox, ListBox, ListView, MaskedTextBox, TextBox, RichTextBox, SplitContainer, TableLayoutPanel, TabControl)
    24. DataSet class - http://msdn.microsoft.com/en-us/library/system.data.dataset%28VS.71%29.aspx
    25. SQL Server 2008 Reporting Services (SSRS)
    26. SystemIcons.Shield (Vista UAC)
    27. Leveraging stored procedures to perform data manipulation for a database schema that can change.
    28. DataContext
    29. Microsoft Windows Installer packages, ClickOnce (bootstrapping features), XCopy.
    30. Client Application Services - will authenticate users by using the same data source as an ASP.NET web application.
    31. SQL Server 2008 Caching
    32. StringBuilder
    33. Accessibility Guidelines for Windows Applications http://msdn.microsoft.com/en-us/library/ms228004.aspx
    34. Logging errors
    35. Testing performance related issues.
    36. Role Based Security, GenericIdentity and GenericPrincipal
    37. System.Net.CookieContainer will store session data for web apps (see isolated storage for winforms)
    38. .NET CLR Profiler tool will identify objects that cause performance issues.
    39. ADO.NET Synchronization (SyncGroup)
    40. Globalization - CultureInfo
    41. IDisposable Interface - several questions relate to this.
    42. Adding timestamps to determine whether data has changed or not.
    43. Converting applications to .NET Framework 3.5
    44. MicrosoftReportViewer
    45. Composite Controls
    46. Windows Vista KNOWN folders.
    47. Microsoft Sync Framework
    48. TypeConverter - Provides a unified way of converting types of values to other types, as well as for accessing standard values and subproperties. http://msdn.microsoft.com/en-us/library/system.componentmodel.typeconverter.aspx
    49. Concurrency control mechanisms. The main categories of concurrency control mechanisms are:
       - Optimistic - Delay the checking of whether a transaction meets the isolation rules (e.g., serializability and recoverability) until its end, without blocking any of its (read, write) operations, and then abort a transaction if the desired rules are violated.
       - Pessimistic - Block operations of a transaction if they may cause violation of the rules.
       - Semi-optimistic - Block operations in some situations, and do not block in other situations, while delaying rules checking to the transaction's end, as done with optimistic.
    50. AutoResetEvent
    51. Microsoft Message Queuing (MSMQ) 4.0
    52. Bulk imports
    53. KeyDown event of controls
    54. WPF UI components
    55. UI process layer
    56. GAC (installing, removing and querying)
    57. Use a local database cache to reduce the network bandwidth used by applications.
    58. Sound can easily be annoying and distracting to users, so use it judiciously. Always give users the option to turn sound off. Because a user might have sound off, never convey important information through sound alone.
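    As a quick illustration of note 4 (a sketch; Helper here is a hypothetical stand-in for the method the note refers to):
        using System;

        static class ExceptionOrderDemo
        {
            static void Helper(string value)
            {
                if (value == null) throw new ArgumentNullException("value");
            }

            static void Main()
            {
                try
                {
                    Helper(null);
                }
                catch (ArgumentNullException ex) // specific (derived) type first...
                {
                    Console.WriteLine("Null argument: " + ex.ParamName);
                }
                catch (Exception ex) // ...general (base) type last, as a catch-all
                {
                    Console.WriteLine("Unexpected: " + ex.Message);
                }
            }
        }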

    Read the article

  • AngularJS on top of ASP.NET: Moving the MVC framework out to the browser

    - by Varun Chatterji
    Heavily drawing inspiration from Ruby on Rails, MVC4's convention-over-configuration model of development soon became the Holy Grail of .NET web development. The MVC model brought with it the goodness of proper separation of concerns between business logic, data, and the presentation logic. However, the MVC paradigm was still one in which server side .NET code could be mixed with presentation code. The Razor templating engine, though cleaner than its predecessors, still encouraged and allowed you to mix .NET server side code with presentation logic. Thus, for example, if the developer required a certain <div> tag to be shown if a particular variable ShowDiv was true in the View's model, the code could look like the following:
    Fig 1: To show a div or not. Server side .NET code is used in the View
    Mixing .NET code with HTML in views can soon get very messy. Wouldn't it be nice if the presentation layer (HTML) could be pure HTML? Also, in the ASP.NET MVC model, some of the business logic invariably resides in the controller. It is tempting to use an anti-pattern like the one shown above to control whether a div should be shown or not. However, best practice would indicate that the Controller should not be aware of the div. The ShowDiv variable in the model should not exist. A controller should ideally only be used to do the plumbing of getting the data populated in the model and nothing else. The view (ideally pure HTML) should render the presentation layer based on the model. In this article we will see how AngularJS, a new JavaScript framework by Google, can be used effectively to build web applications where:
    1. Views are pure HTML
    2. Controllers (in the server sense) are pure REST based API calls
    3. The presentation layer is loaded as needed from partial HTML only files.
    What is MVVM?
    MVVM, short for Model View View Model, is a new paradigm in web development. In this paradigm, the Model and View stuff exists on the client side through JavaScript instead of being processed on the server through postbacks. These frameworks are JavaScript frameworks that facilitate the clear separation of the "frontend" or the data rendering logic from the "backend", which is typically just a REST based API that loads and processes data through a resource model. The frameworks are called MVVM as a change to the Model (through JavaScript) gets reflected in the view immediately, i.e. Model > View. Also, a change on the view (through manual input) gets reflected in the model immediately, i.e. View > Model. The following figure shows this conceptually (comments are shown in red):
    Fig 2: Demonstration of MVVM in action
    In Fig 2, two text boxes are bound to the same variable model.myInt. Thus, changing the view manually (changing one text box through keyboard input) also changes the other textbox in real time, demonstrating the View > Model property of an MVVM framework. Furthermore, clicking the button adds 1 to the value of model.myInt, thus changing the model through JavaScript. This immediately updates the view (the value in the two textboxes), demonstrating the Model > View property of an MVVM framework. Thus we see that the model in an MVVM JavaScript framework can be regarded as "the single source of truth". This is an important concept. Angular is one such MVVM framework. We shall use it to build a simple app that sends SMS messages to a particular number.
    Application, Routes, Views, Controllers, Scope and Models
    Angular can be used in many ways to construct web applications.
    For this article, we shall only focus on building Single Page Applications (SPAs). Many of the approaches we will follow in this article have alternatives. It is beyond the scope of this article to explain every nuance in detail, but we shall try to touch upon the basic concepts and end up with a working application that can be used to send SMS messages using Sent.ly Plus (a service that is itself built using Angular). Before you read on, we would like to urge you to forget what you know about Models, Views, Controllers and Routes in the ASP.NET MVC4 framework. All these words have different meanings in the Angular world. Whenever these words are used in this article, they will refer to Angular concepts and not ASP.NET MVC4 concepts. The following figure shows the skeleton of the root page of an SPA:
    Fig 3: The skeleton of a SPA
    The skeleton of the application is based on the Bootstrap starter template, which can be found at: http://getbootstrap.com/examples/starter-template/
    Apart from loading the Angular, jQuery and Bootstrap JavaScript libraries, it also loads our custom scripts:
    /app/js/controllers.js
    /app/js/app.js
    These scripts define the routes, views and controllers, which we shall come to in a moment.
    Application
    Notice that the body tag (Fig 3) has an extra attribute: ng-app="smsApp". Providing this tag "bootstraps" our single page application. It tells Angular to load a "module" called smsApp. This "module" is defined in /app/js/app.js:
        angular.module('smsApp', ['smsApp.controllers', function () {}])
    Fig 4: The definition of our application module
    The line shown above declares a module called smsApp. It also declares that this module "depends" on another module called "smsApp.controllers". The smsApp.controllers module will contain all the controllers for our SPA.
    Routing and Views
    Notice that in the Navbar (in Fig 3) we have included two hyperlinks to "#/app" and "#/help". This is how Angular handles routing. Since the URLs start with "#", they are actually just bookmarks (and not server side resources). However, our route definition (in /app/js/app.js) gives these URLs a special meaning within the Angular framework.
        angular.module('smsApp', ['smsApp.controllers', function () { }])
            // Configure the routes
            .config(['$routeProvider', function ($routeProvider) {
                $routeProvider.when('/binding', {
                    templateUrl: '/app/partials/bindingexample.html',
                    controller: 'BindingController'
                });
            }]);
    Fig 5: The definition of a route with an associated partial view and controller
    As we can see from the previous code sample, we are using the $routeProvider object in the configuration of our smsApp module. Notice how the code "asks for" the $routeProvider object by specifying it as a dependency in the [] braces and then defining a function that accepts it as a parameter. This is known as dependency injection. Please refer to the following link if you want to delve into this topic: http://docs.angularjs.org/guide/di
    What the above code snippet is doing is telling Angular that when the URL is "#/binding", it should load the HTML snippet ("partial view") found at /app/partials/bindingexample.html. Also, for this URL, Angular should load the controller called "BindingController". We have also marked the div with the class "container" (in Fig 3) with the ng-view attribute. This attribute tells Angular that views (partial HTML pages) defined in the routes will be loaded within this div.
    You can see that the Angular JavaScript framework, unlike many other frameworks, works purely by extending HTML tags and attributes. It also allows you to extend HTML with your own tags and attributes (through directives) if you so desire; you can find out more about directives at the following URL: http://www.codeproject.com/Articles/607873/Extending-HTML-with-AngularJS-Directives
    Controllers and Models
    We have seen how we define what views and controllers should be loaded for a particular route. Let us now consider how controllers are defined. Our controllers are defined in the file /app/js/controllers.js. The following snippet shows the definition of the "BindingController", which is loaded when we hit the URL http://localhost:port/index.html#/binding (as we have defined in the route earlier, as shown in Fig 5). Remember that we had defined that our application module "smsApp" depends on the "smsApp.controllers" module (see Fig 4). The code snippet below shows how the "BindingController" defined in the route shown in Fig 5 is defined in the module smsApp.controllers:
        angular.module('smsApp.controllers', [function () { }])
            .controller('BindingController', ['$scope', function ($scope) {
                $scope.model = {};
                $scope.model.myInt = 6;
                $scope.addOne = function () {
                    $scope.model.myInt++;
                }
            }]);
    Fig 6: The definition of a controller in the "smsApp.controllers" module.
    The pieces are falling in place! Remember Fig 2? That was the code of a partial view that was loaded within the container div of the skeleton SPA shown in Fig 3. The route definition shown in Fig 5 also defined that the controller called "BindingController" (shown in Fig 6) was loaded when we loaded the URL http://localhost:22544/index.html#/binding. The button in Fig 2 was marked with the attribute ng-click="addOne()", which added 1 to the value of model.myInt. In Fig 6, we can see that this function is actually defined in the "BindingController".
    Scope
    We can see from Fig 6 that, in the definition of "BindingController", we defined a dependency on $scope and then, as usual, defined a function which "asks for" $scope as per the dependency injection pattern. So what is $scope? Any guesses? As you might have guessed, a scope is a particular "address space" where variables and functions may be defined. This has a similar meaning to scope in a programming language like C#.
    Model: The Scope is not the Model
    It is tempting to assign variables in the scope directly. For example, we could have defined myInt as $scope.myInt = 6 in Fig 6 instead of $scope.model.myInt = 6. The reason why this is a bad idea is that scope is hierarchical in Angular. Thus if we were to define a controller within another controller (nested controllers), then the inner controller would inherit the scope of the parent controller. This inheritance follows JavaScript prototypal inheritance. Let's say the parent controller defined a variable through $scope.myInt = 6. The child controller would inherit the scope through JavaScript prototypal inheritance. This basically means that the child scope has a variable myInt that points to the parent scope's myInt variable. Now if we assigned the value of myInt in the parent, the child scope would be updated with the same value, as the child scope's myInt variable points to the parent scope's myInt variable.
    However, if we were to assign the value of the myInt variable in the child scope, then the link of that variable to the parent scope would be broken, as the variable myInt in the child scope now points to the value 6 and not to the parent scope's myInt variable. But, if we defined a variable model in the parent scope, then the child scope will also have a variable model that points to the model variable in the parent scope. Updating the value of $scope.model.myInt in the parent scope would change the model variable in the child scope too, as the variable is pointed to the model variable in the parent scope. Now changing the value of $scope.model.myInt in the child scope would ALSO change the value in the parent scope. This is because the model reference in the child scope is pointed to the scope variable in the parent. We did no new assignment to the model variable in the child scope. We only changed an attribute of the model variable. Since the model variable (in the child scope) points to the model variable in the parent scope, we have successfully changed the value of myInt in the parent scope. Thus the value of $scope.model.myInt in the parent scope becomes the "single source of truth". This is a tricky concept, thus it is considered good practice to NOT use scope inheritance. More info on prototypal inheritance in Angular can be found in the "JavaScript Prototypal Inheritance" section at the following URL: https://github.com/angular/angular.js/wiki/Understanding-Scopes
    Building It: An Angular JS application using a .NET Web API Backend
    Now that we have a perspective on the basic components of an MVVM application built using Angular, let's build something useful. We will build an application that can be used to send out SMS messages to a given phone number. The following diagram describes the architecture of the application we are going to build:
    Fig 7: Broad application architecture
    We are going to add an HTML partial to our project. This partial will contain the form fields that will accept the phone number and message that needs to be sent as an SMS. It will also display all the messages that have previously been sent. All the executable code that is run on the occurrence of events (button clicks etc.) in the view resides in the controller. The controller interacts with the ASP.NET WebAPI to get a history of SMS messages, add a message etc. through a REST based API. For the purposes of simplicity, we will use an in-memory data structure for the purposes of creating this application. Thus, the tasks ahead of us are:
    1. Creating the REST WebAPI with GET, PUT, POST, DELETE methods.
    2. Creating the SmsView.html partial.
    3. Creating the SmsController controller with methods that are called from the SmsView.html partial.
    4. Adding a new route that loads the controller and the partial.
    1. Creating the REST WebAPI
    This is a simple task that should be quite straightforward to any .NET developer.
The following listing shows our ApiController:

public class SmsMessage
{
    public string to { get; set; }
    public string message { get; set; }
}

public class SmsResource : SmsMessage
{
    public int smsId { get; set; }
}

public class SmsResourceController : ApiController
{
    public static Dictionary<int, SmsResource> messages = new Dictionary<int, SmsResource>();
    public static int currentId = 0;

    // GET api/<controller>
    public List<SmsResource> Get()
    {
        List<SmsResource> result = new List<SmsResource>();
        foreach (int key in messages.Keys)
        {
            result.Add(messages[key]);
        }
        return result;
    }

    // GET api/<controller>/5
    public SmsResource Get(int id)
    {
        if (messages.ContainsKey(id))
            return messages[id];
        return null;
    }

    // POST api/<controller>
    public List<SmsResource> Post([FromBody] SmsMessage value)
    {
        // Synchronize on messages so we don't have id collisions
        lock (messages)
        {
            // Model binding gives us an SmsMessage; copy it into a new
            // SmsResource (a plain downcast would fail at runtime).
            SmsResource res = new SmsResource { to = value.to, message = value.message };
            res.smsId = currentId++;
            messages.Add(res.smsId, res);
            //SentlyPlusSmsSender.SendMessage(value.to, value.message);
            return Get();
        }
    }

    // PUT api/<controller>/5
    public List<SmsResource> Put(int id, [FromBody] SmsMessage value)
    {
        // Synchronize on messages so we don't have id collisions
        lock (messages)
        {
            if (messages.ContainsKey(id))
            {
                // Update the message
                messages[id].message = value.message;
                messages[id].to = value.to;
            }
            return Get();
        }
    }

    // DELETE api/<controller>/5
    public List<SmsResource> Delete(int id)
    {
        if (messages.ContainsKey(id))
        {
            messages.Remove(id);
        }
        return Get();
    }
}

Once this class is defined, we should be able to access the Web API with a simple GET request from the browser: http://localhost:port/api/SmsResource

Notice the commented line //SentlyPlusSmsSender.SendMessage. The SentlyPlusSmsSender class is defined in the attached solution. We have left this line commented here because we want to focus on the core Angular concepts; in the attached solution the line is uncommented and an actual SMS is sent!

By default, the API returns XML. For consumption in Angular, we would like it to return JSON. To change the default to JSON, we make the following change to the WebApiConfig.cs file located in the App_Start folder:

public static class WebApiConfig
{
    public static void Register(HttpConfiguration config)
    {
        config.Routes.MapHttpRoute(
            name: "DefaultApi",
            routeTemplate: "api/{controller}/{id}",
            defaults: new { id = RouteParameter.Optional }
        );

        // Remove the XML formatter's media type so JSON becomes the default
        var appXmlType = config.Formatters.XmlFormatter
            .SupportedMediaTypes
            .FirstOrDefault(t => t.MediaType == "application/xml");
        config.Formatters.XmlFormatter.SupportedMediaTypes.Remove(appXmlType);
    }
}

We now have our backend REST API which we can consume from Angular!
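To sanity-check the JSON output before writing any Angular code, you can fire a quick request from the browser's developer console while the site is running. This is a throwaway snippet, not part of the application:

var xhr = new XMLHttpRequest();
xhr.open('GET', '/api/smsresource');
xhr.setRequestHeader('Accept', 'application/json');
xhr.onload = function () {
    // Expect a JSON array, e.g. [] before any messages have been posted
    console.log(xhr.status, xhr.responseText);
};
xhr.send();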
2. Creating the SmsView.html partial

This simple partial defines two fields: the destination phone number (in international format, starting with a +) and the message. These fields are bound to model.phoneNumber and model.message. We also add a button that we hook up to sendMessage() in the controller. A list of all previously sent messages (bound to model.allMessages) is displayed below the form input. The following code shows the partial:

<!-- If model.errorMessage is defined, then render the error div -->
<div class="alert alert-danger alert-dismissable" style="margin-top: 30px;"
     ng-show="model.errorMessage != undefined">
    <button type="button" class="close" data-dismiss="alert" aria-hidden="true">&times;</button>
    <strong>Error!</strong>
    <br />
    {{ model.errorMessage }}
</div>

<!-- The input fields bound to the model -->
<div class="well" style="margin-top: 30px;">
    <table style="width: 100%;">
        <tr>
            <td style="width: 45%; text-align: center;">
                <input type="text" placeholder="Phone number (e.g. +44 7778 609466)"
                       ng-model="model.phoneNumber" class="form-control" style="width: 90%"
                       onkeypress="return checkPhoneInput();" />
            </td>
            <td style="width: 45%; text-align: center;">
                <input type="text" placeholder="Message" ng-model="model.message"
                       class="form-control" style="width: 90%" />
            </td>
            <td style="text-align: center;">
                <button class="btn btn-danger" ng-click="sendMessage();"
                        ng-disabled="model.isAjaxInProgress"
                        style="margin-right: 5px;">Send</button>
                <img src="/Content/ajax-loader.gif" ng-show="model.isAjaxInProgress" />
            </td>
        </tr>
    </table>
</div>

<!-- The past messages -->
<div style="margin-top: 30px;">
    <!-- The following div is shown if there are no past messages -->
    <div ng-show="model.allMessages.length == 0">
        No messages have been sent yet!
    </div>
    <!-- The following div is shown if there are some past messages -->
    <div ng-show="model.allMessages.length != 0">
        <table style="width: 100%;" class="table table-striped">
            <tr>
                <td>Phone Number</td>
                <td>Message</td>
                <td></td>
            </tr>
            <!-- The ng-repeat directive is like the repeater control in .NET,
                 but as you can see this partial is pure HTML, which is much cleaner -->
            <tr ng-repeat="message in model.allMessages">
                <td>{{ message.to }}</td>
                <td>{{ message.message }}</td>
                <td>
                    <button class="btn btn-danger" ng-click="delete(message.smsId);"
                            ng-disabled="model.isAjaxInProgress">Delete</button>
                </td>
            </tr>
        </table>
    </div>
</div>

The above code is commented and should be self-explanatory. Conditional rendering is achieved with the ng-show="condition" attribute on the various div tags. Input fields are bound to the model, and the Send button is bound to the sendMessage() function in the controller through the ng-click="sendMessage()" attribute on the button tag. While AJAX calls are in progress, the controller sets model.isAjaxInProgress to true; based on this variable, buttons are disabled through the ng-disabled directive added as an attribute to the buttons. The ng-repeat directive added as an attribute to the tr tag causes the table row to be rendered once per message, much like an ASP.NET repeater.

3. Creating the SmsController controller

The penultimate piece of our application is the controller, which responds to events from our view and interacts with our MVC4 REST Web API. Note that controller definitions can be chained, and that this controller "asks for" the $http service. The $http service is a simple way in Angular to do AJAX. So far we have only encountered modules, controllers, views and directives in Angular; $http is a new kind of Angular entity called a service. More information on Angular services can be found at the following URL: http://docs.angularjs.org/guide/dev_guide.services.understanding_services. The following listing shows the code we need to add to /app/js/controllers.js:
.controller('SmsController', ['$scope', '$http', function ($scope, $http) {
    // We define the model
    $scope.model = {};
    // The allMessages array will contain all the messages sent so far
    $scope.model.allMessages = [];
    // The error, if any
    $scope.model.errorMessage = undefined;
    // We load data on startup, so set isAjaxInProgress = true
    $scope.model.isAjaxInProgress = true;

    // Load all the messages
    $http({ url: '/api/smsresource', method: "GET" }).
        success(function (data, status, headers, config) {
            // This callback is called asynchronously
            // when the response is available
            $scope.model.allMessages = data;
            // We are done with AJAX loading
            $scope.model.isAjaxInProgress = false;
        }).
        error(function (data, status, headers, config) {
            // Called asynchronously if an error occurs
            // or the server returns a response with an error status
            $scope.model.errorMessage = "Error occurred status:" + status;
            // We are done with AJAX loading
            $scope.model.isAjaxInProgress = false;
        });

    $scope.delete = function (id) {
        // We are making an AJAX call, so we set this to true
        $scope.model.isAjaxInProgress = true;
        $http({ url: '/api/smsresource/' + id, method: "DELETE" }).
            success(function (data, status, headers, config) {
                $scope.model.allMessages = data;
                $scope.model.isAjaxInProgress = false;
            }).
            error(function (data, status, headers, config) {
                $scope.model.errorMessage = "Error occurred status:" + status;
                $scope.model.isAjaxInProgress = false;
            });
    };

    $scope.sendMessage = function () {
        $scope.model.errorMessage = undefined;
        var message = '';
        if ($scope.model.message != undefined)
            message = $scope.model.message.trim();
        if ($scope.model.phoneNumber == undefined || $scope.model.phoneNumber == ''
                || $scope.model.phoneNumber.length < 10 || $scope.model.phoneNumber[0] != '+') {
            $scope.model.errorMessage = "You must enter a valid phone number in international format. E.g. +44 7778 609466";
            return;
        }
        if (message.length == 0) {
            $scope.model.errorMessage = "You must specify a message!";
            return;
        }
        // We are making an AJAX call, so we set this to true
        $scope.model.isAjaxInProgress = true;
        $http({
            url: '/api/smsresource',
            method: "POST",
            data: { to: $scope.model.phoneNumber, message: $scope.model.message }
        }).
            success(function (data, status, headers, config) {
                $scope.model.allMessages = data;
                $scope.model.isAjaxInProgress = false;
            }).
            error(function (data, status, headers, config) {
                $scope.model.errorMessage = "Error occurred status:" + status;
                $scope.model.isAjaxInProgress = false;
            });
    };
}]);

We can see from the previous listing how the functions that are called from the view are defined in the controller. It should also be evident how easy it is to make AJAX calls to consume our MVC4 REST Web API.
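As a controller like this grows, the $http plumbing is often extracted into a dedicated Angular service so it can be shared between controllers and tested in isolation. The following is a minimal sketch of that refactoring; the smsApi factory and the smsApp.services module name are hypothetical and not part of the article's code, which keeps the $http calls in the controller:

angular.module('smsApp.services', [])
    .factory('smsApi', ['$http', function ($http) {
        var baseUrl = '/api/smsresource';
        return {
            // Each method returns the $http promise so the controller
            // can attach its own success/error handlers.
            getAll: function () {
                return $http({ url: baseUrl, method: 'GET' });
            },
            send: function (to, message) {
                return $http({ url: baseUrl, method: 'POST', data: { to: to, message: message } });
            },
            remove: function (id) {
                return $http({ url: baseUrl + '/' + id, method: 'DELETE' });
            }
        };
    }]);

The controller would then declare a dependency on smsApi instead of $http, and 'smsApp.services' would be added to the application module's dependency list.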
Now we are left with the final piece: we need to define a route that associates a particular path with the view and the controller we have defined.

4. Add a new route that loads the controller and the partial

This is the easiest part of the puzzle. We simply define another route in the /app/js/app.js file:

$routeProvider.when('/sms', {
    templateUrl: '/app/partials/smsview.html',
    controller: 'SmsController'
});
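For context, the complete /app/js/app.js would then look roughly like the sketch below. The '/binding' template path and the otherwise() fallback are assumptions based on the earlier figures, not the article's exact file; also note that in Angular 1.2 and later the router lives in the separate ngRoute module, which would need to be listed as a dependency:

angular.module('smsApp', ['smsApp.controllers'])
    .config(['$routeProvider', function ($routeProvider) {
        // The route from the earlier part of the article
        $routeProvider.when('/binding', {
            templateUrl: '/app/partials/binding.html',
            controller: 'BindingController'
        });
        // The new route for the SMS view
        $routeProvider.when('/sms', {
            templateUrl: '/app/partials/smsview.html',
            controller: 'SmsController'
        });
        // Anything else falls back to the binding view
        $routeProvider.otherwise({ redirectTo: '/binding' });
    }]);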
Conclusion

In this article we have seen how much of the server-side functionality in the MVC4 framework can be moved to the browser, delivering a snappy and fast user interface. We have seen how we can build client-side, HTML-only views that avoid the messy syntax of server-side Razor views, and we have built a functioning app from the ground up. The significant advantage of this approach to building web apps is that the front end is completely platform independent: even though we used ASP.NET to create our REST API, we could just as easily have used another stack such as Node.js or Ruby without changing a single line of our front-end code. Angular is a rich framework and we have only touched on the basic functionality required to create an SPA. For readers who wish to delve further into the Angular framework, we recommend the following URL as a starting point: http://docs.angularjs.org/misc/started.

To get started with the code for this project:

Sign up for an account at http://plus.sent.ly (free)
Add your phone number
Go to the "My Identities" page
Note down your Sender ID, Consumer Key and Consumer Secret
Download the code for this article at: https://docs.google.com/file/d/0BzjEWqSE31yoZjZlV0d0R2Y3eW8/edit?usp=sharing
Change the values of Sender Id, Consumer Key and Consumer Secret in the web.config file
Run the project through Visual Studio!