Search Results

Search found 13299 results on 532 pages for 'github api'.

  • How do I correctly install dulwich to get hg-git working on Windows?

    - by Joshua Flanagan
    I'm trying to use the hg-git Mercurial extension on Windows (Windows 7 64-bit, to be specific). I have Mercurial and Git installed. I have Python 2.5 (32-bit) installed. I followed the instructions on http://hg-git.github.com/ to install the extension. The initial easy_install failed because it was unable to compile dulwich without Visual Studio 2003. I installed dulwich manually by: git clone git://git.samba.org/jelmer/dulwich.git cd dulwich c:\Python25\python setup.py --pure install Now when I run easy_install hg-git, it succeeds (since the dulwich dependency is satisfied). In my C:\Users\username\Mercurial.ini, I have: [extensions] hgext.bookmarks = hggit = When I type 'hg' at a command prompt, I see: "* failed to import extension hggit: No module named hggit" Looking under my c:\Python25 folder, the only reference to hggit I see is Lib\site-packages\hg_git-0.2.1-py2.5.egg. Is this supposed to be extracted somewhere, or should it work as-is? Since that failed, I attempted the "more involved" instructions from the hg-git page that suggested cloning git://github.com/schacon/hg-git.git and referencing the path in my Mercurial configuration. I cloned the repo, and changed my extensions file to look like: [extensions] hgext.bookmarks = hggit = c:\code\hg-git\hggit Now when I run hg, I see: * failed to import extension hggit from c:\code\hg-git\hggit: No module named dulwich.errors. Ok, so that tells me that it is finding hggit now, because I can see in hg-git\hggit\git_handler.py that it calls from dulwich.errors import HangupException That makes me think dulwich is not installed correctly, or not in the path. Update: From Python command line: import dulwich yields Import Error: No module named dulwich However, under C:\Python25\Lib\site-packages, I do have a dulwich-0.5.0-py2.5.egg folder which appears to be populated. This was created by the steps mentioned above. Is there an additional step I need to take to make it part of the Python "path"?
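
    A quick thing to check with a setup like this is whether the dulwich egg directory is actually on Python's import path. As a rough sketch (the egg folder name below is an assumption based on the version mentioned above; adjust it to whatever actually sits in site-packages), you can verify and extend sys.path before importing:

        import sys
        import site

        # Hypothetical path to the egg built above; confirm the exact folder name first.
        EGG_DIR = r"C:\Python25\Lib\site-packages\dulwich-0.5.0-py2.5.egg"

        print(sys.path)                 # is the egg directory (or a site-packages .pth entry) listed?
        if EGG_DIR not in sys.path:
            sys.path.append(EGG_DIR)    # or: site.addsitedir(r"C:\Python25\Lib\site-packages")

        import dulwich                  # should succeed once the egg is on the path
        print(dulwich.__file__)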

    Read the article

  • iOS LocationManager is not updating location (Titanium Appcelerator module)

    - by vale4674
    I've made an Appcelerator Titanium module for fetching the device's rotation and location. The source can be found on GitHub. The problem is that it fetches only one cached location, although the device motion data is OK and keeps refreshing. I don't use a delegate; I pull that data from my Titanium JavaScript code. If I set "City Run" in Simulator - Debug - Location, nothing happens. The same cached location keeps being returned. Pulling the location itself is OK, because I tried a native app which does this: textView.text = [NSString stringWithFormat:@"%f %f\n%@", locationManager.location.coordinate.longitude, locationManager.location.coordinate.latitude, textView.text]; and it works in the simulator and on the device. But the same code, as you can see on GitHub, is not working as a Titanium module. Any ideas? EDIT: I am looking at the GeolocationModule source and I see nothing special there. As I said, the code in my module has to work, since it works in a native app. The "only" problem is that it is not updating the location and always returns that cached location.

    Read the article

  • Best way to fork SVN project with Git

    - by Jeremy Thomerson
    I have forked an SVN project using Git because I needed to add features that they didn't want. But at the same time, I wanted to be able to continue pulling in features or fixes that they added to the upstream version down into my fork (where they don't conflict). So, I have my Git project with the following branches: master - the branch I actually build and deploy from feature_* - feature branches where I work or have worked on new things, which I then merge to master when complete vendor-svn - my local-only git-svn branch that allows me to "git svn rebase" from their svn repo vendor - my local branch that I merge vendor-svn into; I then push this (vendor) branch to the public git repo (github) So, my flow is something like this: git checkout vendor-svn git svn rebase git checkout vendor git merge vendor-svn git push origin vendor Now, the question comes here: I need to review each commit that they made (preferably individually since at this point I'm about twenty commits behind them) before merging them into master. I know that I could run git checkout master; git merge vendor, but this would pull in all changes and commit them, without me being able to see if they conflict with what I need. So, what's the best way to do this? Git seems like a great tool for handling forks of projects since you can pull and push from multiple repos - I'm just not experienced with it enough to know the best way of doing this. Here's the original SVN project I'm talking about: https://appkonference.svn.sourceforge.net/svnroot/appkonference My fork is at github.com/jthomerson/AsteriskAudioKonf (sorry - I couldn't make it a link since I'm a new user here)

    Read the article

  • Google Code Jam 2010 Large DataSets Take Too Long to Submit

    - by Travis
    Hey guys, I'm participating in the 2010 Code Jam and I solved two of the problems for the small data sets, but I'm not even close to solving the large data sets in the 8-minute time frame. I'm wondering, for anyone out there who has solved the large data sets: What hardware were you running on? What language were you writing in? What performance tuning techniques did you apply to make your code run as fast as possible? I'm writing the solutions in Ruby, which is not my day-to-day language, and executing them on my MacBook Pro. My solutions for problem A and problem C are on GitHub at http://github.com/tjboudreaux/codejam2010. I'd appreciate any suggestions that you may have. FWIW, I have a lot of experience in C++ from college, my primary language is PHP, and my "sandbox" language is Ruby. Was I just a bit ambitious by taking a shot at this in Ruby, not knowing where the language struggles for performance, or does anyone see anything that's a red flag as to why I can't complete the large data set in time to submit?
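
    For what it's worth, the usual large-dataset tuning tricks are mostly language-agnostic: read all input in one pass, buffer all output, and memoize repeated subcomputations. A minimal sketch of that pattern (shown in Python rather than Ruby, with a made-up solve() placeholder standing in for the per-case logic) looks like this:

        import sys
        from functools import lru_cache   # Python 3.2+; older versions can use a plain dict cache

        @lru_cache(maxsize=None)
        def solve(n):
            # Placeholder computation; memoization helps when test cases share subproblems.
            return n * n

        def main():
            data = sys.stdin.read().split()             # one bulk read instead of per-line I/O
            t = int(data[0])
            out = []
            for case in range(1, t + 1):
                n = int(data[case])
                out.append("Case #%d: %d" % (case, solve(n)))
            sys.stdout.write("\n".join(out) + "\n")     # one bulk write

        if __name__ == "__main__":
            main()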

    Read the article

  • Running Flask framework on App Engine: Could not find module app.cgi

    - by Linc
    I'm running this Flask example app on App Engine: http://github.com/gigq/flasktodo You can see on the github page that app.cgi is in the main directory for this project. However, when I run this code I get an error complaining about a missing app.cgi: ERROR 2010-05-01 16:43:47,006 dev_appserver.py:2109] Encountered error loading module "app.cgi": <type 'exceptions.ImportError'>: Could not find module app.cgi Traceback (most recent call last): File "/opt/google_appengine/google/appengine/tools/dev_appserver.py", line 2096, in LoadTargetModule module_code = import_hook.get_code(module_fullname) File "/opt/google_appengine/google/appengine/tools/dev_appserver.py", line 1279, in Decorate return func(self, *args, **kwargs) File "/opt/google_appengine/google/appengine/tools/dev_appserver.py", line 1956, in get_code full_path, search_path, submodule = self.GetModuleInfo(fullname) File "/opt/google_appengine/google/appengine/tools/dev_appserver.py", line 1279, in Decorate return func(self, *args, **kwargs) File "/opt/google_appengine/google/appengine/tools/dev_appserver.py", line 1908, in GetModuleInfo submodule, search_path = self.GetParentSearchPath(fullname) File "/opt/google_appengine/google/appengine/tools/dev_appserver.py", line 1279, in Decorate return func(self, *args, **kwargs) File "/opt/google_appengine/google/appengine/tools/dev_appserver.py", line 1887, in GetParentSearchPath parent_package = self.GetParentPackage(fullname) File "/opt/google_appengine/google/appengine/tools/dev_appserver.py", line 1279, in Decorate return func(self, *args, **kwargs) File "/opt/google_appengine/google/appengine/tools/dev_appserver.py", line 1864, in GetParentPackage raise ImportError('Could not find module %s' % fullname) ImportError: Could not find module app.cgi How do I indicate to dev_appserver.py where to look to find it?
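
    For reference, the Python 2.5 App Engine runtime of that era expected app.yaml to point at a CGI-style script, and a common workaround was to bypass app.cgi and serve the Flask WSGI app from a tiny handler module instead. A rough sketch (the module and import names are assumptions, not the actual flasktodo layout) would be:

        # main.py -- hypothetical entry point, referenced from app.yaml as "script: main.py"
        from google.appengine.ext.webapp.util import run_wsgi_app

        from application import app   # the Flask application object; the module name is an assumption

        def main():
            run_wsgi_app(app)          # hand the WSGI app to the App Engine CGI adapter

        if __name__ == "__main__":
            main()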

    Read the article

  • How to push different local git branches to heroku/master

    - by lsiden
    Heroku has a policy of ignoring all branches but 'master'. While I'm sure Heroku's designers have excellent reasons for this policy (I'm guessing for storage and performance optimization), the consequence to me as a developer is that whatever local topic branch I may be working on, I would like an easy way to switch Heroku's master to that local topic branch and do a "git push heroku -f" to overwrite master on Heroku. What I got from reading the "Pushing Refspecs" section of http://progit.org/book/ch9-5.html is git push -f heroku local-topic-branch:refs/heads/master What I'd really like is a way to set this up in the config file so that "git push heroku" always does the above, replacing local-topic-branch with the name of whatever my current branch happens to be. If anyone knows how to accomplish that, please let me know! The caveat for this, of course, is that this is only sensible if I am the only one who can push to that Heroku app/repository. A test or QA team might manage such a repository to try out different candidate branches, but they would have to coordinate so that they all agree on what branch they are pushing to it on any given day. Needless to say, it would also be a very good idea to have a separate remote repository (like Github) without this restriction for backing everything up to. I'd call that one "origin" and use "heroku" for Heroku so that "git push" always backs up everything to origin, and "git push heroku" pushes whatever branch I'm currently on to Heroku's master branch, overwriting it if necessary. Can anybody tell me if this would work? [remote "heroku"] url = [email protected]:my-app.git push = +refs/heads/*:refs/heads/master I'd like to hear from someone more experienced before I begin to experiment, although I suppose I could create a dummy app on Heroku and experiment with that. As for fetching, I don't really care if the Heroku repository is write-only. I still have a separate repository, like Github, for backup and cloning of all my work. Footnote: This question is similar to, but not quite the same as http://stackoverflow.com/questions/1489393/good-git-deployment-using-branches-strategy-with-heroku

    Read the article

  • homebrew path issue

    - by Shaun Stanislaus
    Master:~ shaunstanislaus$ ruby <(curl -fsSkL raw.github.com/mxcl/homebrew/go) ==> This script will install: /usr/local/bin/brew /usr/local/Library/... /usr/local/share/man/man1/brew.1 Press enter to continue ==> Downloading and Installing Homebrew... remote: Counting objects: 82368, done. remote: Compressing objects: 100% (39323/39323), done. remote: Total 82368 (delta 56782), reused 65301 (delta 42220) Receiving objects: 100% (82368/82368), 11.68 MiB | 1.59 MiB/s, done. Resolving deltas: 100% (56782/56782), done. From https://github.com/mxcl/homebrew * [new branch] master -> origin/master HEAD is now at 2ea1a0e smpeg: depends on gtk ==> Installation successful! You should run `brew doctor' *before* you install anything. Now type: brew help Master:~ shaunstanislaus$ brew doctor -bash: /usr/local/bin/brew: /System/Library/Frameworks/Ruby.framework/Versions/1.8/usr/bin/ruby: bad interpreter: No such file or directory Master:~ shaunstanislaus$ echo $PATH /usr/bin:/bin:/usr/sbin:/sbin:/usr/local/bin:/opt/X11/bin:/Users/shaunstanislaus/Library/Application Support/GoodSync:/opt/local/bin:/opt/local/sbin:/usr/local/sbin:/Users/shaunstanislaus/.ec2/bin:/Users/shaunstanislaus/.rvm/bin /usr/local/bin/brew: /System/Library/Frameworks/Ruby.framework/Versions/1.8/usr/bin/ruby: bad interpreter: No such file or directory How do I fix this path issue? I can't use the brew command, and I think I previously symlinked it to the wrong location. Please advise. Thank you.

    Read the article

  • Git : Failed at pushing to remote server, ' REPOSITORY_PATH ' is not a git command

    - by Judarkness
    I'm using Git with TortoiseGit on Windows XP, and I have a remote bare repository on Windows Vista (64-bit). When I tried to push my local files to the remote bare repository, I got the following error message: git.exe push "origin" master:master git: 'C:/Git_Repository/.git' is not a git command. See 'git --help'. fatal: The remote end hung up unexpectedly The URL is: username@serverip:C:/Git_Repository/.git The same URL worked just fine while doing clone/fetch/pull. Access from a local directory on the remote machine to this bare repository has no problem either, so I believe there is something wrong with my path. I can push/pull to GitHub correctly, but there I was using the URL provided by GitHub. Does anyone know what's wrong with my configuration? Here is my remote .git/config [core] repositoryformatversion = 0 filemode = false bare = true logallrefupdates = true ignorecase = true hideDotFiles = dotGitOnly Here is my local .git/config [core] repositoryformatversion = 0 filemode = false bare = false logallrefupdates = true symlinks = false ignorecase = true hideDotFiles = dotGitOnly [remote "origin"] fetch = +refs/heads url = username@serverip:C:/Git_Repository/.git [branch "master"] remote = origin merge = refs/heads/master Thanks in advance.

    Read the article

  • Association and model data saving problem

    - by Zhlobopotam
    I'm developing with CakePHP 1.3 (latest from GitHub). There are two models bound with hasAndBelongsToMany: documents and tags; in other words, a Document can have many Tags. I've added a new document submission form where the user can enter a list of tags separated by commas (a new tag will be added if it does not already exist). I looked at the CakePHP Bakery 2.0 source code on GitHub and found a solution, but it seems that something is wrong. class Document extends AppModel { public $hasAndBelongsToMany = array('Tag'); public function beforeSave($options = array()) { if (isset($this->data[$this->alias]['tags']) && !empty($this->data[$this->alias]['tags'])) { $tagIds = $this->Tag->saveDocTags($this->data[$this->alias]['tags']); unset($this->data[$this->alias]['tags']); $this->data[$this->Tag->alias][$this->Tag->alias] = $tagIds; } return true; } } class Tag extends AppModel { public $hasAndBelongsToMany = array ('Document'); public function saveDocTags($commalist = '') { if ($commalist == '') return null; $tags = explode(',',$commalist); if (empty($tags)) return null; $existing = $this->find('all', array( 'conditions' => array('title' => $tags) )); $return = Set::extract($existing,'/Tag/id'); if (sizeof($existing) == sizeof($tags)) { return $return; } $existing = Set::extract($existing,'/Tag/title'); foreach ($tags as $tag) { if (!in_array($tag, $existing)) { $this->create(array('title' => $tag)); $this->save(); $return[] = $this->id; } } return $return; } } So, creating new tags works well, but the Document model can't save the association data and reports: SQL Error: 1054: Unknown column 'Array' in 'field list' Query: INSERT INTO documents (title, content, shortnfo, date, status) VALUES ('Document with tags', '', '', Array, 1) Any ideas how to solve this problem?

    Read the article

  • Spree customize/extend user roles and permissions

    - by swapnil
    I am trying to specify some custom roles in Spree, for example a role 'client', and extend the permissions so that this role can access the admin section. This user should be able to access only those Products created by that user. The concept is to let a user with the role 'client' manage only Products and certain other models. To start with, I added the CanCan plugin and defined a RoleAbility class in role_ability.rb, following this post: Spree Custom Roles Permissions class RoleAbility include CanCan::Ability def initialize(user) user ||= User.new if user.has_role? 'admin' can :manage, :all elsif user.has_role? 'client_admin' can :read, Product can :admin, Product end end end I added this to an initializer, config/initializers/spree.rb: Ability.register_ability(RetailerAbility) I also extended admin_products_controller_decorator.rb in app/controllers/admin_products_controller_decorator.rb: Admin::ProductsController.class_eval do def authorize_admin authorize! :admin, Product authorize! params[:action].to_sym, Product end end But I am getting the flash message 'Authorisation Failure'. Trying to find some luck, I referred to the following links: A GitHub gist for customizing Spree roles: https://gist.github.com/1277326 Here's a similar issue to what I am facing: http://groups.google.com/group/spree-user/browse_thread/thread/1e819e10410d03c5/23b269e09c7ed47e All efforts in vain... Any pointers to what is going on here are highly appreciated. Thanks in advance.

    Read the article

  • paperclip overwrites / resets S3 permissions for non-bucket-owners

    - by adriandz
    I have opened this as an issue on Github (http://github.com/thoughtbot/paperclip/issues/issue/225) but on the chance that I'm just doing this wrong, I thought I'd also ask about it here. If someone can tell me where I'm going wrong, I can close the issue and save the Paperclip guys some trouble. Issue: When using S3 for storage, and you wish your bucket to allow access to other users to whom you have granted access, Paperclip appears to overwrite the permissions on the bucket, removing access to these users. Process for duplication: Create a bucket in S3 and set up a Rails app with Paperclip to use this bucket for storage Add a user (for example, [email protected], the user for the video encoding service Zencoder) to the bucket, and grant this user List and Read/Write permissions. Upload a file. Refresh the permissions. The user you added will be gone. As well, a user "Everyone" with read permissions will have been added. The end result is that you cannot, so far as I can tell, retain desired permissions on your bucket when using Paperclip and S3. Can anyone help?

    Read the article

  • Using Cucumber With Modular Sinatra Apps

    - by Rob Conery
    I'm building out a medium-sized application using Sinatra and all was well when I had a single app.rb file and I followed Aslak's guidance up on GitHub: http://wiki.github.com/aslakhellesoy/cucumber/sinatra As the app grew a bit larger and the app.rb file started to bulge, I refactored out a lot of the bits into "middleware" style modules using Sinatra::Base, mapping things using a rack-up file (config.ru) etc. The app works nicely - but my specs blew up as there was no more app.rb file for webrat to run against (as defined in the link above). I've tried to find examples on how to work this - and I think I'm just not used to the internal guts of Cuke just yet as I can't find a single way to have it cover all the apps. I tried just pointing to "config.ru" instead of app.rb - but that doesn't work. What I ended up doing - which is completely hackish - is to have a separate app.rb file in my support directory, which has all the require statements so I can at least test the model stuff. I can also specify routes in there - but that's not at all what I want to do. So - the question is: how can I get Cucumber to properly work with the modular app approach?

    Read the article

  • Trouble using the termextraction gem

    - by mathee
    I'm trying to install this gem: http://github.com/alexrabarts/term_extraction. It required nokogiri, which I tried installing. I'm getting this as the output: >gem install nokogiri Successfully installed nokogiri-1.4.2.1-x86-mswin32 1 gem installed Installing ri documentation for nokogiri-1.4.2.1-x86-mswin32... No definition for parse_memory No definition for parse_file No definition for parse_with No definition for get_options No definition for set_options Installing RDoc documentation for nokogiri-1.4.2.1-x86-mswin32... No definition for parse_memory No definition for parse_file No definition for parse_with No definition for get_options No definition for set_options I was able to install the termextraction gem (per the README on the git repo): >gem install alexrabarts-term_extraction -s http://gems.github.com Successfully installed alexrabarts-term_extraction-0.1.4 1 gem installed Installing ri documentation for alexrabarts-term_extraction-0.1.4... Installing RDoc documentation for alexrabarts-term_extraction-0.1.4... The issue is that I'm trying to test it out, but I'm getting an "uninitialized constant" error when I use it: ActionView::TemplateError (uninitialized constant ApplicationHelper::TermExtraction) on line #3 of app/views/questions/new.haml: 1: %h1 New question 2: -msg = "testing this context thing let's see what it gives me" 3: -getTerms(msg) 4: #new-question-form 5: .box-background 6: -form_for(@question) do |f| app/helpers/application_helper.rb:42:in `getTerms' app/views/questions/new.haml:3:in `_run_haml_app47views47questions47new46haml' haml (2.2.23) lib/haml/helpers/action_view_mods.rb:13:in `render' haml (2.2.23) lib/haml/helpers/action_view_mods.rb:13:in `render' app/controllers/questions_controller.rb:91:in `new' haml (2.2.23) lib/sass/plugin/rails.rb:20:in `process' Here is application_helper.rb: module ApplicationHelper def getTerms(context) yahoo = TermExtraction::Yahoo.new(:api_key => 'myAPIkey', :context => context) end end I'm not sure what the issue is. Any insight would be greatly appreciated!

    Read the article

  • Error installing FeedZirra

    - by Gautam
    Hi, I am new to Ruby on Rails. I am excited about feed parsing, but when I install FeedZirra I am getting this error. I use Windows 7 and Ruby 1.8.7. Please help. Thanks in advance. C:\Ruby187>gem sources -a http://gems.github.com http://gems.github.com added to sources C:\Ruby187>gem install pauldix-feedzirra Building native extensions. This could take a while... ERROR: Error installing pauldix-feedzirra: ERROR: Failed to build gem native extension. C:/Ruby187/bin/ruby.exe extconf.rb checking for curl-config... no checking for main() in -lcurl... no *** extconf.rb failed *** Could not create Makefile due to some reason, probably lack of necessary libraries and/or headers. Check the mkmf.log file for more details. You may need configuration options. Provided configuration options: --with-opt-dir --without-opt-dir --with-opt-include --without-opt-include=${opt-dir}/include --with-opt-lib --without-opt-lib=${opt-dir}/lib --with-make-prog --without-make-prog --srcdir=. --curdir --ruby=C:/Ruby187/bin/ruby --with-curl-dir --without-curl-dir --with-curl-include --without-curl-include=${curl-dir}/include --with-curl-lib --without-curl-lib=${curl-dir}/lib --with-curllib --without-curllib extconf.rb:12: Can't find libcurl or curl/curl.h (RuntimeError) Try passing --with-curl-dir or --with-curl-lib and --with-curl-include options to extconf. Gem files will remain installed in C:/Ruby187/lib/ruby/gems/1.8/gems/taf2-curb-0.5.4.0 for inspection. Results logged to C:/Ruby187/lib/ruby/gems/1.8/gems/taf2-curb-0.5.4.0/ext/gem_make.out

    Read the article

  • Facebook dialog appears but always says "An error has occurred"

    - by Conor James
    <!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Strict//EN" "http://www.w3.org/TR/xhtml1/DTD/xhtml1-strict.dtd"> <html xmlns="http://www.w3.org/1999/xhtml" xmlns:fb="http://www.facebook.com/2008/fbml"> ... <head> ... <script src="http://connect.facebook.net/en_US/all.js#xfbml=1"></script> </head> <body> <div id="fb-root"></div> <fb:like id="fb-like" href="http://www.example.com" layout="button_count" show_faces="false" width="100"></fb:like> ... myscript.js: FB.ui( { method: 'stream.publish', message: 'getting educated about Facebook Connect', attachment: { name: 'Connect', caption: 'The Facebook Connect JavaScript SDK', description: ( 'A small JavaScript library that allows you to harness ' + 'the power of Facebook, bringing the user\'s identity, ' + 'social graph and distribution power to your site.' ), href: 'http://github.com/facebook/connect-js' }, action_links: [ { text: 'Code', href: 'http://github.com/facebook/connect-js' } ], user_message_prompt: 'Share your thoughts about Connect' }, function(response) { if (response && response.post_id) { alert('Post was published.'); } else { alert('Post was not published.'); } } ); I have a Facebook Like button on my page which works fine. But when I call the FB.ui method above from my JavaScript source, the Facebook dialog pops up but displays this error message: **An error occurred. Please try again later.** This has happened repeatedly for two days since I started trying to implement it. Not a very helpful error message. Any idea what might cause it or how to narrow down the problem?

    Read the article

  • Etiquette: Version bump my fork of opensource project?

    - by Ross
    This question is about etiquette and open source projects. I have forked an application from GitHub and added two new features. The first feature has been requested frequently elsewhere, and I have added it; the code and implementation are clean (I think). The second feature is more of a hack. It will be of use to others, but the implementation is a little dirty in usage and more so in code. I need the feature, but I don't have the skills to fully implement it properly, or to a level that could be considered a worthwhile contribution to the main project. How should the versioning work? Do I just bump up my version numbers carefree and push to my master branch? It is annoying not knowing which version is running, modified or original, as both have the same version number. But will it be confusing when, months later, my GitHub page has a version number the same as the original but the two are actually completely different? (I have made pull requests etc., but that is not the context of my question.) The project I have forked uses Ruby Jeweler, so it has a versioning format of: Jeweler tracks the version of your project. It assumes you will be using a version in the format x.y.z. x is the 'major' version, y is the 'minor' version, and z is the patch version. Is this standard for other projects/languages too? Are my changes patches? Thanks

    Read the article

  • What is a recommended Android utility class collection?

    - by Sebastian Roth
    I often find myself writing very similar code across projects. And more often than not, I copy stuff over from old projects. Things like: create images with round corners; read density into a static variable and re-use it (4 lines); disable / hide multiple views / remote views at once. Example: public static void disableViews(RemoteViews views, int... ids) { for (int id : ids) { views.setInt(id, SET_VISIBILITY, View.GONE); } } public static void showViews(RemoteViews views, int... ids) { for (int id : ids) { views.setInt(id, SET_VISIBILITY, View.VISIBLE); } } I'd love to package these kinds of functions into 1-letter / 2-letter class names, i.e. V.showViews(RemoteViews views, int... ids) would be easy to write and remember, I hope. I'm searching for GitHub recommendations and links, and if nothing is found, I will perhaps start a small project on GitHub to collect them.

    Read the article

  • Resumable Upload in Ruby on Rails

    - by user253011
    Hi, I have been searching for a way to do resumable file upload in RoR. In conclusion, I found that, other than a Java applet, no client-side, cross-platform agent can access the file system in such a way as to request the file from the position where the upload got terminated (for whatever reason), with some exceptions like http://github.com/taf2/resume-up/tree/master (built in native Ruby, but it requires Google Gears, which is not "reliable" yet when it comes to cross-platform use; almost the same story as ActiveX!). Since the only reliable option left is a Java applet, is there any good tutorial/forum/documentation for those paid Java applets, such as "thin slice upload" etc., to make it work with a Rails application? I have found one, http://github.com/dassi/mediaclue, a German (not multilingual) application in which they used Jumploader. But in that application, I am unable to see resumable functionality. Scratching my head against their documentation, I found http://jumploader.com/doc_resume.html. It says that Jumploader has resume functionality in the form of cross-session resume, the one I am looking for (if the user closes the browser, the new session gets hold of uncompleted uploaded files from the old session against the user id). But I can't find any example on their demos page which actually demonstrates pause/resume functionality in a continuous manner! Is it even possible to achieve that kind of resumable functionality? Please tell me about any options/examples/demos, preferably deployed in Rails. I shall be very much obliged. ~ Thanks
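
    Whatever client ends up being used, the server side of a cross-session resume is conceptually small: persist what has been received per upload id, report the current offset, and append from there. A framework-neutral sketch of that idea (written in Python here, with a made-up storage path, not tied to Rails or Jumploader) might look like:

        import os

        UPLOAD_DIR = "/tmp/uploads"   # hypothetical storage location, one file per upload id

        def current_offset(upload_id):
            """Report how many bytes are already stored, so the client knows where to resume."""
            path = os.path.join(UPLOAD_DIR, upload_id)
            return os.path.getsize(path) if os.path.exists(path) else 0

        def append_chunk(upload_id, offset, chunk):
            """Append a chunk only if it starts exactly where the stored file currently ends."""
            path = os.path.join(UPLOAD_DIR, upload_id)
            if offset != current_offset(upload_id):
                raise ValueError("offset mismatch; client should re-query current_offset")
            with open(path, "ab") as f:
                f.write(chunk)
            return current_offset(upload_id)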

    Read the article

  • JavaScript library not working in IE, can't see error information.

    - by Wolfy87
    Hi there. I have been writing a JavaScript library for a few weeks now and it works brilliantly in Firefox, Chrome and Safari, but I had not tested it in IE until recently. I do not own a Windows box, so after testing it on my friend's machine and realising it wasn't working, I started going over my code for things that could be causing it to break. So far I have found nothing. I could not find any descriptions of the errors in the browser while I was there either. So I wondered if anyone could run my test script in an IE browser (6, 7 or 8) and let me know any information they can find as to why it crashed. Please ignore any information saying it works in IE6; I put that up there after testing it through http://ipinfo.info/netrenderer/ and just assumed it was working because I could set transparency and size via my script and see it run in this tool. Here is the link to my GitHub repository: https://github.com/Wolfy87/Spark If you download it and run spark.html, it will attempt to run all of my functions from the library. So if anyone could be kind enough to run it in IE and let me know what errors they are getting, and possibly how to fix them, I would be extremely grateful. Thank you in advance. EDIT: Here is its website: http://sparkjs.co.uk/

    Read the article

  • Help me understand making maven project w/ non-maven jar dependencies usable by others

    - by deet
    Hi, I'm in the process of learning maven (and java packaging & distribution) with a new oss project I'm making as practice. Here's my situation, all java of course: My main project is ProjectA, maven-based, in a GitHub repository. I have also created one utility project, maven-based, on GitHub: ProjectB. ProjectA depends on a project I have heavily modified that originally was from a google-code ant-based repository, ProjectC. So, how do I set up the build for ProjectA such that someone can download ProjectA.jar and use it without needing to install jars for ProjectB and ProjectC, and also how do I set up the build such that someone could check out ProjectA and run only 'mvn package' for a full compile? (Additionally, what should I do with my modified version of ProjectC? Include the class files directly in ProjectA, or fork the project into something that could then be used as a maven dependency?) I've been reading around (links such as this SO question and this SO question), but I'm unclear how those relate to my particular circumstance. So, any help would be appreciated. Thanks!

    Read the article

  • pathogen#infect not updating the runtimepath

    - by Taylor Price
    I have started working with pathogen.vim with gvim on Windows, following Tim Pope's setup guide at his github repository here. However, I'm running into the problem that pathogen#infect() does not seem to be modifying the runtimepath (as seen by running :echo &runtimepath in gvim). The simple test case _vimrc that I came up with is as follows. Please note that pathogen gets loaded just fine. "Set a base directory. let $BASE_DIR='H:\development\github\vimrc' "Source pathogen since it's not in the normal autoload directory. source $BASE_DIR\autoload\pathogen.vim "Start up pathogen call pathogen#infect() "call pathogen#infect('$BASE_DIR\functions') Neither running pathogen#infect() without an argument (which should add the bundles directory under the vimfiles directory) nor specifying a directory to contain files works. Substituting the pathogen#infect() call with pathogen#runtime_prepend_subdirectories('$BASE_DIR\functions'), which is what pathogen#infect() does fails to change the runtimepath as well. Any ideas that I've missed? Any more information that would be helpful? My repository with the non-trivial example is here.

    Read the article

  • CodePlex Daily Summary for Friday, March 12, 2010

    CodePlex Daily Summary for Friday, March 12, 2010New Projects.NET DEPENDENCY INJECTION: Abel Perez Enterprise FrameworkAutodocs - WCF REST Automatic API Documentation Generator: Autodocs is an automatic API documentation generator for .NET applications that use Windows Communication Foundation (WCF) to establish REST API's.BlockBlock: Block Block is a free game. You know Lumines and you will like BlockBlock.C4F XNA ASCII Post-Processing: This is the source code for the Coding4Fun article "XNA Effects – ASCII Art in 3D"ChequePrinter: this is ChequePrinterCompiladores MSIL usando Phoenix (PLP 2008.1 - CIn/UFPE): Este projeto foi feito com o intuito de explorar a plataforma Microsoft Phoenix para a construção de compiladores para MSIL de duas linguagens de E...CRM External View: CRM External View enables more robust control over exposing Microsoft CRM data (in a form of views) for external parties. The solution uses web ser...CS Project2: This is for the projectDotNetNuke IM Module of Facebook Like Messenger: Help you integrate 123 Web Messenger into DotNetNuke, and add a powerful 1-to-1 IM Software named "Facebook Messenger Style Web Chat Bar" at the bo...DotNetNuke® RadPanelBar: DNNRadPanelBar makes it easy to add telerik RadPanelBar functionality to your module or skin. Licensing permits anyone to use the components (incl...DotNetNuke® Skin Blocks: A DotNetNuke Design Challenge skin package submitted to the "Modern Business" category by Armand Datema of Schwingsoft. This skin uses a bit of jQu...Drilltrough and filtering on SSAS-cubes in SSRS: We will describe a technique to create Reporting services (SSRS) reports that use Analysis services (SSAS) cubes as data sources, have a very intu...Ecosystem Diagnosis & Treatment: The Ecosystem DIagnosis & Treatment community provides tools, analyses and applications of the medical model to natural resource problems. EDT sof...ExIf 35: A utility for use by film photographers for keeping track of critical facts about images taken on a roll of film, just as digital cameras do automa...FabricadeTI: Desenvolvimento do framework FabricadeTI.Find and Replace word in the sentences: This program used Java Development Kid 6.0 and i were using HighLighter class. It was completed code with source code and then everybody can use in...Flash Nut: Flash Nut is a flash card program. You can build and review decks of flash cards. The project is a vs2008 wpf application.Free DotNetNuke Chat Module (Popup Mode): With this free DotNetNuke Chat Module (Popup Mode), master will assist to integrate DotNetNuke with 123 Flash Chat seamlessly, and add a popup mode...Free DotNetNuke IM of 123 Web Messenger -- Web-based Friend List: With this FREE application, you could integrate DNN website Database with 123 Web Messenger seamlessly and embed a web-based Friends List into anyw...Free DotNetNuke Live Help Module: With DotNetNuke Live Help Module, integrate 123 Live Help into DotNetNuke website and add Live Chat Button anywhere you like. Let visitors to chat ...G52GRP Videowall: NottinghamHappy Turtle Plugins for BVI :: Repository Based Versioning for Visual Studio: The Happy Turtle project creates plugins for the Build Version Increment Add-In for Visual Studio (BVI). The focus is to automatically version asse...Hasher: Hasher es capaz de generar el hash MD5 y SHA de textos de hasta 100.000 caracteres y ficheros. 
También te permitirá comprobar dos hash para verifi...Infragistics Silverlight Extended Controls: This project is a group of controls that extend or add functionality to the Infragistics Silverlight control suite. This control requires Infragis...Insert Video Jnr: This is a baby version of my Video plugin, it is intended for Hosted Wordpress blogs only and shouldn't be used with other blog providers.jccc .NET smart framework: jccc .NET smart framework allows the creation of fast connections to MSSQL or MYSQL databases, and the data manipulation by using of c# class's tha...LytScript: 函数式脚本语言Microsoft - DDD NLayerApp .NET 4.0 Example (Microsoft Spain): DDD NLayered App .NET 4.0 Example By Microsoft - Spain Domain Driven Design NLayered App .NET 4.0 Example Implementation Example of our local Arc...mimiKit: Lightweight ASP.NET MVC / Javascript Framework for creating mobile applications PHPWord: With PHPWord you can easily create a Word document with PHP. PHPWord creates docx Files that can include all major word functions like TextElements...Protocol Transition with BizTalk: An example solution the shows how todo Protocol Transition with BizTalk. This also shows you how to create a WCF extension to allow this to happen.Raid Runner: Raid Runner makes it easier to run and manage raid in World of Warcraft. It is a Silverlight application developed in c#SQL Server Authentication Troubleshooter: SQL Server Authentication Troubleshooter is a tool to help investigate a root cause of ‘Login Failed’ error in SQL Server. There could be number of...SuperviseObjects: SuperviseObjects consists of a collection which is derived from ObservableCollection<T>. This collection fires ItemPropertyChanging and ItemPropert...Viuto: Viuto.NET project aims to create a fully track and trace application. It is developed in: - Java & C: Firmware - C#: Parser - Asp.net: Tracki...Zealand IT MSBuild Tasks: Zealand IT MSBuild Tasks is a collection that you cannot do without if you are serious about continous integration. Ever wish you could specify an...New ReleasesASP.NET: ASP.NET MVC 2 RTM: This release contains the source code for ASP.NET MVC 2 RTM as well as the ASP.NET MVC Futures project. The futures project contains features that ...C#Mail: Higuchi.Mail.dll (2010.3.11 ver): Higuchi.Mail.dll at 2010-3-11 version.C#Mail: Higuchi.MailServer.dll (2010.3.11 ver): Higuchi.MailServer.dll at 2010.3.11 version.C4F XNA ASCII Post-Processing: XNA ASCII FPS v1 - Full Version: This is the full, complete example of the XNA ASCII FPS.C4F XNA ASCII Post-Processing: XNA ASCII FPS v1.0 - Base Project: This is the base project to be used by those who plan to follow along the Coding4Fun article.CRM External View: 1.0: Release 1.0DevTreks -social budgeting that improves lives and livelihoods: Social Budgeting Web Software, DevTreks alpha 3c: Alpha 3c upgrades custom/virtual uris (devpacks), temp uris, and zip packages. This is believed to be the first fully functional/performant release.DotNetNuke® RadPanelBar: DNNRadPanelBar 1.0.0: DNNRadPanelBar makes it easy to add telerik RadPanelBar functionality to your module or skin. 
Licensing permits anyone to use the components (inclu...Drilltrough and filtering on SSAS-cubes in SSRS: Release 1: Release 1ExIf 35: ExIf 35: Daily build of ExIf 35Family Tree Analyzer: Version 1.0.3.0: Version 1.0.3.0 Added options to check for updates on load and on help menu Disable use of US census for now until dealt with years being differen...Family Tree Analyzer: Version 1.0.4.0: Version 1.0.4.0 Added support for display of Ahnenfatel numbers Added filter to hide individuals from Lost Cousins report that have been flagged a...Flash Nut: Flash Nut 1.0 Setup: Flash Nut SetupFluent Validation for .NET: 1.2 RC: This is the release candidate for FluentValidation 1.2. If no bugs are found within the next couple of weeks, then this will become the 1.2 Final b...Free DotNetNuke Chat Module (Popup Mode): Download DNN Chat Module (Popup Mode)+Source Code: Feel free to download DotNetNuke Chat Module (Popup Mode), integrating DotNetNuke with 123 Flash Chat Software, and add a free popup mode flash cha...Free DotNetNuke Live Help Module: Download DNN Live Support Module and Source Code: In Readme file, there are detailed Installation and Integration Manual for you. This module is compatible with DotNetNuke v5.x.Happy Turtle Plugins for BVI :: Repository Based Versioning for Visual Studio: Happy Turtle 1.0.44927: This is the first release of the SVN based version incrementor. How To InstallMake sure that Build Version Increment v2.2.10065.1524 or newer is i...Hasher: 1.0: Versión inicial de la aplicación: Obtención de hash MD5 y SHA. Codificación en tiempo real de textos de hasta 100.000 caracteres. Codificación ...Jamolina: PhotosynthDemo: PhotosynthDemoMapWindow GIS: MapWindow 6.0 msi (March 11): This fixes an PixelToProj problem for the Extended Buffer case, as well as adding fixes to the WKBFeatureReader to fix an X,Y reversal and some ext...Math.NET Numerics: 2010.3.11.291 Build: Latest alpha buildMicrosoft - DDD NLayerApp .NET 4.0 Example (Microsoft Spain): V0.5 - N-Layer DDD Sample App: Required Software (Microsoft Base Software needed for Development environment) Unity Application Block 1.2 - October 2008 http://www.microsoft.com/...MiniTwitter: 1.09.2: MiniTwitter 1.09.2 更新内容 修正 タイムラインを削除すると落ちるバグを修正 稀にタイムラインのスクロールが出来ないバグを修正Nestoria.NET: Nestoria.NET 0.8: Provides access to the Nestoria API. Documentation contains a basic getting started guide. Please visit Darren Edge's blog for ongoing developmen...Pod Thrower: Version 1.0: Here is version 1.0. It has all the features I was looking to do in it. Please let me know if you use this and if you would like any changes.SharePoint Ad Rotator: SPAdRotator 2.0 Beta: This new release of the Ad Rotator contains many new features. One major new feature is that jQuery has been added to do image rotation without hav...SharePoint Objects: Democode Ton Stegeman: These download contains sample code for some SharePoint 2007 blog posts: TST.Themes_Build20100311.zip contains a feature receiver that registers Sh...SharePoint Taxonomy Extensions: SharePoint Taxonomy Extensions 1.2: Make Taxonomy Extensions useable in every list type. Not only in document libraries.SharePoint Video Player Web Part & SharePoint Video Library: Version 3.0.0: Absolutely killer feature - installing multiple players on a page without any loss of performance.SilverLight Interface for Mapserver: SLMapViewer v. 1.0: SLMapviewer sample application version 1.0. 
This new release includes the following enhancements: Silverlight 3.0 native Added a new init parame...Spark View Engine: Spark v1.1: Changes since RC1Built against ASP.NET MVC 2 RTMSPSS .NET interop library: 2.0: This new version supports SPSS 15, and includes spssio32.dll and other native .dll dependencies so that it works out of the box without SPSS being ...stefvanhooijdonk.com: SharePoint2010.ProfilePicturesLoader: So, with the help of Reflector, I wrote a small tool that would import all our profile pictures and update the user profiles. http://wp.me/pMnlQ-6G SuperviseObjects: SuperviseObjects 1.0: First releaseTortoiseSVN Addin for Visual Studio: TortoiseSVN Addin 1.0.5: Feature: Visual Studio/svn action synchronization on Item in Solution explorer like add, move, delete and rename. Note: Move action does not rememb...VCC: Latest build, v2.1.30311.0: Automatic drop of latest buildVivoSocial: VivoSocial 7.0.4: Business Management ■This release fixes a Could not load type error on the main view of the module. Groups ■Group requests were failing in some i...WikiPlex – a Regex Wiki Engine: WikiPlex 1.3: Info: Official Version: 1.3.0.215 | Full Release Notes Documentation - This new documentation includes Full Markup Guide with Examples Articles ...Zealand IT MSBuild Tasks: Zealand IT MSBuild Tasks: Initial beta release of Zealand IT MSBuild Tasks. Contains the following tasks: RunAs - Same as Exec task, but provides parameters for impersonat...ZoomBarPlus: V1 (Beta): This is the initial release. It should be considered a beta test version as it has not been tested for very long on my device.Most Popular ProjectsMetaSharpWBFS ManagerRawrAJAX Control ToolkitMicrosoft SQL Server Product Samples: DatabaseSilverlight ToolkitWindows Presentation Foundation (WPF)ASP.NET Ajax LibraryASP.NETMicrosoft SQL Server Community & SamplesMost Active ProjectsUmbraco CMSRawrN2 CMSBlogEngine.NETFasterflect - A Fast and Simple Reflection APIjQuery Library for SharePoint Web Servicespatterns & practices – Enterprise LibraryFarseer Physics EngineCaliburn: An Application Framework for WPF and SilverlightSharePoint Team-Mailer

    Read the article

  • Inheritance Mapping Strategies with Entity Framework Code First CTP5: Part 3 – Table per Concrete Type (TPC) and Choosing Strategy Guidelines

    - by mortezam
    This is the third (and last) post in a series that explains different approaches to map an inheritance hierarchy with EF Code First. I've described these strategies in previous posts: Part 1 – Table per Hierarchy (TPH) Part 2 – Table per Type (TPT)In today’s blog post I am going to discuss Table per Concrete Type (TPC) which completes the inheritance mapping strategies supported by EF Code First. At the end of this post I will provide some guidelines to choose an inheritance strategy mainly based on what we've learned in this series. TPC and Entity Framework in the Past Table per Concrete type is somehow the simplest approach suggested, yet using TPC with EF is one of those concepts that has not been covered very well so far and I've seen in some resources that it was even discouraged. The reason for that is just because Entity Data Model Designer in VS2010 doesn't support TPC (even though the EF runtime does). That basically means if you are following EF's Database-First or Model-First approaches then configuring TPC requires manually writing XML in the EDMX file which is not considered to be a fun practice. Well, no more. You'll see that with Code First, creating TPC is perfectly possible with fluent API just like other strategies and you don't need to avoid TPC due to the lack of designer support as you would probably do in other EF approaches. Table per Concrete Type (TPC)In Table per Concrete type (aka Table per Concrete class) we use exactly one table for each (nonabstract) class. All properties of a class, including inherited properties, can be mapped to columns of this table, as shown in the following figure: As you can see, the SQL schema is not aware of the inheritance; effectively, we’ve mapped two unrelated tables to a more expressive class structure. If the base class was concrete, then an additional table would be needed to hold instances of that class. I have to emphasize that there is no relationship between the database tables, except for the fact that they share some similar columns. TPC Implementation in Code First Just like the TPT implementation, we need to specify a separate table for each of the subclasses. We also need to tell Code First that we want all of the inherited properties to be mapped as part of this table. In CTP5, there is a new helper method on EntityMappingConfiguration class called MapInheritedProperties that exactly does this for us. 
Here is the complete object model as well as the fluent API to create a TPC mapping: public abstract class BillingDetail {     public int BillingDetailId { get; set; }     public string Owner { get; set; }     public string Number { get; set; } }          public class BankAccount : BillingDetail {     public string BankName { get; set; }     public string Swift { get; set; } }          public class CreditCard : BillingDetail {     public int CardType { get; set; }     public string ExpiryMonth { get; set; }     public string ExpiryYear { get; set; } }      public class InheritanceMappingContext : DbContext {     public DbSet<BillingDetail> BillingDetails { get; set; }              protected override void OnModelCreating(ModelBuilder modelBuilder)     {         modelBuilder.Entity<BankAccount>().Map(m =>         {             m.MapInheritedProperties();             m.ToTable("BankAccounts");         });         modelBuilder.Entity<CreditCard>().Map(m =>         {             m.MapInheritedProperties();             m.ToTable("CreditCards");         });                 } } The Importance of EntityMappingConfiguration ClassAs a side note, it worth mentioning that EntityMappingConfiguration class turns out to be a key type for inheritance mapping in Code First. Here is an snapshot of this class: namespace System.Data.Entity.ModelConfiguration.Configuration.Mapping {     public class EntityMappingConfiguration<TEntityType> where TEntityType : class     {         public ValueConditionConfiguration Requires(string discriminator);         public void ToTable(string tableName);         public void MapInheritedProperties();     } } As you have seen so far, we used its Requires method to customize TPH. We also used its ToTable method to create a TPT and now we are using its MapInheritedProperties along with ToTable method to create our TPC mapping. TPC Configuration is Not Done Yet!We are not quite done with our TPC configuration and there is more into this story even though the fluent API we saw perfectly created a TPC mapping for us in the database. To see why, let's start working with our object model. For example, the following code creates two new objects of BankAccount and CreditCard types and tries to add them to the database: using (var context = new InheritanceMappingContext()) {     BankAccount bankAccount = new BankAccount();     CreditCard creditCard = new CreditCard() { CardType = 1 };                      context.BillingDetails.Add(bankAccount);     context.BillingDetails.Add(creditCard);     context.SaveChanges(); } Running this code throws an InvalidOperationException with this message: The changes to the database were committed successfully, but an error occurred while updating the object context. The ObjectContext might be in an inconsistent state. Inner exception message: AcceptChanges cannot continue because the object's key values conflict with another object in the ObjectStateManager. Make sure that the key values are unique before calling AcceptChanges. The reason we got this exception is because DbContext.SaveChanges() internally invokes SaveChanges method of its internal ObjectContext. ObjectContext's SaveChanges method on its turn by default calls AcceptAllChanges after it has performed the database modifications. AcceptAllChanges method merely iterates over all entries in ObjectStateManager and invokes AcceptChanges on each of them. 
Since the entities are in Added state, AcceptChanges method replaces their temporary EntityKey with a regular EntityKey based on the primary key values (i.e. BillingDetailId) that come back from the database and that's where the problem occurs since both the entities have been assigned the same value for their primary key by the database (i.e. on both BillingDetailId = 1) and the problem is that ObjectStateManager cannot track objects of the same type (i.e. BillingDetail) with the same EntityKey value hence it throws. If you take a closer look at the TPC's SQL schema above, you'll see why the database generated the same values for the primary keys: the BillingDetailId column in both BankAccounts and CreditCards table has been marked as identity. How to Solve The Identity Problem in TPC As you saw, using SQL Server’s int identity columns doesn't work very well together with TPC since there will be duplicate entity keys when inserting in subclasses tables with all having the same identity seed. Therefore, to solve this, either a spread seed (where each table has its own initial seed value) will be needed, or a mechanism other than SQL Server’s int identity should be used. Some other RDBMSes have other mechanisms allowing a sequence (identity) to be shared by multiple tables, and something similar can be achieved with GUID keys in SQL Server. While using GUID keys, or int identity keys with different starting seeds will solve the problem but yet another solution would be to completely switch off identity on the primary key property. As a result, we need to take the responsibility of providing unique keys when inserting records to the database. We will go with this solution since it works regardless of which database engine is used. Switching Off Identity in Code First We can switch off identity simply by placing DatabaseGenerated attribute on the primary key property and pass DatabaseGenerationOption.None to its constructor. DatabaseGenerated attribute is a new data annotation which has been added to System.ComponentModel.DataAnnotations namespace in CTP5: public abstract class BillingDetail {     [DatabaseGenerated(DatabaseGenerationOption.None)]     public int BillingDetailId { get; set; }     public string Owner { get; set; }     public string Number { get; set; } } As always, we can achieve the same result by using fluent API, if you prefer that: modelBuilder.Entity<BillingDetail>()             .Property(p => p.BillingDetailId)             .HasDatabaseGenerationOption(DatabaseGenerationOption.None); Working With The Object Model Our TPC mapping is ready and we can try adding new records to the database. But, like I said, now we need to take care of providing unique keys when creating new objects: using (var context = new InheritanceMappingContext()) {     BankAccount bankAccount = new BankAccount()      {          BillingDetailId = 1                          };     CreditCard creditCard = new CreditCard()      {          BillingDetailId = 2,         CardType = 1     };                      context.BillingDetails.Add(bankAccount);     context.BillingDetails.Add(creditCard);     context.SaveChanges(); } Polymorphic Associations with TPC is Problematic The main problem with this approach is that it doesn’t support Polymorphic Associations very well. 
After all, in the database, associations are represented as foreign key relationships and in TPC, the subclasses are all mapped to different tables so a polymorphic association to their base class (abstract BillingDetail in our example) cannot be represented as a simple foreign key relationship. For example, consider the the domain model we introduced here where User has a polymorphic association with BillingDetail. This would be problematic in our TPC Schema, because if User has a many-to-one relationship with BillingDetail, the Users table would need a single foreign key column, which would have to refer both concrete subclass tables. This isn’t possible with regular foreign key constraints. Schema Evolution with TPC is Complex A further conceptual problem with this mapping strategy is that several different columns, of different tables, share exactly the same semantics. This makes schema evolution more complex. For example, a change to a base class property results in changes to multiple columns. It also makes it much more difficult to implement database integrity constraints that apply to all subclasses. Generated SQLLet's examine SQL output for polymorphic queries in TPC mapping. For example, consider this polymorphic query for all BillingDetails and the resulting SQL statements that being executed in the database: var query = from b in context.BillingDetails select b; Just like the SQL query generated by TPT mapping, the CASE statements that you see in the beginning of the query is merely to ensure columns that are irrelevant for a particular row have NULL values in the returning flattened table. (e.g. BankName for a row that represents a CreditCard type). TPC's SQL Queries are Union Based As you can see in the above screenshot, the first SELECT uses a FROM-clause subquery (which is selected with a red rectangle) to retrieve all instances of BillingDetails from all concrete class tables. The tables are combined with a UNION operator, and a literal (in this case, 0 and 1) is inserted into the intermediate result; (look at the lines highlighted in yellow.) EF reads this to instantiate the correct class given the data from a particular row. A union requires that the queries that are combined, project over the same columns; hence, EF has to pad and fill up nonexistent columns with NULL. This query will really perform well since here we can let the database optimizer find the best execution plan to combine rows from several tables. There is also no Joins involved so it has a better performance than the SQL queries generated by TPT where a Join is required between the base and subclasses tables. Choosing Strategy GuidelinesBefore we get into this discussion, I want to emphasize that there is no one single "best strategy fits all scenarios" exists. As you saw, each of the approaches have their own advantages and drawbacks. Here are some rules of thumb to identify the best strategy in a particular scenario: If you don’t require polymorphic associations or queries, lean toward TPC—in other words, if you never or rarely query for BillingDetails and you have no class that has an association to BillingDetail base class. I recommend TPC (only) for the top level of your class hierarchy, where polymorphism isn’t usually required, and when modification of the base class in the future is unlikely. If you do require polymorphic associations or queries, and subclasses declare relatively few properties (particularly if the main difference between subclasses is in their behavior), lean toward TPH. 
Your goal is to minimize the number of nullable columns and to convince yourself (and your DBA) that a denormalized schema won't create problems in the long run.
If you do require polymorphic associations or queries, and subclasses declare many properties (subclasses differ mainly by the data they hold), lean toward TPT. Or, depending on the width and depth of your inheritance hierarchy and the possible cost of joins versus unions, use TPC.
By default, choose TPH only for simple problems. For more complex cases (or when you're overruled by a data modeler insisting on the importance of nullability constraints and normalization), you should consider the TPT strategy. But at that point, ask yourself whether it may not be better to remodel inheritance as delegation in the object model (delegation is a way of making composition as powerful for reuse as inheritance). Complex inheritance is often best avoided for all sorts of reasons unrelated to persistence or ORM. EF acts as a buffer between the domain and relational models, but that doesn't mean you can ignore persistence concerns when designing your classes.
Summary
In this series, we focused on one of the main structural aspects of the object/relational paradigm mismatch, inheritance, and discussed how EF solves this problem as an ORM solution. We learned about the three well-known inheritance mapping strategies and their implementations in EF Code First. Hopefully it gives you better insight into mapping inheritance hierarchies, as well as into choosing the best strategy for your particular scenario. Happy New Year and Happy Code-Firsting!
References
ADO.NET team blog
Java Persistence with Hibernate book
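As a footnote to the polymorphic-association discussion above, here is a minimal sketch of the kind of model that causes trouble under TPC. The User class shown below is only illustrative (its members are assumptions, not code from the series); the important part is that a single navigation property targets the abstract base class, which the database would have to represent as one foreign key pointing at two different tables:
public class User
{
    public int UserId { get; set; }
    public string Name { get; set; }

    // Polymorphic association: at runtime this may reference a BankAccount
    // or a CreditCard, but TPC gives each subclass its own table, so a single
    // foreign key column on the Users table cannot reference both.
    public virtual BillingDetail DefaultBillingDetail { get; set; }
}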

    Read the article

  • Coding With Windows Azure IaaS

    - by Hisham El-bereky
This post focuses on some advanced programming topics for IaaS (Infrastructure as a Service), which Windows Azure provides as virtual machines together with their related resources such as virtual disks and virtual networks. Windows Azure started as a PaaS cloud platform, but because some business cases require full control over the virtual machine, Windows Azure has moved toward providing IaaS as well. Sometimes you will need to manage your cloud IaaS through code, for example for these reasons:
Working on a hyper-cloud system by providing a bursting connector to Windows Azure virtual machines
Providing a multi-tenant system which consumes Windows Azure virtual machines
Automating a process on your on-premises or cloud service which needs to utilize some virtual resources
We are going to implement the following basic operations using C# code:
List images
Create virtual machine
List virtual machines
Restart virtual machine
Delete virtual machine
Before implementing these operations we need to prepare the client side and the Windows Azure subscription to communicate correctly by providing a management certificate (an X.509 v3 certificate), which permits client access to resources in your Windows Azure subscription; requests made using the Windows Azure Service Management REST API require authentication against a certificate that you provide to Windows Azure. More info about setting up the management certificate is located here. To install the .cer on another client machine you will need the .pfx file; if it does not exist, you can create it by exporting the .cer as a .pfx.
Note: you will need to install .NET 4.5 on your machine to try the code.
So let's start. This post builds on the post by Michael Washam, "Advanced Windows Azure IaaS – Demo Code"; here I clarify some points and add operations that are not in Michael's demo. The basic C# class used here as the client to the Azure REST API for the IaaS service is HttpClient (it provides a base class for sending HTTP requests and receiving HTTP responses from a resource identified by a URI); this object must be initialized with the required data: the certificate, the headers, and the content if required. 
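The snippets throughout this post also rely on a few supporting members that are not shown explicitly: a _subscriptionid field, the ns and ns1 XML namespaces, and an ApplyNamespace helper. The following is a minimal sketch of how they might be defined; the member names are inferred from their usage in the code below, the namespace URIs are the standard Service Management ones, and the exact implementation of ApplyNamespace is an assumption:
// Supporting members assumed by the snippets in this post (inferred from usage, not from the original source).
string _subscriptionid = ConfigurationManager.AppSettings["SubscriptionId"];

// XML namespaces used in Service Management request and response bodies.
static readonly XNamespace ns = "http://schemas.microsoft.com/windowsazure";
static readonly XNamespace ns1 = "http://www.w3.org/2001/XMLSchema-instance";

// Recursively applies the management API namespace to an element and all of
// its descendants, so the request body is serialized the way the API expects.
static void ApplyNamespace(XElement element, XNamespace newNamespace)
{
    element.Name = newNamespace + element.Name.LocalName;
    foreach (XElement child in element.Elements())
    {
        ApplyNamespace(child, newNamespace);
    }
}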
Also, note that the code uses asynchronous programming for the calls to Azure, which improves performance and makes it possible to compose operations that depend on more than one sub-call. The following code shows how to get the certificate and initialize the HttpClient object with the required data, such as headers:
HttpClient GetHttpClient()
{
    X509Store certificateStore = null;
    X509Certificate2 certificate = null;
    try
    {
        certificateStore = new X509Store(StoreName.My, StoreLocation.CurrentUser);
        certificateStore.Open(OpenFlags.ReadOnly);
        string thumbprint = ConfigurationManager.AppSettings["CertThumbprint"];
        var certificates = certificateStore.Certificates.Find(X509FindType.FindByThumbprint, thumbprint, false);
        if (certificates.Count > 0)
        {
            certificate = certificates[0];
        }
    }
    finally
    {
        if (certificateStore != null)
            certificateStore.Close();
    }

    WebRequestHandler handler = new WebRequestHandler();
    if (certificate != null)
    {
        handler.ClientCertificates.Add(certificate);
        HttpClient httpClient = new HttpClient(handler);
        // Set the required headers, such as x-ms-version
        httpClient.DefaultRequestHeaders.Add("x-ms-version", "2012-03-01");
        httpClient.DefaultRequestHeaders.Accept.Add(new MediaTypeWithQualityHeaderValue("application/xml"));
        return httpClient;
    }
    return null;
}
We will use the httpClient object returned by this method for every call to the Windows Azure REST API IaaS service. For each request operation we need to define:
Request URI
HTTP Method
Headers
Content body
(1) List images
The List OS Images operation retrieves a list of the OS images from the image repository.
Request URI
https://management.core.windows.net/<subscription-id>/services/images (replace <subscription-id> with your Windows Azure subscription ID)
HTTP Method
GET (HTTP 1.1)
Headers
x-ms-version: 2012-03-01
Body
None.
C# Code
List<String> imageList = new List<String>();
// replace _subscriptionid with your Windows Azure subscription ID
String uri = String.Format("https://management.core.windows.net/{0}/services/images", _subscriptionid);

HttpClient http = GetHttpClient();
Stream responseStream = await http.GetStreamAsync(uri);

if (responseStream != null)
{
    XDocument xml = XDocument.Load(responseStream);
    var images = xml.Root.Descendants(ns + "OSImage").Where(i => i.Element(ns + "OS").Value == "Windows");
    foreach (var image in images)
    {
        string img = image.Element(ns + "Name").Value;
        imageList.Add(img);
    }
}
More information about this REST call (request/response) is located here: http://msdn.microsoft.com/en-us/library/windowsazure/jj157191.aspx
(2) Create Virtual Machine
Creating a virtual machine requires a hosted service and a deployment to exist first, so if they do not exist yet the VM is created in three steps:
Create hosted service, a container for service deployments in Windows Azure. A subscription may have zero or more hosted services.
Create deployment, a service that is running on Windows Azure. A deployment may be running in either the staging or production deployment environment. It may be managed either by referencing its deployment ID, or by referencing the deployment environment in which it's running.
Create virtual machine, using the information from the previous two steps.
I suggest using the same name for the service, the deployment and the virtual machine, to make the virtual machines easier to manage.
Note: the hosted service needs a name that is unique within Windows Azure. 
This name is the DNS prefix name and can be used to access the hosted service. For example: http://ServiceName.cloudapp.net// 2.1 Create service Request URI https://management.core.windows.net/<subscription-id>/services/hostedservices HTTP Method POST (HTTP 1.1) Header x-ms-version: 2012-03-01 Content-Type: application/xml Body More details about request body (and other information) are located here http://msdn.microsoft.com/en-us/library/windowsazure/gg441304.aspx C# code The following method show how to create hosted service async public Task<String> NewAzureCloudService(String ServiceName, String Location, String AffinityGroup, String subscriptionid) { String requestID = String.Empty;   String uri = String.Format("https://management.core.windows.net/{0}/services/hostedservices", subscriptionid); HttpClient http = GetHttpClient();   System.Text.ASCIIEncoding ae = new System.Text.ASCIIEncoding(); byte[] svcNameBytes = ae.GetBytes(ServiceName);   String locationEl = String.Empty; String locationVal = String.Empty;   if (String.IsNullOrEmpty(Location) == false) { locationEl = "Location"; locationVal = Location; } else { locationEl = "AffinityGroup"; locationVal = AffinityGroup; }   XElement srcTree = new XElement("CreateHostedService", new XAttribute(XNamespace.Xmlns + "i", ns1), new XElement("ServiceName", ServiceName), new XElement("Label", Convert.ToBase64String(svcNameBytes)), new XElement(locationEl, locationVal) ); ApplyNamespace(srcTree, ns);   XDocument CSXML = new XDocument(srcTree); HttpContent content = new StringContent(CSXML.ToString()); content.Headers.ContentType = new System.Net.Http.Headers.MediaTypeHeaderValue("application/xml");   HttpResponseMessage responseMsg = await http.PostAsync(uri, content); if (responseMsg != null) { requestID = responseMsg.Headers.GetValues("x-ms-request-id").FirstOrDefault(); } return requestID; } 2.2 Create Deployment Request URI https://management.core.windows.net/<subscription-id>/services/hostedservices/<service-name>/deploymentslots/<deployment-slot-name> <deployment-slot-name> with staging or production, depending on where you wish to deploy your service package <service-name> provided as input from the previous step HTTP Method POST (HTTP 1.1) Header x-ms-version: 2012-03-01 Content-Type: application/xml Body More details about request body (and other information) are located here http://msdn.microsoft.com/en-us/library/windowsazure/ee460813.aspx C# code The following method show how to create hosted service deployment async public Task<String> NewAzureVMDeployment(String ServiceName, String VMName, String VNETName, XDocument VMXML, XDocument DNSXML) { String requestID = String.Empty;     String uri = String.Format("https://management.core.windows.net/{0}/services/hostedservices/{1}/deployments", _subscriptionid, ServiceName); HttpClient http = GetHttpClient(); XElement srcTree = new XElement("Deployment", new XAttribute(XNamespace.Xmlns + "i", ns1), new XElement("Name", ServiceName), new XElement("DeploymentSlot", "Production"), new XElement("Label", ServiceName), new XElement("RoleList", null) );   if (String.IsNullOrEmpty(VNETName) == false) { srcTree.Add(new XElement("VirtualNetworkName", VNETName)); }   if(DNSXML != null) { srcTree.Add(new XElement("DNS", new XElement("DNSServers", DNSXML))); }   XDocument deploymentXML = new XDocument(srcTree); ApplyNamespace(srcTree, ns);   deploymentXML.Descendants(ns + "RoleList").FirstOrDefault().Add(VMXML.Root);     String fixedXML = deploymentXML.ToString().Replace(" xmlns=\"\"", ""); 
HttpContent content = new StringContent(fixedXML); content.Headers.ContentType = new System.Net.Http.Headers.MediaTypeHeaderValue("application/xml");   HttpResponseMessage responseMsg = await http.PostAsync(uri, content); if (responseMsg != null) { requestID = responseMsg.Headers.GetValues("x-ms-request-id").FirstOrDefault(); }   return requestID; } 2.3 Create Virtual Machine Request URI https://management.core.windows.net/<subscription-id>/services/hostedservices/<cloudservice-name>/deployments/<deployment-name>/roles <cloudservice-name> and <deployment-name> are provided as input from the previous steps Http Method POST (HTTP 1.1) Header x-ms-version: 2012-03-01 Content-Type: application/xml Body More details about request body (and other information) located here http://msdn.microsoft.com/en-us/library/windowsazure/jj157186.aspx C# code async public Task<String> NewAzureVM(String ServiceName, String VMName, XDocument VMXML) { String requestID = String.Empty;   String deployment = await GetAzureDeploymentName(ServiceName);   String uri = String.Format("https://management.core.windows.net/{0}/services/hostedservices/{1}/deployments/{2}/roles", _subscriptionid, ServiceName, deployment);   HttpClient http = GetHttpClient(); HttpContent content = new StringContent(VMXML.ToString()); content.Headers.ContentType = new System.Net.Http.Headers.MediaTypeHeaderValue("application/xml"); HttpResponseMessage responseMsg = await http.PostAsync(uri, content); if (responseMsg != null) { requestID = responseMsg.Headers.GetValues("x-ms-request-id").FirstOrDefault(); } return requestID; } (3) List Virtual Machines To list virtual machine hosted on windows azure subscription we have to loop over all hosted services to get its hosted virtual machines To do that we need to execute the following operations: listing hosted services listing hosted service Virtual machine 3.1 Listing Hosted Services Request URI https://management.core.windows.net/<subscription-id>/services/hostedservices HTTP Method GET (HTTP 1.1) Headers x-ms-version: 2012-03-01 Body None. More info about this HTTP request located here on this link http://msdn.microsoft.com/en-us/library/windowsazure/ee460781.aspx C# Code async private Task<List<XDocument>> GetAzureServices(String subscriptionid) { String uri = String.Format("https://management.core.windows.net/{0}/services/hostedservices ", subscriptionid); List<XDocument> services = new List<XDocument>();   HttpClient http = GetHttpClient();   Stream responseStream = await http.GetStreamAsync(uri);   if (responseStream != null) { XDocument xml = XDocument.Load(responseStream); var svcs = xml.Root.Descendants(ns + "HostedService"); foreach (XElement r in svcs) { XDocument vm = new XDocument(r); services.Add(vm); } }   return services; }  3.2 Listing Hosted Service Virtual Machines Request URI https://management.core.windows.net/<subscription-id>/services/hostedservices/<service-name>/deployments/<deployment-name>/roles/<role-name> HTTP Method GET (HTTP 1.1) Headers x-ms-version: 2012-03-01 Body None. 
More info about this HTTP request here http://msdn.microsoft.com/en-us/library/windowsazure/jj157193.aspx C# Code async public Task<XDocument> GetAzureVM(String ServiceName, String VMName, String subscriptionid) { String deployment = await GetAzureDeploymentName(ServiceName); XDocument vmXML = new XDocument();   String uri = String.Format("https://management.core.windows.net/{0}/services/hostedservices/{1}/deployments/{2}/roles/{3}", subscriptionid, ServiceName, deployment, VMName);   HttpClient http = GetHttpClient(); Stream responseStream = await http.GetStreamAsync(uri); if (responseStream != null) { vmXML = XDocument.Load(responseStream); }   return vmXML; }  So the final method which can be used to list all virtual machines is: async public Task<XDocument> GetAzureVMs() { List<XDocument> services = await GetAzureServices(); XDocument vms = new XDocument(); vms.Add(new XElement("VirtualMachines")); ApplyNamespace(vms.Root, ns); foreach (var svc in services) { string ServiceName = svc.Root.Element(ns + "ServiceName").Value;   String uri = String.Format("https://management.core.windows.net/{0}/services/hostedservices/{1}/deploymentslots/{2}", _subscriptionid, ServiceName, "Production");   try { HttpClient http = GetHttpClient(); Stream responseStream = await http.GetStreamAsync(uri);   if (responseStream != null) { XDocument xml = XDocument.Load(responseStream); var roles = xml.Root.Descendants(ns + "RoleInstance"); foreach (XElement r in roles) { XElement svcnameel = new XElement("ServiceName", ServiceName); ApplyNamespace(svcnameel, ns); r.Add(svcnameel); // not part of the roleinstance vms.Root.Add(r); } } } catch (HttpRequestException http) { // no vms with cloud service } } return vms; }  (4) Restart Virtual Machine Request URI https://management.core.windows.net/<subscription-id>/services/hostedservices/<service-name>/deployments/<deployment-name>/roles/<role-name>/Operations HTTP Method POST (HTTP 1.1) Headers x-ms-version: 2012-03-01 Content-Type: application/xml Body <RestartRoleOperation xmlns:i="http://www.w3.org/2001/XMLSchema-instance"> <OperationType>RestartRoleOperation</OperationType> </RestartRoleOperation>  More details about this http request here http://msdn.microsoft.com/en-us/library/windowsazure/jj157197.aspx  C# Code async public Task<String> RebootVM(String ServiceName, String RoleName) { String requestID = String.Empty;   String deployment = await GetAzureDeploymentName(ServiceName); String uri = String.Format("https://management.core.windows.net/{0}/services/hostedservices/{1}/deployments/{2}/roleInstances/{3}/Operations", _subscriptionid, ServiceName, deployment, RoleName);   HttpClient http = GetHttpClient();   XElement srcTree = new XElement("RestartRoleOperation", new XAttribute(XNamespace.Xmlns + "i", ns1), new XElement("OperationType", "RestartRoleOperation") ); ApplyNamespace(srcTree, ns);   XDocument CSXML = new XDocument(srcTree); HttpContent content = new StringContent(CSXML.ToString()); content.Headers.ContentType = new System.Net.Http.Headers.MediaTypeHeaderValue("application/xml");   HttpResponseMessage responseMsg = await http.PostAsync(uri, content); if (responseMsg != null) { requestID = responseMsg.Headers.GetValues("x-ms-request-id").FirstOrDefault(); } return requestID; }  (5) Delete Virtual Machine You can delete your hosted virtual machine by deleting its deployment, but I prefer to delete its hosted service also, so you can easily manage your virtual machines from code 5.1 Delete Deployment Request URI https://management.core.windows.net/< 
subscription-id >/services/hostedservices/< service-name >/deployments/<Deployment-Name> HTTP Method DELETE (HTTP 1.1) Headers x-ms-version: 2012-03-01 Body None. C# code async public Task<HttpResponseMessage> DeleteDeployment( string deploymentName) { string xml = string.Empty; String uri = String.Format("https://management.core.windows.net/{0}/services/hostedservices/{1}/deployments/{2}", _subscriptionid, deploymentName, deploymentName); HttpClient http = GetHttpClient(); HttpResponseMessage responseMessage = await http.DeleteAsync(uri); return responseMessage; }  5.2 Delete Hosted Service Request URI https://management.core.windows.net/<subscription-id>/services/hostedservices/<service-name> HTTP Method DELETE (HTTP 1.1) Headers x-ms-version: 2012-03-01 Body None. C# code async public Task<HttpResponseMessage> DeleteService(string serviceName) { string xml = string.Empty; String uri = String.Format("https://management.core.windows.net/{0}/services/hostedservices/{1}", _subscriptionid, serviceName); Log.Info("Windows Azure URI (http DELETE verb): " + uri, typeof(VMManager)); HttpClient http = GetHttpClient(); HttpResponseMessage responseMessage = await http.DeleteAsync(uri); return responseMessage; }  And the following is the method which can used to delete both of deployment and service async public Task<string> DeleteVM(string vmName) { string responseString = string.Empty;   // as a convention here in this post, a unified name used for service, deployment and VM instance to make it easy to manage VMs HttpClient http = GetHttpClient(); HttpResponseMessage responseMessage = await DeleteDeployment(vmName);   if (responseMessage != null) {   string requestID = responseMessage.Headers.GetValues("x-ms-request-id").FirstOrDefault(); OperationResult result = await PollGetOperationStatus(requestID, 5, 120); if (result.Status == OperationStatus.Succeeded) { responseString = result.Message; HttpResponseMessage sResponseMessage = await DeleteService(vmName); if (sResponseMessage != null) { OperationResult sResult = await PollGetOperationStatus(requestID, 5, 120); responseString += sResult.Message; } } else { responseString = result.Message; } } return responseString; }  Note: This article is subject to be updated Hisham  References Advanced Windows Azure IaaS – Demo Code Windows Azure Service Management REST API Reference Introduction to the Azure Platform Representational state transfer Asynchronous Programming with Async and Await (C# and Visual Basic) HttpClient Class
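One last piece: the DeleteVM method above calls PollGetOperationStatus and uses an OperationResult type, neither of which is shown in the post. Below is a minimal sketch of what they might look like, built on the Service Management "Get Operation Status" call; the shape of OperationResult and the polling logic are assumptions, only the REST endpoint itself comes from the API documentation:
// Assumed types -- referenced by DeleteVM above but not shown in this post.
public enum OperationStatus { InProgress, Succeeded, Failed }

public class OperationResult
{
    public OperationStatus Status { get; set; }
    public string Message { get; set; }
}

// Polls the "Get Operation Status" operation until the request either
// completes or the timeout elapses.
async Task<OperationResult> PollGetOperationStatus(string requestId, int pollIntervalSeconds, int timeoutSeconds)
{
    var result = new OperationResult { Status = OperationStatus.InProgress };
    String uri = String.Format("https://management.core.windows.net/{0}/operations/{1}", _subscriptionid, requestId);
    HttpClient http = GetHttpClient();
    DateTime deadline = DateTime.UtcNow.AddSeconds(timeoutSeconds);

    while (DateTime.UtcNow < deadline)
    {
        Stream responseStream = await http.GetStreamAsync(uri);
        XDocument xml = XDocument.Load(responseStream);
        string status = xml.Root.Element(ns + "Status").Value;

        if (status != "InProgress")
        {
            result.Status = (status == "Succeeded") ? OperationStatus.Succeeded : OperationStatus.Failed;
            result.Message = String.Format("Operation {0} finished with status {1}", requestId, status);
            return result;
        }
        await Task.Delay(TimeSpan.FromSeconds(pollIntervalSeconds));
    }

    result.Message = String.Format("Timed out waiting for operation {0}", requestId);
    return result;
}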

    Read the article

  • Windows Azure: General Availability of Web Sites + Mobile Services, New AutoScale + Alerts Support, No Credit Card Needed for MSDN

    - by ScottGu
    This morning we released a major set of updates to Windows Azure.  These updates included: Web Sites: General Availability Release of Windows Azure Web Sites with SLA Mobile Services: General Availability Release of Windows Azure Mobile Services with SLA Auto-Scale: New automatic scaling support for Web Sites, Cloud Services and Virtual Machines Alerts/Notifications: New email alerting support for all Compute Services (Web Sites, Mobile Services, Cloud Services, and Virtual Machines) MSDN: No more credit card requirement for sign-up All of these improvements are now available to use immediately (note: some are still in preview).  Below are more details about them. Web Sites: General Availability Release of Windows Azure Web Sites I’m incredibly excited to announce the General Availability release of Windows Azure Web Sites. The Windows Azure Web Sites service is perfect for hosting a web presence, building customer engagement solutions, and delivering business web apps.  Today’s General Availability release means we are taking off the “preview” tag from the Free and Standard (formerly called reserved) tiers of Windows Azure Web Sites.  This means we are providing: A 99.9% monthly SLA (Service Level Agreement) for the Standard tier Microsoft Support available on a 24x7 basis (with plans that range from developer plans to enterprise Premier support) The Free tier runs in a shared compute environment and supports up to 10 web sites. While the Free tier does not come with an SLA, it works great for rapid development and testing and enables you to quickly spike out ideas at no cost. The Standard tier, which was called “Reserved” during the preview, runs using dedicated per-customer VM instances for great performance, isolation and scalability, and enables you to host up to 500 different Web sites within them.  You can easily scale your Standard instances on-demand using the Windows Azure Management Portal.  You can adjust VM instance sizes from a Small instance size (1 core, 1.75GB of RAM), up to a Medium instance size (2 core, 3.5GB of RAM), or Large instance (4 cores and 7 GB RAM).  You can choose to run between 1 and 10 Standard instances, enabling you to easily scale up your web backend to 40 cores of CPU and 70GB of RAM: Today’s release also includes general availability support for custom domain SSL certificate bindings for web sites running using the Standard tier. Customers will be able to utilize certificates they purchase for their custom domains and use either SNI or IP based SSL encryption. SNI encryption is available for all modern browsers and does not require an IP address.  SSL certificates can be used for individual sites or wild-card mapped across multiple sites (we charge extra for the use of a SSL cert – but the fee is per-cert and not per site which means you pay once for it regardless of how many sites you use it with).  Today’s release also includes the following new features: Auto-Scale support Today’s Windows Azure release adds preview support for Auto-Scaling web sites.  This enables you to setup automatic scale rules based on the activity of your instances – allowing you to automatically scale down (and save money) when they are below a CPU threshold you define, and automatically scale up quickly when traffic increases.  See below for more details. 64-bit and 32-bit mode support You can now choose to run your standard tier instances in either 32-bit or 64-bit mode (previously they only ran in 32-bit mode).  
This enables you to address even more memory within individual web applications. Memory dumps Memory dumps can be very useful for diagnosing issues and debugging apps. Using a REST API, you can now get a memory dump of your sites, which you can then use for investigating issues in Visual Studio Debugger, WinDbg, and other tools. Scaling Sites Independently Prior to today’s release, all sites scaled up/down together whenever you scaled any site in a sub-region. So you may have had to keep your proof-of-concept or testing sites in a separate sub-region if you wanted to keep them in the Free tier. This will no longer be necessary.  Windows Azure Web Sites can now mix different tier levels in the same geographic sub-region. This allows you, for example, to selectively move some of your sites in the West US sub-region up to Standard tier when they require the features, scalability, and SLA of the Standard tier. Full pricing details on Windows Azure Web Sites can be found here.  Note that the “Shared Tier” of Windows Azure Web Sites remains in preview mode (and continues to have discounted preview pricing).  Mobile Services: General Availability Release of Windows Azure Mobile Services I’m incredibly excited to announce the General Availability release of Windows Azure Mobile Services.  Mobile Services is perfect for building scalable cloud back-ends for Windows 8.x, Windows Phone, Apple iOS, Android, and HTML/JavaScript applications.  Customers We’ve seen tremendous adoption of Windows Azure Mobile Services since we first previewed it last September, and more than 20,000 customers are now running mobile back-ends in production using it.  These customers range from startups like Yatterbox, to university students using Mobile Services to complete apps like Sly Fox in their spare time, to media giants like Verdens Gang finding new ways to deliver content, and telcos like TalkTalk Business delivering the up-to-the-minute information their customers require.  In today’s Build keynote, we demonstrated how TalkTalk Business is using Windows Azure Mobile Services to deliver service, outage and billing information to its customers, wherever they might be. Partners When we unveiled the source control and Custom API features I blogged about two weeks ago, we enabled a range of new scenarios, one of which is a more flexible way to work with third party services.  The following blogs, samples and tutorials from our partners cover great ways you can extend Mobile Services to help you build rich modern apps: New Relic allows developers to monitor and manage the end-to-end performance of iOS and Android applications connected to Mobile Services. SendGrid eliminates the complexity of sending email from Mobile Services, saving time and money, while providing reliable delivery to the inbox. Twilio provides a telephony infrastructure web service in the cloud that you can use with Mobile Services to integrate phone calls, text messages and IP voice communications into your mobile apps. Xamarin provides a Mobile Services add on to make it easy building cross-platform connected mobile aps. Pusher allows quickly and securely add scalable real-time messaging functionality to Mobile Services-based web and mobile apps. Visual Studio 2013 and Windows 8.1 This week during //build/ keynote, we demonstrated how Visual Studio 2013, Mobile Services and Windows 8.1 make building connected apps easier than ever. 
Developers building Windows 8 applications in Visual Studio can now connect them to Windows Azure Mobile Services by simply right clicking then choosing Add Connected Service. You can either create a new Mobile Service or choose existing Mobile Service in the Add Connected Service dialog. Once completed, Visual Studio adds a reference to Mobile Services SDK to your project and generates a Mobile Services client initialization snippet automatically. Add Push Notifications Push Notifications and Live Tiles are a key to building engaging experiences. Visual Studio 2013 and Mobile Services make it super easy to add push notifications to your Windows 8.1 app, by clicking Add a Push Notification item: The Add Push Notification wizard will then guide you through the registration with the Windows Store as well as connecting your app to a new or existing mobile service. Upon completion of the wizard, Visual Studio will configure your mobile service with the WNS credentials, as well as add sample logic to your client project and your mobile service that demonstrates how to send push notifications to your app. Server Explorer Integration In Visual Studio 2013 you can also now view your Mobile Services in the the Server Explorer. You can add tables, edit, and save server side scripts without ever leaving Visual Studio, as shown on the image below: Pricing With today’s general availability release we are announcing that we will be offering Mobile Services in three tiers – Free, Standard, and Premium.  Each tier is metered using a simple pricing model based on the # of API calls (bandwidth is included at no extra charge), and the Standard and Premium tiers are backed by 99.9% monthly SLAs.  You can elastically scale up or down the number of instances you have of each tier to increase the # of API requests your service can support – allowing you to efficiently scale as your business grows. The following table summarizes the new pricing model (full pricing details here):   You can find the full details of the new pricing model here. Build Conference Talks The //BUILD/ conference will be packed with sessions covering every aspect of developing connected applications with Mobile Services. The best part is that, even if you can’t be with us in San Francisco, every session is being streamed live. Be sure not to miss these talks: Mobile Services – Soup to Nuts — Josh Twist Building Cross-Platform Apps with Windows Azure Mobile Services — Chris Risner Connected Windows Phone Apps made Easy with Mobile Services — Yavor Georgiev Build Connected Windows 8.1 Apps with Mobile Services — Nick Harris Who’s that user? Identity in Mobile Apps — Dinesh Kulkarni Building REST Services with JavaScript — Nathan Totten Going Live and Beyond with Windows Azure Mobile Services — Kirill Gavrylyuk , Paul Batum Protips for Windows Azure Mobile Services — Chris Risner AutoScale: Dynamically scale up/down your app based on real-world usage One of the key benefits of Windows Azure is that you can dynamically scale your application in response to changing demand. In the past, though, you have had to either manually change the scale of your application, or use additional tooling (such as WASABi or MetricsHub) to automatically scale your application. Today, we’re announcing that AutoScale will be built-into Windows Azure directly.  With today’s release it is now enabled for Cloud Services, Virtual Machines and Web Sites (Mobile Services support will come soon). 
Auto-scale enables you to configure Windows Azure to automatically scale your application dynamically on your behalf (without any manual intervention) so you can achieve the ideal performance and cost balance. Once configured it will regularly adjust the number of instances running in response to the load in your application. Currently, we support two different load metrics: CPU percentage Storage queue depth (Cloud Services and Virtual Machines only) We’ll enable automatic scaling on even more scale metrics in future updates. When to use Auto-Scale The following are good criteria for services/apps that will benefit from the use of auto-scale: The service/app can scale horizontally (e.g. it can be duplicated to multiple instances) The service/app load changes over time If your app meets these criteria, then you should look to leverage auto-scale. How to Enable Auto-Scale To enable auto-scale, simply navigate to the Scale tab in the Windows Azure Management Portal for the app/service you wish to enable.  Within the scale tab turn the Auto-Scale setting on to either CPU or Queue (for Cloud Services and VMs) to enable Auto-Scale.  Then change the instance count and target CPU settings to configure the Auto-Scale ranges you want to maintain. The image below demonstrates how to enable Auto-Scale on a Windows Azure Web-Site.  I’ve configured the web-site so that it will run using between 1 and 5 VM instances.  The exact # used will depend on the aggregate CPU of the VMs using the 40-70% range I’ve configured below.  If the aggregate CPU goes above 70%, then Windows Azure will automatically add new VMs to the pool (up to the maximum of 5 instances I’ve configured it to use).  If the aggregate CPU drops below 40% then Windows Azure will automatically start shutting down VMs to save me money: Once you’ve turned auto-scale on, you can return to the Scale tab at any point and select Off to manually set the number of instances. Using the Auto-Scale Preview With today’s update you can now, in just a few minutes, have Windows Azure automatically adjust the number of instances you have running  in your apps to keep your service performant at an even better cost. Auto-scale is being released today as a preview feature, and will be free until General Availability. During preview, each subscription is limited to 10 separate auto-scale rules across all of the resources they have (Web sites, Cloud services or Virtual Machines). If you hit the 10 limit, you can disable auto-scale for any resource to enable it for another. Alerts and Notifications Starting today we are now providing the ability to configure threshold based alerts on monitoring metrics. This feature is available for compute services (cloud services, VM, websites and mobiles services). Alerts provide you the ability to get proactively notified of active or impending issues within your application.  You can define alert rules for: Virtual machine monitoring metrics that are collected from the host operating system (CPU percentage, network in/out, disk read bytes/sec and disk write bytes/sec) and on monitoring metrics from monitoring web endpoint urls (response time and uptime) that you have configured. Cloud service monitoring metrics that are collected from the host operating system (same as VM), monitoring metrics from the guest VM (from performance counters within the VM) and on monitoring metrics from monitoring web endpoint urls (response time and uptime) that you have configured. 
For Web Sites and Mobile Services, alerting rules can be configured on monitoring metrics from monitoring endpoint urls (response time and uptime) that you have configured. Creating Alert Rules You can add an alert rule for a monitoring metric by navigating to the Setting -> Alerts tab in the Windows Azure Management Portal. Click on the Add Rule button to create an alert rule. Give the alert rule a name and optionally add a description. Then pick the service which you want to define the alert rule on: The next step in the alert creation wizard will then filter the monitoring metrics based on the service you selected:   Once created the rule will show up in your alerts list within the settings tab: The rule above is defined as “not activated” since it hasn’t tripped over the CPU threshold we set.  If the CPU on the above machine goes over the limit, though, I’ll get an email notifying me from an Windows Azure Alerts email address ([email protected]). And when I log into the portal and revisit the alerts tab I’ll see it highlighted in red.  Clicking it will then enable me to see what is causing it to fail, as well as view the history of when it has happened in the past. Alert Notifications With today’s initial preview you can now easily create alerting rules based on monitoring metrics and get notified on active or impending issues within your application that require attention. During preview, each subscription is limited to 10 alert rules across all of the services that support alert rules. No More Credit Card Requirement for MSDN Subscribers Earlier this month (during TechEd 2013), Windows Azure announced that MSDN users will get Windows Azure Credits every month that they can use for any Windows Azure services they want. You can read details about this in my previous Dev/Test blog post. Today we are making further updates to enable an easier Windows Azure signup for MSDN users. MSDN users will now not be required to provide payment information (e.g. no credit card) during sign-up, so long as they use the service within the included monetary credit for the billing period. For usage beyond the monetary credit, they can enable overages by providing the payment information and remove the spending limit. This enables a super easy, one page sign-up experience for MSDN users.  Simply sign-up for your Windows Azure trial using the same Microsoft ID that you use to manage your MSDN account, then complete the one page sign-up form below and you will be able to spend your free monthly MSDN credits (up to $150 each month) on any Windows Azure resource for dev/test:   This makes it trivially easy for every MDSN customer to start using Windows Azure today.  If you haven’t signed up yet, I definitely recommend checking it out. Summary Today’s release includes a ton of great features that enable you to build even better cloud solutions.  If you don’t already have a Windows Azure account, you can sign-up for a free trial and start using all of the above features today.  Then visit the Windows Azure Developer Center to learn more about how to build apps with it. Hope this helps, Scott P.S. In addition to blogging, I am also now using Twitter for quick updates and to share links. Follow me at: twitter.com/scottgu

    Read the article

< Previous Page | 227 228 229 230 231 232 233 234 235 236 237 238  | Next Page >