Search Results

Search found 26454 results on 1059 pages for 'post parameter'.


  • There is no Key attribute in EF CTP 5

    - by Spence
    According to the blog post Data Annotations in the Entity Framework, there should be an attribute for a column called "Key" which allows you to mark the primary key of an entity. However, I cannot locate it in .NET 3.5 or .NET 4.0. What have I missed? I've included the reference to EntityFramework.dll and I've checked all the attributes under System.ComponentModel.DataAnnotations, but I cannot locate it. I have set my project to the full .NET 4.0 profile (not the Client Profile). Any ideas?
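
    For reference, a minimal sketch (not taken from the article) of what the mapping might look like once the attribute resolves, assuming the .NET 4.0 build of System.ComponentModel.DataAnnotations is referenced, which is where KeyAttribute appears to live; the Blog entity and its properties are made up for illustration:

    using System.ComponentModel.DataAnnotations;

    // Hypothetical entity, used only to show where the [Key] annotation goes.
    public class Blog
    {
        [Key]   // marks BlogId as the primary key column
        public int BlogId { get; set; }

        public string Title { get; set; }
    }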

    Read the article

  • Validating only selected fields using ASP.NET MVC 2 and Data Annotations

    - by thinknow
    I'm using Data Annotations with ASP.NET MVC 2 as demonstrated in this post: http://weblogs.asp.net/scottgu/archive/2010/01/15/asp-net-mvc-2-model-validation.aspx Everything works fine when creating / updating an entity where all required property values are specified in the form and valid. However, what if I only want to update some of the fields? For example, let's say I have an Account entity with 20 fields, but I only want to update Username and Password? ModelState.IsValid validates against all the properties, regardless of whether they are referenced in the submitted form. How can I get it to validate only the fields that are referenced in the form?
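
    One hedged possibility, sketched below rather than prescribed, is to clear the ModelState errors for fields that the form did not post before checking IsValid; the controller, action, and the trimmed-down Account class are assumptions for illustration, and a dedicated view model containing only Username and Password is the more conventional alternative:

    using System.ComponentModel.DataAnnotations;
    using System.Linq;
    using System.Web.Mvc;

    // Trimmed-down stand-in for the question's 20-field Account entity.
    public class Account
    {
        [Required] public string Username { get; set; }
        [Required] public string Password { get; set; }
        [Required] public string Email { get; set; }   // not posted by the small form
    }

    public class AccountController : Controller
    {
        [HttpPost]
        public ActionResult UpdateCredentials(Account model)
        {
            // Only Username and Password are present in the form, so drop the
            // validation errors raised for every other property before the check.
            var postedFields = new[] { "Username", "Password" };
            foreach (var key in ModelState.Keys.Where(k => !postedFields.Contains(k)).ToList())
                ModelState[key].Errors.Clear();

            if (!ModelState.IsValid)
                return View(model);

            // ...persist Username and Password only...
            return RedirectToAction("Index");
        }
    }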

    Read the article

  • Integrating a blog into an ASP.NET website

    - by ScottK
    I have a website that I would like to integrate a blog into. I have seen lots of options available and am not sure which one to jump into. What I want to do is show the most recent post on my home page and have users navigate to www.mysite.com/blog to see all posts. I would also like a sidebar on the homepage with links to the 10 most recent posts. Where should I start? Should I use WordPress or an ASP.NET engine? Should I use RSS feeds to get the information onto the homepage?

    Read the article

  • How to set AeDebug to get a minidump with the name of the process?

    - by JC Martin
    I have to perform some post-mortem debugging on a C++ project. The usual approach is to register the cdb debugger as a minidump generator and to process the collected dumps afterwards. I have searched extensively but could not find a way to produce a minidump named after the process that crashed. Is there a way to set the AeDebug\Debugger registry value so that cdb generates a dump file carrying the name of the crashed process? When I wrap the call to cdb.exe in a batch file, it starts fine but then hangs on symbol searching. I have to press Ctrl+C to stop the batch; the minidump, with the correct process name, is then created... but of course I can't set up something like that in an unattended production environment. Has anybody done this before?
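
    As a rough illustration only, registering the post-mortem debugger comes down to writing the Debugger value under the AeDebug key; the sketch below does that from C#, but the cdb command line, dump path, and flags are assumptions meant to show the shape of the setup rather than a verified recipe, and it does not by itself solve the process-name naming problem:

    using Microsoft.Win32;

    class AeDebugSetup
    {
        static void Main()
        {
            // Windows looks up the post-mortem debugger under this key
            // (writing it requires administrative rights).
            const string aeDebugKey =
                @"HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\Windows NT\CurrentVersion\AeDebug";

            // Illustrative command line: the %ld placeholders are filled in by
            // Windows with the crashing process id and event handle; the dump
            // command and output path are assumptions for this sketch.
            const string debuggerCmd =
                "\"C:\\Debuggers\\cdb.exe\" -p %ld -e %ld -g " +
                "-c \".dump /ma /u C:\\Dumps\\crash.dmp; q\"";

            Registry.SetValue(aeDebugKey, "Debugger", debuggerCmd);
            Registry.SetValue(aeDebugKey, "Auto", "1"); // attach without prompting
        }
    }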

    Read the article

  • Not all parameters get sent in jQuery AJAX call

    - by rksprst
    I have a strange error where my jQuery AJAX request doesn't submit all the parameters.
    $.ajax({
        url: "/ajax/doAssignTask",
        type: 'GET',
        contentType: "application/json",
        data: {
            "just_a_task": just_a_task,
            "fb_post_date": fb_post_date,
            "task_fb_postId": task_fb_postId,
            "sedia_task_guid": sedia_task_guid,
            "itemGuid": itemGuid,
            "itemType": itemType,
            "taskName": taskName,
            "assignedToUserGuid": assignedToUserGuid,
            "taskDescription": taskDescription
        },
        success: function (data, status) {
            // success code
        },
        error: function (xhr, desc, err) {
            // error code
        }
    });
    But using Firebug (and debugging) I can see that only these variables are posted: assignedToUserGuid, itemGuid, itemType, just_a_task, taskDescription, taskName. It's missing fb_post_date, task_fb_postId, and sedia_task_guid. I have no idea what would cause it to post only some items and not others. Does anyone know? The data is sent to an ASP.NET controller that returns a JsonResult (hence the contentType). Any help is appreciated. Thanks!

    Read the article

  • Is Rails' default CSRF protection insecure?

    - by schickb
    By default, the form-post CSRF protection in Rails creates an authenticity token for a user that only changes when the user's session changes. One of our customers did a security audit of our site and flagged that as an issue. The auditor's statement was that if we also had an XSS vulnerability, an attacker could grab another user's authenticity token and make use of it for CSRF attacks until that user's session expired. But it seems to me that if we had an XSS vulnerability like that, an attacker could just as easily grab another user's session cookie and log in as that user directly, or even just make calls to our REST API as the user being attacked. No secondary CSRF attack needed. Have I missed something? Is there a real problem with the default CSRF protection in Rails?

    Read the article

  • VS2010 method comment block pretty formatting

    - by camelCase
    I have just installed the latest RC of VS2010, which for me represents a shift from VS2008. A new comment-formatting feature that I was looking forward to appears to be missing. About six months ago I read a Scott Gu blog post that mentioned a new VS2010 feature that would format ///-style method comment blocks into more readable formatted regions inline with other code. The blog post did not provide a screenshot, but I was expecting the VS2010 editor to remove the XML tags from the /// method comment block and render just the essential text of the comment. Was this feature pulled during the beta, or is there an option switch lurking somewhere?

    Read the article

  • Silverlight and Active Directory

    - by Refracted Paladin
    I am planning to familiarize (read: teach) myself with Silverlight by building an in-house app for managing our employees. Obviously, I would need this to interact with Active Directory on some level. What are my options? Has anyone tried this before? I am currently planning to explore using services (WCF?) to handle the AD interaction. Thoughts? There is also this SO post on using PowerShell to interact with AD. Maybe that is a possibility? Thanks.
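
    A minimal sketch of the WCF idea, assuming the service runs on the server where System.DirectoryServices.AccountManagement is available (Silverlight itself cannot query AD directly) and that looking up a display name is representative; the contract and names are invented for illustration:

    using System.DirectoryServices.AccountManagement;
    using System.ServiceModel;

    [ServiceContract]
    public interface IEmployeeDirectory
    {
        [OperationContract]
        string GetDisplayName(string accountName);
    }

    public class EmployeeDirectory : IEmployeeDirectory
    {
        public string GetDisplayName(string accountName)
        {
            // Query the current domain for the given account; the Silverlight
            // client calls this WCF service rather than talking to AD itself.
            using (var context = new PrincipalContext(ContextType.Domain))
            using (var user = UserPrincipal.FindByIdentity(context, accountName))
            {
                return user != null ? user.DisplayName : null;
            }
        }
    }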

    Read the article

  • Stop the detail grid from loading at page load

    - by pirzada
    How can I stop the detail grid from loading at page load? I have a button in the master grid that loads the detail grid. Below is the code for the detail grid.
    jQuery().ready(function () {
        jQuery("#list10_d").jqGrid({
            height: 100,
            url: '/JQSandbox/MyFullSubGridData?id=0',
            datatype: "json",
            mtype: 'POST',
            colNames: ['Index', 'Name', 'Department', 'Hire Date', 'Supervisor'],
            colModel: [
                { name: 'employeeID', index: 'employeeID', width: 10 },
                { width: 30 },
                { name: 'employeeDepartment', index: 'employeeDepartment', width: 30 },
                { name: 'employeeHiredate', index: 'employeeHiredate', width: 40, sortable: false },
                { name: 'employeeSup', index: 'employeeSup', formatter: 'checkbox', align: 'center', width: 30, search: false }
            ],
            loadtext: "",
            loadui: "block",
            width: 500,
            rowNum: 5,
            rowList: [5, 10, 20],
            pager: '#pager10_d',
            sortname: 'employeeID',
            viewrecords: true,
            sortorder: "asc",
            multiselect: true,
            caption: "Detail Grid: 1"
        }).navGrid('#pager10_d', { add: false, edit: false, del: false });
    });

    Read the article

  • Getting-started: Setup Database for Node.js

    - by Emile Petrone
    I am new to Node.js but am excited to try it out. I am using Express as a web framework and Jade as a template engine. Both were easy to get set up following this tutorial from Node Camp. However, the one problem I am finding is that I can't find a simple tutorial for getting a DB set up. I am trying to build a basic chat application (storing sessions and messages). Does anyone know of a good tutorial? This other SO post talks about DBs to use, but as this is very different from the Django/MySQL world I've been in, I want to make sure I understand what is going on. Thanks!

    Read the article

  • Architecture choice about representation of collections in Business Objects

    - by Rajarshi
    I have made certain choices in my architecture which I request the community to review and comment. I am breaking up the post in smaller sections to make it easier to understand the context and then suggest/comment. I am sorry that the post is long, but is required to explain the context. What am I building A typical business application where there are application users, security roles, business operation/action rights based on roles and several business modules like Stock Receive, Stock Transfer, Sale Order, Sale Invoice, Sale Return, Stock Audit etc. and several reports. The application is a WinForm application since it has a lot of rich and responsive UI requirements and has to operate in disconnected mode (with a local SQL Server), most of the time. What have I done I have built a framework - nothing to boast about, but just a set of libraries that serves the repetative requirements of my application, e.g. authentication, role based authorization, data access, validation, exception handling, logging, change status tracking, presentation model compliance and reasonable loose coupling between components. No, I have not written everything from scratch, you can say I have consolidated many things together like some concepts from CSLA, Martin Fowler for Presentation Model, blocks from Enterprise Library, Unity etc. to build a set of libraries that will help my developers be productive quickly without having to look up Google for many of the technical requirements. I have tried to keep the framework generic so that it can be used in typical business applications and also tried to follow some best practices that will support the same Business Objects to be used in an ASP.NET MVC environment also. My present architecture serves my objectives well, and have built several modules (on WinForm) without much trouble. The architecture also lent itself well to build some usable prototype on ASP.NET MVC with the same set of business objects, without changing a single line of code. My Dilemma I have used Custom Business Objects since that gives me a clearer OOP representation of the problem scope in my solution scope, and helps me visualize my entire solution as collection of objects with data and behavior rather than having a set of relational data (DataSet) and implement behaviours (business logic, validation) etc. separately. With rich databinding support in .NET 2.0 binding Custom Business Objects to UI was a breeze. Now while building my business objects, I am still in a dilemma about representation of collections in business objects. Currently I am using DataSets to represent collections while I have seen many suggestions to implement custom collections. For example, in my vision, a typical Sale Invoice Object will contain 'Sales Invoice Items' as a collection. Now theoritically, I can accept that the each 'Sales Invoice Item' should have its own behavior along with their data (ItemCode, Name, Qty, Price etc.) but typically managing of Sale Invoice Items in a Sale Invoice is handled by the Sale Invoice Object itself, e.g. adding/removing Items from collection. Additionally, we can also put business logic/rules for the Sales Invoice Items like "Qty should not be greater than the ordered qty", "Price should be max 10% above the price in Sale Order" etc. in the Sale Invoice object itself. 
With that kind of a vision, I felt that most business object child collections can be managed by the parent itself, including add/remove from collection as well and implementing business logic for the collection items, hence the collection items hold nothing but data. Additionally, typical collections are represented in UI in Grids, where ability to support DataBinding becomes very important for any collection. Implementing a custom collection, in that case would also mean, I have to implement robust DataBinding support as well, for the collection, which is of course time consuming. Now, considering child collection behaviors are implemented in the parent and the need for DataBinding of child collections, I chose DataSet to represent any child collection in my business objects. In the above example of Sale Invoice I will have 'Invoice Number', 'Date', 'Customer' etc. as attributes of the 'Sale Invoice' but 'InvoiceItems' as a DataSet. Of course, when I say DataSet, it is not a vanilla dataset but an extended DataSet that supports business rule validation and the same role based security model of my framework to allow/deny any business operation to rows/columns of the DataSet, automatically. This approach has allowed easier collection management and databinding in my business objects and my developers are able to deliver modules rapidly. Questions Do you feel that the approach is reasonable? Do you see any shortcomings of this approach? I am recently thinking of using 'Typed DataSets' as child collections, for easier representation in code, that will allow me to write 'currentInvoice.InvoiceItems' (for the DataTable) and 'invoiceItem.ProductCode' or 'invoiceItem.Qty', instead of 'drow["ProductCode"].ToString()' or '(int)drow["Qty"]' etc. Does this choice have any demerits? Thank you if you have read so far and a salute if you still have the Energy to answer.
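
    On the typed DataSet question at the end, a small hedged sketch of the difference in calling code; the SaleInvoiceDataSet type, its InvoiceItems table, and its columns are hypothetical names used only to show what the generated typed API would look like next to untyped access:

    using System;
    using System.Data;

    class TypedVsUntypedDemo
    {
        static void Main()
        {
            // Untyped access: string-keyed columns and casts everywhere.
            DataTable invoiceItems = new DataTable("InvoiceItems");
            invoiceItems.Columns.Add("ProductCode", typeof(string));
            invoiceItems.Columns.Add("Qty", typeof(int));
            DataRow drow = invoiceItems.Rows.Add("P-100", 5);
            string code = drow["ProductCode"].ToString();
            int qty = (int)drow["Qty"];

            // Typed access (hypothetical generated SaleInvoiceDataSet):
            //   var ds = new SaleInvoiceDataSet();
            //   var item = ds.InvoiceItems.NewInvoiceItemsRow();
            //   item.ProductCode = "P-100";
            //   item.Qty = 5;
            //   ds.InvoiceItems.AddInvoiceItemsRow(item);
            // Column renames then surface as compile-time errors instead of
            // runtime key lookups, at the cost of regenerating the dataset.
            Console.WriteLine("{0} x{1}", code, qty);
        }
    }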

    Read the article

  • Detecting HTML5/CSS3 Features using Modernizr

    - by dwahlin
    HTML5, CSS3, and related technologies such as canvas and web sockets bring a lot of useful new features to the table that can take Web applications to the next level. These new technologies allow applications to be built using only HTML, CSS, and JavaScript allowing them to be viewed on a variety of form factors including tablets and phones. Although HTML5 features offer a lot of promise, it’s not realistic to develop applications using the latest technologies without worrying about supporting older browsers in the process. If history has taught us anything it’s that old browsers stick around for years and years which means developers have to deal with backward compatibility issues. This is especially true when deploying applications to the Internet that target the general public. This begs the question, “How do you move forward with HTML5 and CSS3 technologies while gracefully handling unsupported features in older browsers?” Although you can write code by hand to detect different HTML5 and CSS3 features, it’s not always straightforward. For example, to check for canvas support you need to write code similar to the following:   <script> window.onload = function () { if (canvasSupported()) { alert('canvas supported'); } }; function canvasSupported() { var canvas = document.createElement('canvas'); return (canvas.getContext && canvas.getContext('2d')); } </script> If you want to check for local storage support the following check can be made. It’s more involved than it should be due to a bug in older versions of Firefox. <script> window.onload = function () { if (localStorageSupported()) { alert('local storage supported'); } }; function localStorageSupported() { try { return ('localStorage' in window && window['localStorage'] != null); } catch(e) {} return false; } </script> Looking through the previous examples you can see that there’s more than meets the eye when it comes to checking browsers for HTML5 and CSS3 features. It takes a lot of work to test every possible scenario and every version of a given browser. Fortunately, you don’t have to resort to writing custom code to test what HTML5/CSS3 features a given browser supports. By using a script library called Modernizr you can add checks for different HTML5/CSS3 features into your pages with a minimal amount of code on your part. Let’s take a look at some of the key features Modernizr offers.   Getting Started with Modernizr The first time I heard the name “Modernizr” I thought it “modernized” older browsers by added missing functionality. In reality, Modernizr doesn’t actually handle adding missing features or “modernizing” older browsers. The Modernizr website states, “The name Modernizr actually stems from the goal of modernizing our development practices (and ourselves)”. Because it relies on feature detection rather than browser sniffing (a common technique used in the past – that never worked that great), Modernizr definitely provides a more modern way to test features that a browser supports and can even handle loading additional scripts called shims or polyfills that fill in holes that older browsers may have. It’s a great tool to have in your arsenal if you’re a web developer. Modernizr is available at http://modernizr.com. Two different types of scripts are available including a development script and custom production script. To generate a production script, the site provides a custom script generation tool rather than providing a single script that has everything under the sun for HTML5/CSS3 feature detection. 
Using the script generation tool you can pick the specific test functionality that you need and ignore everything that you don’t need. That way the script is kept as small as possible. An example of the custom script download screen is shown next. Notice that specific CSS3, HTML5, and related feature tests can be selected. Once you’ve downloaded your custom script you can add it into your web page using the standard <script> element and you’re ready to start using Modernizr. <script src="Scripts/Modernizr.js" type="text/javascript"></script>   Modernizr and the HTML Element Once you’ve add a script reference to Modernizr in a page it’ll go to work for you immediately. In fact, by adding the script several different CSS classes will be added to the page’s <html> element at runtime. These classes define what features the browser supports and what features it doesn’t support. Features that aren’t supported get a class name of “no-FeatureName”, for example “no-flexbox”. Features that are supported get a CSS class name based on the feature such as “canvas” or “websockets”. An example of classes added when running a page in Chrome is shown next:   <html class=" js flexbox canvas canvastext webgl no-touch geolocation postmessage websqldatabase indexeddb hashchange history draganddrop websockets rgba hsla multiplebgs backgroundsize borderimage borderradius boxshadow textshadow opacity cssanimations csscolumns cssgradients cssreflections csstransforms csstransforms3d csstransitions fontface generatedcontent video audio localstorage sessionstorage webworkers applicationcache svg inlinesvg smil svgclippaths"> Here’s an example of what the <html> element looks like at runtime with Internet Explorer 9:   <html class=" js no-flexbox canvas canvastext no-webgl no-touch geolocation postmessage no-websqldatabase no-indexeddb hashchange no-history draganddrop no-websockets rgba hsla multiplebgs backgroundsize no-borderimage borderradius boxshadow no-textshadow opacity no-cssanimations no-csscolumns no-cssgradients no-cssreflections csstransforms no-csstransforms3d no-csstransitions fontface generatedcontent video audio localstorage sessionstorage no-webworkers no-applicationcache svg inlinesvg smil svgclippaths">   When using Modernizr it’s a common practice to define an <html> element in your page with a no-js class added as shown next:   <html class="no-js">   You’ll see starter projects such as HTML5 Boilerplate (http://html5boilerplate.com) or Initializr (http://initializr.com) follow this approach (see my previous post for more information on HTML5 Boilerplate). By adding the no-js class it’s easy to tell if a browser has JavaScript enabled or not. If JavaScript is disabled then no-js will stay on the <html> element. If JavaScript is enabled, no-js will be removed by Modernizr and a js class will be added along with other classes that define supported/unsupported features. Working with HTML5 and CSS3 Features You can use the CSS classes added to the <html> element directly in your CSS files to determine what style properties to use based upon the features supported by a given browser. 
For example, the following CSS can be used to render a box shadow for browsers that support that feature and a simple border for browsers that don’t support the feature: .boxshadow #MyContainer { border: none; -webkit-box-shadow: #666 1px 1px 1px; -moz-box-shadow: #666 1px 1px 1px; } .no-boxshadow #MyContainer { border: 2px solid black; }   If a browser supports box-shadows the boxshadow CSS class will be added to the <html> element by Modernizr. It can then be associated with a given element. This example associates the boxshadow class with a div with an id of MyContainer. If the browser doesn’t support box shadows then the no-boxshadow class will be added to the <html> element and it can be used to render a standard border around the div. This provides a great way to leverage new CSS3 features in supported browsers while providing a graceful fallback for older browsers. In addition to using the CSS classes that Modernizr provides on the <html> element, you also use a global Modernizr object that’s created. This object exposes different properties that can be used to detect the availability of specific HTML5 or CSS3 features. For example, the following code can be used to detect canvas and local storage support. You can see that the code is much simpler than the code shown at the beginning of this post. It also has the added benefit of being tested by a large community of web developers around the world running a variety of browsers.   $(document).ready(function () { if (Modernizr.canvas) { //Add canvas code } if (Modernizr.localstorage) { //Add local storage code } }); The global Modernizr object can also be used to test for the presence of CSS3 features. The following code shows how to test support for border-radius and CSS transforms:   $(document).ready(function () { if (Modernizr.borderradius) { $('#MyDiv').addClass('borderRadiusStyle'); } if (Modernizr.csstransforms) { $('#MyDiv').addClass('transformsStyle'); } });   Several other CSS3 feature tests can be performed such as support for opacity, rgba, text-shadow, CSS animations, CSS transitions, multiple backgrounds, and more. A complete list of supported HTML5 and CSS3 tests that Modernizr supports can be found at http://www.modernizr.com/docs.   Loading Scripts using Modernizr In cases where a browser doesn’t support a specific feature you can either provide a graceful fallback or load a shim/polyfill script to fill in missing functionality where appropriate (more information about shims/polyfills can be found at https://github.com/Modernizr/Modernizr/wiki/HTML5-Cross-Browser-Polyfills). Modernizr has a built-in script loader that can be used to test for a feature and then load a script if the feature isn’t available. The script loader is built-into Modernizr and is also available as a standalone yepnope script (http://yepnopejs.com). It’s extremely easy to get started using the script loader and it can really simplify the process of loading scripts based on the availability of a particular browser feature. To load scripts dynamically you can use Modernizr’s load() function which accepts properties defining the feature to test (test property), the script to load if the test succeeds (yep property), the script to load if the test fails (nope property), and a script to load regardless of if the test succeeds or fails (both property). 
An example of using load() with these properties is show next: Modernizr.load({ test: Modernizr.canvas, yep: 'html5CanvasAvailable.js’, nope: 'excanvas.js’, both: 'myCustomScript.js' }); In this example Modernizr is used to not only load scripts but also to test for the presence of the canvas feature. If the target browser supports the HTML5 canvas then the html5CanvasAvailable.js script will be loaded along with the myCustomScript.js script (use of the yep property in this example is a bit contrived – it was added simply to demonstrate how the property can be used in the load() function). Otherwise, a polyfill script named excanvas.js will be loaded to add missing canvas functionality for Internet Explorer versions prior to 9. Once excanvas.js is loaded the myCustomScript.js script will be loaded. Because Modernizr handles loading scripts, you can also use it in creative ways. For example, you can use it to load local scripts when a 3rd party Content Delivery Network (CDN) such as one provided by Google or Microsoft is unavailable for whatever reason. The Modernizr documentation provides the following example that demonstrates the process for providing a local fallback for jQuery when a CDN is down:   Modernizr.load([ { load: '//ajax.googleapis.com/ajax/libs/jquery/1.6.4/jquery.js', complete: function () { if (!window.jQuery) { Modernizr.load('js/libs/jquery-1.6.4.min.js'); } } }, { // This will wait for the fallback to load and // execute if it needs to. load: 'needs-jQuery.js' } ]); This code attempts to load jQuery from the Google CDN first. Once the script is downloaded (or if it fails) the function associated with complete will be called. The function checks to make sure that the jQuery object is available and if it’s not Modernizr is used to load a local jQuery script. After all of that occurs a script named needs-jQuery.js will be loaded. Conclusion If you’re building applications that use some of the latest and greatest features available in HTML5 and CSS3 then Modernizr is an essential tool. By using it you can reduce the amount of custom code required to test for browser features and provide graceful fallbacks or even load shim/polyfill scripts for older browsers to help fill in missing functionality. 

    Read the article

  • Basecamp-style subdomains and IDs of models

    - by Newy
    I have an app that has Basecamp-style subdomains; that is, I have Projects, Users, Apples and Oranges. The Users, Apples and Oranges are all keyed to a Project and only exist under http://project.myapp.com. I added a project_id to Users, Apples and Oranges and everything works, except of course that the IDs of those three objects increment globally, and throughout my app I look up objects by that ID. This doesn't seem like best practice. Should I instead do lookups by a secondary key? How does that affect efficiency? If there's a good blog post that covers this, that would be awesome.

    Read the article

  • Best/Most Comprehensive API for Stocks/Financial Data

    - by Wilco
    What is the most recommended free/public API for accessing financial market stats and stock quotes (preferably real-time quotes)? I'm not too picky about how it's exposed (SOAP, REST, some proprietary XML setup, etc.), as long as it's got some decent documentation. I'm planning to build a simple web dashboard in PHP with some basic data (basically a quick-n-dirty homepage), but may want to grow it into a full-blown web app eventually. Any thoughts? As I find some, I'll post a list here (feel free to comment if you've used any of them before):
    Free: opentick (soprano)
    Not free: XigniteRealTime

    Read the article

  • How can I throttle user login attempts in PHP?

    - by jasondavis
    I was just reading this post http://stackoverflow.com/questions/549/the-definitive-guide-to-website-authentication-beta#477585 on preventing rapid-fire login attempts. Best practice #1: a short time delay that increases with the number of failed attempts, like:
    1 failed attempt = no delay
    2 failed attempts = 2 sec delay
    3 failed attempts = 4 sec delay
    4 failed attempts = 8 sec delay
    5 failed attempts = 16 sec delay
    etc.
    DoS attacking this scheme would be very impractical, but on the other hand potentially devastating, since the delay increases exponentially. I am curious how I could implement something like this for my login system in PHP.
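
    The arithmetic behind that schedule is a simple exponential back-off; the sketch below shows it in C# purely for illustration (the same few lines translate directly to PHP), and where the failed-attempt count comes from is left as an assumption about the login system's storage:

    using System;

    class LoginThrottle
    {
        // Delay in seconds before the next login attempt is allowed,
        // following the schedule quoted above: 0, 2, 4, 8, 16, ...
        static int DelayForFailedAttempts(int failedAttempts)
        {
            if (failedAttempts <= 1)
                return 0;
            return (int)Math.Pow(2, failedAttempts - 1);
        }

        static void Main()
        {
            for (int attempts = 1; attempts <= 5; attempts++)
                Console.WriteLine("{0} failed attempt(s) -> {1} sec delay",
                    attempts, DelayForFailedAttempts(attempts));
        }
    }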

    Read the article

  • Convert JSON to a string using jQuery

    - by becomingGuru
    I have a nested JSON object. I want to post it as a form input value, but it seems like jQuery puts an "[object Object]" string into the value. It seems easier to pass around the string and convert it into the native form I need than to deal with JSON, as I don't need to change anything once it is generated. What is the simplest way to convert a JSON object
    var json = { "firstName": "John", "lastName": "Smith", "age": 25, "address": { "streetAddress": "21 2nd Street", "city": "New York", "state": "NY", "postalCode": "10021" }, "phoneNumber": [ { "type": "home", "number": "212 555-1234" }, { "type": "fax", "number": "646 555-4567" } ], "newSubscription": false, "companyName": null };
    into its string form?
    var json = '{ "firstName": "John", "lastName": "Smith", "age": 25, "address": { "streetAddress": "21 2nd Street", "city": "New York", "state": "NY", "postalCode": "10021" }, "phoneNumber": [ { "type": "home", "number": "212 555-1234" }, { "type": "fax", "number": "646 555-4567" } ], "newSubscription": false, "companyName": null }'
    The following doesn't do what I need: Json.stringify()

    Read the article

  • TweetMeme and ASP.NET

    - by Alexander
    How can I incorporate TweetMeme so I can get that retweet button on an .aspx page? I know the code looks something like this:
    <div style='float:right; padding: 5px 5px 5px 5px'>
    <script type='text/javascript'>
    tweetmeme_url = '<data:post.url/>';
    tweetmeme_style = 'compact';
    </script>
    <script src='http://tweetmeme.com/i/scripts/button.js' type='text/javascript'>
    </script>
    </div>
    but we can't add a script inside the body of an HTML page.

    Read the article

  • Android MediaPlayer crashing app

    - by user1555863
    I have an Android app with a button that plays a sound. The code for playing the sound:
    if (mp != null) { mp.release(); }
    mp = MediaPlayer.create(this, R.raw.match);
    mp.start();
    mp is a field in the activity:
    public class Game extends Activity implements OnClickListener {
        /** Called when the activity is first created. */
        //variables:
        MediaPlayer mp;
        //...
    The app runs OK, but after clicking the button about 200 times on the emulator, the app crashed and gave me this error: https://dl.dropbox.com/u/5488790/error.txt (I couldn't figure out how to post it here so that it would appear decently). I am assuming this is because the MediaPlayer object is consuming too much memory, but isn't mp.release() supposed to take care of this? What am I doing wrong here?

    Read the article

  • blueimp file upload: send data to the server on .fileupload

    - by MyName
    OK, I've Googled... Googled and Googled again, to no avail, on how I can send data to the server like an AJAX call's data option for the file upload:
    $('#file_upload').fileupload({
        dataType: 'json',
        url: "@(Url.Action("UploadFiles", "ExcelUpload"))",
        // formData: function(form) { return [{ name: "dataTable", value: "@(Model)" }]; },
        progressall: function (e, data) {
            $(this).find('.progressbar').progressbar({ value: parseInt(data.loaded / data.total * 100, 10) });
        },
        done: function (e, data) { BadFile(e, data); }
    });
    The controller would look something like this:
    [HttpPost]
    public ContentResult UploadFiles(Mytype param1, MyType Param2) { .. }
    I want to do something similar to this:
    $.ajax({
        url: "@(Url.Action("Action", "Controller"))",
        type: "post",
        data: { param1: value, param2: @(Model) }
    });
    on the fileupload callback. Is this possible? How can I pass values to the server side? Should I switch to a different uploader...? Please help me out! I need to resolve this as soon as possible.
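
    For what it's worth, a hedged sketch of how the MVC action could receive extra form values posted alongside the upload, assuming the file arrives as an HttpPostedFileBase and the extra field is named dataTable; both names are assumptions rather than blueimp specifics:

    using System.Web;
    using System.Web.Mvc;

    public class ExcelUploadController : Controller
    {
        [HttpPost]
        public ContentResult UploadFiles(HttpPostedFileBase file, string dataTable)
        {
            // "file" binds to the uploaded file; "dataTable" binds to an extra
            // form field posted along with it (e.g. via the plugin's formData option).
            if (file == null || file.ContentLength == 0)
                return Content("no file received");

            return Content(string.Format("received {0} ({1} bytes) with dataTable='{2}'",
                file.FileName, file.ContentLength, dataTable));
        }
    }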

    Read the article

  • What does !! (double exclamation point) mean?

    - by molecules
    In the code below, from a blog post by Alias, I noticed the use of the double exclamation point !!. I was wondering what it meant and where I could go in the future to find explanations for Perl syntax like this. (Yes, I already searched for '!!' at perlsyn). package Foo; use vars qw{$DEBUG}; BEGIN { $DEBUG = 0 unless defined $DEBUG; } use constant DEBUG => !! $DEBUG; sub foo { debug('In sub foo') if DEBUG; ... } UPDATE Thanks for all of your answers. Here is something else I just found that is related The List Squash Operator x!!

    Read the article

  • GitHub URL style

    - by Alex Le
    Hi all, I wanted users within my website to have their own URL like http://mysite.com/username (similar to GitHub, e.g. my account is http://github.com/sr3d). This would help with SEO since every profile is under the same domain, as opposed to the sub-domain approach. My site is running on Rails and Nginx/Passenger. Currently I have a solution using a bunch of rewrites in the nginx.conf file and hard-coded controller names (with namespace support as well). I can share the nginx.conf here if you guys want to take a look. I wanted to know if there's a better way of making the URL pretty like that. (If you can suggest a better place to post this question then please let me know.) Cheers, Alex

    Read the article

  • Figuring out if overflow:auto would have been triggered on a div

    - by jerrygarciuh
    Hi folks (major edit: sorry, in bed with back pain and screwed up the post). One of the ad agencies I code for had me set up an alternate scrolling solution, because you know how designers hate things that just work but aren't beautiful. The scrolling solution is applied to divs with overflow:hidden and uses jQuery's scrollTo(). So this is married in places to their CMS. What I have not been able to sort out yet is how to hide the scrolling UI when overflow:auto would not have been triggered by the CMS content. The divs have set heights and widths. Can I detect hidden content, or measure the height of the div's contents? Any ideas? TIA, JG

    Read the article

  • TFS API Change WorkItem CreatedDate And ChangedDate To Historic Dates

    - by Tarun Arora
    There may be times when you need to modify the value of the fields “System.CreatedDate” and “System.ChangedDate” on a work item. Richard Hundhausen has a great blog with ample of reason why or why not you should need to set the values of these fields to historic dates. In this blog post I’ll show you, Create a PBI WorkItem linked to a Task work item by pre-setting the value of the field ‘System.ChangedDate’ to a historic date Change the value of the field ‘System.Created’ to a historic date Simulate the historic burn down of a task type work item in a sprint Explain the impact of updating values of the fields CreatedDate and ChangedDate on the Sprint burn down chart Rules of Play      1. You need to be a member of the Project Collection Service Accounts              2. You need to use ‘WorkItemStoreFlags.BypassRules’ when you instantiate the WorkItemStore service // Instanciate Work Item Store with the ByPassRules flag _wis = new WorkItemStore(_tfs, WorkItemStoreFlags.BypassRules);      3. You cannot set the ChangedDate         - Less than the changed date of previous revision         - Greater than current date Walkthrough The walkthrough contains 5 parts 00 – Required References 01 – Connect to TFS Programmatically 02 – Create a Work Item Programmatically 03 – Set the values of fields ‘System.ChangedDate’ and ‘System.CreatedDate’ to historic dates 04 – Results of our experiment Lets get started………………………………………………… 00 – Required References Microsoft.TeamFoundation.dll Microsoft.TeamFoundation.Client.dll Microsoft.TeamFoundation.Common.dll Microsoft.TeamFoundation.WorkItemTracking.Client.dll 01 – Connect to TFS Programmatically I have a in depth blog post on how to connect to TFS programmatically in case you are interested. However, the code snippet below will enable you to connect to TFS using the Team Project Picker. // Services I need access to globally private static TfsTeamProjectCollection _tfs; private static ProjectInfo _selectedTeamProject; private static WorkItemStore _wis; // Connect to TFS Using Team Project Picker public static bool ConnectToTfs() { var isSelected = false; // The user is allowed to select only one project var tfsPp = new TeamProjectPicker(TeamProjectPickerMode.SingleProject, false); tfsPp.ShowDialog(); // The TFS project collection _tfs = tfsPp.SelectedTeamProjectCollection; if (tfsPp.SelectedProjects.Any()) { // The selected Team Project _selectedTeamProject = tfsPp.SelectedProjects[0]; isSelected = true; } return isSelected; } 02 – Create a Work Item Programmatically In the below code snippet I have create a Product Backlog Item and a Task type work item and then link them together as parent and child. Note – You will have to set the ChangedDate to a historic date when you created the work item. Remember, If you try and set the ChangedDate to a value earlier than last assigned you will receive the following exception… TF26212: Team Foundation Server could not save your changes. There may be problems with the work item type definition. Try again or contact your Team Foundation Server administrator. If you notice below I have added a few seconds each time I have modified the ‘ChangedDate’ just to avoid running into the exception listed above. 
// Create Linked Work Items and return Ids private static List<int> CreateWorkItemsProgrammatically() { // Instantiate Work Item Store with the ByPassRules flag _wis = new WorkItemStore(_tfs, WorkItemStoreFlags.BypassRules); // List of work items to return var listOfWorkItems = new List<int>(); // Create a new Product Backlog Item var p = new WorkItem(_wis.Projects[_selectedTeamProject.Name].WorkItemTypes["Product Backlog Item"]); p.Title = "This is a new PBI"; p.Description = "Description"; p.IterationPath = string.Format("{0}\\Release 1\\Sprint 1", _selectedTeamProject.Name); p.AreaPath = _selectedTeamProject.Name; p["Effort"] = 10; // Just double checking that ByPassRules is set to true if (_wis.BypassRules) { p.Fields["System.ChangedDate"].Value = Convert.ToDateTime("2012-01-01"); } if (p.Validate().Count == 0) { p.Save(); listOfWorkItems.Add(p.Id); } else { Console.WriteLine(">> Following exception(s) encountered during work item save: "); foreach (var e in p.Validate()) { Console.WriteLine(" - '{0}' ", e); } } var t = new WorkItem(_wis.Projects[_selectedTeamProject.Name].WorkItemTypes["Task"]); t.Title = "This is a task"; t.Description = "Task Description"; t.IterationPath = string.Format("{0}\\Release 1\\Sprint 1", _selectedTeamProject.Name); t.AreaPath = _selectedTeamProject.Name; t["Remaining Work"] = 10; if (_wis.BypassRules) { t.Fields["System.ChangedDate"].Value = Convert.ToDateTime("2012-01-01"); } if (t.Validate().Count == 0) { t.Save(); listOfWorkItems.Add(t.Id); } else { Console.WriteLine(">> Following exception(s) encountered during work item save: "); foreach (var e in t.Validate()) { Console.WriteLine(" - '{0}' ", e); } } var linkTypEnd = _wis.WorkItemLinkTypes.LinkTypeEnds["Child"]; p.Links.Add(new WorkItemLink(linkTypEnd, t.Id) {ChangedDate = Convert.ToDateTime("2012-01-01").AddSeconds(20)}); if (_wis.BypassRules) { p.Fields["System.ChangedDate"].Value = Convert.ToDateTime("2012-01-01").AddSeconds(20); } if (p.Validate().Count == 0) { p.Save(); } else { Console.WriteLine(">> Following exception(s) encountered during work item save: "); foreach (var e in p.Validate()) { Console.WriteLine(" - '{0}' ", e); } } return listOfWorkItems; } 03 – Set the value of “Created Date” and Change the value of “Changed Date” to Historic Dates The CreatedDate can only be changed after a work item has been created. If you try and set the CreatedDate to a historic date at the time of creation of a work item, it will not work. 
// Lets do a work item effort burn down simulation by updating the ChangedDate & CreatedDate to historic Values private static void WorkItemChangeSimulation(IEnumerable<int> listOfWorkItems) { foreach (var id in listOfWorkItems) { var wi = _wis.GetWorkItem(id); switch (wi.Type.Name) { case "ProductBacklogItem": if (wi.State.ToLower() == "new") wi.State = "Approved"; // Advance the changed date by few seconds wi.Fields["System.ChangedDate"].Value = Convert.ToDateTime(wi.Fields["System.ChangedDate"].Value).AddSeconds(10); // Set the CreatedDate to Changed Date wi.Fields["System.CreatedDate"].Value = Convert.ToDateTime(wi.Fields["System.ChangedDate"].Value).AddSeconds(10); wi.Save(); break; case "Task": // Advance the changed date by few seconds wi.Fields["System.ChangedDate"].Value = Convert.ToDateTime(wi.Fields["System.ChangedDate"].Value).AddSeconds(10); // Set the CreatedDate to Changed date wi.Fields["System.CreatedDate"].Value = Convert.ToDateTime(wi.Fields["System.ChangedDate"].Value).AddSeconds(10); wi.Save(); break; } } // A mock sprint start date var sprintStart = DateTime.Today.AddDays(-5); // A mock sprint end date var sprintEnd = DateTime.Today.AddDays(5); // What is the total Sprint duration var totalSprintDuration = (sprintEnd - sprintStart).Days; // How much of the sprint have we already covered var noOfDaysIntoSprint = (DateTime.Today - sprintStart).Days; // Get the effort assigned to our tasks var totalEffortRemaining = QueryTaskTotalEfforRemaining(listOfWorkItems); // Defining how much effort to burn every day decimal dailyBurnRate = totalEffortRemaining / totalSprintDuration < 1 ? 1 : totalEffortRemaining / totalSprintDuration; // we have just created one task var totalNoOfTasks = 1; var simulation = sprintStart; var currentDate = DateTime.Today.Date; // Carry on till effort has been burned down from sprint start to today while (simulation.Date != currentDate.Date) { var dailyBurnRate1 = dailyBurnRate; // A fixed amount needs to be burned down each day while (dailyBurnRate1 > 0) { // burn down bit by bit from all unfinished task type work items foreach (var id in listOfWorkItems) { var wi = _wis.GetWorkItem(id); var isDirty = false; // Set the status to in progress if (wi.State.ToLower() == "to do") { wi.State = "In Progress"; isDirty = true; } // Ensure that there is enough effort remaining in tasks to burn down the daily burn rate if (QueryTaskTotalEfforRemaining(listOfWorkItems) > dailyBurnRate1) { // If there is less than 1 unit of effort left in the task, burn it all if (Convert.ToDecimal(wi["Remaining Work"]) <= 1) { wi["Remaining Work"] = 0; dailyBurnRate1 = dailyBurnRate1 - Convert.ToDecimal(wi["Remaining Work"]); isDirty = true; } else { // How much to burn from each task? var toBurn = (dailyBurnRate / totalNoOfTasks) < 1 ? 
1 : (dailyBurnRate / totalNoOfTasks); // Check that the task has enough effort to allow burnForTask effort if (Convert.ToDecimal(wi["Remaining Work"]) >= toBurn) { wi["Remaining Work"] = Convert.ToDecimal(wi["Remaining Work"]) - toBurn; dailyBurnRate1 = dailyBurnRate1 - toBurn; isDirty = true; } else { wi["Remaining Work"] = 0; dailyBurnRate1 = dailyBurnRate1 - Convert.ToDecimal(wi["Remaining Work"]); isDirty = true; } } } else { dailyBurnRate1 = 0; } if (isDirty) { if (Convert.ToDateTime(wi.Fields["System.ChangedDate"].Value).Date == simulation.Date) { wi.Fields["System.ChangedDate"].Value = Convert.ToDateTime(wi.Fields["System.ChangedDate"].Value).AddSeconds(20); } else { wi.Fields["System.ChangedDate"].Value = simulation.AddSeconds(20); } wi.Save(); } } } // Increase date by 1 to perform daily burn down by day simulation = Convert.ToDateTime(simulation).AddDays(1); } } // Get the Total effort remaining in the current sprint private static decimal QueryTaskTotalEfforRemaining(List<int> listOfWorkItems) { var unfinishedWorkInCurrentSprint = _wis.GetQueryDefinition( new Guid(QueryAndGuid.FirstOrDefault(c => c.Key == "Unfinished Work").Value)); var parameters = new Dictionary<string, object> { { "project", _selectedTeamProject.Name } }; var q = new Query(_wis, unfinishedWorkInCurrentSprint.QueryText, parameters); var results = q.RunLinkQuery(); var wis = new List<WorkItem>(); foreach (var result in results) { var _wi = _wis.GetWorkItem(result.TargetId); if (_wi.Type.Name == "Task" && listOfWorkItems.Contains(_wi.Id)) wis.Add(_wi); } return wis.Sum(r => Convert.ToDecimal(r["Remaining Work"])); }   04 – The Results If you are still reading, the results are beautiful! Image 1 – Create work item with Changed Date pre-set to historic date Image 2 – Set the CreatedDate to historic date (Same as the ChangedDate) Image 3 – Simulate of effort burn down on a task via the TFS API   Image 4 – The history of changes on the Task. So, essentially this task has burned 1 hour per day Sprint Burn Down Chart – What’s not possible? The Sprint burn down chart is calculated from the System.AuthorizedDate and not the System.ChangedDate/System.CreatedDate. So, though you can change the System.ChangedDate and System.CreatedDate to historic dates you will not be able to synthesize the sprint burn down chart. Image 1 – By changing the Created Date and Changed Date to ‘18/Oct/2012’ you would have expected the burn down to have been impacted, but it won’t be, because the sprint burn down chart uses the value of field ‘System.AuthorizedDate’ to calculate the unfinished work points. The AsOf queries that are used to calculate the unfinished work points use the value of the field ‘System.AuthorizedDate’. Image 2 – Using the above code I burned down 1 hour effort per day over 5 days from the task work item, I would have expected the sprint burn down to show a constant burn down, instead the burn down shows the effort exhausted on the 24th itself. Simply because the burn down is calculated using the ‘System.AuthorizedDate’. Now you would ask… “Can I change the value of the field System.AuthorizedDate to a historic date” Unfortunately that’s not possible! 
You will run into the exception ValidationException –  “TF26194: The value for field ‘Authorized Date’ cannot be changed.” Conclusion - You need to be a member of the Project Collection Service account group in order to set the fields ‘System.ChangedDate’ and ‘System.CreatedDate’ to historic dates - You need to instantiate the WorkItemStore using the flag ByPassValidation - The System.ChangedDate needs to be set to a historic date at the time of work item creation. You cannot reset the ChangedDate to a date earlier than the existing ChangedDate and you cannot reset the ChangedDate to a date greater than the current date time. - The System.CreatedDate can only be reset after a work item has been created. You cannot set the CreatedDate at the time of work item creation. The CreatedDate cannot be greater than the current date. You can however reset the CreatedDate to a date earlier than the existing value. - You will not be able to synthesize the Sprint burn down chart by changing the value of System.ChangedDate and System.CreatedDate to historic dates, since the burn down chart uses AsOf queries to calculate the unfinished work points which internally uses the System.AuthorizedDate and NOT the System.ChangedDate & System.CreatedDate - System.AuthorizedDate cannot be set to a historic date using the TFS API Read other posts on using the TFS API here… Enjoy!

    Read the article

  • Workaround for the Mono PrivateFontCollection.AddFontFile bug

    - by CommuSoft
    When I call the PrivateFontCollection.AddFontFile method on Mono's .NET implementation, it always returns a standard font family. This bug has already been reported on several websites, but as far as I know without a way to solve it, and the bug itself isn't fixed in the Mono libraries yet. Is there any workaround for it?
    EDIT: As a reaction to henchman's answer, I will post the code:
    PrivateFontCollection pfc = new PrivateFontCollection();
    pfc.AddFontFile("myFontFamily.ttf");
    myFontFamily = pfc.Families[0x00];
    Font myFont = new Font(myFontFamily, 14.0f);
    I know this code works fine on the Microsoft .NET Framework, but when executing on Mono it just gives a standard font family (I think it is Arial) with the name of myFontFamily.ttf.

    Read the article

  • OOP Design Question - Where/When do you Validate properties?

    - by JW
    I have read a few books on OOP (DDD, PoEAA, Gang of Four) and none of them seems to cover the topic of validation; it always seems to be assumed that data is valid. I gather from the answers to this post (http://stackoverflow.com/questions/1651964/oop-design-question-validating-properties) that a client should only attempt to set a valid property value on a domain object. This person has asked a similar question that remains unanswered: http://bytes.com/topic/php/answers/789086-php-oop-setters-getters-data-validation#post3136182 So how do you ensure a value is valid? Do you have a 'validator method' alongside every getter and setter, e.g. isValidName(), setName(), getName()? I seem to be missing some key basic knowledge about OOP data validation. Can you point me to a book that covers this topic in detail, i.e. covering different types of validation, invariants, handling feedback, whether to use exceptions or not, etc.?
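
    One common pattern, sketched here as an assumption about style rather than a definitive answer, is to let the setter itself enforce the invariant, so the domain object can never hold an invalid value and no separate isValidName() call is needed:

    using System;

    public class Customer
    {
        private string _name;

        public string Name
        {
            get { return _name; }
            set
            {
                // The invariant lives with the property: callers either supply
                // a valid value or get an exception they must handle.
                if (string.IsNullOrEmpty(value) || value.Trim().Length == 0)
                    throw new ArgumentException("Name must not be empty.", "value");
                _name = value;
            }
        }
    }

    A UI layer can still pre-validate for friendlier feedback; the setter then acts as the last line of defence.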

    Read the article
