Search Results

Search found 4839 results on 194 pages for 'designer cs'.


  • Java Logger API

    - by Koppar
    This is more of a tip than a technical write-up and serves as a quick intro for newbies. The logger API helps to diagnose application-level or JDK-level issues at runtime. There are 7 levels which decide the detail in logging (SEVERE, WARNING, INFO, CONFIG, FINE, FINER, FINEST). It's best to start with the highest level and, as we narrow down, use more detailed logging for a specific area. SEVERE is the highest and FINEST is the lowest. This may not make sense until we understand some jargon.

    The Logger class provides the ability to stream messages to an output stream in a format that can be controlled by the user. What this translates to is: I can create a logger with this simple invocation and use it to add debug messages in my class:

        import java.util.logging.*;

        private static final Logger focusLog = Logger.getLogger("java.awt.focus.KeyboardFocusManager");

        if (focusLog.isLoggable(Level.FINEST)) {
            focusLog.log(Level.FINEST, "Calling peer setCurrentFocusOwner");
        }

    LogManager acts like a bookkeeper and all the getLogger calls are forwarded to LogManager. The LogManager itself is a singleton class object which gets statically initialized on JVM start up. More on this later. If there is no existing logger with the given name, a new one is created. If there is one (and not yet GC'ed), the existing Logger object is returned. By default, a root logger is created on JVM start up. All anonymous loggers are made children of the root logger. Named loggers have the hierarchy as per their name resolution. E.g., java.awt.focus is the parent logger of java.awt.focus.KeyboardFocusManager.

    Before logging any message, the logger checks the log level specified. If the level is null, the logger inherits its effective level from its parent. However, if the log level is OFF, no log messages are written, irrespective of the parent's log level. All the messages that are posted to the Logger are handled as a LogRecord object (i.e. focusLog.log creates a new LogRecord object with the log level and message as its data members). The log level and thread ID are also tracked.

    The LogRecord is passed on to all the registered Handlers. A Handler is basically a means to output the messages. The output may be redirected to either a log file, the console, or a network logging service. The Handler classes use the LogManager properties to set filters and formatters.

    During initialization or JVM start up, LogManager looks for the logging.properties file in jre/lib and sets the properties if the file is provided. An alternate location for the properties file can also be specified by setting the java.util.logging.config.file system property. This can be set in Java Control Panel -> Java -> Runtime parameters as -Djava.util.logging.config.file=<mylogfile> or passed as a command line parameter: java -Djava.util.logging.config.file=C:/Sunita/myLog

    The redirection of logging depends on what is specified (rather, registered) as a handler with the JVM in the properties file. java.util.logging.ConsoleHandler sends the output to System.err and java.util.logging.FileHandler sends the output to a file. The file name of the log file can also be specified.
    If you prefer XML format output, in the configuration file set java.util.logging.FileHandler.formatter = java.util.logging.XMLFormatter, and if you prefer simple text, set java.util.logging.FileHandler.formatter = java.util.logging.SimpleFormatter. Below is the default logging configuration file:

        ############################################################
        # Default Logging Configuration File
        # You can use a different file by specifying a filename
        # with the java.util.logging.config.file system property.
        # For example java -Djava.util.logging.config.file=myfile
        ############################################################

        ############################################################
        # Global properties
        ############################################################

        # "handlers" specifies a comma separated list of log Handler
        # classes. These handlers will be installed during VM startup.
        # Note that these classes must be on the system classpath.
        # By default we only configure a ConsoleHandler, which will only
        # show messages at the INFO and above levels.
        handlers= java.util.logging.ConsoleHandler

        # To also add the FileHandler, use the following line instead.
        #handlers= java.util.logging.FileHandler, java.util.logging.ConsoleHandler

        # Default global logging level.
        # This specifies which kinds of events are logged across
        # all loggers. For any given facility this global level
        # can be overridden by a facility specific level.
        # Note that the ConsoleHandler also has a separate level
        # setting to limit messages printed to the console.
        .level= INFO

        ############################################################
        # Handler specific properties.
        # Describes specific configuration info for Handlers.
        ############################################################

        # default file output is in user's home directory.
        java.util.logging.FileHandler.pattern = %h/java%u.log
        java.util.logging.FileHandler.limit = 50000
        java.util.logging.FileHandler.count = 1
        java.util.logging.FileHandler.formatter = java.util.logging.XMLFormatter

        # Limit the messages that are printed on the console to INFO and above.
        java.util.logging.ConsoleHandler.level = INFO
        java.util.logging.ConsoleHandler.formatter = java.util.logging.SimpleFormatter

        ############################################################
        # Facility specific properties.
        # Provides extra control for each logger.
        ############################################################

        # For example, set the com.xyz.foo logger to only log SEVERE
        # messages:
        com.xyz.foo.level = SEVERE

    Since I primarily use this method to track focus issues, here is how I get detailed AWT focus related logging: just add java.awt.focus.level=FINEST and change the default log level to FINEST. Below is a basic sample program. The sample programs are from http://www2.cs.uic.edu/~sloan/CLASSES/java/ and have been modified to illustrate the logging API. By changing the .level property in the logging.properties file, one can control the output written to the logs. To play around with the example, try changing the levels in the logging.properties file and notice the difference in messages going to the log file.
    Example

    -------- KeyboardReader.java --------

        import java.io.*;
        import java.util.*;
        import java.util.logging.*;

        public class KeyboardReader {
            private static final Logger mylog = Logger.getLogger("samples.input");

            public static void main (String[] args) throws java.io.IOException {
                String s1;
                String s2;
                double num1, num2, product;

                // set up the buffered reader to read from the keyboard
                BufferedReader br = new BufferedReader (new InputStreamReader (System.in));

                System.out.println ("Enter a line of input");
                s1 = br.readLine();

                if (mylog.isLoggable(Level.SEVERE)) {
                    mylog.log (Level.SEVERE, "The line entered is " + s1);
                }
                if (mylog.isLoggable(Level.INFO)) {
                    mylog.log (Level.INFO, "The line has " + s1.length() + " characters");
                }
                if (mylog.isLoggable(Level.FINE)) {
                    mylog.log (Level.FINE, "Breaking the line into tokens we get:");
                }

                int numTokens = 0;
                StringTokenizer st = new StringTokenizer (s1);
                while (st.hasMoreTokens()) {
                    s2 = st.nextToken();
                    numTokens++;
                    if (mylog.isLoggable(Level.FINEST)) {
                        mylog.log (Level.FINEST, " Token " + numTokens + " is: " + s2);
                    }
                }
            }
        }

    -------- MyFileReader.java --------

        import java.io.*;
        import java.util.*;
        import java.util.logging.*;

        public class MyFileReader extends KeyboardReader {
            private static final Logger mylog = Logger.getLogger("samples.input.file");

            public static void main (String[] args) throws java.io.IOException {
                String s1;
                String s2;

                // set up the buffered reader to read from the file
                BufferedReader br = new BufferedReader (new FileReader ("MyFileReader.txt"));
                s1 = br.readLine();

                if (mylog.isLoggable(Level.SEVERE)) {
                    mylog.log (Level.SEVERE, "ATTN The line is " + s1);
                }
                if (mylog.isLoggable(Level.INFO)) {
                    mylog.log (Level.INFO, "The line has " + s1.length() + " characters");
                }
                if (mylog.isLoggable(Level.FINE)) {
                    mylog.log (Level.FINE, "Breaking the line into tokens we get:");
                }

                int numTokens = 0;
                StringTokenizer st = new StringTokenizer (s1);
                while (st.hasMoreTokens()) {
                    s2 = st.nextToken();
                    numTokens++;
                    if (mylog.isLoggable(Level.FINEST)) {
                        mylog.log (Level.FINEST, "Breaking the line into tokens we get:");
                        mylog.log (Level.FINEST, " Token " + numTokens + " is: " + s2);
                    }
                } // end of while
            } // end of main
        } // end of class

    -------- MyFileReader.txt --------

        My first logging example

    -------- logging.properties --------

        handlers= java.util.logging.ConsoleHandler, java.util.logging.FileHandler
        .level= FINEST
        java.util.logging.FileHandler.pattern = java%u.log
        java.util.logging.FileHandler.limit = 50000
        java.util.logging.FileHandler.count = 1
        java.util.logging.FileHandler.formatter = java.util.logging.SimpleFormatter
        java.util.logging.ConsoleHandler.level = FINEST
        java.util.logging.ConsoleHandler.formatter = java.util.logging.SimpleFormatter
        java.awt.focus.level=ALL

    -------- Output log --------

        May 21, 2012 11:44:55 AM MyFileReader main
        SEVERE: ATTN The line is My first logging example
        May 21, 2012 11:44:55 AM MyFileReader main
        INFO: The line has 24 characters
        May 21, 2012 11:44:55 AM MyFileReader main
        FINE: Breaking the line into tokens we get:
        May 21, 2012 11:44:55 AM MyFileReader main
        FINEST: Breaking the line into tokens we get:
        May 21, 2012 11:44:55 AM MyFileReader main
        FINEST: Token 1 is: My
        May 21, 2012 11:44:55 AM MyFileReader main
        FINEST: Breaking the line into tokens we get:
        May 21, 2012 11:44:55 AM MyFileReader main
        FINEST: Token 2 is: first
        May 21, 2012 11:44:55 AM MyFileReader main
        FINEST: Breaking the line into tokens we get:
        May 21, 2012 11:44:55 AM MyFileReader main
        FINEST: Token 3 is: logging
        May 21, 2012 11:44:55 AM MyFileReader main
        FINEST: Breaking the line into tokens we get:
        May 21, 2012 11:44:55 AM MyFileReader main
        FINEST: Token 4 is: example

    Invocation command:

        "C:\Program Files (x86)\Java\jdk1.6.0_29\bin\java.exe" -Djava.util.logging.config.file=logging.properties MyFileReader

    References

    Further technical details are available here:
    http://docs.oracle.com/javase/1.4.2/docs/guide/util/logging/overview.html#1.0
    http://docs.oracle.com/javase/1.4.2/docs/api/java/util/logging/package-summary.html
    http://www2.cs.uic.edu/~sloan/CLASSES/java/

    Read the article

  • A Week of DNN – March 19, 2010

    - by Rob Chartier
    DotNetNuke 5.3.0 Released! New Features:
    • Templated User Profiles - User profile pages are now publicly viewable, and layout is controlled by the Admin.
    • Photo field in User Profile - Users can upload a photo to their profile. We also added support for user-specific data storage.
    • User Messaging - Users can send direct messages to other system users. This also includes an out-of-the-box asynchronous, provider-based message platform. You will see more of this in future releases.
    • Search Engine Sitemap Provider - The sitemap now allows module admins to plug in sitemap logic for individual modules.
    • Taxonomy Manager - Administrators can create flat or hierarchical taxonomies that can be shared and used across modules. Supporting SEO and social features at the core is an important piece for DotNetNuke moving forward.
    (Last Minute Update: 5.3.1 will be released with some last minute updates early next week)

    • DotNetNuke as a Scalable Content Management System (CMS)
    • Power, Reliability & Feature Richness – DotNetNuke an Open Source Framework
    • How to Search Engine Optimize dotnetnuke
    • dotnetnuke Training Video – Setting DNN Security
    • DotNetNuke Module Template [CS] (Free)
    • XsltDb - DotNetNuke XSLT module with database and ajax support (Free)
    • Create a non-Award Winning DotNetNuke Skin (part 1, part 2, part 3)
    • Test Driven example module nearly refactored to Web Forms MVP
    • Ajax Search v1.0.0 Released! (Live Demo)
    • Tutorials: Backup DNN, Restore DNN, Move DNN from Backup (By Mitchel Sellers)
    • A tag cloud based on the new 5.3 Taxonomy
    • Engage: Tell-a-Friend 1.1 released (FREE module)
    • 549 DotNetNuke Videos: DNN Creative Magazine Issue 54 Out Now
    http://www.dotnetnuke.com/Community/Forums/tabid/795/forumid/112/threadid/355615/scope/posts/Default.aspx

    Read the article

  • Objective-C As A First OOP Language?

    - by Daniel Scocco
    I am just finishing the second semester of my CS degree. So far I learned C, all the fundamental algorithms and data structures (e.g., searching, sorting, linked lists, heaps, hash tables, trees, graphs, etc). Next year we'll start with OOP, using either Java or C++. Recently I got some ideas for some iPhone apps and got itchy to start working on them. However, I heard some bad things about Objective-C in the past, so I am wondering if learning it as my first OOP language could be a problem. Not to mention that I think it will be hard to find books/online courses that teach basic OOP concepts using Objective-C to illustrate the concepts (as opposed to books using Java or C++, which are plentiful), so this could be another problem. In summary: should I start learning Objective-C and OOP concepts right now on my own, or wait one more semester until I learn Java/C++ at university and then jump into Objective-C? Update: For those interested in getting started with OOP via Objective-C I just found some nice tutorials inside Apple's Developer Library - http://developer.apple.com/library/mac/#documentation/Cocoa/Conceptual/OOP_ObjC/Introduction/Introduction.html

    Read the article

  • ASP.NET 4 Hosting :: ValidateRequest=”false” not working in .Net 4.0 (VS.Net 2010)

    - by mbridge
    When we migrated our project from .NET 3.5 to .NET 4.0, we started getting this error:

    Error: System.Web.HttpRequestValidationException
    A potentially dangerous Request.Form value was detected from the client (ctl00$CC$txtAnswer="… World\r\n\r\nI am doing Testin…").

        System.Web.HttpRequestValidationException
           at System.Web.HttpRequest.ValidateString(String value, String collectionKey, RequestValidationSource requestCollection)
           at System.Web.HttpRequest.ValidateNameValueCollection(NameValueCollection nvc, RequestValidationSource requestCollection)
           at System.Web.HttpRequest.get_Form()
           at System.Web.HttpRequest.get_HasForm()
           at System.Web.UI.Page.GetCollectionBasedOnMethod(Boolean dontReturnNull)
           at System.Web.UI.Page.DeterminePostBackMode()
           at System.Web.UI.Page.ProcessRequestMain(Boolean includeStagesBeforeAsyncPoint, Boolean includeStagesAfterAsyncPoint)
           at System.Web.UI.Page.ProcessRequest(Boolean includeStagesBeforeAsyncPoint, Boolean includeStagesAfterAsyncPoint)
           at System.Web.UI.Page.ProcessRequest()
           at System.Web.UI.Page.ProcessRequestWithNoAssert(HttpContext context)
           at System.Web.UI.Page.ProcessRequest(HttpContext context)
           at ASP.displaypost_aspx.ProcessRequest(HttpContext context) in c:\Windows\Microsoft.NET\Framework\v4.0.30319\Temporary ASP.NET Files\root\a37c2f81\cfc4c927\App_Web_i2rujncl.9.cs:line 0
           at System.Web.HttpApplication.CallHandlerExecutionStep.System.Web.HttpApplication.IExecutionStep.Execute()
           at System.Web.HttpApplication.ExecuteStep(IExecutionStep step, Boolean& completedSynchronously)

    What is the Cause? In ASP.NET 4, by default, request validation is enabled for all requests, because it is enabled before the BeginRequest phase of an HTTP request. As a result, request validation applies to requests for all ASP.NET resources, not just .aspx page requests. This includes requests such as Web service calls and custom HTTP handlers. Request validation is also active when custom HTTP modules are reading the contents of an HTTP request.

    Solution: To revert to the behavior of the ASP.NET 2.0 request validation feature, add the following setting in the Web.config file:

        <system.web>
          <httpRuntime requestValidationMode="2.0" />
        </system.web>
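    For clarity, here is a minimal sketch of how the two pieces fit together once request validation is reverted to the 2.0 mode: the page-level ValidateRequest="false" switch is honored again. The page and type names below are illustrative only, not from the original post.

        Web.config:

        <configuration>
          <system.web>
            <!-- revert to ASP.NET 2.0 request validation so per-page settings apply again -->
            <httpRuntime requestValidationMode="2.0" />
          </system.web>
        </configuration>

        DisplayPost.aspx (the page that accepts markup-like input):

        <%@ Page Language="C#" ValidateRequest="false"
                 CodeBehind="DisplayPost.aspx.cs" Inherits="MyApp.DisplayPost" %>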

    Read the article

  • ASP.NET MVC ‘Extendable-hooks’ – ControllerActionInvoker class

    - by nmarun
    There's a class ControllerActionInvoker in ASP.NET MVC. This can be used as one of the hook points to allow customization of your application. Watching Brad Wilson's Advanced MVC 3 talk from MVC Conf inspired me to write about this class. What MSDN says: "Represents a class that is responsible for invoking the action methods of a controller." Well, if MSDN says it, I think I can place a fair amount of confidence in what the class does. But just to get to the details, I also looked into the source code for MVC. It seems the base class Controller is where an IActionInvoker is initialized:

        1: protected virtual IActionInvoker CreateActionInvoker() {
        2:     return new ControllerActionInvoker();
        3: }

    In the ControllerActionInvoker (the out-of-the-box behavior), there are different 'versions' of the InvokeActionMethod() method that actually call the action method in question and return an instance of type ActionResult.

        1: protected virtual ActionResult InvokeActionMethod(ControllerContext controllerContext, ActionDescriptor actionDescriptor, IDictionary<string, object> parameters) {
        2:     object returnValue = actionDescriptor.Execute(controllerContext, parameters);
        3:     ActionResult result = CreateActionResult(controllerContext, actionDescriptor, returnValue);
        4:     return result;
        5: }

    I guess that's enough on the 'behind-the-scenes' of this class. Let's see how we can use this class to hook up extensions. Say I have a requirement that the user should be able to get different renderings of the same output: html, xml, json, csv and so on. The user will type in the output format in the url and should then get the result accordingly. For example:

    http://site.com/RenderAs/ – renders the default way (the razor view)
    http://site.com/RenderAs/xml
    http://site.com/RenderAs/csv
    … and so on, where RenderAs is my controller.

    There are many ways of doing this and I'm using a custom ControllerActionInvoker class (even though this might not be the best way to accomplish this). For this, my one and only route in the Global.asax.cs is:

        1: routes.MapRoute("RenderAsRoute", "RenderAs/{outputType}",
        2:     new {controller = "RenderAs", action = "Index", outputType = ""});

    Here the controller name is 'RenderAsController' and the action that'll get called (always) is the Index action. The outputType parameter will map to the type of output requested by the user (xml, csv…). I intend to display a list of food items for this example.

        1: public class Item
        2: {
        3:     public int Id { get; set; }
        4:     public string Name { get; set; }
        5:     public Cuisine Cuisine { get; set; }
        6: }
        7:
        8: public class Cuisine
        9: {
        10:     public int CuisineId { get; set; }
        11:     public string Name { get; set; }
        12: }

    Coming to my 'RenderAsController' class, I generate an IList<Item> to represent my model.

        1: private static IList<Item> GetItems()
        2: {
        3:     Cuisine cuisine = new Cuisine { CuisineId = 1, Name = "Italian" };
        4:     Item item = new Item { Id = 1, Name = "Lasagna", Cuisine = cuisine };
        5:     IList<Item> items = new List<Item> { item };
        6:     item = new Item { Id = 2, Name = "Pasta", Cuisine = cuisine };
        7:     items.Add(item);
        8:     //...
        9:     return items;
        10: }

    My action method looks like:

        1: public IList<Item> Index(string outputType)
        2: {
        3:     return GetItems();
        4: }

    There are two things that stand out in this action method. The first and most obvious is that the return type is not ActionResult (or one of its derivatives). Instead I'm returning the type of the model itself (IList<Item> in this case).
    We'll convert this to some type of ActionResult in our custom controller action invoker class later. The second thing (a little subtle) is that I'm not doing anything with the outputType value that is passed on to this action method. This value will be in the RouteData dictionary and we'll use it in our custom invoker class as well. It's time to hook up our invoker class. First, I'll override the Initialize() method of my RenderAsController class.

        1: protected override void Initialize(RequestContext requestContext)
        2: {
        3:     base.Initialize(requestContext);
        4:     string outputType = string.Empty;
        5:
        6:     // read the outputType from the RouteData dictionary
        7:     if (requestContext.RouteData.Values["outputType"] != null)
        8:     {
        9:         outputType = requestContext.RouteData.Values["outputType"].ToString();
        10:     }
        11:
        12:     // my custom invoker class
        13:     ActionInvoker = new ContentRendererActionInvoker(outputType);
        14: }

    Coming to the main part of the discussion – the ContentRendererActionInvoker class:

        1: public class ContentRendererActionInvoker : ControllerActionInvoker
        2: {
        3:     private readonly string _outputType;
        4:
        5:     public ContentRendererActionInvoker(string outputType)
        6:     {
        7:         _outputType = outputType.ToLower();
        8:     }
        9:     //...
        10: }

    So the outputType value that was read from the RouteData, which was passed in from the url, is stored here in a private field. Moving to the crux of this article, I now override the CreateActionResult method.

        1: protected override ActionResult CreateActionResult(ControllerContext controllerContext, ActionDescriptor actionDescriptor, object actionReturnValue)
        2: {
        3:     if (actionReturnValue == null)
        4:         return new EmptyResult();
        5:
        6:     ActionResult result = actionReturnValue as ActionResult;
        7:     if (result != null)
        8:         return result;
        9:
        10:     // This is where the magic happens
        11:     // Depending on the value in the _outputType field,
        12:     // return an appropriate ActionResult
        13:     switch (_outputType)
        14:     {
        15:         case "json":
        16:         {
        17:             JavaScriptSerializer serializer = new JavaScriptSerializer();
        18:             string json = serializer.Serialize(actionReturnValue);
        19:             return new ContentResult { Content = json, ContentType = "application/json" };
        20:         }
        21:         case "xml":
        22:         {
        23:             XmlSerializer serializer = new XmlSerializer(actionReturnValue.GetType());
        24:             using (StringWriter writer = new StringWriter())
        25:             {
        26:                 serializer.Serialize(writer, actionReturnValue);
        27:                 return new ContentResult { Content = writer.ToString(), ContentType = "text/xml" };
        28:             }
        29:         }
        30:         case "csv":
        31:             controllerContext.HttpContext.Response.AddHeader("Content-Disposition", "attachment; filename=items.csv");
        32:             return new ContentResult
        33:             {
        34:                 Content = ToCsv(actionReturnValue as IList<Item>),
        35:                 ContentType = "application/ms-excel"
        36:             };
        37:         case "pdf":
        38:             string filePath = controllerContext.HttpContext.Server.MapPath("~/items.pdf");
        39:             controllerContext.HttpContext.Response.AddHeader("content-disposition",
        40:                 "attachment; filename=items.pdf");
        41:             ToPdf(actionReturnValue as IList<Item>, filePath);
        42:             return new FileContentResult(StreamFile(filePath), "application/pdf");
        43:
        44:         default:
        45:             controllerContext.Controller.ViewData.Model = actionReturnValue;
        46:             return new ViewResult
        47:             {
        48:                 TempData = controllerContext.Controller.TempData,
        49:                 ViewData = controllerContext.Controller.ViewData
        50:             };
        51:     }
        52: }

    A big method there! The hook I was talking about above is actually here. This is where the different kinds/formats of output get returned based on the output type requested in the url.
    When the _outputType is not set (string.Empty, as set in the Global.asax.cs file), the razor view gets rendered (lines 45-50). This is the default behavior in most MVC applications, wherein a view (Web Forms/Razor) gets rendered in the browser. As you see here, this gets returned as a ViewResult. For an outputType of json/xml/csv, a ContentResult gets returned, while for pdf, a FileContentResult is returned. Here is how the different kinds of output look (see the screenshots in the original post). This is how we can leverage this feature of ASP.NET MVC to develop a better application. I've used the iTextSharp library to convert to pdf format. Mike gives quite a bit of detail regarding this library here. You can download the sample code here. (You'll get an option to download once you open the link.) Verdict: Hot chocolate: $3; Reebok shoes: $50; Your first car: $3000; Being able to extend a web application: Priceless.
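    As a small variation on the wiring above (a sketch, not part of the original article): since the Controller base class exposes the virtual CreateActionInvoker() method quoted earlier, the custom invoker could also be plugged in there instead of in Initialize(), assuming RouteData is already populated by the time MVC requests the invoker (it is requested during action execution, after Initialize has run):

        public class RenderAsController : Controller
        {
            protected override IActionInvoker CreateActionInvoker()
            {
                // read the requested output type from the route data (empty string by default)
                object outputType;
                RouteData.Values.TryGetValue("outputType", out outputType);

                return new ContentRendererActionInvoker((outputType ?? string.Empty).ToString());
            }

            // the Index action and GetItems() stay exactly as shown above
        }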

    Read the article

  • Ten - oh, wait, eleven - Eleven things you should know about the ASP.NET Fall 2012 Update

    - by Jon Galloway
    Today, just a little over two months after the big ASP.NET 4.5 / ASP.NET MVC 4 / ASP.NET Web API / Visual Studio 2012 / Web Matrix 2 release, the first preview of the ASP.NET Fall 2012 Update is out. Here's what you need to know:

    1. There are no new framework bits in this release - there's no change or update to ASP.NET Core, ASP.NET MVC or Web Forms features. This means that you can start using it without any updates to your server, upgrade concerns, etc.
    2. This update is really an update to the project templates and Visual Studio tooling, conceptually similar to the ASP.NET MVC 3 Tools Update.
    3. It's a relatively lightweight install. It's a 41MB download. I've installed it many times and it usually takes 5-7 minutes; it's never required a reboot.
    4. It adds some new project templates to ASP.NET MVC: Facebook Application and Single Page Application templates.
    5. It adds a lot of cool enhancements to ASP.NET Web API.
    6. It adds some tooling that makes it easy to take advantage of features like SignalR, Friendly URLs, and Windows Azure Authentication.
    7. Most of the new features are installed via NuGet packages.
    8. Since ASP.NET is open source, nightly NuGet packages are available, and the roadmap is published, most of this has really been publicly available for a while.
    9. The official name of this drop is the ASP.NET Fall 2012 Update BUILD Prerelease. Please do not attempt to say that ten times fast. While the EULA doesn't prohibit it, it WILL legally change your first name to Scott.
    10. As with all new releases, you can find out everything you need to know about the Fall Update at http://asp.net/vnext (especially the release notes!)
    11. I'm going to be showing all of this off, assisted by special guest code monkey Scott Hanselman, this Friday at BUILD: Bleeding edge ASP.NET: See what is next for MVC, Web API, SignalR and more… (and I've heard it will be livestreamed).

    Let's look at some of those things in more detail.

    No new bits

    ASP.NET 4.5, MVC 4 and Web API have a lot of great core features. I see the goal of this update release as making it easier to put those features to use to solve some useful scenarios by taking advantage of NuGet packages and template code. If you create a new ASP.NET MVC application using one of the new templates, you'll see that it's using the ASP.NET MVC 4 RTM NuGet package (4.0.20710.0). This means you can install and use the Fall Update without any impact on your existing projects and no worries about upgrading or compatibility.

    New Facebook Application Template

    ASP.NET MVC 4 (and ASP.NET 4.5 Web Forms) included the ability to authenticate your users via OAuth and OpenID, so you could let users log in to your site using a Facebook account. One of the new changes in the Fall Update is a new template that makes it really easy to create full Facebook applications. You could create a Facebook application in ASP.NET already; you'd just need to go through a few steps:

    1. Search around to find a good Facebook NuGet package, like the Facebook C# SDK (written by my friend Nathan Totten and some other Facebook SDK brainiacs).
    2. Read the Facebook developer documentation to figure out how to authenticate and integrate with them.
    3. Write some code, debug it and repeat until you got something working.
    4. Get started with the application you'd originally wanted to write.

    What this template does for you: eliminate steps 1-3.
    Erik Porter, Nathan and some other experts built out the Facebook Application template so it automatically pulls in and configures the Facebook NuGet package and makes it really easy to take advantage of it in an ASP.NET MVC application. One great example is the way you access a Facebook user's information. Take a look at the following code in a File / New / MVC / Facebook Application site. First, the Home Controller Index action:

        [FacebookAuthorize(Permissions = "email")]
        public ActionResult Index(MyAppUser user, FacebookObjectList<MyAppUserFriend> userFriends)
        {
            ViewBag.Message = "Modify this template to jump-start your Facebook application using ASP.NET MVC.";
            ViewBag.User = user;
            ViewBag.Friends = userFriends.Take(5);
            return View();
        }

    First, notice that there's a FacebookAuthorize attribute which requires that the user is authenticated via Facebook and requires permissions to access their e-mail address. It binds to two things: a custom MyAppUser object and a list of friends. Let's look at the MyAppUser code:

        using Microsoft.AspNet.Mvc.Facebook.Attributes;
        using Microsoft.AspNet.Mvc.Facebook.Models;

        // Add any fields you want to be saved for each user and specify the field name in the JSON coming back from Facebook
        // https://developers.facebook.com/docs/reference/api/user/
        namespace MvcApplication3.Models
        {
            public class MyAppUser : FacebookUser
            {
                public string Name { get; set; }

                [FacebookField(FieldName = "picture", JsonField = "picture.data.url")]
                public string PictureUrl { get; set; }

                public string Email { get; set; }
            }
        }

    You can add in other custom fields if you want, but you can also just bind to a FacebookUser and it will automatically pull in the available fields. You can even just bind directly to a FacebookUser and check what's available in debug mode, which makes it really easy to explore. For more information and some walkthroughs on creating Facebook applications, see:

    • Deploying your first Facebook App on Azure using ASP.NET MVC Facebook Template (Yao Huang Lin)
    • Facebook Application Template Tutorial (Erik Porter)

    Single Page Application template

    Early releases of ASP.NET MVC 4 included a Single Page Application template, but it was removed for the official release. There was a lot of interest in it, but it was kind of complex, as it handled features for things like data management. The new Single Page Application template that ships with the Fall Update is more lightweight. It uses Knockout.js on the client and ASP.NET Web API on the server, and it includes a sample application that shows how they all work together. I think the real benefit of this application is that it shows a good pattern for using ASP.NET Web API and Knockout.js. For instance, it's easy to end up with a mess of JavaScript when you're building out a client-side application. This template uses three separate JavaScript files (delivered via a Bundle, of course):

    • todoList.js - this is where the main client-side logic lives
    • todoList.dataAccess.js - this defines how the client-side application interacts with the back-end services
    • todoList.bindings.js - this is where you set up events and overrides for the Knockout bindings - for instance, hooking up jQuery validation and defining some client-side events

    This is a fun one to play with, because you can just create a new Single Page Application and hit F5.

    Quick, easy install (with one gotcha)

    One of the cool engineering changes for this release is a big update to the installer to make it more lightweight and efficient.
    I've been running nightly builds of this for a few weeks to prep for my BUILD demos, and the install has been really quick and easy to use. The install takes about 5 minutes, has never required a reboot for me, and the uninstall is just as simple. There's one gotcha, though. In this preview release, you may hit an issue that will require you to uninstall and re-install the NuGet VSIX package. The problem comes up when you create a new MVC application and see this dialog. The solution, as explained in the release notes, is to uninstall and re-install the NuGet VSIX package:

    1. Start Visual Studio 2012 as an Administrator
    2. Go to Tools -> Extensions and Updates and uninstall NuGet.
    3. Close Visual Studio
    4. Navigate to the ASP.NET Fall 2012 Update installation folder:
       - For Visual Studio 2012: Program Files\Microsoft ASP.NET\ASP.NET Web Stack\Visual Studio 2012
       - For Visual Studio 2012 Express for Web: Program Files\Microsoft ASP.NET\ASP.NET Web Stack\Visual Studio Express 2012 for Web
    5. Double click on the NuGet.Tools.vsix to reinstall NuGet

    This took me under a minute to do, and I was up and running.

    ASP.NET Web API Update Extravaganza!

    Uh, the Web API team is out of hand. They added a ton of new stuff: OData support, Tracing, and API Help Page generation.

    OData support

    Some people like OData. Some people start twitching when you mention it. If you're in the first group, this is for you. You can add a [Queryable] attribute to an API that returns an IQueryable<Whatever> and you get OData query support from your clients. Then, without any extra changes to your client or server code, your clients can send filters like this: /Suppliers?$filter=Name eq 'Microsoft'. For more information about OData support in ASP.NET Web API, see Alex James' mega-post about it: OData support in ASP.NET Web API.

    ASP.NET Web API Tracing

    Tracing makes it really easy to leverage the .NET tracing system from within your ASP.NET Web APIs. If you look at the \App_Start\WebApiConfig.cs file in a new ASP.NET Web API project, you'll see a call to TraceConfig.Register(config). That calls into some code in the new \App_Start\TraceConfig.cs file:

        public static void Register(HttpConfiguration configuration)
        {
            if (configuration == null)
            {
                throw new ArgumentNullException("configuration");
            }

            SystemDiagnosticsTraceWriter traceWriter = new SystemDiagnosticsTraceWriter()
            {
                MinimumLevel = TraceLevel.Info,
                IsVerbose = false
            };

            configuration.Services.Replace(typeof(ITraceWriter), traceWriter);
        }

    As you can see, this is using the standard trace system, so you can extend it to any other trace listeners you'd like.
    To see how it works with the built-in diagnostics trace writer, just run the application, call some APIs, and look at the Visual Studio Output window:

        iisexpress.exe Information: 0 : Request, Method=GET, Url=http://localhost:11147/api/Values, Message='http://localhost:11147/api/Values'
        iisexpress.exe Information: 0 : Message='Values', Operation=DefaultHttpControllerSelector.SelectController
        iisexpress.exe Information: 0 : Message='WebAPI.Controllers.ValuesController', Operation=DefaultHttpControllerActivator.Create
        iisexpress.exe Information: 0 : Message='WebAPI.Controllers.ValuesController', Operation=HttpControllerDescriptor.CreateController
        iisexpress.exe Information: 0 : Message='Selected action 'Get()'', Operation=ApiControllerActionSelector.SelectAction
        iisexpress.exe Information: 0 : Operation=HttpActionBinding.ExecuteBindingAsync
        iisexpress.exe Information: 0 : Operation=QueryableAttribute.ActionExecuting
        iisexpress.exe Information: 0 : Message='Action returned 'System.String[]'', Operation=ReflectedHttpActionDescriptor.ExecuteAsync
        iisexpress.exe Information: 0 : Message='Will use same 'JsonMediaTypeFormatter' formatter', Operation=JsonMediaTypeFormatter.GetPerRequestFormatterInstance
        iisexpress.exe Information: 0 : Message='Selected formatter='JsonMediaTypeFormatter', content-type='application/json; charset=utf-8'', Operation=DefaultContentNegotiator.Negotiate
        iisexpress.exe Information: 0 : Operation=ApiControllerActionInvoker.InvokeActionAsync, Status=200 (OK)
        iisexpress.exe Information: 0 : Operation=QueryableAttribute.ActionExecuted, Status=200 (OK)
        iisexpress.exe Information: 0 : Operation=ValuesController.ExecuteAsync, Status=200 (OK)
        iisexpress.exe Information: 0 : Response, Status=200 (OK), Method=GET, Url=http://localhost:11147/api/Values, Message='Content-type='application/json; charset=utf-8', content-length=unknown'
        iisexpress.exe Information: 0 : Operation=JsonMediaTypeFormatter.WriteToStreamAsync
        iisexpress.exe Information: 0 : Operation=ValuesController.Dispose

    API Help Page

    When you create a new ASP.NET Web API project, you'll see an API link in the header. Clicking the API link shows generated help documentation for your ASP.NET Web API controllers, and clicking on any of those APIs shows specific information. What's great is that this information is dynamically generated, so if you add your own new APIs it will automatically show useful and up to date help. This system is also completely extensible, so you can generate documentation in other formats or customize the HTML help as much as you'd like. The Help generation code is all included in an ASP.NET MVC Area.

    SignalR

    SignalR is a really slick open source project that was started by some ASP.NET team members in their spare time to add real-time communications capabilities to ASP.NET - and .NET applications in general. It allows you to handle long running communications channels between your server and multiple connected clients using the best communications channel they can both support - websockets if available, falling back all the way to old technologies like long polling if necessary for old browsers. SignalR remains an open source project, but now it's being included in ASP.NET (also open source, hooray!). That means there's real, official ASP.NET engineering work being put into SignalR, and it's even easier to use in an ASP.NET application. Now in any ASP.NET project type, you can right-click / Add / New Item... SignalR Hub or Persistent Connection.

    And much more...

    There's quite a bit more.
You can find more info at http://asp.net/vnext, and we'll be adding more content as fast as we can. Watch my BUILD talk to see as I demonstrate these and other features in the ASP.NET Fall 2012 Update, as well as some other even futurey-er stuff!
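    To make the OData point in the article concrete, here is a minimal sketch of a Web API controller using the [Queryable] attribute described above. The controller, Supplier type and data are illustrative only, and the attribute's exact namespace/packaging may differ in the preview bits - treat this as a sketch rather than the official sample.

        using System.Collections.Generic;
        using System.Linq;
        using System.Web.Http;

        public class Supplier
        {
            public int Id { get; set; }
            public string Name { get; set; }
        }

        public class SuppliersController : ApiController
        {
            private static readonly List<Supplier> Suppliers = new List<Supplier>
            {
                new Supplier { Id = 1, Name = "Microsoft" },
                new Supplier { Id = 2, Name = "Contoso" }
            };

            // GET /api/suppliers?$filter=Name eq 'Microsoft'
            // [Queryable] lets OData query options filter the IQueryable before it is returned
            [Queryable]
            public IQueryable<Supplier> Get()
            {
                return Suppliers.AsQueryable();
            }
        }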

    Read the article

  • Polygonal Triangulation - algorithm with O(n log n) complexity

    - by Arthur Wulf White
    I wish to triangulate a polygon I only have the outline of (p0, p1, p2 ... pn), as described in this question: polygon triangulation algorithm, and this webpage: http://cgm.cs.mcgill.ca/~godfried/teaching/cg-projects/97/Ian/algorithm2.html I do not wish to learn the subject and have a deep understanding of it at the moment. I only want to see an effective algorithm that can be used out of the box. The one described on that site seems to have somewhat high complexity: O(n) just to find one ear. I heard this could be done in O(n log n) time. Is there any well known, easy to use algorithm that I can translate/port to use in my engine that runs with somewhat reasonable complexity? The reason I need to triangulate is that I wish to fill out a 2d outline and render it in 3d, much like we fill out a 2d outline in paint. I could use sprites, but that would not serve, because I am planning to play with the resulting model on the z-axis, giving it different heights in the different areas. I would love to try the books that were mentioned, although I suspect that is not the answer most readers are hoping for when they read this Q & A format. Mostly I'd like to see a code snippet I can cut and paste, modify a bit, and start running.

    Read the article

  • I have only two languages on my resume - how bad is this?

    - by Karl
    Hi there! I have a question that can be best answered here, given the vast experience some of you guys have! I am going to finish my bachelor's degree in CS and, let's face it, I am just comfortable with C++ and Python. C++ - I have no experience to show for it and I can't quote the C++ standard like some of the guys on SO do, but I am comfortable with the language basics and the stuff that mostly matters. With Python, I have demonstrated work experience with a good company, so I can safely put that. I have never touched C, though I have been meaning to do it now. So I cannot write C on my resume because I have not done it ever. Sure, I can finish K & R and get a sense of the language in a month, but I don't feel like writing it because that would be being unfaithful to myself. So the big question is: are two languages on a resume considered OK, or is that usually a bad sign? Most resumes I have seen mention lots of languages and hence my question. Under the language section of my resume, I just mention C++ and Python, and that kinda looks empty! What are your views on this and how do you feel about such a situation? PS: I really don't want to write every single library or API I am familiar with. Or should I?

    Read the article

  • SSO Configuration MMC Snap-in

    - by Christopher House
    This may be old news to most people but I've been away from BizTalk for about a year, so this was a welcome development for me.  The other day, I was discussing with my client the various options for storing configuration data required by our project.  I brought up SSO as it's something I've used with success on previous projects.  The client hadn't previously used SSO and was concerned about the maintainability of configuration stored in SSO.  I offered to do a quick POC to demonstrate storing/retrieving/maintaining configuration via SSO.  As I set about creating the POC, I needed to download Richard Seroter's SSO configuration tool, since that's what I've used previously for managing SSO data.  I went to Google to track it down and was pleasantly surprised to discover that Microsoft has finally released an MMC snap-in for maintaining SSO applications. The download contains three components.  The first is the MMC snap-in, which allows you to create/delete applications as well as name/value pairs within an application.  Next is a C# class file, SSOConfigHelper.cs, which can be used to retrieve values from an SSO application.  Finally, there's an MSBuild task that allows you to deploy SSO application data with your builds. I didn't see any information as to which versions are supported; I'm using it in a BizTalk 2009 environment and it seems to work quite nicely.  The download package is available here.
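    As a rough illustration of how a POC might read a setting back out (a sketch only: the application and property names are made up, and I'm assuming the SSOConfigHelper class from the download exposes a static Read(appName, propName) method - check the shipped .cs file for the exact signature before relying on it):

        using System;

        public class SsoPoc
        {
            public static void Main()
            {
                // "MyProject.Config" is the SSO affiliate application created in the MMC snap-in,
                // "SmtpServer" is one of its name/value pairs - both names are illustrative
                string smtpServer = SSOConfigHelper.Read("MyProject.Config", "SmtpServer");
                Console.WriteLine("SmtpServer = " + smtpServer);
            }
        }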

    Read the article

  • Adventures in MVVM – My ViewModel Base

    - by Brian Genisio's House Of Bilz
    More Adventures in MVVM

    First, I'd like to say: THIS IS NOT A NEW MVVM FRAMEWORK. I tend to believe that MVVM support code should be specific to the system you are building and the developers working on it.  I have yet to find an MVVM framework that does everything I want it to without doing too much.  Don't get me wrong… there are some good frameworks out there.  I just like to pick and choose things that make sense for me.  I'd also like to add that some of these features only work in WPF.  As of Silverlight 4, it doesn't support binding to dynamic properties, so some of the capabilities are lost.

    That being said, I want to share my ViewModel base class with the world.  I have had several conversations with people about the problems I have solved using this ViewModel base.  A while back, I posted an article about some experiments with a "Rails Inspired ViewModel".  What followed from those ideas was a ViewModel base class that I take with me and use in my projects.  It has a lot of features, all designed to reduce the friction in writing view models. I have put the code out on Codeplex under the project: ViewModelSupport.

    Finally, this article focuses on the ViewModel and only glosses over the View and the Model.  Without all three, you don't have MVVM.  But this base class is for the ViewModel, so that is what I am focusing on.

    Features:
    • Automatic Command Plumbing
    • Property Change Notification
    • Strongly Typed Property Getter/Setters
    • Dynamic Properties
    • Default Property Values
    • Derived Properties
    • Automatic Method Execution
    • Command CanExecute Change Notification
    • Design-Time Detection
    • What about Silverlight?

    Automatic Command Plumbing

    This feature takes the plumbing out of creating commands.  The common pattern for commands in a ViewModel is to have an Execute method as well as an optional CanExecute method.  To plumb that together, you create an ICommand property and set it in the constructor like so:

    Before

        public class AutomaticCommandViewModel
        {
            public AutomaticCommandViewModel()
            {
                MyCommand = new DelegateCommand(Execute_MyCommand, CanExecute_MyCommand);
            }

            public void Execute_MyCommand()
            {
                // Do something
            }

            public bool CanExecute_MyCommand()
            {
                // Are we in a state to do something?
                return true;
            }

            public DelegateCommand MyCommand { get; private set; }
        }

    With the base class, this plumbing is automatic and the property (MyCommand of type ICommand) is created for you.  The base class uses the convention that methods be prefixed with Execute_ and CanExecute_ in order to be plumbed into commands with the property name after the prefix.  You are left to be expressive with your behavior without the plumbing.  If you are wondering how CanExecuteChanged is raised, see the later section "Command CanExecute Change Notification".

    After

        public class AutomaticCommandViewModel : ViewModelBase
        {
            public void Execute_MyCommand()
            {
                // Do something
            }

            public bool CanExecute_MyCommand()
            {
                // Are we in a state to do something?
                return true;
            }
        }

    Property Change Notification

    One thing that always kills me when implementing ViewModels is how to make properties that notify when they change (via the INotifyPropertyChanged interface).  There have been many attempts to make this more automatic.  My base class includes one option.  There are others, but I feel like this works best for me. The common pattern (without my base class) is to create a private backing store for the variable and specify a getter that returns the private field.
The setter will set the private field and fire an event that notifies the change, only if the value has changed. Before public class PropertyHelpersViewModel : INotifyPropertyChanged { private string text; public string Text { get { return text; } set { if(text != value) { text = value; RaisePropertyChanged("Text"); } } } protected void RaisePropertyChanged(string propertyName) { var handlers = PropertyChanged; if(handlers != null) handlers(this, new PropertyChangedEventArgs(propertyName)); } public event PropertyChangedEventHandler PropertyChanged; } This way of defining properties is error-prone and tedious.  Too much plumbing.  My base class eliminates much of that plumbing with the same functionality: After public class PropertyHelpersViewModel : ViewModelBase { public string Text { get { return Get<string>("Text"); } set { Set("Text", value);} } }   Strongly Typed Property Getters/Setters It turns out that we can do better than that.  We are using a strongly typed language where the use of “Magic Strings” is often frowned upon.  Lets make the names in the getters and setters strongly typed: A refinement public class PropertyHelpersViewModel : ViewModelBase { public string Text { get { return Get(() => Text); } set { Set(() => Text, value); } } }   Dynamic Properties In C# 4.0, we have the ability to program statically OR dynamically.  This base class lets us leverage the powerful dynamic capabilities in our ecosystem. (This is how the automatic commands are implemented, BTW)  By calling Set(“Foo”, 1), you have now created a dynamic property called Foo.  It can be bound against like any static property.  The opportunities are endless.  One great way to exploit this behavior is if you have a customizable view engine with templates that bind to properties defined by the user.  The base class just needs to create the dynamic properties at runtime from information in the model, and the custom template can bind even though the static properties do not exist. All dynamic properties still benefit from the notifiable capabilities that static properties do. For any nay-sayers out there that don’t like using the dynamic features of C#, just remember this: the act of binding the View to a ViewModel is dynamic already.  Why not exploit it?  Get over it :) Just declare the property dynamically public class DynamicPropertyViewModel : ViewModelBase { public DynamicPropertyViewModel() { Set("Foo", "Bar"); } } Then reference it normally <TextBlock Text="{Binding Foo}" />   Default Property Values The Get() method also allows for default properties to be set.  Don’t set them in the constructor.  Set them in the property and keep the related code together: public string Text { get { return Get(() => Text, "This is the default value"); } set { Set(() => Text, value);} }   Derived Properties This is something I blogged about a while back in more detail.  This feature came from the chaining of property notifications when one property affects the results of another, like this: Before public class DependantPropertiesViewModel : ViewModelBase { public double Score { get { return Get(() => Score); } set { Set(() => Score, value); RaisePropertyChanged("Percentage"); RaisePropertyChanged("Output"); } } public int Percentage { get { return (int)(100 * Score); } } public string Output { get { return "You scored " + Percentage + "%."; } } } The problem is: The setter for Score has to be responsible for notifying the world that Percentage and Output have also changed.  This, to me, is backwards.    
It certainly violates the “Single Responsibility Principle.” I have been bitten in the rear more than once by problems created from code like this.  What we really want to do is invert the dependency.  Let the Percentage property declare that it changes when the Score Property changes. After public class DependantPropertiesViewModel : ViewModelBase { public double Score { get { return Get(() => Score); } set { Set(() => Score, value); } } [DependsUpon("Score")] public int Percentage { get { return (int)(100 * Score); } } [DependsUpon("Percentage")] public string Output { get { return "You scored " + Percentage + "%."; } } }   Automatic Method Execution This one is extremely similar to the previous, but it deals with method execution as opposed to property.  When you want to execute a method triggered by property changes, let the method declare the dependency instead of the other way around. Before public class DependantMethodsViewModel : ViewModelBase { public double Score { get { return Get(() => Score); } set { Set(() => Score, value); WhenScoreChanges(); } } public void WhenScoreChanges() { // Handle this case } } After public class DependantMethodsViewModel : ViewModelBase { public double Score { get { return Get(() => Score); } set { Set(() => Score, value); } } [DependsUpon("Score")] public void WhenScoreChanges() { // Handle this case } }   Command CanExecute Change Notification Back to Commands.  One of the responsibilities of commands that implement ICommand – it must fire an event declaring that CanExecute() needs to be re-evaluated.  I wanted to wait until we got past a few concepts before explaining this behavior.  You can use the same mechanism here to fire off the change.  In the CanExecute_ method, declare the property that it depends upon.  When that property changes, the command will fire a CanExecuteChanged event, telling the View to re-evaluate the state of the command.  The View will make appropriate adjustments, like disabling the button. DependsUpon works on CanExecute methods as well public class CanExecuteViewModel : ViewModelBase { public void Execute_MakeLower() { Output = Input.ToLower(); } [DependsUpon("Input")] public bool CanExecute_MakeLower() { return !string.IsNullOrWhiteSpace(Input); } public string Input { get { return Get(() => Input); } set { Set(() => Input, value);} } public string Output { get { return Get(() => Output); } set { Set(() => Output, value); } } }   Design-Time Detection If you want to add design-time data to your ViewModel, the base class has a property that lets you ask if you are in the designer.  You can then set some default values that let your designer see what things might look like in runtime. Use the IsInDesignMode property public DependantPropertiesViewModel() { if(IsInDesignMode) { Score = .5; } }   What About Silverlight? Some of the features in this base class only work in WPF.  As of version 4, Silverlight does not support binding to dynamic properties.  This, in my opinion, is a HUGE limitation.  Not only does it keep you from using many of the features in this ViewModel, it also keeps you from binding to ViewModels designed in IronRuby.  Does this mean that the base class will not work in Silverlight?  No.  Many of the features outlined in this article WILL work.  All of the property abstractions are functional, as long as you refer to them statically in the View.  This, of course, means that the automatic command hook-up doesn’t work in Silverlight.  
You need to plumb it to a static property in order for the Silverlight View to bind to it.  Can I has a dynamic property in SL5?     Good to go? So, that concludes the feature explanation of my ViewModel base class.  Feel free to take it, fork it, whatever.  It is hosted on CodePlex.  When I find other useful additions, I will add them to the public repository.  I use this base class every day.  It is mature, and well tested.  If, however, you find any problems with it, please let me know!  Also, feel free to suggest patches to me via the CodePlex site.  :)
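    To tie the conventions above together, here is a small, hypothetical ViewModel written against this base class. The property and command names are mine, not from the article; it simply combines Get/Set with defaults, [DependsUpon], a dynamic property and the Execute_/CanExecute_ command plumbing exactly as described.

        public class SearchViewModel : ViewModelBase
        {
            public string Query
            {
                get { return Get(() => Query, string.Empty); }   // default value supplied in the getter
                set { Set(() => Query, value); }
            }

            // Re-evaluated (and re-notified) whenever Query changes
            [DependsUpon("Query")]
            public string Status
            {
                get { return string.IsNullOrWhiteSpace(Query) ? "Type something to search" : "Ready"; }
            }

            // Plumbed into an ICommand property named "Search" by the base class
            public void Execute_Search()
            {
                Set("LastQuery", Query);   // dynamic property - bindable in WPF and still notifiable
            }

            // CanExecuteChanged is raised for the Search command when Query changes
            [DependsUpon("Query")]
            public bool CanExecute_Search()
            {
                return !string.IsNullOrWhiteSpace(Query);
            }
        }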

    Read the article

  • Using local DNS and public DNS during site development

    - by ChrisFM
    I'm a web designer and I often develop new sites for existing businesses. Sometimes I find it useful to point my DNS address (for my personal computer) to the development server's local DNS (instead of Google's 8.8.8.8 or the default ISP's address). I like this as it lets me see that the new site's internal links, etc., work before switching the authoritative DNS over from the old site. However, outgoing links (like a Google map) would not resolve with my local DNS. The first thing I thought was: oh, I just need to fill out my DNS directory with Google and every other domain I might need to link to... wait, that sounds insane. I was wondering if anyone could give me insight into a better or more functional way to use local DNS addresses when they're available and a public DNS address when they are not? Kind of like a DNS 'roll-over'? Or maybe a completely different approach to development altogether. Thanks in advance for your insight. All the best!

    Read the article

  • Are programmers bad testers?

    - by jhsowter
    I know this sounds a lot like other questions which have already been asked, but it is actually slightly different. It seems to be generally considered that programmers are not good at performing the role of testing an application. For example: Joel on Software - Top Five (Wrong) Reasons You Don't Have Testers (emphasis mine): Don't even think of trying to tell college CS graduates that they can come work for you, but "everyone has to do a stint in QA for a while before moving on to code". I've seen a lot of this. Programmers do not make good testers, and you'll lose a good programmer, who is a lot harder to replace. And in this question, one of the most popular answers says (again, my emphasis): Developers can be testers, but they shouldn't be testers. Developers tend to unintentionally/unconsciously avoid using the application in a way that might break it. That's because they wrote it and mostly test it in the way it should be used. So the question is: are programmers bad at testing? What evidence or arguments are there to support this conclusion? Are programmers only bad at testing their own code? Is there any evidence to suggest that programmers are actually good at testing? What do I mean by "testing"? I do not mean unit testing or anything that is considered part of the methodology used by the software team to write software. I mean some kind of quality assurance method that is used after the code has been built and deployed to whatever that software team would call the "test environment."

    Read the article

  • Desktop Fun: Fast Cars Wallpapers

    - by Asian Angel
    Have you been feeling a need for speed lately? Then get ready to jump into the driver’s seat with our Fast Cars Wallpapers collection. Note: Click on the picture to see the full-size image—these wallpapers vary in size so you may need to crop, stretch, or place them on a colored background in order to best match them to your screen’s resolution. For more fun wallpapers be certain to visit our new Desktop Fun section.

    Read the article

  • Translating with Google Translate without API and C# Code

    - by Rick Strahl
    Some time back I created a data base driven ASP.NET Resource Provider along with some tools that make it easy to edit ASP.NET resources interactively in a Web application. One of the small helper features of the interactive resource admin tool is the ability to do simple translations using both Google Translate and Babelfish. Here's what this looks like in the resource administration form: When a resource is displayed, the user can click a Translate button and it will show the current resource text and then lets you set the source and target languages to translate. The Go button fires the translation for both Google and Babelfish and displays them - pressing use then changes the language of the resource to the target language and sets the resource value to the newly translated value. It's a nice and quick way to get a quick translation going. Ch… Ch… Changes Originally, both implementations basically did some screen scraping of the interactive Web sites and retrieved translated text out of result HTML. Screen scraping is always kind of an iffy proposition as content can be changed easily, but surprisingly that code worked for many years without fail. Recently however, Google at least changed their input pages to use AJAX callbacks and the page updates no longer worked the same way. End result: The Google translate code was broken. Now, Google does have an official API that you can access, but the API is being deprecated and you actually need to have an API key. Since I have public samples that people can download the API key is an issue if I want people to have the samples work out of the box - the only way I could even do this is by sharing my API key (not allowed).   However, after a bit of spelunking and playing around with the public site however I found that Google's interactive translate page actually makes callbacks using plain public access without an API key. By intercepting some of those AJAX calls and calling them directly from code I was able to get translation back up and working with minimal fuss, by parsing out the JSON these AJAX calls return. I don't think this particular Warning: This is hacky code, but after a fair bit of testing I found this to work very well with all sorts of languages and accented and escaped text etc. as long as you stick to small blocks of translated text. I thought I'd share it in case anybody else had been relying on a screen scraping mechanism like I did and needed a non-API based replacement. Here's the code: /// <summary> /// Translates a string into another language using Google's translate API JSON calls. /// <seealso>Class TranslationServices</seealso> /// </summary> /// <param name="Text">Text to translate. 
Should be a single word or sentence.</param> /// <param name="FromCulture"> /// Two letter culture (en of en-us, fr of fr-ca, de of de-ch) /// </param> /// <param name="ToCulture"> /// Two letter culture (as for FromCulture) /// </param> public string TranslateGoogle(string text, string fromCulture, string toCulture) { fromCulture = fromCulture.ToLower(); toCulture = toCulture.ToLower(); // normalize the culture in case something like en-us was passed // retrieve only en since Google doesn't support sub-locales string[] tokens = fromCulture.Split('-'); if (tokens.Length > 1) fromCulture = tokens[0]; // normalize ToCulture tokens = toCulture.Split('-'); if (tokens.Length > 1) toCulture = tokens[0]; string url = string.Format(@"http://translate.google.com/translate_a/t?client=j&text={0}&hl=en&sl={1}&tl={2}", HttpUtility.UrlEncode(text),fromCulture,toCulture); // Retrieve Translation with HTTP GET call string html = null; try { WebClient web = new WebClient(); // MUST add a known browser user agent or else response encoding doen't return UTF-8 (WTF Google?) web.Headers.Add(HttpRequestHeader.UserAgent, "Mozilla/5.0"); web.Headers.Add(HttpRequestHeader.AcceptCharset, "UTF-8"); // Make sure we have response encoding to UTF-8 web.Encoding = Encoding.UTF8; html = web.DownloadString(url); } catch (Exception ex) { this.ErrorMessage = Westwind.Globalization.Resources.Resources.ConnectionFailed + ": " + ex.GetBaseException().Message; return null; } // Extract out trans":"...[Extracted]...","from the JSON string string result = Regex.Match(html, "trans\":(\".*?\"),\"", RegexOptions.IgnoreCase).Groups[1].Value; if (string.IsNullOrEmpty(result)) { this.ErrorMessage = Westwind.Globalization.Resources.Resources.InvalidSearchResult; return null; } //return WebUtils.DecodeJsString(result); // Result is a JavaScript string so we need to deserialize it properly JavaScriptSerializer ser = new JavaScriptSerializer(); return ser.Deserialize(result, typeof(string)) as string; } To use the code is straightforward enough - simply provide a string to translate and a pair of two letter source and target languages: string result = service.TranslateGoogle("Life is great and one is spoiled when it goes on and on and on", "en", "de"); TestContext.WriteLine(result); How it works The code to translate is fairly straightforward. It basically uses the URL I snagged from the Google Translate Web Page slightly changed to return a JSON result (&client=j) instead of the funky nested PHP style JSON array that the default returns. The JSON result returned looks like this: {"sentences":[{"trans":"Das Leben ist großartig und man wird verwöhnt, wenn es weiter und weiter und weiter geht","orig":"Life is great and one is spoiled when it goes on and on and on","translit":"","src_translit":""}],"src":"en","server_time":24} I use WebClient to make an HTTP GET call to retrieve the JSON data and strip out part of the full JSON response that contains the actual translated text. Since this is a JSON response I need to deserialize the JSON string in case it's encoded (for upper/lower ASCII chars or quotes etc.). Couple of odd things to note in this code: First note that a valid user agent string must be passed (or at least one starting with a common browser identification - I use Mozilla/5.0). Without this Google doesn't encode the result with UTF-8, but instead uses a ISO encoding that .NET can't easily decode. Google seems to ignore the character set header and use the user agent instead which is - odd to say the least. 
The other is that the code returns a full JSON response. Rather than use the full response and decode it into a custom type that matches Google's result object, I just strip out the translated text. Yeah, I know that's hacky, but it avoids an extra type and firing up the JavaScript deserializer. My internal version uses a small DecodeJsString() method to decode JavaScript without the overhead of a full JSON parser. It's obviously not rocket science, but as mentioned above what's nice about it is that it works without a Google API key. I can't vouch for how many translations you can do before there are cut-offs, but in my limited testing running a few stress tests on a Web server under load I didn't run into any problems. Limitations There are some restrictions with this: It only works on single words or single sentences - multiple sentences (delimited by ".") are cut off at the ".". There is also a length limitation which appears to happen at around 220 characters or so. While that may not sound like much, for typical word or phrase translations this is plenty of length. Use with a grain of salt - Google seems to be trying to limit their exposure to usage of the Translate APIs, so this code might break in the future, but for now at least it works. FWIW, I also found that Google's translation is not as good as Babelfish, especially for contextual content like sentences. Google is faster, but Babelfish tends to give better translations. This is why in my translation tool I show both the Google and Babelfish values retrieved. You can check out the code for this in the West Wind Web Toolkit's TranslationService.cs file, which contains both the Google and Babelfish translation code pieces. Ironically the Babelfish code has been working forever using screen scraping and continues to work just fine today. I think it's a good idea to have multiple translation providers in case one is down or changes its format, hence the dual display in my translation form above. I hope this has been helpful to some of you - I've actually had many small uses for this code in a number of applications and it's sweet to have a simple routine that performs these operations for me easily. Resources Live Localization Sample Localization Resource Provider Administration form that includes options to translate text using Google and Babelfish interactively. TranslationService.cs The full source code in the West Wind Web Toolkit's Globalization library that contains the translation code. © Rick Strahl, West Wind Technologies, 2005-2011. Posted in CSharp, HTTP.
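For reference, here is what the typed alternative mentioned above might look like. This is a minimal sketch only: the class shape is inferred from the sample JSON shown earlier ("sentences" with "trans"/"orig" members), and the class names are my own assumptions, not an official Google contract.

    using System.Collections.Generic;
    using System.Web.Script.Serialization;

    // Shape inferred from the sample JSON above; property names must match the JSON keys.
    public class GoogleSentence
    {
        public string trans { get; set; }   // translated text
        public string orig { get; set; }    // original text
    }

    public class GoogleTranslateResponse
    {
        public List<GoogleSentence> sentences { get; set; }
        public string src { get; set; }     // detected source language
    }

    // Usage, given the raw JSON string downloaded in TranslateGoogle():
    // JavaScriptSerializer ser = new JavaScriptSerializer();
    // GoogleTranslateResponse parsed = ser.Deserialize<GoogleTranslateResponse>(html);
    // string translated = parsed.sentences[0].trans;

The trade-off is one extra type in exchange for skipping the Regex step and letting the serializer handle escaped quotes inside the translated value.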

    Read the article

  • Project Navigation and File Nesting in ASP.NET MVC Projects

    - by Rick Strahl
    More and more I’m finding myself getting lost in the files in some of my larger Web projects. There’s so much freaking content to deal with – HTML Views, several derived CSS pages, page level CSS, script libraries, application wide scripts and page specific script files etc. etc. Thankfully I use Resharper and the Ctrl-T Go to Anything which autocompletes you to any file, type, member rapidly. Awesome except when I forget – or when I’m not quite sure of the name of what I’m looking for. Project navigation is still important. Sometimes while working on a project I seem to have 30 or more files open and trying to locate another new file to open in the solution often ends up being a mental exercise – “where did I put that thing?” It’s those little hesitations that tend to get in the way of workflow frequently. To make things worse most NuGet packages for client side frameworks and scripts, dump stuff into folders that I generally don’t use. I’ve never been a fan of the ‘Content’ folder in MVC which is just an empty layer that doesn’t serve much of a purpose. It’s usually the first thing I nuke in every MVC project. To me the project root is where the actual content for a site goes – is there really a need to add another folder to force another path into every resource you use? It’s ugly and also inefficient as it adds additional bytes to every resource link you embed into a page. Alternatives I’ve been playing around with different folder layouts recently and found that moving my cheese around has actually made project navigation much easier. In this post I show a couple of things I’ve found useful and maybe you find some of these useful as well or at least get some ideas what can be changed to provide better project flow. The first thing I’ve been doing is add a root Code folder and putting all server code into that. I’m a big fan of treating the Web project root folder as my Web root folder so all content comes from the root without unneeded nesting like the Content folder. By moving all server code out of the root tree (except for Code) the root tree becomes a lot cleaner immediately as you remove Controllers, App_Start, Models etc. and move them underneath Code. Yes this adds another folder level for server code, but it leaves only code related things in one place that’s easier to jump back and forth in. Additionally I find myself doing a lot less with server side code these days, more with client side code so I want the server code separated from that. The root folder itself then serves as the root content folder. Specifically I have the Views folder below it, as well as the Css and Scripts folders which serve to hold only common libraries and global CSS and Scripts code. These days of building SPA style application, I also tend to have an App folder there where I keep my application specific JavaScript files, as well as HTML View templates for client SPA apps like Angular. Here’s an example of what this looks like in a relatively small project: The goal is to keep things that are related together, so I don’t end up jumping around so much in the solution to get to specific project items. The Code folder may irk some of you and hark back to the days of the App_Code folder in non Web-Application projects, but these days I find myself messing with a lot less server side code and much more with client side files – HTML, CSS and JavaScript. Generally I work on a single controller at a time – once that’s open it’s open that’s typically the only server code I work with regularily. 
Business logic lives in another project altogether, so other than the controller and maybe ViewModels there’s not a lot of code being accessed in the Code folder. So throwing that off the root and isolating seems like an easy win. Nesting Page specific content In a lot of my existing applications that are pure server side MVC application perhaps with some JavaScript associated with them , I tend to have page level javascript and css files. For these types of pages I actually prefer the local files stored in the same folder as the parent view. So typically I have a .css and .js files with the same name as the view in the same folder. This looks something like this: In order for this to work you have to also make a configuration change inside of the /Views/web.config file, as the Views folder is blocked with the BlockViewHandler that prohibits access to content from that folder. It’s easy to fix by changing the path from * to *.cshtml or *.vbhtml so that view retrieval is blocked:<system.webServer> <handlers> <remove name="BlockViewHandler"/> <add name="BlockViewHandler" path="*.cshtml" verb="*" preCondition="integratedMode" type="System.Web.HttpNotFoundHandler" /> </handlers> </system.webServer> With this in place, from inside of your Views you can then reference those same resources like this:<link href="~/Views/Admin/QuizPrognosisItems.css" rel="stylesheet" /> and<script src="~/Views/Admin/QuizPrognosisItems.js"></script> which works fine. JavaScript and CSS files in the Views folder deploy just like the .cshtml files do and can be referenced from this folder as well. Making this happen is not really as straightforward as it should be with just Visual Studio unfortunately, as there’s no easy way to get the file nesting from the VS IDE directly (you have to modify the .csproj file). However, Mads Kristensen has a nice Visual Studio Add-in that provides file nesting via a short cut menu option. Using this you can select each of the ‘child’ files and then nest them under a parent file. In the case above I select the .js and .css files and nest them underneath the .cshtml view. I was even toying with the idea of throwing the controller.cs files into the Views folder, but that’s maybe going a little too far :-) It would work however as Visual Studio doesn’t publish .cs files and the compiler doesn’t care where the files live. There are lots of options and if you think that would make life easier it’s another option to help group related things together. Are there any downside to this? Possibly – if you’re using automated minification/packaging tools like ASP.NET Bundling or Grunt/Gulp with Uglify, it becomes a little harder to group script and css files for minification as you may end up looking in multiple folders instead of a single folder. But – again that’s a one time configuration step that’s easily handled and much less intrusive then constantly having to search for files in your project. Client Side Folders The particular project shown above in the screen shots above is a traditional server side ASP.NET MVC application with most content rendered into server side Razor pages. There’s a fair amount of client side stuff happening on these pages as well – specifically several of these pages are self contained single page Angular applications that deal with 1 or maybe 2 separate views and the layout I’ve shown above really focuses on the server side aspect where there are Razor views with related script and css resources. 
For applications that are more client centric and have a lot more script and HTML template based content I tend to use the same layout for the server components, but the client side code can often be broken out differently. In SPA type applications I tend to follow the App folder approach where all the application pieces that make the SPA applications end up below the App folder. Here’s what that looks like for me – here this is an AngularJs project: In this case the App folder holds both the application specific js files, and the partial HTML views that get loaded into this single SPA page application. In this particular Angular SPA application that has controllers linked to particular partial views, I prefer to keep the script files that are associated with the views – Angular Js Controllers in this case – with the actual partials. Again I like the proximity of the view with the main code associated with the view, because 90% of the UI application code that gets written is handled between these two files. This approach works well, but only if controllers are fairly closely aligned with the partials. If you have many smaller sub-controllers or lots of directives where the alignment between views and code is more segmented this approach starts falling apart and you’ll probably be better off with separate folders in js folder. Following Angular conventions you’d have controllers/directives/services etc. folders. Please note that I’m not saying any of these ways are right or wrong  – this is just what has worked for me and why! Skipping Project Navigation altogether with Resharper I’ve talked a bit about project navigation in the project tree, which is a common way to navigate and which we all use at least some of the time, but if you use a tool like Resharper – which has Ctrl-T to jump to anything, you can quickly navigate with a shortcut key and autocomplete search. Here’s what Resharper’s jump to anything looks like: Resharper’s Goto Anything box lets you type and quick search over files, classes and members of the entire solution which is a very fast and powerful way to find what you’re looking for in your project, by passing the solution explorer altogether. As long as you remember to use (which I sometimes don’t) and you know what you’re looking for it’s by far the quickest way to find things in a project. It’s a shame that this sort of a simple search interface isn’t part of the native Visual Studio IDE. Work how you like to work Ultimately it all comes down to workflow and how you like to work, and what makes *you* more productive. Following pre-defined patterns is great for consistency, as long as they don’t get in the way you work. A lot of the default folder structures in Visual Studio for ASP.NET MVC were defined when things were done differently. These days we’re dealing with a lot more diverse project content than when ASP.NET MVC was originally introduced and project organization definitely is something that can get in the way if it doesn’t fit your workflow. So take a look and see what works well and what might benefit from organizing files differently. As so many things with ASP.NET, as things evolve and tend to get more complex I’ve found that I end up fighting some of the conventions. The good news is that you don’t have to follow the conventions and you have the freedom to do just about anything that works for you. 
Even though what I’ve shown here diverges from conventions, I don’t think anybody would stumble over these relatively minor changes and not immediately figure out where things live, even in larger projects. But nevertheless think long and hard before breaking those conventions – if there isn’t a good reason to break them or the changes don’t provide improved workflow then it’s not worth it. Break the rules, but only if there’s a quantifiable benefit. You may not agree with how I’ve chosen to diverge from the standard project structures in this article, but maybe it gives you some ideas of how you can mix things up to make your existing project flow a little nicer and make it easier to navigate for your environment. © Rick Strahl, West Wind Technologies, 2005-2014. Posted in ASP.NET, MVC.

    Read the article

  • How to work RavenDB Id with ASP.NET MVC Routes

    - by shiju
    By default, RavenDB's Ids are separated by "/". Let's say we have a category object; the Ids would look like "categories/1". This causes problems when working with ASP.NET MVC's route rules. For a route category/edit/id, the URI would be category/edit/categories/1. You can solve this problem in two ways. Solution 1 - Change the Id Separator: We can use a different Id separator for RavenDB Ids in order to work with ASP.NET MVC route rules. The following code specifies that Ids are separated by "-" rather than the default "/":  documentStore = new DocumentStore { Url = "http://localhost:8080/" };  documentStore.Initialize();  documentStore.Conventions.IdentityPartsSeparator = "-";  The above IdentityPartsSeparator generates Ids like "categories-1". Solution 2 - Modify the ASP.NET MVC Route: Modify the ASP.NET MVC routes in the Global.asax.cs file, as shown in the following code:  routes.MapRoute("WithParam", /* Route name */ "{controller}/{action}/{*id}" /* URL with parameters */);  We just put "*" in front of the id variable so that it works with RavenDB's default Id separator.
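    A minimal consolidated sketch of the two options, assuming the RavenDB client and MVC versions from the era of this post (namespaces may differ in later client builds); the URL and route name are just the sample values from the post:

    using Raven.Client;
    using Raven.Client.Document;
    using System.Web.Mvc;
    using System.Web.Routing;

    public static class RavenIdRouting
    {
        // Option 1: change the Id separator so generated Ids look like "categories-1".
        public static IDocumentStore CreateStore()
        {
            DocumentStore documentStore = new DocumentStore { Url = "http://localhost:8080/" };
            documentStore.Initialize();
            documentStore.Conventions.IdentityPartsSeparator = "-";
            return documentStore;
        }

        // Option 2: keep the default "/" separator and register a catch-all id route
        // (called from Application_Start in Global.asax.cs) so "categories/1" arrives intact.
        public static void RegisterRoutes(RouteCollection routes)
        {
            routes.MapRoute(
                "WithParam",                        // Route name
                "{controller}/{action}/{*id}");     // {*id} swallows the remaining segments
        }
    }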

    Read the article

  • New Skool Crosstabbing

    - by Tim Dexter
    A while back I spoke about having to go back to BIP's original crosstabbing solution to achieve a certain layout. Hok Min has provided a 'man' page for the new crosstab/pivot builder for 10.1.3.4.1 users. This will make the documentation drop but for now, get it here! The old, hand method is still available but this new approach, is more efficient and flexible. That said you may need to get into the crosstab code to tweak it where the crosstab dialog can not help. I had to do this, this week but more on that later. The following explains how the crosstab wizard builds the crosstab and what the fields inside the resulting template structure are there for. To create the crosstab a new XDO command "<?crosstab:...?>" has been created. XDO Command: <?crosstab: ctvarname; data-element; rows; columns; measures; aggregation?> Parameter Description Example Ctvarname Crosstab variable name. This is automatically generated by the Add-in. C123 data-element This is the XML data element that contains the data. "//ROW" Rows This contains a list of XML elements for row headers. The ordering information is specified within "{" and "}". The first attribute is the sort element. Leaving it blank means the sort element is the same as the row header element. The attribute "o" means order. Its value can be "a" for ascending, or "d" for descending. The attribute "t" means type. Its value can be "t" for text, and "n" for numeric. There can be more than one sort elements, example: "emp-full-name {emp-lastname,o=a,t=n}{emp-firstname,o=a,t=n}. This will sort employee by last name and first name. "Region{,o=a,t=t}, District{,o=a,t=t}" In the example, the first row header is "Region". It is sort by "Region", order is ascending, and type is text. The second row header is "District". It is sort by "District", order is ascending, and type is text. Columns This contains a list of XML elements for columns headers. The ordering information is specified within "{" and "}". The first attribute is the sort element. Leaving it blank means the sort element is the same as the column header element. The attribute "o" means order. Its value can be "a" for ascending, or "d" for descending. The attribute "t" means type. Its value can be "t" for text, and "n" for numeric. There can be more than one sort elements, example: "emp-full-name {emp-lastname,o=a,t=n}{emp-firstname,o=a,t=n}. This will sort employee by last name and first name. "ProductsBrand{,o=a,t=t}, PeriodYear{,o=a,t=t}" In the example, the first column header is "ProductsBrand". It is sort by "ProductsBrand", order is ascending, and type is text. The second column header is "PeriodYear". It is sort by "District", order is ascending, and type is text. Measures This contains a list of XML elements for measures. "Revenue, PrevRevenue" Aggregation The aggregation function name. Currently, we only support "sum". "sum" Using the Oracle BI Publisher Template Builder for Word add-in, we are able to construct the following Pivot Table: The generated XDO command for this Pivot Table is as follow: <?crosstab:c547; "//ROW";"Region{,o=a,t=t}, District{,o=a,t=t}"; "ProductsBrand{,o=a,t=t},PeriodYear{,o=a,t=t}"; "Revenue, PrevRevenue";"sum"?> Running the command on the give XML data files generates this XML file "cttree.xml". Each XPath in the "cttree.xml" is described in the following table. Element XPath Count Description C0 /cttree/C0 1 This contains elements which are related to column. C1 /cttree/C0/C1 4 The first level column "ProductsBrand". There are four distinct values. 
They are shown in the label H element. CS /cttree/C0/C1/CS 4 The column-span value. It is used to format the crosstab table. H /cttree/C0/C1/H 4 The column header label. There are four distinct values "Enterprise", "Magicolor", "McCloskey" and "Valspar". T1 /cttree/C0/C1/T1 4 The sum for measure 1, which is Revenue. T2 /cttree/C0/C1/T2 4 The sum for measure 2, which is PrevRevenue. C2 /cttree/C0/C1/C2 8 The first level column "PeriodYear", which is the second group-by key. There are two distinct values "2001" and "2002". H /cttree/C0/C1/C2/H 8 The column header label. There are two distinct values "2001" and "2002". Since it is under C1, therefore the total number of entries is 4 x 2 => 8. T1 /cttree/C0/C1/C2/T1 8 The sum for measure 1 "Revenue". T2 /cttree/C0/C1/C2/T2 8 The sum for measure 2 "PrevRevenue". M0 /cttree/M0 1 This contains elements which are related to measures. M1 /cttree/M0/M1 1 This contains summary for measure 1. H /cttree/M0/M1/H 1 The measure 1 label, which is "Revenue". T /cttree/M0/M1/T 1 The sum of measure 1 for the entire xpath from "//ROW". M2 /cttree/M0/M2 1 This contains summary for measure 2. H /cttree/M0/M2/H 1 The measure 2 label, which is "PrevRevenue". T /cttree/M0/M2/T 1 The sum of measure 2 for the entire xpath from "//ROW". R0 /cttree/R0 1 This contains elements which are related to row. R1 /cttree/R0/R1 4 The first level row "Region". There are four distinct values, they are shown in the label H element. H /cttree/R0/R1/H 4 This is row header label for "Region". There are four distinct values "CENTRAL REGION", "EASTERN REGION", "SOUTHERN REGION" and "WESTERN REGION". RS /cttree/R0/R1/RS 4 The row-span value. It is used to format the crosstab table. T1 /cttree/R0/R1/T1 4 The sum of measure 1 "Revenue" for each distinct "Region" value. T2 /cttree/R0/R1/T2 4 The sum of measure 1 "Revenue" for each distinct "Region" value. R1C1 /cttree/R0/R1/R1C1 16 This contains elements from combining R1 and C1. There are 4 distinct values for "Region", and four distinct values for "ProductsBrand". Therefore, the combination is 4 X 4 è 16. T1 /cttree/R0/R1/R1C1/T1 16 The sum of measure 1 "Revenue" for each combination of "Region" and "ProductsBrand". T2 /cttree/R0/R1/R1C1/T2 16 The sum of measure 2 "PrevRevenue" for each combination of "Region" and "ProductsBrand". R1C2 /cttree/R0/R1/R1C1/R1C2 32 This contains elements from combining R1, C1 and C2. There are 4 distinct values for "Region", and four distinct values for "ProductsBrand", and two distinct values of "PeriodYear". Therefore, the combination is 4 X 4 X 2 è 32. T1 /cttree/R0/R1/R1C1/R1C2/T1 32 The sum of measure 1 "Revenue" for each combination of "Region", "ProductsBrand" and "PeriodYear". T2 /cttree/R0/R1/R1C1/R1C2/T2 32 The sum of measure 2 "PrevRevenue" for each combination of "Region", "ProductsBrand" and "PeriodYear". R2 /cttree/R0/R1/R2 18 This contains elements from combining R1 "Region" and R2 "District". Since the list of values in R2 has dependency on R1, therefore the number of entries is not just a simple multiplication. H /cttree/R0/R1/R2/H 18 The row header label for R2 "District". R1N /cttree/R0/R1/R2/R1N 18 The R2 position number within R1. This is used to check if it is the last row, and draw table border accordingly. T1 /cttree/R0/R1/R2/T1 18 The sum of measure 1 "Revenue" for each combination "Region" and "District". T2 /cttree/R0/R1/R2/T2 18 The sum of measure 2 "PrevRevenue" for each combination of "Region" and "District". 
R2C1 /cttree/R0/R1/R2/R2C1 72 This contains elements from combining R1, R2 and C1. T1 /cttree/R0/R1/R2/R2C1/T1 72 The sum of measure 1 "Revenue" for each combination of "Region", "District" and "ProductsBrand". T2 /cttree/R0/R1/R2/R2C1/T2 72 The sum of measure 2 "PrevRevenue" for each combination of "Region", "District" and "ProductsBrand". R2C2 /cttree/R0/R1/R2/R2C1/R2C2 144 This contains elements from combining R1, R2, C1 and C2, which gives the finest level of details. M1 /cttree/R0/R1/R2/R2C1/R2C2/M1 144 The sum of measure 1 "Revenue". M2 /cttree/R0/R1/R2/R2C1/R2C2/M2 144 The sum of measure 2 "PrevRevenue". Lots to read and digest I know! Customization One new feature I discovered this week is the ability to show one column and sort by another. I had a data set that was extracting month abbreviations, we wanted to show the months across the top and some row headers to the side. As you may know XSL is not great with dates, especially recognising month names. It just wants to sort them alphabetically, so Apr comes before Jan, etc. A way around this is to generate a month number alongside the month and use that to sort. We can do that in the crosstab, sadly its not exposed in the UI yet but its doable. Go back up and take a look a the initial crosstab command. especially the Rows and Columns entries. In there you will find the sort criteria. "ProductsBrand{,o=a,t=t}, PeriodYear{,o=a,t=t}" Notice those leading commas inside the curly braces? Because there is no field preceding them it means that the crosstab should sort on the column before the brace ie PeriodYear. But you can insert another column in the data set to sort by. To get my sort working how I needed. <?crosstab:c794;"current-group()";"_Fund_Type_._Fund_Type_Display_{_Fund_Type_._Fund_Type_Sort_,o=a,t=n}";"_Fiscal_Period__Amount__._Amt_Fm_Disp_Abbr_{_Fiscal_Period__Amount__._Amt_Fiscal_Month_Sort_,o=a,t=n}";"_Execution_Facts_._Amt_";"sum"?> Excuse the horribly verbose XML tags, good ol BIEE :0) The emboldened columns are not in the crosstab but are in the data set. I just opened up the field, dropped them in and changed the type(t) value to be 'n', for number, instead of the default 'a' and my crosstab started sorting how I wanted it. If you find other tips and tricks, please share in the comments.

    Read the article

  • CodePlex Daily Summary for Wednesday, March 10, 2010

    CodePlex Daily Summary for Wednesday, March 10, 2010New ProjectsASP.NET jQuery MessageBox: The ASP.NET jQuery it's an Web User Control that uses jQuery framework to enable diferent ways to present information to the user, by using these ...CommentRemover: Utility for removing comments from source codes. Support PL/SQL, Delphi, C/C#/C++ Developed in C# Requirement Microsoft .NET Framework 3.5DotNetNuke® RadMenu: DNNRadMenu makes it easy to create skins which use telerik RadMenu functionality. Licensing permits anyone (including designers) to use the compon...DotNetNuke® Skin AlphaBrisk: A DotNetNuke Design Challenge skin package submitted to the "Web Standards" category by dnnskin.net. Eight themes using transparent png, div, CSS, ...DotNetNuke® Skin Collaborate: A DotNetNuke Design Challenge skin package submitted to the "Modern Business" category by Cuong Dang of R2Integrated. This package is 100% XHTML an...DotNetNuke® Skin TR: A DotNetNuke Design Challenge skin package submitted to the "Out of the box" category by Tracy Wittenkeller of T-Worx. This package is 100% XHTML, ...Encrypted Notes: Encrypted Notes is similar to Notes, but uses Triple DES to encrypt text and files. It has a random key generator, and can save the key. It is deve...FalconLobby: FalconLobby is an authorized AddOn for Falcon 4.0 Allied Force which was created to support the multiplayer experience. FalconLobby retrieves the l...INETA Europe WebSite: Website for INETA EuropeInsert a Favorite (Bookmark) plugin for Windows Live Writer: This Windows Live Writer plugin allows you to select a Favorite (Bookmark) and insert it into your blog entry.Javascript Lib: an javascript libraryjqGrid ASP.Net MVC Control: A fully integrated ASP.Net MVC (2.0) grid control based on the successfull jqGrid plugin for the jQuery jscript framework. Among the features of...Mosaictor: Mosaictor is a per project of mine that I started halfway my education. It is a photo mosaic creator using locally saved files and files obtained t...Notes: Notes is a simple but fast text editor. It can save in many text formats, and includes many features, such as templates (soon to be customizable), ...notmuchweb: A web frontend for notmuchPervasiveID: The PID is actively involved in Open Source ID community-building and education. PID members frequently travel the world to attend ID conferences a...Proyect Electronica: Proyecto de electronicaRapidshare Downloader 2: Rapidshare Downloader 2ROAD is Rapid Oberon Application Development: A suite of integrated tools for the develpment of Oberon-2 applicationSDNTFSIntegration: TFS Integration.SilverlightImageUpload: SilverlightImageUploadSMIL - SharePoint Map Integration Layer: .Useful SharePoint Site Workflow Utilities: This project aims to make it easy use SharePoint 2010's Site Workflows as "event handlers" for various back end systems by providing ways to start ...Windows Media Autorization: Windows Media Autorizaton PlugIn for windows media 9 WinMo Twitter Widget StarterKit: This project will allow you to quickly create Widgets that run on a Windows Mobile 6.5 phone to allow you to view Tweets designated by a hash tag. 
...XNA 3D World Studio Content Pipeline: XNA 3D World Studio Content Pipeline New ReleasesAPSales - CRM Software as a Service: APSales 0.1.2: This version add some interesting features to the project: Implements a Grid Control Custom View Query Use lastest version(2.0.2) of APEnnead.net ...ASP.NET jQuery MessageBox: ASP.NET jQuery MessageBox 0.1: Project Description The ASP.NET jQuery it's an Web User Control que uses jQuery framework to enable diferent ways to present information to the use...BTP Tools: CSBC+CUVC+HCSBC.dict files 2010-03-09: a space character should be only between <Strong Number Pattern> and <Count> like: <Text><Strong Number pattern><space character> <Count> The abov...Citrix HDX MediaStream for Flash System Verifier: HDX Flash Verifier Beta (v1.20): Reduced the number of exceptions that terminate the verification process.Code examples, utilities and misc from Lars Wilhelmsen [MVP]: LarsW.MexEdmxFixer 1.5: Added some missing sub elements from the EDMX file's Designer element; Connection and Output. Without them, some of the properties in the designer ...CommonLibrary.NET: CommonLibrary.NET 0.9.4 - Beta 2: A collection of very reusable code and components in C# 3.5 ranging from ActiveRecord, Csv, Command Line Parsing, Configuration, Holiday Calendars,...Encrypted Notes: Source Code: This has the all the code for Encrypted Notes in a Text file.Hybrid Windows Service: Release Assembly: Main Assembly. Usage: 1. Add reference to this dll in your 'Windows Service' project. 2. Replace references to ServiceBase to HybridServiceBase in...jqGrid ASP.Net MVC Control: Version 1.0.0.0: Initial Versionkdar: KDAR 0.0.16: KDAR - Kernel Debugger Anti Rootkit - KINTERRUPT object check added - load image notifier check addedlatex2mathml: 1.0 alpha: This is the first public release of Latex2MathML. Lots are left to add and fix. I encourage you to test it. If something goes wrong, send me the lo...MapWindow GIS: MapWindow 6.0 msi (March 9): This fixes a bug with saving and opening maps.Microsoft Research Biology Extension for Excel: MSR Biology Extension for Excel - Beta 2 (Update): This is an updated release for the Beta 2 Installer for the MSR Biology Extension for Excel. A couple of identified issues with the installation f...Notes: Notes 5.2: This is the latest version of Notes (5.2). It has an installer - it will create a directory 'CPascoe' in My Documents. Once you have extracted the...Notes: Source Code: This has the all the code for Notes in a Text file.RedBulb for XNA Framework: Tree Massacre XMAS Edition (Sample): Tree Massacre XMAS Edition Source Code and Creators Club Package http://bayimg.com/image/jalkiaacb.jpgRoTwee: RoTwee (7.0.2.0): Now color mode is introduced to RoTwee. Push change color button and you can change color mode of RoTwee. Recommended mode is active rainbow mode :)SharePoint Team-Mailer: SharePoint Team-Mailer v1.0: Recommended versionsPWadmin: pwAdmin v0.7_nightly: Nightly Build --------------------- + Target JRE -> 1.5.0_21 + Target ApplicationServer -> Apache Tomcat 5.5.28 + Added xml editor (only working fo...SQL Server PowerShell Extensions: 2.1 Production: Release 2.1 re-implements SQLPSX as PowersShell version 2.0 modules. SQLPSX consists of 9 modules with 133 advanced functions, 2 cmdlets and 7 scri...TMap for VS2010: TMap for VS2010 (MSF Agile) RC Release: Release of the TMap process template for VS2010 combined with the MSF Agile process template basd on the Release Candidate. 
The references to the g...TS3QueryLib.Net: TS3QueryLib.Net Version 0.19.14.0: Changelog Added property "IsClientRecording" to class "ClientListEntry" which is used in method "GetClientList" of QueryRunner class. (Change of Be...VCC: Latest build, v2.1.30309.0: Automatic drop of latest buildWinMo Twitter Widget StarterKit: Tweet Viewer Files: Files necessary to create your own Tweet ViewerWPF AutoComplete TextBox Control: Version 1.1: This release includes accumulated bug fixes since the initial release. Besides, adds experimental asynchronous support. Sample application gets...XNA 3D World Studio Content Pipeline: XNA 3DWS Content Pipeline: This is an rar file containing the latest content importer codeMost Popular ProjectsMetaSharpWBFS ManagerRawrAJAX Control ToolkitMicrosoft SQL Server Product Samples: DatabaseSilverlight ToolkitWindows Presentation Foundation (WPF)ASP.NETMicrosoft SQL Server Community & SamplesASP.NET Ajax LibraryMost Active ProjectsUmbraco CMSRawrSDS: Scientific DataSet library and toolsjQuery Library for SharePoint Web ServicesBlogEngine.NETN2 CMSFasterflect - A Fast and Simple Reflection APIFarseer Physics Enginepatterns & practices – Enterprise LibraryCaliburn: An Application Framework for WPF and Silverlight

    Read the article

  • In what URL segment do you have language? /en/admin/dashboard or /admin/en/dashboard?

    - by Shimmi
    Maybe you are thinking that this is another dumb question about language in the URL, but I hope it is not! I've read many articles on this, but none of them deal with "sections of a site" (described below). I am programming a new application platform in Laravel/PHP and I am still not 100% convinced where to put the language slug. There are many places where you can put it - some of them are better, some are worse: example.com/en/article en.example.com/article example.com/article?lang=en My personal choice is to put the language right after the domain - so the first option in the above list. But what if you are in some special section like admin or api? Where do you put the language then? example.com/en/admin/dashboard example.com/en/api/v1/user/login or example.com/admin/en/dashboard example.com/api/v1/en/user/login (on the frontend it stays the same: example.com/en/some-article) What do you prefer? What are the pros and cons? One thing is that using the language in the first segment is far easier to program than when it has some variants... P.S. Would you recommend using only en, cs, or going full power with cs-CZ, en-US? Thanks for any thoughts! J.

    Read the article

  • 30 seconds from File|New to a new CRUD Silverlight application with Teleriks new LINQ Implementation

    Last month Telerik released its new LINQ implementation, and last week we released the new Data Services Wizard for Telerik OpenAccess, which supports both traditional OpenAccess entities and the new LINQ implementation. I will walk you through the process where you connect to a database, add a new domain model, wrap it in a new WCF Data Services (Astoria) service, and add a CRUD-enabled Silverlight application. All in 30 seconds! Step 1: Build your Domain Model (20 seconds) Open Visual Studio 2010 RTM (or 2008) and add a new ASP.NET project. Right-click on the project, select Add|New Item and choose Telerik OpenAccess Domain Model from the item template list. The Visual Entity Designer wizard comes up. Select the database server you are using in the first screen (SQL Server, Oracle, SQL Azure, MySQL, etc.) and then build your database connection string. Next select the tables, views, and stored procedures you want ...

    Read the article

  • Technical : Configure WebCenter PS5 with WebCenter Content - Good Example

    - by Vikram Kurma
    Read the intro for this post to understand the problem. Solution: To integrate JDeveloper with WebCenter Content PS5 and browse through the folders, we need to enable the "folders_g" feature instead of "Framework Folders". Detailed steps below. Log in to WebCenter Content using admin credentials at https://<hostname>:<port number>/cs. Go to the "Administration" menu and click on "Admin Server". In the newly opened browser window, click on 'Component Manager' in the left navigation menu. On the right-hand side you should see "Advanced Component Manager"; click on it. You will now see a list of enabled and disabled features. Disable "Framework Folders" and enable "folders_g". Restart the server. There you go! You should have a successful connection and be able to traverse the contribution folders from JDeveloper. I am posting a few pics to help you navigate the steps.

    Read the article

  • CodePlex Daily Summary for Thursday, October 27, 2011

    CodePlex Daily Summary for Thursday, October 27, 2011Popular ReleasesAcDown????? - Anime&Comic Downloader: AcDown????? v3.6: ?? ● AcDown??????????、??????,??????????????????????,???????Acfun、Bilibili、???、???、???、Tucao.cc、SF???、?????80????,???????????、?????????。 ● AcDown???????????????????????????,???,???????????????????。 ● AcDown???????C#??,????.NET Framework 2.0??。?????"Acfun?????"。 ????32??64? Windows XP/Vista/7 ????????????? ??:????????Windows XP???,?????????.NET Framework 2.0???(x86)?.NET Framework 2.0???(x64),?????"?????????"??? ??????????????,??????????: ??"AcDown?????"????????? ?? v3.6?? ??“????”...Facebook C# SDK: 5.3.1: This is a BETA release which adds new features and bug fixes to v5.2.1. removed dependency from Code Contracts enabled Task Parallel Support in .NET 4.0+ added support for early preview for .NET 4.5 added additional method overloads for .NET 4.5 to support IProgress<T> for upload progress added new CS-WinForms-AsyncAwait.sln sample demonstrating the use of async/await, upload progress report using IProgress<T> and cancellation support Query/QueryAsync methods uses graph api instead...SQL Backup Helper: SQL Backup Helper v1.0: Version 1.0 Changes Description added to settings table Automatic LOG files truncation added to BACKUP stored procedure Only database in status ONLINE will be backed upFlowton: Release 0.2: This release is the first official release of Flowton along with Source. Printpreview/Print is enabled with minor bug fixes from 0.1 alpha release locally.MySemanticSearch Sample: MySemanticSearch Installer (CTP3): Note: This release of the MySemanticSearch Sample works with SQL Server 2012 CTP3. Installation InstructionsDownload this self-extracting archive to your computer Execute the self-extracting archive Accept the licensing agreement Choose a target directory on your computer and extract the files Open Windows PowerShell command prompt with elevated priveleges Execute the following command: Set-ExecutionPolicy Unrestricted Close the Windows PowerShell command prompt Run C:\MySema...Microsoft Ajax Minifier: Microsoft Ajax Minifier 4.33: Add JSParser.ParseExpression method to parse JavaScript expressions rather than source-elements. Add -strict switch (CodeSettings.StrictMode) to force input code to ECMA5 Strict-mode (extra error-checking, "use strict" at top). Fixed bug when MinifyCode setting was set to false but RemoveUnneededCode was left it's default value of true.Path Copy Copy: 8.0: New version that mostly adds lots of requested features: 11340 11339 11338 11337 This version also features a more elaborate Settings UI that has several tabs. I tried to add some notes to better explain the use and purpose of the various options. The Path Copy Copy documentation is also on the way, both to explain how to develop custom plugins and to explain how to pre-configure options if you're a network admin. 
Stay tuned.MVC Controls Toolkit: Mvc Controls Toolkit 1.5.0: Added: The new Client Blocks feaure of Views A new "move" js method for the TreeViews The NewHtmlCreated js event to the DataGrid Improved the ChoiceList structure that now allows also the selection list of a dropdown to be chosen with a lambda expression Fixed: Issue with partial thrust Client handling of conditional attributes Bug in TreeView node moves that sometimes were not reflected on the server An issue in the Mvc3 Nuget package that wasn't able to uninstall properly ...Free SharePoint Master Pages: Buried Alive (Halloween) Theme: Release Notes *Created for Halloween, you will find theme file, custom css file and images. *Created by Al Roome @AlstarRoome Features: Custom styling for web part Custom background *Screenshot https://s3.amazonaws.com/kkhipple/post/sharepoint-showcase-halloween.pngDevForce Application Framework: DevForce AF 2.0.3 RTW: PrerequisitesWPF 4.0 Silverlight 4.0 DevForce 2010 6.1.3.1 Download ContentsDebug and Release Assemblies API Documentation Source code License.txt Requirements.txt Release HighlightsNew: EventAggregator event forwarding New: EntityManagerInterceptor<T> to intercept EntityManger events New: IHarnessAware to allow for ViewModel setup when executed inside of the Development Harness New: Improved design time stability New: Support for add-in development New: CoroutineFns.To...NicAudio: NicAudio 2.0.5: Minor change to accept special DTS stereo modes (LtRt, AB,...)Windows Azure Toolkit for Windows Phone: Windows Azure Toolkit for Windows Phone v1.3.1: Upgraded Windows Azure projects to Windows Azure Tools for Microsoft Visual Studio 2010 1.5 – September 2011 Upgraded the tools tools to support the Windows Phone Developer Tools RTW Update SQL Azure only scenarios to use ASP.NET Universal Providers (through the System.Web.Providers v1.0.1 NuGet package) Changed Shared Access Signature service interface to support more operations Refactored Blobs API to have a similar interface and usage to that provided by the Windows Azure SDK Stor...xUnit.net Contrib: xunitcontrib-resharper 0.4.4 (dotCover): xunitcontrib release 0.4.4 (ReSharper runner) This release provides a test runner plugin for Resharper 6.0 RTM, targetting all versions of xUnit.net. (See the xUnit.net project to download xUnit.net itself.) This release addresses the following issues:Support for dotCover code coverage 4132 Note that this build work against ALL VERSIONS of xunit. The files are compiled against xunit.dll 1.8 - DO NOT REPLACE THIS FILE. Thanks to xunit's version independent runner system, this package can r...BookShop: BookShop: BookShop WP7 clientRibbon Editor for Microsoft Dynamics CRM 2011: Ribbon Editor (0.1.2122.266): Added CodePlex and PayPal links New icon Bug fix: can't connect to an IFD deployment when the discovery service url has been customizedSiteMap Editor for Microsoft Dynamics CRM 2011: SiteMap Editor (1.0.921.340): Added CodePlex and PayPal links New iconDotNet.Framework.Common: DotNet.Framework.Common 4.0: ??????????,????????????XML Explorer: XML Explorer 4.0.5: Changes in 4.0.5: Added 'Copy Attribute XPath to Address Bar' feature. Added methods for decoding node text and value from Base64 encoded strings, and copying them to the clipboard. Added 'ChildNodeDefinitions' to the options, which allows for easier navigation of parent-child and ID-IDREF relationships. Discovery happens on-demand, as nodes are expanded and child nodes are added. 
Nodes can now have 'virtual' child nodes, defined by an xpath to select an identifier (usually relative to ...CODE Framework: 4.0.11021.0: This build adds a lot of our WPF components, including our MVVC and MVC components as well as a "Metro" and "Battleship" style.WiX Toolset: WiX v3.6 Beta: First beta release of WiX v3.6. The primary focus is on Burn but there are also many small bug fixes to the core toolset. For more information see: http://robmensching.com/blog/posts/2011/10/24/WiX-v3.6-Beta-releasedNew Projects5by5by4 Tic Tac Toe Project: 5by5by4 Tic Tac Toe project for CS 3420.ASP.Net Membership provider for MongoDb: A role and membership provider for ASP.Net with MongoDb as the database. Makes use of the Norm Linq provider for MongoDb.Bazookabird TFS Dashboard: Just a simple TFS dashboard. Show build results, tfs queries of your choice and display availability of build machines. Only a feature-lacking proof of concept yet, will hopefully be useful in the end. BestWebApp: Web service and web application.Code Made Simple: A group of projects to make coding life simple.DefenseXna: ??xna??Digibib.NET: Digibib.NET ist eine Portierung der Lesesoftware Digibux für die Digitale Bibliothek der Directmedia Publishing.FaceComparerDistributed: project of face compare distributed versionFMSSolution: FMSISO Analyzer: ISO Analyzer is a tool that makes it easier to analyze ISO 8583 financial transactions and also provides a platform to create a host simulator, capable of receiving requests and sending back the responses. It’s a WinForms application and it’s developed using C#.ITU Project: ITU Projekt - Víc než len klávesové skratky (navigace mezi bežicími aplikacemi)Maintenance Province Data: Help people find your project. Write a concise, reader-focused summary. Example: <project name> makes it easier for <target user group> to <activity>. You'll no longer have to <activity>. It's developed in <programming language>.m?ng chia s? công vi?c: m?ng chia s? công vi?cMetroTask: MetroTask, An example Metro based application using C#MyHomeFinance: Helps to add and keep a record of financeMySemanticSearch Sample: MySemanticSearch is a sample content management application that demonstrates semantic search capabilities introduced in SQL Server 2012. MySemanticSearch allows you to visualize tag clouds for content stored in FileTables and find similar content using semantic search.NetBlocks: This project is an implementation of the Unit-of-Work and Repository patterns using Entity Framework 4.1 and Unity. The project also includes code that can be used to initialize an application’s run-time environment from a set of components. The project includes example components for typed configuration settings, caching and a factory component based on Unity. 
Also included is an example of how to represent a database command with a C# class that transforms the results to typed objects.Online shopping website in ASP.NET- Open Source Project: asp.net,C#,shopping cart,college project,visual studio 2010,visual web developer 2010Pak Master: Pak Explorer for the fourth coming four.Pizza Service: A mvc3 project which aims to host both a backend webservice and a frontend page for ordering pizza and managing orders, customers and provide a drivermapScadaEveryWhere: ScadaEveryWhereSilverTwitterSearch: SilverTwitterSearch is a Siliverlight library for the Twitter Search API.Snst Salix: Salix is codename for custom solution of time management and task assignment software.SolutionManagementKit: SolutionManagementKit This product will try to combine several monitoring / support tools available right now. the primary focus will be on (T) SQL / MDX parsing for SQL 2008 R2 SSAS 2008 R2 as well as cube maintance Development in C# / VB .NETTarget: Blank Orchard Module: If enabled, outgoing links open in new window (just like with the deprecated target="_blank" attribute)WPPersonality: Psychological tests for windows phoneXrmLibrary: A base library to be used for rapid integration with one or more Microsoft Dynamics CRM 2011 environments. The XrmLibrary contains thread-safe singleton implementations of both the CRM IOrganizationService (1 or many instances) and a tracer/logger that utilizes Apache's log4net.

    Read the article

  • How do I fix the error: CS1548: Cryptographic failure while signing assembly ?

    - by Paula DiTallo
    The full error in Microsoft Visual Studio on a compile looks like this: error CS1548: Cryptographic failure while signing assembly 'C:\Program Files\Microsoft SQL Server\100\Samples\Analysis Services\Programmability\AMO\AMOAdventureWorks\CS\StoredProcedures\obj\Debug\StoredProcedures.dll' This is likely due to a missing strong name key pair (*.snk) file. The easiest way to solve this problem is to create a new one. Navigate to: Microsoft Visual Studio 2010 > Visual Studio Tools > Visual Studio x64 Win64 Command Prompt (2010) [if you aren't on an x64 box, pick another command prompt option that fits]. Once the command prompt window displays, type in this statement: sn -k c:\SampleKey.snk Then copy the output *.snk file to the project directory, or to the *referenced directory. Remove the old reference to the *.snk file from the project. Add the key pair back to the project as an existing item. When you add the *.snk file back to the project, you will see that it is no longer missing. Our work is done!   *referenced directory: Pay attention to the original error message on compile. The *.snk file that is referenced may be in a directory path you aren't expecting--so you will still get the error unless you change the directory path or write the file to the directory where the program expects to find the *.snk file.
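    If your project wires the key through assembly-level attributes rather than the project's Signing tab, the reference looks roughly like the sketch below. This is a minimal sketch with an assumed relative path; current Visual Studio versions prefer the project's Signing tab (AssemblyOriginatorKeyFile) and may emit warning CS1699 when these attributes are used instead.

    // AssemblyInfo.cs - hypothetical path; adjust it to wherever you copied SampleKey.snk
    using System.Reflection;

    [assembly: AssemblyKeyFile(@"..\SampleKey.snk")]
    [assembly: AssemblyDelaySign(false)]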

    Read the article

  • How to structure a XML-based order form using ASP.NET

    - by Brendan
    First question here; please help me if I'm doing something wrong. I'm a graphic designer who's trying to teach himself ASP.NET/C#. My server-side background is PHP/WordPress and some ASP Classic, and when I do code I've hand-coded just about everything since I started learning HTML. So, as I've started to learn .NET, my code has been very manual and procedural. I'm now trying to create a really basic order form that pulls from an XML file to populate the form; there's an image, a title, a price, and selectable quantities. If I were making this form as a static HTML file, I'd have each field named manually, and so on postback I could query each field to get the values. But I'm trying to do this dynamically so that I can add/remove items from the form and not have to change the code. In terms of displaying the XML, I rolled my own by loading XmlDocument and using XmlNodeList and a bunch of foreach loops to get things displayed. Then, I learned about <asp:XmlDataSource> and <asp:Repeater>, which made displaying the XML simpler by a large margin. However, I've had a really hard time getting the data that's been submitted on postback (it was implied on SO that there are better ways to get data than nested RepeaterItems). So, what I've learned so far is that you can do things a bunch of different ways in .NET. That's why I thought it'd be good to ask for answers regarding the best way to use ASP.NET to display an XML document and dynamically capture the data that's submitted. Any help is appreciated! I'm using Notepad++ to code .NET 2.0.
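    As a starting point for the postback side, here is a minimal sketch that reads the submitted quantities straight from Request.Form instead of walking nested RepeaterItems. The XML shape (<item id="..."> with <title> and <price> children), the file path, and the "qty_" naming convention for the rendered inputs are all assumptions for illustration, not part of the original question; the code sticks to .NET 2.0 APIs.

    using System;
    using System.Web.UI;
    using System.Xml;

    public partial class OrderForm : Page
    {
        protected void Page_Load(object sender, EventArgs e)
        {
            if (!IsPostBack) return;

            // Reload the same XML the form was rendered from.
            XmlDocument doc = new XmlDocument();
            doc.Load(Server.MapPath("~/App_Data/products.xml"));    // hypothetical path

            foreach (XmlNode item in doc.SelectNodes("//item"))
            {
                string id = item.Attributes["id"].Value;
                string posted = Request.Form["qty_" + id];           // quantity input rendered as "qty_" + id

                int qty;
                if (int.TryParse(posted, out qty) && qty > 0)
                {
                    string title = item.SelectSingleNode("title").InnerText;
                    string price = item.SelectSingleNode("price").InnerText;
                    // build an order line from id, title, price and qty here
                }
            }
        }
    }

    Because the loop is driven by the XML rather than by hard-coded field names, adding or removing items from the file doesn't require touching this code.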

    Read the article

< Previous Page | 96 97 98 99 100 101 102 103 104 105 106 107  | Next Page >