Search Results



  • Rendering ASP.NET MVC Views to String

    - by Rick Strahl
    It's not uncommon in my applications that I require longish text output that does not have to be rendered into the HTTP output stream. The most common scenario I have for 'template driven' non-Web text is for emails of all sorts. Logon confirmations and verifications, email confirmations for things like orders, status updates or scheduler notifications - all of these require merged text output both within and sometimes outside of Web applications. On other occasions I also need to capture the output from certain views for logging purposes.

    Rather than creating text output in code, it's much nicer to use the rendering mechanism that ASP.NET MVC already provides by way of its ViewEngines - using Razor or WebForms views - to render output to a string. This is nice because it uses the same familiar rendering mechanism that I already use for my HTTP output, and it also solves the problem of where to store the templates for rendering this content in nothing more than perhaps a separate view folder.

    The good news is that ASP.NET MVC's rendering engine is much more modular than the full ASP.NET runtime engine, which was a real pain in the butt to coerce into rendering output to string. With MVC the rendering engine has been separated out from the core ASP.NET runtime, so it's actually a lot easier to get View output into a string.

    Getting View Output from within an MVC Application

    If you need to generate string output from an MVC view and pass some model data to it, the process to capture this output is fairly straightforward and involves only a handful of lines of code. The catch is that this particular approach requires that you have an active ControllerContext that can be passed to the view, which means the following approach is limited to access from within Controller methods. Here's a class that wraps the process and provides both instance and static methods to handle the rendering:

    /// <summary>
    /// Class that renders MVC views to a string using the
    /// standard MVC View Engine to render the view.
    ///
    /// Note: This class can only be used within MVC
    /// applications that have an active ControllerContext.
    /// </summary>
    public class ViewRenderer
    {
        /// <summary>
        /// Required Controller Context
        /// </summary>
        protected ControllerContext Context { get; set; }

        public ViewRenderer(ControllerContext controllerContext)
        {
            Context = controllerContext;
        }

        /// <summary>
        /// Renders a full MVC view to a string. Will render with the full
        /// MVC View engine including running _ViewStart and merging into _Layout.
        /// </summary>
        /// <param name="viewPath">
        /// The path to the view to render. Either in same controller, shared by
        /// name or as fully qualified ~/ path including extension.
        /// </param>
        /// <param name="model">The model to render the view with</param>
        /// <returns>String of the rendered view or null on error</returns>
        public string RenderView(string viewPath, object model)
        {
            return RenderViewToStringInternal(viewPath, model, false);
        }

        /// <summary>
        /// Renders a partial MVC view to string. Use this method to render
        /// a partial view that doesn't merge with _Layout and doesn't fire
        /// _ViewStart.
        /// </summary>
        /// <param name="viewPath">
        /// The path to the view to render. Either in same controller, shared by
        /// name or as fully qualified ~/ path including extension.
        /// </param>
        /// <param name="model">The model to pass to the view</param>
        /// <returns>String of the rendered view or null on error</returns>
        public string RenderPartialView(string viewPath, object model)
        {
            return RenderViewToStringInternal(viewPath, model, true);
        }

        public static string RenderView(string viewPath, object model,
                                        ControllerContext controllerContext)
        {
            ViewRenderer renderer = new ViewRenderer(controllerContext);
            return renderer.RenderView(viewPath, model);
        }

        public static string RenderPartialView(string viewPath, object model,
                                               ControllerContext controllerContext)
        {
            ViewRenderer renderer = new ViewRenderer(controllerContext);
            return renderer.RenderPartialView(viewPath, model);
        }

        protected string RenderViewToStringInternal(string viewPath, object model,
                                                    bool partial = false)
        {
            // first find the ViewEngine for this view
            ViewEngineResult viewEngineResult = null;
            if (partial)
                viewEngineResult = ViewEngines.Engines.FindPartialView(Context, viewPath);
            else
                viewEngineResult = ViewEngines.Engines.FindView(Context, viewPath, null);

            if (viewEngineResult == null)
                throw new FileNotFoundException(Properties.Resources.ViewCouldNotBeFound);

            // get the view and attach the model to view data
            var view = viewEngineResult.View;
            Context.Controller.ViewData.Model = model;

            string result = null;
            using (var sw = new StringWriter())
            {
                var ctx = new ViewContext(Context, view,
                                          Context.Controller.ViewData,
                                          Context.Controller.TempData,
                                          sw);
                view.Render(ctx, sw);
                result = sw.ToString();
            }

            return result;
        }
    }

    The key is the RenderViewToStringInternal method. The method first tries to find the view to render based on its path, which can either be in the current controller's view path or the shared view path using its simple name (PasswordRecovery), or alternately by its full virtual path (~/Views/Templates/PasswordRecovery.cshtml). This code should work both for Razor and WebForms views, although I've only tried it with Razor views. Note that WebForms views might actually be better for plain text, as Razor adds all sorts of white space into its output when there are code blocks in the template; the Web Forms engine provides more accurate rendering for raw text scenarios.

    Once a view engine is found, the view to render can be retrieved. Views in MVC render based on data that comes off the controller, like the ViewData which contains the model along with the actual ViewData and ViewBag. From the View and some of the Context data a ViewContext is created, which is then used to render the view. The View picks up the Model and other data from the ViewContext internally and processes the View the same way it would be processed if it were to send its output into the HTTP output stream. The difference is that we override the ViewContext's output stream with one we provide and capture into a StringWriter(). After rendering completes, the result holds the output string.

    If an error occurs, the error behavior is similar to what you see with regular MVC errors - you get a full yellow screen of death including the view error information with the offending line highlighted. It's your responsibility to handle the error - or let it bubble up to your regular Controller error filter if you have one.

    To use this simple class you only need a single line of code if you call the static methods.
    Here's an example of some Controller code that is used to send a user notification to a customer via email in one of my applications:

    [HttpPost]
    public ActionResult ContactSeller(ContactSellerViewModel model)
    {
        InitializeViewModel(model);

        var entryBus = new busEntry();
        var entry = entryBus.LoadByDisplayId(model.EntryId);

        if (string.IsNullOrEmpty(model.Email))
            entryBus.ValidationErrors.Add("Email address can't be empty.", "Email");
        if (string.IsNullOrEmpty(model.Message))
            entryBus.ValidationErrors.Add("Message can't be empty.", "Message");

        model.EntryId = entry.DisplayId;
        model.EntryTitle = entry.Title;

        if (entryBus.ValidationErrors.Count > 0)
        {
            ErrorDisplay.AddMessages(entryBus.ValidationErrors);
            ErrorDisplay.ShowError("Please correct the following:");
        }
        else
        {
            string message = ViewRenderer.RenderView("~/views/template/ContactSellerEmail.cshtml",
                                                     model, ControllerContext);
            string title = entry.Title + " (" + entry.DisplayId + ") - " +
                           App.Configuration.ApplicationName;
            AppUtils.SendEmail(title, message, model.Email, entry.User.Email, false, false);
        }

        return View(model);
    }

    Simple! The view in this case is just a plain MVC view - a very simple plain text email message (edited for brevity here) that is created and sent off:

    @model ContactSellerViewModel
    @{ Layout = null; }
    re: @Model.EntryTitle

    @Model.ListingUrl

    @Model.Message

    ** SECURITY ADVISORY - AVOID SCAMS **
    ** Avoid: wiring money, cross-border deals, work-at-home
    ** Beware: cashier checks, money orders, escrow, shipping
    ** More Info: @(App.Configuration.ApplicationBaseUrl)scams.html

    Obviously this is a very simple view (I edited out more from this page to keep it brief) - but other template views are much more complex HTML documents or long messages that are occasionally updated, and they are a perfect fit for Razor rendering. It even works with nested partial views and _layout pages.

    Partial Rendering

    Notice that I'm rendering a full View here. In the view I explicitly set Layout = null to avoid pulling in _Layout.cshtml for this view. This can also be controlled externally by calling the RenderPartialView method instead:

    string message = ViewRenderer.RenderPartialView("~/views/template/ContactSellerEmail.cshtml",
                                                    model, ControllerContext);

    With this line of code no layout page (or _ViewStart) will be loaded, so the output generated is just what's in the view. I find myself using partials most of the time when rendering templates, since the target of templates usually tends to be emails or other HTML-fragment-like output, so the RenderPartialView() method is definitely useful to me.

    Rendering without a ControllerContext

    The preceding class is great when you need template rendering from within MVC controller actions, or anywhere you have access to the request's Controller. But if you don't have a controller context handy - maybe inside a static utility function, a non-Web application, or an operation that runs asynchronously in ASP.NET - the above code can't be used. I haven't found a way to manually create a ControllerContext to provide the ViewContext() what it needs from outside of the MVC infrastructure.

    However, there are ways to accomplish this, but they are a bit more complex. It's possible to host the Razor engine on your own, which sidesteps all of the MVC framework and HTTP and just deals with the raw rendering engine. I wrote about this process in Hosting the Razor Engine in Non-Web Applications a long while back.
    It's quite a process to create a custom Razor engine and runtime, but it allows for all sorts of flexibility. There's also a RazorEngine CodePlex project that does something similar. I've been meaning to check out the latter but haven't gotten around to it since I have my own code to do this. The trick to hosting the RazorEngine is to have it behave properly inside of an ASP.NET application and properly cache content so templates aren't constantly rebuilt and reparsed.

    Anyway, in the same app as above I have one scenario where no ControllerContext is available: a background scheduler running inside of the app that fires on timed intervals. This process could be external, but because it's lightweight we decided to fire it right inside of the ASP.NET app on a separate thread. In my app the code that renders these templates does something like this:

    var model = new SearchNotificationViewModel()
    {
        Entries = entries,
        Notification = notification,
        User = user
    };

    // TODO: Need logging for errors sending
    string razorError = null;
    var result = AppUtils.RenderRazorTemplate("~/views/template/SearchNotificationTemplate.cshtml",
                                              model, razorError);

    which references a couple of helper functions that set up my RazorFolderHostContainer class:

    public static string RenderRazorTemplate(string virtualPath, object model,
                                             string errorMessage = null)
    {
        var razor = AppUtils.CreateRazorHost();

        var path = virtualPath.Replace("~/", "").Replace("~", "").Replace("/", "\\");
        var merged = razor.RenderTemplateToString(path, model);
        if (merged == null)
            errorMessage = razor.ErrorMessage;

        return merged;
    }

    /// <summary>
    /// Creates a RazorFolderHostContainer and starts it.
    /// Call .Stop() when you're done with it.
    ///
    /// This is a static instance
    /// </summary>
    /// <param name="binBasePath"></param>
    /// <param name="forceLoad"></param>
    /// <returns></returns>
    public static RazorFolderHostContainer CreateRazorHost(string binBasePath = null,
                                                           bool forceLoad = false)
    {
        if (binBasePath == null)
        {
            if (HttpContext.Current != null)
                binBasePath = HttpContext.Current.Server.MapPath("~/");
            else
                binBasePath = AppDomain.CurrentDomain.BaseDirectory;
        }

        if (_RazorHost == null || forceLoad)
        {
            if (!binBasePath.EndsWith("\\"))
                binBasePath += "\\";

            //var razor = new RazorStringHostContainer();
            var razor = new RazorFolderHostContainer();
            razor.TemplatePath = binBasePath;

            binBasePath += "bin\\";
            razor.BaseBinaryFolder = binBasePath;
            razor.UseAppDomain = false;

            razor.ReferencedAssemblies.Add(binBasePath + "ClassifiedsBusiness.dll");
            razor.ReferencedAssemblies.Add(binBasePath + "ClassifiedsWeb.dll");
            razor.ReferencedAssemblies.Add(binBasePath + "Westwind.Utilities.dll");
            razor.ReferencedAssemblies.Add(binBasePath + "Westwind.Web.dll");
            razor.ReferencedAssemblies.Add(binBasePath + "Westwind.Web.Mvc.dll");
            razor.ReferencedAssemblies.Add("System.Web.dll");

            razor.ReferencedNamespaces.Add("System.Web");
            razor.ReferencedNamespaces.Add("ClassifiedsBusiness");
            razor.ReferencedNamespaces.Add("ClassifiedsWeb");
            razor.ReferencedNamespaces.Add("Westwind.Web");
            razor.ReferencedNamespaces.Add("Westwind.Utilities");

            _RazorHost = razor;
            _RazorHost.Start();
            //_RazorHost.Engine.Configuration.CompileToMemory = false;
        }

        return _RazorHost;
    }

    The RazorFolderHostContainer essentially is a full runtime that mimics a folder structure like a typical Web app does, including caching semantics and compiling code only if code changes on disk. It maps a folder hierarchy to views using the ~/ path syntax.
    The host is then configured to add assemblies and namespaces. Unfortunately the engine is not exactly like MVC's Razor - the expression expansion and code execution are the same, but some of the support methods like sections, helpers etc. are not all there, so templates have to be a bit simpler. There are other hosts provided as well to directly execute templates from strings (using RazorStringHostContainer).

    The following is an example of an HTML email template:

    @inherits RazorHosting.RazorTemplateFolderHost<ClassifiedsWeb.SearchNotificationViewModel>
    <html>
    <head>
        <title>Search Notifications</title>
        <style>
            body { margin: 5px; font-family: Verdana, Arial; font-size: 10pt; }
            h3 { color: SteelBlue; }
            .entry-item { border-bottom: 1px solid grey; padding: 8px; margin-bottom: 5px; }
        </style>
    </head>
    <body>
        Hello @Model.User.Name,<br />
        <p>Below are your Search Results for the search phrase:</p>
        <h3>@Model.Notification.SearchPhrase</h3>
        <small>since @TimeUtils.ShortDateString(Model.Notification.LastSearch)</small>
        <hr />

    You can see that the syntax is a little different. Instead of the familiar @model header, the raw Razor @inherits tag is used to specify the template base class (which you can extend). I took a quick look through the feature set of RazorEngine on CodePlex (now GitHub, I guess) and the template implementation they use is closer to MVC's Razor, but there are other differences. In the end, don't expect exact behavior like MVC templates if you use an external Razor rendering engine.

    This is not what I would consider an ideal solution, but it works well enough for this project. My biggest concern is the overhead of hosting a second Razor engine in a Web app, and the fact that the differences in template rendering between 'real' MVC Razor views and another RazorEngine really are noticeable.

    You win some, you lose some

    It's extremely nice to see that if you have a ControllerContext handy (which probably addresses 99% of Web app scenarios) rendering a view to string using the native MVC Razor engine is pretty simple. Kudos on making that happen - it solves a problem I see in just about every Web application I work on.

    But it is a bummer that a ControllerContext is required to make this simple code work. It'd be really sweet if there was a way to render views without being so closely coupled to the ASP.NET or MVC infrastructure that requires a ControllerContext. Alternately it'd be nice to have a way for an MVC based application to create a minimal ControllerContext from scratch - maybe somebody's been down that path. I tried for a few hours to come up with a way to make that work but gave up in the soup of nested contexts (MVC/Controller/View/Http). I suspect going down this path would be similar to hosting the ASP.NET runtime requiring a WorkerRequest. Brrr….

    The sad part is that it seems to me that a View should really not require much 'context' of any kind to render output to string. Yes, there are a few things that clearly are required, like the virtual path and possibly the disk path to the root of the app, but beyond that view rendering should not require much. But, no such luck.
    For now custom Razor hosting seems to be the only way to make Razor rendering go outside of the MVC context…

    Resources:

    Full ViewRenderer.cs source code from Westwind.Web.Mvc library
    Hosting the Razor Engine for Non-Web Applications
    RazorEngine on GitHub

    © Rick Strahl, West Wind Technologies, 2005-2012
    Posted in ASP.NET  MVC

    Read the article

  • SQLAuthority News – Guest Post – Performance Counters Gathering using Powershell

    - by pinaldave
    Laerte Junior has previously helped me personally to resolve an issue with Powershell installation on my computer. He did an awesome job helping. He has sent this wonderful article regarding performance counters for the readers of this blog. I really liked it, and I expect all of you who are Powershell geeks will like it as well.

    As a good DBA, you know that our social life is restricted to a few movies over the year and, when possible, a pizza in a restaurant next to your company's place, of course. So what we have to do is create methods through which we can facilitate our daily processes, go home early, and eventually have a nice time with our family (and not sleep on the couch).

    As a consultant or fixed employee, one of our daily tasks is to monitor performance counters using Perfmon. To be honest, the IDE is getting more complicated. To deal with this, I thought of a solution using Powershell. Yes, with some lines of Powershell you can configure which counters to use, and with one more line you can already start collecting data.

    Let's look at one scenario: You are a consultant who has several clients and has just closed another project troubleshooting an SQL Server environment. You are to use Perfmon to collect data from the server, and you already have XML configuration files made with the counters that you will be using - a file for memory bottlenecks, one for CPU, etc. With one Powershell command line for each XML file, you start collecting. The output of such a collection is a TXT file that is destined for an SQL Server. So with two lines of command for each XML file, you handle the whole process of data collection.

    Creating an XML configuration file for memory counters:

    Get-PerfCounterCategory -CategoryName "Memory" | Get-PerfCounterInstance | Get-PerfCounterCounters | Save-ConfigPerfCounter -PathConfigFile "c:\temp\ConfigfileMemory.xml" -NewFile

    Creating an XML configuration file for the Buffer Manager counters Page lookups/sec, Page reads/sec, Page writes/sec and Page life expectancy:

    Get-PerfCounterCategory -CategoryName "SQLServer:Buffer Manager" | Get-PerfCounterInstance | Get-PerfCounterCounters -CounterName "Page*" | Save-ConfigPerfCounter -PathConfigFile "c:\temp\BufferManager.xml" -NewFile

    Then you start the collection:

    Set-CollectPerfCounter -DateTimeStart "05/24/2010 08:00:00" -DateTimeEnd "05/24/2010 22:00:00" -Interval 10 -PathConfigFile c:\temp\ConfigfileMemory.xml -PathOutputFile c:\temp\ConfigfileMemory.txt

    To let the Buffer Manager collection include one more counter, the Buffer cache hit ratio, just add the new counter to BufferManager.xml, omitting the -NewFile parameter:

    Get-PerfCounterCategory -CategoryName "SQLServer:Buffer Manager" | Get-PerfCounterInstance | Get-PerfCounterCounters -CounterName "Buffer cache hit ratio" | Save-ConfigPerfCounter -PathConfigFile "c:\temp\BufferManager.xml"

    And start the collection:

    Set-CollectPerfCounter -DateTimeStart "05/24/2010 08:00:00" -DateTimeEnd "05/24/2010 22:00:00" -Interval 10 -PathConfigFile c:\temp\BufferManager.xml -PathOutputFile c:\temp\BufferManager.txt

    You do not know which counters are in the category Buffer Manager? Simple!

    Get-PerfCounterCategory -CategoryName "SQLServer:Buffer Manager" | Get-PerfCounterInstance | Get-PerfCounterCounters

    The output file is ready to bulk insert into SQL Server (see the sketch below). As you can see, Powershell makes this process incredibly easy and fast. Do you want to see more examples?
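    As a rough illustration of that last step, loading the collected TXT file into SQL Server could look something like this (a minimal sketch - the staging table columns and the file's delimiter are my assumptions, so adjust them to match the actual output of Set-CollectPerfCounter):

    -- Hypothetical staging table; align the columns with the TXT output
    CREATE TABLE dbo.PerfCounterData
    (
        CollectionTime DATETIME,
        CounterPath    VARCHAR(256),
        CounterValue   FLOAT
    );

    -- Assumes a comma-delimited file with a header row
    BULK INSERT dbo.PerfCounterData
    FROM 'c:\temp\BufferManager.txt'
    WITH (FIELDTERMINATOR = ',', ROWTERMINATOR = '\n', FIRSTROW = 2);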
    Visit my blog at Shell Your Experience. You can find more about Laerte Junior over here:

    www.laertejuniordba.spaces.live.com
    www.simple-talk.com/author/laerte-junior
    www.twitter.com/laertejuniordba

    SQL Server Powershell Extension Team: http://sqlpsx.codeplex.com/

    Reference: Pinal Dave (http://blog.SQLAuthority.com)

    Filed under: SQL, SQL Add-On, SQL Authority, SQL Performance, SQL Query, SQL Server, SQL Tips and Tricks, SQL Utility, T SQL, Technology Tagged: Powershell

    Read the article

  • Custom ASP.Net MVC 2 ModelMetadataProvider for using custom view model attributes

    - by SeanMcAlinden
    There are a number of ways of implementing a pattern for using custom view model attributes; the following is similar to something I'm using at work which works pretty well. The classes I'm going to create are really simple:

    1. An abstract base attribute
    2. A custom ModelMetadata provider which will derive from the DataAnnotationsModelMetadataProvider

    Base Attribute: MetadataAttribute

    using System;
    using System.Web.Mvc;

    namespace Mvc2Templates.Attributes
    {
        /// <summary>
        /// Base class for custom MetadataAttributes.
        /// </summary>
        public abstract class MetadataAttribute : Attribute
        {
            /// <summary>
            /// Method for processing custom attribute data.
            /// </summary>
            /// <param name="modelMetaData">A ModelMetaData instance.</param>
            public abstract void Process(ModelMetadata modelMetaData);
        }
    }

    As you can see, the class simply has one method - Process. Process accepts the ModelMetadata, which allows any derived custom attribute to set properties on the model metadata and add items to its AdditionalValues collection.

    Custom Model Metadata Provider

    For a quick explanation of the ModelMetadata and how it fits into the MVC 2 framework: it is basically a set of properties that are usually set via attributes placed above properties on a view model, for example the ReadOnly and HiddenInput attributes. When EditorForModel, DisplayForModel or any of the other EditorFor/DisplayFor methods are called, the ModelMetadata information is used to determine how to display the properties. All of the information available within the model metadata is also available through ViewData.ModelMetadata.

    The following class derives from the DataAnnotationsModelMetadataProvider built into the MVC 2 framework. I've overridden the CreateMetadata method in order to process any custom attributes that may have been placed above a property in a view model.

    CustomModelMetadataProvider

    using System;
    using System.Collections.Generic;
    using System.Linq;
    using System.Web.Mvc;
    using Mvc2Templates.Attributes;

    namespace Mvc2Templates.Providers
    {
        public class CustomModelMetadataProvider : DataAnnotationsModelMetadataProvider
        {
            protected override ModelMetadata CreateMetadata(
                IEnumerable<Attribute> attributes,
                Type containerType,
                Func<object> modelAccessor,
                Type modelType,
                string propertyName)
            {
                var modelMetadata = base.CreateMetadata(
                    attributes, containerType, modelAccessor, modelType, propertyName);

                attributes.OfType<MetadataAttribute>().ToList()
                    .ForEach(x => x.Process(modelMetadata));

                return modelMetadata;
            }
        }
    }

    As you can see, once the model metadata is created through the base method, a check for any attributes deriving from our new abstract base attribute MetadataAttribute is made; the Process method is then called on any such custom attributes, with the model metadata for the property passed in.
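    To make the pattern concrete, here is a minimal sketch of what a derived attribute could look like - a hypothetical watermark attribute of my own, not one from this post:

    using System.Web.Mvc;

    namespace Mvc2Templates.Attributes
    {
        // Hypothetical example: stores watermark text in AdditionalValues
        // so a custom editor template can pick it up and render it.
        public class WatermarkAttribute : MetadataAttribute
        {
            public string Text { get; set; }

            public WatermarkAttribute(string text)
            {
                Text = text;
            }

            public override void Process(ModelMetadata modelMetaData)
            {
                modelMetaData.AdditionalValues.Add("Watermark", Text);
            }
        }
    }

    A view model property would then simply be decorated with [Watermark("Search...")], and an editor template could read ViewData.ModelMetadata.AdditionalValues["Watermark"].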
    Hooking it up

    The last thing you need to do to hook it up is set the new CustomModelMetadataProvider as the current ModelMetadataProvider; this is done within the Global.asax Application_Start method:

    protected void Application_Start()
    {
        AreaRegistration.RegisterAllAreas();

        RegisterRoutes(RouteTable.Routes);

        ModelMetadataProviders.Current = new CustomModelMetadataProvider();
    }

    In my next post, I'm going to demonstrate a cool custom attribute that turns a textbox into an ajax-driven AutoComplete text box. Hope this is useful.

    Kind Regards,

    Sean McAlinden.

    Read the article

  • Setting up Tomcat6 properly in Ubuntu 10.04

    - by aasukisuki
    We have a Tomcat6 instance running on Ubuntu 10.04 LTS. Our test box was just a Windows machine running Tomcat6. Both machines (Linux and Windows) have 1GB of RAM. Via the Tomcat configuration tool in Windows, I was able to set the min/max/permgen sizes of the JVM. Those were set to 256/512/128 respectively.

    Now on the Ubuntu box, I've tried setting the JVM options in several different places, including:

    - Adding JAVA_OPTS & CATALINA_OPTS in /etc/environment
    - Adding JAVA_OPTS in $CATALINA_HOME/bin/catalina.sh
    - Creating setenv.sh and adding JAVA_OPTS in $CATALINA_HOME/bin
    - Adding JAVA_OPTS directly to /etc/init.d/tomcat6
    - Un-commenting JAVA_OPTS and modifying it in /etc/default/tomcat6

    Nearly all of those methods did not work, except for modifying /etc/init.d/tomcat6 directly (and possibly the /etc/default/tomcat6 change, but I just did that). However, my understanding is that when you change these settings, only one JVM should be used for the entire Tomcat6 instance, and that memory is shared among the applications. On our Windows box, Tomcat6 is run as a service and appears to behave this way.

    However, when I look at htop on the Linux box, there are 20+ tomcat6 entries (I have an app that triggers internal jobs every X seconds using cron, so maybe these are threads? Or are they actual instances?), all with those memory settings. The app runs fine for a bit, but eventually ends up locking up. I'm guessing each of these apps thinks it has 512m to work with and never GCs, and then locks Tomcat up completely. What is the proper way to set all of this up?
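    For reference, the conventional place for these settings on Ubuntu is a single JAVA_OPTS line in /etc/default/tomcat6, which the init script reads at startup. A minimal sketch mirroring the Windows values above (my own example, not taken from the question):

    # /etc/default/tomcat6 - sourced by /etc/init.d/tomcat6 at startup
    JAVA_OPTS="-Djava.awt.headless=true -Xms256m -Xmx512m -XX:MaxPermSize=128m"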

    Read the article

  • Microsoft Silverlight 5.0 features for developers

    - by Jalpesh P. Vadgama
    Recently at the Silverlight 5.0 Firestarter event, ScottGu announced the roadmap for Silverlight 5.0. There will be lots of features in Silverlight 5.0, but here are a few glimpses of them.

    Improved data binding support and better support for MVVM: One of the greatest strengths of Silverlight is its data binding. Microsoft is going to enhance data binding by providing more ability to debug it. Developers will be able to debug binding expressions and other stuff in Silverlight 5.0. It's also going to provide ancestor RelativeSource binding, which will allow a property to bind to a container control. MVVM pattern support will also be enhanced.

    Performance and speed enhancements: Silverlight 5.0 will have support for 64-bit browsers, so now you can use Silverlight applications on 64-bit platforms also, with no need to take extra care for it. It will also have faster startup times and greater support for hardware acceleration, and it will provide end-to-end support for the hardware acceleration features of IE 9.

    More support for Out-of-Browser applications: With Silverlight 4.0 Microsoft announced a new feature called Out-of-Browser applications, and it amazed lots of developers because possibilities are unlimited with it. Now in Silverlight 5.0, Out-of-Browser applications will have the ability to create and manage child windows just like Windows Forms or WPF applications, so you can get the full power of a desktop application in your Out-of-Browser application.

    Testing support with Visual Studio 2010: Microsoft is going to add automated UI testing support for Silverlight 5.0 in Visual Studio 2010, so we can test Silverlight UIs much faster.

    Better support for RIA Services: RIA Services allows us to create N-tier applications with Silverlight by creating proxy classes on both client and server sides. Now it will have more features, like complex type support and custom type support for the MVVM (Model-View-ViewModel) pattern.

    WCF enhancements: There are lots of enhancements to WCF, but the key enhancement will be WS-Trust support.

    Text and printing support: Silverlight 5.0 will support vector-based graphics, multicolumn text flow and linked text containers. It will have full OpenType support and PostScript vector enhancements.

    Improved power management: This will prevent the screensaver from activating while you are watching videos in Silverlight. Silverlight 5.0 is going to add the smarts to determine when you are watching videos and when you are not.

    Better support for graphics: Silverlight 5.0 will provide in-depth support for a 3D API. 3D rendering support is more enhanced in Silverlight, and 3D graphics can be rendered easily.

    You can find more details on the following links, and also don't forget to view the Silverlight Firestarter keynote video from ScottGu.

    http://www.silverlight.net/news/events/firestarter-labs/
    http://blogs.msdn.com/b/katriend/archive/2010/12/06/silverlight-5-features-firestarter-keynote-and-sessions-resources.aspx
    http://weblogs.asp.net/scottgu/archive/2010/12/02/announcing-silverlight-5.aspx
    http://www.silverlight.net/news/events/firestarter/
    http://www.microsoft.com/silverlight/future/

    Hope this will help you. Stay tuned!!!

    Read the article

  • Changes made to cli's php.ini not taking effect

    - by Sandeepan Nath
    I have two php.ini files:

    - /etc/php.ini, which loads in case of CLI
    - /opt/lampp/etc/php.ini, which loads in case of the browser

    I am able to use PHP's Mailparse extension after adding the line extension=mailparse.so to /opt/lampp/etc/php.ini and restarting lampp. But I am not able to load the same in case of the command line - I am getting:

    PHP Fatal error: Call to undefined function mailparse_msg_create() in ...

    mailparse_msg_create() is a part of the Mailparse extension. I tried re-logging in with the user after making the change, and even restarting the system. What needs to be done so that the change takes effect?

    Update

    I checked that php -i | grep 'Configuration File' gives:

    PHP Warning: PHP Startup: Unable to load dynamic library '/usr/lib/php/modules/mailparse.so' - /usr/lib/php/modules/mailparse.so: cannot open shared object file: No such file or directory in Unknown on line 0
    Configuration File (php.ini) Path => /etc
    Loaded Configuration File => /etc/php.ini

    Update 2

    I copied mailparse.so from /opt/lampp/lib/php/extensions/no-debug-non-zts-20090626 and put it in /usr/lib/php/modules. I added extension=mailparse.so to /etc/php.ini as well. But it still showed this warning:

    PHP Warning: PHP Startup: Unable to load dynamic library ...

    As told by Lekensteyn, I ran ldd /usr/lib/php/modules/mailparse.so and got:

    ldd: warning: you do not have execution permission for '/usr/lib/php/modules/mailparse.so'

    So I gave it execute permission. Then ldd /usr/lib/php/modules/mailparse.so showed:

    linux-gate.so.1 => (0x00110000)
    libc.so.6 => /lib/libc.so.6 (0x0011d000)
    /lib/ld-linux.so.2 (0x003aa000)

    which looks normal. But now, running the php command says:

    PHP Warning: PHP Startup: Unable to load dynamic library '/usr/lib/php/modules/mailparse.so' - /usr/lib/php/modules/mailparse.so: undefined symbol: mbfl_name2no_encoding in Unknown on line 0
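    For anyone hitting the same wall, a couple of checks that narrow this down (the mbstring ordering below is my assumption, based on the fact that mailparse depends on mbstring and mbfl_name2no_encoding is an mbstring symbol):

    # confirm which ini files the CLI build actually reads
    php --ini

    # confirm where the CLI build looks for extensions
    php -i | grep extension_dir

    # in /etc/php.ini, mbstring would need to be loaded before mailparse:
    #   extension=mbstring.so
    #   extension=mailparse.so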

    Read the article

  • curl FTPS with client certificate to a vsftpd

    - by weeheavy
    I'd like to authenticate FTP clients either via username+password or a client certificate. Only FTPS is allowed. User/password works, but while testing with curl (I don't have another option) and a client certificate, I still need to pass a user. Isn't it technically possible to authenticate only by providing a certificate?

    vsftpd.conf:

    passwd_chroot_enable=YES
    chroot_local_user=YES
    ssl_enable=YES
    rsa_cert_file=usrlocal/ssl/certs/vsftpd.pem
    force_local_data_ssl=YES
    force_local_logins_ssl=YES

    Tested with

    curl -v -k -E client-crt.pem --ftp-ssl-reqd ftp://server:21/testfile

    the output is:

    * SSLv3, TLS handshake, Client hello (1):
    * SSLv3, TLS handshake, Server hello (2):
    * SSLv3, TLS handshake, CERT (11):
    * SSLv3, TLS handshake, Request CERT (13):
    * SSLv3, TLS handshake, Server finished (14):
    * SSLv3, TLS handshake, CERT (11):
    * SSLv3, TLS handshake, Client key exchange (16):
    * SSLv3, TLS handshake, CERT verify (15):
    * SSLv3, TLS change cipher, Client hello (1):
    * SSLv3, TLS handshake, Finished (20):
    * SSLv3, TLS change cipher, Client hello (1):
    * SSLv3, TLS handshake, Finished (20):
    * SSL connection using DES-CBC3-SHA
    * Server certificate:
    * SSL certificate verify result: self signed certificate (18), continuing anyway.
    > USER anonymous
    < 530 Anonymous sessions may not use encryption.
    * Access denied: 530
    * Closing connection #0
    * SSLv3, TLS alert, Client hello (1):
    curl: (67) Access denied: 530

    This is theoretically OK, as I forbid anonymous access. If I specify a user with -u username:pass it works, but it would without a certificate too. The client certificate seems to be OK; it looks like this:

    client-crt.pem:

    -----BEGIN RSA PRIVATE KEY-----
    content
    -----END RSA PRIVATE KEY-----
    -----BEGIN CERTIFICATE-----
    content
    -----END CERTIFICATE-----

    What am I missing? Thanks in advance. (The OS is Solaris 10 SPARC.)
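    For reference, vsftpd does have client-certificate directives that belong in this picture; a sketch of the relevant options (names as I recall them from the vsftpd man page - verify against your version, and note they control certificate validation rather than replacing the USER login):

    # hypothetical additions to vsftpd.conf
    ssl_request_cert=YES    # ask the client for a certificate
    require_cert=YES        # reject clients that do not present one
    validate_cert=YES       # the presented certificate must validate
    ca_certs_file=/etc/ssl/certs/client-ca.pem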

    Read the article

  • Book Review: Middleware Management with Oracle Enterprise Manager Grid Control 10g R5

    - by olaf.heimburger
    When you are familiar with the Oracle Database and Middleware stack, chances are that you have come across the Enterprise Manager. It comes in many versions for the database or the middleware and differs in its features. If you meet someone who talks about Enterprise Manager, it is possible that this person is talking about something completely different - Enterprise Manager Grid Control. Enterprise Manager Grid Control is the Oracle product for the data center that monitors all databases and middleware components as well as operating systems. Since the database part is taken for granted, it needs some additional steps to get into the world of centralized middleware management. That's what this book is for - bringing you into the world of middleware management.

    The Authors

    This book is written by Debu Panda, former Product Management Director of the Oracle Fusion Middleware Management development team, and Arvind Maheshwari, Senior Software Development Manager of the Oracle Enterprise Manager development team.

    The Book

    Oracle Enterprise Manager conceptually works for many different management areas. As a user you often think of managing databases with it. This is a wide area and deserves another book. The least known area is middleware management, and that's what the book aims for.

    The first 3 chapters cover the key features of Enterprise Manager Grid Control, Installing Enterprise Manager Grid Control, and Enterprise Manager Key Concepts and Subsystems. This is the foundation you need to understand the whole software and the following chapters. Read them in order and you are well prepared for the next 10 chapters on managing the various bits and pieces in your data center.

    The list of bits and pieces is always a surprise, no matter how often you open the book. You can manage Oracle WebLogic Server, Oracle Application Server, Oracle Forms and Reports Services, SOA Suite 10g, Oracle Service Bus 10g, Oracle Internet Directory, Oracle Virtual Directory, Oracle Access Manager, Oracle Identity Manager, Oracle Identity Federation, Oracle Coherence Cluster, non-Oracle middleware like Apache, Tomcat, JBoss, IBM WebSphere and much, much more. The chapters for these components can be read in any order you like; you only need the foundation chapters, and then continue with the parts in your data center. Once you are done with them, don't forget to read the last chapter, Best Practices for Managing Middleware Components using Enterprise Manager. Read it, understand it, and implement it in your organization. This will save you valuable time and budget.

    Recommendation

    This book is mainly written for Enterprise Manager newbies and saves you a lot of time compared to going through the standard product documentation. All chapters are considerably short and tell you exactly what you need to know to get started - nothing more and nothing less. That's the beauty of it and why I love it. Due to its brevity it will not cover everything you'd like to know, but it gets you started and interested in more insights. That is the job of the product documentation.

    The Details

    Title: Middleware Management with Oracle Enterprise Manager Grid Control 10g R5
    Authors: Debu Panda and Arvind Maheshwari
    Paperback: 310 pages
    ISBN-13: 978-1-847198-34-1

    Read the article

  • MySQL Connect 8 Days Away - Replication Sessions

    - by Mat Keep
    Following on from my post about MySQL Cluster sessions at the forthcoming Connect conference, it's now the turn of MySQL Replication - another technology at the heart of scaling and high availability for MySQL.

    Unless you've only just returned from a 6-month alien abduction, you will know that MySQL 5.6 includes the largest set of replication enhancements ever packaged into a single new release:

    - Global Transaction IDs + HA utilities for a self-healing cluster (yes, both automatic failover and manual switchover available!)
    - Crash-safe slaves and binlog
    - Binlog Group Commit and Multi-Threaded Slaves for high performance
    - Replication Event Checksums and Time-Delayed replication
    - and many more

    There are a number of sessions dedicated to learning more about these important new enhancements, delivered by the same engineers who developed them. Here is a summary.

    Saturday 29th, 13.00 - Replication Tips and Tricks, Mats Kindahl
    In this session, the developers of MySQL Replication present a bag of useful tips and tricks related to the MySQL 5.5 GA and MySQL 5.6 development milestone releases, including multisource replication, using logs for auditing, handling filtering, examining the binary log, using relay slaves, splitting the replication stream, and handling failover.

    Saturday 29th, 17.30 - Enabling the New Generation of Web and Cloud Services with MySQL 5.6 Replication, Lars Thalmann
    This session showcases the new replication features, including:
    • High performance (group commit, multithreaded slave)
    • High availability (crash-safe slaves, failover utilities)
    • Flexibility and usability (global transaction identifiers, annotated row-based replication [RBR])
    • Data integrity (event checksums)

    Saturday 29th, 19.00 - MySQL Replication Birds of a Feather
    In this session, the MySQL Replication engineers discuss all the goodies, including global transaction identifiers (GTIDs) with autofailover; multithreaded, crash-safe slaves; checksums; and more. The team discusses the design behind these enhancements and how to get started with them. You will get the opportunity to present your feedback on how these can be further enhanced, and can share any additional replication requirements you have to further scale your critical MySQL-based workloads.

    Sunday 30th, 10.15 - Hands-On Lab: MySQL Replication, Luis Soares and Sven Sandberg
    But how do you get started, how does it work, and what are the best practices and tools? During this hands-on lab, you will learn how to get started with replication, how it works, architecture, replication prerequisites, setting up a simple topology, and advanced replication configurations. The session also covers some of the new features in the MySQL 5.6 development milestone releases.

    Sunday 30th, 13.15 - Hands-On Lab: MySQL Utilities, Chuck Bell
    Would you like to learn how to more effectively manage a host of MySQL servers and manage high-availability features such as replication? This hands-on lab addresses these areas and more. Participants will get familiar with all of the MySQL utilities, using each of them with a variety of options to configure and manage MySQL servers.

    Sunday 30th, 14.45 - Eliminating Downtime with MySQL Replication, Luis Soares
    The presentation takes a deep dive into new replication features such as global transaction identifiers and crash-safe slaves. It also showcases a range of Python utilities that, combined with the Release 5.6 feature set, result in a self-healing data infrastructure.
    By the end of the session, attendees will be familiar with the new high-availability features in the whole MySQL 5.6 release and how to make use of them to protect and grow their business.

    Sunday 30th, 17.45 - Scaling for the Web and the Cloud with MySQL Replication, Luis Soares
    In a replication topology, high performance directly translates into improving read consistency from slaves and reducing the risk of data loss if a master fails. MySQL 5.6 introduces several new replication features to enhance performance. In this session, you will learn about these new features, how they work, and how you can leverage them in your applications. In addition, you will learn about some other best practices that can be used to improve performance.

    So how can you make sure you don't miss out? The good news is that registration is still open ;-) And just to whet your appetite, listen to the on-demand webinar that presents an overview of MySQL 5.6 Replication.
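    If you'd like to experiment before the show, here is a minimal my.cnf sketch for trying the headline 5.6 replication features (my own example, not from the session abstracts; slave-side options are included for completeness):

    # my.cnf (MySQL 5.6)
    [mysqld]
    server-id                 = 1
    log-bin                   = mysql-bin
    gtid-mode                 = ON          # global transaction identifiers
    enforce-gtid-consistency  = ON
    log-slave-updates         = ON          # required with gtid-mode in 5.6
    binlog-checksum           = CRC32       # replication event checksums
    master-info-repository    = TABLE       # crash-safe replication metadata
    relay-log-info-repository = TABLE
    slave-parallel-workers    = 4           # multithreaded slave (on slaves)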

    Read the article

  • Mounting a TrueCrypt volume over FTP

    - by Maxim Zaslavsky
    Is it possible to mount a TrueCrypt volume file over FTP?

    Here's how TrueCrypt works with a local file:

    1. User inputs the path to the volume file, enters the password.
    2. TrueCrypt verifies that the password is correct (probably by decrypting the very first part of the volume file?).
    3. TrueCrypt reads the directory listing from the volume file and mounts the volume. However, in this step, TrueCrypt does NOT process the whole volume file.
    4. The user browses the directory listing and opens a file. TrueCrypt reads only the part of the volume file that contains the file the user wants, and then decrypts it. Once again, TrueCrypt doesn't process the whole volume file - it only reads part of it.
    5. The user edits part of the file and saves it. TrueCrypt encrypts the change and edits the volume file.

    I'm pretty sure it should be possible to mount a volume over FTP, without undermining security and without having to transfer the whole volume file just to read one small part of the volume. Here's how I imagine it:

    1. User inputs the FTP path to the volume file, enters the FTP login information, enters the password to the volume.
    2. TrueCrypt downloads the very first part of the volume file and verifies that the password is correct.
    3. TrueCrypt downloads the part of the volume file that contains the directory listing - the data is sent encrypted over FTP and is decrypted locally.
    4. The user browses the directory listing and opens a file. TrueCrypt downloads only the part of the volume file that contains the file the user wants, and then decrypts it locally.
    5. The user edits part of the file and saves it. TrueCrypt encrypts the change and edits the volume file over FTP, transferring encrypted data only.

    Is such a feature available?

    Read the article

  • Collaborate10 &ndash; THEconference

    - by jean-pierre.dijcks
    After spending a few days in Mandalay Bay's THEhotel, I guess I now call everything THE... Seriously, they even tag their toilet paper with THEtp... I guess the brand builders in Vegas thought that once you are on to something you keep on doing it, and granted, it is a nice hotel with nice rooms.

    THEanalytics

    Most of my collab10 experience was in a room called Reef C, where the BIWA bootcamp was held: two solid days of BI, Warehousing and Analytics organized by the BIWA SIG at IOUG. I didn't get to see all sessions, but what struck me was the high interest in analytics. Marty Gubar's OLAP session was full, and he did some very nice things with the OLAP option. The cool bit was that he actually gets all the advanced calculations in OLAP to show up in OBI EE without any effort. It was nice to see that the idea from OWB where you generate an RPD is now also in AWM. I think it makes life so much simpler to generate these RPDs from your data model. Even if the end RPD needs some tweaking, it is all a lot less effort to get something going. You can see this stuff for yourself in this demo (click here). OBI EE uses just SQL to get to the calculations, and so, if you prefer APEX, you can build your application there and get the same nice calculations in an APEX application. Marty also showed the Simba MDX driver used with Excel. I guess we should call that THEcoolone... It is very slick and wonderfully useful for all of you who actually know Excel. The nice thing is that you leverage pure Excel for all operations (no plug-ins). That means no new tools to learn, no new controls - all just pure Excel.

    THEdatabasemachine

    I got some very good questions in my "what makes Exadata fast" session, and overall, the interest in Exadata is overwhelming. One of the things that I did try to do in my session is to get people to think in new patterns rather than in patterns based on Oracle 9i running on some random hardware configuration. We talked a little bit about the frequent over-indexing and how everyone has to unlearn all of that on Exadata. The main thing, however, is that everyone needs to get used to the sheer size of some of the components in a Database Machine V2. 5TB of flash cache is a lot of very fast data storage, and half a TB of memory gets quite interesting as well. So what I did there was really focus on some of the content in these earlier posts on Upward ILM and In-Memory processing. In short, I do believe these newer media point out a trend: in-memory and other fast media will get cheaper and will see more use. Some of that we do automatically by adding new functionality, but in some cases I think the end user of the system needs to start thinking about how to leverage all this new hardware. I think most people got very excited about these new capabilities and opportunities.

    THEcoolkids

    One of the cool things about the BIWA track was the hands-on track. Very cool to see big crowds for both the OLAP and OWB hands-on sessions. Also quite nice to see that the folks at RittmanMead spent so much time preparing for that session. While all of them put down cool stuff, none was more cool than seeing Data Mining on an Apple iPad... it all just looks great on an iPad! Very disappointing to see that Mark Rittman still wasn't showing OWB on his iPad ;-)

    THEend

    All in all this was a great set of sessions in the BIWA track. Lots of value to our guests (we hope) and we hope they all come again next year!

    Read the article

  • POP Forums v9 Beta 1 for ASP.NET MVC 3 posted to CodePlex!

    - by Jeff
    As promised, I posted a beta build of my forum app for ASP.NET MVC 3. Get the new goodies here: http://popforums.codeplex.com/releases/view/58228

    This is the first beta for the ASP.NET MVC 3 version of POP Forums. It is nearly feature complete, and ready for testing and feedback. For previous release notes, look here, here and here. Check out the live preview: http://preview.popforums.com/Forums

    Setup instructions are on the home page of this project. The new hotness in the beta, or what has been done since the last preview:

    - All views converted to use Razor
    - E-mail subscription/notification of new posts
    - New post indicators/mark read buttons
    - Permalinks to posts
    - Jump to newest post (from new post indicators)
    - Recent topics
    - Favorite topics
    - Moderator functions for topics (pin/close/delete, plus move and rename)
    - Search, ported from v8. Not a ton of optimization here, or new unit testing, but the old version worked pretty well
    - User posts (topics the user posted in)
    - Forgot password
    - Vanity items (signatures and avatars)
    - Hide vanity items per user preference
    - Some minor data caching where appropriate
    - A little bit of UI refinement
    - Lots-o-bug fixes
    - Lots-o-unit tests

    What's next? The plan between now and the next beta is as follows:

    - Continue working through features/tasks, and fix bugs as they're reported
    - Integrate the forum into a real, production site
    - Refine the UI
    - Refactor as much as possible... the code organization is not entirely logical in some places

    After the second beta, a release candidate will follow, with a real "final" release after that. Subsequent releases should come relatively frequently and without a lot of risk. The trick in building this thing has been that it mostly tossed the previous WebForms version, which was all full of crusties. The time table for this is a little harder to pin down, as day jobs and families will have their effect.

    Other notes

    Refactoring will be a priority. As the features of MVC have evolved, so have my desires to use it in a fashion that makes things clear and easy to follow. I don't even know if anyone will ever start mucking around in the code, but on the off chance they do, I'd like what they find to not suck. Other nice-to-haves are builds to target Windows Azure and SQL CE. A nice setup UI would be super too. I think the ASP.NET MVC world has gone long enough without a decent forum.

    The biggest challenge that I've found is making the forum something that can be dropped into any app. While it does rope its views into an area, areas are mostly just routing details. I haven't thought of a clever way yet to limit dependency injection, for example, to just the forum bits. I mean, everyone should be using Ninject, but how realistic is that? ;)

    How much time and effort should you spend on POP Forums in its current state? Change is inevitable, but at this point I'm reasonably committed to not changing the database schema. I really think it will stay as-is. All bets are off for the various interfaces throughout the app, but the data should generally resist change. It's not even that different from v8, which was one of the original goals because I didn't want to rewrite SQL or introduce a new ORM or whatever. My point is that if you wanted to build a site around this today, even though it's not entirely functional, I think it's low risk in terms of data loss. I can't vouch for whether or not you know what you're doing.

    I've been having some chats with people lately about quoting posts, and honestly there has to be something better and straightforward. That continues to be a holy grail of mine, and some day, I hope to find it.

    Enjoy... it's starting to feel more real every day!

    Read the article

  • How to Sync Any Browser’s Bookmarks With Your iPad or iPhone

    - by Chris Hoffman
    Apple makes it easy to synchronize bookmarks between the Safari browser on a Mac and the Safari browser on iOS, but you don’t have to use Safari — or a Mac — to sync your bookmarks back and forth. You can do this with any browser. Whether you’re using Chrome, Firefox, or even Internet Explorer, there’s a way to sync your browser bookmarks so you can access your same bookmarks on your iPad. Safari on a Mac Apple’s iCloud service is the officially supported way to sync data with your iPad or iPhone. It’s included on Macs, but Apple also offers similar iCloud bookmark syncing features for Windows. On a Mac, this should be enabled by default. To check whether it’s enabled, you can launch the System Preferences panel on your Mac, open the iCloud preferences panel, and ensure the Safari option is checked. If you’re using Safari on Windows — well, you shouldn’t be. Apple is no longer updating Safari for Windows. iCloud allows you to synchronize bookmarks between other browsers on your Windows system and Safari on your iOS device, so Safari isn’t necessary. Internet Explorer, Firefox, or Chrome via iCloud To get started, download Apple’s iCloud Control Panel application for Windows and install it. Launch the iCloud Control Panel and log in with the same iCloud account (Apple ID) you use on your iPad or iPhone. You’ll be able to enable Bookmark syncing with Internet Explorer, Firefox, or Chrome. Click the Options button to select the browser you want to synchronize bookmarks with. (Note that bookmarks are called “favorites” in Internet Explorer.) You’ll be able to access your synced bookmarks in the Safari browser on your iPad or iPhone, and they’ll sync back and forth automatically over the Internet. Google Chrome Sync Google Chrome also has its own built-in sync feature and Google provides an official Chrome app for iPad and iPhone. If you’re a Chrome user, you can set up Chrome Sync on your desktop version of Chrome — you should already have this enabled if you have logged into your Chrome browser. You can check if this Chrome Sync is enabled by opening Chrome’s settings screen and seeing whether you’re signed in. Click the Advanced sync settings button and ensure bookmark syncing is enabled. Once you have Chrome Sync set up, you can install the Chrome app from the App Store and sign in with the same Google account. Your bookmarks, as well as other data like your open browser tabs, will automatically sync. This can be a better solution because the Chrome browser is available for so many platforms and you gain the ability to synchronize other browser data, such as your open browser tabs, between your devices. Unfortunately, the Chrome browser is slower than Apple’s own Safari browser on iPad and iPhone because of the way Apple limits third-party browsers, so using it involves a trade-off. Manual Bookmark Sync in iTunes iTunes also allows you to sync bookmarks between your computer and your iPad or iPhone. It does this the old-fashioned way, by initiating a manual sync when your device is plugged in via USB. To access this option, connect your device to your computer, select the device in iTunes, and click the Info tab. This is the more outdated way of synchronizing your bookmarks. This feature may be useful if you want to create a one-time copy of your bookmarks from your PC, but it’s nowhere near ideal for regular syncing. You don’t have to use this feature, just as you really don’t have to use iTunes anymore. In fact, this option is unavailable if you’ve set up iCloud syncing in iTunes. 
After you set up bookmark syncing via iCloud or Chrome Sync, bookmarks will sync immediately after you save, remove, or edit them.

    Read the article

  • How to get rid of auto-generated sequence number in network's device name in Windows?

    - by Piotr Dobrogost
    Every time one plugs the same USB wireless adapter into a new USB port, Windows creates a new network device with an auto-generated sequence number, which looks like this: Wireless-N USB Network Adapter #2, Wireless-N USB Network Adapter #3, ... The device name is displayed as part of the network's information in Control Panel | Network Connections. How can I get rid of this sequence number?

    I found out that the device name which is displayed in the network's information is kept in the FriendlyName REG_SZ value under HKEY_LOCAL_MACHINE\SYSTEM\ControlSet001\Enum\USB\VID_[device specific string]\[usb port specific string]. However, when I try to modify this value I get the error:

    Cannot edit FriendlyName: Error writing the value's new contents.

    I tried to delete extra keys under HKEY_LOCAL_MACHINE\SYSTEM\ControlSet001\Enum\USB\VID_13B1&PID_0029 but got:

    Cannot delete KEY NAME: Error while deleting key.

    Trying to solve this problem I followed this answer, but when trying to change the owner with the "Replace owner on subcontainers and objects" option checked, I got this error - Registry Editor could not set owner on the currently selected key, or some of its subkeys.

    To find out which subkey is the source of the problem, I tried changing the owner of each subkey. After successfully changing the owner of the Properties subkey, I saw it has subkeys which were previously hidden. Now trying to change the owner of these subkeys looks like this:

    Any idea how to delete these keys?

    Read the article

  • Tuple in C# 4.0

    - by Jalpesh P. Vadgama
    C# 4.0 includes a new feature called Tuple. A Tuple gives us a way of grouping elements of different data types, which is useful in lots of practical situations: for example, storing the coordinates of points on a graph. In C# 4.0 we can create a Tuple with the static Create method. This Create method offers 8 overloads, listed below, so you can group a maximum of 8 data types in a single Tuple.

    Create(T1) – represents a tuple of size 1
    Create(T1,T2) – represents a tuple of size 2
    Create(T1,T2,T3) – represents a tuple of size 3
    Create(T1,T2,T3,T4) – represents a tuple of size 4
    Create(T1,T2,T3,T4,T5) – represents a tuple of size 5
    Create(T1,T2,T3,T4,T5,T6) – represents a tuple of size 6
    Create(T1,T2,T3,T4,T5,T6,T7) – represents a tuple of size 7
    Create(T1,T2,T3,T4,T5,T6,T7,T8) – represents a tuple of size 8

    Following is some example code for Tuple.

    using System;

    namespace TupleExample
    {
        class Program
        {
            static void Main(string[] args)
            {
                var tuple = System.Tuple.Create<string, string, string>("Jalpesh", "P", "Vadgama");
                Console.WriteLine(tuple); // prints (Jalpesh, P, Vadgama)

                var t = System.Tuple.Create<int, string>(1, "Jalpesh");
                Console.WriteLine(t);     // prints (1, Jalpesh)
            }
        }
    }

    The output is as expected: each tuple prints its items in parentheses. You can also access the values inside a Tuple with the ItemN properties, where N is the position of the item in the tuple. Following is an example of it.

    using System;

    namespace TupleExample
    {
        class Program
        {
            static void Main(string[] args)
            {
                var tuple = System.Tuple.Create<string, string, string>("Jalpesh", "P", "Vadgama");
                Console.WriteLine(tuple.Item1); // Jalpesh
                Console.WriteLine(tuple.Item2); // P
                Console.WriteLine(tuple.Item3); // Vadgama
            }
        }
    }

    Here you can see I have printed the items with Item1, Item2 and Item3. We can even create a nested tuple; following is the code for that.

    using System;

    namespace TupleExample
    {
        class Program
        {
            static void Main(string[] args)
            {
                var tuple = System.Tuple.Create(1, "Jalpesh", new Tuple<string, string>("P", "Vadgama"));
                Console.WriteLine(tuple.Item1); // 1
                Console.WriteLine(tuple.Item2); // Jalpesh
                Console.WriteLine(tuple.Item3); // (P, Vadgama)
            }
        }
    }

    As you can see, there are lots of things we can do with Tuple. Hope you liked it. Stay tuned for more. Till then, Happy Programming!!
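    One footnote to the above (my addition, not from the original post): the Create overloads stop at eight elements because the eighth type parameter of the underlying Tuple<T1,...,T7,TRest> class must itself be a tuple. Nesting through the Rest property is how larger tuples are built; a minimal sketch:

    using System;

    namespace TupleExample
    {
        class Program
        {
            static void Main(string[] args)
            {
                // Ten elements: TRest (the eighth type parameter) is itself a
                // tuple, reached through the Rest property.
                var big = new Tuple<int, int, int, int, int, int, int, Tuple<int, int, int>>(
                    1, 2, 3, 4, 5, 6, 7, Tuple.Create(8, 9, 10));

                Console.WriteLine(big.Item7);      // prints 7
                Console.WriteLine(big.Rest.Item2); // prints 9
            }
        }
    }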

    Read the article

  • ASP.NET MVC for the php/asp noob

    - by dotjosh
    I was talking to a friend today, who's foremost a PHP developer, about his thoughts on Umbraco, and he said "Well they're apparently working feverishly on the new version of Umbraco, which will be MVC... which I still don't know what that means, but I know you like it." I ended up giving him a ground-up explanation of ASP.NET MVC, so I'm posting this so he can link it to his friends and for anyone else who finds it useful. The whole goal was to be as simple as possible, not to be focused on proper syntax.

    Model-View-Controller (or MVC) is just a pattern that is used for handling UI interaction with your backend. In a typical web app, you can imagine the *M*odel as your database model, the *V*iew as your HTML page, and the *C*ontroller as the class in between. MVC handles your web request differently than your typical php/asp app.

    In your php/asp app, your URL maps directly to a php/asp file that contains HTML mixed with database access code and redirects.

    In an MVC app, your URL route is mapped to a method on a class (the controller). The body of this method can do some database access and THEN decide which *V*iew (html/aspx page) should be displayed, putting the controller in charge and not the view... a clear separation of concerns that provides better reusability and generally promotes cleaner code.

    Mysite.com, a quick example:
    Let's say you hit the following URL in your application: http://www.mysite.com/Product/ShowItem?Id=4

    To avoid tedious configuration, MVC uses a lot of conventions by default. For instance, the above URL would automatically make MVC search for a .NET controller class named "ProductController" and a method on it named "ShowItem", based on the pattern of the URL (a sketch of the route table that drives this appears at the end of this post). So if you name things properly, your method is automatically called when you enter the above URL. Additionally, MVC automatically maps/hydrates the "int id" parameter from your query string, matched by name.

    ProductController.cs

    public class ProductController : Controller
    {
        public ViewResult ShowItem(int id)
        {
            return View();
        }
    }

    From this point you can write the code in the body of this method to do some database access and then pass a "bag" (also known as the ViewData) of data to your chosen *V*iew (html page) to use for display. The view (html) ONLY needs to worry about displaying the flattened data it's been given in the best way it can; this allows the view to be reused throughout your application as *just* a view, and not be coupled to HOW the data for that view gets loaded.

    ProductController.cs

    public class ProductController : Controller
    {
        public ViewResult ShowItem(int id)
        {
            var database = new Database();
            var item = database.GetItem(id);
            ViewData["TheItem"] = item;
            return View();
        }
    }

    Again by convention, since the method's name is "ShowItem", MVC searches for a view named "ShowItem.aspx" by default, and passes the ViewData bag to it.

    ShowItem.aspx

    <html>
      <body>
        <% var item = (Item)ViewData["TheItem"]; %>
        <h1><%= item.FullProductName %></h1>
      </body>
    </html>

    BUT WAIT! WHY DOES MICROSOFT HAVE TO DO THINGS SO DIFFERENTLY!?
    They aren't... here are some other frameworks you may have heard of that use the same pattern in their own way:
    Ruby on Rails
    Grails
    Spring MVC
    Struts
    Django
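    As promised above, here is roughly the default route registration a new ASP.NET MVC project generates at application start (the exact file and class name vary between MVC versions; this is shown for illustration):

    using System.Web.Mvc;
    using System.Web.Routing;

    public static class Routes
    {
        public static void RegisterRoutes(RouteCollection routes)
        {
            // "{controller}/{action}/{id}" is the pattern that sends
            // /Product/ShowItem?Id=4 (or /Product/ShowItem/4) to
            // ProductController.ShowItem(int id).
            routes.MapRoute(
                "Default",                    // route name
                "{controller}/{action}/{id}", // URL pattern
                new { controller = "Home", action = "Index", id = UrlParameter.Optional });
        }
    }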

    Read the article

  • Installing messaging software displays error 1324 invalid character

    - by llykke
    Trying to install Reuters Messaging software onto a Windows 7 PC, we receive the error message: Error 1324: The folder path 'My Documents' contains an invalid character. We've tried installing the application using both the local admin account and the user account, which is an AD account (roaming?). The user account has administrative rights (i.e. it should be allowed to install applications). The user's 'My Documents' folder is located on a network drive, where only the user has access. We've tried experimenting with the HKEY_CURRENT_USER \ Software \ Microsoft \ Windows \ CurrentVersion \ Explorer \ User Shell Folders registry entries, setting them to a local path (i.e. C:\Users\Username\Documents), but this didn't resolve the error. We've also tried the following, taken from a website whose name I can't remember: under HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Control\FileSystem, select the NtfsDisable8dot3NameCreation entry and change the value to 0, then select the Win31FileSystem entry and change the value to 0. That didn't resolve the issue either. Edit: The same error box appears when attempting to install the Citrix native client necessary to run Citrix applications (*.ica extension).
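    A quick diagnostic worth adding (my suggestion, not part of the original question): dump what the profile actually reports for My Documents, since 1324-style errors usually point at a UNC or otherwise unusual path there. "Personal" is the registry value name Windows uses for My Documents:

    using System;
    using Microsoft.Win32;

    class ShellFolderCheck
    {
        static void Main()
        {
            const string path =
                @"Software\Microsoft\Windows\CurrentVersion\Explorer\User Shell Folders";

            using (RegistryKey key = Registry.CurrentUser.OpenSubKey(path))
            {
                if (key != null)
                {
                    // "Personal" holds the My Documents location (REG_EXPAND_SZ),
                    // e.g. a \\server\share path for redirected folders.
                    Console.WriteLine(key.GetValue("Personal"));
                }
            }

            // What the folder expands to for the current user.
            Console.WriteLine(Environment.GetFolderPath(Environment.SpecialFolder.MyDocuments));
        }
    }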

    Read the article

  • Introducing .NET 4.0 with Visual Studio 2010 by Alex Mackey - Book review

    - by Malisa L. Ncube
    Alex (http://simpleisbest.co.uk/) does a very good job of covering the new features of .NET 4.0 and Visual Studio 2010. His focus is on developers who have experience with previous versions of Visual Studio, more specifically Visual Studio 2008. The following are my views on his book.

    1. Scope / Coverage
    Even though the book is labeled as an introduction, it covers a broad spectrum of technologies, features and references, focused on helping a developer quickly decide what to use in the new .NET framework.

    a. Content
    The content covers, as much as possible, the new additions in .NET 4.0. He shows the new Visual Studio 2010 features and quickly shows how to extend the IDE using the Managed Extensibility Framework. Some of my favorites are the parallel debugging enhancements. The author also delves into jQuery, which Microsoft has decided to support. Some of the very interesting content covers the out-of-band releases, including ASP.NET MVC, Windows Azure, Silverlight 3 and WCF Data Services.

    b. What is not included?
    Windows Phone 7 Series (this was only talked about at MIX10, and the information may not have been available at the time of writing), Microsoft Pinpoint (Microsoft code name "Dallas"), and Windows Embedded development.

    c. Table of Contents
    Chapter 1: Introduction
    Chapter 2: Visual Studio IDE and MEF
    Chapter 3: Language and Dynamic Changes
    Chapter 4: CLR and BCL Changes
    Chapter 5: Parallelization and Threading Enhancements
    Chapter 6: Windows Workflow Foundation 4
    Chapter 7: Windows Communication Foundation
    Chapter 8: Entity Framework
    Chapter 9: WCF Data Services
    Chapter 10: ASP.NET
    Chapter 11: Microsoft AJAX Library
    Chapter 12: jQuery
    Chapter 13: ASP.NET MVC
    Chapter 14: Silverlight Introduction
    Chapter 15: WPF 4.0 and Silverlight 3.0
    Chapter 16: Windows Azure

    2. Depth
    The book avoids going into great depth on the topics presented, introducing the new concepts on the assumption of the developer's existing knowledge. Code samples appear in the book mostly as snippets and are very easy to follow. There are no downloadable examples.

    3. Complexity
    The book is written in a very simple way and is easy to follow, with no irrelevant, intimidating details. It's a book you can grab and not put down until you've finished it.

    4. References
    The author includes reference links to blogs, wikis and a lot of online resources, including the MSDN documentation, which is a convenient strategy to avoid flooding the reader with details that may not be of interest to them. (Most of the linked sites do not use URL routing, though, which is really not nice.) There are also notes from interviews between the author and the people behind the new technologies, in which they clarify specific areas and share their future views on the features they are working on.

    5. Target
    The author targets experts who want to make the transition from .NET 3.5 to 4.0. Some obvious 3.5 features have been purposely excluded from the text.

    6. Overall
    It is evident that the author has done extensive research into the breadth of what Microsoft is working on in relation to .NET and Visual Studio, and has also been watching the online community. What I would like to see in the next edition are some details on the OData protocol, Expression Blend 4, Embedded development and Windows Phone development. I should say I'm one of the beneficiaries of this book. Excellent work, Alex.

    Technorati Tags: .NET,Book-Review,Visual Studio

    Read the article

  • Windows Azure Use Case: Agility

    - by BuckWoody
    This is one in a series of posts on when and where to use a distributed architecture design in your organization's computing needs. You can find the main post here: http://blogs.msdn.com/b/buckwoody/archive/2011/01/18/windows-azure-and-sql-azure-use-cases.aspx

    Description:
    Agility in this context is defined as the ability to quickly develop and deploy an application. In theory, the speed at which your organization can develop and deploy an application on available hardware is identical to what you could deploy in a distributed environment. But in practice, this is not always the case. Having the option to use a distributed environment can make both deployment and even the development process much faster.

    Implementation:
    When an organization designs code, it essentially becomes a Software-as-a-Service (SaaS) provider to its own organization. To do that, the IT operations team becomes the Infrastructure-as-a-Service (IaaS) provider to the development teams. From there, the software is developed and deployed using an Application Lifecycle Management (ALM) process. A simplified view of an ALM process is as follows:

    Requirements
    Analysis
    Design and Development
    Implementation
    Testing
    Deployment to Production
    Maintenance

    In an on-premise environment, this often equates to the following process map:

    Requirements: Business requirements formed by Business Analysts, Developers and Data Professionals.
    Analysis: Feasibility studies, including physical plant, security, manpower and other resources. Request is placed on the work task list if approved.
    Design and Development: Code written according to the organization's chosen methodology, either on-premise or by multiple development teams on and off premise.
    Implementation: Code checked into the main branch. Code forked as needed.
    Testing: Code deployed to on-premise testing servers. If no server capacity is available, more resources are procured through standard budgeting and ordering processes. Manual and automated functional, load, security, etc. testing is performed.
    Deployment to Production: The server team is involved to select platforms and environments with available capacity. If no server capacity is available, the standard budgeting and procurement process is followed, and systems are built, configured and put under standard organizational IT control. Systems are configured for proper operating systems, patches, security and virus scans. System maintenance, HA/DR, backups and recovery plans are configured and put into place.
    Maintenance: Code changes evaluated and altered according to need.

    In a distributed computing environment like Windows Azure, the process maps a bit differently:

    Requirements: Business requirements formed by Business Analysts, Developers and Data Professionals.
    Analysis: Feasibility studies, including budget, security, manpower and other resources. Request is placed on the work task list if approved.
    Design and Development: Code written according to the organization's chosen methodology, either on-premise or by multiple development teams on and off premise.
    Implementation: Code checked into the main branch. Code forked as needed.
    Testing: Code deployed to Azure. Manual and automated functional, load, security, etc. testing is performed.
    Deployment to Production: Code deployed to Azure. Point-in-time backup and recovery plans are configured and put into place. (HA/DR and automated backups are already present in the Azure fabric.)
    Maintenance: Code changes evaluated and altered according to need.

    This means that several steps can be removed or expedited.
It also means that the business function requesting the application can be held directly responsible for the funding of that request, speeding the process further, since the IT budgeting process may not be involved in the Azure scenario. An additional benefit is the "Azure Marketplace": in effect, this becomes an app store for enterprises to select pre-defined code and data applications to mesh or bolt in to their current code, possibly saving development time.

Resources:
Whitepaper download – What is ALM? http://go.microsoft.com/?linkid=9743693
Whitepaper download – ALM and Business Strategy: http://go.microsoft.com/?linkid=9743690
LiveMeeting recording on ALM and Windows Azure (registration required, but free): http://www.microsoft.com/uk/msdn/visualstudio/contact-us.aspx?sbj=Developing with Windows Azure (ALM perspective) - 10:00-11:00 - 19th Jan 2011

    Read the article

  • How to reproject a shapefile from WGS 84 to Spherical/Web Mercator projection.

    - by samkea
    Definitions: You will need to know the meaning of the terms below. I have given a small description of each, but you can google them to learn more.

    #1: WGS-84 – World Geodetic System (1984) – is a standard reference coordinate system used for cartography, geodesy and navigation.

    #2: EPSG – European Petroleum Survey Group – was a scientific organization with ties to the European petroleum industry, consisting of specialists working in applied geodesy, surveying, and cartography related to oil exploration. EPSG::4326 is a common coordinate reference system that refers to WGS84 as (latitude, longitude) pair coordinates in degrees, with Greenwich as the central meridian. Any degree representation (e.g., decimal or DMSH: degrees minutes seconds hemisphere) may be used; which representation is used must be declared for the user by the supplier of the data. The Spherical/Web Mercator projection is referred to as EPSG::3785, which is renamed EPSG:900913 by Google for use in Google Maps. The associated CRS (Coordinate Reference System) for this is the "Popular Visualisation CRS / Mercator". This is the kind of projection used by Google Maps, Bing Maps, OSM, Virtual Earth, DeepEarth, etc. to show interactive maps over the web with their nearly precise coordinates.

    Reprojection: After reading a lot about reprojecting my coordinates on the DeepEarth project at CodePlex, I still could not do it. After some help from a colleague, I got my ball rolling. This is how I did it.

    #1: Download and open your shapefile using QGIS; it's the one with the biggest number of coordinate reference systems/projections.

    #2: Use the Plugins menu, and enable fTools and the WFS plugin.

    #3: Use the Vector menu -> Data Management Tools and choose "Define current projection". Enable "Use predefined reference system" and choose the WGS 84 coordinate system. I am personally in zone 36, so I chose WGS84-UTM Zone 36N under Projected Coordinate Systems -> Universal Transverse Mercator, and clicked OK.

    #4: Now use the Vector menu -> Data Management Tools and choose "Export to new projection". The same dialog will pop up. Now choose WGS 84 EPSG::4326 under Geodetic Coordinate Systems. My input user-defined spatial reference system looks like this:

    +proj=tmerc +lat_0=0 +lon_0=33 +k=0.9996 +x_0=500000 +y_0=200000 +ellps=WGS84 +datum=WGS84 +units=m +no_defs

    The output user-defined spatial reference system should look like this:

    +proj=longlat +ellps=WGS84 +datum=WGS84 +no_defs

    Browse to the place where the shapefile should go and give it a name (like original_reprojected). If it prompts you to add the projected layer to the TOC, accept. There, you have your reprojected map with latitude and longitude pairs of coordinates.

    #5: Now, this is not the actual Spherical/Web Mercator projection, but don't worry, this is where you have to stop. All the other custom web-mapping portals will pick up this projection and transform it into EPSG::3785 or EPSG:900913, while the coordinates remain the lat/lon pairs of the projected shapefile. If you want to test a particular known point, QGIS has a lot of room for that. Go ahead and test it.
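    For the curious, the forward transform those portals apply when they turn the EPSG::4326 lat/lon pairs into Spherical/Web Mercator metres is compact enough to sketch. This is illustrative math only (the sample point is made up), not QGIS code:

    using System;

    class WebMercatorSketch
    {
        static void Main()
        {
            // Sphere radius used by EPSG::3785 / EPSG:900913
            // (the WGS84 semi-major axis).
            const double R = 6378137.0;

            double lonDeg = 33.0, latDeg = 0.35; // hypothetical point in zone 36

            double x = R * ToRadians(lonDeg);
            double y = R * Math.Log(Math.Tan(Math.PI / 4.0 + ToRadians(latDeg) / 2.0));

            Console.WriteLine("x = {0:F2} m, y = {1:F2} m", x, y);
        }

        static double ToRadians(double degrees)
        {
            return degrees * Math.PI / 180.0;
        }
    }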

    Read the article

  • How do you host multiple public facing websites on a VPS?

    - by pedroarvy
    We host about 30 websites on typical shared hosting plans, using ASP.NET and SQL Server 2000/2005/2008. I am now wondering about hosting all of these websites on our own virtual private server. This is clearly cheaper, but comes with a lot of questions I need answers to:

    Is the risk of having to keep this VPS server up and running worth it? Until now, the hosting provider has managed the server and we have not had to worry about crashes, downtime, software patches, etc. We are not server administrators, we are programmers, so this is not really our expertise. On the other hand, it may not be hard to learn.

    When we make a website live, we log in to a domain management control panel and change the primary and secondary name servers to point to our shared web host, e.g. ns1.sharedwebhost.com and ns2.sharedwebhost.com. These name servers are going to have to change when we have a VPS. I don't understand anything about how to set this up. Is there some useful info anyone could direct me to? Or is there software we need to install to make the primary and secondary name servers work on our VPS?

    The control panel we have for shared hosting comes with DNS management like this: http://www.yart.com.au/stackoverflow/dns.png. What software would I need to install to provide this for each site we host on a VPS?

    The control panel we have for shared hosting also comes with a POP email interface that allows email addresses to be added easily by our customers. Is this something that can be easily set up on a VPS so clients can manage their own email addresses? Is there software we need to install to make this work?

    Read the article

  • Having Fun with Coding4Fun's Windows Phone 7 Controls

    - by mbcrump
    I'm a big believer in having a hobby project, as you can probably tell from the first sentence in my "personal webpage using Silverlight" article. One of my current hobby projects is to re-do my current WP7 application in the marketplace. I knew up front that I needed a "Loading" animation and a better "About" box. After starting to develop my own, I noticed a great set of WP7 controls by Coding4Fun and decided to use them in my new application. Before I go any further: they are FREE and open-source.

    It is really simple to get started. Just go to the CodePlex site and click the download button. After you have downloaded it, extract it to a folder and you will have 4 DLL files. Now create a Windows Phone 7 project and add references to the DLLs by right-clicking on the References folder and clicking "Add references".

    After adding the references, we can get started. I needed a ProgressOverlay animation, or "Loading" screen, while my RSS feed is downloading. Basically, you just need to add the following namespace to whatever page you want the control on:

    xmlns:Controls="clr-namespace:Coding4Fun.Phone.Controls;assembly=Coding4Fun.Phone.Controls"

    And then the code inside your Grid, or wherever you want the loading screen placed:

    <Controls:ProgressOverlay Name="progressOverlay">
        <Controls:ProgressOverlay.Content>
            <TextBlock>Loading</TextBlock>
        </Controls:ProgressOverlay.Content>
    </Controls:ProgressOverlay>

    Bam, you now have a great looking loading screen. Of course, inside the ProgressOverlay you may want to bind a Visibility property to turn it off after your data loads, if you are using MVVM or a similar pattern.

    Next up, I needed a nice clean "About" box that looks good but is also functional, meaning that tapping my twitter name, web address or email launches the appropriate task. Again, this is only a few lines of code:

    var p = new AboutPrompt();
    p.VersionNumber = "2.0";
    p.Show("Michael Crump", "@mbcrump", "[email protected]", @"http://michaelcrump.net");

    A nice clean "About" box with just a few lines of code! I'm all for code that I don't have to write.

    It also comes with a pretty sweet InputPrompt for grabbing info from a user:

    InputPrompt input = new InputPrompt();
    input.Completed += (s, e) => { MessageBox.Show(e.Result.ToString()); };
    input.Title = "Input Box";
    input.Message = "What does a \"Developer Large\" T-Shirt Mean? ";
    input.Show();

    I also enjoyed the PhoneHelper, which makes it very easy to get data out of the WMAppManifest file. For example, if I wanted the Version info from the WMAppManifest file, I could write one line and get it:

    PhoneHelper.GetAppAttribute("Version")

    Of course, you would want to make sure you add the following using statement:

    using Coding4Fun.Phone.Controls.Data;

    You can't have all these cool controls without a great set of converters. The included BooleanToVisibility converter will convert a Boolean to and from a Visibility value. This is excellent when using something like a CheckBox to display a TextBlock when it's checked. See the example below:

    <phone:PhoneApplicationPage.Resources>
        <Converters:BooleanToVisibilityConverter x:Key="BooleanToVisibilityConverter"/>
    </phone:PhoneApplicationPage.Resources>

    <CheckBox x:Name="checkBox"/>
    <TextBlock Text="Display Text"
               Visibility="{Binding ElementName=checkBox, Path=IsChecked,
                            Converter={StaticResource BooleanToVisibilityConverter}}"/>
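    If you're curious what such a converter looks like inside, here is a minimal sketch of a typical Silverlight/WP7 IValueConverter. This is my illustration, not the actual Coding4Fun source:

    using System;
    using System.Globalization;
    using System.Windows;
    using System.Windows.Data;

    // A typical boolean-to-visibility converter; the shipped one may differ.
    public class BooleanToVisibilityConverter : IValueConverter
    {
        public object Convert(object value, Type targetType,
                              object parameter, CultureInfo culture)
        {
            bool flag = value is bool && (bool)value;
            return flag ? Visibility.Visible : Visibility.Collapsed;
        }

        public object ConvertBack(object value, Type targetType,
                                  object parameter, CultureInfo culture)
        {
            return value is Visibility && (Visibility)value == Visibility.Visible;
        }
    }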
    That's not all the goodies included. They also provide a RoundedButton, a TimePicker and several other converters. The documentation is great, and I recommend you give them a shot if you need any of this functionality. By the way, thanks to Brian Peek for his awesome work on Coding4Fun!  Subscribe to my feed

    Read the article

  • Ask the Readers: Backing Your Files Up – Local Storage versus the Cloud

    - by Asian Angel
    Backing up important files is something that all of us should do on a regular basis, but to which we may not have given as much thought as we should. This week we would like to know if you use local storage, cloud storage, or a combination of both to back your files up.

    Photo by camknows.

    For some people, local storage media may be the most convenient and/or affordable way to back up their files. Having those files stored on media under your control can also provide a sense of security and peace of mind. But storing your files locally may also have drawbacks if something happens to your storage media. So how do you know whether the benefits outweigh the disadvantages or not? Here are some possible pros and cons that may affect your decision to use local storage to back up your files:

    Local Storage

    Pros:
    You are in control of your data.
    Your files are portable and can go with you when needed if using external or flash drives.
    Files are accessible without an internet connection.
    You can easily add more storage capacity as needed (additional drives, etc.).

    Cons:
    You need to arrange room for your storage media (if you have multiple external drives, etc.).
    Possible hardware failure.
    No access to your files if you forget to bring your storage media with you or it is too bulky to bring along.
    Theft and/or loss of your home with all its contents due to circumstances like fire.

    If you are someone who is always on the go and needs to travel as lightly as possible, cloud storage may be the perfect way for you to back up and access your files. Perhaps your laptop has a hard-drive failure or gets stolen... unhappy events to be sure, but you will still have a copy of your files available. Perhaps a company wants to make sure their records, files, and other information are backed up off site in case of a major hardware or system failure... expensive and/or frustrating to fix if it happens, but once again there is a nice backup ready to go once things are fixed. As with local storage, here are some possible pros and cons that may influence your choice of cloud storage to back up your files:

    Cloud Storage

    Pros:
    No need to carry around flash or bulky external drives.
    All of your files are accessible wherever there is an internet connection.
    No need to deal with local storage media (or its upkeep).
    Your files are still safe if your home is broken into or other unfortunate circumstances occur.

    Cons:
    Your files and data are not 100% under your control.
    Possible hardware failure or loss of files on the part of your cloud storage provider (this could include a disgruntled employee wreaking havoc).
    No access to your files if you do not have an internet connection.
    The cloud storage provider may eventually shut down due to financial hardship or other unforeseen circumstances.
    The possibility of your files and data being stolen by hackers due to a security breach on the part of your cloud storage provider.

    You may also prefer to try to cover all of the possibilities by using both local and cloud storage to back up your files. If something happens to one, you always have the other to fall back on. Need access to those files at or away from home? As long as you have access to either your storage media or an internet connection, you are good to go.

    Maybe you are getting ready to choose a backup solution but are not sure which one would work better for you. Here is your chance to ask your fellow HTG readers which one they would recommend. Got a great backup solution already in place? Then be sure to share it with your fellow readers!

    Read the article

  • Java Script – Content delivery networks (CDN) can bite you in the butt.

    - by Ryan Ternier
    As much as I love the new CDNs that Google, Microsoft and a few others have publicly released, there are some strong gotchas that could come up and bite you in the ass if you're not careful. But before we jump into those, a quick definition for those who are not 100% sure what a CDN is (besides Canadian):

    Content Delivery Network: a way of distributing your static content across various servers in different physical locations. Because this static content is stored on many servers around the world, whenever a user needs to access it, they are served by the server closest to their location.

    Already you can probably see the immediate bonuses of a system like this:

    Lower bandwidth: even small script files downloaded thousands of times will start to take a noticeable hit on your bandwidth meter.
    Fewer connections/hits to your web server, which gives better latency.
    If you manage many servers, you don't need to manually update each server with scripts.
    Better caching: a user normally downloads a copy of a script for each website they visit, and if they are redirected to many domains/sub-domains within your web site, they might download many copies of the same file. When the browser sees repeated requests for the same CDN-hosted file, it can skip the download.

    Those are just a handful of the many bonuses a CDN will give you. And for the average website, a CDN is a great choice. Check out the following CDN links for their solutions:

    Google AJAX Library: http://code.google.com/apis/ajaxlibs/
    Microsoft Ajax library: http://www.asp.net/ajaxlibrary/cdn.ashx

    The Gotcha

    There is always a catch. Here are some issues I found with using CDNs that will hopefully help you make your decision.

    HTTP / HTTPS
    If you are running a website behind SSL, make sure that when you reference your CDN data you use https:// rather than http:// (a scheme-matching helper is sketched at the end of this post). If you forget this, users will get a very nice message telling them that their secure connection is trying to access unsecure data. For a developer this is fairly simple to deal with, but general users will get a bit anxious when they see it.

    Trusted Sites
    Internet Explorer has this really nifty feature that allows users to specify which sites they trust, and with some defaults IE7 only allows trusted sites to be viewed. No problem: they set your website as trusted. But what about your CDN? If a user sets your website as trusted, but not the CDN, they will not download those static files. This has the potential to totally break your web site.

    Pedantic Network Admins
    This alone is sometimes the killer of projects. Always be careful when you are going to use a CDN for a professional project. If a network/security admin sees that you're referencing an outside source, or that a call from a website might hit an outside domain... panties will be bunched, emails will be spewed out and, well, no one wants that.
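    As promised above, one way to sidestep the HTTP/HTTPS gotcha in ASP.NET is to build the reference from the current request's scheme. The helper below is a hypothetical sketch of mine (the method name is made up; the host shown is the Microsoft AJAX CDN):

    using System.Web;

    public static class CdnHelper
    {
        // Emit a CDN script URL whose scheme matches the current request,
        // so pages served over SSL never reference plain-http script.
        public static string CdnScriptUrl(HttpRequestBase request, string relativePath)
        {
            string scheme = request.IsSecureConnection ? "https" : "http";
            return string.Format("{0}://ajax.aspnetcdn.com/{1}", scheme, relativePath);
        }
    }

    Scheme-relative references (src="//ajax.aspnetcdn.com/...") achieve the same thing declaratively, since the browser resolves them against the page's own scheme.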

    Read the article

  • SQL SERVER – Top 10 “Ease of Use” Features of expressor Studio

    - by pinaldave
    expressor Studio is a new data integration platform that is being marketed as the most easy-to-use tool of its kind. But "easy to use" can be a relative term: an expert can find a very complex system easy, while a beginner might be stumped. A recent article online discussed exactly what makes expressor Studio so easy to use; here is my view on the subject.

    Simple Installation
    There is one pop-up for one .exe file, and nothing more. You can't get much simpler than that. It also uses the familiar Windows installer design, so there should be no surprises.

    No 3rd-party software dependency
    Have you ever tried to download software, only to be slowed down by the need to download a compatible system to run the program, and another to read the user manual, and so on? expressor Studio was designed specifically to avoid this problem.

    Microsoft Office-like Ribbon Bar and Menus
    As mentioned before, everything follows the familiar Windows design, from the pop-up windows to the toolbars and menus. There should be no learning curve for using this program, or even for simply navigating around a new system.

    General Development Design Interface
    This software has been designed to be simple and straightforward. Projects can be arranged in a simple "tree" design that is fully collapsible and can easily be added to or "trimmed" with the click of a button. It is meant to be logical and easy to follow.

    Integrated Contextual Help
    This is a fancy way of saying that you can practically yell "help!" if you do get stuck on something. Solving a problem is as simple as highlighting and hitting F1 for contextual help.

    Visual Indicators and Messages
    Wouldn't it be nice to know exactly where something has gone wrong before trying to complete a project? expressor Studio has a built-in system to catch mistakes, highlight them in a bright color, flash a warning message, and even disable functions before you can continue and possibly lose hours of work.

    Property Inputs and Selectors
    Every operator has a list of requirements that need to be filled in. But don't worry; you won't have to make things up to fill in the boxes. Each one has a drop-down menu with options to choose from, but not so many as to be confusing.

    Connection Wizards
    Configuring connections can be the hardest part of a project. But not with the expressor Studio connection wizard. A familiar, Windows-style menu walks you through connections so quickly you'll forget what trouble it used to be.

    Templates
    With large, complex projects, a majority of your time is often spent simply setting up the files and inputting data. expressor Studio allows you to create one file and then save it as a template, saving you hours of boring data input.

    Extension Manager
    Let's say that you need a little more functionality or some new features in your program. A lot of software requires you to download complex plug-ins that need to be decompressed and installed. expressor Studio, however, has extended its system with an Extension Manager, which allows for quick and easy installation of the functionality you need, without the need to download and decompress.

    Reference: Pinal Dave (http://blog.sqlauthority.com)

    Filed under: PostADay, SQL, SQL Authority, SQL Query, SQL Server, SQL Tips and Tricks, SQL Utility, T SQL, Technology

    Read the article

< Previous Page | 1233 1234 1235 1236 1237 1238 1239 1240 1241 1242 1243 1244  | Next Page >