Search Results

Search found 17357 results on 695 pages for 'custom attributes'.


  • jqGrid - Problems opening in jquery tabs (on Firefox and Google Chrome)

    - by Ben Hargreaves
    I have developed a very simple MVC app to test out trirand's jqGrid for MVC. The app opens a jqgrid in a jquery tab group and everything is ok with IE. However when I use Firefox jqgrid only opens occasionaly in the first tab (but not under any other tab), and in Chrome my jqgrids dont appear to open under any tab of the group. I'm a bit of an MVC newbie (and have only been testing jqgrid out for a few days), but I know my users will want to use different browsers. Trirand have not come back with any answer so wondered if anyone else had had a similar issue. I have really just implemented jqgrid as per the controllers and model in the sample application on the Trirand site, and then combined it with a straightforward jquery tab group. My MVC Details Page is as follows; <%@ Page Title="" Language="C#" MasterPageFile="~/Views/Shared/Site.Master" Inherits="System.Web.Mvc.ViewPage<PRAMSAPP.Models.Family>" %> <%@ Import Namespace="Trirand.Web.Mvc" %> <%@ Import Namespace="PRAMSAPP.Controllers" %> <%@ Import Namespace="PRAMSAPP.Models" %> <asp:Content ID="Content1" ContentPlaceHolderID="TitleContent" runat="server"> Details </asp:Content> <asp:Content ID="Content2" ContentPlaceHolderID="MainContent" runat="server"> <meta http-equiv="Content-Type" content="text/html; charset=utf-8" /> <link rel="stylesheet" type="text/css" href="/scripts/jquery-ui-1.7.2.custom.css" /> <script type="text/javascript" src="/scripts/jquery-1.3.2.min.js"></script> <script type="text/javascript" src="/scripts/jquery-ui-1.7.2.custom.min.js"></script> <fieldset> <legend>Family</legend> <div class="display-field"><%= Html.Encode(Model.FamilyID) %></div> <div class="display-field"><%= Html.Encode(Model.FamilySurname) %></div> </fieldset> <div id="tabs"> <ul> <li> <%= Html.ActionLink("GridChildren", "GridDemo", new { controller = "Grid", id = Model.FamilyID })%> </li> <li> <%= Html.ActionLink("Children", "ShowFamiliesChildren", new { famid = Model.FamilyID, page = Page})%> </li> </ul> </div> <p> <%= Html.ActionLink("Edit", "Edit", new { id=Model.FamilyID }) %> | <%= Html.ActionLink("Back to List", "Index") %> </p> <script type="text/javascript"> $(function() { $('#tabs').tabs(); }); </script> </asp:Content> And My Controller page is as follows; <%@ Page Language="C#" Inherits="System.Web.Mvc.ViewPage<PRAMSAPP.Models.FamiliesChildrenJqGridModel>" %> <%@ Import Namespace="Trirand.Web.Mvc" %> <%@ Import Namespace="PRAMSAPP.Controllers" %> <!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Transitional//EN" "http://www.w3.org/TR/xhtml1/DTD/xhtml1-transitional.dtd"> <html xmlns="http://www.w3.org/1999/xhtml" > <head id="Head1" runat="server"> <meta http-equiv="Content-Type" content="text/html; charset=utf-8" /> <!-- The jQuery UI theme that will be used by the grid --> <link rel="stylesheet" type="text/css" media="screen" href="/Content/themes/redmond/jquery-ui-1.7.1.custom.css" /> <!-- The Css UI theme extension of jqGrid --> <link rel="stylesheet" type="text/css" media="screen" href="/Content/themes/ui.jqgrid.css" /> <!-- jQuery library is a prerequisite for jqGrid --> <script type="text/javascript" src="/Scripts/jquery-1.3.2.min.js"></script> <!-- language pack - MUST be included before the jqGrid javascript --> <script type="text/javascript" src="/Scripts/grid.locale-en.js"></script> <script type="text/javascript" src="/Scripts/jqgrid/jquery.jqGrid.min.js"></script> </head> <body> <div> <%= Html.Trirand().JQGrid(Model.FamiliesChildrenGrid, "JQGrid1") %> </div> </body>
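    One pattern worth checking (offered only as a sketch, not a confirmed fix): if the grid is created while its tab panel is still hidden or not yet loaded, it can compute a zero width and fail to render in Firefox/Chrome even though IE copes. The snippet below assumes jQuery UI 1.7 tabs and the "JQGrid1" id passed to the Trirand helper in the grid view; it resizes and reloads the grid once the AJAX-loaded tab content is actually in the page.

    ```javascript
    // Sketch: resize/reload the grid only after its tab panel has been loaded and is visible.
    // Assumes jQuery UI 1.7 tabs and a grid rendered with id "JQGrid1" (as in the grid view above).
    $(function () {
        $('#tabs').tabs({
            load: function (event, ui) {              // fires after remote tab content is inserted
                var $grid = $(ui.panel).find('#JQGrid1');
                if ($grid.length) {
                    $grid.setGridWidth($(ui.panel).width());  // recalculate size now that the panel is visible
                    $grid.trigger('reloadGrid');              // force the grid to fetch/redraw
                }
            }
        });
    });
    ```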

    Read the article

  • T-SQL Dynamic SQL and Temp Tables

    - by George
    It looks like #temp tables created using dynamic SQL via the EXECUTE string method have a different scope and can't be referenced by "fixed" SQL in the same stored procedure. However, I can reference a temp table created by one dynamic SQL statement in a subsequent dynamic SQL statement, but it seems that a stored procedure does not return a query result to the calling client unless the SQL is fixed.

    A simple two-table scenario: I have two tables, Orders and Items. Orders has a primary key of OrderId and Items has a primary key of ItemId. Items.OrderId is the foreign key identifying the parent Order; an Order can have 1 to n Items.

    I want to provide a very flexible "query builder" type interface that lets the user select which Items he wants to see. The filter criteria can be based on fields from the Items table and/or from the parent Orders table. If an Item meets the filter condition, including any condition on its parent Order, the Item should be returned by the query along with the parent Order.

    Usually, I suppose, most people would construct a join between the Items table and the parent Orders table. I would like to perform two separate queries instead: one to return all of the qualifying Items and the other to return all of the distinct parent Orders. The reason is twofold, and you may or may not agree. The first reason is that I need to query all of the columns in the parent Orders table, and if I did a single query joining Orders to Items I would be repeating the Order information multiple times. Since there are typically a large number of Items per Order, I'd like to avoid this because it would result in much more data being transferred to a fat client. Instead, as mentioned, I would like to return the two tables individually in a dataset and use them to populate custom Order and child Item client objects. (I don't know enough about LINQ or Entity Framework yet; I build my objects by hand.) The second reason I would like to return two tables instead of one is that I already have another procedure that returns all of the Items for a given OrderId along with the parent Order, and I would like to use the same two-table approach so that I can reuse the client code that populates my custom Order and Item objects from the two returned data tables.

    What I was hoping to do was this: construct a dynamic SQL string on the client which joins the Orders table to the Items table and filters each table as specified by the custom filter created in the WinForms fat-client app. The SQL built on the client would have looked something like this:

    ```sql
    TempSQL = "INSERT INTO #ItemsToQuery (OrderId, ItemId)
               SELECT Orders.OrderId, Items.ItemId
               FROM Orders, Items
               WHERE Orders.OrderId = Items.OrderId
                 AND /* some unpredictable Order filters go here */
                 AND /* some unpredictable Items filters go here */"
    ```

    Then, I would call a stored procedure:

    ```sql
    CREATE PROCEDURE GetItemsAndOrders (@tempSql AS TEXT)
    AS
        EXECUTE (@tempSql)  -- creates the #ItemsToQuery table

        SELECT * FROM Items
        WHERE Items.ItemId IN (SELECT ItemId FROM #ItemsToQuery)

        SELECT * FROM Orders
        WHERE Orders.OrderId IN (SELECT DISTINCT OrderId FROM #ItemsToQuery)
    ```

    The problem with this approach is that the #ItemsToQuery table, since it was created by dynamic SQL, is inaccessible from the two static SQL statements that follow, and if I change those static statements to dynamic SQL, no results are passed back to the fat client.
    Three workarounds come to mind, but I'm looking for a better one:

    1) The first SQL could be executed as dynamically constructed SQL from the client, and the results could then be passed as a table to a modified version of the above stored procedure. I am familiar with passing table data as XML. If I did this, the stored procedure could insert the data into a temporary table using static SQL and, because that temp table was not created by dynamic SQL, it could then be queried without issue. (I could also investigate passing the new table type parameter instead of XML.) However, I would like to avoid passing potentially large lists up to a stored procedure.

    2) I could perform all the queries from the client. The first would be something like this:

    ```sql
    SELECT Items.*  FROM Orders, Items WHERE Orders.OrderId = Items.OrderId AND (dynamic filter)
    SELECT Orders.* FROM Orders, Items WHERE Orders.OrderId = Items.OrderId AND (dynamic filter)
    ```

    This still provides me with the ability to reuse my client-side object-population code, because Orders and Items continue to be returned in two different tables.

    3) I have a feeling that I might have some options using a table data type within my stored procedure, but that is also new to me and I would appreciate a little spoon feeding on that one.

    If you even scanned this far in what I wrote, I am surprised, but if so, I would appreciate any of your thoughts on how to accomplish this best.
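    A related detail that may help (a sketch, using the same illustrative table and procedure names as above): a #temp table created in the stored procedure's own static scope is visible to dynamic SQL executed from that procedure, because the dynamic batch runs as a child of the procedure. Inverting the design this way keeps both result-returning SELECTs static while only the INSERT is dynamic.

    ```sql
    -- Sketch: create the temp table in the procedure's own (static) scope and let the
    -- dynamic SQL, which runs as a child batch, insert into it. A child batch can see
    -- temp tables created by its parent, but not the other way around.
    CREATE PROCEDURE GetItemsAndOrders (@filterSql NVARCHAR(MAX))
    AS
    BEGIN
        CREATE TABLE #ItemsToQuery (OrderId INT, ItemId INT);

        -- @filterSql is expected to contain something like:
        --   INSERT INTO #ItemsToQuery (OrderId, ItemId)
        --   SELECT o.OrderId, i.ItemId FROM Orders o JOIN Items i ON o.OrderId = i.OrderId WHERE ...
        EXEC sp_executesql @filterSql;

        SELECT * FROM Items  WHERE ItemId  IN (SELECT ItemId FROM #ItemsToQuery);
        SELECT * FROM Orders WHERE OrderId IN (SELECT DISTINCT OrderId FROM #ItemsToQuery);
    END
    ```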

    Read the article

  • When I mix JSTL 1.0 and JSTL 1.1 taglib declarations, it causes a ParseException on some of my servers

    - by sangfroid
    Hello all, When I mix JSTL 1.0 and JSTL 1.1 taglib declarations, it causes a ParseException on some of my servers, but not all of them. Here is the block of code that's giving me trouble : <%@ taglib prefix="c" uri="http://java.sun.com/jstl/core"%> <%@ taglib prefix="fn" uri="http://java.sun.com/jsp/jstl/functions"%> <c:set var="TEXTVARIABLE">|STRINGOFTEXT|</c:set> <c:set var="OTHERTEXTVARIABLE">${fn:contains(TEXTVARIABLE, '|STRINGOFTEXT|')}</c:set> And here is the exception : javax.servlet.jsp.JspException: com.caucho.jsp.JspLineParseException: /WEB-INF/jsp/online/system/modules/com.MYCOMPANY.marketing/templates/common/MY_JSP_PAGE.jsp:1: tag = 'out' / attribute = 'value': An error occurred while parsing custom action attribute "value" with value "${fn:contains(TEXTVARIABLE, '|STRINGOFTEXT|')}": org.apache.taglibs.standard.lang.jstl.parser.ParseException: EL functions are not supported. However, everything works fine if I change the URI for the core declaration to http://java.sun.com/jsp/jstl/core So here's the really weird part : for some reason, mixing 1.0 and 1.1 taglib declarations only causes an exception on two of my servers -- my staging server and my production server. It causes no problems at all on my local machine or my development server. Why is this? What could possibly be causing this difference in behavior? The three servers are extremely similar in setup and configuration. The JSP page is being served up by OpenCMS, and I'm using the Caucho's Resin webserver. I understand that you don't know how my servers or CMS are set up, but really, what I'm looking for is ideas. Any ideas at all would help -- this problem has been driving me absolutely batty. Even if you don't know what could be causing the problem, if you have any suggestions at all for how I could approach the problem, that would be extremely helpful. I just don't understand what could cause this difference in behavior between my servers. For reference, here's the full stack trace : javax.servlet.jsp.JspException: com.caucho.jsp.JspLineParseException: /WEB-INF/jsp/online/system/modules/com.MYCOMPANY.marketing/templates/common/MY_JSP_PAGE.jsp:1: tag = 'out' / attribute = 'value': An error occurred while parsing custom action attribute "value" with value "${fn:contains(TEXTVARIABLE, '|STRINGOFTEXT|')}": org.apache.taglibs.standard.lang.jstl.parser.ParseException: EL functions are not supported. 
at org.opencms.jsp.CmsJspTagInclude.includeActionWithCache(CmsJspTagInclude.java:369) at org.opencms.jsp.CmsJspTagInclude.includeTagAction(CmsJspTagInclude.java:241) at org.opencms.jsp.CmsJspTagInclude.doEndTag(CmsJspTagInclude.java:472) at _jsp._WEB_22dINF._jsp._online._system._modules.com_MYCOMPANY__marketing._templates._MAIN_0PAGE__jsp._jspService(_MAIN_0PAGE__jsp.java:153) at com.caucho.jsp.JavaPage.service(JavaPage.java:60) at com.caucho.jsp.Page.pageservice(Page.java:579) at com.caucho.server.dispatch.PageFilterChain.doFilter(PageFilterChain.java:179) at shared.filter.RemoteAddrFilterBase.doFilter(RemoteAddrFilterBase.java:57) at com.caucho.server.dispatch.FilterFilterChain.doFilter(FilterFilterChain.java:70) at com.caucho.server.webapp.DispatchFilterChain.doFilter(DispatchFilterChain.java:115) at com.caucho.server.cache.CacheFilterChain.doFilter(CacheFilterChain.java:175) at com.caucho.server.dispatch.ServletInvocation.service(ServletInvocation.java:229) at com.caucho.server.webapp.RequestDispatcherImpl.include(RequestDispatcherImpl.java:485) at com.caucho.server.webapp.RequestDispatcherImpl.include(RequestDispatcherImpl.java:350) at org.opencms.flex.CmsFlexRequestDispatcher.includeExternal(CmsFlexRequestDispatcher.java:194) at org.opencms.flex.CmsFlexRequestDispatcher.include(CmsFlexRequestDispatcher.java:169) at org.opencms.loader.CmsJspLoader.service(CmsJspLoader.java:1193) at org.opencms.flex.CmsFlexRequestDispatcher.includeInternalWithCache(CmsFlexRequestDispatcher.java:423) at org.opencms.flex.CmsFlexRequestDispatcher.include(CmsFlexRequestDispatcher.java:173) at org.opencms.loader.CmsJspLoader.dispatchJsp(CmsJspLoader.java:1227) at org.opencms.loader.CmsJspLoader.load(CmsJspLoader.java:1171) at org.opencms.loader.A_CmsXmlDocumentLoader.load(A_CmsXmlDocumentLoader.java:232) at org.opencms.loader.CmsXmlContentLoader.load(CmsXmlContentLoader.java:52) at org.opencms.loader.CmsResourceManager.loadResource(CmsResourceManager.java:964) at org.opencms.main.OpenCmsCore.showResource(OpenCmsCore.java:1498) at org.opencms.main.OpenCmsServlet.doGet(OpenCmsServlet.java:152) at javax.servlet.http.HttpServlet.service(HttpServlet.java:115) at javax.servlet.http.HttpServlet.service(HttpServlet.java:92) at com.caucho.server.dispatch.ServletFilterChain.doFilter(ServletFilterChain.java:106) at com.caucho.filters.CmsGzipFilter.doFilter(CmsGzipFilter.java:177) at com.caucho.server.dispatch.FilterFilterChain.doFilter(FilterFilterChain.java:70) at shared.filter.RemoteAddrFilterBase.doFilter(RemoteAddrFilterBase.java:57) at com.caucho.server.dispatch.FilterFilterChain.doFilter(FilterFilterChain.java:70) at com.caucho.server.webapp.DispatchFilterChain.doFilter(DispatchFilterChain.java:115) at com.caucho.server.dispatch.ServletInvocation.service(ServletInvocation.java:229) at com.caucho.server.webapp.RequestDispatcherImpl.forward(RequestDispatcherImpl.java:277) at com.caucho.server.webapp.RequestDispatcherImpl.forward(RequestDispatcherImpl.java:106) at com.caucho.server.dispatch.ForwardFilterChain.doFilter(ForwardFilterChain.java:80) at com.caucho.server.cache.CacheFilterChain.doFilter(CacheFilterChain.java:207) at com.caucho.server.webapp.WebAppFilterChain.doFilter(WebAppFilterChain.java:173) at com.caucho.server.dispatch.ServletInvocation.service(ServletInvocation.java:229) at com.caucho.server.http.HttpRequest.handleRequest(HttpRequest.java:274) at com.caucho.server.port.TcpConnection.run(TcpConnection.java:514) at com.caucho.util.ThreadPool.runTasks(ThreadPool.java:520) at 
com.caucho.util.ThreadPool.run(ThreadPool.java:442) at java.lang.Thread.run(Thread.java:595) Caused by: com.caucho.jsp.JspLineParseException: /WEB-INF/jsp/online/system/modules/com.MYCOMPANY.marketing/templates/common/MY_JSP_PAGE.jsp:1: tag = 'out' / attribute = 'value': An error occurred while parsing custom action attribute "value" with value "${fn:contains(TEXTVARIABLE, '|STRINGOFTEXT|')}": org.apache.taglibs.standard.lang.jstl.parser.ParseException: EL functions are not supported. at com.caucho.jsp.java.JspNode.error(JspNode.java:1489) at com.caucho.jsp.java.JspNode.error(JspNode.java:1480) at com.caucho.jsp.java.JavaJspGenerator.validate(JavaJspGenerator.java:466) at com.caucho.jsp.JspCompilerInstance.generate(JspCompilerInstance.java:475) at com.caucho.jsp.JspCompilerInstance.compile(JspCompilerInstance.java:373) at com.caucho.jsp.JspManager.compile(JspManager.java:233) at com.caucho.jsp.JspManager.createPage(JspManager.java:177) at com.caucho.jsp.JspManager.createPage(JspManager.java:157) at com.caucho.jsp.PageManager.getPage(PageManager.java:248) at com.caucho.jsp.PageManager.getPage(PageManager.java:166) at com.caucho.jsp.QServlet.getSubPage(QServlet.java:292) at com.caucho.jsp.QServlet.getPage(QServlet.java:210) at com.caucho.server.dispatch.PageFilterChain.compilePage(PageFilterChain.java:206) at com.caucho.server.dispatch.PageFilterChain.doFilter(PageFilterChain.java:133) at shared.filter.RemoteAddrFilterBase.doFilter(RemoteAddrFilterBase.java:57) at com.caucho.server.dispatch.FilterFilterChain.doFilter(FilterFilterChain.java:70) at com.caucho.server.webapp.DispatchFilterChain.doFilter(DispatchFilterChain.java:115) at com.caucho.server.cache.CacheFilterChain.doFilter(CacheFilterChain.java:175) at com.caucho.server.dispatch.ServletInvocation.service(ServletInvocation.java:229) at com.caucho.server.webapp.RequestDispatcherImpl.include(RequestDispatcherImpl.java:485) at com.caucho.server.webapp.RequestDispatcherImpl.include(RequestDispatcherImpl.java:350) at org.opencms.flex.CmsFlexRequestDispatcher.includeExternal(CmsFlexRequestDispatcher.java:194) at org.opencms.flex.CmsFlexRequestDispatcher.include(CmsFlexRequestDispatcher.java:169) at org.opencms.loader.CmsJspLoader.service(CmsJspLoader.java:1193) at org.opencms.flex.CmsFlexRequestDispatcher.includeInternalWithCache(CmsFlexRequestDispatcher.java:423) at org.opencms.flex.CmsFlexRequestDispatcher.include(CmsFlexRequestDispatcher.java:173) at org.opencms.jsp.CmsJspTagInclude.includeActionWithCache(CmsJspTagInclude.java:364) ... 45 more Thanks for the help!
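    For reference, the combination that avoids the mix (and that the poster notes works) is to declare both taglibs from the JSTL 1.1 namespace, so the whole page is parsed by the EL-aware tag library rather than the 1.0 one that rejects EL functions:

    ```jsp
    <%@ taglib prefix="c"  uri="http://java.sun.com/jsp/jstl/core" %>
    <%@ taglib prefix="fn" uri="http://java.sun.com/jsp/jstl/functions" %>
    ```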

    Read the article

  • How to add/remove rows using SlickGrid

    - by lkahtz
    How to write such functions and bind them to two buttons like "add row" and "remove row": The now working example code only support adding new row by editing on the blank bottom line. <!DOCTYPE HTML> <html> <head> <meta http-equiv="Content-Type" content="text/html; charset=iso-8859-1"> <title>SlickGrid example 3: Editing</title> <link rel="stylesheet" href="../slick.grid.css" type="text/css"/> <link rel="stylesheet" href="../css/smoothness/jquery-ui-1.8.16.custom.css" type="text/css"/> <link rel="stylesheet" href="examples.css" type="text/css"/> <style> .cell-title { font-weight: bold; } .cell-effort-driven { text-align: center; } </style> </head> <body> <div style="position:relative"> <div style="width:600px;"> <div id="myGrid" style="width:100%;height:500px;"></div> </div> <div class="options-panel"> <h2>Demonstrates:</h2> <ul> <li>adding basic keyboard navigation and editing</li> <li>custom editors and validators</li> <li>auto-edit settings</li> </ul> <h2>Options:</h2> <button onclick="grid.setOptions({autoEdit:true})">Auto-edit ON</button> &nbsp; <button onclick="grid.setOptions({autoEdit:false})">Auto-edit OFF</button> </div> </div> <script src="../lib/firebugx.js"></script> <script src="../lib/jquery-1.7.min.js"></script> <script src="../lib/jquery-ui-1.8.16.custom.min.js"></script> <script src="../lib/jquery.event.drag-2.0.min.js"></script> <script src="../slick.core.js"></script> <script src="../plugins/slick.cellrangedecorator.js"></script> <script src="../plugins/slick.cellrangeselector.js"></script> <script src="../plugins/slick.cellselectionmodel.js"></script> <script src="../slick.formatters.js"></script> <script src="../slick.editors.js"></script> <script src="../slick.grid.js"></script> <script> function requiredFieldValidator(value) { if (value == null || value == undefined || !value.length) { return {valid: false, msg: "This is a required field"}; } else { return {valid: true, msg: null}; } } var grid; var data = []; var columns = [ {id: "title", name: "Title", field: "title", width: 120, cssClass: "cell-title", editor: Slick.Editors.Text, validator: requiredFieldValidator}, {id: "desc", name: "Description", field: "description", width: 100, editor: Slick.Editors.LongText}, {id: "duration", name: "Duration", field: "duration", editor: Slick.Editors.Text}, {id: "%", name: "% Complete", field: "percentComplete", width: 80, resizable: false, formatter: Slick.Formatters.PercentCompleteBar, editor: Slick.Editors.PercentComplete}, {id: "start", name: "Start", field: "start", minWidth: 60, editor: Slick.Editors.Date}, {id: "finish", name: "Finish", field: "finish", minWidth: 60, editor: Slick.Editors.Date}, {id: "effort-driven", name: "Effort Driven", width: 80, minWidth: 20, maxWidth: 80, cssClass: "cell-effort-driven", field: "effortDriven", formatter: Slick.Formatters.Checkmark, editor: Slick.Editors.Checkbox} ]; var options = { editable: true, enableAddRow: true, enableCellNavigation: true, asyncEditorLoading: false, autoEdit: false }; $(function () { for (var i = 0; i < 500; i++) { var d = (data[i] = {}); d["title"] = "Task " + i; d["description"] = "This is a sample task description.\n It can be multiline"; d["duration"] = "5 days"; d["percentComplete"] = Math.round(Math.random() * 100); d["start"] = "01/01/2009"; d["finish"] = "01/05/2009"; d["effortDriven"] = (i % 5 == 0); } grid = new Slick.Grid("#myGrid", data, columns, options); grid.setSelectionModel(new Slick.CellSelectionModel()); grid.onAddNewRow.subscribe(function (e, args) { var item = args.item; 
grid.invalidateRow(data.length); data.push(item); grid.updateRowCount(); grid.render(); }); }) </script> </body> </html>
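    A minimal sketch of explicit handlers, assuming the same grid and data variables defined in the page above; the function names are illustrative, and removal here uses the active cell rather than a selection plugin.

    ```javascript
    // Sketch: explicit add/remove handlers working against the page's "grid" and "data" variables.
    function addRow() {
      data.push({
        title: "New task",
        description: "",
        duration: "1 day",
        percentComplete: 0,
        start: "01/01/2009",
        finish: "01/05/2009",
        effortDriven: false
      });
      grid.updateRowCount();                 // tell the grid the data length changed
      grid.render();
      grid.scrollRowIntoView(data.length - 1);
    }

    function removeActiveRow() {
      var active = grid.getActiveCell();     // or grid.getSelectedRows() with a selection model
      if (active) {
        data.splice(active.row, 1);          // drop the item backing the active row
        grid.invalidate();                   // re-render everything from the data array
        grid.updateRowCount();
        grid.render();
      }
    }
    ```

    These could then be wired up the same way as the existing auto-edit buttons, e.g. <button onclick="addRow()">add row</button> and <button onclick="removeActiveRow()">remove row</button>.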

    Read the article

  • Usercontrol databinding within a databound datagridview

    - by user328259
    Good day. I'm developing a Windows application and working with Windows Forms; .Net 2.0. I have an issue databinding a generic List of Car Rental Companies to a DataGridView when that list contains (one of its properties) anoother generic List of Car Makes. I also have a UserControl that I need to bind to this [inner] generic list ... Class CarRentalCompany contains: string Name, string Location, List CarMakes Class CarMake contains: string Name, bool isFord, bool isChevy, bool isOther The UserControl has a label for CarMake.Name and 3 checkboxes for each of the booleans of the class. HOw do I make this user control bindable for the class? In my form, I have a DataGridView binded to the CarRentalCompany object. The list CarMakes could be 0 or more items and I can add these as necessary. How do I establish the binding of CarRentalCompanies properly so CarMakes will bind accordingly?? For example, I have: List CarRentalCompanies = new List(); CarRentalCompany company1 = new CarRentalCompany(); company1.Name = "Acme Rentals"; company1.Location = "New York, NY"; company1.CarMakes = new List<CarMake>(); CarMake car1 = new CarMake(); car1.Name = "The Yellow Car"; car1.isFord = true; car1.isChevy = false; car1.isOther = false; company1.CarMakes.Add(car1); CarMake car2 = new CarMake(); car2.Name = "The Blue Car"; car2.isFord = false; car2.isChevy = true; car2.isOther = false; company1.CarMakes.Add(car2); CarMake car3 = new CarMake(); car3.Name = "The Purple Car"; car3.isFord = false; car3.isChevy = false; car3.isOther = true; company1.CarMakes.Add(car3); CarRentalCompanies.Add(company1); CarRentalCompany company2 = new CarRentalCompany(); company1.Name = "Z-Auto Rentals"; company1.Location = "Phoenix, AZ"; company1.CarMakes = new List<CarMake>(); CarMake car4 = new CarMake(); car4.Name = "The OrangeCar"; car4.isFord = true; car4.isChevy = false; car4.isOther = false; company2.CarMakes.Add(car4); CarMake car5 = new CarMake(); car5.Name = "The Red Car"; car5.isFord = true; car5.isChevy = false; car5.isOther = false; company2.CarMakes.Add(car5); CarMake car6 = new CarMake(); car6.Name = "The Green Car"; car6.isFord = true; car6.isChevy = false; car6.isOther = false; company2.CarMakes.Add(car6); CarRentalCompanies.Add(company2); I load my form and in my load form I have the following: Note: CarDataGrid is a DataGridView BindingSource bsTheRentals = new BindingSource(); DataGridViewTextBoxColumn companyName = new DataGridViewTextBoxColumn(); companyName.DataPropertyName = "Name"; companyName.HeaderText = "Company Name"; companyName.Name = "CompanyName"; companyName.AutoSizeMode = DataGridViewAutoSizeColumnMode.AllCells; DataGridViewTextBoxColumn companyLocation = new DataGridViewTextBoxColumn(); companyLocation.DataPropertyName = "Location"; companyLocation.HeaderText = "Company Location"; companyLocation.Name = "CompanyLocation"; companyLocation.AutoSizeMode = DataGridViewAutoSizeColumnMode.AllCells; ArrayList carMakeColumnsToAdd = new ArrayList(); // Loop through the CarMakes list to add each custom column for (int intX=0; intX < CarRentalCompanies.CarMakes[0].Count; intX++) { // Custom column for user control string carMakeColumnName = "carColumn" + intX; CarMakeListColumn carMakeColumn = new DataGridViewComboBoxColumn(); carMakeColumn.Name = carMakeColumnName; carMakeColumn.DisplayMember = CarRentalCompanies.CarMakes[intX].Name; carMakeColumn.DataSource = CarRentalCompanies.CarMakes; // this is the CarMAkes List carMakeColumn.AutoSizeMode = DataGridViewAutoSizeColumnMode.Fill; 
carMakeColumnsToAdd.Add(carMakeColumn); } CarDataGrid.DataSource = bsTheRentals; CarDataGrid.Columns.AddRange(new DataGridViewColumn[] { companyName, companylocation, carMakeColumnsToAdd }); CarDataGrid.AutoGenerateColumns = false; The code I provided does not work because I am unfamiliar with UserControls and custom DataGridViewColumns and DataGridViewCells - I know I must derive from these classes in order to use my User Control properly. I appreciate any advice/assistance/help in this. Thank you. Lawrence
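    As a starting point, and only as a sketch: the UserControl itself can be made bindable with ordinary Windows Forms data binding, assuming CarMake exposes Name, isFord, isChevy and isOther as public properties. The control and member names below are illustrative. Hosting such a control inside the DataGridView is a separate step that does require deriving a custom DataGridViewColumn/DataGridViewCell pair, as in the MSDN calendar-column sample.

    ```csharp
    // Sketch only: a bindable user control for a single CarMake (.NET 2.0 WinForms).
    // Assumes CarMake exposes public properties Name, isFord, isChevy, isOther,
    // and that the control contains nameLabel plus three check boxes (names illustrative).
    public partial class CarMakeControl : UserControl
    {
        private readonly BindingSource carMakeSource = new BindingSource();

        public CarMakeControl()
        {
            InitializeComponent();
            nameLabel.DataBindings.Add("Text", carMakeSource, "Name");
            fordCheckBox.DataBindings.Add("Checked", carMakeSource, "isFord");
            chevyCheckBox.DataBindings.Add("Checked", carMakeSource, "isChevy");
            otherCheckBox.DataBindings.Add("Checked", carMakeSource, "isOther");
        }

        // The CarMake instance this control displays and edits.
        public CarMake CarMake
        {
            get { return carMakeSource.DataSource as CarMake; }
            set { carMakeSource.DataSource = value; }
        }
    }
    ```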

    Read the article

  • How to handle failure to release a resource which is contained in a smart pointer?

    - by cj
    How should an error during resource deallocation be handled, when the object representing the resource is contained in a shared pointer? Smart pointers are a useful tool to manage resources safely. Examples of such resources are memory, disk files, database connections, or network connections. // open a connection to the local HTTP port boost::shared_ptr<Socket> socket = Socket::connect("localhost:80"); In a typical scenario, the class encapsulating the resource should be noncopyable and polymorphic. A good way to support this is to provide a factory method returning a shared pointer, and declare all constructors non-public. The shared pointers can now be copied from and assigned to freely. The object is automatically destroyed when no reference to it remains, and the destructor then releases the resource. /** A TCP/IP connection. */ class Socket { public: static boost::shared_ptr<Socket> connect(const std::string& address); virtual ~Socket(); protected: Socket(const std::string& address); private: // not implemented Socket(const Socket&); Socket& operator=(const Socket&); }; But there is a problem with this approach. The destructor must not throw, so a failure to release the resource will remain undetected. A common way out of this problem is to add a public method to release the resource. class Socket { public: virtual void close(); // may throw // ... }; Unfortunately, this approach introduces another problem: Our objects may now contain resources which have already been released. This complicates the implementation of the resource class. Even worse, it makes it possible for clients of the class to use it incorrectly. The following example may seem far-fetched, but it is a common pitfall in multi-threaded code. socket->close(); // ... size_t nread = socket->read(&buffer[0], buffer.size()); // wrong use! Either we ensure that the resource is not released before the object is destroyed, thereby losing any way to deal with a failed resource deallocation. Or we provide a way to release the resource explicitly during the object's lifetime, thereby making it possible to use the resource class incorrectly. There is a way out of this dilemma. But the solution involves using a modified shared pointer class. These modifications are likely to be controversial. Typical shared pointer implementations, such as boost::shared_ptr, require that no exception be thrown when their object's destructor is called. Generally, no destructor should ever throw, so this is a reasonable requirement. These implementations also allow a custom deleter function to be specified, which is called in lieu of the destructor when no reference to the object remains. The no-throw requirement is extended to this custom deleter function. The rationale for this requirement is clear: The shared pointer's destructor must not throw. If the deleter function does not throw, nor will the shared pointer's destructor. However, the same holds for other member functions of the shared pointer which lead to resource deallocation, e.g. reset(): If resource deallocation fails, no exception can be thrown. The solution proposed here is to allow custom deleter functions to throw. This means that the modified shared pointer's destructor must catch exceptions thrown by the deleter function. On the other hand, member functions other than the destructor, e.g. reset(), shall not catch exceptions of the deleter function (and their implementation becomes somewhat more complicated). 
    Here is the original example, using a throwing deleter function:

    ```cpp
    /** A TCP/IP connection. */
    class Socket {
    public:
        static SharedPtr<Socket> connect(const std::string& address);

    protected:
        Socket(const std::string& address);
        virtual ~Socket() { }

    private:
        struct Deleter;

        // not implemented
        Socket(const Socket&);
        Socket& operator=(const Socket&);
    };

    struct Socket::Deleter {
        void operator()(Socket* socket) {
            // Close the connection. If an error occurs, delete the socket
            // and throw an exception.
            delete socket;
        }
    };

    SharedPtr<Socket> Socket::connect(const std::string& address) {
        return SharedPtr<Socket>(new Socket(address), Deleter());
    }
    ```

    We can now use reset() to free the resource explicitly. If there is still a reference to the resource in another thread or another part of the program, calling reset() will only decrement the reference count. If this is the last reference to the resource, the resource is released. If resource deallocation fails, an exception is thrown.

    ```cpp
    SharedPtr<Socket> socket = Socket::connect("localhost:80");
    // ...
    socket.reset();
    ```
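    For contrast, here is a minimal sketch of the conventional compromise with a stock shared pointer (std::shared_ptr is used for illustration; boost::shared_ptr is analogous): an explicit close() that reports failure to callers who care, plus a deleter that, as a last resort, can only log and swallow the error. The names and the logging choice are assumptions, not part of the proposal above, and definitions are omitted as in the original example.

    ```cpp
    // Sketch of the conventional approach with a standard shared_ptr:
    // close() throws on failure; the deleter swallows and logs as a last resort.
    #include <iostream>
    #include <memory>
    #include <string>

    class Socket {
    public:
        static std::shared_ptr<Socket> connect(const std::string& address) {
            return std::shared_ptr<Socket>(new Socket(address), [](Socket* s) {
                try {
                    s->close();                       // may throw
                } catch (const std::exception& e) {
                    std::cerr << "closing socket failed: " << e.what() << '\n';
                }
                delete s;
            });
        }

        void close();                                 // throws on failure
        virtual ~Socket() { }

    private:
        explicit Socket(const std::string& address);
        Socket(const Socket&);                        // not implemented
        Socket& operator=(const Socket&);             // not implemented
    };
    ```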

    Read the article

  • Allowing pop-up's and downloads with flex

    - by ShadowVariable
    I'm building an flex app that is basically a wrapper for a few websites. One of them is a google docs website, and I'm trying to get flex to allow downloads or popups or something that will allow me to do it. I've tried a whole bunch of solutions online and none of them have worked out. Here's the code so far: <?xml version="1.0" encoding="utf-8"?> <s:WindowedApplication xmlns:fx="http://ns.adobe.com/mxml/2009" xmlns:s="library://ns.adobe.com/flex/spark" xmlns:mx="library://ns.adobe.com/flex/mx" width="100%" height="100%" creationComplete="onCreationComplete()"> <s:layout> <s:HorizontalLayout/> </s:layout> <fx:Style source="style.css"/> <fx:Script> <![CDATA[ include "CustomHTMLLoader.as"; private function onCreationComplete():void { // ... other stuff ... var custom:object; custom.htmlHost = new MyHTMLHost(); } ]]> </fx:Script> <fx:Declarations> <!-- Place non-visual elements (e.g., services, value objects) here --> </fx:Declarations> <s:BorderContainer width="100%" height="100%" backgroundColor="#87BED0" styleName="container"> <s:Panel x="188" y="17" width="826" height="112" borderAlpha="0.15" chromeColor="#0C5A74" color="#FFFFFF" cornerRadius="20" dropShadowVisible="false" enabled="true" title="Customer Service Control Panel"> <s:controlBarContent/> <s:Button id="home" x="13" y="10" height="44" label="Phones" click="myViewStack.selectedChild=Home;" enabled="true" icon="@Embed('assets/iconmonstr-mobile-phone-6-icon-32.png')"/> <s:Button id="liveagent" x="131" y="10" height="44" label="Live Agent" click="myViewStack.selectedChild=live_agent;" icon="@Embed('assets/iconmonstr-speech-bubble-11-icon-32.png')"/> <s:Button id="bigcommerce" x="260" y="10" width="158" height="44" label="Big Commerce" click="myViewStack.selectedChild=bigcommerce_home;" icon="@Embed('assets/iconmonstr-coin-6-icon-48.png')"/> <s:Button id="faq" x="436" y="10" width="88" height="44" label="FAQ" click="myViewStack.selectedChild=freqaskquestions;" fontFamily="Arial" icon="@Embed('assets/iconmonstr-help-4-icon-32.png')"/> <s:Button id="call" x="540" y="10" width="131" height="44" label="Google Docs" click="myViewStack.selectedChild=call_notes;" icon="@Embed('assets/iconmonstr-text-file-4-icon-32.png')"/> <s:Button id="hoot" x="684" y="10" width="122" height="44" label="HootSuite" click="myViewStack.selectedChild=hoot_suite;" icon="@Embed('assets/iconmonstr-facebook-icon-32.png')"/> </s:Panel> <mx:ViewStack id="myViewStack" x="0" y="140" width="100%" height="100%" borderStyle="solid"> <s:NavigatorContent id="Home"> <s:BorderContainer width="100%" height="100%"> <mx:HTML x="0" y="0" width="100%" height="100%" borderVisible="false" horizontalScrollPolicy="off" location="http://mbvphone.mtbakervapor.org/vbx/messages/inbox" /> </s:BorderContainer> </s:NavigatorContent> <s:NavigatorContent id="bigcommerce_home"> <s:BorderContainer width="100%" height="100%"> <mx:HTML x="0" y="0" width="100%" height="100%" borderVisible="false" horizontalScrollPolicy="off" location="http://www.mtbakervapor.com/admin" /> </s:BorderContainer> </s:NavigatorContent> <s:NavigatorContent id="live_agent"> <s:BorderContainer width="100%" height="100%"> <mx:HTML x="0" y="0" width="100%" height="100%" borderVisible="false" horizontalScrollPolicy="off" location="http://mbvphone.mtbakervapor.org/liveagent/agent/#login" /> </s:BorderContainer> </s:NavigatorContent> <s:NavigatorContent id="freqaskquestions"> <s:BorderContainer width="100%" height="100%"> <mx:HTML x="0" y="0" width="100%" height="100%" borderVisible="false" horizontalScrollPolicy="off" 
location="http://mbvphone.mtbakervapor.org/liveagent/" /> </s:BorderContainer> </s:NavigatorContent> <s:NavigatorContent id="call_notes"> <s:BorderContainer width="100%" height="100%"> <mx:HTML id="html" x="0" y="0" width="100%" height="100%" borderVisible="false" horizontalScrollPolicy="off" location="https://drive.google.com/a/mtbakervapor.com/" htmlHost="{new CustomHost()}" /> </s:BorderContainer> </s:NavigatorContent> <s:NavigatorContent id="hoot_suite"> <s:BorderContainer width="100%" height="100%"> <mx:HTML x="0" y="0" width="100%" height="100%" borderVisible="false" horizontalScrollPolicy="off" location="https://hootsuite.com/login" /> </s:BorderContainer> </s:NavigatorContent> </mx:ViewStack> <s:Image x="0" y="0" width="180" height="140" scaleMode="letterbox" smooth="false" source="assets/mbvlogo_black.png"/> </s:BorderContainer> </s:WindowedApplication> and the custom class code: package { import flash.html.HTMLHost; import flash.html.HTMLWindowCreateOptions; import flash.html.HTMLLoader; public class MyHTMLHost extends HTMLHost { public function MyHTMLHost(defaultBehaviors:Boolean=true) { super(defaultBehaviors); } override public function createWindow(windowCreateOptions:HTMLWindowCreateOptions):HTMLLoader { // all JS calls and HREFs to open a new window should use the existing window return HTMLLoader.createRootWindow(); } } } any help would be appreciated.

    Read the article

  • How to Create Auto Playlists in Windows Media Player 12

    - by DigitalGeekery
    Are you getting tired of the same old playlists in Windows Media Player? Today we’ll show you how to create dynamic auto playlists based on criteria you choose in WMP 12 in Windows 7. Auto Playlists In Library view, click on Create playlist dropdown arrow and select Create auto playlist. On the New Auto Playlist window type in a name for the playlist in the text box. Now we need to choose our criteria by which to filter your playlist. Select Click here to add criteria. For our example, we will create a playlist of songs that were added to the library in the last week from the Alternative genre. So, we will first select Date Added from the dropdown list. Many criteria will have addition options to configure. In the example below you will see that we have a few options to fine tune.   We will filter all the songs added to the library in the last 7 days. We will select Is After from the first dropdown list. Then select Last 7 Days from the second dropdown list. You can add multiple criteria to further filter your playlist. If you can’t find the criteria you are looking for, select “More” at the bottom of the dropdown list.   This will pull up a filter window with all the criteria. Select a filter and then click OK when finished.   From the Genre dropdown, we will select Alternative. If you’d like to add Pictures, Videos, or TV Shows to your auto playlists you can do so by selecting them from the dropdown list under And also include. You will then be able to select criteria for your pictures, videos, or TV shows from the dropdown list.   Finally, you can also add restrictions to your music such as the number of items, duration, or total size. We will limit the duration of our playlist to one hour by selecting Limit Total Duration To… Then type in 1 hour…Click OK.   Our library is automatically filtered and a playlist is created based on the criteria we selected. When additional songs are added to the Windows Media Player library, any of new songs that fit the criteria will automatically be added to the New Songs playlist. You can also save a copy of an auto playlist as a regular playlist. Switch to Playlists view by clicking Playlists from either the top menu or the navigation bar. Select the Play tab and then click Clear list to remove any tracks from the list pane.   Right-click on the playlist you want to save, select Add to, and then Play list. The songs from your auto playlist will appear as an Unsaved list on the list pane. Click Save list. Type in a name for your playlist. Your auto playlist will continue to change as you add or remove items from your Media Player library that meet the criteria you established. The new saved playlist we just created will stay as it is currently. Editing a Auto playlist is easy. Right-click on the playlist and select Edit. Now you are ready to enjoy your playlist. Conclusion Auto playlists are great way to keep your playlists fresh in Windows Media Player 12. Users can get creative and experiment with the wide variety of criteria to customize their listening experience. If you are new to playlists in Windows Media Player, you may want to check our our previous post on how to create custom playlists in Windows Media Player 12. Are you looking to get better sound from WMP 12? Take a look at how to improve playback using enhancements in Windows Media Player 12. 

    Read the article

  • Configuring IIS7 404 page when using IIS7 urlrewrite module

    - by Peter
    I've got custom errors working for .aspx pages like www.domain.com/whateverdfdgdfg.aspx. But when a non-.aspx URL is requested (like http://www.domain.com/hfdkfdh4545), it results in an error:

    HTTP Error 404.0 - Not Found
    The resource you are looking for has been removed, had its name changed, or is temporarily unavailable.

    I have this in my web.config:

    ```xml
    <customErrors mode="On" defaultRedirect="/error/1" redirectMode="ResponseRedirect">
      <error statusCode="404" redirect="/404.aspx" />
    </customErrors>
    ```

    404.aspx exists, since the above DOES work when requesting non-existent .aspx pages. I also configured the error pages in IIS7: Status code: 404, Path: /404.aspx, Type: Execute URL, Entry type: Local. My website's application pool setting is "ASP.NET v4.0". Now why is this still not working?
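    A hedged guess at the missing piece: customErrors only applies to requests that reach the ASP.NET pipeline, while extensionless URLs like /hfdkfdh4545 are typically handled by IIS itself, so the 404 also needs to be configured under system.webServer/httpErrors in web.config (this mirrors the IIS error-page settings described above; the attribute values shown are one reasonable combination, not the only one).

    ```xml
    <!-- Sketch: handle non-ASP.NET 404s at the IIS level, alongside the existing customErrors section -->
    <system.webServer>
      <httpErrors errorMode="Custom" existingResponse="Replace">
        <remove statusCode="404" subStatusCode="-1" />
        <error statusCode="404" path="/404.aspx" responseMode="ExecuteURL" />
      </httpErrors>
    </system.webServer>
    ```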

    Read the article

  • Error on 64 Bit Install of IIS – LoadLibraryEx failed on aspnet_filter.dll

    - by Rick Strahl
    I’ve been having a few problems with my Windows 7 install and trying to get IIS applications to run properly in 64 bit. After installing IIS and creating virtual directories for several of my applications and firing them up I was left with the following error message from IIS: Calling LoadLibraryEx on ISAPI filter “c:\windows\Microsoft.NET\Framework\v4.0.30319\aspnet_filter.dll” failed This is on Windows 7 64 bit and running on an ASP.NET 4.0 Application configured for running 64 bit (32 bit disabled). It’s also on what is essentially a brand new installation of IIS and Windows 7. So it failed right out of the box. The problem here is that IIS is trying to loading this ISAPI filter from the 32 bit folder – it should be loading from Framework64 folder note the Framework folder. The aspnet_filter.dll component is a small Win32 ISAPI filter used to back up the cookieless session state for ASP.NET on IIS 7 applications. It’s not terribly important because of this focus, but it’s a default loaded component. After a lot of fiddling I ended up with two solutions (with the help and support of some Twitter folks): Switch IIS to run in 32 bit mode Fix the filter listing in ApplicationHost.config Switching IIS to allow 32 Bit Code This is a quick fix for the problem above which enables 32 bit code in the Application Pool. The problem above is that IIS is trying to load a 32 bit ISAPI filter and enabling 32 bit code gets you around this problem. To configure your Application Pool, open the Application Pool in IIS Manager bring up Advanced Options and Enable 32 Bit Applications: And voila the error message above goes away. Fix Filters Enabling 32 bit code is a quick fix solution to this problem, but not an ideal one. If you’re running a pure .NET application that doesn’t need to do COM or pInvoke Interop with 32 bit apps there’s usually no need for enabling 32 bit code in an Application Pool as you can run in native 64 bit code. So trying to get 64 bit working natively is a pretty key feature in my opinion :-) So what’s the problem – why is IIS trying to load a 32 bit DLL in a 64 bit install, especially if the application pool is configured to not allow 32 bit code at all? The problem lies in the server configuration and the fact that 32 bit and 64 bit configuration settings exist side by side in IIS. If I open my Default Web Site (or any other root Web Site) and go to the ISAPI filter list here’s what I see: Notice that there are 3 entries for ASP.NET 4.0 in this list. Only two of them however are specifically scoped to the specifically to 32 bit or 64 bit. As you can see the 64 bit filter correctly points at the Framework64 folder to load the dll, while both the 32 bit and the ‘generic’ entry point at the plain Framework 32 bit folder. Aha! Hence lies our problem. You can edit ApplicationHost.config manually, but I ran into the nasty issue of not being able to easily edit that file with the 32 bit editor (who ever thought that was a good idea???? WTF). You have to open ApplicationHost.Config in a 64 bit native text editor – which Visual Studio is not. Or my favorite editor: EditPad Pro. Since I don’t have a native 64 bit editor handy Notepad was my only choice. Or as an alternative you can use the IIS 7.5 Configuration Editor which lets you interactively browse and edit most ApplicationHost settings. You can drill into the configuration hierarchy visually to find your keys and edit attributes and sub values in property editor type interface. 
I had no idea this tool existed prior to today and it’s pretty cool as it gives you some visual clues to options available – especially in absence of an Intellisense scheme you’d get in Visual Studio (which doesn’t work). To use the Configuration Editor go the Web Site root and use the Configuration Editor option in the Management Group. Drill into System.webServer/isapiFilters and then click on the Collection’s … button on the right. You should now see a display like this: which shows all the same attributes you’d see in ApplicationHost.config (cool!). These entries correspond to these raw ApplicationHost.config entries: <filter name="ASP.Net_4.0" path="C:\Windows\Microsoft.NET\Framework\v4.0.30319\aspnet_filter.dll" enableCache="true" preCondition="runtimeVersionv4.0" /> <filter name="ASP.Net_4.0_64bit" path="C:\Windows\Microsoft.NET\Framework64\v4.0.30319\aspnet_filter.dll" enableCache="true" preCondition="runtimeVersionv4.0,bitness64" /> <filter name="ASP.Net_4.0_32bit" path="C:\Windows\Microsoft.NET\Framework\v4.0.30319\aspnet_filter.dll" enableCache="true" preCondition="runtimeVersionv4.0,bitness32" /> The key attribute we’re concerned with here is the preCondition and the bitness subvalue. Notice that the ‘generic’ version – which comes first in the filter list – has no bitness assigned to it, so it defaults to 32 bit and the 32 bit dll path. And this is where our problem comes from. The simple solution to fix the startup problem is to remove the generic entry from this list here or in the filters list shown earlier and leave only the bitness specific versions active. The preCondition attribute acts as a filter and as you can see here it filters the list by runtime version and bitness value. This is something to keep an eye out in general – if a bitness values are missing it’s easy to run into conflicts like this with any settings that are global and especially those that load modules and handlers and other executable code. On 64 bit systems it’s a good idea to explicitly set the bitness of all entries or remove the non-specific versions and add bit specific entries. So how did this get misconfigured? I installed IIS before everything else was installed on this machine and then went ahead and installed Visual Studio. I suspect the Visual Studio install munged this up as I never saw a similar problem on my live server where everything just worked right out of the box. In searching about this problem a lot of solutions pointed at using aspnet_regiis –r from the Framework64 directory, but that did not fix this extra entry in the filters list – it adds the required 32 bit and 64 bit entries, but it doesn’t remove the errand un-bitness set entry. Hopefully this post will help out anybody who runs into a similar situation without having to trouble shoot all the way down into the configuration settings and noticing the bitness settings. It’s a good lesson learned for me – this is my first desktop install of a 64 bit OS and things like this are what I was reluctant to find. Now that I ran into this I have a good idea what to look for with 32/64 bit misconfigurations in IIS at least.© Rick Strahl, West Wind Technologies, 2005-2011Posted in IIS7   ASP.NET  
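    To make the end state concrete: after deleting the generic entry, the filter list would contain only the two bitness-qualified entries already shown above (paths copied verbatim from the post), so IIS never tries to load the 32 bit DLL into a 64 bit worker process.

    ```xml
    <filter name="ASP.Net_4.0_64bit" path="C:\Windows\Microsoft.NET\Framework64\v4.0.30319\aspnet_filter.dll"
            enableCache="true" preCondition="runtimeVersionv4.0,bitness64" />
    <filter name="ASP.Net_4.0_32bit" path="C:\Windows\Microsoft.NET\Framework\v4.0.30319\aspnet_filter.dll"
            enableCache="true" preCondition="runtimeVersionv4.0,bitness32" />
    ```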

    Read the article

  • What is New in ASP.NET 4.0 Code Access Security

    - by HosamKamel
    ASP.NET Code Access Security (CAS) is a feature that helps protect server applications when hosting multiple Web sites. ASP.NET lets you assign a configurable trust level that corresponds to a predefined set of permissions. ASP.NET ships with predefined trust levels and policy files that you can assign to applications, and you can also assign custom trust levels and policy files. Most web hosting companies run ASP.NET applications in Medium Trust to prevent one website from affecting or harming another. As the .NET Framework's Code Access Security model has evolved, ASP.NET 4.0 Code Access Security has also introduced several changes and improvements. A full post covering the new changes in ASP.NET 4.0 is published by the ASP.NET QA Team here: http://weblogs.asp.net/asptest/archive/2010/04/23/what-is-new-in-asp-net-4-0-code-access-security.aspx
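    For context, the trust level itself is still assigned declaratively in web.config (hosters typically enforce it in the machine-level root web.config instead); a minimal example:

    ```xml
    <!-- Assigns the predefined Medium trust policy to this application -->
    <system.web>
      <trust level="Medium" />
    </system.web>
    ```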

    Read the article

  • Creating site with wix.com or weebly.com [on hold]

    - by Edgar
    I decided to create a web page, and for that purpose I found that I can use the wix.com portal. My knowledge of HTML and CSS is at a basic level. So I want to ask: what are the pros and cons of creating web pages using Wix compared with building them on your own (writing the code yourself)? One of my questions is: can I put custom advertisements on my page? Also, I would appreciate suggestions on which portal is better, wix.com or weebly.com, or whether for a good website I should code it myself. Finally, it would be nice to get any suggestions in this field.

    Read the article

  • Developer Dashboard in SharePoint 2010

    - by jcortez
    Introducing the Developer Dashboard As a SharePoint developer (or IT Professional), how many times have you had the pleasure of figuring out why a particular page on your site is taking too long to render? I'm sure one of the techniques you have employed in troubleshooting is the process of elimination - removing individual web parts from the page hoping to identify which web part is misbehaving. One of the new features of SharePoint 2010 is the Developer Dashboard. This dashboard provides tracing and performance information that can be useful when you are trying to troubleshoot pages that are loading too slow. The Developer Dashboard is turned off by default and I'll go over 3 different ways to display it. Here is a screenshot of what the Developer Dashboard looks like when displayed at the bottom of the page:   You can see on the left side the different events that fired during the page processing pipeline and how long these events took. This is where you will see individual web parts being processed and how long it took to complete (obviously the kind of processing depends on what the web part does). On the right side you would see the different database calls issued through the SharePoint Object Model to process the page. You will notice that each of these database queries are actually a hyperlink and clicking on it displays a pop-up window that shows the actual SQL Query Text, the Call Stack that triggered the database call, and the IO statistics of that query. Enabling the Developer Dashboard Option 1: Managed Code   The Developer Dashboard is a farm-wide setting and the code above won't work if it is used within a web part hosted on any non-Central Admin site. The SPDeveloperDashboardLevel enum has three possible values: On, Off, and OnDemand. Setting it to On will always display the Developer Dashboard at the bottom of the page. Setting it Off will hide the Developer Dashboard. Setting it to OnDemand will add an icon at the top right corner of the page (see screenshot below) where a Site Collection Admin can toggle the display of the Developer Dashboard for a particular site collection. In my opinion, OnDemand is the best setting when troubleshooting a page or during development since a Site Collection Admin can turn it on or off and for a particular site only. The first cool thing about this is that the Site Collection Admin that turned it on will be the only one to see the Developer Dashboard output. Everyday users won't see the Developer Dashboard output even if it was turned on by a Site Collection Admin. If you need more flexibility on who gets to see the Developer Dashboard output, you can set the SPDeveloperDashboardSettings.RequiredPermissions to control which group of users will have the permission to see the output. Option 2: Using stsadm Using stsadm, you can run the following command to configure the Developer Dashboard: STSADM –o setproperty –pn developer-dashboard –pv OnDemand To successfully execute this command, be sure you that are running as a Farm Admin. Option 3: Using PowerShell For all scripts in SharePoint 2010, I prefer writing them as PowerShell scripts. Though the stsadm command is less verbose, the PowerShell equivalent is pretty straightforward and uses the SharePoint Object Model: You can of course parameterized the value that gets assigned to the DisplayLevel property so you can turn it On, Off or OnDemand depending on the parameter. 
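    Since the code for Option 1 and Option 3 appears as screenshots in the original post and does not reproduce here, the following is a sketch of what the managed-code version typically looks like; it uses the documented SPWebService.ContentService entry point and must run with farm administrator rights (the PowerShell variant in Option 3 sets the same DisplayLevel property through the same object model).

    ```csharp
    // Sketch of Option 1 (managed code): enable the dashboard on demand, farm-wide.
    // Types live in the Microsoft.SharePoint.Administration namespace.
    SPDeveloperDashboardSettings settings =
        SPWebService.ContentService.DeveloperDashboardSettings;
    settings.DisplayLevel = SPDeveloperDashboardLevel.OnDemand;   // On | Off | OnDemand
    settings.Update();
    ```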
Events and the Developer Dashboard  Now, don't assume that all the code inside your web part or page will show up in the Developer Dashboard complete with all the great troubleshooting information. Only a finite set of events are monitored by default (for a web part it will events in the base web part class). Let's say you have a click event that could take some time, for example a web service call. And you want to include troubleshooting information for this event in the Developer Dashboard. Enter SPMonitoredScope which is also a new feature in SharePoint 2010. In SharePoint 2010, everything is executed within a "Monitored Scope". And each scope has a set of "Monitors" that measures and counts calls and timings which appears in the Developer Dashboard. Below is an example on how to get your custom code to get included in the Developer Dashboard by wrapping it inside a new monitored scope: The code above would include your new scope "My long web service call" into the Developer Dashboard and would log the time it took to complete processing. In my opinion, wrapping your custom code in a SPMonitoredScope is a SharePoint development best practice since it provides you visibility and a better understanding on the performance of your components.
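    A short sketch of the SPMonitoredScope usage described above; the scope name and the wrapped call are illustrative placeholders.

    ```csharp
    // Sketch: surface a slow call in the Developer Dashboard by wrapping it in a
    // monitored scope (SPMonitoredScope lives in Microsoft.SharePoint.Utilities).
    // CallSlowWebService() is a hypothetical placeholder for the expensive work.
    using (new SPMonitoredScope("My long web service call"))
    {
        CallSlowWebService();
    }
    ```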

    Read the article

  • Event notification for ::SCardListReaders() [migrated]

    - by dpb
    In a PC/SC (Personal Computer Smart Card) application (MSCAPI, USB CCID based): 1) Calling ::SCardListReaders() returns SCARD_E_NO_READERS_AVAILABLE (0x8010002E). The call is made right after the OS starts fresh from a reboot, from a thread that is part of my custom Windows service. 2) Adding a delay before the ::SCardListReaders() call solves the problem. 3) How can I solve this problem elegantly, not by using a delay but by waiting for some event to notify me? Because: a) different machines may require different delay values; b) I cannot simply loop, since the error code is genuine; c) I could not find such an event as part of the System Event Notification Service or a similar COM interface; d) the platform is Windows 7. Any help appreciated.
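    One event-based possibility (a sketch, not a verified fix for this exact startup race): on Windows Vista/7 the smart card resource manager exposes a plug-and-play notification pseudo-reader that SCardGetStatusChange() can wait on, so the service can block until a reader actually appears instead of sleeping for a machine-dependent delay.

    ```cpp
    // Sketch: wait for a change in the set of readers instead of sleeping a fixed delay.
    // Uses the PnP notification pseudo-reader supported by the Vista/Windows 7 resource manager.
    #include <windows.h>
    #include <winscard.h>

    bool WaitForReaderChange(SCARDCONTEXT ctx, DWORD currentReaderCount, DWORD timeoutMs)
    {
        SCARD_READERSTATE state = {};
        state.szReader = TEXT("\\\\?PnP?\\Notification");
        // Per the SCardGetStatusChange docs, the current number of readers goes in the
        // upper word so the call returns only when that number changes.
        state.dwCurrentState = (currentReaderCount << 16);

        LONG rc = SCardGetStatusChange(ctx, timeoutMs, &state, 1);
        return rc == SCARD_S_SUCCESS && (state.dwEventState & SCARD_STATE_CHANGED);
    }
    ```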

    Read the article

  • West Wind WebSurge - an easy way to Load Test Web Applications

    - by Rick Strahl
    A few months ago on a project the subject of load testing came up. We were having some serious issues with a Web application that would start spewing SQL lock errors under somewhat heavy load. These sort of errors can be tough to catch, precisely because they only occur under load and not during typical development testing. To replicate this error more reliably we needed to put a load on the application and run it for a while before these SQL errors would flare up. It’s been a while since I’d looked at load testing tools, so I spent a bit of time looking at different tools and frankly didn’t really find anything that was a good fit. A lot of tools were either a pain to use, didn’t have the basic features I needed, or are extravagantly expensive. In  the end I got frustrated enough to build an initially small custom load test solution that then morphed into a more generic library, then gained a console front end and eventually turned into a full blown Web load testing tool that is now called West Wind WebSurge. I got seriously frustrated looking for tools every time I needed some quick and dirty load testing for an application. If my aim is to just put an application under heavy enough load to find a scalability problem in code, or to simply try and push an application to its limits on the hardware it’s running I shouldn’t have to have to struggle to set up tests. It should be easy enough to get going in a few minutes, so that the testing can be set up quickly so that it can be done on a regular basis without a lot of hassle. And that was the goal when I started to build out my initial custom load tester into a more widely usable tool. If you’re in a hurry and you want to check it out, you can find more information and download links here: West Wind WebSurge Product Page Walk through Video Download link (zip) Install from Chocolatey Source on GitHub For a more detailed discussion of the why’s and how’s and some background continue reading. How did I get here? When I started out on this path, I wasn’t planning on building a tool like this myself – but I got frustrated enough looking at what’s out there to think that I can do better than what’s available for the most common simple load testing scenarios. When we ran into the SQL lock problems I mentioned, I started looking around what’s available for Web load testing solutions that would work for our whole team which consisted of a few developers and a couple of IT guys both of which needed to be able to run the tests. It had been a while since I looked at tools and I figured that by now there should be some good solutions out there, but as it turns out I didn’t really find anything that fit our relatively simple needs without costing an arm and a leg… I spent the better part of a day installing and trying various load testing tools and to be frank most of them were either terrible at what they do, incredibly unfriendly to use, used some terminology I couldn’t even parse, or were extremely expensive (and I mean in the ‘sell your liver’ range of expensive). Pick your poison. There are also a number of online solutions for load testing and they actually looked more promising, but those wouldn’t work well for our scenario as the application is running inside of a private VPN with no outside access into the VPN. Most of those online solutions also ended up being very pricey as well – presumably because of the bandwidth required to test over the open Web can be enormous. 
When I asked around on Twitter what people were using – I got mostly… crickets. Several people mentioned Visual Studio Load Test, and most other suggestions pointed to online solutions. I did get a bunch of responses though with people asking to let them know what I found – apparently I’m not alone when it comes to finding load testing tools that are effective and easy to use. As to Visual Studio, the higher end SKUs of Visual Studio and the test edition include a Web load testing tool, which is quite powerful, but there are a number of issues with that: First it’s tied to Visual Studio so it’s not very portable – you need a VS install. I also find the test setup and terminology used by the VS test runner extremely confusing. Heck, it’s complicated enough that there’s even a Pluralsight course on using the Visual Studio Web test from Steve Smith. And of course you need to have one of the high end Visual Studio SKUs, and those are mucho Dinero ($$$) – just for the load testing that’s rarely an option. Some of the tools are ultra extensive and let you run analysis tools on the target servers which is useful, but in most cases – just plain overkill and only distracts from what I tend to be ultimately interested in: Reproducing problems that occur at high load, and finding the upper limits and ‘what if’ scenarios as load is ramped up increasingly against a site. Yes it’s useful to have Web app instrumentation, but often that’s not what you’re interested in. I still fondly remember the early days of Web testing when Microsoft had the WAST (Web Application Stress Tool), which was rather simple – and also somewhat limited – but easily allowed you to create stress tests very quickly. It had some serious limitations (mainly that it didn’t work with SSL), but the idea behind it was excellent: Create tests quickly and easily and provide a decent engine to run them locally with minimal setup. You could get set up and run tests within a few minutes. Unfortunately, that tool died a quiet death, like so many of Microsoft’s tools that probably were built by an intern and then abandoned, even though there was a lot of potential and it was actually fairly widely used. Eventually the tool was no longer downloadable and now it simply doesn’t work anymore on higher end hardware. West Wind Web Surge – Making Load Testing Quick and Easy So I ended up creating West Wind WebSurge out of rebellious frustration… The goal of WebSurge is to make it drop dead simple to create load tests. It’s super easy to capture sessions either using the built in capture tool (big props to Eric Lawrence, Telerik and FiddlerCore which made that piece a snap), using the full version of Fiddler and exporting sessions, or by manually or programmatically creating text files based on plain HTTP headers to create requests. I’ve been using this tool for 4 months now on a regular basis on various projects as a reality check for performance and scalability and it’s worked extremely well for finding small performance issues. I also use it regularly as a simple URL tester, as it allows me to quickly enter a URL plus headers and content and test that URL and its results along with the ability to easily save one or more of those URLs. A few weeks back I made a walk through video that goes over most of the features of WebSurge in some detail: Note that the UI has changed slightly since then, with some UI improvements. 
Most notably the test results screen has been updated recently to a different layout and to provide more information about each URL in a session at a glance. The video and the main WebSurge site have a lot of info on basic operations. For the rest of this post I’ll talk about a few deeper aspects that may be of interest while also giving a glance at how WebSurge works. Session Capturing As you would expect, WebSurge works with Sessions of URLs that are played back under load. Here’s what the main Session View looks like: You can create session entries manually by individually adding URLs to test (on the Request tab on the right) and saving them, or you can capture output from Web browsers, Windows desktop applications that call services, or your own applications using the built in Capture tool. With this tool you can capture anything HTTP – SSL requests and content from Web pages, AJAX calls, SOAP or REST services – again, anything that uses Windows or .NET HTTP APIs. Behind the scenes the capture tool uses FiddlerCore, so basically anything you can capture with Fiddler you can also capture with the WebSurge Session capture tool. Alternately you can use Fiddler itself, and then export the captured Fiddler trace to a file, which can then be imported into WebSurge. This is a nice way to let somebody capture a session without having to actually install WebSurge, or for your customers to provide an exact playback scenario for a given set of URLs that cause a problem. Note that not all applications work with Fiddler’s proxy unless you configure a proxy. For example, .NET Web applications that make HTTP calls usually don’t show up in Fiddler by default. For those .NET applications you can explicitly override proxy settings to capture those service call requests. The capture tool also has handy optional filters that allow you to filter by domain, to help block out noise that you typically don’t want to include in your requests. For example, if your pages include links to CDNs, or Google Analytics or social links, you typically don’t want to include those in your load test, so by capturing just from a specific domain you are guaranteed content from only that one domain. Additionally you can provide URL filters in the configuration file – filters let you provide strings that, if contained in a URL, will cause the request to be ignored. Again this is useful if you don’t filter by domain but you want to filter out things like static image, CSS and script files. Often you’re not interested in the load characteristics of these static and usually cached resources as they just add noise to tests and often skew the overall URL performance results. In my testing I tend to care only about my dynamic requests. SSL Captures require Fiddler Note that in order to capture SSL requests you’ll have to install Fiddler’s SSL certificate. The easiest way to do this is to install Fiddler and use its SSL configuration options to get the certificate into the local certificate store. There’s a document on the Telerik site that provides the exact steps to get SSL captures to work with Fiddler and therefore with WebSurge. Session Storage A group of URLs entered or captured makes up a Session. Sessions can be saved and restored easily as they use a very simple text format that is simply stored on disk. The format is slightly customized HTTP header traces separated by a separator line. 
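    To make that concrete, here is a rough, hand-written sketch of what a saved two-entry session file might look like, based only on the description in this post (the full URL on the first line of each request, otherwise plain HTTP headers, optional content, and a separator line between entries). The separator text and the specific headers shown are assumptions for illustration, not the documented format:

    GET http://localhost/store/ HTTP/1.1
    Accept: text/html
    User-Agent: WestwindWebSurge

    ------------------------------------------------------------------

    POST http://localhost/store/cart/add HTTP/1.1
    Content-Type: application/x-www-form-urlencoded

    productId=42&quantity=1
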
The headers are standard HTTP headers except that the full URL, instead of just the domain relative path, is stored as part of the 1st HTTP header line for easier parsing. Because it’s just text and uses the same format that Fiddler uses for exports, it’s super easy to create Sessions by hand manually or under program control by writing out to a simple text file. You can see what this format looks like in the Capture window figure above – the raw captured format is also what’s stored to disk and what WebSurge parses from. The only ‘custom’ part of these headers is that the 1st line contains the full URL instead of the domain relative path and Host: header. The rest of each header is just plain standard HTTP headers, with each individual URL isolated by a separator line. The format used here is also what Fiddler produces for exports, so it’s easy to exchange or view data either in Fiddler or WebSurge. URLs can also be edited interactively so you can modify the headers easily as well: Again – it’s just plain HTTP headers so anything you can do with HTTP can be added here. Use it for single URL Testing Incidentally I’ve also found this form an excellent way to test and replay individual URLs for simple non-load testing purposes. Because you can capture a single or many URLs and store them on disk, this also provides a nice HTTP playground where you can record URLs with their headers, and fire them one at a time or as a session and see results immediately. It’s an easy way to do REST presentations, and I find the simple UI flow easier than using Fiddler natively. Finally you can save one or more URLs as a session for later retrieval. I’m using this more and more for simple URL checks. Overriding Cookies and Domains Speaking of HTTP headers – you can also override cookies as part of the options. One thing that happens with modern Web applications is that you have session cookies in use for authorization. These cookies tend to expire at some point, which would invalidate a test. Using the Options dialog you can override the cookie, which replaces the cookie for all requests with the cookie value specified here. You can capture a valid cookie from a manual HTTP request in your browser and then paste it into the cookie field, to replace the existing cookie with the new one that is now valid. Likewise you can easily replace the domain, so if you captured URLs on west-wind.com and now you want to test on localhost you can do that easily as well. You could even do something like capture on store.west-wind.com and then test on localhost/store, which would also work. Running Load Tests Once you’ve created a Session you can specify the length of the test in seconds, and specify the number of simultaneous threads to run each session on. Sessions run through each of the URLs in the session sequentially by default. One option in the options list above is that you can also randomize the URLs so each thread runs requests in a different order. This avoids bunching up URLs initially when tests start, since all threads running the same requests simultaneously can sometimes skew the results of the first few minutes of a test. While sessions run, some progress information is displayed: By default there’s a live view of requests displayed in a Console-like window. On the bottom of the window there’s a running total summary that displays where you’re at in the test, how many requests have been processed and what the requests per second count is currently for all requests. 
Note that for tests that run over a thousand requests a second it’s a good idea to turn off the console display. While the console display is nice to see that something is happening and also gives you a slight idea of what’s happening with actual requests, once a lot of requests are processed this UI updating actually adds a lot of CPU overhead to the application, which may cause the actual load generated to be reduced. If you are running 1,000 requests a second there’s not much to see anyway, as requests roll by way too fast to see individual lines. If you look on the options panel, there is a NoProgressEvents option that disables the console display. Note that the summary display is still updated approximately once a second so you can always tell that the test is still running. Test Results When the test is done you get a simple Results display: On the right you get an overall summary as well as a breakdown by each URL in the session. Both successes and failures are highlighted so it’s easy to see what’s breaking in your load test. The report can be printed, or you can open the HTML document in your default Web Browser for printing to PDF or saving the HTML document to disk. The list on the right shows you a partial list of the URLs that were fired so you can look in detail at the request and response data. The list can be filtered by success and failure requests. Each list is partial only (at the moment) and limited to a max of 1000 items in order to render reasonably quickly. Each item in the list can be clicked to see the full request and response data: This is particularly useful for errors so you can quickly see and copy what request data was used, and in the case of a GET request you can also just click the link to quickly jump to the page. For non-GET requests you can find the URL in the Session list, and use the context menu to Test the URL as configured including any HTTP content data to send. You get to see the full HTTP request and response as well as a link in the Request header to go visit the actual page. Not so useful for a POST as above, but definitely useful for GET requests. Finally you can also get a few charts. The most useful one is probably the Requests per Second chart, which can be accessed from the Charts menu or shortcut. Here’s what it looks like: Results can also be exported to JSON, XML and HTML. Keep in mind that these files can get very large rather quickly though, so exports can end up taking a while to complete. Command Line Interface WebSurge runs with a small core load engine and this engine is plugged into the front end application I’ve shown so far. There’s also a command line interface available to run WebSurge from the Windows command prompt. Using the command line you can run tests for either an individual URL (similar to AB.exe for example) or a full Session file. By default when it runs, WebSurgeCli shows progress every second showing the total request count, failures and the requests per second for the entire test. A silent option can turn off this progress display and display only the results. The command line interface can be useful for build integration, which allows checking for failures or hitting a specific requests per second count, for example. It’s also nice to use this as a quick and dirty URL test facility, similar to the way you’d use Apache Bench (ab.exe). Unlike ab.exe though, WebSurgeCli supports SSL and makes it much easier to create multi-URL tests using either manual editing or the WebSurge UI. 
Current Status Currently West Wind WebSurge is still in Beta status. I’m still adding small new features and tweaking the UI in an attempt to make it as easy and self-explanatory as possible to run. Documentation for the UI and specialty features is also still a work in progress. I plan on open-sourcing this product, but it won’t be free. There’s a free version available that provides a limited number of threads and request URLs to run. A relatively low cost license removes the thread and request limitations. Pricing info can be found on the Web site – there’s an introductory price which is $99 at the moment, which I think is reasonable compared to most other for-pay solutions out there that are exorbitant by comparison… The reason the code is not available yet is – well, the UI portion of the app is a bit embarrassing in its current monolithic state. The UI started as a very simple interface originally that later got a lot more complex – yeah, that never happens, right? Unless there’s a lot of interest I don’t foresee re-writing the UI entirely (which would be ideal), but in the meantime at least some cleanup is required before I dare to publish it :-). The code will likely be released with version 1.0. I’m very interested in feedback. Do you think this could be useful to you and provide value over other tools you may or may not have used before? I hope so – it has already provided a ton of value for me and the work I do, which made the development worthwhile at this point. You can leave a comment below, or for more extensive discussions you can post a message on the West Wind Message Board in the WebSurge section. Microsoft MVPs and Insiders get a free License If you’re a Microsoft MVP or a Microsoft Insider you can get a full license for free. Send me a link to your current, official Microsoft profile and I’ll send you a not-for-resale license. Send any messages to [email protected]. Resources For more info on WebSurge and to download it to try it out, use the following links. West Wind WebSurge Home Download West Wind WebSurge Getting Started with West Wind WebSurge Video © Rick Strahl, West Wind Technologies, 2005-2014. Posted in ASP.NET

    Read the article

  • MVC Portable Areas Enhancement &ndash; Embedded Resource Controller

    - by Steve Michelotti
MvcContrib contains a feature called Portable Areas which I’ve recently blogged about. In short, portable areas provide a way to distribute MVC binary components as simple .NET assemblies where the aspx/ascx files are actually compiled into the assembly as embedded resources. This is an extremely cool feature but once you start building robust portable areas, you’ll also want to be able to access other external files like CSS and JavaScript.  After my recent post suggesting portable areas be expanded to include other embedded resources, Eric Hexter asked me if I’d like to contribute the code to MvcContrib (which of course I did!). Embedded resources are stored in a case-sensitive way in .NET assemblies and the existing embedded view engine inside MvcContrib already took this into account. Obviously, we’d want the same case sensitivity handling to be taken into account for any embedded resource, so my job consisted of 1) adding the Embedded Resource Controller, and 2) a little refactoring to extract the logic that deals with embedded resources so that the embedded view engine and the embedded resource controller could both leverage it and, therefore, keep the code DRY. The embedded resource controller targets these scenarios: External image files that are referenced in an <img> tag External files referenced like CSS or JavaScript files Image files referenced inside CSS files Embedded Resources Walkthrough This post will describe a walkthrough of using the embedded resource controller in your portable areas to include the scenarios outlined above. I will build a trivial “Quick Links” widget to illustrate the concepts. The portable area registration is the starting point for all portable areas. The MvcContrib.PortableAreas.EmbeddedResourceController is optional functionality – you must opt in if you want to use it.  To do this, you simply “register” it by providing a route in your area registration that uses it like this: 1: context.MapRoute("ResourceRoute", "quicklinks/resource/{resourceName}", 2: new { controller = "EmbeddedResource", action = "Index" }, 3: new string[] { "MvcContrib.PortableAreas" }); First, notice that I can specify any route I want (e.g., “quicklinks/resources/…”).  Second, notice that I need to include the “MvcContrib.PortableAreas” namespace as the fourth parameter so that the framework is able to find the EmbeddedResourceController at runtime. The handling of embedded views and embedded resources has now been merged.  Therefore, the call to: 1: RegisterTheViewsInTheEmmeddedViewEngine(GetType()); has now been removed (breaking change).  It has been replaced with: 1: RegisterAreaEmbeddedResources(); Other than that, the portable area registration remains unchanged. The solution structure for the static files in my portable area looks like this: I’ve got a CSS file in a folder called “Content” as well as a couple of image files in a folder called “images”. To reference these in my aspx/ascx code, all I have to do is this: 1: <link href="<%= Url.Resource("Content.QuickLinks.css") %>" rel="stylesheet" type="text/css" /> 2: <img src="<%= Url.Resource("images.globe.png") %>" /> This results in the following HTML markup: 1: <link href="/quicklinks/resource/Content.QuickLinks.css" rel="stylesheet" type="text/css" /> 2: <img src="/quicklinks/resource/images.globe.png" /> The Url.Resource() method is now included in MvcContrib as well. Make sure you import the “MvcContrib” namespace in your views. 
Next, I have the following HTML to render the quick links: 1: <ul class="links"> 2: <li><a href="http://www.google.com">Google</a></li> 3: <li><a href="http://www.bing.com">Bing</a></li> 4: <li><a href="http://www.yahoo.com">Yahoo</a></li> 5: </ul> Notice the <ul> tag has a class called “links”. This is defined inside my QuickLinks.css file and looks like this: 1: ul.links li 2: { 3: background: url(/quicklinks/resource/images.navigation.png) left 4px no-repeat; 4: padding-left: 20px; 5: margin-bottom: 4px; 6: } On line 3 we’re able to refer to the url for the background property. As a final note, although we already have complete control over the location of the embedded resources inside the assembly, what if we also want control over the physical URL routes? This point was raised by John Nelson in this post. This has been taken into account as well. For example, suppose you want your physical url to look like this: 1: <img src="/quicklinks/images/globe.png" /> instead of the corresponding URL shown above (i.e., “/quicklinks/resources/images.globe.png”). You can do this easily by specifying another route for it which includes a “resourcePath” parameter that is pre-pended. Here is the complete code for the area registration with the custom route for the images shown on lines 9-11: 1: public class QuickLinksRegistration : PortableAreaRegistration 2: { 3: public override void RegisterArea(System.Web.Mvc.AreaRegistrationContext context, IApplicationBus bus) 4: { 5: context.MapRoute("ResourceRoute", "quicklinks/resource/{resourceName}", 6: new { controller = "EmbeddedResource", action = "Index" }, 7: new string[] { "MvcContrib.PortableAreas" }); 8:   9: context.MapRoute("ResourceImageRoute", "quicklinks/images/{resourceName}", 10: new { controller = "EmbeddedResource", action = "Index", resourcePath = "images" }, 11: new string[] { "MvcContrib.PortableAreas" }); 12:   13: context.MapRoute("quicklink", "quicklinks/{controller}/{action}", 14: new {controller = "links", action = "index"}); 15:   16: this.RegisterAreaEmbeddedResources(); 17: } 18:   19: public override string AreaName 20: { 21: get 22: { 23: return "QuickLinks"; 24: } 25: } 26: } The Quick Links portable area results in the following requests (including custom route formats): The complete code for this post is now included in the Portable Areas sample solution in the latest MvcContrib source code. You can get the latest code now.  Portable Areas open up exciting new possibilities for MVC development!

    Read the article

  • Ajax Control Toolkit Now Supports jQuery

    - by Stephen.Walther
I’m excited to announce the September 2013 release of the Ajax Control Toolkit, which now supports building new Ajax Control Toolkit controls with jQuery. You can download the latest release of the Ajax Control Toolkit from http://AjaxControlToolkit.CodePlex.com or you can install the Ajax Control Toolkit directly within Visual Studio by executing the following NuGet command: The New jQuery Extender Base Class This release of the Ajax Control Toolkit introduces a new jQueryExtender base class. This new base class enables you to create Ajax Control Toolkit controls with jQuery instead of the Microsoft Ajax Library. Currently, only one control in the Ajax Control Toolkit has been rewritten to use the new jQueryExtender base class (only one control has been jQueryized). The ToggleButton control is the first of the Ajax Control Toolkit controls to undergo this dramatic transformation. All of the other controls in the Ajax Control Toolkit are written using the Microsoft Ajax Library. We hope to gradually rewrite these controls as jQuery controls over time. You can view the new jQuery ToggleButton live at the Ajax Control Toolkit sample site: http://www.asp.net/ajaxLibrary/AjaxControlToolkitSampleSite/ToggleButton/ToggleButton.aspx Why are we rewriting Ajax Control Toolkit controls with jQuery? There are very few developers actively working with the Microsoft Ajax Library while there are thousands of developers actively working with jQuery. Because we want talented developers in the community to continue to contribute to the Ajax Control Toolkit, and because almost all JavaScript developers are familiar with jQuery, it makes sense to support jQuery with the Ajax Control Toolkit. Also, we believe that the Ajax Control Toolkit is a great framework for Web Forms developers who want to build new ASP.NET controls that use JavaScript. The Ajax Control Toolkit has great features such as automatic bundling, minification, caching, and compression. We want to make it easy for ASP.NET developers to build new controls that take advantage of these features. Instantiating Controls with data-* Attributes We took advantage of the new jQueryExtender base class to change the way that Ajax Control Toolkit controls are instantiated. In the past, adding an Ajax Control Toolkit control to a page resulted in inline JavaScript being injected into the page. For example, adding the ToggleButton control to a page injected the following HTML and script: <input id="ctl00_SampleContent_CheckBox1" name="ctl00$SampleContent$CheckBox1" type="checkbox" checked="checked" /> <script type="text/javascript"> //<![CDATA[ Sys.Application.add_init(function() { $create(Sys.Extended.UI.ToggleButtonBehavior, {"CheckedImageAlternateText":"Check", "CheckedImageUrl":"ToggleButton_Checked.gif", "ImageHeight":19, "ImageWidth":19, "UncheckedImageAlternateText":"UnCheck", "UncheckedImageUrl":"ToggleButton_Unchecked.gif", "id":"ctl00_SampleContent_ToggleButtonExtender1"}, null, null, $get("ctl00_SampleContent_CheckBox1")); }); //]]> </script> Notice the call to the JavaScript $create() method at the bottom of the page. When using the Microsoft Ajax Library, this call to the $create() method is necessary to create the Ajax Control Toolkit control. This inline script looks pretty ugly to a modern JavaScript developer. Inline script! Horrible! 
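    For orientation, the server-side markup that drives this is a standard extender hookup. The sketch below is assembled from the property values visible in the $create() call above; the ajaxToolkit tag prefix and the TargetControlID wiring are conventional extender usage rather than markup quoted from this post:

    <asp:CheckBox ID="CheckBox1" runat="server" Checked="true" />
    <ajaxToolkit:ToggleButtonExtender ID="ToggleButtonExtender1" runat="server"
        TargetControlID="CheckBox1"
        ImageWidth="19" ImageHeight="19"
        CheckedImageUrl="ToggleButton_Checked.gif"
        UncheckedImageUrl="ToggleButton_Unchecked.gif"
        CheckedImageAlternateText="Check"
        UncheckedImageAlternateText="UnCheck" />

    The markup stays the same whichever script engine backs the extender; only the rendered client-side output changes, as the jQuery version below shows.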
The jQuery version of the ToggleButton injects the following HTML and script into the page: <input id="ctl00_SampleContent_CheckBox1" name="ctl00$SampleContent$CheckBox1" type="checkbox" checked="checked" data-act-togglebuttonextender="imageWidth:19, imageHeight:19, uncheckedImageUrl:'ToggleButton_Unchecked.gif', checkedImageUrl:'ToggleButton_Checked.gif', uncheckedImageAlternateText:'I don&#39;t understand why you don&#39;t like ASP.NET', checkedImageAlternateText:'It&#39;s really nice to hear from you that you like ASP.NET'" /> Notice that there is no script! There is no call to the $create() method. In fact, there is no inline JavaScript at all. The jQuery version of the ToggleButton uses an HTML5 data-* attribute instead of an inline script. The ToggleButton control is instantiated with a data-act-togglebuttonextender attribute. Using data-* attributes results in much cleaner markup (you don’t need to feel embarrassed when selecting View Source in your browser). Ajax Control Toolkit versus jQuery So in a jQuery world why is the Ajax Control Toolkit needed at all? Why not just use jQuery plugins instead of the Ajax Control Toolkit? For example, there are lots of jQuery ToggleButton plugins floating around the Internet. Why not just use one of these jQuery plugins instead of using the Ajax Control Toolkit ToggleButton control? There are three main reasons why the Ajax Control Toolkit continues to be valuable in a jQuery world: Ajax Control Toolkit controls run on both the server and client. jQuery plugins are client only. A jQuery plugin does not include any server-side code. If you need to perform any work on the server – think of the AjaxFileUpload control – then you can’t use a pure jQuery solution. Ajax Control Toolkit controls provide a better Visual Studio experience. You don’t get any design time experience when you use jQuery plugins within Visual Studio. Ajax Control Toolkit controls, on the other hand, are designed to work with Visual Studio. For example, you can use the Visual Studio Properties window to set Ajax Control Toolkit control properties. Ajax Control Toolkit controls shield you from working with JavaScript. I like writing code in JavaScript. However, not all developers like JavaScript and some developers want to completely avoid writing any JavaScript code at all. The Ajax Control Toolkit enables you to take advantage of JavaScript (and the latest features of HTML5) in your ASP.NET Web Forms websites without writing a single line of JavaScript. Better ToolkitScriptManager Documentation With this release, we have added more detailed documentation for using the ToolkitScriptManager. In particular, we added documentation that describes how to take advantage of the new bundling, minification, compression, and caching features of the Ajax Control Toolkit. The ToolkitScriptManager documentation is part of the Ajax Control Toolkit sample site and it can be read here: http://www.asp.net/ajaxLibrary/AjaxControlToolkitSampleSite/ToolkitScriptManager/ToolkitScriptManager.aspx Other Fixes This release of the Ajax Control Toolkit includes several important bug fixes. For example, the Ajax Control Toolkit Twitter control was completely rewritten with this release. Twitter is in the process of retiring the first version of their API. You can read about their plans here: https://dev.twitter.com/blog/planning-for-api-v1-retirement We completely rewrote the Ajax Control Toolkit Twitter control to use the new Twitter API. 
To take advantage of the new Twitter API, you must get a key and access token from Twitter and add the key and token to your web.config file. Detailed instructions for using the new version of the Ajax Control Toolkit Twitter control can be found here: http://www.asp.net/ajaxLibrary/AjaxControlToolkitSampleSite/Twitter/Twitter.aspx   Summary We’ve made some really great changes to the Ajax Control Toolkit over the last two releases to modernize the toolkit. In the previous release, we updated the Ajax Control Toolkit to use a better bundling, minification, compression, and caching system. With this release, we updated the Ajax Control Toolkit to support jQuery. We also continue to update the Ajax Control Toolkit with important bug fixes. I hope you like these changes and I look forward to hearing your feedback.

    Read the article

  • How to use jQuery Date Range Picker plugin in asp.net

    - by alaa9jo
I came across this page: http://www.filamentgroup.com/lab/date_range_picker_using_jquery_ui_16_and_jquery_ui_css_framework/ and let me tell you, this is one of the best and coolest daterangepickers on the web in my opinion; they did a great job with extending the original jQuery UI DatePicker. Of course I made enhancements to the original plugin (fixed a few bugs) and added a new option (Clear) to clear the textbox. In this article I will use that updated plugin and show you how to use it in ASP.NET... you will definitely like it. So, what do I need? 1- jQuery library: you can use 1.3.2 or 1.4.2, which is the latest version so far; in my article I will use the latest version. 2- jQuery UI library (1.8): As I mentioned earlier, the daterangepicker plugin is based on the original jQuery UI DatePicker, so that library should be included in your page. 3- jQuery DateRangePicker plugin: you can go to the author page or use the modified one (it’s included in the attachment); in this article I will use the modified one. 4- Visual Studio 2005 or later: very funny :D, in my article I will use VS 2008. Note: in the attachment, I included all CSS and JS files so don’t worry. How to use it? First thing, you will have to include all of the CSS and JS files into your page like this: <script src="Scripts/jquery-1.4.2.min.js" type="text/javascript"></script> <script src="Scripts/jquery-ui-1.8.custom.min.js" type="text/javascript"></script> <script src="Scripts/daterangepicker.jQuery.js" type="text/javascript"></script> <link href="CSS/redmond/jquery-ui-1.8.custom.css" rel="stylesheet" type="text/css" /> <link href="CSS/ui.daterangepicker.css" rel="stylesheet" type="text/css" /> <style type="text/css"> .ui-daterangepicker { font-size: 10px; } </style> Then add this HTML: <asp:TextBox ID="TextBox1" runat="server" Font-Size="10px"></asp:TextBox><asp:Button ID="SubmitButton" runat="server" Text="Submit" OnClick="SubmitButton_Click" /> <span>First Date:</span><asp:Label ID="FirstDate" runat="server"></asp:Label> <span>Second Date:</span><asp:Label ID="SecondDate" runat="server"></asp:Label> As you can see, it includes TextBox1, which we are going to attach the daterangepicker to, and two labels that the code below uses to show how to read the dates from the textbox and display them. Now we have to attach the daterangepicker to the textbox by using jQuery (Note: visit the author’s website for more info on daterangepicker’s options and how to use them): <script type="text/javascript"> $(function() { $("#<%= TextBox1.ClientID %>").attr("readonly", "readonly"); $("#<%= TextBox1.ClientID %>").attr("unselectable", "on"); $("#<%= TextBox1.ClientID %>").daterangepicker({ presetRanges: [], arrows: true, dateFormat: 'd M, yy', clearValue: '', datepickerOptions: { changeMonth: true, changeYear: true} }); }); </script> Finally, add this C# code: protected void SubmitButton_Click(object sender, EventArgs e) { if (TextBox1.Text.Trim().Length == 0) { return; } string selectedDate = TextBox1.Text; if (selectedDate.Contains("-")) { DateTime startDate; DateTime endDate; string[] splittedDates = selectedDate.Split("-".ToCharArray(), StringSplitOptions.RemoveEmptyEntries); if (splittedDates.Count() == 2 && DateTime.TryParse(splittedDates[0], out startDate) && DateTime.TryParse(splittedDates[1], out endDate)) { FirstDate.Text = startDate.ToShortDateString(); SecondDate.Text = endDate.ToShortDateString(); } else { //maybe the client has modified/altered the input i.e. 
hacking tools } } else { DateTime selectedDateObj; if (DateTime.TryParse(selectedDate, out selectedDateObj)) { FirstDate.Text = selectedDateObj.ToShortDateString(); SecondDate.Text = string.Empty; } else { //maybe the client has modified/altered the input i.e. hacking tools } } } That’s how you read the dates from the textbox. That’s it! FAQ: 1- Why did you add this code?: <style type="text/css"> .ui-daterangepicker { font-size: 10px; } </style> A: For two reasons: 1) To show the daterangepicker in a smaller size, because its original size is huge. 2) To show you how to control the size of it. 2- Can I change the theme? A: Yes you can. You will notice that I’m using the Redmond theme, which you will find on the jQuery UI website; visit their website and download a different theme. You may also have to make modifications to the CSS of the daterangepicker – it’s all yours. 3- Why did you add a font size to the textbox? A: To make the design look better; try to remove it and see for yourself. 4- Can I register the script at codebehind? A: Yes you can. 5- I see you have added these two lines, what do they do? $("#<%= TextBox1.ClientID %>").attr("readonly", "readonly"); $("#<%= TextBox1.ClientID %>").attr("unselectable", "on"); A: The first line makes the textbox not editable by the user; the second blocks the blinking typing cursor from appearing if the user clicks on the textbox. You will notice that both lines need to be used together – you can’t just use one of them... for logical reasons of course. Finally, I hope everyone liked the article, and as always, your feedback is always welcome. If anyone has any suggestions or made any modifications that might be useful for anyone else, then please post it at the author’s website and post a reference to your post here.
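    Regarding FAQ item 4 above (registering the script from code-behind): a minimal sketch, assuming the same TextBox1 and daterangepicker options shown earlier, could build the initialization script in Page_Load and register it with ClientScript.RegisterStartupScript – the "DateRangePickerInit" key name is arbitrary:

    protected void Page_Load(object sender, EventArgs e)
    {
        // Build the same initialization script shown earlier, resolving the
        // textbox's client id on the server side.
        string script =
            "$(function() {" +
            "  $('#" + TextBox1.ClientID + "').attr('readonly', 'readonly');" +
            "  $('#" + TextBox1.ClientID + "').attr('unselectable', 'on');" +
            "  $('#" + TextBox1.ClientID + "').daterangepicker({" +
            "    presetRanges: [], arrows: true, dateFormat: 'd M, yy', clearValue: ''," +
            "    datepickerOptions: { changeMonth: true, changeYear: true }" +
            "  });" +
            "});";

        // The last argument wraps the script in <script> tags for us.
        ClientScript.RegisterStartupScript(this.GetType(), "DateRangePickerInit", script, true);
    }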

    Read the article

  • Silverlight 4 + RIA Services - Ready for Business: Exposing OData Services

OData is an emerging set of extensions for the ATOM protocol that makes it easier to share data over the web. To show off OData in RIA Services, let’s continue our series. We think it is very interesting to expose OData from a DomainService to facilitate data sharing. For example, I might want users to be able to access my data in a rich way in Excel as well as my custom Silverlight client. I’d like to be able to enable that without writing...
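    The excerpt is cut off above, but for orientation, the usual shape of this in WCF RIA Services is a DomainService whose default query is surfaced through an OData endpoint. The sketch below is an assumption-based outline rather than code from the article: the NorthwindEntities/Product entity model is hypothetical, and the matching web.config step (registering an "OData" domain-service endpoint) is omitted because its exact type and assembly string depend on the toolkit version installed:

    using System.Linq;
    using System.ServiceModel.DomainServices.EntityFramework;
    using System.ServiceModel.DomainServices.Hosting;
    using System.ServiceModel.DomainServices.Server;

    [EnableClientAccess]
    public class CatalogDomainService : LinqToEntitiesDomainService<NorthwindEntities>
    {
        // Marking the query as the default is what allows an OData endpoint to expose it as a feed.
        [Query(IsDefault = true)]
        public IQueryable<Product> GetProducts()
        {
            return this.ObjectContext.Products.OrderBy(p => p.ProductName);
        }
    }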

    Read the article

  • Easily Tweak Windows 7 and Vista by Adding Tabs to Explorer, Creating Context Menu Entries, and More

    - by Lori Kaufman
7Plus is a very useful, free tool for Windows 7 and Vista that adds a lot of features to Windows, such as the ability to add tabs to Windows Explorer, set up hotkeys for common tasks, and other settings to make working with Windows easier. 7Plus is powered by AutoHotkey and allows most of the features to be fully customized. You can also create your own features by creating custom events. 7Plus does not need to be installed. Simply extract the files from the .zip file you downloaded (see the link at the end of this article) and double-click on the 7plus.exe file.

    Read the article

  • Integrating WIF with WCF Data Services

A while ago I discussed how a custom REST Starter Kit interceptor could be used to parse a SAML token in the HTTP Authorization header and wrap it into a ClaimsPrincipal that the WCF services could use. The thing is, that code was initially created for the Geneva framework, so it got deprecated quickly. I recently needed that piece of code for one of the projects I am currently working on, so I decided to update it for WIF. As this interceptor can be injected in any host for WCF REST services, also...

    Read the article

  • Running OWSM WLST commands - 11g

    - by Prakash Yamuna
It has been some time since I posted any materials. I hope to have more tips and best practices, but here is a quick thing that people seem to trip up on..."How to Run OWSM related WLST commands". The most common issue people seem to run into is trying to run the OWSM WLST commands from the wrong location. You need to run the WLST commands from "oracle_common/common/bin/wlst.sh" (ex: /home/Oracle/Middleware/oracle_common/common/bin/wlst.sh). This is documented in the OWSM doc (Security And Administrator's Guide) under the section "Accessing the Web Services Custom WLST Commands". Note: This location is different from the location from which you run the SOA WLST commands. If you try to run an OWSM WLST command from the wrong location, you will see errors like the following: "The MBean ,@ oracle.wsm:*,name=WSMDocumentManager,type=Repository was not found"

    Read the article

  • Securing an ASP.NET MVC 2 Application

    - by rajbk
This post attempts to look at some of the methods that can be used to secure an ASP.NET MVC 2 Application called Northwind Traders Human Resources.  The sample code for the project is attached at the bottom of this post. We are going to use a slightly modified Northwind database. The screen capture from SQL Server Management Studio shows the change. I added a new column called Salary, inserted some random salaries for the employees and then turned off AllowNulls.   The reporting relationship for Northwind Employees is shown below.   The requirements for our application are as follows: Employees can see their LastName, FirstName, Title, Address and Salary Employees are allowed to edit only their Address information Employees can see the LastName, FirstName, Title, Address and Salary of their immediate reports Employees cannot see records of non-immediate reports.  Employees are allowed to edit only the Salary and Title information of their immediate reports. Employees are not allowed to edit the Address of an immediate report Employees should be authenticated into the system. Employees by default get the “Employee” role. If a user has direct reports, they will also get assigned a “Manager” role. We use a very basic empId/pwd scheme of EmployeeID (1-9) and password test$1. You should never do this in an actual application. The application should protect from Cross Site Request Forgery (CSRF). For example, Michael could trick Steven, who is already logged on to the HR website, into loading a page which contains a malicious request, where, without Steven’s knowledge, a form on the site posts information back to the Northwind HR website using Steven’s credentials. Michael could use this technique to give himself a raise :-) UI Notes The layout of our app looks like so: When Nancy (EmpID 1) signs on, she sees the default page with her details and is allowed to edit her address. If Nancy attempts to view the record of employee Andrew who has an employeeID of 2 (Employees/Edit/2), she will get a “Not Authorized” error page. When Andrew (EmpID 2) signs on, he can edit the address field of his record and change the title and salary of employees that directly report to him. Implementation Notes All controllers inherit from a BaseController. The BaseController currently only has error handling code. When a user signs on, we check to see if they are in a Manager role. We then create a FormsAuthenticationTicket, encrypt it (including the roles that the employee belongs to) and add it to a cookie. private void SetAuthenticationCookie(int employeeID, List<string> roles) { HttpCookiesSection cookieSection = (HttpCookiesSection) ConfigurationManager.GetSection("system.web/httpCookies"); AuthenticationSection authenticationSection = (AuthenticationSection) ConfigurationManager.GetSection("system.web/authentication"); FormsAuthenticationTicket authTicket = new FormsAuthenticationTicket( 1, employeeID.ToString(), DateTime.Now, DateTime.Now.AddMinutes(authenticationSection.Forms.Timeout.TotalMinutes), false, string.Join("|", roles.ToArray())); String encryptedTicket = FormsAuthentication.Encrypt(authTicket); HttpCookie authCookie = new HttpCookie(FormsAuthentication.FormsCookieName, encryptedTicket); if (cookieSection.RequireSSL || authenticationSection.Forms.RequireSSL) { authCookie.Secure = true; } HttpContext.Current.Response.Cookies.Add(authCookie); } We read this cookie back in Global.asax and set the Context.User to be a new GenericPrincipal with the roles we assigned earlier. 
protected void Application_AuthenticateRequest(Object sender, EventArgs e){ if (Context.User != null) { string cookieName = FormsAuthentication.FormsCookieName; HttpCookie authCookie = Context.Request.Cookies[cookieName]; if (authCookie == null) return; FormsAuthenticationTicket authTicket = FormsAuthentication.Decrypt(authCookie.Value); string[] roles = authTicket.UserData.Split(new char[] { '|' }); FormsIdentity fi = (FormsIdentity)(Context.User.Identity); Context.User = new System.Security.Principal.GenericPrincipal(fi, roles); }} We ensure that a user has permissions to view a record by creating a custom attribute AuthorizeToViewID that inherits from ActionFilterAttribute. public class AuthorizeToViewIDAttribute : ActionFilterAttribute{ IEmployeeRepository employeeRepository = new EmployeeRepository(); public override void OnActionExecuting(ActionExecutingContext filterContext) { if (filterContext.ActionParameters.ContainsKey("id") && filterContext.ActionParameters["id"] != null) { if (employeeRepository.IsAuthorizedToView((int)filterContext.ActionParameters["id"])) { return; } } throw new UnauthorizedAccessException("The record does not exist or you do not have permission to access it"); }} We add the AuthorizeToView attribute to any Action method that requires authorization. [HttpPost][Authorize(Order = 1)]//To prevent CSRF[ValidateAntiForgeryToken(Salt = Globals.EditSalt, Order = 2)]//See AuthorizeToViewIDAttribute class[AuthorizeToViewID(Order = 3)] [ActionName("Edit")]public ActionResult Update(int id){ var employeeToEdit = employeeRepository.GetEmployee(id); if (employeeToEdit != null) { //Employees can edit only their address //A manager can edit the title and salary of their subordinate string[] whiteList = (employeeToEdit.IsSubordinate) ? new string[] { "Title", "Salary" } : new string[] { "Address" }; if (TryUpdateModel(employeeToEdit, whiteList)) { employeeRepository.Save(employeeToEdit); return RedirectToAction("Details", new { id = id }); } else { ModelState.AddModelError("", "Please correct the following errors."); } } return View(employeeToEdit);} The Authorize attribute is added to ensure that only authorized users can execute that Action. We use the TryUpdateModel with a white list to ensure that (a) an employee is able to edit only their Address and (b) that a manager is able to edit only the Title and Salary of a subordinate. This works in conjunction with the AuthorizeToViewIDAttribute. The ValidateAntiForgeryToken attribute is added (with a salt) to avoid CSRF. The Order on the attributes specify the order in which the attributes are executed. The Edit View uses the AntiForgeryToken helper to render the hidden token: ......<% using (Html.BeginForm()) {%><%=Html.AntiForgeryToken(NorthwindHR.Models.Globals.EditSalt)%><%= Html.ValidationSummary(true, "Please correct the errors and try again.") %><div class="editor-label"> <%= Html.LabelFor(model => model.LastName) %></div><div class="editor-field">...... The application uses View specific models for ease of model binding. 
public class EmployeeViewModel{ public int EmployeeID; [Required] [DisplayName("Last Name")] public string LastName { get; set; } [Required] [DisplayName("First Name")] public string FirstName { get; set; } [Required] [DisplayName("Title")] public string Title { get; set; } [Required] [DisplayName("Address")] public string Address { get; set; } [Required] [DisplayName("Salary")] [Range(500, double.MaxValue)] public decimal Salary { get; set; } public bool IsSubordinate { get; set; }} To help with displaying readonly/editable fields, we use a helper method. //Simple extension method to display a TextboxFor or DisplayFor based on the isEditable variablepublic static MvcHtmlString TextBoxOrLabelFor<TModel, TProperty>(this HtmlHelper<TModel> htmlHelper, Expression<Func<TModel, TProperty>> expression, bool isEditable){ if (isEditable) { return htmlHelper.TextBoxFor(expression); } else { return htmlHelper.DisplayFor(expression); }} The helper method is used in the view like so: <%=Html.TextBoxOrLabelFor(model => model.Title, Model.IsSubordinate)%> As mentioned in this post, there is a much easier way to update properties on an object. Download Demo Project VS 2008, ASP.NET MVC 2 RTM Remember to change the connectionString to point to your Northwind DB NorthwindHR.zip Feedback and bugs are always welcome :-)

    Read the article

  • ASP.NET MVC 3: Implicit and Explicit code nuggets with Razor

    - by ScottGu
This is another in a series of posts I’m doing that cover some of the new ASP.NET MVC 3 features: New @model keyword in Razor (Oct 19th) Layouts with Razor (Oct 22nd) Server-Side Comments with Razor (Nov 12th) Razor’s @: and <text> syntax (Dec 15th) Implicit and Explicit code nuggets with Razor (today) In today’s post I’m going to discuss how Razor enables you to both implicitly and explicitly define code nuggets within your view templates, and walk through some code examples of each of them.  Fluid Coding with Razor ASP.NET MVC 3 ships with a new view-engine option called “Razor” (in addition to the existing .aspx view engine).  You can learn more about Razor, why we are introducing it, and the syntax it supports from my Introducing Razor blog post. Razor minimizes the number of characters and keystrokes required when writing a view template, and enables a fast, fluid coding workflow. Unlike most template syntaxes, you do not need to interrupt your coding to explicitly denote the start and end of server blocks within your HTML. The Razor parser is smart enough to infer this from your code. This enables a compact and expressive syntax which is clean, fast and fun to type. For example, the Razor snippet below can be used to iterate a collection of products and output a <ul> list of product names that link to their corresponding product pages: When run, the above code generates output like below: Notice above how we were able to embed two code nuggets within the content of the foreach loop.  One of them outputs the name of the Product, and the other embeds the ProductID within a hyperlink.  Notice that we didn’t have to explicitly wrap these code-nuggets - Razor was instead smart enough to implicitly identify where the code began and ended in both of these situations.  How Razor Enables Implicit Code Nuggets Razor does not define its own language.  Instead, the code you write within Razor code nuggets is standard C# or VB.  This allows you to re-use your existing language skills, and avoid having to learn a customized language grammar. The Razor parser has smarts built into it so that whenever possible you do not need to explicitly mark the end of C#/VB code nuggets you write.  This makes coding more fluid and productive, and enables a nice, clean, concise template syntax.  Below are a few scenarios that Razor supports where you can avoid having to explicitly mark the beginning/end of a code nugget, and instead have Razor implicitly identify the code nugget scope for you: Property Access Razor allows you to output a variable value, or a sub-property on a variable that is referenced via “dot” notation: You can also use “dot” notation to access sub-properties multiple levels deep: Array/Collection Indexing: Razor allows you to index into collections or arrays: Calling Methods: Razor also allows you to invoke methods: Notice how for all of the scenarios above we did not have to explicitly end the code nugget.  Razor was able to implicitly identify the end of the code block for us. 
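    The code screenshots from the original post are not reproduced in this excerpt, so here is a small, hand-written sketch of the kinds of implicit code nuggets being described – a loop, property access inside markup and inside an attribute, collection indexing, and a method call – assuming a hypothetical view Model that exposes a Products collection:

    @* Razor infers where each code nugget ends – no explicit delimiters are needed *@
    <ul>
        @foreach (var product in Model.Products) {
            <li><a href="/Products/@product.ProductID">@product.ProductName</a></li>
        }
    </ul>
    <p>First product: @Model.Products[0].ProductName</p>
    <p>Number of products: @Model.Products.Count()</p>

    In each case the parser stops the code nugget at the first character that can no longer be part of a valid expression, which is exactly the behavior the parsing algorithm described next formalizes.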
Razor’s Parsing Algorithm for Code Nuggets The below algorithm captures the core parsing logic we use to support “@” expressions within Razor, and to enable the implicit code nugget scenarios above: Parse an identifier - As soon as we see a character that isn't valid in a C# or VB identifier, we stop and move to step 2 Check for brackets - If we see "(" or "[", go to step 2.1., otherwise, go to step 3  Parse until the matching ")" or "]" (we track nested "()" and "[]" pairs and ignore "()[]" we see in strings or comments) Go back to step 2 Check for a "." - If we see one, go to step 3.1, otherwise, DO NOT ACCEPT THE "." as code, and go to step 4 If the character AFTER the "." is a valid identifier, accept the "." and go back to step 1, otherwise, go to step 4 Done! Differentiating between code and content Step 3.1 is a particularly interesting part of the above algorithm, and enables Razor to differentiate between scenarios where an identifier is being used as part of the code statement, and when it should instead be treated as static content: Notice how in the snippet above we have ? and ! characters at the end of our code nuggets.  These are both legal C# identifiers – but Razor is able to implicitly identify that they should be treated as static string content as opposed to being part of the code expression because there is whitespace after them.  This is pretty cool and saves us keystrokes. Explicit Code Nuggets in Razor Razor is smart enough to implicitly identify a lot of code nugget scenarios.  But there are still times when you want/need to be more explicit in how you scope the code nugget expression.  The @(expression) syntax allows you to do this: You can write any C#/VB code statement you want within the @() syntax.  Razor will treat the wrapping () characters as the explicit scope of the code nugget statement.  Below are a few scenarios where we could use the explicit code nugget feature: Perform Arithmetic Calculation/Modification: You can perform arithmetic calculations within an explicit code nugget: Appending Text to a Code Expression Result: You can use the explicit expression syntax to append static text at the end of a code nugget without having to worry about it being incorrectly parsed as code: Above we have embedded a code nugget within an <img> element’s src attribute.  It allows us to link to images with URLs like “/Images/Beverages.jpg”.  Without the explicit parenthesis, Razor would have looked for a “.jpg” property on the CategoryName (and raised an error).  By being explicit we can clearly denote where the code ends and the text begins. Using Generics and Lambdas Explicit expressions also allow us to use generic types and generic methods within code expressions – and enable us to avoid the <> characters in generics from being ambiguous with tag elements. One More Thing….Intellisense within Attributes We have used code nuggets within HTML attributes in several of the examples above.  One nice feature supported by the Razor code editor within Visual Studio is the ability to still get VB/C# intellisense when doing this. Below is an example of C# code intellisense when using an implicit code nugget within an <a> href=”” attribute: Below is an example of C# code intellisense when using an explicit code nugget embedded in the middle of a <img> src=”” attribute: Notice how we are getting full code intellisense for both scenarios – despite the fact that the code expression is embedded within an HTML attribute (something the existing .aspx code editor doesn’t support).  
This makes writing code even easier, and ensures that you can take advantage of intellisense everywhere. Summary Razor enables a clean and concise templating syntax that enables a very fluid coding workflow.  Razor’s ability to implicitly scope code nuggets reduces the amount of typing you need to perform, and leaves you with really clean code. When necessary, you can also explicitly scope code expressions using a @(expression) syntax to provide greater clarity around your intent, as well as to disambiguate code statements from static markup. Hope this helps, Scott P.S. In addition to blogging, I am also now using Twitter for quick updates and to share links. Follow me at: twitter.com/scottgu

    Read the article

  • Setup Remote Access in Windows Home Server

    - by Mysticgeek
One of the many awesome features of Windows Home Server is the ability to access your server and other computers on your network remotely. Today we show you the steps to enable Remote Access to your home server from anywhere you have an Internet connection. Remote Access in Windows Home Server has a lot of great features like uploading and downloading files from shared folders, accessing files from machines on your network, and controlling machines remotely (on supported OS versions). Here we take a look at the basics of setting it up, choosing a domain name, and verifying you can connect remotely. Setup Remote Access in Windows Home Server Open the Windows Home Server Console and click on Settings. Next select Remote Access; it is off by default, so just click the button to turn it on. Wait while your router is configured for remote access, and when it’s complete click Next. Notice that it will enable UPnP; if you don’t wish to have that enabled, you can manually forward the correct ports. If you have any problems with the router being automatically configured, we’ll be taking a look at a more detailed troubleshooting guide in the future. The router is successfully configured, and we can continue to the next process of configuring our domain name. The Domain Name Setup Wizard will start. Notice you will need a Windows Live ID to set it up – which is typically your hotmail address. If you don’t already have one, you can get one here. Type in your Live ID email address and password and click Next… Agree to the Home Server Privacy Statement and the Live Custom Domains Addendum. If you’re concerned about privacy and want to learn more about the domain addendum, make sure to read about it before agreeing. There is nothing abnormal to point out about either statement, but if this is your first time setting it up, it’s good to review the information.   Now choose a name for the domain. You should select something that is easy to remember and identifies your home server. The name can contain up to 63 characters, numbers, letters, and hyphens…and must begin and end with a letter or number. When you have the name figured out click the Confirm button. Note: You can only register one domain name per Live ID. If the name isn’t already taken, you’ll get a confirmation message indicating it’s good to go. The wizard is complete and you can now access the home server from the URL provided. A few other things to point out after you’ve set it up…under Domain Name click on the Details button… This pulls up the domain detail information, where you can refresh the data to verify everything is working correctly. Or you can click the Configure button and then change or release your current domain name. Under Web site settings, you can change your site page headline to whatever you want it to be. Accessing Home Server Remotely After you’ve gotten everything set up for your home server domain, you can begin to access it when you’re away from home. Simply type in the domain address you created in the previous steps. The start page is rather boring…and to start accessing your data, click the Log On button in the upper right hand corner. Then enter your home server credentials to gain access to your files, folders, and network computers. You won’t be able to log in with your administrator user account however, to protect the security of your network. Once you’re logged in, you’ll be able to access different parts of your home server shares and network computers. 
Conclusion Now that you have Remote Access set up, you should be able to access and manage your files easily. Being able to access data from your home server remotely is great when you need to get certain files while on the road. The web UI is pretty self explanatory, works best in IE as ActiveX is required, and is smooth and easy to work with. In future articles we’ll be covering a lot more regarding remote access, including more of the available features, troubleshooting connection issues, and enabling access for other users.

    Read the article

< Previous Page | 326 327 328 329 330 331 332 333 334 335 336 337  | Next Page >