Search Results

Search found 64621 results on 2585 pages for 'asp net performance'.

Page 120/2585 | < Previous Page | 116 117 118 119 120 121 122 123 124 125 126 127  | Next Page >

  • Need help converting jQuery, AJAX, JSON and ASP.NET

    - by Haja Mohaideen
    I am trying out this tutorial: http://www.ezzylearning.com/tutorial.aspx?tid=5869127. It works perfectly. What I am now trying to do is host the aspx content as an HTML file. This HTML file is hosted on my WampServer, which is on my laptop. The ASP.NET code is hosted on my test server. When I try to access it, I get the following error:

      Resource interpreted as Script but transferred with MIME type text/html: "http://201.x.x.x/testAjax/Default.aspx/AddProductToCart?callback=jQuery17103264484549872577_1346923699990&{%20pID:%20%226765%22,%20qty:%20%22100%22,%20lblType:%20%2220%22%20}&_=1346923704482"
      jquery.min.js:4 Uncaught SyntaxError: Unexpected token <

    I am not sure how to solve this problem.

    index.html code:

      $(function () {
          $('#btnAddToCart').click(function () {
              var result = $.ajax({
                  type: "POST",
                  url: "http://202.161.45.124/testAjax/Default.aspx/AddProductToCart",
                  crossDomain: true,
                  data: '{ pID: "6765", qty: "100", lblType: "20" }',
                  contentType: "application/json; charset=utf-8",
                  dataType: "jsonp",
                  success: succeeded,
                  failure: function (msg) { alert(msg); },
                  error: function (xhr, err) { alert(err); }
              });
          });
      });
      function succeeded(msg) { alert(msg.d); }
      function btnAddToCart_onclick() { }
      </script>
      </head>
      <body>
      <form name="form1" method="post">
          <div>
              <input type="button" id="btnAddToCart" onclick="return btnAddToCart_onclick()" value="Button" />
          </div>
      </form>

    aspx.vb:

      Imports System.Web.Services
      Imports System.Web.Script.Services

      <ScriptService()>
      Public Class WebForm1
          Inherits Page

          Protected Sub Page_Load(ByVal sender As Object, ByVal e As System.EventArgs) Handles Me.Load
              Session("test") = ""
          End Sub

          <WebMethod()>
          <ScriptMethod(UseHttpGet:=False, ResponseFormat:=ResponseFormat.Json)>
          Public Shared Function AddProductToCart(pID As String, qty As String, lblType As String) As String
              Dim selectedProduct As String = String.Format("+ {0} - {1} - {2}", pID, qty, lblType)
              HttpContext.Current.Session("test") += selectedProduct
              Return HttpContext.Current.Session("test").ToString()
          End Function
      End Class
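    The "Unexpected token <" error is typical of a jsonp request whose response is plain JSON or HTML rather than script wrapped in the callback; ASP.NET page methods do not emit JSONP. A minimal sketch of a callback-aware endpoint, assuming a hypothetical AddToCartHandler.ashx generic handler in place of the page method (illustrative only, not the tutorial's code):

      // AddToCartHandler.ashx.cs - hypothetical JSONP-aware endpoint (sketch, not the original page method)
      using System.Web;
      using System.Web.Script.Serialization;

      public class AddToCartHandler : IHttpHandler
      {
          public void ProcessRequest(HttpContext context)
          {
              // jQuery appends a "callback" query-string parameter to JSONP requests (which arrive as GET).
              string callback = context.Request.QueryString["callback"];
              string pID = context.Request.QueryString["pID"];
              string qty = context.Request.QueryString["qty"];
              string lblType = context.Request.QueryString["lblType"];

              string json = new JavaScriptSerializer().Serialize(
                  new { d = string.Format("+ {0} - {1} - {2}", pID, qty, lblType) });

              if (!string.IsNullOrEmpty(callback))
              {
                  // Wrap the JSON in the callback so the response is executable script, not text/html.
                  context.Response.ContentType = "application/javascript";
                  context.Response.Write(callback + "(" + json + ");");
              }
              else
              {
                  context.Response.ContentType = "application/json";
                  context.Response.Write(json);
              }
          }

          public bool IsReusable { get { return false; } }
      }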

    Read the article

  • SAN performance issues storing SQL Server tempdb on a SAN that's being backed up

    - by user42724
    I'm afraid I don't know much about SANs, so please forgive my lack of detail or technical terms. As a developer I've just completed and deployed a new application onto an existing production system, but it appears to have tipped the scales regarding the performance of the backups being taken from the SAN. As I understand it, a mirror of the SAN is being taken more or less constantly at the block level. However, there seem to be so many new writes to the disk that the SAN mirroring/backup process can no longer keep up. I believe I've narrowed this down to SQL Server's tempdb, which sits on the drive that contributes the largest portion of the problem! In fact, I think tempdb has been contributing the largest portion of the issues all along, regardless of my application. My question, therefore, is whether tempdb should ever be mirrored or backed up on the SAN, and whether anyone else has gone through this sort of pain already. I'm wondering whether it's best practice to make sure that tempdb is never mirrored on a SAN, simply because any writes to it don't need to be saved. This also raises a slightly connected question: is it better to rely on SQL Server's built-in database backup tools (DB in full recovery mode with full/differential and transaction log backups), or, as is the case with our application, to leave SQL Server in simple recovery mode and never back it up, since the SAN is mirrored and backed up? Many thanks

    Read the article

  • Performance of file operations on thousands of files on NTFS vs HFS, ext3, others

    - by peterjmag
    [Crossposted from my Ask HN post. Feel free to close it if the question's too broad for superuser.] This is something I've been curious about for years, but I've never found any good discussions on the topic. Of course, my Google-fu might just be failing me... I often deal with projects involving thousands of relatively small files. This means that I'm frequently performing operations on all of those files or a large subset of them—copying the project folder elsewhere, deleting a bunch of temporary files, etc. Of all the machines I've worked on over the years, I've noticed that NTFS handles these tasks consistently more slowly than HFS on a Mac or ext3/ext4 on a Linux box. However, as far as I can tell, the raw throughput isn't actually slower on NTFS (at least not significantly), but the delay between each individual file is just a tiny bit longer. That little delay really adds up over thousands of files. (Side note: From what I've read, this is one of the reasons git is such a pain on Windows, since it relies so heavily on the file system for its object database.) Granted, my evidence is merely anecdotal—I don't currently have any real performance numbers, but it's something that I'd love to test further (perhaps with a Mac dual-booting into Windows). Still, my geekiness insists that someone out there already has. Can anyone explain this, or perhaps point me in the right direction to research it further myself?

    Read the article

  • Outbound HTTP performance tuning recommendations

    - by Richard Gadsden
    I'll detail my exact setup below, but general recommendations for a better web-browsing experience will be useful; a nice checklist of things to try would be great! I have 600 users on a single site with an 8MB leased line. I get a lot of moans about the performance of "the internet" (i.e. web browsing). What recommendations does the community have for speeding things up without just throwing more bandwidth at it? I expect I will end up buying some more, but good management tips are always valuable. My setup is this: a Cisco PIX (515E) firewall on the edge of the network; it's just doing some basic NAT and opening up a handful of ports to various bastion hosts (aka DMZ servers). The DMZ is just a switch that the servers are plugged into. An ISA 2006 Enterprise array (two servers) connects the DMZ to the internal LAN, with WebSense Web Security filtering HTTP traffic so users can't look at porn or waste bandwidth on YouTube during working hours. I've done a few things already: I've just switched my internal DNS over to use root hints, which halved DNS query latency from 500ms to 250ms - well worth doing. I'm trying to cache more aggressively, but so much more of the internet is AJAXy and doesn't cache very well compared to five years ago; plus, the 70GB of cache that felt like a lot a few years ago really isn't any more. I'm getting about 45% cache hits by number of requests, but only about 22% by size, i.e. larger objects are less likely to be cached. Latency seems to be part of the problem. Is that attributable to the bandwidth problem, or are there things I can look at to try to reduce latency even on heavily loaded bandwidth?

    Read the article

  • Strange performance differences in read/write from/to USB flash drive

    - by Mario De Schaepmeester
    When copying files from my 8GB USB 2.0 flash drive with Windows 7 to a traditional hard drive, the average speed is between 25 and 30 MB/s. When doing the reverse, copying to the USB drive, the speed averages 5 MB/s. I have tested this with about 4.5GB of files, a mixture of smaller and larger ones. The observations were the same with both FAT32 and exFAT file systems on the USB drive, and NTFS on the internal hard disk. I don't think I can be mistaken in saying that flash memory has much higher performance than a spinning hard drive in terms of both reading and writing. For both memory types, reading should be faster than writing too. Now I wonder: how can copying files from fast-read memory to faster-write memory actually be slower than copying files from fast-read memory to slow-write memory? I think the files are stored in RAM before being copied over, and there's caching as well, but I don't see how even that could tip the balance. If anything, it could only work in favour of writing to the USB drive, since the cache is "closer" to the SATA system than to the USB port and will receive data from the internal SATA HDD faster. Perhaps my way of thinking is all wrong, or it just depends on the manufacturer of the USB pen drive. But I am curious.

    Read the article

  • Poor gaming performance with Hyper-V installed in Windows 8

    - by SnowCrash
    I am getting very poor gaming performance on my Windows 8 host OS with Hyper-V installed but no guest machines running. For example, World of Tanks reports 60-70 FPS without Hyper-V installed and 4-14 FPS with it installed. A similar, dramatic hit is observed in several other games, so the issue is not WoT-specific. To make the point clear: I am not trying to run games in a virtual machine. I don't even have a VM running while observing this effect. I simply have the Hyper-V feature installed. My system specs:

      AMD Phenom II 965 (3.4 GHz)
      AMD Radeon 6950 2GB (XFX Double D HD-695X-CDFC)
      16GB DDR3 1333
      AMD 790GX chipset mainboard (Gigabyte GA-MA790GPT-UD3H)

    I have tried every AMD driver from 12.8 to the current 12.11beta8, virtualization is enabled in the BIOS settings, the onboard 3300HD video device is disabled in the BIOS, and I have read the MSDN blog entry here regarding a similar issue in Server 2008 that was resolved in 2008 R2 (and hopefully not regressed in Win 8). I'd like to be able to use Hyper-V for development and testing at home (I am a sysadmin/software developer professionally). If, however, I can't also use my home system for entertainment, I'll have to scrap those plans.

    Read the article

  • SSD, AHCI and write performance

    - by Dan
    We've started to deploy SSD drives to our developers' workstations. At the moment we're having the unpleasant surprise that the systems using the new SSDs often freeze, with the HDD activity LED blinking or continuously on. Benchmarks show read speeds around 180 MB/s, but write speeds around 5 MB/s. All developers are using Windows 7 Enterprise, 64-bit, SP1. One of our developers suggested (based on his experience) the following sequence: back up the workstation, use a tool to completely erase the SSD, make sure AHCI is enabled in the BIOS, install Windows, restore from backup. So far this procedure seems to work (we're still testing, but write speed seems to be 120 MB/s). There are some questions in this context: Why do we have to completely reinstall Windows? Is it possible to clean the SSD without reinstalling Windows? Is there a reliable tool? If AHCI was disabled when Windows was installed and we enable it afterwards, shouldn't that be enough to correct the write performance issue? If we have to completely erase the SSDs, does this mean the SSDs we received were used before (second-hand)? I'm wondering this because the package I got was open (I didn't think about it at the time, as I assumed one of my coworkers had simply taken a peek inside the package). Has anyone seen a similar problem before?

    Read the article

  • Poor Write Performance in VM inside Proxmox PVE 2.0

    - by sorsenne
    I am running PVE 2.0 on decent hardware (2 SATA HDDs as RAID1, 12GB RAM, i7 CPU), but the I/O performance is very poor inside the VM (Ubuntu 11.10 Server). The very same VM was copied to another server running plain Ubuntu Server with KVM and had better I/O performance. This is how the HDD appears in the guest:

      ata1: SATA link up 6.0 Gbps (SStatus 133 SControl 300)
      ata1.00: ATA-8: ST3000DM001-9YN166, CC49, max UDMA/133
      ata1.00: 5860533168 sectors, multi 16: LBA48 NCQ (depth 31/32), AA
      ata1.00: configured for UDMA/133
      scsi 0:0:0:0: Direct-Access ATA ST3000DM001-9YN1 CC49 PQ: 0 ANSI: 5
      sd 0:0:0:0: [sda] 5860533168 512-byte logical blocks: (3.00 TB/2.72 TiB)
      sd 0:0:0:0: [sda] 4096-byte physical blocks
      sd 0:0:0:0: [sda] Write Protect is off
      sd 0:0:0:0: [sda] Mode Sense: 00 3a 00 00
      sd 0:0:0:0: [sda] Write cache: enabled, read cache: enabled, doesn't support DPO or FUA

    I tested with dd:

      $ dd bs=1M count=128 if=/dev/zero of=test conv=fdatasync
      128+0 records in
      128+0 records out
      134217728 bytes (134 MB) copied, 19.2222 s, 7.0 MB/s

    On the host, the same test averages 156 MB/s. PS: I am using VirtIO and see no errors in dmesg.

    Read the article

  • Very poor SCSI hd performance on IBM x336 with LSI 1030 RAID1

    - by David Tschoepe
    I'm experiencing very poor performance on an IBM x336 server with dual 73GB 15k hard drives on a U320 controller (LSI 1030). We're getting maybe 3.5MB/sec max (per the HD Tune utility). It should be at least 100MB/sec, I would think (another x335 box is running 70-80MB/sec). The server was recently set up and I didn't really notice the problem at first, but it may have been there from the beginning, so I'm not sure. I have installed the IBM ServeRAID Windows utility. The server is running Windows 2008 R2 Web edition (if that matters). I thought maybe one of the drives was bad, so far I have removed one of the drives from the array and tested again, but still the same results. I'm waiting for the RAID1 to resync and I will try pulling the other drive next. I've also used the ServeRAID utility but haven't noticed anything in there that might indicate a problem. Not sure if I'm on the right path here, so I'm looking for some advice to track this down.

    Read the article

  • .htaccess performance tuning

    - by purpler
    Here's my .htaccess template; I wonder if anything could be added to increase website performance.

      # Defaults
      AddDefaultCharset UTF-8
      DefaultLanguage en-US
      ServerSignature Off
      FileETag None
      Header unset ETag
      Options -MultiViews
      #Options All -Indexes

      # Force the latest IE version or ChromeFrame
      <IfModule mod_setenvif.c>
          <IfModule mod_headers.c>
              BrowserMatch MSIE ie
              Header set X-UA-Compatible "IE=Edge,chrome=1" env=ie
          </IfModule>
      </IfModule>

      # Proxy X-UA Setup
      <IfModule mod_headers.c>
          Header append Vary User-Agent
      </IfModule>

      # Rewrites
      Options +FollowSymlinks
      RewriteEngine On
      RewriteBase /

      # Redirect to non-WWW
      RewriteCond %{HTTPS} !=on
      RewriteCond %{HTTP_HOST} ^www\.(.+)$ [NC]
      RewriteRule ^(.*)$ http://%1/$1 [R=301,L]

      # Redirect to WWW
      RewriteCond %{HTTP_HOST} ^domain.com
      RewriteRule (.*) http://www.domain.com/$1 [R=301,L]

      # Redirect index to root
      RewriteRule ^(.*)index\.(php|html)$ /$1 [R=301,L]

      # Caching
      ExpiresActive On
      ExpiresDefault A0
      Header set Cache-Control "public"

      # 1 Year Long Cache
      <FilesMatch "\.(flv|fla|ico|pdf|avi|mov|ppt|doc|mp3|wmv|wav|png|jpg|jpeg|gif|swf|js|css|ttf|eot|woff|svg|svgz)$">
          ExpiresDefault A31622400
      </FilesMatch>

      # Proxy Caching
      <FilesMatch "\.(css|js|png)$">
          ExpiresDefault A31622400
          Header set Cache-Control "private"
      </FilesMatch>

      # Protect against DOS attacks by limiting file upload size
      LimitRequestBody 10240000

      # Proper SVG serving
      AddType image/svg+xml svg svgz
      AddEncoding gzip svgz

      # GZip Compression
      <IfModule mod_deflate.c>
          <FilesMatch "\.(php|html|css|js|xml|txt|ttf|otf|eot|svg)$">
              SetOutputFilter DEFLATE
          </FilesMatch>
      </IfModule>

      # Error page
      ErrorDocument 404 /404.html

      # Deny access to sensitive files
      <FilesMatch "\.(htaccess|ini|log|psd)$">
          Order Allow,Deny
          Deny from all
      </FilesMatch>

    Read the article

  • Slow performance of MySQL database on one server and fast on another one, with similar configurations

    - by Alon_A
    We have a web application that runs on two GoDaddy servers. We experience slow performance on our production server, although it has stronger hardware than the testing one, and it is dedicated. I'll start with the configurations.

    Testing: CentOS Linux 5.8, Linux 2.6.18-028stab101.1 on i686; Intel(R) Xeon(R) CPU L5609 @ 1.87GHz, 8 cores; 60 GB total, 6.03 GB used; Apache/2.2.3 (CentOS); MySQL 5.5.21-log; PHP 5.3.15

    Production: CentOS Linux 6.2, Linux 2.6.18-028stab101.1 on x86_64; Intel(R) Xeon(R) CPU L5410 @ 2.33GHz, 8 cores; 120 GB total, 2.12 GB used; Apache/2.2.15 (CentOS); MySQL 5.5.27-log - MySQL Community Server (GPL) by Remi; PHP 5.3.15

    We are running the same code on both servers. The problem: we have a function that executes ~30000 PDO exec commands. On our testing server it takes about 1.5-2 minutes to complete; on our production server it can take more than 15 minutes to complete, as the qcachegrind profile shows. Researching the problem, we checked the live graphs in phpMyAdmin and discovered that the MySQL server on the testing machine was performing at a steady level of 1000 execution statements per 2 seconds, while the slow production MySQL server was doing only 250 execution statements per 2 seconds, and not steadily at all, jumping between 0 and 250 every second; the graphs for the testing and production servers show this clearly. Comparing the configurations of the two MySQL servers (the fast testing one on the left, the slow production one on the right), the differences are highlighted, but I can't find anything that could cause such a difference in behaviour, as the configs are mostly the same. Maybe you can see something that I can't. Note that our tables are all InnoDB, so the MyISAM differences are (probably) not relevant. Maybe it is the MySQL Community Server (GPL) build installed on the production server that causes the slow performance? Or maybe it needs to be configured differently for 64-bit? I'm currently out of ideas...

    Read the article

  • Photoshop CS5 performance over network drive (cifs)

    - by grub
    Hello everyone. I installed a QNAP TS-410 NAS for a customer (a professional photographer), with 3 Hitachi Deskstar 7200rpm 2TB disks configured as RAID5. The NAS and the workstations are connected over a gigabit network. He and his co-worker access the photos (about 1TB of photos) over a mapped network drive from their Windows machines (Windows XP 32-bit and Windows 7 Ultimate 32-bit). Both are using Photoshop CS5 to edit the photos. The problem is that saving an edited photo takes a really long time; it takes about three times as long to save a photo as to open it. After some tests I can exclude the network, the NAS and the Windows machines as the source of the issue. I think the problem is the Photoshop software and its handling of network drives. Officially, network drives are not supported by Adobe. I do not have any experience with Adobe products, especially Adobe Photoshop CS5. What are your recommendations to solve the performance issue? Should my customer copy the photos to the local drive, edit them and upload them again to the network drive, or is Adobe Drive or Adobe Version Cue the answer? One requirement is that the photos need to be accessible/editable from both computers even when one of them is offline. Adobe Version Cue needs a dedicated service running to be usable, so this solution is not possible as far as I understand the Cue software. Thank you for your input on this issue and have a nice day :-) Greetings, grub

    Read the article

  • Disk usage on IIS with PHP5: performance problems

    - by Jacob84
    Hi everybody, I'm quite worried about a performance problem I'm facing on one of our production servers. I work for a hosting company, so you can imagine how heterogeneous the applications running here are. It all started with a call from a client complaining about the loading speed of a Joomla site. The setup is IIS 6 (Windows 2003) with PHP5 and FastCGI, which normally works pretty well. I've tested the loading time and indeed, he was right: 7 or 8 seconds to load, when usually this can be accomplished in 2. Seeing these results, I started by checking CPU and RAM. Everything normal: 2GB of RAM free, 3%-8% of CPU activity. That's what I call a relaxed server ;). Unfortunately, digging a little deeper I found the 'PhysicalDisk' counters quite high (above 10), especially the read queues. I've used Process Explorer to see which of those processes has the highest deltas, but everything seemed normal. As the problem is especially related to PHP pages, I've checked specific IIS counters: actual connections, number of CGI requests and number of ISAPI requests. CGI: 3 to 7; ISAPI: 5 to 9; connections: 90 to 120 (which appears at the top of the graph). More than a solution (I know this is hard to find), I would like to know whether you have a specific methodology for facing this kind of problem. Thanks a lot, as always.

    Read the article

  • Performance decrease in every game and application

    - by Márk Vincze
    When I start a game, it initially runs smoothly, but after a couple of minutes the performance gradually decreases to the point of being unplayable (1-2 FPS). The sound also starts to lag at this point. This does not happen every time I start my PC; usually exiting the game, rebooting, then starting the game again solves the problem, and I can play with perfect FPS for as long as I want. I could not find any deterministic pattern for when this happens and when it doesn't. It happens in every game I have tried (SWTOR, Diablo 3, Skyrim), and not only games: even simple applications like a browser or the Control Panel can get unusably slow. This is a brand new PC I bought three months ago, and this problem has occurred since the first day I've been using it. Could you provide any advice on how to further diagnose the problem? I tried reinstalling Windows and tried different video card drivers, but it did not help. It would be important to know whether this is a hardware or a software problem, because I can use the warranty if it is a hardware issue. (I did not want to return the PC yet, because I can't reproduce the issue deterministically.) Spec of the PC:

      Motherboard: ASROCK H61M-HVS
      CPU: INTEL Core i3-2120 3.30GHz 1155 BOX
      Memory: KINGMAX 4096MB DDR3 1333MHz KIT
      Video card: GIGABYTE GV-R685OC-1GD HD6850 1GB GDDR5 PCIE
      HDD: SEAGATE 500GB Barracuda 7200rpm 16MB SATA3 ST500DM002

    I am using Windows 7 64-bit. Thanks a lot in advance!

    Read the article

  • file read performance degrades as number of files increases

    - by bfallik-bamboom
    We're observing poor file read IO results that we'd like to better understand. We can use fio to write 100 files with a sustained aggregate throughput of ~700MB/s. When we switch the test to read instead of write, the aggregate throughput is only ~55MB/s. The drop seems related to the number of files since the throughput for read and write are comparable for a single file then diverge proportionally as we increase the number of files. The test server has 24 CPU cores, 48GB of memory, and is running CentOS 6.0. The disk hardware is a RAID 6 array with 12 disks and a Dell H800 controller. This device is partitioned with ext4 using the default settings. Increasing the readahead (using blockdev) improves the read throughput significantly but it still doesn't match write speed. For instance, increasing the readahead from 128KB to 1M improved the read throughput to ~145MB/s. Is this a known performance issue in our OS/disk/filesystem configuration? If so, how can we tell? If not, what tools or tests can we use to further isolate the issue? Thanks.

    Read the article

  • How to edit data in a nested ListView

    - by miti737
    I am using a ListView to display a list of items and a nested ListView to show the list of features for each item. Both the parent and child ListView need to support insert, edit and delete operations. It works fine for the parent ListView, but when I try to edit a child item, the edit button does not put it into edit mode. Can you please suggest what I am missing in my code?

      <asp:ListView ID="lvParent" runat="server" OnItemDataBound="lvParent_ItemDataBound"
          onitemcanceling="lvParent_ItemCanceling" onitemcommand="lvParent_ItemCommand"
          DataKeyNames="ItemID" onitemdeleting="lvParent_ItemDeleting"
          oniteminserting="lvParent_ItemInserting">
        <LayoutTemplate>
          <asp:PlaceHolder ID="itemPlaceholder" runat="server"></asp:PlaceHolder>
          <div align="right">
            <asp:Button ID="btnInsert" runat="server" Text="ADD Item" onclick="btnInsert_Click"/>
          </div>
        </LayoutTemplate>
        <ItemTemplate>
          <table runat="server" cellpadding="0" cellspacing="0" border="0" width="100%">
            <tr>
              <td>
                <div id="dvDetail">
                  <span>Description</span>
                  <asp:TextBox ID="txtDescription" runat="server" Text='<%# DataBinder.Eval(Container.DataItem, "Description") %>' TextMode="MultiLine"></asp:TextBox>
                </div>
                <div id="dvFeature">
                  <span>Feature List</span>
                  <asp:ListView ID="lvChild" runat="server" InsertItemPosition="LastItem" DataKeyNames="FeatureID"
                      OnItemCommand="lvChild_ItemCommand" OnItemCanceling="lvChild_ItemCanceling"
                      OnItemDeleting="lvChild_ItemDeleting" OnItemEditing="lvChild_ItemEditing"
                      OnItemInserting="lvChild_ItemInserting" OnItemUpdating="lvChild_ItemUpdating"
                      DataSource='<%# DataBinder.Eval(Container.DataItem, "FeatureList") %>'>
                    <LayoutTemplate>
                      <ul>
                        <asp:PlaceHolder runat="server" ID="itemPlaceHolder"></asp:PlaceHolder>
                      </ul>
                    </LayoutTemplate>
                    <ItemTemplate>
                      <li>
                        <span class="dvList"><%# DataBinder.Eval(Container.DataItem, "FeatureTitle")%></span>
                        <div class="dvButton">
                          <asp:ImageButton ID="btnEdit" runat="server" ImageUrl="/Images/edit_16x16.gif" AlternateText="Edit" CommandName="Edit" CommandArgument='<%# DataBinder.Eval(Container.DataItem, "FeatureID") %>' Width="12" Height="12" />
                          <asp:ImageButton ID="btnDelete" runat="server" ImageUrl="/Images/delete_16x16.gif" AlternateText="Delete" CommandName="Delete" CommandArgument='<%# DataBinder.Eval(Container.DataItem, "FeatureID") %>' Width="12" Height="12" />
                        </div>
                      </li>
                    </ItemTemplate>
                    <EditItemTemplate>
                      <li>
                        <asp:TextBox ID="txtFeature" Text='<%# DataBinder.Eval(Container.DataItem, "FeatureTitle")%>' runat="server"></asp:TextBox>
                        <div class="dvButton">
                          <asp:ImageButton ID="btnUpdate" runat="server" ImageUrl="/Images/ok_16x16.gif" AlternateText="Update" CommandName="Update" CommandArgument='<%# DataBinder.Eval(Container.DataItem, "FeatureID") %>' Width="12" Height="12" />
                          <asp:ImageButton ID="btnCancel" runat="server" ImageUrl="/Images/delete_16x16.gif" AlternateText="Cancel" CommandName="Cancel" Width="12" Height="12" CausesValidation="false" />
                        </div>
                      </li>
                    </EditItemTemplate>
                    <InsertItemTemplate>
                      <asp:TextBox ID="txtFeature" runat="server"></asp:TextBox>
                      <div class="dvButton">
                        <asp:ImageButton ID="btnInsert" runat="server" ImageUrl="/Images/ok_16x16.gif" AlternateText="Insert" CommandName="Insert" Width="12" Height="12" />
                        <asp:ImageButton ID="btnCancel" runat="server" ImageUrl="/Images/delete_16x16.gif" AlternateText="Cancel" CommandName="Cancel" Width="12" Height="12" CausesValidation="false" />
                      </div>
                    </InsertItemTemplate>
                  </asp:ListView>
                </div>
              </td>
            </tr>
            <tr>
              <td align="right">
                <div id="dvButton">
                  <asp:Button ID="btnSave" runat="server" Text="Save" CommandName="Save" CommandArgument='<%# DataBinder.Eval(Container.DataItem, "ItemID") %>' />
                  <asp:Button ID="btnDelete" runat="server" Text="Delete" CssClass="Cancel" CommandName="Delete" CommandArgument='<%# DataBinder.Eval(Container.DataItem, "ItemID") %>' />
                </div>
              </td>
            </tr>
          </table>
        </ItemTemplate>
      </asp:ListView>

    Code behind:

      protected void Page_Load(object sender, EventArgs e)
      {
          if (Page.IsPostBack == false)
          {
              BindData();
          }
      }

      private void BindData()
      {
          MyDataContext data = new MyDataContext();
          var result = from itm in data.ItemLists
                       where itm.ItemID == iItemID
                       select new { itm.ItemID, itm.Description, FeatureList = itm.Features };
          lvParent.DataSource = result;
          lvParent.DataBind();
      }

      protected void lvChild_ItemEditing(object sender, ListViewEditEventArgs e)
      {
          ListView lvChild = sender as ListView;
          lvChild.EditIndex = e.NewEditIndex;
          lvChild.DataBind();
      }

    Edit: If I use lvChild.DataBind() in the ItemEditing event, the whole list of child items goes away when I click 'edit':

      protected void lvChild_ItemEditing(object sender, ListViewEditEventArgs e)
      {
          ListView lvChild = sender as ListView;
          lvChild.EditIndex = e.NewEditIndex;
          lvChild.DataBind();
      }

    If I get rid of lvChild.DataBind() in the ItemEditing event, it goes into edit mode only after clicking the 'edit' button twice, and though it shows the textbox control of the EditItemTemplate, it appears as a blank textbox (it does not bind the existing value for editing):

      protected void lvChild_ItemEditing(object sender, ListViewEditEventArgs e)
      {
          ListView lvChild = sender as ListView;
          lvChild.EditIndex = e.NewEditIndex;
      }
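    A common pattern for this symptom (a sketch only, assuming a hypothetical GetFeatures(itemId) helper that re-queries the child rows) is to set EditIndex and then rebind the child ListView with its own data source, since calling DataBind with no DataSource assigned renders an empty list:

      // Sketch: switch the child ListView to edit mode and rebind it with data.
      protected void lvChild_ItemEditing(object sender, ListViewEditEventArgs e)
      {
          ListView lvChild = (ListView)sender;
          lvChild.EditIndex = e.NewEditIndex;

          // The child ListView lives in the parent's ItemTemplate, so its naming
          // container is the parent ListViewDataItem; use it to find the parent key.
          ListViewDataItem parentItem = (ListViewDataItem)lvChild.NamingContainer;
          int itemId = (int)lvParent.DataKeys[parentItem.DisplayIndex].Value;

          lvChild.DataSource = GetFeatures(itemId);  // hypothetical helper returning the feature rows
          lvChild.DataBind();
      }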

    Read the article

  • RSACryptoServiceProvider CryptographicException System Cannot Find the File Specified under ASP.NET

    - by Will Hughes
    I have an application which makes use of the RSACryptoServiceProvider to decrypt some data using a known private key (stored in a variable). When the IIS application pool is configured to use Network Service, everything runs fine. However, when we configure the IIS application pool to run the code under a different identity, we get the following:

      System.Security.Cryptography.CryptographicException: The system cannot find the file specified.
         at System.Security.Cryptography.Utils.CreateProvHandle(CspParameters parameters, Boolean randomKeyContainer)
         at System.Security.Cryptography.RSACryptoServiceProvider.ImportParameters(RSAParameters parameters)
         at System.Security.Cryptography.RSA.FromXmlString(String xmlString)

    The code is something like this:

      byte[] input;
      byte[] output;
      string private_key_xml;
      var provider = new System.Security.Cryptography.RSACryptoServiceProvider(this.m_key.Key_Size);
      provider.FromXmlString(private_key_xml); // Fails here when the application pool identity != Network Service
      output = provider.Decrypt(input, false); // false = use PKCS#1 v1.5 padding

    There are resources which attempt to answer this by stating that you should give the user read access to the machine key store; however, there is no definitive answer that solves the issue. Environment: IIS 6.0, Windows Server 2003 R2, .NET 3.5 SP1
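    One commonly suggested workaround (a sketch under the stated setup, not a verified fix) is to point the CSP at the machine key store, since an application pool identity that never loads a user profile has no user key store for the provider to create its temporary key container in:

      using System.Security.Cryptography;

      // Sketch: force the machine key store so no user profile is required.
      CspParameters cspParams = new CspParameters();
      cspParams.Flags = CspProviderFlags.UseMachineKeyStore;

      using (var provider = new RSACryptoServiceProvider(this.m_key.Key_Size, cspParams))
      {
          provider.PersistKeyInCsp = false;          // don't leave key containers behind
          provider.FromXmlString(private_key_xml);   // private_key_xml as in the question
          output = provider.Decrypt(input, false);   // PKCS#1 v1.5 padding, as before
      }

    The alternate identity would still need access to the machine key store directory, which is what the resources mentioned above are getting at.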

    Read the article

  • Request for the permission of type 'System.Web.AspNetHostingPermission' failed when compiling web site

    - by ahsteele
    I have been using Windows 7 for a while but have not had to work with a particular legacy intranet application since my upgrade. Unfortunately, this application is setup as an ASP.NET Website project hosted on a remote server. When I have the website open in Visual Studio 2008 and try to debug it I get the following compiler error: Request for the permission of type 'System.Web.AspNetHostingPermission' failed To resolve this issue on Windows Vista machines, I would change the machine's .NET Security Configuration trust level to full for the local intranet (fix outlined here). I believe this configuration utility relied upon the mscorcfg.msc which from some cursory research appears to be apart of the .NET 2.0 SDK. I have tried to follow the instructions from this Microsoft Support article running the command below to no avail. Drive:\WINDOWS\Microsoft.NET\Framework\v2.0.50727\caspol.exe -m -ag 1 -url "file:////\\computername\sharename\*" FullTrust -exclusive on Presently, I have the following .NET and ASP.NET components installed on my machine Microsoft .NET Compact Framework 2.0 SP2 Microsoft .NET Compact Framework 3.5 Microsoft .NET Framework 4 Client Profile Microsoft .NET Framework 4 Extended Microsoft .NET Framework 4 Multi-Targeting Pack Microsoft ASP.NET MVC 1.0 Microsoft ASP.NET MVC 2 Microsoft ASP.NET MVC 2 - Visual Studio 2008 Tools Microsoft ASP.NET MVC 2 - Visual Studio 2010 Tools Do I need to install the .NET 2.0 SDK? Am I issuing the caspol command incorrectly? Is there something else that I am missing?

    Read the article

  • ASP.NET MVC and WCF

    - by Michael Stum
    I'm working my way into MVC at the moment, but on my "to learn at some point" list I also have WCF. I just wonder whether WCF is something that should/could be used in an MVC application or not. The background is that I want a desktop application (.NET 3.5, WPF) to interact with my MVC web site, and I wonder what the best way to transfer data between the two is. Should I just use special views/have the controllers return JSON or XML (using the ContentResult)? And, maybe even more important, for the other way round, could I just call special controllers? I'm not sure how authorization would work in such a context. I can either use Windows authentication or (if the site is running forms authentication) have the user store his/her credentials in the application, but I would then essentially be creating an HTTP client in my application. So while going from the MVC site to the application seems really easy, going from the application back to the MVC site does seem to be somewhat tricky, and a possible use for WCF? I'm not trying to brute-force WCF into this, but I just wonder if there is indeed a good use case for WCF in an MVC application.
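    For the site-to-desktop direction, a plain action returning JSON is often enough; a minimal sketch (OrdersController and its data are made up for illustration, not from the question):

      using System.Collections.Generic;
      using System.Web.Mvc;

      public class OrdersController : Controller
      {
          // Sketch: an endpoint a WPF client could call over HTTP instead of a WCF service.
          [HttpPost]
          public JsonResult Recent(int count)
          {
              // Stand-in data; a real action would query a repository or ORM here.
              var orders = new List<string> { "Order 1001", "Order 1002" };
              return Json(orders.GetRange(0, System.Math.Min(count, orders.Count)));
          }
      }

    On the WPF side the same URL can be called with WebClient or HttpWebRequest, which is where the forms-authentication cookie (or Windows authentication) question above comes into play.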

    Read the article

  • ASP.NET putting dynamic controls on page in reverse messes up events

    - by Jimmy Geels
    I have this weird problem when putting textboxes on the page in reverse: the whole event system is messed up. Changing one textbox fires TextChanged on all textboxes. I can fix this by putting the controls in a list first and then calling Add while iterating through the list in reverse, but I just want to know why this fails. Here's some code (.NET 2.0):

      public partial class _Default : Page
      {
          protected void Page_Load(object sender, EventArgs e)
          {
              InitFields();
          }

          private void InitFields()
          {
              int nrFields;
              // We have a static textbox called nrElements; this determines the number
              // of fields to initialize
              if (int.TryParse(nrElements.Text, out nrFields))
              {
                  // Put all the dynamic fields on the screen in reverse order
                  foreach (Control t in GetDynamicFields(nrFields))
                  {
                      // Calling Controls.Add works fine
                      // Calling Controls.AddAt messes up the events
                      // Try changing different textboxes
                      plhFields.Controls.AddAt(0, t);
                  }
              }
          }

          private IEnumerable<Control> GetDynamicFields(int nrFields)
          {
              for (int i = 0; i < nrFields; i++)
              {
                  TextBox txtBox = new TextBox();
                  txtBox.ID = string.Format("dynTextBox{0}", i.ToString());
                  txtBox.AutoPostBack = true;
                  txtBox.TextChanged += t_TextChanged;
                  yield return txtBox;
              }
          }

          private void t_TextChanged(object sender, EventArgs e)
          {
              TextBox txtBox = sender as TextBox;
              if (txtBox != null)
                  txtBox.Text = txtBox.Text + "Changed ";
          }
      }
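    A sketch of the workaround described above (buffer the generated controls, then add them with plain Add while walking the list backwards, so the control tree is built in the same relative order on every request); this replaces InitFields in the class shown:

      private void InitFields()
      {
          int nrFields;
          if (int.TryParse(nrElements.Text, out nrFields))
          {
              // Materialize the dynamic fields first, then add them in reverse order.
              List<Control> fields = new List<Control>(GetDynamicFields(nrFields));
              for (int i = fields.Count - 1; i >= 0; i--)
              {
                  plhFields.Controls.Add(fields[i]);
              }
          }
      }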

    Read the article

  • ASP.NET MVC 3 - What features do you want to see?

    - by user299592
    I know a bunch of people who are really enjoying the improvements that ASP.NET MVC 2 made over the first release. I have just started to migrate our MVC 1 project over, and so far Areas have totally cleaned up the subfolder mess we had in our large-scale application. As I dive deeper into all the improvements and changes that were made, I still keep thinking to myself, "man, it would be nice if they had x in this release". For instance, I would love it if they had some sort of dependency injection built in instead of having to use third-party solutions. My real question is: now that ASP.NET MVC 2 is out in the wild, what features do you want/wish the team had implemented, and hope they will implement in ASP.NET MVC 3?

    Read the article

  • Request for the permission of type 'System.Web.AspNetHostingPermission' when compiling web site

    - by ahsteele
    I have been using Windows 7 for a while but have not had to work with a particular legacy intranet application since my upgrade. Unfortunately, this application is set up as an ASP.NET Website project hosted on a remote server. When I have the website open in Visual Studio 2008 and try to debug it: Request for the permission of type 'System.Web.AspNetHostingPermission' failed. To resolve this issue on Windows Vista machines I would change the machine's .NET Security Configuration to trust the local intranet. I believe this configuration utility relied upon mscorcfg.msc, which from some cursory research appears to be a part of the .NET 2.0 SDK. I have tried to follow the instructions from this Microsoft Support article, running the command below to no avail.

      Drive:\WINDOWS\Microsoft.NET\Framework\v2.0.50727\caspol.exe -m -ag 1 -url "file:////\\computername\sharename\*" FullTrust -exclusive on

    Presently, I have the following .NET and ASP.NET components installed on my machine:

      Microsoft .NET Compact Framework 2.0 SP2
      Microsoft .NET Compact Framework 3.5
      Microsoft .NET Framework 4 Client Profile
      Microsoft .NET Framework 4 Extended
      Microsoft .NET Framework 4 Multi-Targeting Pack
      Microsoft ASP.NET MVC 1.0
      Microsoft ASP.NET MVC 2
      Microsoft ASP.NET MVC 2 - Visual Studio 2008 Tools
      Microsoft ASP.NET MVC 2 - Visual Studio 2010 Tools

    Do I need to install the .NET 2.0 SDK? Am I issuing the caspol command incorrectly? Is there something else that I am missing?

    Read the article

  • How to Display Validation Error Messages on an ASP.NET MVC Page?

    - by Yardstermister
    I am pretty new to ASP.NET and C#. I have spent the day learning the basics of the ASP.NET Membership provider. I have built all my validators but am getting stuck on outputting my error message on the page.

      private void LogCreateUserError(MembershipCreateStatus status, string username)
      {
          string reasonText = status.ToString();
          switch (status)
          {
              case MembershipCreateStatus.DuplicateEmail:
              case MembershipCreateStatus.DuplicateProviderUserKey:
              case MembershipCreateStatus.DuplicateUserName:
                  reasonText = "The user details you entered are already registered.";
                  break;
              case MembershipCreateStatus.InvalidAnswer:
              case MembershipCreateStatus.InvalidEmail:
              case MembershipCreateStatus.InvalidProviderUserKey:
              case MembershipCreateStatus.InvalidQuestion:
              case MembershipCreateStatus.InvalidUserName:
              case MembershipCreateStatus.InvalidPassword:
                  reasonText = string.Format("The {0} provided was invalid.", status.ToString().Substring(7));
                  break;
              default:
                  reasonText = "Due to an unknown problem, we were not able to register you at this time";
                  break;
          }
          // CODE TO WRITE reasonText TO THE HTML PAGE ??
      }

    What is the best way to output the resulting text onto the page, as I have relied upon the built-in ASP:Validators until now?
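    In MVC the usual route (a sketch, assuming registration is handled by a controller action; RegisterModel and GetErrorText are hypothetical stand-ins, the latter wrapping the switch above) is to push the message into ModelState and let Html.ValidationSummary() render it:

      // Sketch: surface the failure reason through ModelState so the view can display it.
      [HttpPost]
      public ActionResult Register(RegisterModel model)
      {
          MembershipCreateStatus status;
          Membership.CreateUser(model.UserName, model.Password, model.Email,
                                null, null, true, null, out status);

          if (status == MembershipCreateStatus.Success)
              return RedirectToAction("Index", "Home");

          ModelState.AddModelError("", GetErrorText(status));   // GetErrorText: hypothetical wrapper around the switch
          return View(model);   // <%= Html.ValidationSummary() %> in the view shows the message
      }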

    Read the article

  • Classic ASP and MVC side-by-side, different projects?

    - by David Lively
    I've tried asking this in a few different ways, but let's give it another shot (as I've yet to receive an answer and this is driving me nuts!). I have a very large classic ASP 3.0 application (~350K lines) that I want to start migrating to ASP.NET MVC. I'd like to keep the old ASP files in a separate project from the MVC stuff. Ideas on how to debug these? Should I just dump the files in the same folder and create two different projects (a WAP and an MVC app) that reference the relevant files and folders required by each? This should work, but does anyone have a better idea? I need the ability to migrate small parts of the application individually, as this will probably take a year or two to complete.
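    If both projects do end up being served from the same web application, one practical detail (a sketch, not from the question) is making sure MVC routing leaves the legacy .asp requests alone so asp.dll keeps handling them; this would live in Global.asax:

      // Sketch: route registration that ignores classic ASP URLs in a mixed site.
      public static void RegisterRoutes(RouteCollection routes)
      {
          routes.IgnoreRoute("{resource}.axd/{*pathInfo}");
          routes.IgnoreRoute("{*classicasp}", new { classicasp = @".*\.asp(/.*)?" });

          routes.MapRoute(
              "Default",                                    // route name
              "{controller}/{action}/{id}",                 // URL pattern
              new { controller = "Home", action = "Index", id = "" });
      }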

    Read the article

  • Is *not* using the asp.net membership provider a bad idea?

    - by EJB
    Is it generally a really bad idea not to use the built-in ASP.NET membership provider? I've always rolled my own for my ASP.NET apps (public facing), and really have not had any problems in doing so. It works, and seems to avoid a layer of complexity. My needs are pretty basic: once set up, the user must use an email address and password to log in; if they forget the password, a new one will be emailed to them. After setup there is little that needs to be done to each user account, but I do need to store several extra fields with each user (full name, telephone and a few other fields, etc). The number of users that require login credentials is small (usually just the administrator and a few backups), and everyone else uses the site unauthenticated. What are the big advantages that I might be missing out on by skipping the ASP.NET membership provider functionality?

    Read the article
