Search Results

Search found 53148 results on 2126 pages for 'coder net'.

Page 901/2126

  • StructureMap Autowiring with two different instances of the same interface

    - by Lambda
    For the last two days, I have tried my best to learn something about StructureMap, using an old project of mine as a concrete implementation example. I have simplified my question as much as possible. While I will post my examples in VB.NET, answers with examples in C# are also okay.

    The project includes an interface called IDatabase which connects itself to a database. The important part looks like this:

        Public Interface IDatabase
            Function Connect(ByVal ConnectionSettings As ConnectionSettings) As Boolean
            ReadOnly Property ConnectionOpen As Boolean
            [... more functions...]
        End Interface

        Public Class MSSQLConnection
            Implements IDatabase

            Public Function Connect(ByVal ConnectionSettings As ConnectionSettings) As Boolean Implements IDatabase.Connect
                [... Implementation ...]
            End Function

            [... more implementations...]
        End Class

    ConnectionSettings is a structure that has all the information needed to connect to a database. I want to open the database connection once and use it for every single connection in the project, so I register an instance in the ObjectFactory:

        Dim foo = ObjectFactory.GetInstance(Of MSSQLConnection)()
        Dim bar As ConnectionSettings
        foo.Connect(bar)
        ObjectFactory.Configure(Sub(x) x.For(Of IDatabase).Use(foo))

    Up until this part, everything works like a charm. Now I get to a point where I have classes that need an additional instance of IDatabase because they connect to a second database:

        Public Class ExampleClass
            Public Sub New(ByVal SameOldDatabase As IDatabase, ByVal NewDatabase As IDatabase)
                [...] Magic happens here [...]
            End Sub
        End Class

    I want this second IDatabase to behave much like the first one: it should use a concrete, single instance and connect to a different database by invoking Connect with different ConnectionSettings. The problem is that, while I'm pretty sure it's somehow possible (my initial idea was registering ExampleClass with alternative constructor arguments), I actually want to do it without registering ExampleClass. This probably involves more configuration, but I have no idea how to do it.

    So, basically, it comes down to this question: how do I configure the ObjectFactory so that autowiring always invokes the constructor with object Database1 for the first IDatabase parameter and object Database2 for the second one (if there is one)?
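    A hedged sketch of the kind of registration this seems to be heading toward, written in C# since the question allows it. This is an assumption rather than a confirmed answer: the exact registry method names differ between StructureMap versions, and database1/database2 stand in for the two already-connected IDatabase objects.

        // Sketch only; StructureMap registry syntax varies by version, and the
        // identifiers database1, database2 and "SecondDatabase" are illustrative.
        ObjectFactory.Configure(x =>
        {
            x.For<IDatabase>().Use(database1);                           // default instance
            x.For<IDatabase>().Add(database2).Named("SecondDatabase");   // named second instance
        });

        // The named instance can then be resolved wherever the second database is needed:
        IDatabase second = ObjectFactory.GetNamedInstance<IDatabase>("SecondDatabase");

    Whether autowiring can be told to pick the named instance for a specific constructor parameter without registering ExampleClass is exactly the open part of the question.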

    Read the article

  • Speech recognition plugin Runtime Error: Unhandled Exception. What could possibly cause it?

    - by manuel
    I'm writing a plugin (DLL file) for speech recognition, and I'm creating a WinForm as its interface/dialog. When I run the plugin and click a button to start the initialization, I get an unhandled exception. Below are the complete details:

        See the end of this message for details on invoking just-in-time (JIT) debugging instead of this dialog box.

        ***** Exception Text *******
        System.ArgumentException: Value does not fall within the expected range.
           at System.Speech.Internal.SapiInterop.SapiProxy.MTAThread.Invoke2(VoidDelegate pfn)
           at System.Speech.Internal.SapiInterop.SapiRecognizer.SetInput(Object input, Boolean allowFormatChanges)
           at System.Speech.Recognition.RecognizerBase.SetInputToDefaultAudioDevice()
           at System.Speech.Recognition.SpeechRecognitionEngine.SetInputToDefaultAudioDevice()
           at gen_myplugin.Dialog.init()
           at gen_myplugin.Dialog.btnSpeak_Click(Object sender, EventArgs e)
           at System.Windows.Forms.Control.OnClick(EventArgs e)
           at System.Windows.Forms.Button.OnClick(EventArgs e)
           at System.Windows.Forms.Button.OnMouseUp(MouseEventArgs mevent)
           at System.Windows.Forms.Control.WmMouseUp(Message& m, MouseButtons button, Int32 clicks)
           at System.Windows.Forms.Control.WndProc(Message& m)
           at System.Windows.Forms.ButtonBase.WndProc(Message& m)
           at System.Windows.Forms.Button.WndProc(Message& m)
           at System.Windows.Forms.Control.ControlNativeWindow.OnMessage(Message& m)
           at System.Windows.Forms.Control.ControlNativeWindow.WndProc(Message& m)
           at System.Windows.Forms.NativeWindow.Callback(IntPtr hWnd, Int32 msg, IntPtr wparam, IntPtr lparam)

        ***** Loaded Assemblies *******
        mscorlib
            Assembly Version: 2.0.0.0
            Win32 Version: 2.0.50727.3603 (GDR.050727-3600)
            CodeBase: file:///c:/WINDOWS/Microsoft.NET/Framework/v2.0.50727/mscorlib.dll
        gen_speechquery
            Assembly Version: 1.0.3755.878
            Win32 Version:
            CodeBase: file:///C:/Program%20Files/Winamp/Plugins/gen_speechquery.dll
        msvcm90
            Assembly Version: 9.0.30729.1
            Win32 Version: 9.00.30729.1
            CodeBase: file:///C:/WINDOWS/WinSxS/x86_Microsoft.VC90.CRT_1fc8b3b9a1e18e3b_9.0.30729.1_x-ww_6f74963e/msvcm90.dll
        System.Windows.Forms
            Assembly Version: 2.0.0.0
            Win32 Version: 2.0.50727.3053 (netfxsp.050727-3000)
            CodeBase: file:///C:/WINDOWS/assembly/GAC_MSIL/System.Windows.Forms/2.0.0.0__b77a5c561934e089/System.Windows.Forms.dll
        System
            Assembly Version: 2.0.0.0
            Win32 Version: 2.0.50727.3053 (netfxsp.050727-3000)
            CodeBase: file:///C:/WINDOWS/assembly/GAC_MSIL/System/2.0.0.0__b77a5c561934e089/System.dll
        System.Drawing
            Assembly Version: 2.0.0.0
            Win32 Version: 2.0.50727.3053 (netfxsp.050727-3000)
            CodeBase: file:///C:/WINDOWS/assembly/GAC_MSIL/System.Drawing/2.0.0.0__b03f5f7f11d50a3a/System.Drawing.dll
        System.Speech
            Assembly Version: 3.0.0.0
            Win32 Version: 3.0.6920.1109 (lh_tools_devdiv_wpf.071009-1109)
            CodeBase: file:///C:/WINDOWS/assembly/GAC_MSIL/System.Speech/3.0.0.0__31bf3856ad364e35/System.Speech.dll

        ***** JIT Debugging *******
        To enable just-in-time (JIT) debugging, the .config file for this application or computer (machine.config) must have the jitDebugging value set in the system.windows.forms section. The application must also be compiled with debugging enabled.
        For example: When JIT debugging is enabled, any unhandled exception will be sent to the JIT debugger registered on the computer rather than be handled by this dialog box.

    Read the article

  • AutoScaleMode problems with changed default font

    - by Doc Brown
    Hi, I have some problems with the Form.AutoScaleMode property together with fixed-size controls when using a non-default font. I boiled it down to a simple test application (WinForms 2.0) with only one form, some fixed-size controls and the following properties:

        class Form1 : Form
        {
            // ...
            private void InitializeComponent()
            {
                // ...
                this.AutoScaleDimensions = new System.Drawing.SizeF(96F, 96F);
                this.AutoScaleMode = System.Windows.Forms.AutoScaleMode.Dpi;
                this.Font = new System.Drawing.Font("Tahoma", 9.25F);
                // ...
            }
        }

    Under 96 dpi, Windows XP, the form looks correct (see the 96 dpi example). Under 120 dpi, Windows XP, the Windows Forms autoscaling feature produces the 120 dpi example. As you can see, group boxes, buttons, list or tree views are scaled correctly, multiline text boxes get too big on the vertical axis, and a fixed-size label does not scale correctly in either the vertical or horizontal direction. Seems to be a bug in the .NET Framework?

    Using the default font (Microsoft Sans Serif 8.25pt), this problem does not occur. Using AutoScaleMode=Font (with adequate AutoScaleDimensions, of course) either does not scale at all or scales exactly as seen above, depending on when the Font is set (before or after the change of AutoScaleMode). The problem is not specific to the Tahoma font; it occurs also with Microsoft Sans Serif, 9.25pt. And yes, I already read this SO post http://stackoverflow.com/questions/2114857/high-dpi-problems but it does not really help me. Any suggestions how to work around this?

    EDIT: I changed my image hoster, hope this one works better.

    EDIT2: Some additional information about my intention: I have about 50 already working fixed-size dialogs with several hundred properly placed, fixed-size controls. They were migrated from an older C++ GUI framework to C#/WinForms, which is why they are all fixed-size. All of them look fine with 96 dpi using a 9.25pt font. Under the old framework, scaling to 120 dpi worked fine - all fixed-size controls scaled equally in both dimensions. Last week, we detected this strange scaling behaviour under WinForms when switching to 120 dpi. You can imagine that most of our dialogs now look very bad under 120 dpi. We are looking for a solution that avoids a complete redesign of all those dialogs.

    Read the article

  • C# - WebBrowser control seems to cache screenshots

    - by Justin
    Hey, I'm using the WebBrowser control in an ASP.NET MVC 2 app (don't judge, I'm doing it in an admin section only to be used by me). Here's the code:

        public static class Screenshot
        {
            private static string _url;
            private static int _width;
            private static byte[] _bytes;

            public static byte[] Get(string url)
            {
                // This method gets a screenshot of the webpage
                // rendered at its full size (height and width)
                return Get(url, 50);
            }

            public static byte[] Get(string url, int width)
            {
                //set properties.
                _url = url;
                _width = width;

                //start screen scraper.
                var webBrowseThread = new Thread(new ThreadStart(TakeScreenshot));
                webBrowseThread.SetApartmentState(ApartmentState.STA);
                webBrowseThread.Start();

                //check every second if it got the screenshot yet.
                //i know, the thread sleep is terrible, but it's the secure section, don't judge...
                int numChecks = 20;
                for (int k = 0; k < numChecks; k++)
                {
                    Thread.Sleep(1000);
                    if (_bytes != null)
                    {
                        return _bytes;
                    }
                }
                return null;
            }

            private static void TakeScreenshot()
            {
                try
                {
                    //load the webpage into a WebBrowser control.
                    using (WebBrowser wb = new WebBrowser())
                    {
                        wb.ScrollBarsEnabled = false;
                        wb.ScriptErrorsSuppressed = true;
                        wb.Navigate(_url);
                        while (wb.ReadyState != WebBrowserReadyState.Complete)
                        {
                            Application.DoEvents();
                        }

                        //set the size of the WebBrowser control.
                        //take Screenshot of the web page's full width.
                        wb.Width = wb.Document.Body.ScrollRectangle.Width;
                        //take Screenshot of the web page's full height.
                        wb.Height = wb.Document.Body.ScrollRectangle.Height;

                        //get a Bitmap representation of the webpage as it's rendered in the WebBrowser control.
                        var bitmap = new Bitmap(wb.Width, wb.Height);
                        wb.DrawToBitmap(bitmap, new Rectangle(0, 0, wb.Width, wb.Height));

                        //resize.
                        var height = _width * (bitmap.Height / bitmap.Width);
                        var thumbnail = bitmap.GetThumbnailImage(_width, height, null, IntPtr.Zero);

                        //convert to byte array.
                        var ms = new MemoryStream();
                        thumbnail.Save(ms, System.Drawing.Imaging.ImageFormat.Jpeg);
                        _bytes = ms.ToArray();
                    }
                }
                catch (Exception exc)
                {
                    //TODO: why did screenshot fail?
                    string message = exc.Message;
                }
            }
        }

    This works fine for the first screenshot that I take. However, if I try to take subsequent screenshots of different URLs, it saves screenshots of the first URL for the new URL, or sometimes it'll save the screenshot from 3 or 4 URLs ago. I'm creating a new instance of WebBrowser for each screenshot and am disposing of it properly with the "using" block; any idea why it's behaving this way? Thanks, Justin

    Read the article

  • Converting to async,await using async targeting package

    - by e4rthdog
    I have this:

        private void BtnCheckClick(object sender, EventArgs e)
        {
            var a = txtLot.Text;
            var b = cmbMcu.SelectedItem.ToString();
            var c = cmbLocn.SelectedItem.ToString();
            btnCheck.BackColor = Color.Red;
            var task = Task.Factory.StartNew(() => Dal.GetLotAvailabilityF41021(a, b, c));
            task.ContinueWith(t =>
            {
                btnCheck.BackColor = Color.Transparent;
                lblDescriptionValue.Text = t.Result.Description;
                lblItemCodeValue.Text = t.Result.Code;
                lblQuantityValue.Text = t.Result.AvailableQuantity.ToString();
            }, TaskScheduler.FromCurrentSynchronizationContext());
            LotFocus(true);
        }

    and I followed J. Skeet's advice to move to async/await in my .NET 4.0 app. I converted it into this:

        private async void BtnCheckClick(object sender, EventArgs e)
        {
            var a = txtLot.Text;
            var b = cmbMcu.SelectedItem.ToString();
            var c = cmbLocn.SelectedItem.ToString();
            btnCheck.BackColor = Color.Red;
            JDEItemLotAvailability itm = await Task.Factory.StartNew(() => Dal.GetLotAvailabilityF41021(a, b, c));
            btnCheck.BackColor = Color.Transparent;
            lblDescriptionValue.Text = itm.Description;
            lblItemCodeValue.Text = itm.Code;
            lblQuantityValue.Text = itm.AvailableQuantity.ToString();
            LotFocus(true);
        }

    It works fine. What confuses me is that I could have done it using just the method of my DAL, without using Task - but that would mean modifying my DAL method, which is something I don't want. I would appreciate it if someone would explain to me in "plain" words whether what I did is optimal or not, and why. Thanks.

    P.S. My DAL method:

        public bool CheckLotExistF41021(string _lot, string _mcu, string _locn)
        {
            using (OleDbConnection con = new OleDbConnection(this.conString))
            {
                OleDbCommand cmd = new OleDbCommand();
                cmd.CommandText = "select lilotn from proddta.f41021 " +
                                  "where lilotn = ? and trim(limcu) = ? and lilocn= ?";
                cmd.Parameters.AddWithValue("@lotn", _lot);
                cmd.Parameters.AddWithValue("@mcu", _mcu);
                cmd.Parameters.AddWithValue("@locn", _locn);
                cmd.Connection = con;
                con.Open();
                OleDbDataReader rdr = cmd.ExecuteReader();
                bool _retval = rdr.HasRows;
                rdr.Close();
                con.Close();
                return _retval;
            }
        }
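    A hedged sketch of the alternative the question hints at: an awaitable wrapper added to the DAL itself. The method below is not in the question's DAL (it is illustrative only); it still uses Task.Factory.StartNew, which is available on .NET 4.0 with the async targeting pack, so the synchronous ADO.NET code would stay unchanged.

        // Hypothetical DAL-side wrapper, shown only to illustrate what "modifying the
        // DAL" would amount to; it does not make the database I/O truly asynchronous.
        public Task<JDEItemLotAvailability> GetLotAvailabilityF41021Async(string lot, string mcu, string locn)
        {
            return Task.Factory.StartNew(() => GetLotAvailabilityF41021(lot, mcu, locn));
        }

        // The button handler could then simply do:
        // JDEItemLotAvailability itm = await Dal.GetLotAvailabilityF41021Async(a, b, c);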

    Read the article

  • Problems related to showing MessageBox from non-GUI threads

    - by Hans Løken
    I'm working on a heavily data-bound WinForms application where I've found some strange behavior. The app has separate I/O threads receiving updates through asynchronous web requests, which it then sends to the main/GUI thread for processing and updating of application-wide data stores (which in turn may be data-bound to various GUI elements, etc.). The server at the other end of the web requests requires periodic requests or the session times out.

    I've gone through several attempted solutions for dealing with the threading issues, and I've observed the following behavior:

    If I use Control.Invoke for sending updates from the I/O thread(s) to the main thread and an update causes a MessageBox to be shown, the main form's message pump stops until the user clicks the OK button. This also blocks the I/O thread from continuing, eventually leading to timeouts on the server.

    If I use Control.BeginInvoke for sending updates from the I/O thread(s) to the main thread, the main form's message pump does not stop, but if the processing of an update leads to a message box being shown, the processing of the rest of that update is halted until the user clicks OK. Since the I/O threads keep running and the message pump keeps processing messages, several BeginInvoke calls for updates may be made before the one with the message box is finished. This leads to out-of-sequence updates, which is unacceptable.

    A third approach: the I/O threads add updates to a blocking queue (very similar to http://stackoverflow.com/questions/530211/creating-a-blocking-queuet-in-net/530228#530228), and the GUI thread uses a Forms.Timer that periodically applies all updates in the blocking queue. This solves both the problem of blocking I/O threads and the sequentiality of updates, i.e. the next update will never be started until the previous one is finished. However, there is a small performance cost, as well as a latency in showing updates, that is unacceptable in the long run. I would like update processing in the main thread to be event-driven rather than polling.

    So to my question: how should I do this to

    avoid blocking the I/O threads,
    guarantee that updates are finished in sequence, and
    keep the main message pump running while showing a message box as a result of an update?
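    A minimal hedged sketch of the event-driven variant described above (a sketch under assumptions, not the poster's code): I/O threads enqueue an update and nudge the UI thread with BeginInvoke, and the UI-side drain runs items one at a time behind a re-entrancy guard, so an update that shows a MessageBox cannot be overtaken by later ones.

        // Sketch only; Queue<MethodInvoker>, mainForm and the guard flag are illustrative.
        private readonly Queue<MethodInvoker> _updates = new Queue<MethodInvoker>();
        private bool _draining;

        // Called from an I/O thread: never blocks.
        public void PostUpdate(MethodInvoker update)
        {
            lock (_updates) _updates.Enqueue(update);
            mainForm.BeginInvoke(new MethodInvoker(DrainUpdates));
        }

        // Runs on the UI thread.
        private void DrainUpdates()
        {
            if (_draining) return;   // re-entered via a MessageBox's nested message pump
            _draining = true;
            try
            {
                while (true)
                {
                    MethodInvoker next;
                    lock (_updates)
                    {
                        if (_updates.Count == 0) return;
                        next = _updates.Dequeue();
                    }
                    next();          // may show a MessageBox; ordering is still preserved
                }
            }
            finally { _draining = false; }
        }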

    Read the article

  • DevExpress Reporting Session Issue

    - by LeeHull
    This is pretty complicated, so I will explain what is going on. I have created an ASP.NET website for displaying images. We have a database table that contains the URL where each image is located, but we recently started moving the images off the filesystem and storing them directly in the database as binary. Since we don't want to break older applications, we currently store them in both places until everything is updated.

    The website I'm working on is just a reporting website to display images by a date range. I have created an HttpHandler to display each image: if the binary column in the database is DBNull, I grab the URL from the other table instead; otherwise I read the image, convert it into a byte[] and write it to the response so it is interpreted as an image.

    I have created a page to display the report using DevExpress Reporting. I query the report data, save the dataset into a session variable, and the report reads the session to bind itself. I also have a picture box in the report bound to "Handler.ashx?id=ImageID"; I do this because I am not binding a URL to the image, since it is a byte[]. Also, since a report could contain 10000+ images, I read the dataset from the session inside the handler and just pull out the image for the passed-in ImageID, to avoid connecting to the database for each and every row.

    Now the strange thing: the report loads fine, the data is loading, I have paging on the DevExpress report viewer, and it loads the images fine. However, when I print or export, I am not getting any images. I debugged and found that Session.Count is 0 when it tries to print or export, which is strange since there is more than just the dataset in the session. I have added IRequiresSessionState to allow sessions in the handler, but the session count is still 0. I have also tried changing the handler to an .aspx page - same issue.

    Any ideas or suggestions on logic changes? I'm all ears. This is a very difficult situation since I have to display the images from the database as both a string and a byte[] (one can be a URL and the other the image bytes), but I also can't slam the database with connection calls for each row. I have also reported the issue to DevExpress, and they are clueless as well, since they don't dispose of the session in the exporting process.

    Read the article

  • Embedded Javascript in Master Page or Pages That Use Master Page throw "Object Expected Error"

    - by Philter
    I'm semi-new to writing ASP.NET applications using master pages, and I've run into an issue I've spent some time on but can't seem to solve. My situation is that I have a master page with a structure that looks like this:

        <head runat="server">
            <title>Test Site</title>
            <link rel="Stylesheet" type="text/css" href="Default.css" />
            <script type="text/javascript" language="javascript" src="js/Default.js" />
            <meta http-equiv="Expires" content="0"/>
            <meta http-equiv="Cache-Control" content="no-cache"/>
            <meta http-equiv="Pragma" content="no-cache"/>
            <asp:ContentPlaceHolder ID="cphHead" runat="server">
            </asp:ContentPlaceHolder>
        </head>
        <body>
            <form id="form1" runat="server">
                <div id="divHeader">
                    <asp:ContentPlaceHolder ID="cphPageTitle" runat="server"></asp:ContentPlaceHolder>
                </div>
                <div id="divMainContent">
                    <asp:ContentPlaceHolder ID="cphMainContent" runat="server"></asp:ContentPlaceHolder>
                </div>
                </div>
            </form>
        </body>

    I then have a page that uses this master page and contains the following:

        <asp:Content ContentPlaceHolderID="cphHead" runat="server">
            <script type="text/javascript" language="javascript">
                function test() {
                    alert("Hello World");
                }
            </script>
        </asp:Content>

        <asp:Content ContentPlaceHolderID="cphMainContent" runat="server">
            <fieldset>
                <img alt="Select As Of Date" src="Images/Calendar.png" id="aAsOfDate" class="clickable" runat="server" onclick="test();" />
                <asp:Button runat="server" CssClass="buttonStyle" ID="btnSubmit" Text="Submit" OnClick="btnSubmit_Clicked"/>
            </fieldset>
        </asp:Content>

    When I run this page and click on the image, I get an "Object Expected" error. However, if I place the test function into my external Default.js file, it works perfectly. I can't seem to figure out why this is happening. Any ideas?

    Read the article

  • How should I provide access to this custom DAL?

    - by Casey
    I'm writing a custom DAL (VB.NET) for an ordering system project. I'd like to explain how it is coded now and receive some alternate ideas to make coding against the DAL easier/more readable.

    The DAL is part of an n-tier (not n-layer) application, where each tier is in its own assembly/DLL. The DAL consists of several classes that have specific behavior. For instance, there is an Order class that is responsible for retrieving and saving orders. Most of the classes have only two methods, a "Get" and a "Save," with multiple overloads for each. These classes are marked as Friend and are only visible to the DAL (which is in its own assembly).

    In most cases, the DAL returns what I will call a "data object." This object is a class that contains only data and validation, and is located in a common assembly that both the BLL and DAL can read. To provide public access to the DAL, I currently have a static (module) class that has many shared members. A simplified version looks something like this:

        Public Class DAL
            Private Sub New()
            End Sub

            Public Shared Function GetOrder(OrderID As String) As OrderData
                Dim OrderGetter As New OrderClass
                Return OrderGetter.GetOrder(OrderID)
            End Function
        End Class

        Friend Class OrderClass
            Friend Function GetOrder(OrderID As String) As OrderData
            End Function
        End Class

    The BLL would call for an order like this:

        DAL.GetOrder("123456")

    As you can imagine, this gets cumbersome very quickly. I'm mainly interested in structuring access to the DAL so that IntelliSense is very intuitive. As it stands now, there are too many methods/functions in the DAL class with similar names. One idea I had is to break the DAL down into nested classes:

        Public Class DAL
            Private Sub New()
            End Sub

            Public Class Orders
                Private Sub New()
                End Sub

                Public Shared Function [Get](OrderID As String) As OrderData
                End Function
            End Class
        End Class

    So the BLL would call like this:

        DAL.Orders.Get("12345")

    This cleans it up a bit, but it leaves a lot of classes that only have references to other classes, which I don't like for some reason. Without resorting to passing DB-specific instructions (like WHERE clauses) from BLL to DAL, what is the best or most common practice for providing a single point of access for the DAL?

    Read the article

  • UpdateProgress with UpdatePanel not showing up in User control when page is loading

    - by Carter
    Is this typical behavior of the UpdateProgress for an ASP.NET UpdatePanel? I have an UpdatePanel with the UpdateProgress control inside a user control window on a page. If I make the page do some loading in the background and then click a button in the user control's UpdatePanel, the UpdateProgress does not show up at all. It's as if the UpdatePanel's refresh request is not even registered until after the actual page is done doing its business. It's worth noting that it will show up if nothing is happening in the background.

    The functionality I want is what you would expect: I want the loader to show up if the refresh triggered by the button click has to wait for anything. I know I could get this functionality if I just used jQuery AJAX with a static web method, but you can't have static web methods inside a user control. I could put it in the page, but it really doesn't belong there, and a full-blown WCF service wouldn't really be worth it in this case either. I'm trying to compromise with an UpdatePanel, but these things always seem to cause me some kind of trouble. Maybe this is just the way it works?

    Edit: I'll clarify a bit what I'm doing. I have a page whose only contents are some tools on the side and a big map. When the page initially loads, it takes some time to load the map. If, while it's loading, I open the tool (a user control) that has the UpdatePanel in question in it and click the button on this user control - which should refresh the UpdatePanel with new data and show the loading sign (in the UpdateProgress) - then the UpdateProgress loading image does not show up. However, the code run by the button click does run after the page is done loading (as expected), and the UpdateProgress will show up if nothing on the page containing the user control is loading. I just want the loader to show up while the page is loading.

    I thought my problem was that the map loading is in an UpdatePanel and my UpdateProgress was only being associated with the user control's UpdatePanel - hence no loading icon when the map was loading. This is not the case, though.

    Read the article

  • Facing Memory Leaks in AES Encryption Method.

    - by Mubashar Ahmad
    Can anyone please identify whether there are any possible memory leaks in the following code? I have tried it with .NET Memory Profiler, and it says "CreateEncryptor" and some other functions are leaving unmanaged memory leaks, which I have confirmed using performance monitors. But Dispose, Clear and Close calls are already placed wherever possible. Please advise me accordingly; it's urgent.

        public static string Encrypt(string plainText, string key)
        {
            //Set up the encryption objects
            byte[] encryptedBytes = null;
            using (AesCryptoServiceProvider acsp = GetProvider(Encoding.UTF8.GetBytes(key)))
            {
                byte[] sourceBytes = Encoding.UTF8.GetBytes(plainText);
                using (ICryptoTransform ictE = acsp.CreateEncryptor())
                {
                    //Set up stream to contain the encryption
                    using (MemoryStream msS = new MemoryStream())
                    {
                        //Perform the encryption, storing output into the stream
                        using (CryptoStream csS = new CryptoStream(msS, ictE, CryptoStreamMode.Write))
                        {
                            csS.Write(sourceBytes, 0, sourceBytes.Length);
                            csS.FlushFinalBlock();

                            //sourceBytes are now encrypted as an array of secure bytes
                            encryptedBytes = msS.ToArray(); //.ToArray() is important, don't mess with the buffer
                            csS.Close();
                        }
                        msS.Close();
                    }
                }
                acsp.Clear();
            }

            //return the encrypted bytes as a BASE64 encoded string
            return Convert.ToBase64String(encryptedBytes);
        }

        private static AesCryptoServiceProvider GetProvider(byte[] key)
        {
            AesCryptoServiceProvider result = new AesCryptoServiceProvider();
            result.BlockSize = 128;
            result.KeySize = 256;
            result.Mode = CipherMode.CBC;
            result.Padding = PaddingMode.PKCS7;
            result.GenerateIV();
            result.IV = new byte[] { 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0 };

            byte[] RealKey = GetKey(key, result);
            result.Key = RealKey;
            // result.IV = RealKey;
            return result;
        }

        private static byte[] GetKey(byte[] suggestedKey, SymmetricAlgorithm p)
        {
            byte[] kRaw = suggestedKey;
            List<byte> kList = new List<byte>();
            for (int i = 0; i < p.LegalKeySizes[0].MaxSize; i += 8)
            {
                kList.Add(kRaw[(i / 8) % kRaw.Length]);
            }
            byte[] k = kList.ToArray();
            return k;
        }

    Read the article

  • Parse XML and populate in List Box

    - by cedar715
    I've posted the same question here before and got a couple of good answers as well. While I was trying those answers, I was getting compilation errors; later I learned that we are using .NET 2.0 and our existing application has no references to LINQ. After searching on SO, I partly figured it out:

        public partial class Item
        {
            public object CHK { get; set; }
            public int SEL { get; set; }
            public string VALUE { get; set; }
        }

    Parsing:

        XmlDocument doc = new XmlDocument();
        doc.LoadXml("<LISTBOX_ST>" +
                    "<item><CHK></CHK><SEL>00001</SEL><VALUE>val01</VALUE></item>" +
                    "<item><CHK></CHK><SEL>00002</SEL><VALUE>val02</VALUE></item>" +
                    "<item><CHK></CHK><SEL>00003</SEL><VALUE>val03</VALUE></item>" +
                    "<item><CHK></CHK><SEL>00004</SEL><VALUE>val04</VALUE></item>" +
                    "<item><CHK></CHK><SEL>00005</SEL><VALUE>val05</VALUE></item>" +
                    "</LISTBOX_ST>");

        List<Item> _lbList = new List<Item>();
        foreach (XmlNode node in doc.DocumentElement.ChildNodes)
        {
            string text = node.InnerText; //or loop through its children as well
            //HOW - TO - POPULATE THE ITEM OBJECT ??????
        }

        listBox1.DataSource = _lbList;
        listBox1.DisplayMember = "VALUE";
        listBox1.ValueMember = "SEL";

    How do I read the two child nodes of each node - SEL and VALUE - and populate them in a new Item DTO?
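    A hedged sketch of one way to fill in the marked loop, using only System.Xml APIs available on .NET 2.0 (the element names CHK, SEL and VALUE come from the XML in the question):

        foreach (XmlNode node in doc.DocumentElement.ChildNodes)
        {
            Item item = new Item();
            item.CHK = node["CHK"] == null ? null : node["CHK"].InnerText; // empty element -> ""
            item.SEL = int.Parse(node["SEL"].InnerText);                   // "00001" -> 1
            item.VALUE = node["VALUE"].InnerText;                          // "val01"
            _lbList.Add(item);
        }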

    Read the article

  • How to maximize http.sys file upload performance

    - by anelson
    I'm building a tool that transfers very large streaming data sets (possibly on the order of terabytes in a single stream; routinely in the tens of gigabytes) from one server to another. The client portion of the tool will read blocks from the source disk and send them over the network. The server side will read these blocks off the network and write them to a file on the server disk.

    Right now I'm trying to decide which transport to use. The options are raw TCP and HTTP. I really, REALLY want to be able to use HTTP. The HttpListener (or WCF, if I want to go that route) makes it easy to plug in to the HTTP Server API (http.sys), and I can get things like authentication and SSL for free. The problem right now is performance.

    I wrote a simple test harness that sends 128K blocks of NULL bytes using the BeginWrite/EndWrite async I/O idiom, with async BeginRead/EndRead on the server side. I've modified this test harness so I can do this with either HTTP PUT operations via HttpWebRequest/HttpListener, or plain old socket writes using TcpClient/TcpListener. To rule out issues with network cards or network pathways, both the client and server are on one machine and communicate over localhost.

    On my 12-core Windows 2008 R2 test server, the TCP version of this test harness can push bytes at 450MB/s, with minimal CPU usage. On the same box, the HTTP version of the test harness runs between 130MB/s and 200MB/s depending upon how I tweak it. In both cases CPU usage is low, and the vast majority of what CPU usage there is is kernel time, so I'm pretty sure my usage of C# and the .NET runtime is not the bottleneck. The box has two 6-core Xeon X5650 processors, 24GB of single-ranked DDR3 RAM, and is used exclusively by me for my own performance testing.

    I already know about HTTP client tweaks like ServicePointManager.MaxServicePointIdleTime, ServicePointManager.DefaultConnectionLimit, ServicePointManager.Expect100Continue, and HttpWebRequest.AllowWriteStreamBuffering. Does anyone have any ideas for how I can get HTTP.sys performance beyond 200MB/s? Has anyone seen it perform this well in any environment?

    Read the article

  • Maintaining ViewModel fields with default model binding and failed validation

    - by TonE
    I have an ASP.NET MVC controller with a 'MapColumns' action, along with a corresponding view model and view. I'm using the DefaultModelBinder to bind a number of drop-down lists to a Dictionary in the view model. The view model also contains an IList field for both source and destination columns, which are used to render the view.

    My question is what to do when validation fails on the POST call to the MapColumns action. Currently the MapColumns view is returned with the view model resulting from the default binding. This contains the Dictionary values but not the two lists used to render the page. What is the best way to re-provide these to the view? I can set them explicitly after failed validation, but if obtaining these values (via GetSourceColumns() and GetDestinationColumns() in the example) carries any overhead, this doesn't seem ideal. What I am looking for is a way to retain these lists when they are not bound to the model from the view.

    Here is some code to illustrate:

        public class TestViewModel
        {
            public Dictionary<string, string> ColumnMappings { get; set; }
            public List<string> SourceColumns;
            public List<string> DestinationColumns;
        }

        public class TestController : Controller
        {
            [AcceptVerbs(HttpVerbs.Get)]
            public ActionResult MapColumns()
            {
                var model = new TestViewModel();
                model.SourceColumns = GetSourceColumns();
                model.DestinationColumns = GetDestinationColumns();
                return View(model);
            }

            [AcceptVerbs(HttpVerbs.Post)]
            public ActionResult MapColumns(TestViewModel model)
            {
                if (Validate(model))
                {
                    // Do something with model.ColumnMappings
                    return RedirectToAction("Index");
                }
                else
                {
                    // Here model.SourceColumns and model.DestinationColumns are empty
                    return View(model);
                }
            }
        }

    The relevant section of MapColumns.aspx:

        <% int columnCount = 0;
           foreach (string column in Model.SourceColumns) { %>
            <tr>
                <td>
                    <input type="hidden" name="ColumnMappings[<%= columnCount %>].Value" value="<%= column %>" />
                    <%= Html.DropDownList("ColumnMappings[" + columnCount + "].Key", Model.DestinationColumns.AsSelectItemList()) %>
                </td>
            </tr>
        <% columnCount++; } %>

    Read the article

  • N2 CMS SlidingCurtain control is not visible

    - by Carl Raymond
    I just set up a new N2 site by starting with the MVC 2 Web Application template in Visual Studio and then following the directions in the N2 CMS Developer Documentation, in the section "Integrating with Existing ASP.NET MVC Application". I have the basic site running now, but with one problem: the sliding curtain widget that holds the administrative controls is not visible in the upper right corner (when logged in, of course).

    I can make it visible the hard way by using Firebug to locate it in the DOM and then disabling a couple of the CSS positioning rules. Once I do that, it seems to work normally. After I open it that way, I can click the various controls or close it up (and I see the animation). But then it's off screen again.

    My master page has the sliding curtain just inside the <body> tag:

        <body>
            <n2:SlidingCurtain runat="server">
                <n2:ControlPanel runat="server" />
            </n2:SlidingCurtain>
            ...

    The site.css file generated in the base MVC site doesn't seem to do any positioning that would affect this. Firebug shows that right after my <body> tag, I have this:

        <div class="sc" id="SC" style="top: -2px; left: -574px;"><div class="scContent"> ....

    The style for <div class="sc" ...> is:

        element.style {
            left: -574px;
            top: -2px;
        }

        .sc {
            background: #FFFFFF none repeat-x scroll 0 0;
            border-color: #CCCCBB;
            border-style: none solid solid none;
            border-width: 1px;
            left: -200px;
            position: fixed;
            top: -200px;
            z-index: 990;
        }

    If I disable both top: and both left: rules, the widget appears.

    Read the article

  • VS2008: File creation fails randomly in unit testing?

    - by Tim
    I'm working on implementing a reasonably simple XML serializer/deserializer (log file parser) application in C# .NET with VS 2008. I have about 50 unit tests right now for various parts of the code (mostly for the various serialization operations), and some of them seem to fail mostly at random when they deal with file I/O.

    The way the tests are structured is that in the test setup method, I create a new empty file at a certain predetermined location and close the stream I get back. Then I run some basic tests on the file (varying by what exactly is under test). In the cleanup method, I delete the file again.

    A large portion (usually 30 or more, though the number varies from run to run) of my unit tests will fail at the initialize method, claiming they can't access the file I'm trying to create. I can't pin down the exact reason, since a test that works one run fails the next; they all succeed when run individually. What's the problem here? Why can't I access this file across multiple unit tests?

    Relevant methods for a unit test that will fail some of the time:

        [TestInitialize()]
        public void LogFileTestInitialize()
        {
            this.testFolder = System.Environment.GetFolderPath(
                System.Environment.SpecialFolder.LocalApplicationData
            );
            this.testPath = this.testFolder + "\\empty.lfp";
            System.IO.File.Create(this.testPath);
        }

        [TestMethod()]
        public void LogFileConstructorTest()
        {
            string filePath = this.testPath;
            LogFile target = new LogFile(filePath);
            Assert.AreNotEqual(null, target);
            Assert.AreEqual(this.testPath, target.filePath);
            Assert.AreEqual("empty.lfp", target.fileName);
            Assert.AreEqual(this.testFolder + "\\empty.lfp.lfpdat", target.metaPath);
        }

        [TestCleanup()]
        public void LogFileTestCleanup()
        {
            System.IO.File.Delete(this.testPath);
        }

    And the LogFile() constructor:

        public LogFile(String filePath)
        {
            this.entries = new List<Entry>();
            this.filePath = filePath;
            this.metaPath = filePath + ".lfpdat";
            this.fileName = filePath.Substring(filePath.LastIndexOf("\\") + 1);
        }

    The precise error message:

        Initialization method LogFileParserTester.LogFileTest.LogFileTestInitialize threw exception.
        System.IO.IOException: System.IO.IOException: The process cannot access the file
        'C:\Users\<user>\AppData\Local\empty.lfp' because it is being used by another process..
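    Worth noting as a hedged observation (not necessarily the fix the poster eventually used): System.IO.File.Create returns an open FileStream, so one variant of the initializer that releases the handle right away would look like this:

        [TestInitialize()]
        public void LogFileTestInitialize()
        {
            this.testFolder = System.Environment.GetFolderPath(
                System.Environment.SpecialFolder.LocalApplicationData);
            this.testPath = this.testFolder + "\\empty.lfp";

            // File.Create returns an open FileStream; disposing it releases the
            // handle so the file isn't still "in use by another process" later.
            using (System.IO.FileStream stream = System.IO.File.Create(this.testPath))
            {
            }
        }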

    Read the article

  • Network communication for a turn based board game

    - by randooom
    Hi all, my first question here, so please don't be too harsh if something went wrong. :)

    I'm currently a CS student (from Germany, if this info is of any use ;) ) and we got a freely selectable programming assignment, which we have to write as a C++/CLI Windows Forms application. My team - two others and me - decided to go for a network-compatible port of the board game Risk. We divided the work into three parts, namely UI, game logic and network. Now we're at the part where we have to get everything working together, and the big question mark is: how do we get the clients synchronized with each other?

    Our approach so far is that each client has all the information necessary to calculate and/or execute all possible actions. Actually the clients have all information available at all times, aside from the game-initializing phase (add players, select map, etc.), which needs one "super-client" with some extra stuff to control things.

    This is the standard scenario of our approach:

    the player performs an action; the action is valid and gets executed on the player's client
    the action is sent over the network
    the action is executed on the other clients

    The design (i.e., no code so far) we came up with is something like the following pseudo sequence diagram. Gui, Controller and Network implement all possible actions (i.e. all actions which change data) as methods of an interface, so each part can implement the methods in a way that gets its job done.

    Example with Action() on the player's client:

        Player --> Gui.Action()
        Gui --> Controller.Action()
        Controller --> Logic.Action()
        (Logic.Action() == NoError)? Controller --> Network.Action()
        Network --> Parser.ParseAction()
        Network.Send(msg)

    On all other clients:

        Network.Recv(msg)
        Network --> Parser.Deparse(msg)
        Parser --> Logic.Action()
        Logic --> Gui.Action()

    The questions:

    Is this a viable approach to our task?
    Any better/easier way to do this?
    Recommendations, critique?

    Our knowledge (so you can better target your answer): we are on the beginner side when it comes to programming somewhat larger projects with a small team. All of us have some general programming experience and a basic understanding of the .NET libraries and Windows Forms. If you need any further information, please feel free to ask.
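    Purely as an illustration of the "every layer implements the same action interface" idea described above - a hedged sketch in C# (the project itself is C++/CLI, and the action names are invented for this example):

        // Invented example actions; each layer (Gui, Controller, Network) would
        // implement this same surface so calls can be forwarded layer to layer.
        public interface IGameActions
        {
            void PlaceUnits(int playerId, int territoryId, int count);
            void Attack(int fromTerritoryId, int toTerritoryId);
            void EndTurn(int playerId);
        }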

    Read the article

  • Is it normal for a programmer with 2 years experience to take a long time to code simple programs?

    - by ajax81
    Hi all, I'm a relatively new programmer (18 months on the scene), and I'm finally getting to the point where I'm comfortable accepting projects and developing solutions under minimal supervision. Unfortunately, this also means that I've become acutely aware of my performance shortfalls, the most prevalent of which is the amount of time it takes me to develop, test, and submit algorithms for review.

    A great example of what I'm talking about occurred this week when I was tasked with developing a simple XML web service (ASP.NET 3.5) callable via client-side JavaScript, that accepts a single parameter and returns a dataset output to a modal window (please note this is the first time I've had to develop a web service and have had ZERO experience creating/consuming them... let alone calling them from JS client side).

    Keeping a long story short: I worked on it for 4 days straight, all day each day, for a grand total of 36 hours, not including the time I spent dwelling on the problem in the shower, on the morning commute, and lying awake in bed at night. I learned a great deal about web services and XML/JSON/JavaScript... but was called in for a management review to discuss the length of time it took me to develop the solution. In the meeting, I was praised for the quality of my work and was in fact told that my effort was commendable. However, they (senior leads and PMs) weren't impressed with the amount of time it took me to develop the solution and expressed that they would have liked to see the solution in roughly 1/3 of the time it took me.

    I guess what concerns me the most is that I've identified this pattern as common for myself. Between online videos, book research, and trial/error coding... if it's something I haven't seen before, I can spend up to two weeks on a problem that seems to take the pros in the videos only moments to code up. And of course, knowing that management isn't happy with this pattern has shaken me up a bit.

    To sum up, I have some very specific questions I'd like to ask, and would greatly appreciate your objective professional feedback:

    Is my experience as a junior programmer common among new developers? Or is it possible that I'm just not cut out for the work?
    If you suspect that my experience is not common and that there may be an aptitude issue, do you have any suggestions/solutions that I could propose to management to help bring me up to speed?
    Do seasoned, professional programmers ever encounter knowledge barriers that considerably delay deliverables?
    When you started out in the industry, did you know how to "do it all"? If not, how long did it take you to be perceived as "proficient"? Was it a natural progression of trial and error, or was there a particular zen moment when you knew you had achieved super saiyen power level?

    Anyways, thanks for taking the time to read my question(s). I don't know if this is the right place to ask for professional career guidance, but I greatly appreciate your willingness to help me out. Cheers, Daniel

    Read the article

  • How to use ResourceManager in a "website" mode ?

    - by Erick
    I am trying to do a manual translation for the application I am working with. (There is already a LocalizationModule, but it's working dodgily, so I can't use <asp:Localize /> tags.) Normally with ResourceManager you are supposed to use it as Namespace.Folder.ResourceName (in a web application). However, I am translating an existing ASP.NET "website" (not a web application, so there is no namespace). The resources are located in a folder named "Locales/resources", which contains "fr-ca.resx" and "en-us.resx". So I used code like this:

        public static string T(string search)
        {
            System.Resources.ResourceManager resMan = new System.Resources.ResourceManager(
                "Locales",
                System.Reflection.Assembly.GetExecutingAssembly(),
                null);

            var text = resMan.GetString(search, System.Threading.Thread.CurrentThread.CurrentCulture);

            if (text == null)
                return "null";
            else if (text == string.Empty)
                return "empty";
            else
                return text;
        }

    and inside the page I have something like this:

        <%= Locale.T("T_HOME") %>

    When I refresh, I get this:

        Could not find any resources appropriate for the specified culture or the neutral culture.
        Make sure "Locales.resources" was correctly embedded or linked into assembly "App_Code.9yopn1f7"
        at compile time, or that all the satellite assemblies required are loadable and fully signed.

        Description: An unhandled exception occurred during the execution of the current web request.
        Please review the stack trace for more information about the error and where it originated in the code.

        Exception Details: System.Resources.MissingManifestResourceException: Could not find any resources
        appropriate for the specified culture or the neutral culture. Make sure "Locales.resources" was
        correctly embedded or linked into assembly "App_Code.9yopn1f7" at compile time, or that all the
        satellite assemblies required are loadable and fully signed.

        Source Error:
        Line 14: System.Resources.ResourceManager resMan = new System.Resources.ResourceManager( "Locales", System.Reflection.Assembly.GetExecutingAssembly(), null );
        Line 15:
        Line 16: var text = resMan.GetString(search, System.Threading.Thread.CurrentThread.CurrentCulture);
        Line 17:
        Line 18: if (text == null)

        Source File: c:\inetpub\vhosts\galerieocarre.com\subdomains\dev\httpdocs\App_Code\Locale.cs    Line: 16

    I even tried to load the resource as "Locales.fr-ca" or only "fr-ca" - nothing quite works here.
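    As a hedged aside (an assumption about project layout, not something the question states): if the .resx files were placed under the site's App_GlobalResources folder as Locales.resx plus culture-specific variants, ASP.NET's built-in resource provider could be used instead of constructing a ResourceManager by hand, along these lines:

        // Sketch only; assumes App_GlobalResources/Locales.resx (+ Locales.fr-ca.resx etc.)
        // and collapses the null/empty handling of the original T() for brevity.
        public static string T(string search)
        {
            object text = System.Web.HttpContext.GetGlobalResourceObject(
                "Locales", search, System.Threading.Thread.CurrentThread.CurrentCulture);
            return text == null ? "null" : (string)text;
        }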

    Read the article

  • Wondering why DisplayName attribute is ignored in LabelFor on an overridden property

    - by Lasse Krantz
    Hi, today I got confused when doing a couple of <%= Html.LabelFor(m => m.MyProperty) %> calls in ASP.NET MVC 2 and using the [DisplayName("Show this instead of MyProperty")] attribute from System.ComponentModel. As it turned out, when I put the attribute on an overridden property, LabelFor didn't seem to notice it. However, the [Required] attribute works fine on the overridden property, and the generated error message actually uses the DisplayNameAttribute.

    This is some trivial example code; the more realistic scenario is that I have a database model separate from the view model, but for convenience I'd like to inherit from the database model, add view-only properties and decorate the view model with the attributes for the UI.

        public class POCOWithoutDataAnnotations
        {
            public virtual string PleaseOverrideMe { get; set; }
        }

        public class EditModel : POCOWithoutDataAnnotations
        {
            [Required]
            [DisplayName("This should be as label for please override me!")]
            public override string PleaseOverrideMe
            {
                get { return base.PleaseOverrideMe; }
                set { base.PleaseOverrideMe = value; }
            }

            [Required]
            [DisplayName("This property exists only in EditModel")]
            public string NonOverriddenProp { get; set; }
        }

    The strongly typed ViewPage<EditModel> contains:

        <div class="editor-label">
            <%= Html.LabelFor(model => model.PleaseOverrideMe) %>
        </div>
        <div class="editor-field">
            <%= Html.TextBoxFor(model => model.PleaseOverrideMe) %>
            <%= Html.ValidationMessageFor(model => model.PleaseOverrideMe) %>
        </div>
        <div class="editor-label">
            <%= Html.LabelFor(model => model.NonOverriddenProp) %>
        </div>
        <div class="editor-field">
            <%= Html.TextBoxFor(model => model.NonOverriddenProp) %>
            <%= Html.ValidationMessageFor(model => model.NonOverriddenProp) %>
        </div>

    The labels are then displayed as "PleaseOverrideMe" (not using the DisplayNameAttribute) and "This property exists only in EditModel" (using the DisplayNameAttribute) when viewing the page. If I post with empty values, triggering the validation with this action method:

        [HttpPost]
        public ActionResult Edit(EditModel model)
        {
            if (!ModelState.IsValid)
                return View(model);

            return View("Thanks");
        }

    then <%= Html.ValidationMessageFor(model => model.PleaseOverrideMe) %> actually uses the [DisplayName("This should be as label for please override me!")] attribute and produces the default error text "The This should be as label for please override me! field is required."

    Would some friendly soul shed some light on this?

    Read the article

  • Checkbox has the wrong values

    - by Praesagus
    I have a page with several checkboxes on it, along with a dropdown list of users. The checkboxes correspond to the user's permissions, so each user is different. When a different user is chosen, the checkboxes should change to match that user's permissions.

    The code-behind is correct; I stepped through it, and the checkbox's Checked value is being assigned with the correct value to match the user:

        chk.Checked = viewable;

    No matter what I assign to the Checked property, the value stays the same as the very first submittal. I tried chk.EnableViewState = false; but that did not help. I am sure it is .NET trying to be helpful (grrr). Thank you for your help.

    There is no databinding per se. I will be saving the values from the checkboxes via XMLHTTP when the user clicks on them. I never want the checkboxes to fill with values other than what I give them. Here is the essence of the code:

        foreach (DataRow dRow in dTable.Rows)
        {
            viewable = Convert.ToBoolean(dRow["Viewable"]);
            table.Rows.Add(CreatePageRow(Convert.ToString(dRow["SitePageViewName"]),
                                         Convert.ToString(dRow["SitePageName"]),
                                         folderDepth, maxFolderDepth, viewable));
        }
        FileTree.Controls.Add(table);

        private TableRow CreatePageRow(String ViewName, String FileName, Int32 folderDepth, Int32 maxFolderDepth, Boolean viewable)
        {
            TableRow tr = new TableRow();
            tr.Cells.Add(CreateCheckboxCell(viewable, folderDepth + 3));
            tr.Cells.Add(CreateImageCell("/images/icon/sm/report_graph.gif", "Page: " + ViewName));
            tr.Cells.Add(CreateTitleCell(ViewName, maxFolderDepth - (folderDepth + 1), FileName));
            return tr;
        }

        private TableCell CreateCheckboxCell(Boolean viewable, Int32 colSpan)
        {
            TableCell td = new TableCell();
            if (colSpan > 1)
                td.ColumnSpan = colSpan;

            CheckBox chk = new CheckBox();
            chk.Checked = viewable;
            chk.EnableViewState = false;
            td.Controls.Add(chk);
            td.CssClass = "right";
            return td;
        }

    Read the article

  • What is the relationship between WebProxy & IWebProxy with respect to WebClient?

    - by Streamline
    I am creating an app (.NET 2.0) that uses WebClient to connect (DownloadData, etc.) to/from an HTTP web service. I am now adding a form to handle allowing proxy information to either be stored or set to use the defaults, and I am a little confused about some things.

    First, some of the methods and properties available in either WebProxy or IWebProxy are not in both. What is the difference here with respect to setting up how WebClient will behave when it is called?

    Secondly, do I have to tell WebClient to use the proxy information if I set it using either the WebProxy or IWebProxy class elsewhere? Or is it automatically inherited?

    Thirdly, when giving the user the option to use the default proxy (whatever is set in IE) and to use the default credentials (I assume also whatever is set in IE), are these two mutually exclusive? Or do you only use default credentials when you have also used the default proxy? This gets me to the whole difference between WebProxy and IWebProxy: WebRequest.DefaultWebProxy is an IWebProxy, but UseDefaultCredentials is not a member of IWebProxy - it exists only on WebProxy. And in turn, how do you set the proxy on WebRequest.DefaultWebProxy if they are two different classes?

    Here is my current method to read the stored form settings entered by the user - but I am not sure if this is correct, not enough, overkill, or just wrong because of the mix of WebProxy and IWebProxy:

        private WebProxy _proxyInfo = new WebProxy();

        private WebProxy SetProxyInfo()
        {
            if (UseProxy)
            {
                if (UseIEProxy)
                {
                    // is doing this enough to set this as default for WebClient?
                    IWebProxy iProxy = WebRequest.DefaultWebProxy;
                    if (UseIEProxyCredentials)
                    {
                        _proxyInfo.UseDefaultCredentials = true;
                    }
                    else
                    {
                        // is doing this enough to set this as default credentials for WebClient?
                        WebRequest.DefaultWebProxy.Credentials = new NetworkCredential(ProxyUsername, ProxyPassword);
                    }
                }
                else
                {
                    // is doing this enough to set this as default for WebClient?
                    WebRequest.DefaultWebProxy = new WebProxy(ProxyAddress, ParseLib.StringToInt(ProxyPort));
                    if (UseIEProxyCredentials)
                    {
                        _proxyInfo.UseDefaultCredentials = true;
                    }
                    else
                    {
                        WebRequest.DefaultWebProxy.Credentials = new NetworkCredential(ProxyUsername, ProxyPassword);
                    }
                }
            }

            // Do I need WebClient to absorb this returned proxy info if I didn't set or use defaults?
            return _proxyInfo;
        }

    Is there any reason not to just scrap storing app-specific proxy information and only allow the app to use the default proxy information and credentials for the logged-in user? Will this ever not be enough when using HTTP?

    Part 2 question: how can I test whether the WebClient instance is using the proxy information or not?
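    For the second point, a minimal hedged sketch (hostnames, port and credentials are placeholders) of assigning a proxy directly to a specific WebClient instance via its Proxy property, rather than relying on the process-wide WebRequest.DefaultWebProxy:

        // WebClient.Proxy is typed as IWebProxy, so a concrete WebProxy can be assigned to it.
        WebClient client = new WebClient();
        WebProxy proxy = new WebProxy("proxy.example.com", 8080);      // placeholder address/port
        proxy.Credentials = new NetworkCredential("user", "password"); // placeholder credentials
        client.Proxy = proxy;

        byte[] data = client.DownloadData("http://example.com/service"); // placeholder URL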

    Read the article

  • Decompressing a very large serialized object and managing memory

    - by Mike_G
    I have an object that contains tons of data used for reports. In order to get this object from the server to the client, I first serialize the object into a memory stream, then compress it using the GZip stream of .NET. I then send the compressed object as a byte[] to the client.

    The problem is that on some clients, when they get the byte[] and try to decompress and deserialize the object, a System.OutOfMemoryException is thrown. I've read that this exception can be caused by new()-ing a bunch of objects, or by holding on to a bunch of strings. Both of these happen during the deserialization process.

    So my question is: how do I prevent the exception (any good strategies)? The client needs all of the data, and I've trimmed down the number of strings as much as I can.

    Edit: here is the code I am using to serialize/compress (implemented as extension methods):

        public static byte[] SerializeObject<T>(this object obj, T serializer) where T : XmlObjectSerializer
        {
            Type t = obj.GetType();
            if (!Attribute.IsDefined(t, typeof(DataContractAttribute)))
                return null;

            byte[] initialBytes;
            using (MemoryStream stream = new MemoryStream())
            {
                serializer.WriteObject(stream, obj);
                initialBytes = stream.ToArray();
            }
            return initialBytes;
        }

        public static byte[] CompressObject<T>(this object obj, T serializer) where T : XmlObjectSerializer
        {
            Type t = obj.GetType();
            if (!Attribute.IsDefined(t, typeof(DataContractAttribute)))
                return null;

            byte[] initialBytes = obj.SerializeObject(serializer);

            byte[] compressedBytes;
            using (MemoryStream stream = new MemoryStream(initialBytes))
            {
                using (MemoryStream output = new MemoryStream())
                {
                    using (GZipStream zipper = new GZipStream(output, CompressionMode.Compress))
                    {
                        Pump(stream, zipper);
                    }
                    compressedBytes = output.ToArray();
                }
            }
            return compressedBytes;
        }

        internal static void Pump(Stream input, Stream output)
        {
            byte[] bytes = new byte[4096];
            int n;
            while ((n = input.Read(bytes, 0, bytes.Length)) != 0)
            {
                output.Write(bytes, 0, n);
            }
        }

    And here is my code for decompress/deserialize:

        public static T DeSerializeObject<T, TU>(this byte[] serializedObject, TU deserializer) where TU : XmlObjectSerializer
        {
            using (MemoryStream stream = new MemoryStream(serializedObject))
            {
                return (T)deserializer.ReadObject(stream);
            }
        }

        public static T DecompressObject<T, TU>(this byte[] compressedBytes, TU deserializer) where TU : XmlObjectSerializer
        {
            byte[] decompressedBytes;
            using (MemoryStream stream = new MemoryStream(compressedBytes))
            {
                using (MemoryStream output = new MemoryStream())
                {
                    using (GZipStream zipper = new GZipStream(stream, CompressionMode.Decompress))
                    {
                        ObjectExtensions.Pump(zipper, output);
                    }
                    decompressedBytes = output.ToArray();
                }
            }
            return decompressedBytes.DeSerializeObject<T, TU>(deserializer);
        }

    The object that I am passing is a wrapper object; it just contains all the relevant objects that hold the data. The number of objects can be large (depending on the report's date range), but I've seen as many as 25k strings. One thing I did forget to mention is that I am using WCF, and since the inner objects are passed individually through other WCF calls, I am using the DataContract serializer, and all my objects are marked with the DataContract attribute.
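    A hedged sketch of one memory-saving variant of DecompressObject (a sketch, not the poster's code): XmlObjectSerializer.ReadObject accepts any Stream, so the deserializer can read straight from the GZipStream and the fully decompressed byte[] never has to be materialized.

        public static T DecompressObjectStreaming<T, TU>(this byte[] compressedBytes, TU deserializer)
            where TU : XmlObjectSerializer
        {
            using (MemoryStream stream = new MemoryStream(compressedBytes))
            using (GZipStream zipper = new GZipStream(stream, CompressionMode.Decompress))
            {
                // Deserialize directly from the decompression stream; this avoids the
                // intermediate output MemoryStream and the decompressedBytes array.
                return (T)deserializer.ReadObject(zipper);
            }
        }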

    Read the article

  • Rendering javascript at the server side level. A good or bad idea?

    - by davidhong
    I want to make something clear first: this isn't a question about server-side JavaScript or running JavaScript server side. This is a question about rendering JavaScript code (which will be executed on the client side) from server-side code.

    Having said that, take a look at the ASP.NET code below, for example:

        hlRemoveCategory.Attributes.Add("onclick", "return confirm('Are you sure you want to delete this?');")

    This prescribes the client-side onclick event on the server side, as opposed to:

        $('a[rel=remove]').bind('click', function(event) {
            return confirm('Are you sure you want to delete this?');
        });

    Now the question I want to ask is: what is the benefit of rendering JavaScript from server-side code, or vice versa?

    I personally prefer the second way of hooking up client-side UI/behaviour to HTML elements, for the following reasons:

    The server side already does whatever it needs to, including data validation, event delegation, etc.;
    What the server side sees as an event is not necessarily the same process on the client side, i.e. there are plenty more events on the client side (just look at custom events);
    What happens on the client side and on the server side during an event could be completely irrelevant to each other and decoupled;
    Whatever happens on the client side happens on the client side; there is no need for the server to know. The server should process and run whatever is given to it - how the process comes to life is not really up to it to decide in the case of client-side events;
    and so on and so forth.

    These are my thoughts, obviously. I want to know what others think and whether there have been any discussions on this topic. Topics branching from this argument could reach:

    Code management: is it easier to render everything from the server side?
    Separation of concerns: is it easier if client-side logic is separated from server-side logic?
    Efficiency: which is more efficient, both in terms of coding and running?

    At the end of the day, I am trying to move my team towards the second approach. There are a lot of old guys on this team who are afraid of this change. I just wish to convince them with the right facts and stats. Let me know your thoughts.

    Read the article

  • Entity Framework Update Entity along with child entities (add/update as necessary)

    - by Jorin
    I have a many-to-many relationship between Issues and Scopes in my EF context. In ASP.NET MVC, I bring up an Edit form that allows the user to edit a particular Issue. At the bottom of the form is a list of checkboxes that allow them to select which scopes apply to this issue.

    When editing an issue, it will likely already have some Scopes associated with it - those boxes will already be checked. However, the user has the opportunity to check more scopes or remove some of the currently checked ones.

    My code looked something like this to save just the Issue:

        using (var edmx = new MayflyEntities())
        {
            Issue issue = new Issue { IssueID = id, TSColumn = formIssue.TSColumn };
            edmx.Issues.Attach(issue);
            UpdateModel(issue);
            if (ModelState.IsValid)
            {
                //if (edmx.SaveChanges() != 1) throw new Exception("Unknown error. Please try again.");
                edmx.SaveChanges();
                TempData["message"] = string.Format("Issue #{0} successfully modified.", id);
            }
        }

    When I tried to add in the logic to save the associated scopes, I tried several things, but ultimately this is what made the most sense to me:

        using (var edmx = new MayflyEntities())
        {
            Issue issue = new Issue { IssueID = id, TSColumn = formIssue.TSColumn };
            edmx.Issues.Attach(issue);
            UpdateModel(issue);
            foreach (int scopeID in formIssue.ScopeIDs)
            {
                var thisScope = new Scope { ID = scopeID };
                edmx.Scopes.Attach(thisScope);
                thisScope.ProjectID = formIssue.ProjectID;
                if (issue.Scopes.Contains(thisScope))
                {
                    issue.Scopes.Attach(thisScope); //the scope already exists
                }
                else
                {
                    issue.Scopes.Add(thisScope); // the scope needs to be added
                }
            }
            if (ModelState.IsValid)
            {
                //if (edmx.SaveChanges() != 1) throw new Exception("Unknown error. Please try again.");
                edmx.SaveChanges();
                TempData["message"] = string.Format("Issue #{0} successfully modified.", id);
            }
        }

    But unfortunately, that just throws the following exception:

        An object with the same key already exists in the ObjectStateManager. The ObjectStateManager cannot track multiple objects with the same key.

    What am I doing wrong?

    Read the article
