Search Results

Search found 21140 results on 846 pages for 'control theory'.

Page 384/846

  • .Net website create directory to remote server access denied

    - by tmfkmoney
    I have a web application that creates directories. The application works fine when it creates a directory on the web server, but it fails when it tries to create a directory on our remote file server. The file server and the web server are in the same domain. I have created a user in our domain, "DOMAIN\aspnet", and the user is present on both servers. I am running my .NET app pool under the domain user, and I have also tried using Windows impersonation in the web.config to run under the domain user. I have verified that the domain user has full control of the remote directory. In an effort to debug this I have also given "Everyone" full control of the remote directory and added the domain user to the Administrators group. I have a simple .NET test page on the web server to test this. Through the test page I am able to read the directory on the file server and get a list of everything in it, but I am not able to upload files or create directories on the file server.

    Here's code that works:

        var path = @"\\fileserver\images\";
        var di = new DirectoryInfo(path);
        foreach (var d in di.GetDirectories())
        {
            Response.Write(d.Name);
        }

    Here's code that doesn't work:

        path = Path.Combine(path, "NewDirectory");
        Directory.CreateDirectory(path);

    Here's the error I'm getting:

        Access to the path '\\fileserver\images\NewDirectory' is denied.

    I'm pretty stuck on this. Any ideas?
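
    One thing I am considering trying next is explicit impersonation around the CreateDirectory call instead of web.config impersonation. This is only a rough sketch of what I mean - the credentials are hard-coded placeholders here and would really come from configuration, and the LogonUser constants are the standard advapi32 values:

        using System;
        using System.IO;
        using System.Runtime.InteropServices;
        using System.Security.Principal;

        static class RemoteDirHelper
        {
            [DllImport("advapi32.dll", SetLastError = true)]
            static extern bool LogonUser(string user, string domain, string password,
                                         int logonType, int logonProvider, out IntPtr token);

            [DllImport("kernel32.dll", SetLastError = true)]
            static extern bool CloseHandle(IntPtr handle);

            const int LOGON32_LOGON_NEW_CREDENTIALS = 9; // use these credentials for remote access only
            const int LOGON32_PROVIDER_WINNT50 = 3;

            public static void CreateRemoteDirectory(string uncPath)
            {
                IntPtr token;
                if (!LogonUser("aspnet", "DOMAIN", "placeholder-password",
                               LOGON32_LOGON_NEW_CREDENTIALS, LOGON32_PROVIDER_WINNT50, out token))
                {
                    throw new InvalidOperationException("LogonUser failed: " + Marshal.GetLastWin32Error());
                }

                try
                {
                    // Impersonate only for the duration of the file-system call.
                    using (WindowsImpersonationContext ctx = new WindowsIdentity(token).Impersonate())
                    {
                        Directory.CreateDirectory(uncPath);
                    }
                }
                finally
                {
                    CloseHandle(token);
                }
            }
        }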

    Read the article

  • Mouse receiver stopped working after pairing the mouse to a Unifying receiver

    - by mp19uy
    I bought this mouse, a Logitech M510, which came with a nano receiver (non-Unifying). Yes, I know the mouse is supposed to come with a Unifying receiver, but I bought it knowing it would not come in its original package and would ship with a non-Unifying receiver. When I received it, everything worked fine. But after connecting the mouse to another computer using a Unifying receiver (which also worked fine, by the way), I could no longer connect the mouse back to my computer using the non-Unifying receiver. I tried everything from removing the batteries and reinstalling the drivers to restarting the computer and trying different computers, but I couldn't pair them.

    In fact, the Logitech M510 documentation says it works with Unifying receivers only, and there is an article explaining it: http://logitech-en-amr.custhelp.com/app/answers/detail/a_id/18001/~/using-my-m510-with-a-different-usb-receiver So my theory is that the problem was pairing it with a Unifying receiver, and now the mouse is no longer recognized by the other receiver. The (non-Unifying) receiver itself is recognized by Windows, and if I connect the mouse using a Unifying receiver, it works. I would like to know if there is any known solution for this, or if there is anything else I can try to solve it.

    Read the article

  • plupload with web.py

    - by markus
    Hi, I have a problem. I want to upload a file with plupload using the HTML5 runtime. This is my HTML/JS code:

        jQuery(function(){
            jQuery("#uploader").pluploadQueue({
                // General settings
                runtimes : 'html5',
                name : 'file',
                url : 'http://server.name/addContent',
                max_file_size : '${maxSize}$_("GB")'
            });

            jQuery('#form_upload_file').submit(function(e) {
                var uploader = jQuery('#uploader').pluploadQueue();
                // Validate number of uploaded files
                if (uploader.total.uploaded == 0) {
                    // Files in queue, upload them first
                    if (uploader.files.length > 0) {
                        // When all files are uploaded, submit the form
                        uploader.bind('UploadProgress', function() {
                            if (uploader.total.uploaded == uploader.files.length)
                                jQuery('#form_upload_file').submit();
                        });
                        uploader.start();
                    } else
                        alert('You must at least upload one file.');
                    e.preventDefault();
                }
            });
        });

        <form id="form_upload_file" action="#" method="POST">
            <div id="uploader"></div>
            <input type="hidden" name="token" value="token" />
            <input type="hidden" name="idUser" value="$idUser" />
        </form>

    When I click the button to upload (the submit() method is not called), the browser sends an OPTIONS HTTP request to my server, so I don't know what I must do to save the file. This is my web.py code:

        def OPTIONS(self):
            web.header('Content-type', 'text/plain; charset=utf-8')
            web.header('Cache-Control', 'no-store, no-cache, must-revalidate')
            web.header('Cache-Control', 'post-check=0, pre-check=0', False)
            web.header('Pragma', 'no-cache')

        def POST(self):
            input = web.input(_unicode=False, file={})  # get the form inputs
            self.copy(input.file.file)
            # etc.

    Any idea? Thanks.

    Read the article

  • Make case-sensitive SMB share case-insensitive

    - by fungs
    I am running a legacy XP app that I would like to move onto a network share. It is very simple and works in theory, but the share is served by a Linux server (which I cannot configure), and the software does not work correctly, apparently because it was written assuming case-insensitive file names. From what I have read, network shares behave like the filesystem they use underneath, so this is expected. Unfortunately I cannot fix the software myself. Is there any way to make a case-sensitive Windows network drive behave case-insensitively on the client side? I found two approaches. First, something like icasefile (http://wnd.katei.fi/icasefile/), which wraps around the program and intercepts its file I/O - but that is UNIX-only. Second, a proxy virtual filesystem (e.g. something built on Dokan). Unfortunately I couldn't find any suitable filesystem; the only possibility would be to put a case-insensitive filesystem on an image file and mount it on the share using, for example, ImDisk (http://www.ltr-data.se/opencode.html/#ImDisk). Do you have any better ideas?

    Read the article

  • Replace System.Net.Mail.MailMessage with manually created message and send it

    - by DEH
    I am trying to send emails that will bounce to a known mailbox, and I plan to use VERP. Unfortunately the System.Net.Mail.MailMessage object does not allow me to precisely set the From: and Sender: headers of my email - it forces the values so that the resulting email contains the phrase 'on behalf of', and it does not give me fine control over the relevant MIME headers. I therefore plan to write MIME email messages manually, directly to the pickup directory, so that I can control the From and Sender headers independently. My dev box is a Vista machine and therefore does not have an SMTP server. I would like to configure the dev box so that I have an SMTP server running on it. I can then turn off the SMTP server, write messages to the pickup directory, then turn the SMTP server back on and see how the individual emails I have written behave (some delivered, some bounced to a bounce handler on a different email domain, as dictated by the Sender header). Two questions:

    1. Can anyone recommend an SMTP server that will monitor a pickup directory?
    2. If I set the From: header to the address I want the recipient to see and the Sender: header to my VERP bounce address, the recipient should see the email as having come from the From address (with no reference to the bounce address), while a bounce NDR is sent to the Sender address - is that right?

    It's a real pain to have to do this, but I can't see any way of using System.Net.Mail.MailMessage without it messing up my headers.
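
    For the manual route, this is roughly what I have in mind - a minimal sketch that writes a raw message straight into a pickup directory (the directory, the addresses and the header set are placeholders for illustration, not tested code):

        using System;
        using System.IO;
        using System.Text;

        class VerpPickupWriter
        {
            // Writes a minimal plain-text message into an SMTP pickup directory,
            // with From: and Sender: set independently.
            public static void WriteMessage(string pickupDir, string fromHeader, string senderHeader,
                                            string to, string subject, string body)
            {
                var sb = new StringBuilder();
                sb.AppendLine("From: " + fromHeader);     // the address the recipient sees
                sb.AppendLine("Sender: " + senderHeader); // the VERP bounce address
                sb.AppendLine("To: " + to);
                sb.AppendLine("Subject: " + subject);
                sb.AppendLine("MIME-Version: 1.0");
                sb.AppendLine("Content-Type: text/plain; charset=us-ascii");
                sb.AppendLine();                          // blank line separates headers from body
                sb.AppendLine(body);

                // .eml is the extension pickup-directory SMTP services generally look for.
                string fileName = Path.Combine(pickupDir, Guid.NewGuid().ToString("N") + ".eml");
                File.WriteAllText(fileName, sb.ToString(), Encoding.ASCII);
            }
        }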

    Read the article

  • The HTTP verb POST used to access path '/Documents/TestNote/Documents/AddNote' is not allowed.

    - by priya4
    I have two user controls on an .aspx page, and one of the user controls has a text area for notes. I am trying to use JSON so that when the user clicks the Add Note button the page does not reload. Below is my JavaScript, but the request fails with this error: The HTTP verb POST used to access path '/Documents/TestNote/Documents/AddNote' is not allowed.

        <script type="text/javascript">
            $(document).ready(function() {
                $("#btnAddNote").click(function() {
                    alert("knock knock");
                    var gnote = getNotes();
                    //var notes = $("#txtNote").val();
                    if (gnote == null) {
                        alert("Note is null");
                        return;
                    }
                    $.post("Documents/AddNote", gnote, function(data) {
                        var msg = data.Msg;
                        $("#resultMsg").html(msg);
                    });
                });
            });

            function getNotes() {
                alert("I am in getNotes function");
                var notes = $("#txtNote").val();
                if (notes == "")
                    alert("notes is empty");
                return (notes == "") ? null : { Note: notes };
            }
        </script>

    My controller:

        [HttpPost]
        public ActionResult AddNote(AdNote note)
        {
            string msg = string.Format("Note {0} added", note.Note);
            return Json(new AdNote { Note = msg });
        }

    Read the article

  • Avoiding GC thrashing with WSE 3.0 MTOM service

    - by Leon Breedt
    For historical reasons, I have some WSE 3.0 web services that I cannot upgrade to WCF on the server side yet (it is also a substantial amount of work to do so). These web services are being used for file transfers from client to server, using MTOM encoding. This also cannot be changed in the short term, for reasons of compatibility. Secondly, they are being called from both Java and .NET and therefore need to be cross-platform, hence MTOM. How it works is that an "upload" WebMethod is called by the client, sending up one chunk of data at a time, since the files being transferred could potentially be gigabytes in size. However, since I cannot control parts of the stack before the WebMethod is invoked, I cannot control the memory usage patterns of the web service. The problem I am running into is that for file sizes from about 50MB onwards, performance is absolutely killed by GC: it appears that WSE 3.0 buffers each chunk received from the client in a new byte[] array, and by the time we've done 50MB we're spending 20-30% of our time in GC. I've played with various chunk sizes, from 16k to 2MB, with no great difference in the results. Smaller chunks are killed by the latency of the round-trips, and larger chunks just postpone the slowdown until the GC kicks in. Any bright ideas on cutting down on the garbage created by WSE? Can I plug into the pipeline somehow and jury-rig something that has access to the client's request stream and streams it to the WebMethod? I'm aware that it is possible to "stream" responses to the client using WSE (albeit very ugly), but this problem is with requests from the client.

    Read the article

  • WPF validation red border doesn't show If UserControl collapsed first

    - by Creepy Gnome
    There seems to be a bug with WPF in 3.5, and I was hoping someone may have found a workaround. Basically, if you have a custom UserControl that contains a TextBox, and the control sits in a Window but is initialized to Collapsed by default (in the XAML or code-behind), then if it is already failing validation when you make the control visible, the red border will not show until validation fails again while the control is visible. This works correctly when the visibility is set to Hidden, just not when it is Collapsed. I am already overriding the ErrorTemplate with a style to work around the adorner issue where the red border stays visible when you collapse the control. Below is my full style for the TextBox. If there are any changes or additions that would make it work correctly with collapsed controls, that would be great.

        <Style TargetType="TextBox">
            <Setter Property="Margin" Value="3" />
            <Setter Property="Validation.ErrorTemplate">
                <Setter.Value>
                    <ControlTemplate>
                        <ControlTemplate.Resources>
                            <BooleanToVisibilityConverter x:Key="converter" />
                        </ControlTemplate.Resources>
                        <DockPanel LastChildFill="True">
                            <Border BorderThickness="2" BorderBrush="Red"
                                    Visibility="{Binding ElementName=placeholder, Mode=OneWay,
                                                 Path=AdornedElement.IsVisible,
                                                 Converter={StaticResource converter}}">
                                <AdornedElementPlaceholder x:Name="placeholder" />
                            </Border>
                        </DockPanel>
                    </ControlTemplate>
                </Setter.Value>
            </Setter>
            <Style.Triggers>
                <Trigger Property="Validation.HasError" Value="true">
                    <Setter Property="ToolTip"
                            Value="{Binding RelativeSource={RelativeSource Self},
                                    Path=(Validation.Errors)[0].ErrorContent}" />
                </Trigger>
            </Style.Triggers>
        </Style>

    Read the article

  • Inserting a ContentControl after another ContentControl

    - by Markus Roth
    In our VSTO Word 2010 add-in, we are trying to insert a rich text ContentControl after a given other ContentControl. We have tried this:

        public ContentControl AddContentControl(WdContentControlType type, int position)
        {
            Paragraph paragraphBefore = null;
            if (position == 0)
            {
                if (WordDocument.Paragraphs.Count == 0)
                {
                    WordDocument.Paragraphs.Add();
                }
                paragraphBefore = WordDocument.Paragraphs.First;
            }
            else
            {
                paragraphBefore = Controls.ElementAt(position - 1).Range.Paragraphs.Last;
            }

            object start = paragraphBefore.Range.End;
            object end = paragraphBefore.Range.End + 1;
            paragraphBefore.Range.InsertParagraphAfter();

            Range rangeToUse = WordDocument.Range(ref start, ref end);
            ContentControl newControl = WordDocument.ContentControls.Add(type, rangeToUse);
            Controls.Insert(position, newControl);
            OnNewContentControl(newControl, position);
            return newControl;
        }

    This works fine, unless the control before the one we want to insert ends with an empty paragraph. In that case, the new ContentControl is inserted inside that last control. How can we avoid this?

    Read the article

  • Are there scenarios where the ViewModel needs to invoke methods on the View w.r.t. MVVM in WPF?

    - by Gishu
    As per the pattern, the ViewModel exposes properties (with change notification) and commands (to notify the VM of user actions) that the View binds to. The only communication that flows from the VM to the View is the property change notifications (so that the View can refresh itself with updated data). In the MVP or PresentationModel form of the pattern (if I'm not mistaken), the View implements a plain vanilla interface (consisting of methods, properties and/or events). With MVVM, it feels as though methods on the IView have been outlawed (along with IView itself). One scenario I could think of is setting the focus to a certain control in the View: when the user does Action X, the focus should immediately be set to Field Y. In MVP, I'd write this as IView.ActivateField(NameConstant), which the Presenter or PM would invoke. In MVVM, this seems to be a fringe case that needs a workaround / a little bit of code-behind: the VM implements an ActiveField property, which it sets to NameConstant; the View picks up the change notification event and, in a code-behind event handler, activates the Name control. Is the above just an exception to the norm? Or are there other such scenarios where the VM needs to invoke a method on the View?
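
    For the focus example, the workaround I am thinking of looks roughly like the attached behavior below - a sketch only, and the names (FocusBehavior, IsFocused, IsNameFieldActive) are made up for illustration:

        using System.Windows;

        // Attached behavior: when the bound value becomes true, focus the element.
        public static class FocusBehavior
        {
            public static readonly DependencyProperty IsFocusedProperty =
                DependencyProperty.RegisterAttached(
                    "IsFocused", typeof(bool), typeof(FocusBehavior),
                    new PropertyMetadata(false, OnIsFocusedChanged));

            public static bool GetIsFocused(DependencyObject obj)
            {
                return (bool)obj.GetValue(IsFocusedProperty);
            }

            public static void SetIsFocused(DependencyObject obj, bool value)
            {
                obj.SetValue(IsFocusedProperty, value);
            }

            private static void OnIsFocusedChanged(DependencyObject d, DependencyPropertyChangedEventArgs e)
            {
                UIElement element = d as UIElement;
                if (element != null && (bool)e.NewValue)
                {
                    element.Focus(); // the only "view method" call, kept out of the VM
                }
            }
        }

        // XAML usage, binding to a bool property on the VM, roughly:
        //   <TextBox local:FocusBehavior.IsFocused="{Binding IsNameFieldActive}" />

    That keeps the VM limited to raising a property change, with the Focus() call living in a reusable view-side helper instead of per-window code-behind.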

    Read the article

  • ASP.NET RadioButton messing with the name (groupname)

    - by Hojou
    I have a templated control (a Repeater) listing some text and other markup. Each item has a RadioButton associated with it, making it possible for the user to select ONE of the items created by the Repeater. The Repeater renders each RadioButton with an id and name generated by the default ASP.NET naming convention, which makes every RadioButton its own 'group'. This means all the RadioButtons are independent of each other, which unfortunately means I can select all of them at the same time. The RadioButton has the clever 'GroupName' attribute, used to set a common name so the buttons get grouped together and become mutually exclusive (so I can only select one at a time). The problem is that this doesn't work: the Repeater makes sure the id, and thus the name (which controls the grouping), are different for each item. Since I use a Repeater (it could have been a ListView or any other templated data-bound control), I can't use a RadioButtonList. So where does that leave me? I know I've had this problem before and solved it, and almost every ASP.NET programmer must have hit it too, so why can't I google a solid solution to the problem? I came across solutions that enforce the grouping with JavaScript (ugly!), or even handle the radio buttons as non-server controls, forcing me to read their state with Request.Form[name]. I also tried overriding the name attribute in the PreRender event, but the owning page and master page override this name again to reflect the full id/name, so I end up with the same wrong result. If you have no better solution than the ones I posted, you are still very welcome to post your thoughts - at least I'll know that my friend 'Jack' is right about how messed up ASP.NET is sometimes ;)

    Read the article

  • ASP.NET configure data source is not returning anything?

    - by Greg McNulty
    I'm selecting table data for the current user:

        SELECT [ConfidenceLevel], [LoveLevel], [HappinessLevel]
        FROM [UserData]
        WHERE ([UserProfileID] = @UserProfileID)

    I have set a control to the unique user ID, and it is getting the correct value.

    HTML:

        <asp:Label ID="userID" runat="server" Text="Labeluser"></asp:Label>

    C#:

        userID.Text = Membership.GetUser().ProviderUserKey.ToString();

    In the Configure Data Source window I then bind the WHERE clause parameter to that control (UserProfileID = Control, ControlID = userID, which fills in the Text property for me). I compile and run, but nothing shows up where the table should be. Any suggestions? Here is the code it has created:

        <asp:GridView ID="GridView1" runat="server" AutoGenerateColumns="False"
            DataSourceID="SqlDataSource1">
            <Columns>
                <asp:BoundField DataField="ConfidenceLevel" HeaderText="ConfidenceLevel"
                    SortExpression="ConfidenceLevel" />
                <asp:BoundField DataField="LoveLevel" HeaderText="LoveLevel"
                    SortExpression="LoveLevel" />
                <asp:BoundField DataField="HappinessLevel" HeaderText="HappinessLevel"
                    SortExpression="HappinessLevel" />
            </Columns>
        </asp:GridView>

        <asp:SqlDataSource ID="SqlDataSource1" runat="server"
            ConnectionString="<%$ ConnectionStrings:ConnectionStringToDB %>"
            SelectCommand="SELECT [ConfidenceLevel], [LoveLevel], [HappinessLevel] FROM [UserData] WHERE ([UserProfileID] = @UserProfileID)">
            <SelectParameters>
                <asp:ControlParameter ControlID="userID" Name="UserProfileID"
                    PropertyName="Text" Type="Object" />
            </SelectParameters>
        </asp:SqlDataSource>

    Read the article

  • How do I disable the network connection from .Net without needing admin privileges?

    - by Brad Mathews
    I may be SOL on this, but I thought I would throw it out there for possible solutions. I am writing a computer access control service to help me control my kids' computer use (I plan on open-sourcing it when I have it working). It is written in VB.NET and needs to work on XP through Windows 7. I am running into all sorts of security and desktop-access issues on Windows 7. The service needs to run as admin to execute the NetSh command that disables the network, but I cannot interact with the desktop from the service, so I use IPC to a UI to handle the other stuff - yet I still cannot detect from the service whether the desktop is locked. Argghh! I could get it all working from a hidden Windows Forms app if I could just crack the one piece that needs admin permissions: disabling the network. It does no good if a kid logs on, gets the prompt asking whether the program should run as administrator, and says no. Also, Windows 7 will not start a program set to run as admin from HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\Windows\CurrentVersion\Run. Anyone know how to get this working? Or have an outside-the-box solution? Thanks! Brad

    Read the article

  • Make winform run away from the mouse.

    - by JACK IN THE CRACK
    Okay, so I'm trying to make a little gag program that will "run away" from the mouse. To get the mouse coordinates for the whole screen, and not just relative to the form, I had to create a little helper:

        static class MouseHelper
        {
            [DllImport("user32.dll")]
            [return: MarshalAs(UnmanagedType.Bool)]
            internal static extern bool GetCursorPos(ref Point pt);

            public static Point GetPosition()
            {
                Point w32Mouse = new Point();
                GetCursorPos(ref w32Mouse);
                return w32Mouse;
            }
        }

    I thought I was going to use the MouseMove event, but that doesn't fire for movement outside the form either, so instead I have an auto-enabled timer on a 10ms loop called timerMouseMove:

        public partial class Form1 : Form
        {
            public Form1()
            {
                InitializeComponent();
            }

            private bool CollisionCheck()
            {
                Point win32Mouse = MouseHelper.GetPosition();
                if (win32Mouse.X <= Location.X || win32Mouse.X >= (Location.X + Width))
                    return false;
                if (win32Mouse.Y <= Location.Y || win32Mouse.Y >= (Location.Y + Height))
                    return false;
                return true;
            }

            private void timerMouseMove_Tick(object sender, EventArgs e)
            {
                if (CollisionCheck())
                    Location = new Point(Location.X + 1, Location.Y + 1);
            }
        }

    So this works out nicely - at least the collision checking works. But now, how should I figure out which side of the form the mouse has collided with, so that I can move the form in the opposite direction? Any help?
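
    For what it's worth, here is the kind of "move away" logic I am experimenting with in the Tick handler - a rough sketch that simply steps directly away from the cursor rather than working out which side was hit (the 10-pixel step and the clamping to the working area are arbitrary choices):

        // Inside Form1, replacing the Tick handler above.
        private void timerMouseMove_Tick(object sender, EventArgs e)
        {
            if (!CollisionCheck())
                return;

            Point mouse = MouseHelper.GetPosition();
            Point center = new Point(Location.X + Width / 2, Location.Y + Height / 2);

            // Direction from the mouse towards the form's centre.
            int dx = center.X - mouse.X;
            int dy = center.Y - mouse.Y;
            if (dx == 0 && dy == 0)
                dx = 1; // avoid a zero-length vector

            // Normalise roughly and take a fixed step away from the cursor.
            double length = Math.Sqrt(dx * dx + dy * dy);
            int stepX = (int)(10 * dx / length);
            int stepY = (int)(10 * dy / length);

            // Keep the window inside the working area of its screen.
            Rectangle screen = Screen.FromControl(this).WorkingArea;
            int newX = Math.Max(screen.Left, Math.Min(screen.Right - Width, Location.X + stepX));
            int newY = Math.Max(screen.Top, Math.Min(screen.Bottom - Height, Location.Y + stepY));

            Location = new Point(newX, newY);
        }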

    Read the article

  • “Could not find type” error loading a form in the Designer

    - by Vaccano
    This is the exact same question as this one: “Could not find type” error loading a form in the Designer. Before anyone goes closing my question, please read that one; you will realize that it did not get a real answer. I hope to get a full answer (rather than a workaround) from this question. When I create a class that descends from Control and uses generics, that class fails to load in the designer. Here is an example:

        class OwnerDrawnListBox<T> : System.Windows.Forms.Control
        {
            private readonly List<T> _items;

            // Other list box private stuff here

            public OwnerDrawnListBox()
            {
                _items = new List<T>();
            }

            // More list box code
        }

    I then use this in my designer file:

        private OwnerDrawnListBox<Bag> lstAvailable;

        private void InitializeComponent()
        {
            // Used to be System.Windows.Forms.ListBox();
            this.lstAvailable = new ARUP.ScanTrack.Mobile.OwnerDrawnListBox<Bag>();
            // Other items
        }

    If the generic class is subclassed to a non-generic one (i.e. if I make class BagOwnerDrawnListBox : OwnerDrawnListBox<Bag>), then the referenced question says it works fine. What I want to know is: is there a way to "fix" this so that the generic control is accepted by the designer? Side note: I am using the Compact Framework.

    Read the article

  • Real benefits of tcp TIME-WAIT and implications in production environment

    - by user64204
    SOME THEORY: I've been doing some reading on TCP TIME-WAIT (here and there), and what I read is that it's a value set to 2 x MSL (maximum segment lifetime) which keeps a connection in the "connection table" for a while, to guarantee that "before you're allowed to create a connection with the same tuple, all the packets belonging to previous incarnations of that tuple will be dead". Since segments received (apart from SYN, under specific circumstances) while a connection is either in TIME-WAIT or no longer existing would be discarded anyway, why not close the connection right away?

    Q1: Is it because there is less processing involved in dealing with segments from old connections, and less processing to create a new connection on the same tuple, when in TIME-WAIT (i.e. are there performance benefits)?

    If the above explanation doesn't stand, the only reason I see TIME-WAIT being useful is this: if a client sends a SYN for a new connection before it has sent the remaining segments of an old connection on the same tuple, the receiver would re-open the connection, then get stale segments, and would have to terminate it.

    Q2: Is this analysis correct?
    Q3: Are there other benefits to using TIME-WAIT?

    SOME PRACTICE: I've been looking at the munin graphs on a production server that I administer. One of them shows more connections in TIME-WAIT than ESTABLISHED - around twice as many most of the time, on some occasions four times as many.

    Q4: Does this have an impact on performance?
    Q5: If so, is it wise/recommended to reduce the TIME-WAIT value (and to what)?
    Q6: Is this ratio of TIME-WAIT / ESTABLISHED connections normal? Could it be related to malicious connection attempts?

    Read the article

  • How to implement menuitems that depend on current selection in WPF MVVM explorer-like application

    - by Doug
    I am new to WPF and MVVM, and I am working on an application utilizing both. The application is similar to Windows Explorer, so consider an app with a main window with a menu (ShellViewModel), a tree control (TreeViewModel), and a list control (ListViewModel). I want to implement menu items such as Edit - Delete, which deletes the currently selected item (which may be in the tree or in the list). I am using Josh Smith's RelayCommand, and binding the menu item to a DeleteItemCommand in the ShellViewModel is easy. It seems, however, that implementing the DeleteItemCommand requires some fairly tight coupling between the ShellViewModel and the two child view models (TreeViewModel and ListViewModel) to keep track of the focus/selection and direct the action to the proper child for implementation. That seems wrong to me, and makes me think I'm missing something. Writing a focus manager and/or selection manager to do the bookkeeping does not seem too hard, and could be done without coupling the classes together. The windowing system is already keeping track of which view has the focus, and it seems like I'd be duplicating code. What I'm not sure about is how I would route the command from the ShellViewModel down to either the ListViewModel or the TreeViewModel to do the actual work without making a mess of the code; the sketch below shows the kind of decoupling I am picturing. Some day the application will be extended to include more than two children, and I want the shell to be as ignorant of the children as possible to make that extension as painless as possible. Looking at some sample WPF/MVVM applications (Karl Shifflett's CipherText, Josh Smith's MVVM demo, etc.), I haven't seen any code that does this (or I didn't understand it). Regardless of whether you think my approach is way off base or I'm just missing a small nuance, please share your thoughts and help me get back on track. Thanks!
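
    To make that concrete, this is the rough shape I have in mind - the interface name is my own invention, and RelayCommand below is a minimal stand-in for Josh Smith's version just so the sketch is self-contained:

        using System;
        using System.Windows.Input;

        // Contract each child view model (tree, list, future children) implements.
        public interface ISelectionWorkspace
        {
            bool CanDeleteSelected { get; }
            void DeleteSelected();
        }

        public class ShellViewModel
        {
            // Set by whichever child last received focus/selection,
            // e.g. through a binding or a lightweight event.
            public ISelectionWorkspace ActiveWorkspace { get; set; }

            public ICommand DeleteCommand { get; private set; }

            public ShellViewModel()
            {
                DeleteCommand = new RelayCommand(
                    param => ActiveWorkspace.DeleteSelected(),
                    param => ActiveWorkspace != null && ActiveWorkspace.CanDeleteSelected);
            }
        }

        // Minimal RelayCommand stand-in.
        public class RelayCommand : ICommand
        {
            private readonly Action<object> _execute;
            private readonly Predicate<object> _canExecute;

            public RelayCommand(Action<object> execute, Predicate<object> canExecute)
            {
                _execute = execute;
                _canExecute = canExecute;
            }

            public bool CanExecute(object parameter)
            {
                return _canExecute == null || _canExecute(parameter);
            }

            public void Execute(object parameter)
            {
                _execute(parameter);
            }

            public event EventHandler CanExecuteChanged
            {
                add { CommandManager.RequerySuggested += value; }
                remove { CommandManager.RequerySuggested -= value; }
            }
        }

    The shell never needs to know whether the active child is the tree, the list, or something added later - it only needs something (a small focus/selection manager, perhaps) to set ActiveWorkspace.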

    Read the article

  • CF - How to get mouse position when a ContextMenu pops up?

    - by no9
    I have a problem I cannot solve. In my view (which shows a map) I created a ContextMenu. When the context menu is invoked, I need to get the position where the user clicked on the map. Here is my problem: in the view I already have a MouseDown handler that gives me the coordinates of the click.

        private void MapView_MouseDown(object sender, MouseEventArgs e)
        {
            this.lastMouseDownX = e.X;
            this.lastMouseDownY = e.Y;
        }

    When the ContextMenu is invoked I need the same data, but the problem is that the ContextMenu's Popup event only has EventArgs, which don't carry the coordinates I need. Furthermore, the ContextMenu is invoked when the user presses and holds the mouse for a second, and when it is invoked the code does not enter the MouseDown handler at all - it goes straight into the Popup event of my context menu. I tried putting this in my Popup event, but the coordinates are not OK; the Y coordinate is way off the chart:

        private void servicesContextMenu_Popup(object sender, EventArgs e)
        {
            this.lastMouseUpX = Control.MousePosition.X;
            this.lastMouseUpX = Control.MousePosition.Y;
        }

    I also tried the approach below, but with no success - the problem remains that when the ContextMenu is invoked the code does not fire the MouseDown event. http://www.mofeel.net/58-microsoft-public-dotnet-framework-compactframework/19285.aspx Help?

    Read the article

  • Convert Form to UserControl

    - by Joe
    I have a 3rd-party code library that I'm using; part of this library is a WinForms application for editing the configuration files used by the library. I would like to embed their configuration editor app into my application. I have the source code of their library, and the configuration editor is (as far as I can tell) a straightforward WinForms app using standard controls. I'm trying to convert the app's main form into a UserControl so that I can host it inside my application, which is WPF (WPF's WindowsFormsHost won't host a Form object - I get an exception). I changed the form class to inherit from UserControl instead of Form and fixed all the compiler errors (there weren't many, just property initializations that don't exist on UserControls), but my newly converted control is just blank: when I run my test app I don't see any of the controls that make up the original form, just an empty page. Any ideas? I really don't want to have to re-implement their app from scratch; that would suck. Edit: I forgot to mention that I'm testing this in a WinForms application, not WPF, to get the control working before trying to use it from WPF.
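
    One alternative I am considering before rewriting anything: hosting the unmodified Form as a child control by clearing its TopLevel flag, and only then wrapping that in a WindowsFormsHost. A rough sketch (ThirdPartyConfigForm stands in for the library's actual form class):

        using System.Windows.Forms;

        public class ConfigEditorHost : UserControl
        {
            public ConfigEditorHost()
            {
                // ThirdPartyConfigForm is a placeholder for the library's editor form.
                Form editor = new ThirdPartyConfigForm();

                editor.TopLevel = false;                       // allow it to live inside another control
                editor.FormBorderStyle = FormBorderStyle.None; // drop the window chrome
                editor.Dock = DockStyle.Fill;

                Controls.Add(editor);
                editor.Show();                                 // child forms stay hidden until shown
            }
        }

    Whether that behaves well with this particular editor is an open question, but it would avoid touching the 3rd-party code at all.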

    Read the article

  • DAS vs SAN storage for serving 2 to 4 nodes

    - by Luke404
    We currently have 4 Linux nodes with local storage, arranged in two active/passive pairs with storage mirrored using DRBD, running virtual machines (actually using the Xen hypervisor) for typical hosting workloads (mail, web, a couple of VPS, etc.). We're approaching the (presumed) maximum IOPS of those servers, and we're planning to migrate to an external storage solution with two active nodes, with capacity for up to four active nodes. Since we're an all-Dell shop, I've done some research and found that the MD3200 / MD3200i products should be the ones we're looking for. We are pretty sure we won't be attaching more than 4 hosts to a single storage array, and I'm wondering if there is any clear advantage of one over the other. In theory I should be able to attach 4 SAS hosts to a single MD3200 (single links on a single-controller MD3200, or dual redundant SAS links from each host to a dual-controller MD3200), or 4 iSCSI hosts to a single MD3200i (directly on its 4 GigE ports without any switch, again with dual links for the dual-controller option). Both setups should let us implement live VM migration, since all hosts can access all the LUNs at the same time, and also run a shared filesystem like GFS2 or OCFS2. Also, both setups should allow full redundancy of the whole system (assuming dual controllers in the storage). One difference I can see is that the DAS solution is actually limited to 4 hosts, while the iSCSI one should be able to grow to more hosts (by adding two GigE switches to the mix). One point for the iSCSI solution is that it would allow us to start out with our current nodes and upgrade them at a later time (we can't add other SAS controllers, but they already have 4 GigE ports each). With the right (iSCSI|SAS) controllers I should be able to connect diskless nodes and boot them off the external storage, which I think is a good thing (getting rid of any local storage). On the other hand, I would have thought the SAS one to be cheaper, but it seems like an MD3200 actually costs a little less than an MD3200i (?). (Please note: I've used Dell gear in my examples since that's what we're looking at, but I assume the same goes for other vendors.) I would like to know if my assumptions above are correct, and if I'm missing any important difference between the two setups.

    Read the article

  • Pass ng-model and placeholder value into a directive

    - by Zen
    I have a segment of code that needs to be reused a lot, therefore I want to create a directive for it:

        <div class="btn-group">
            <div class="input-group">
                <div class="has-feedback">
                    <input type="text" class="form-control" placeholder="BLAH BLAH" ng-model="model">
                    <span class="times form-control-feedback" ng-click="model=''" ng-show="model.length > 0"></span>
                </div>
            </div>
        </div>

    I want to use this code as the template of the directive, and use the directive like this to replace the old HTML, with ng-model and placeholder passed in as variables:

        <div search-field ng-model="model" placeholder="STRING"></div>

        angular.module('searchField', [])
            .directive('searchField', [function () {
                return {
                    scope: {
                        placeholder: '@',
                        ngModel: '='
                    },
                    templateUrl: 'Partials/_SearchInputGroup.html'
                };
            }]);

    Is this the right way of doing it?

    Read the article

  • Ignore LD_LIBRARY_PATH and stick with library given through -rpath at link time

    - by roe
    I'm sitting in an environment which I have no real control over (it's not just me - basically, I can't change the environment or it won't work for anyone else); the only thing I can affect is how the binary is built. My problem is that the environment specifies an LD_LIBRARY_PATH containing a libstdc++ which is not compatible with the compiler being used. I tried compiling statically, but that doesn't seem possible for g++ (version 4.2.3; there seems to have been work done in this direction in later versions, -static-libstdc++ or something like that, but those are not available to me). Now I've arrived at using rpath to bake the absolute library path into the executable (which would work - all the machines it's supposed to run on are identical). Unfortunately it appears that LD_LIBRARY_PATH takes precedence over rpath (unsetting LD_LIBRARY_PATH confirmed that the library can then be found, but as stated above, LD_LIBRARY_PATH will be set for everyone and I cannot change that). Is there any way I can make rpath take precedence over LD_LIBRARY_PATH, or are there any other possible solutions to my problem? Note that I'm talking about dynamic linking at runtime; I am able to control the command line at compile and link time. Thanks.

    Read the article

  • Margin at the top in LaTeX

    - by Tim
    I am writing a thesis which is required to have 1.5 inches of margin on the left (binding) side and 1 inch on the other three sides. I just had my print-out checked at the binding office and I was told the margins are not okay; in particular the one at the top is not enough - a little less than 1 inch. I wonder which commands are responsible for the margins on the four sides, especially the one at the top? Here are links to the two files I believe control the layout and format of the thesis: jhu12.clo and thesis.cls. Or, instead of modifying the LaTeX files, is there some setting when printing the PDF file that controls these margins on the print-out? Thanks and regards! EDIT: The output of the \layout command shows "one inch + \voffset" for the top (item no. 2), but the staff at the binding office used a ruler to show that on the print-out it is less than 1 inch. How do I adjust the margins in LaTeX then? Thanks!
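
    In case it matters, this is the kind of preamble change I am considering - just a sketch using the geometry package, and whether that is compatible with jhu12.clo/thesis.cls is exactly what I am unsure about:

        % Force the required margins explicitly in the preamble.
        \usepackage[
          left=1.5in,   % binding side
          right=1in,
          top=1in,
          bottom=1in
        ]{geometry}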

    Read the article

  • How to cache an HTTP POST response?

    - by KARASZI István
    I would like to create a cacheable HTTP response for a POST request. My current implementation responds to the POST request with the following:

        HTTP/1.1 201 Created
        Expires: Sat, 03 Oct 2020 15:33:00 GMT
        Cache-Control: private,max-age=315360000,no-transform
        Content-Type: application/x-www-form-urlencoded; charset=UTF-8
        Content-Length: 9
        ETag: 2120507660800737950
        Last-Modified: Wed, 06 Oct 2010 15:33:00 GMT

        .........

    But it looks like the browsers (Safari and Firefox tested) are not caching the response. The corresponding part of the HTTP RFC says: "Responses to this method are not cacheable, unless the response includes appropriate Cache-Control or Expires header fields. However, the 303 (See Other) response can be used to direct the user agent to retrieve a cacheable resource." So I think it should be cached. I know I could set a session variable, set a cookie and do a 303 redirect, but I want to cache the response of the POST request itself. Is there any way to do this? P.S.: I started out with a simple 200 OK, and that did not work either. Thanks,

    Read the article

  • Static block and instance block order in Java

    - by Rollerball
    Having read this question, In what order are the different parts of a class initialized when a class is loaded in the JVM?, and the related JLS sections, I would like to understand the order in more detail. For example, given a superclass Animal and a subclass Dog as follows:

        class Animal {
            static {
                System.out.println("This is Animal's static block speaking");
            }

            {
                System.out.println("This is Animal's instance block speaking");
            }
        }

        class Dog extends Animal {
            static {
                System.out.println("This is Dog's static block speaking");
            }

            {
                System.out.println("This is Dog's instance block speaking");
            }

            public static void main(String[] args) {
                Dog dog = new Dog();
            }
        }

    OK, before instantiating a class, its direct superclass needs to be initialized (therefore all of its static variables and static blocks need to be executed). So basically the question is: after initializing the static variables and static blocks of the superclass, why does control go down to the subclass for static variable initialization rather than finishing off the initialization of the superclass's instance members as well? The control flow goes like this:

    1. superclass (Animal): static variables and static blocks
    2. subclass (Dog): static variables and static blocks
    3. superclass (Animal): instance variables and instance blocks
    4. subclass (Dog): instance variables and instance blocks

    Why is it this way rather than:

    1. superclass: static members
    2. superclass: instance members
    3. subclass: static members
    4. subclass: instance members
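
    For reference, with the classes written as above, running Dog's main prints the messages in this order - the static blocks of the whole hierarchy run first, then the instance blocks from the top of the hierarchy down:

        This is Animal's static block speaking
        This is Dog's static block speaking
        This is Animal's instance block speaking
        This is Dog's instance block speaking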

    Read the article
