Search Results

Search found 16899 results on 676 pages for 'local'.


  • How to reduce the need for IISRESET for developing ASP.NET web app in IIS 5.1

    - by John Galt
    I have a web application project on my dev PC running WinXP and hence IIS 5.1. The changes I'm making to this site seem to "take effect" only after I do an IISRESET. That is, I make a source change, Rebuild the project and then Start without Debugging (or with debugging). The newly changed code is not "visible" or in effect unless I intervene with an IISRESET. BTW, the "Web" tab on the Properties display for the web app project is configured to use the Local IIS web server at project URL http://localhost/myVirtualDirectory ... but I've noticed the same issue when using the VStudio Dev Server (i.e. I have to stop it from the taskbar tray area in order to see my source changes take effect). Is this something I can change?

    EDIT UPDATE: Just wanting to clear this up if possible. Two answers diverge below and I'm not sure how to move forward. One states this is to be expected (a weakness of IIS 5.1, which in turn is the best WinXP can provide). Another states this is not expected behavior (and I tend to agree, since this is the first time I've encountered it on the same old WinXP dev platform I've had for a long time). I suspect it may be something "deep inside" the Visual Studio 2008 web app, which was upgraded to this new IDE from VStudio 2002 (ASP.NET 1.1). I've tried to add comments/questions down each answer path. Thanks.
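    A quick way to tell whether IIS has actually picked up a new build is to stamp a page with the write time of the assembly the worker process loaded. The sketch below is only a diagnostic idea in C#, not a fix; it assumes any page in the app can call it and render the result.

        // Diagnostic sketch: report which build of the app assembly is actually running.
        using System;
        using System.IO;
        using System.Reflection;

        public static class BuildStamp
        {
            public static string Current()
            {
                // Path of the assembly the ASP.NET worker process loaded
                // (for ASP.NET this is the shadow copy, which normally keeps the original timestamp).
                string path = Assembly.GetExecutingAssembly().Location;
                DateTime written = File.GetLastWriteTime(path);
                return string.Format("{0} built {1:yyyy-MM-dd HH:mm:ss}",
                                     Path.GetFileName(path), written);
            }
        }

    If the stamp doesn't change after a rebuild, the old AppDomain is still serving requests, which would match the IISRESET symptom.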

  • iOS Simulator is black on app execution

    - by Terryn
    I am running xcode 5.1.1 and have been trying to learn objective C / iOS development. Right now whenever I try to run my code on the emulator (I do not have an actual device atm) it comes up with a black screen. Code can be found here. Compiling and running gives me the following error: 2014-08-23 10:42:57.429 Calculator[1862:60b] *** Terminating app due to uncaught exception 'NSUnknownKeyException', reason: '[<XYZViewController 0xe436640> setValue:forUndefinedKey:]: this class is not key value coding-compliant for the key didgetPressed.' *** First throw call stack: ( 0 CoreFoundation 0x017ed1e4 __exceptionPreprocess + 180 1 libobjc.A.dylib 0x0156c8e5 objc_exception_throw + 44 2 CoreFoundation 0x0187cfe1 -[NSException raise] + 17 3 Foundation 0x0122cd9e -[NSObject(NSKeyValueCoding) setValue:forUndefinedKey:] + 282 4 Foundation 0x011991d7 _NSSetUsingKeyValueSetter + 88 5 Foundation 0x01198731 -[NSObject(NSKeyValueCoding) setValue:forKey:] + 267 6 Foundation 0x011fab0a -[NSObject(NSKeyValueCoding) setValue:forKeyPath:] + 412 7 UIKit 0x004e31f4 -[UIRuntimeOutletConnection connect] + 106 8 libobjc.A.dylib 0x0157e7de -[NSObject performSelector:] + 62 9 CoreFoundation 0x017e876a -[NSArray makeObjectsPerformSelector:] + 314 10 UIKit 0x004e1d4d -[UINib instantiateWithOwner:options:] + 1417 11 UIKit 0x0034a6f5 -[UIViewController _loadViewFromNibNamed:bundle:] + 280 12 UIKit 0x0034ae9d -[UIViewController loadView] + 302 13 UIKit 0x0034b0d3 -[UIViewController loadViewIfRequired] + 78 14 UIKit 0x0034b5d9 -[UIViewController view] + 35 15 UIKit 0x0026b267 -[UIWindow addRootViewControllerViewIfPossible] + 66 16 UIKit 0x0026b5ef -[UIWindow _setHidden:forced:] + 312 17 UIKit 0x0026b86b -[UIWindow _orderFrontWithoutMakingKey] + 49 18 UIKit 0x002763c8 -[UIWindow makeKeyAndVisible] + 65 19 UIKit 0x00226bc0 -[UIApplication _callInitializationDelegatesForURL:payload:suspended:] + 2097 20 UIKit 0x0022b667 -[UIApplication _runWithURL:payload:launchOrientation:statusBarStyle:statusBarHidden:] + 824 21 UIKit 0x0023ff92 -[UIApplication handleEvent:withNewEvent:] + 3517 22 UIKit 0x00240555 -[UIApplication sendEvent:] + 85 23 UIKit 0x0022d250 _UIApplicationHandleEvent + 683 24 GraphicsServices 0x037e2f02 _PurpleEventCallback + 776 25 GraphicsServices 0x037e2a0d PurpleEventCallback + 46 26 CoreFoundation 0x01768ca5 __CFRUNLOOP_IS_CALLING_OUT_TO_A_SOURCE1_PERFORM_FUNCTION__ + 53 27 CoreFoundation 0x017689db __CFRunLoopDoSource1 + 523 28 CoreFoundation 0x0179368c __CFRunLoopRun + 2156 29 CoreFoundation 0x017929d3 CFRunLoopRunSpecific + 467 30 CoreFoundation 0x017927eb CFRunLoopRunInMode + 123 31 UIKit 0x0022ad9c -[UIApplication _run] + 840 32 UIKit 0x0022cf9b UIApplicationMain + 1225 33 Calculator 0x00002c8d main + 141 34 libdyld.dylib 0x01e34701 start + 1 35 ??? 0x00000001 0x0 + 1 ) libc++abi.dylib: terminating with uncaught exception of type NSException (lldb) I have attempted the following things, and so far nothing has worked 1) checked the deployment info, it has the main interface set to main 2) double checked all break points to turn them off and disabled all of them through Debug-Disable Breakpoints 3) Reset Content and Settings on the iOS Simulator. 4) Checked the issue with the LLDB Debugger, however from what I have read the issue is no longer present with the 5.1.1 xcode. 5) checked the local host is still set to 127.0.0.1 Thanks! - Terryn

  • Loading/Displaying large amount of data on webpage.

    - by jb
    I have a webpage which contains a table for displaying a large amount of data (on average from 2,000 to 10,000 rows). This page takes a long time to load/render, which is understandable. The problem is that while the page is loading, the PC's memory usage skyrockets (500 MB on my test system is in use by iexplorer) and the whole PC grinds to a halt until it has finished, which can take a minute or two. IE hangs until it is complete, and switching to another running program is just as bad. I need to fix this, and ideally I want to accomplish two things:

    1) Load individual parts of the page separately, so the page can render initially without the large data table. A loading div will be placed there until it is ready.
    2) Don't use up so much memory or local resources while rendering, so at least the user can work in a different tab/application at the same time.

    How would I go about doing both or either of these? I'm an applications programmer by trade, so I am still a little fuzzy on the things I can do in a web environment. Cheers all.
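    For point 1, the usual pattern is to let the page render immediately and pull the table in afterwards, one page of rows at a time, via an AJAX call. The question doesn't say what the backend is, so the sketch below assumes C#/ASP.NET purely for illustration; RowPager and GetPage are made-up names.

        // Minimal server-side paging sketch: the loading div requests one slice at a time.
        using System.Collections.Generic;
        using System.Linq;

        public class RowPager<T>
        {
            private readonly IList<T> _rows;

            public RowPager(IList<T> rows)
            {
                _rows = rows;   // in practice you would page at the database, not in memory
            }

            // Return a single slice so the browser never renders all 10,000 rows at once.
            public IList<T> GetPage(int pageIndex, int pageSize)
            {
                return _rows.Skip(pageIndex * pageSize).Take(pageSize).ToList();
            }
        }

    The client then requests page 0, renders it, and fetches further pages on demand, which also addresses point 2 because only a few hundred rows are ever in the DOM at once.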

  • How do people handle foreign keys on clients when synchronizing to master db

    - by excsm
    Hi, I'm writing an application with offline support, i.e. browser/mobile clients sync commands to the master db every so often. I'm using UUIDs on both the client and server side. When syncing up to the server, the server will return a map of local UUIDs (luids) to server UUIDs (suids). Upon receiving this map, clients update their records' suid attributes with the appropriate values. However, say a client record, e.g. a todo, has an attribute 'list_id' which holds the foreign key to the todo's list record. I use luids in foreign keys on clients. However, when that attribute is sent over to the server, it would dirty the server db with luids rather than the suids the server is using. My current solution is for the master server to keep a record of the mappings of luids to suids (per client id) and, for each foreign key in a command, look up the suid for that particular client and use the suid instead. I'm wondering whether others have come across this problem and, if so, how they have solved it? Is there a more efficient, simpler way? I took a look at the question "Synchronizing one or more databases with a master database - Foreign keys (5)" and someone seemed to suggest my current solution as one option, composite keys using suids and autoincrementing sequences as another, and a third option using negative ids for client ids and then updating all negative ids with the suids. Both of these other options seem like a lot more work. Thanks, Saimon
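    The question is platform-neutral, so the following is only a sketch of the client-side half of the mapping step, written in C# for illustration; the record and field names (TodoRecord, ListId) are made up. The idea is simply that when the luid-to-suid map comes back, the client rewrites foreign keys as well as primary keys before sending further commands, so luids never reach the server a second time.

        using System;
        using System.Collections.Generic;

        public class TodoRecord
        {
            public Guid Id;       // may still be a local uuid (luid)
            public Guid ListId;   // foreign key to the list record; may also be a luid
        }

        public static class UuidRemapper
        {
            // Replace every luid we know about, wherever it appears, with its suid.
            public static void Apply(IEnumerable<TodoRecord> records,
                                     IDictionary<Guid, Guid> luidToSuid)
            {
                foreach (TodoRecord record in records)
                {
                    Guid suid;
                    if (luidToSuid.TryGetValue(record.Id, out suid)) record.Id = suid;
                    if (luidToSuid.TryGetValue(record.ListId, out suid)) record.ListId = suid;
                }
            }
        }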

  • CVS update doesn't fetch most recent commit

    - by mizipzor
    I just committed changes to a file (bringing it to revision 1.3). I then updated the file and to my surprise I got back revision 1.2 (how the file looked before my commit). Checking the history of the file shows that there is a revision 1.3 (most recent) and a 1.2 before that (which is identical to my local copy). I've tried removing the file and updating the folder, causing the file to be downloaded again. I've also tried fetching a clean copy of the file off HEAD. Clearing the cache and fetching HEAD doesn't work, but forcing an update to revision 1.3 explicitly does. However, doing a normal update after that causes it to go back to revision 1.2. This is on a Windows XP machine; the server is also a Windows box. I'm using TortoiseCVS. Does anyone know what could be causing this? (If you don't know how to fix it, I will award bonus points to anyone who can tell me why CVS broke and, hopefully, how horrible it will be to fix. I want to add it to my list so that maybe some day I can convince my colleagues to finally give it up in favor of a more modern VCS.)

  • Sharing a COM port over TCP

    - by guinness
    What would be a simple design pattern for sharing a COM port over TCP to multiple clients? For example, a local GPS device that could transmit co-ordinates to remote hosts in real time. So I need a program that would open the serial port and accept multiple TCP connections, like:

        class Program
        {
            public static void Main(string[] args)
            {
                SerialPort sp = new SerialPort("COM4", 19200, Parity.None, 8, StopBits.One);
                Socket srv = new Socket(AddressFamily.InterNetwork, SocketType.Stream, ProtocolType.Tcp);
                srv.Bind(new IPEndPoint(IPAddress.Any, 8000));
                srv.Listen(20);
                while (true)
                {
                    Socket soc = srv.Accept();
                    new Connection(soc);
                }
            }
        }

    I would then need a class to handle the communication between connected clients, allowing them all to see the data and keeping it synchronized so client commands are received in sequence:

        class Connection
        {
            static object lck = new object();
            static List<Connection> cons = new List<Connection>();

            public Socket socket;
            public StreamReader reader;
            public StreamWriter writer;

            public Connection(Socket soc)
            {
                this.socket = soc;
                this.reader = new StreamReader(new NetworkStream(soc, false));
                this.writer = new StreamWriter(new NetworkStream(soc, true));
                new Thread(ClientLoop).Start();
            }

            void ClientLoop()
            {
                lock (lck)
                {
                    cons.Add(this);
                }
                while (true)
                {
                    lock (lck)
                    {
                        string line = reader.ReadLine();
                        if (String.IsNullOrEmpty(line)) break;
                        foreach (Connection con in cons)
                            con.writer.WriteLine(line);
                    }
                }
                lock (lck)
                {
                    cons.Remove(this);
                    socket.Close();
                }
            }
        }

    The problem I'm struggling to resolve is how to facilitate communication between the SerialPort instance and the threads. I'm not certain that the above code is the best way forward, so does anybody have another solution (the simpler the better)?
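    One possible way to bridge the two halves, shown only as a sketch: let the SerialPort raise DataReceived and hand each line to a broadcast callback that writes it to every client's StreamWriter under the same lock the Connection class already uses. The helper name (SerialBroadcaster) and the callback wiring are assumptions, not part of the original design.

        using System;
        using System.IO.Ports;

        static class SerialBroadcaster
        {
            public static void Start(SerialPort port, Action<string> broadcast)
            {
                port.NewLine = "\r\n";               // NMEA sentences from a GPS end in CRLF
                port.DataReceived += (sender, e) =>
                {
                    try
                    {
                        broadcast(port.ReadLine());  // one GPS sentence per callback
                    }
                    catch (TimeoutException)
                    {
                        // ignore a partial read; the next DataReceived event will complete it
                    }
                };
                port.Open();
            }
        }

    Connection would then expose a static Broadcast(string) method that takes the lock, walks cons and writes the line, so serial data and client commands are serialized through the same lock.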

  • Why is Func<T> ambiguous with Func<IEnumerable<T>>?

    - by Matt Hamilton
    This one's got me flummoxed, so I thought I'd ask here in the hope that a C# guru can explain it to me. Why does this code generate an error?

        class Program
        {
            static void Main(string[] args)
            {
                Foo(X); // the error is on this line
            }

            static String X()
            {
                return "Test";
            }

            static void Foo(Func<IEnumerable<String>> x) { }

            static void Foo(Func<String> x) { }
        }

    The error in question:

        Error 1 The call is ambiguous between the following methods or properties:
        'ConsoleApplication1.Program.Foo(System.Func<System.Collections.Generic.IEnumerable<string>>)' and
        'ConsoleApplication1.Program.Foo(System.Func<string>)'
        C:\Users\mabster\AppData\Local\Temporary Projects\ConsoleApplication1\Program.cs 12 13 ConsoleApplication1

    It doesn't matter what type I use - if you replace the "String" declarations with "int" in that code you'll get the same sort of error. It's like the compiler can't tell the difference between Func<T> and Func<IEnumerable<T>>. Can someone shed some light on this?
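    For what it's worth, the call compiles once the delegate type is named explicitly or the method group is wrapped in a lambda; the sketch below is only a workaround illustration, not an explanation of the overload-resolution rule itself.

        using System;
        using System.Collections.Generic;

        class Disambiguation
        {
            static String X() { return "Test"; }
            static void Foo(Func<IEnumerable<String>> x) { }
            static void Foo(Func<String> x) { }

            static void Main()
            {
                Foo((Func<String>)X);       // cast the method group to the intended delegate type
                Foo(new Func<String>(X));   // or construct the delegate explicitly
                Foo(() => X());             // or use a lambda, whose inferred return type (string)
                                            // only fits Func<string>
            }
        }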

  • identify accounts using zend-openid [HELP]

    - by tawfekov
    Hi! I have successfully implemented simple OpenID authentication with Gmail, Yahoo, myOpenID, AOL and many others, exactly as Stack Overflow does, but I have a problem with identifying accounts. For example:

        'openid_ns' => string 'http://specs.openid.net/auth/2.0' (length=32)
        'openid_mode' => string 'id_res' (length=6)
        'openid_op_endpoint' => string 'https://www.google.com/accounts/o8/ud' (length=37)
        'openid_response_nonce' => string '2010-05-02T06:35:40Z9YIASispxMvKpw' (length=34)
        'openid_return_to' => string 'http://dev.local/download/index/gettingback' (length=43)
        'openid_assoc_handle' => string 'AOQobUf_2jrRKQ4YZoqBvq5-O9sfkkng9UFxoE9SEBlgWqJv7Sq_FYgD7rEXiLLXhnjCghvf' (length=72)
        'openid_signed' => string 'op_endpoint,claimed_id,identity,return_to,response_nonce,assoc_handle' (length=69)
        'openid_sig' => string 'knieEHpvOc/Rxu1bkUdjeE9YbbFX3lqZDOQFxLFdau0=' (length=44)
        'openid_identity' => string 'https://www.google.com/accounts/o8/id?id=AItOawk0NAa51T3xWy-oympGpxFj5CftFsmjZpY' (length=80)
        'openid_claimed_id' => string 'https://www.google.com/accounts/o8/id?id=AItOawk0NAa51T3xWy-oympGpxFj5CftFsmjZpY' (length=80)

    I thought that openid_claimed_id or openid_identity would be identical for every account, but after some testing it turns out the claimed id can be changed by the users themselves on Yahoo and changes randomly in the case of Gmail. FYI, I am using Zend OpenID with a small modification on the consumer class, and I can't use SREG [Simple Registration Extension] because it currently uses the OpenID v1 specification :( I haven't read the OpenID specification, so if anybody knows, please help me figure out how to identify OpenID accounts, i.e. how to know whether an account has signed in before or not.

  • How to make use of Grails Dependencies in your IDE

    - by raoulsson
    Hi All, So I finally got my dependencies working with Grails. Now, how can my IDE, eg IntelliJ or Eclipse, take advantage of it? Or do I really have to manually manage what classes my IDE knows about at "development time"? If the BuildConfig.groovy script is setup right (see here), you will be able to code away with vi or your favorite editor without any troubles, then run grails compile which will resolve and download the dependencies into the Ivy cache and off you go... If, however, you are using an IDE like Eclipse or IntelliJ, you will need the dependencies at hand while coding. Obviously - as these animals will need them for the "real time" error detection/compilation process. Now, while it is certainly possible to code with all the classes shining up in bright red all over the place that are unknown to your IDE, it is certainly not much fun... The Maven support or whatever it is officially called lives happily with the pom file, no extra "jar directory" pointers needed, at least in IntelliJ. I would like to be able to do the same with Grails dependencies. Currently I am defining them in the BuildConfig.groovy and additionally I copy/paste the current jars around on my local disk and let the IDE point to it. Not very satisfactory, as I am working in a highly volatile project module environment with respect to code change. And this situation ports me directly into "jar hell", as my "develop- and build-dependencies" easily get out of sync and I have to manage manually, that is, with my brain... And my brain should be busy with other stuff... Thanks! Raoul P.S: I'm currently using Grails 1.2M4 and IntelliJ 92.105. But feel free to add answers on future versions of Grails and different, future IDEs, as the come in...

  • Using Kerberos authentication for SQL Server 2008

    - by vivek m
    I am trying to configure my SQL Server to use Kerberos authentication. My setup is like this: I have 2 virtual PCs on a Windows XP Pro SP3 host. Both VPCs are Windows Server 2003 R2. One VPC acts as the DC, DNS server and DHCP server, has Active Directory installed, and the SQL Server default instance is also running on this VPC. The second VPC is a domain member and acts as the SQL Server client machine. I configured the SPN on the SQL Server service account to get Kerberos working. On the client VPC it seems like it is using Kerberos authentication (as desired):

        C:\Documents and Settings\administrator.SHAREPOINTSVC>sqlcmd -S vm-winsrvr2003
        1> select auth_scheme from sys.dm_exec_connections where session_id=@@spid
        2> go
        auth_scheme
        ----------------------------------------
        KERBEROS

        (1 rows affected)
        1>

    but on the server computer (where the SQL Server instance is actually running) it looks like it is still using NTLM authentication. This is not a remote instance; the SQL Server is local to this machine.

        C:\Documents and Settings\Administrator>sqlcmd
        1> select auth_scheme from sys.dm_exec_connections where session_id=@@spid
        2> go
        auth_scheme
        ----------------------------------------
        NTLM

        (1 rows affected)
        1>

    What can I do so that it uses Kerberos on the server computer as well? (Or is this something that I should not expect?)
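    If it helps to run the same check from application code rather than sqlcmd, a minimal C# version might look like the sketch below (an assumption that a .NET client is relevant here; the server name is simply the one from the question).

        using System;
        using System.Data.SqlClient;

        class AuthSchemeCheck
        {
            static void Main()
            {
                string connectionString = "Data Source=vm-winsrvr2003;Integrated Security=SSPI";
                using (SqlConnection connection = new SqlConnection(connectionString))
                using (SqlCommand command = new SqlCommand(
                    "select auth_scheme from sys.dm_exec_connections where session_id = @@spid",
                    connection))
                {
                    connection.Open();
                    // Prints KERBEROS or NTLM depending on what this session negotiated.
                    Console.WriteLine("auth_scheme: " + command.ExecuteScalar());
                }
            }
        }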

  • Credit Card storage solution

    - by jtnire
    Hi everyone, I'm developing a solution that is designed to store membership details as well as credit card details. I'm trying to comply with PCI DSS as much as I can. Here is my design so far:

    PAN = Primary Account Number == the long number on the credit card.

    Server A is a remote server. It stores all membership details (names, addresses etc.) and provides an individual Key A for each PAN stored. Server B is a local server; it actually holds the encrypted PANs as well as Key B, and does the decryption. To get a PAN, the client has to authenticate with BOTH servers, ask Server A for the respective Key A, then give Key A to Server B, which will return the PAN to the client (provided authentication was successful). Server A will only ever encrypt Key A with Server B's public key, as it will have it beforehand. Server B will probably have to send a salt first, though I don't think that has to be encrypted.

    I haven't really thought about any implementation (i.e. coding) specifics yet regarding the above; however, the solution is using Java's Cajo framework (a wrapper for RMI), so that is how the servers will communicate with each other (currently, membership details are transferred in this way). The reason why I want Server B to do the decryption, and not the client, is that I am afraid of decryption keys going into the client's RAM, even though it's probably just as bad on the server... Can anyone see anything wrong with the above design? It doesn't matter if the above has to be changed. Thanks, jtnire

  • Model value not being set on return from View to Controller

    - by sagesky36
    I have a boolean model variable who's value is supposed to be set to TRUE in order to perform a process on return back into the Controller. It works absolutely fine on my local machine, but not on the remote web server. Can somebody PLEASE inform me what I am missing? Below is the "proof of the pudding": The boolean value in quesion is "ShouldGeneratePdf"; MODEL: namespace PDFConverterModel.ViewModels { public partial class ViewModelTemplate_Guarantors { public ViewModelTemplate_Guarantors() { Templates = new List<PDFTemplate>(); Guarantors = new List<tGuarantor>(); } public int SelectedTemplateId { get; set; } public List<PDFTemplate> Templates { get; set; } public int SelectedGuarantorId { get; set; } public List<tGuarantor> Guarantors { get; set; } public string LoanId { get; set; } public string DepartmentId { get; set; } public bool isRepeat { get; set; } public string ddlDept { get; set; } public string SelectedDeptText { get; set; } public string LoanTypeId { get; set; } public string LoanType { get; set; } public string Error { get; set; } public string ErrorT { get; set; } public string ErrorG { get; set; } public bool ShowGeneratePDFBtn { get; set; } public bool ShouldGeneratePdf { get; set; } } } MasterPage: <!DOCTYPE html> <html> <head> <title>@ViewBag.Title</title> <link href="@Url.Content("~/Content/Site.css")" rel="stylesheet" type="text/css" /> <link href="@Url.Content("~/Content/kendo/2012.2.913/kendo.common.min.css")" rel="stylesheet" type="text/css" /> <link href="@Url.Content("~/Content/kendo/2012.2.913/kendo.dataviz.min.css")" rel="stylesheet" type="text/css" /> <link href="@Url.Content("~/Content/kendo/2012.2.913/kendo.blueopal.min.css")" rel="stylesheet" type="text/css" /> <script src="@Url.Content("~/Scripts/jquery-1.7.1.min.js")" type="text/javascript"></script> <script src="@Url.Content("~/Scripts/jquery.validate.min.js")" type="text/javascript"></script> <script src="@Url.Content("~/Scripts/jquery.validate.unobtrusive.min.js")" type="text/javascript"></script> <script src="@Url.Content("~/Scripts/jquery.unobtrusive-ajax.min.js")" type="text/javascript"></script> <script src="@Url.Content("~/Scripts/modernizr-2.5.3.js")" type="text/javascript"></script> <script src="@Url.Content("~/Scripts/kendo/2012.2.913/kendo.all.min.js")"></script> <script src="@Url.Content("~/Scripts/kendo/2012.2.913/kendo.aspnetmvc.min.js")"></script> </head> <body> <div class="page"> <header> <div id="title"> <h1>BHG :: PDF Service Generator</h1> </div> </header> <section id="main"> @RenderBody() </section> <footer> </footer> </div> </body> </html> View: @model PDFConverterModel.ViewModels.ViewModelTemplate_Guarantors @using (Html.BeginForm("ProcessForm", "Home", new AjaxOptions { HttpMethod = "POST" })) { <table style="width: 1000px"> @Html.HiddenFor(x => x.ShouldGeneratePdf) <tr> <td> <img alt="BHG Logo" src="~/Images/logo.gif" /> </td> </tr> <tr> <td> @(Html.Kendo().IntegerTextBox() .Placeholder("Enter Loan Id") .Name("LoanId") .Format("{0:#######}") .Value(Convert.ToInt32(Model.LoanId)) ) </td> </tr> <tr> <td>@Html.Label("Loan Type: ") @Html.DisplayFor(model => Model.LoanType) </td> <td> <label for="ddlDept">Department:</label> @(Html.Kendo().DropDownListFor(model => Model.ddlDept) .Name("ddlDept") .DataTextField("DepartmentName") .DataValueField("DepartmentID") .Events(e => e.Change("Refresh")) .DataSource(source => { source.Read(read => { read.Action("GetDepartments", "Home"); }); }) .Value(Model.ddlDept.ToString()) ) </td> </tr> @if (Model.ShowGeneratePDFBtn == true) { if 
(Model.ErrorT == string.Empty) { <tr> <td> <u><b>@Html.Label("Templates:")</b></u> </td> </tr> <tr> @for (int i = 0; i < Model.Templates.Count; i++) { <td> @Html.CheckBoxFor(model => Model.Templates[i].IsChecked) @Html.DisplayFor(model => Model.Templates[i].TemplateId) </td> } </tr> } else { <tr> <td> <b>@Html.DisplayFor(model => Model.ErrorT)</b> </td> </tr> } if (Model.ErrorG == string.Empty) { <tr> <td> <u><b>@Html.Label("Guarantors:")</b></u> </td> </tr> <tr> @for (int i = 0; i < Model.Guarantors.Count; i++) { <td> @Html.CheckBoxFor(model => Model.Guarantors[i].isChecked) @Html.DisplayFor(model => Model.Guarantors[i].GuarantorFirstName)&nbsp;@Html.DisplayFor(model => Model.Guarantors[i].GuarantorLastName) </td> } </tr> } else { <tr> <td> <b>@Html.DisplayFor(model => Model.ErrorG)</b> </td> </tr> } } <tr> <td> <input type="submit" name="submitbutton" id="btnRefresh" value='Refresh' /> </td> @if (Model.ShowGeneratePDFBtn == true) { <td> <input type="submit" name="submitbutton" id="btnGeneratePDF" value='Generate PDF' /> </td> } </tr> <tr> <td style="color: red; font: bold"> @Model.Error </td> </tr> </table> } <script type="text/javascript"> $('#btnRefresh').click(function () { Refresh(); }); function Refresh() { var LoanID = $("#LoanID").val(); if (parseInt(LoanID) != 0) { $('#ShouldGeneratePdf').val(false) document.forms[0].submit(); } else { alert("Please enter a LoanId"); } } //$(function () { // //DOM loaded // $('#btnGeneratePDF').click(function () { // DisableGeneratePDF(); // $('#ShouldGeneratePdf').val(true) // }); //}); //function DisableGeneratePDF() { // $('#btnGeneratePDF').attr("disabled", true); // $('#btnRefresh').attr("disabled", true); //} $('#btnGeneratePDF').click(function () { alert("inside click function"); DisableGeneratePDF(); $('#ShouldGeneratePdf').val(true) tof = $('#ShouldGeneratePdf').val(); alert("ShouldGeneratePdf set to " + tof); }); function DisableGeneratePDF() { alert("begin DisableGeneratePDF function"); $('#btnGeneratePDF').attr("disabled", true); $('#btnRefresh').attr("disabled", true); alert("end DisableGeneratePDF function"); } </script> Controller: [HttpPost] public ActionResult ProcessForm(string submitbutton, ViewModelTemplate_Guarantors model, FormCollection collection) if ((submitbutton == "Refresh") || (submitbutton == null) && (model.ShouldGeneratePdf == false)) { } else if ((submitbutton == "Generate PDF") || (model.ShouldGeneratePdf == true)) { } The "Alerts" in the script above come out to exactly what they should be on the remote server. The last alert shows that the value of the bool variable is "true". However, when I do page source views of the hidden variable, below is the result. The values of the hidden variable when the page loads and when the last alert button finishes are as follows: My local machine: The remote machine: As you can see, the value on my machine is set to true when the process executes. However, on the remote machine, it is set to false where it then doesn't excute. Why isn't the value in the model being returned as TRUE on the remote machine?

  • Should I close sockets from both ends?

    - by Roman
    I have the following problem. My client program monitors the availability of a server in the local network (using Bonjour, but that does not really matter). As soon as a server is "noticed" by the client application, the client tries to create a socket: Socket(serverIP, serverPort);. At some point the client can lose the server (Bonjour says the server is not visible in the network anymore), so the client decides to close the socket, because it is not valid anymore. At some moment the server appears again, so the client tries to create a new socket associated with this server. But! The server can refuse to create this socket, since it (the server) already has a socket associated with that client IP and client port. It happens because the socket was closed by the client, not by the server. Can this happen? And if it is the case, how can this problem be solved? Well, I understand that it is unlikely that the client will try to connect to the server from the same port (client port), since the client selects its ports randomly. But it still can happen (just by chance). Right?
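    As a small illustration of the last point (written in C# only for consistency with the other examples here; the behaviour is the same for a Java Socket), the operating system normally assigns a fresh ephemeral local port to every new outgoing connection, which is why a clash with the half-closed old connection is improbable, though as noted not impossible. The server address below is just a placeholder.

        using System;
        using System.Net.Sockets;

        class ReconnectDemo
        {
            static void Main()
            {
                for (int attempt = 0; attempt < 3; attempt++)
                {
                    using (TcpClient client = new TcpClient("192.0.2.10", 9000))
                    {
                        // The OS picks the local (client-side) port; it changes on every reconnect.
                        Console.WriteLine("Connected from " + client.Client.LocalEndPoint);
                    } // Dispose closes this end; the server sees the FIN and can clean up its side.
                }
            }
        }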

  • Using a CMS with an external database

    - by George Reith
    I am looking at building an external site with a CMS, probably Drupal or ExpressionEngine. The problem is that our company already has a membership database that is designed to work with our existing enterprise software. Migrating data from that database manually is not an option, as modifications and new data must be accessible in real time. Because the design of the external database will differ from the CMS's own, I have decided the best way forward is to use two databases and force the CMS to use the external one to read user information (it cannot write to it) and a local one for everything else the CMS needs to do (read + write). Is this feasible with Drupal or ExpressionEngine? Ideally I need to be able to use hooks, as I do not want to modify core CMS files. Sifting through the docs I am not able to find what I would hook into for either CMS. (Note: I know it is possible, but I want to know if it's feasible.) Finally, if there is a better way of handling this situation please also chime in. Perhaps there is something at the database level to reference a field or table in an external database? I'm clutching at straws; I'm sure someone can point me in the right direction.

  • Code promotion: Enforcing the rules

    - by jbarker7
    So here is our problem: we have a small team of developers with their own ways of doing things. I am trying to formalize a process in which we are required to promote our code in the following order:

    1. Local sandbox
    2. Dev
    3. UAT
    4. Staging
    5. Live

    Developers develop/test as they go on their own sandbox. Dev is its own box that we would use for continuous integration. UAT is another site in IIS on the dev box, which uses our dev database. We then promote to Staging, which is a site in IIS on the Live box using live data (just like Live, hence "staging"). Then, finally, we promote to Live. Here are a few of my questions:

    1.) Does this seem to be best practice? If not, what needs to be done differently?
    2.) How do I enforce the rules on the developers? Developers often skip steps in order to save time; this should not be tolerated, and it would be great if it could be physically enforced.
    3.) How do I enforce these rules on the business group? The business group just wants to get features out FAST. Do we promote only on certain days?

    Thanks! Josh

  • What hash algorithms are parallelizable? Optimizing the hashing of large files utilizing on mult-co

    - by DanO
    I'm interested in optimizing the hashing of some large files (optimizing wall clock time). The I/O has been optimized well enough already and the I/O device (local SSD) is only tapped at about 25% of capacity, while one of the CPU cores is completely maxed-out. I have more cores available, and in the future will likely have even more cores. So far I've only been able to tap into more cores if I happen to need multiple hashes of the same file, say an MD5 AND a SHA256 at the same time. I can use the same I/O stream to feed two or more hash algorithms, and I get the faster algorithms done for free (as far as wall clock time). As I understand most hash algorithms, each new bit changes the entire result, and it is inherently challenging/impossible to do in parallel. Are any of the mainstream hash algorithms parallelizable? Are there any non-mainstream hashes that are parallelizable (and that have at least a sample implementation available)? As future CPUs will trend toward more cores and a leveling off in clock speed, is there any way to improve the performance of file hashing? (other than liquid nitrogen cooled overclocking?) or is it inherently non-parallelizable?
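    The "same stream, several digests" approach mentioned above can be sketched in C# as follows; note this only illustrates hashing one read pass with multiple algorithms at once, not a way to parallelise a single hash, which as the question says is the genuinely hard part.

        using System;
        using System.IO;
        using System.Security.Cryptography;
        using System.Threading.Tasks;

        static class MultiHash
        {
            public static void HashBoth(string path)
            {
                using (MD5 md5 = MD5.Create())
                using (SHA256 sha256 = SHA256.Create())
                using (FileStream stream = File.OpenRead(path))
                {
                    byte[] buffer = new byte[1 << 20];   // 1 MiB blocks
                    int read;
                    while ((read = stream.Read(buffer, 0, buffer.Length)) > 0)
                    {
                        int length = read;
                        // Feed the same block to both algorithms concurrently; Parallel.Invoke
                        // blocks until both finish, so the buffer is safe to reuse.
                        Parallel.Invoke(
                            () => md5.TransformBlock(buffer, 0, length, null, 0),
                            () => sha256.TransformBlock(buffer, 0, length, null, 0));
                    }
                    md5.TransformFinalBlock(new byte[0], 0, 0);
                    sha256.TransformFinalBlock(new byte[0], 0, 0);
                    Console.WriteLine("MD5:    " + BitConverter.ToString(md5.Hash));
                    Console.WriteLine("SHA256: " + BitConverter.ToString(sha256.Hash));
                }
            }
        }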

  • Ember Data Sync - LocalStorage+REST+RealTime+Online/Offline

    - by Miguel Madero
    We have a combination of requirements in terms of data access:

    Pre-load some reference data. We need reference data to survive browser restarts instead of just living in memory, to avoid loading it all the time. I'm currently using the LocalStorageAdapter for that. Once we have it, we would like to sync changes (polling, or using Socket.IO in the background and updating LocalStorage, could do the trick). There are other models that are more transactional, where we would need to go directly to the server to get/save them. It would be nice to use something like the RESTAdapter for that. Lastly, there are some operations that should work offline, with changes synced later.

    To make it more concrete:

    * We pre-load vendor and "favorite products" data into Local Storage. We work offline with those.
    * We need to sync server changes to vendor and product information.
    * If users search the full catalog, that requires them to be online.
    * When offline, we need to allow users to add something to their cart or even submit an order. We would like to queue this action and submit it when they have an Internet connection.

    So a few questions are derived from this:

    * Is there a way to use the RESTAdapter in combination with LocalStorage?
    * Is there some Socket.IO support? (Happy to do this part manually.)
    * Is there queueing support? Ideally at the Ember Data level.

    I know we will have to do a lot of this manually and pull together the different Lego pieces, but I wanted to ask for some perspective from experienced Ember devs.

  • Do all the HTML5 storage systems work together ?

    - by azera
    While there is a lot of good stuff in HTML5, one thing I don't get is the redundant storage mechanisms. First there are localStorage and sessionStorage, which are key-value stores; one is for one instance of the app ("one tab"), and the other works for all the instances of that application so they can share data. Both are saved when you close your browser and have a limited size (usually 5MB). That's great, and everything would be fine if we stopped there. But then there is the "Web SQL Database", which has the same security system as localStorage, the same size limit, the same everything, except it works like/is SQLite, with tables and SQL syntax and all of that. And the bummer is, they don't work on the same data at all! This is not two ways to access your data; this is really two stores for every HTML5 app out there (not created by default, yes, but still, you see my point). What I would like to know is: is there a reason for both of these mechanisms to exist at the same time? Or did they just look at the SQL and NoSQL movements to pick the best and then go "screw it, let's add both!"? Why not implement local/session storage as a table inside the Web SQL DB?

  • csv to hash data structure conversion using perl

    - by Kavya S
    1. Convert a .csv file to a Perl hash data structure.

    Format of the .csv file:

        sw,s1,s2,s3,s4
        ver,v1,v2,v3,v4
        msword,v2,v3,v1,v1
        paint,v4,v2,v3,v3
        outlook,v1,v1,v3,v2

    My Perl script:

        #!/usr/local/bin/perl
        use strict;
        use warnings;
        use Data::Dumper;

        my %hash;
        open my $fh, '<', 'some_file.csv' or die "Cannot open: $!";
        while (my $line = <$fh>) {
            $line =~ s/,,/-/;
            chomp($line);
            my @array = split /,/, $line;
            my $key = shift @array;
            $hash{$key} = $line;
            $hash{$key} = \@array;
        }
        print Dumper(\%hash);
        close $fh;

    The Perl hash, i.e. the output, should look like:

        $sw_ver_db = {
            s1 => {
                msword  => {ver => v2},
                paint   => {ver => v4},
                outlook => {ver => v1},
            },
            s2 => {
                msword  => {ver => v3},
                paint   => {ver => v2},
                outlook => {ver => v1},
            },
            s3 => {
                msword  => {ver => v1},
                paint   => {ver => v3},
                outlook => {ver => v3},
            },
            s4 => {
                msword  => {ver => v1},
                paint   => {ver => v3},
                outlook => {ver => v2},
            },
        };

  • Communicating with all network computers regardless of IP address

    - by Stephen Jennings
    I'm interested in finding a way to enumerate all accessible devices on the local network, regardless of their IP address. For example, in a 192.168.1.X network, if there is a computer with a 10.0.0.X IP address plugged into the network, I want to be able to detect that rogue computer and preferably communicate with it as well. Both computers will be running this custom software. I realize that's a vague description, and a full solution to the problem would be lengthy, so I'm really looking for help finding the right direction to go in ("Look into using class XYZ and ABC in this manner") rather than a full implementation.

    The reason I want this is that our company ships imaged computers to thousands of customers, each of which has different network settings (most use the same IP scheme, but a large percentage do not, and most do not have DHCP enabled on their networks). Once the hardware arrives, we have a hard time getting it up on the network, especially if the IP scheme doesn't match, since there is no one technically oriented on-site. Ideally, I want to design some kind of console to be used from their main workstation which looks out on the network, finds all computers running our software, displays their current IP addresses, and allows you to change the IP. I know it's possible to do this because we sell a couple of pieces of custom hardware which have exactly this capability (plug the hardware in anywhere and view it from another computer regardless of IP). I'm hoping it's possible to do in .NET 2.0, but I'm open to using .NET 3.5 or P/Invoke if I have to.
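    One direction to look into (only a sketch, and an assumption that UDP broadcast discovery fits your environment): a datagram sent to the limited broadcast address 255.255.255.255 is delivered to machines on the same network segment even when their configured subnet doesn't match, so an agent baked into your image can answer a probe with its current address. The port number and message strings below are made up.

        using System;
        using System.Net;
        using System.Net.Sockets;
        using System.Text;

        static class Discovery
        {
            const int Port = 45454;   // hypothetical discovery port

            // Runs on every shipped machine: answer each probe with this host's name.
            public static void RunResponder()
            {
                using (UdpClient listener = new UdpClient(Port))
                {
                    listener.EnableBroadcast = true;
                    while (true)
                    {
                        IPEndPoint sender = new IPEndPoint(IPAddress.Any, 0);
                        byte[] request = listener.Receive(ref sender);
                        if (Encoding.ASCII.GetString(request) == "WHO-IS-THERE")
                        {
                            // Reply by broadcast too, so the answer gets back even though
                            // the two machines are on mismatched subnets.
                            byte[] reply = Encoding.ASCII.GetBytes(Dns.GetHostName());
                            listener.Send(reply, reply.Length,
                                          new IPEndPoint(IPAddress.Broadcast, sender.Port));
                        }
                    }
                }
            }

            // Runs from the admin console: broadcast once and list everyone who answers.
            public static void Probe()
            {
                using (UdpClient client = new UdpClient())
                {
                    client.EnableBroadcast = true;
                    client.Client.ReceiveTimeout = 2000;
                    byte[] probe = Encoding.ASCII.GetBytes("WHO-IS-THERE");
                    client.Send(probe, probe.Length, new IPEndPoint(IPAddress.Broadcast, Port));
                    try
                    {
                        while (true)
                        {
                            IPEndPoint responder = new IPEndPoint(IPAddress.Any, 0);
                            byte[] answer = client.Receive(ref responder);
                            Console.WriteLine("{0} -> {1}", responder.Address,
                                              Encoding.ASCII.GetString(answer));
                        }
                    }
                    catch (SocketException) { } // receive timeout: no more answers
                }
            }
        }

    The console would display responder.Address (the machine's real, possibly mismatched IP), and changing the address could then be a second message handled by the same responder.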

  • rails 4 -- working with js format from ajax

    - by user101289
    I'm still working on learning Rails, and I have a page with team information that gets updated when a team's icon is clicked, which fires an AJAX call to the controller to populate some tabs. I've read some good info about how to use format.js in the controller to render a partial from a js.coffee or js.erb file. The problem I'm running into is in the CoffeeScript, I think. Right now, I'm getting some data called @schedules from the controller and passing it to a schedule.js.coffee file that should populate a partial for each record returned and attach it to a table:

        // schedule.js.coffee
        $.each @schedules, (schedule) ->
          ($ '#schedule_data').append("<%= j render(partial: 'schedules/schedule', locals: { s: schedule }) %>")

    This throws an error:

        undefined local variable or method `schedule' for #<#<Class:0x007fe535cd2900>:0x007fe535d32a30>

    I tried simplifying the CoffeeScript to just log the output:

        $.each @schedules, (schedule) ->
          console.log(schedule)

    but this prints nothing. Am I missing something? I am very inexperienced with CoffeeScript, but it seems like I should be getting some data -- I verified that the schedule items do exist for this team item.

  • Outgoing Emails (Git Patches) Blocked by Windows Live

    - by SteveStifler
    Just recently I dove into the VideoLAN open source project. This was my first time using git, and when sending in my first patch (using git send-email --to [email protected] patches), I was sent the following message from my computer's local mail in the terminal (I'm on OSX 10.6 by the way): Mail rejected by Windows Live Hotmail for policy reasons. We generally do not accept email from dynamic IP's as they are not typically used to deliver unauthenticated SMTP e-mail to an Internet mail server. http:/www.spamhaus.org maintains lists of dynamic and residential IP addresses. If you are not an email/network admin please contact your E-mail/Internet Service Provider for help. Email/network admins, please visit http://postmaster.live.com for email delivery information and support They must think I'm a spammer. I have a dynamic IP and my ISP (Charter) won't let me get a static one, so I tried editing git preferences: git config --global user.email "[email protected]" to my gmail account. However I got the exact same message again. My guess is that it has something to do with the native mail's preferences, but I have no idea how to access them or modify them. Anybody have any ideas for solving this? Thanks!

  • Apache Content negotiation returns wrong Content-Type with MultiViews :(

    - by edbras
    I am using the Apache 2.2 web server with content negotiation through MultiViews. I have:

        <Directory /home/develop/web/prodBuild>
            Options +MultiViews
            AddEncoding x-gzip .gz
        </Directory>

    This directory contains the gzipped files. So if a browser requests the file bla.js, the server will return the file bla.js.gz, as the file bla.js doesn't exist. However, the Content-Type on my Ubuntu server is set to "application/x-gzip", which is wrong as it's a JavaScript file, so it has to be "application/javascript". I have this working on my local Windows Apache server, but I can't seem to get it working on the remote Ubuntu server. :( ... No idea why... What/why is setting this wrong Content-Type? BTW: I don't have the line "AddType application/x-gzip .gz .tgz" anywhere in my config, as then I would understand why it goes wrong.

    My workaround: add the following line under the above "AddEncoding":

        AddType "" .gz

    It will then set an empty string as the Content-Type and then it works. I think the browser will try to discover the content type itself... I hope somebody can explain to me why this is not working and why this Content-Type is set? :(

  • LoadError in Ruby

    - by wilhelmtell
    I'm having issues requiring 'digest/sha1'.

        ~$ ./configure --prefix=$HOME/usr --program-suffix=19 --enable-shared
        ~$ make
        ~$ make install
        ~$ irb19
        irb(main):001:0> require 'digest/sha1'
        LoadError: dlopen(/Users/matan/usr/lib/ruby19/1.9.1/i386-darwin9.8.0/digest/sha1.bundle, 9): Symbol not found: _rb_Digest_SHA1_Finish
          Referenced from: /Users/matan/usr/lib/ruby19/1.9.1/i386-darwin9.8.0/digest/sha1.bundle
          Expected in: flat namespace
         - /Users/matan/usr/lib/ruby19/1.9.1/i386-darwin9.8.0/digest/sha1.bundle
                from (irb):1:in `require'
                from (irb):1
                from /Users/matan/usr/bin/irb19:12:in `<main>'
        irb(main):002:0>

    I know some standard modules require fine, while others don't. If I say require 'yaml' or even require 'digest', that works fine. I am using OS X 10.5.8 with Ruby 1.9.1-p378. The system-wide install of Ruby 1.8.6 works fine. Just last week I uninstalled Ruby and re-installed it. When I first installed Ruby I installed it in a similar manner, from source, prefixed at my local $HOME/usr directory. I tried removing each and every file that make install installs, then re-installing, but that didn't help. Do you have an idea what the issue is and how to resolve it?

  • Visual Studio 2010 / ASP.NET MVC 2 / Publish

    - by SevenCentral
    I just did a clean install on Windows 7 x64 Professional with the final release of Visual Studio 2010 Premium. In order to duplicate what I'm experiencing, do the following:

    1. Create a new ASP.NET MVC 2 Web Application
    2. Right-click the project and select Properties
    3. On the Web tab, select "Use Local IIS Web Server"
    4. Click on Create Virtual Directory
    5. Save all
    6. Unload the project
    7. Edit the project file
    8. Change MvcBuildViews to true
    9. Save all
    10. Reload the project
    11. Right-click the project and select Publish
    12. Choose the file system publish method
    13. Enter a target location
    14. Choose Delete all existing files
    15. Select Publish
    16. Right-click the project
    17. Select Publish

    Each time I do the above I get the following error: "It is an error to use a section registered as allowDefinition='MachineToApplication' beyond application level..." The error originates from obj\debug\package\packagetmp\web.config, relative to the project directory. I can repeat this all day long with any MVC 2 project I've built. In order to fix this problem, I need to set MvcBuildViews to false in the project file. That's not really an option. This wasn't a problem in Visual Studio 2008, and it seems to be an issue with the way the Publish command stages files beneath the project directory. Can anyone else duplicate this error? Is this a bug or by design? Is there a fix, workaround, etc.? Thanks.
