Search Results

Search found 5662 results on 227 pages for 'processor socket'.


  • Advice on designing and building a distributed application to track vehicles

    - by dario-g
    I'm working on an application for tracking vehicles. There will be about 10k or more vehicles. Each will send ~250 bytes every minute. The data contains the GPS location and everything from the CAN bus (every value we can read from the vehicle computer and dashboard). Data is sent over GSM/GPRS (using UDP). Estimated rows of this data per day: ~2,000k. I see three main blocks:

    1. Multithreaded Socket Server (MSS) - I have it. The MSS stores received data in a queue (using NServiceBus).
    2. Rule Processor Server (RPS) - this is the core of the system. This block is responsible for parsing received data, storing it in the database, processing rules, and sending messages to the Notifier Server (which sends e-mails/SMS texts). Rule example: among the received bytes there is the current speed. When the speed goes above 120: show an alert in the web application for specified users, send an e-mail, send an SMS. (There can be more than one instance of RPS.) A sketch of this rule check follows below.
    3. Web application - allows reporting and defining rules by users, monitoring alerts, etc.

    I'm looking for advice on how to design the communication between the RPS and the web application. Some questions: Should the web application and the RPS have separate databases, or will one central database be enough? I have one domain model in the web application; if there is one central database, can I use the same model (objects) in the RPS? And how do I send changed rules to the RPS? I'm trying to decouple these blocks as much as possible. I'm planning to create a different instance of the application for each client (each client will have a separate database). One client will have 10k vehicles, others only 100 vehicles.
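
    For illustration only, a small sketch (in Java; all names here are hypothetical, since the post includes no code) of the kind of rule evaluation the RPS block describes - parse the speed from a decoded packet and fan the alert out to the configured notifiers:

        import java.util.List;

        // Hypothetical shape of an RPS threshold rule - not from the post.
        interface Notifier { void send(String message); }

        class SpeedRule {
            private final double limit;              // e.g. 120
            private final List<Notifier> notifiers;  // web alert, e-mail, SMS

            SpeedRule(double limit, List<Notifier> notifiers) {
                this.limit = limit;
                this.notifiers = notifiers;
            }

            void evaluate(String vehicleId, double currentSpeed) {
                if (currentSpeed > limit) {
                    for (Notifier n : notifiers) {
                        n.send("Vehicle " + vehicleId + " exceeded " + limit
                                + " (now " + currentSpeed + ")");
                    }
                }
            }
        }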


  • How do I find out what process Id and thread id / name has a file open

    - by peter
    Hi all, I am using C# in an application and am having problems with a file becoming locked. The piece of code does this:

        while (true)
        {
            // read a packet from a socket (with data in it to add to the file)
            // open the file
            // write the data to it
            // close the file
        }

    But in the process the file becomes locked. I don't really understand how; we are definitely catching and reporting exceptions, so I don't see how the file doesn't get closed every time. My best guess is that something else is opening the file, but I want to prove it. Can someone please provide a piece of code to check whether the file is open, and if so report what process id and thread id have the file open? For example, if I had this code:

        StreamWriter streamWriter1 = new StreamWriter(@"c:\logs\test.txt");
        streamWriter1.WriteLine("Test");
        // code to check for locks??
        StreamWriter streamWriter2 = new StreamWriter(@"c:\logs\test.txt");
        streamWriter1.Close();
        streamWriter2.Close();

    That will throw an exception because the file is locked when we try to open it the second time. So, where the comment is, what could I put in there to report that the current app (process id) and the current thread (thread id) have the file locked? Thanks.


  • XSP2 crashes serving static images

    - by Dawkins
    Hi, requesting a simple HTML page with a jpg image makes XSP2 crash. If I remove the image from the HTML, the page is served OK every time. The version is XSP2 2.0 on Mono 2.6.1; version 2.4.2.2 on the same machine works fine. I have tested it on two different machines, both Windows Vista Business SP1. Has anyone experienced the same? Any clue what the problem might be? Below is the stack trace displayed by the console. (The line in Spanish says "an existing connection was forcibly closed by the remote host".) EDIT: since another user is having the same problem, I have submitted a bug to Novell and created a little zip to reproduce the problem: https://bugzilla.novell.com/show_bug.cgi?id=582162

        Peer unexpectedly closed the connection on write. Closing our end.
        System.IO.IOException: Write failure ---> System.Net.Sockets.SocketException:
        Se ha forzado la interrupción de una conexión existente por el host remoto.
          at System.Net.Sockets.Socket.Send (System.Byte[] buf, Int32 offset, Int32 size, SocketFlags flags) [0x00000] in <filename unknown>:0
          at System.Net.Sockets.NetworkStream.Write (System.Byte[] buffer, Int32 offset, Int32 size) [0x00000] in <filename unknown>:0
          --- End of inner exception stack trace ---
          at System.Net.Sockets.NetworkStream.Write (System.Byte[] buffer, Int32 offset, Int32 size) [0x00000] in <filename unknown>:0
          at Mono.WebServer.XSPWorker.Write (System.Byte[] buffer, Int32 position, Int32 size) [0x00000] in <filename unknown>:0
        Peer unexpectedly closed the connection on write. Closing our end.
        System.ObjectDisposedException: The object was used after being disposed.
          at System.Net.Sockets.NetworkStream.CheckDisposed () [0x00000] in <filename unknown>:0
          at System.Net.Sockets.NetworkStream.Write (System.Byte[] buffer, Int32 offset, Int32 size) [0x00000] in <filename unknown>:0
          at Mono.WebServer.XSPWorker.Write (System.Byte[] buffer, Int32 position, Int32 size) [0x00000] in <filename unknown>:0

    Thank you.


  • Lightweight webserver to integrate on the client end

    - by Gopal
    Hi, I need to create a Python module that will be installed on end-user machines. One of the scripts in that module should be able to receive HTTP POSTs (usually with some JSON-formatted data in the body) and then pass that data on to an appropriate Python script. I can think of two ways to do this:

    a) Open a listening server socket on port 80, wait for the HTTP request to come in, parse it, and then pass the data to another Python script depending on the URL it arrived on. This method does not require the end user to install a webserver; the end user only has to install the Python module.

    b) Have a mini-webserver installed along with the Python module. The webserver would do the same job as [a] via CGI, without me having to write the CGI functionality. But then the user has to install the webserver (i.e., the hassle of yet another install). I would like to avoid that if possible.

    If [b] is the easier option, what is the smallest, simplest webserver there is (preferably one that can be packaged as part of the Python module itself so that it does not have to be separately installed)? Must be open source, of course. Regards, Gopal


  • Suitable ESXi Spec

    - by Canacourse
    Finally I have some money to buy a new server and replace the one I have been using for 10 years. I'm thinking of running ESXi on the new server, and intend to use it as follows:

    - One W2008 R2 guest running Exchange, file store, SVN and an accounting application for the day-to-day running of the company.
    - Multiple guest VMs (W2K, XP, Vista & Win7) that were set up for in-house testing, plus real customer images, also for testing.
    - Probably two server guest OSs (W2003 & W2008) running at the same time, again for testing.
    - One guest VM for builds & continuous integration.
    - Possibly one guest running W2008 R2 for a customer website (portal).

    This server will have to last another 10 years, so I want to get the spec right. Although I am clear on the memory and disk requirements, I am not so clear on the processor(s). I'm thinking of 2 quad-core processors but welcome advice on this. Proposed spec: 10GB RAM, 2TB SATA drives (hardware RAID 1), 2 processors (TBC). Normally 3 server VMs will be running concurrently and the other VMs will be started as required. Max expected VMs running: about 7. Max users = 4. TIA.


  • When is a>a true?

    - by Cricri
    Right, I think I really am living a dream. I have the following piece of code which I compile and run on an AIX machine (AIX 3 5, PowerPC_POWER5 processor type, IBM XL C/C++ for AIX, V10.1, Version: 10.01.0000.0003):

        #include <stdio.h>
        #include <math.h>

        #define RADIAN(x) ((x) * acos(0.0) / 90.0)

        double nearest_distance(double radius, double lon1, double lat1,
                                double lon2, double lat2)
        {
            double rlat1 = RADIAN(lat1);
            double rlat2 = RADIAN(lat2);
            double rlon1 = lon1;
            double rlon2 = lon2;
            double a = 0, b = 0, c = 0;

            a = sin(rlat1)*sin(rlat2) + cos(rlat1)*cos(rlat2)*cos(rlon2-rlon1);
            printf("%lf\n", a);
            if (a > 1) {
                printf("aaaaaaaaaaaaaaaa\n");
            }
            b = acos(a);
            c = radius * b;
            return radius*(acos(sin(rlat1)*sin(rlat2) + cos(rlat1)*cos(rlat2)*cos(rlon2-rlon1)));
        }

        int main(int argc, char** argv)
        {
            nearest_distance(6367.47, 10, 64, 10, 64);
            return 0;
        }

    Now, the value of 'a' after the calculation is reported as being '1'. And on this AIX machine, it looks like 1 > 1 is true, as my 'if' is entered! And my acos of what I think is '1' returns NaNQ, since the argument is bigger than 1. May I ask how that is even possible? I do not know what to think anymore! The code works just fine on other architectures, where 'a' really takes the value of what I think is 1 and acos(a) is 0.
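
    A likely explanation (a hedged note, not from the original thread): printf("%lf") rounds to six decimal places, so a value like 1.0000000000000002 prints as 1.000000 while still comparing greater than 1, and extended intermediate precision or fused multiply-add on POWER can produce exactly such a hair-above-1 result from the sine/cosine products. The usual defense is to clamp before acos; a minimal sketch, in Java for illustration:

        // Clamp the cosine argument into [-1, 1] before acos: rounding in
        // sin/cos products can push it just past 1.0, and acos(>1) is NaN.
        static double safeAcos(double a) {
            if (a > 1.0)  a = 1.0;
            if (a < -1.0) a = -1.0;
            return Math.acos(a);
        }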


  • Android Multiple Handlers Design Question

    - by Soumya Simanta
    This question is related to an existing question I asked. I thought I'd ask a new question instead of replying to the other one, because I cannot comment on my previous question due to a word limit. Marc wrote: "'I've more than one Handler in an Activity.' Why? If you do not want a complicated handleMessage() method, then use post() (on Handler or View) to break the logic up into individual Runnables. Multiple Handlers make me nervous." I'm new to Android. My question is: is having multiple handlers in a single activity a bad design? Here is a sketch of my current implementation. I have a MapActivity that creates a data thread (a UDP socket that listens for data). My first handler is responsible for sending data from the data thread to the activity. On the map I have a bunch of "dynamic" markers that are refreshed frequently. Some of these markers are video markers: if the user clicks a video marker, I add a view that extends android.opengl.GLSurfaceView to my map activity and display video on this new view. I use my second handler to send information about the marker that the user tapped on, from the ItemizedOverlay onTap(int index) method. The user can close the video view by tapping on it; I use my third handler for this. I would appreciate it if people could tell me what's wrong with this approach and suggest better ways to implement it. Thanks.
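
    One common consolidation (a hedged sketch, not from the thread; the message codes and the updateMarkers/showVideoView/hideVideoView methods are hypothetical, and android.os.Handler and android.os.Message are assumed imported) is a single Handler that dispatches on Message.what, so all three flows funnel through one place:

        // One Handler, three message kinds - replaces three separate Handlers.
        private static final int MSG_DATA        = 1; // packet from the UDP thread
        private static final int MSG_MARKER_TAP  = 2; // user tapped a video marker
        private static final int MSG_CLOSE_VIDEO = 3; // user tapped the video view

        private final Handler handler = new Handler() {
            @Override
            public void handleMessage(Message msg) {
                switch (msg.what) {
                    case MSG_DATA:        updateMarkers(msg.obj);  break;
                    case MSG_MARKER_TAP:  showVideoView(msg.arg1); break;
                    case MSG_CLOSE_VIDEO: hideVideoView();         break;
                }
            }
        };

        // From the data thread, for example:
        // handler.obtainMessage(MSG_DATA, packet).sendToTarget();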


  • Triangle numbers problem: show output within 4 seconds

    - by Daredevil
    The sequence of triangle numbers is generated by adding the natural numbers. So the 7th triangle number would be 1 + 2 + 3 + 4 + 5 + 6 + 7 = 28. The first ten terms would be: 1, 3, 6, 10, 15, 21, 28, 36, 45, 55, ... Let us list the factors of the first seven triangle numbers:

        1:  1
        3:  1, 3
        6:  1, 2, 3, 6
        10: 1, 2, 5, 10
        15: 1, 3, 5, 15
        21: 1, 3, 7, 21
        28: 1, 2, 4, 7, 14, 28

    We can see that 28 is the first triangle number to have over five divisors. Given an integer n, display the first triangle number having at least n divisors. Sample input: 5. Output: 28. Input constraints: 1 <= n <= 320. I was obviously able to do this question, but I used a naive algorithm: get n, generate triangle numbers, and count each one's factors using the mod operator. But the challenge was to show the output within 4 seconds of input. On high inputs like 190 and above it took almost 15-16 seconds. Then I tried to put the triangle numbers and their divisor counts in a 2D array first, then get the input from the user and search the array, but somehow I couldn't do it: I got a lot of processor faults. Please try doing it with this method and paste the code, or if there are any better ways, please tell me.
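
    The standard speed-up (a sketch, in Java for illustration) uses the identity T(k) = k(k+1)/2: since k and k+1 are coprime, the divisor count of T(k) is the product of the divisor counts of the two coprime halves, so trial division never goes past the square root of either half:

        public class FirstTriangle {
            // Count divisors of x by trial division up to sqrt(x).
            static int divisors(long x) {
                int count = 0;
                for (long i = 1; i * i <= x; i++) {
                    if (x % i == 0) count += (i * i == x) ? 1 : 2;
                }
                return count;
            }

            public static void main(String[] args) {
                int n = Integer.parseInt(args[0]); // e.g. 5 -> prints 28
                for (long k = 1; ; k++) {
                    // Split T(k) = k(k+1)/2 into two coprime factors a * b.
                    long a = (k % 2 == 0) ? k / 2 : k;
                    long b = (k % 2 == 0) ? k + 1 : (k + 1) / 2;
                    if (divisors(a) * divisors(b) >= n) {
                        System.out.println(a * b); // a * b == T(k)
                        return;
                    }
                }
            }
        }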


  • How to create a CGBitmapContext which works for Retina display without wasting space on regular displays?

    - by ????
    Is it true that if it is in UIKit, including drawRect, the HD aspect of the Retina display is automatically handled? So does that mean that in drawRect, the current graphics context for a 1024 x 768 view is actually a 2048 x 1536 pixel bitmap context? (Is there a way to print this size out to verify it?) We actually enjoy the luxury of 1 point = 4 pixels being automatically handled for us.

    However, if we use CGBitmapContextCreate, then those will really be pixels, not points? (At least if we provide a data buffer for that bitmap, the size is not for the higher resolution but for the standard resolution; and even if we pass NULL as the buffer so that CGBitmapContextCreate manages the buffer for us, the size is probably the same as if we pass in a data buffer - standard resolution, not Retina resolution.) We can always create 2048 x 1536 for iPad 1 and iPad 2 as well as the new iPad, but it will waste memory, processor and GPU power, as it is only needed for the new iPad. So do we have to use an if () { } else { } to create such a bitmap context, and how do we actually do so? And does all our CGContextMoveToPoint code have to be adjusted for the Retina display to use x * 2 and y * 2, versus the non-Retina display just using x, y? That can be quite messy for the code. (Or maybe we can define a local variable scaleFactor, set it to 1 for standard resolution and 2 for Retina, so our coordinates are always x * scaleFactor, y * scaleFactor instead of just x and y.)

    It seems that UIGraphicsBeginImageContextWithOptions can create one for Retina automatically if a scale of 0.0 is passed in, but I don't think it can be used if I need to create the context and keep it (using an ivar or property of a UIViewController to hold it). If I don't release it using UIGraphicsEndImageContext, then it stays in the graphics context stack, so it seems like I have to use CGBitmapContextCreate instead. (Or do we just let it stay at the bottom of the stack and not worry about it?)


  • How can I automatically release resources RAII-style in Perl?

    - by Philip Potter
    Say I have a resource (e.g. a filehandle or network socket) which has to be freed:

        open my $fh, "<", "filename" or die "Couldn't open filename: $!";
        process($fh);
        close $fh or die "Couldn't close filename: $!";

    Suppose that process might die. Then the code block exits early, and $fh doesn't get closed. I could explicitly check for errors:

        open my $fh, "<", "filename" or die "Couldn't open filename: $!";
        eval { process($fh) };
        my $saved_error = $@;
        close $fh or die "Couldn't close filename: $!";
        die $saved_error if $saved_error;

    but this kind of code is notoriously difficult to get right, and it only gets more complicated when you add more resources. In C++ I would use RAII to create an object which owns the resource and whose destructor would free it. That way, I don't have to remember to free the resource, and resource cleanup happens correctly as soon as the RAII object goes out of scope - even if an exception is thrown. Unfortunately, in Perl a DESTROY method is unsuitable for this purpose, as there are no guarantees for when it will be called. Is there a Perlish way to ensure resources are automatically freed like this, even in the presence of exceptions? Or is explicit error checking the only option?


  • WCF Fails when using impersonation over 2 machine boundaries (3 machines)

    - by MrTortoise
    These scenarios work in their individual pieces; it's when I put it all together that it breaks. I have a WCF service using netTCP that uses impersonation to get the caller's ID (role-based security will be used at this level). On top of this is a WCF service using basicHttp with TransportCredentialOnly, which also uses impersonation. I then have a client front end that connects to the basicHttp service. The aim of the game is to return the client's username from the netTCP service at the bottom, so that ultimately I can use role-based security there. Each service is on a different machine, and each service works on its own when you remove the calls it makes to other services and run a client against it both locally and remotely. I.e., the problem only manifests when you jump across more than one machine boundary: the setup breaks when I connect the parts together, but they work fine individually. I also specify [OperationBehavior(Impersonation = ImpersonationOption.Required)] in the method and have IIS set up to only allow Windows authentication (actually I still have anonymous enabled, but disabling it makes no difference). The impersonation works fine in the scenario where I have a netTCP service on machine A, a client with a basicHttp service on machine B, and a client for the basicHttp service also on machine B. However, if I move that client to any machine C, I get the following error: "The socket connection was aborted. This could be caused by an error processing your message or a receive timeout being exceeded by the remote host, or an underlying network resource issue. Local socket timeout was '00:10:00'". The inner message is "An existing connection was forcibly closed by the remote host". I am beginning to think this is more a network issue than config... but then I'm grasping at straws...
the config files are as follows (heading from the client down to the netTCP layer) <?xml version="1.0" encoding="utf-8" ?> <configuration> <system.serviceModel> <bindings> <basicHttpBinding> <binding name="basicHttpBindingEndpoint" closeTimeout="00:02:00" openTimeout="00:02:00" receiveTimeout="00:10:00" sendTimeout="00:02:00" allowCookies="false" bypassProxyOnLocal="false" hostNameComparisonMode="StrongWildcard" maxBufferSize="65536" maxBufferPoolSize="524288" maxReceivedMessageSize="65536" messageEncoding="Text" textEncoding="utf-8" transferMode="Buffered" useDefaultWebProxy="true"> <readerQuotas maxDepth="32" maxStringContentLength="8192" maxArrayLength="16384" maxBytesPerRead="4096" maxNameTableCharCount="16384" /> <security mode="TransportCredentialOnly"> <transport clientCredentialType="Windows" proxyCredentialType="None" realm="" /> <message clientCredentialType="UserName" algorithmSuite="Default" /> </security> </binding> </basicHttpBinding> </bindings> <client> <endpoint address="http://panrelease01/WCFTopWindowsTest/Service1.svc" binding="basicHttpBinding" bindingConfiguration="basicHttpBindingEndpoint" contract="ServiceReference1.IService1" name="basicHttpBindingEndpoint" behaviorConfiguration="ImpersonationBehaviour" /> </client> <behaviors> <endpointBehaviors> <behavior name="ImpersonationBehaviour"> <clientCredentials> <windows allowedImpersonationLevel="Impersonation"/> </clientCredentials> </behavior> </endpointBehaviors> </behaviors> </system.serviceModel> </configuration> the service for the client (basicHttp service and the client for the netTCP service) <?xml version="1.0" encoding="UTF-8"?> <configuration> <system.web> <compilation debug="true" targetFramework="4.0" /> </system.web> <system.serviceModel> <bindings> <netTcpBinding> <binding name="netTcpBindingEndpoint" closeTimeout="00:01:00" openTimeout="00:01:00" receiveTimeout="00:10:00" sendTimeout="00:01:00" transactionFlow="false" transferMode="Buffered" transactionProtocol="OleTransactions" hostNameComparisonMode="StrongWildcard" listenBacklog="10" maxBufferPoolSize="524288" maxBufferSize="65536" maxConnections="10" maxReceivedMessageSize="65536"> <readerQuotas maxDepth="32" maxStringContentLength="8192" maxArrayLength="16384" maxBytesPerRead="4096" maxNameTableCharCount="16384" /> <reliableSession ordered="true" inactivityTimeout="00:10:00" enabled="false" /> <security mode="Transport"> <transport clientCredentialType="Windows" protectionLevel="EncryptAndSign" /> <message clientCredentialType="Windows" /> </security> </binding> </netTcpBinding> <basicHttpBinding> <binding name="basicHttpWindows"> <security mode="TransportCredentialOnly"> <transport clientCredentialType="Windows"></transport> </security> </binding> </basicHttpBinding> </bindings> <client> <endpoint address="net.tcp://5d2x23j.panint.com/netTCPwindows/Service1.svc" binding="netTcpBinding" bindingConfiguration="netTcpBindingEndpoint" contract="ServiceReference1.IService1" name="netTcpBindingEndpoint" behaviorConfiguration="ImpersonationBehaviour"> <identity> <dns value="localhost" /> </identity> </endpoint> </client> <behaviors> <endpointBehaviors> <behavior name="ImpersonationBehaviour"> <clientCredentials> <windows allowedImpersonationLevel="Impersonation" allowNtlm="true"/> </clientCredentials> </behavior> </endpointBehaviors> <serviceBehaviors> <behavior name="WCFTopWindowsTest.basicHttpWindowsBehaviour"> <!-- To avoid disclosing metadata information, set the value below to false and remove the metadata endpoint above before deployment --> 
<serviceMetadata httpGetEnabled="true" /> <!-- To receive exception details in faults for debugging purposes, set the value below to true. Set to false before deployment to avoid disclosing exception information --> <serviceDebug includeExceptionDetailInFaults="true" /> </behavior> </serviceBehaviors> </behaviors> <services> <service name="WCFTopWindowsTest.Service1" behaviorConfiguration="WCFTopWindowsTest.basicHttpWindowsBehaviour"> <endpoint address="" binding="basicHttpBinding" bindingConfiguration="basicHttpWindows" name ="basicHttpBindingEndpoint" contract ="WCFTopWindowsTest.IService1"> </endpoint> </service> </services> <serviceHostingEnvironment multipleSiteBindingsEnabled="true" /> </system.serviceModel> <system.webServer> <modules runAllManagedModulesForAllRequests="true" /> <directoryBrowse enabled="true" /> </system.webServer> </configuration> then finally the service for the netTCP layer <?xml version="1.0" encoding="UTF-8"?> <configuration> <system.web> <authentication mode="Windows"></authentication> <authorization> <allow roles="*"/> </authorization> <compilation debug="true" targetFramework="4.0" /> <identity impersonate="true" /> </system.web> <system.serviceModel> <bindings> <netTcpBinding> <binding name="netTCPwindows"> <security mode="Transport"> <transport clientCredentialType="Windows"></transport> </security> </binding> </netTcpBinding> </bindings> <services> <service behaviorConfiguration="netTCPwindows.netTCPwindowsBehaviour" name="netTCPwindows.Service1"> <endpoint address="" bindingConfiguration="netTCPwindows" binding="netTcpBinding" name="netTcpBindingEndpoint" contract="netTCPwindows.IService1"> <identity> <dns value="localhost" /> </identity> </endpoint> <endpoint address="mextcp" binding="mexTcpBinding" contract="IMetadataExchange"/> <host> <baseAddresses> <add baseAddress="net.tcp://localhost:8721/test2" /> </baseAddresses> </host> </service> </services> <behaviors> <serviceBehaviors> <behavior name="netTCPwindows.netTCPwindowsBehaviour"> <!-- To avoid disclosing metadata information, set the value below to false and remove the metadata endpoint above before deployment --> <serviceMetadata httpGetEnabled="false" /> <!-- To receive exception details in faults for debugging purposes, set the value below to true. Set to false before deployment to avoid disclosing exception information --> <serviceDebug includeExceptionDetailInFaults="true" /> </behavior> </serviceBehaviors> </behaviors> <serviceHostingEnvironment multipleSiteBindingsEnabled="true" /> </system.serviceModel> <system.webServer> <modules runAllManagedModulesForAllRequests="true" /> <directoryBrowse enabled="true" /> </system.webServer> </configuration>


  • Handling XMLHttpRequest to call external application

    - by Ian
    I need a simple way to use XMLHttpRequest as a way for a web client to access applications in an embedded device. I'm getting confused trying to figure out how to make something thin and light that handles the XMLHttpRequests coming to the web server and can translate them to application calls.

    The situation: the web client, using Ajax (ExtJS specifically), needs to send and receive asynchronously to an existing embedded application. This isn't just to have a thick client/thin server; the client needs to run background checking on the application status. The application can expose a socket interface with a known set of commands, events, and configuration values. Configuration could probably be transmitted as XML, since it comes from a SQLite database. In between the client and the app is a lighttpd web server running something that somehow handles the translation. This something is the problem.

    What I think I want: lighttpd can use FastCGI to route all XMLHttpRequests to an external process. This process will understand HTML/XML and translate between that and the application's language. It will have custom logic to simulate pushing notifications to the client (receive an XMLHttpRequest, don't respond until the next notification is available). C/C++. I'd really like to avoid installing Java/PHP/Perl on an embedded device, so I'll need a more low-level understanding.

    How do I do this? Are there good C++ libraries for interpreting the CGI headers and HTML, so that I don't have to do any syntax processing and can just deal with the request/response contents? Are there any good references for exactly what goes on, server side, when handling the XMLHttpRequest and CGI interfaces? Is there any package that does most of this job already, or will I have to build the non-HTTP/CGI stuff from scratch? Thanks for any help! I am really having trouble learning about the server side of these technologies.


  • What's the relationship between the Intel Atom Developer Program and the MeeGo operating system?

    - by Arne Evertsson
    I'm trying to understand the relationship between the Intel Atom Developer Program (IADP) and the new OS called MeeGo. IADP lets me create applications that run on both MeeGo and Windows devices, as long as the device is based on the Atom processor. IADP apps are published in an app store called AppUp, which is very much like the Apple App Store. The MeeGo operating system merges Intel's Moblin and Nokia's Maemo into one OS. The purpose seems to be to make it possible to develop software that will run on Intel-powered devices, Nokia-made devices, as well as devices from other companies. Nokia has its Ovi Store, which will support MeeGo apps. With its OS-independent runtime, the question is what an IADP app really is. Is an IADP app a beast of its own, or is it just a MeeGo app that has been restricted to run only on Atom-powered devices? Will it be possible to recompile my IADP app to run on all MeeGo devices? Sold in the Ovi Store? Intel and Nokia have me really confused. Where should I go as a developer?


  • XMLStreamReader and a real stream

    - by Yuri Ushakov
    Update: there seems to be no ready XML parser in the Java community which can do both NIO and XML parsing. This is the closest I found, and it's incomplete: http://wiki.fasterxml.com/AaltoHome

    I have the following code:

        InputStream input = ...;
        XMLInputFactory xmlInputFactory = XMLInputFactory.newInstance();
        XMLStreamReader streamReader = xmlInputFactory.createXMLStreamReader(input, "UTF-8");

    The question is, why does the method #createXMLStreamReader() expect to have an entire XML document in the input stream? Why is it called a "stream reader" if it can't seem to process a portion of XML data? For example, if I feed it:

        <root>
            <child>

    it tells me I'm missing the closing tags, even before I begin iterating the stream reader itself. I suspect that I just don't know how to use an XMLStreamReader properly. I should be able to supply it with data in pieces, right? I need this because I'm processing an XML stream coming in from a network socket, and I don't want to load the whole source text into memory. Thank you for help, Yuri.
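
    For what it's worth, a minimal sketch of the standard (blocking) StAX pull loop: the reader pulls bytes from the InputStream on demand, so it never needs the whole document in memory, but each next() may block until the stream delivers more data, and an end-of-stream before the closing tags is reported as a parse error. Truly non-blocking feeding of partial buffers is what Aalto's async API (linked above) aims at.

        import java.io.InputStream;
        import javax.xml.stream.XMLInputFactory;
        import javax.xml.stream.XMLStreamConstants;
        import javax.xml.stream.XMLStreamReader;

        public class StaxPull {
            static void pull(InputStream input) throws Exception {
                XMLStreamReader reader = XMLInputFactory.newInstance()
                        .createXMLStreamReader(input, "UTF-8");
                while (reader.hasNext()) {
                    // Blocks here until the stream has enough bytes for the next event.
                    if (reader.next() == XMLStreamConstants.START_ELEMENT) {
                        System.out.println("element: " + reader.getLocalName());
                    }
                }
            }
        }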


  • Visual Studio 2008 freeze after save

    - by Klay
    I recently added about a dozen classes from another solution into my current solution in Visual Studio. After adding these classes, Visual Studio started freezing for about 10 seconds whenever I save: the cursor disappears, and mouse clicks and keys do nothing. Some interesting points:

    - Even after I removed the classes, the freezing behavior is still there.
    - Freezing occurs whether I've made changes to the code or not.
    - This behavior ONLY seems to affect this particular version of this solution. No other solutions exhibit this behavior. Older versions of this solution are not affected.
    - In Sysinternals Process Explorer, whenever I save in Visual Studio, the I/O bytes graph jumps from 0 to 2MB for about 5 seconds, then drops to about 1MB for a split second, then jumps back to 2MB for another 5 seconds. Processor use goes up to about 3-5% during this time.

    Here are the details of my setup: C# Silverlight project (maybe 20 classes), .NET 3.5 SP1, Visual Studio 2008 v9.0.30729 SP1. EDIT: I edited this question extensively to reflect the more detailed information; I thought this might be preferable to starting a new question.


  • Bugzilla Install question - I'm stuck

    - by Nabeel
    I run Bugzilla's checksetup.pl (migrating an older version), and it always returns:

        Reading ./localconfig...
        Checking for DBD-mysql (v4.00) ok: found v4.005
        Had to create DBD::mysql::dr::imp_data_size unexpectedly at /usr/lib64/perl5/site_perl/5.8.8/x86_64-linux-thread-multi/DBI.pm line 1229, <DATA> line 225.
        Use of uninitialized value in subroutine entry at /usr/lib64/perl5/site_perl/5.8.8/x86_64-linux-thread-multi/DBI.pm line 1229, <DATA> line 225.
        Had to create DBD::mysql::db::imp_data_size unexpectedly at /usr/lib64/perl5/site_perl/5.8.8/x86_64-linux-thread-multi/DBI.pm line 1259, <DATA> line 225.
        Use of uninitialized value in subroutine entry at /usr/lib64/perl5/site_perl/5.8.8/x86_64-linux-thread-multi/DBI.pm line 1259, <DATA> line 225.
        There was an error connecting to MySQL:
        Undefined subroutine &DBD::mysql::db::_login called at /usr/lib64/perl5/site_perl/5.8.8/x86_64-linux-thread-multi/DBD/mysql.pm line 142, <DATA> line 225.

    MySQL version:

        [root@bugzilla-core TMP]# mysql --version
        mysql Ver 14.12 Distrib 5.0.60sp1, for redhat-linux-gnu (x86_64) using readline 5.1

    And mysql_config:

        [root@bugzilla-core TMP]# mysql_config
        Usage: /data01/mysql-5.0.60/bin/mysql_config [OPTIONS]
        Options:
          --cflags         [-I/data01/mysql-5.0.60/include -g]
          --include        [-I/data01/mysql-5.0.60/include]
          --libs           [-rdynamic -L/data01/mysql-5.0.60/lib -lmysqlclient -lz -lcrypt -lnsl -lm -lmygcc]
          --libs_r         [-rdynamic -L/data01/mysql-5.0.60/lib -lmysqlclient_r -lz -lpthread -lcrypt -lnsl -lm -lpthread -lmygcc]
          --socket         [/tmp/mysql.sock]
          --port           [0]
          --version        [5.0.60sp1]
          --libmysqld-libs [-rdynamic -L/data01/mysql-5.0.60/lib -lmysqld -lz -lpthread -lcrypt -lnsl -lm -lpthread -lrt -lmygcc]

    Now, I've tried the latest version of DBD-mysql (4.0.14). I'm completely lost and stumped; I'm not sure where to go from here. Scouring the 'webs hasn't returned anything fruitful. Any ideas?


  • XSLT fails to load huge XML doc after matching certain elements

    - by krisvandenbergh
    I'm trying to match certain elements using XSLT. My input document is very large, and the source XML fails to load after processing the following code (consider especially the first line):

        <xsl:template match="XMI/XMI.content/Model_Management.Model/Foundation.Core.Namespace.ownedElement/Model_Management.Package/Foundation.Core.Namespace.ownedElement">
          <rdf:RDF>
            <rdf:Description rdf:about="">
              <xsl:for-each select="Foundation.Core.Class">
                <xsl:for-each select="Foundation.Core.ModelElement.name">
                  <owl:Class rdf:ID="@Foundation.Core.ModelElement.name" />
                </xsl:for-each>
              </xsl:for-each>
            </rdf:Description>
          </rdf:RDF>
        </xsl:template>

    Apparently the XSLT fails to load after "Model_Management.Model". The PHP code is as follows:

        if ($xml->loadXML($source_xml) == false) {
            die('Failed to load source XML: ' . $http_file);
        }

    It then fails to perform loadXML and immediately dies. I think there are two options now: 1) set a maximum execution time - frankly, I don't know how to do this for the built-in PHP 5 XSLT processor; or 2) think about another way to match. What would be the best way to deal with this? The input document can be found at http://krisvandenbergh.be/uml_pricing.xml Any help would be appreciated! Thanks.


  • Password Cracking in 2010 and Beyond

    - by mttr
    I have looked a bit into cryptography and related matters during the last couple of days and am pretty confused by now. I have a question about password strength and am hoping that someone can clear up my confusion by sharing how they think through the following questions. I am becoming obsessed with these things, but need to spend my time otherwise :-)

    Let's assume we have an eight-character password that consists of upper- and lower-case alphabetic characters, numbers, and common symbols. That is an alphabet of about 96 characters, so we have 96^8 ~= 7.2 quadrillion different possible passwords. As I understand it, there are at least two approaches to breaking this password.

    One is to try a brute-force attack where we try to guess each possible combination of characters. How many passwords can modern processors (in 2010, a Core i7 Extreme, for example) guess per second (how many instructions does a single password guess take, and why)? My guess would be that it takes a modern processor on the order of years to break such a password. The arithmetic is sketched below.

    Another approach would consist of obtaining a hash of my password as stored by operating systems and then searching for collisions. Depending on the type of hash used, we might get the password a lot quicker than by the brute-force attack. A number of questions about this:

    - Is the assertion in the above sentence correct?
    - How do I think about the time it takes to find collisions for MD4, MD5, etc. hashes?
    - Where does my Snow Leopard store my password hash, and what hashing algorithm does it use?

    And finally, regardless of the strength of file encryption using AES-128/256, the weak link is still the en/decryption password used. Even if breaking the ciphered text would take longer than the lifetime of the universe, a brute-force attack on my de/encryption password (guess a password, then try to decrypt the file, try the next password...) might succeed a lot earlier than the end of the universe. Is that correct?

    I would be very grateful if people could have mercy on me and help me think through these probably simple questions, so that I can get back to work.
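
    A back-of-envelope sketch of that keyspace arithmetic (in Java; the guess rates are illustrative assumptions, not benchmarks):

        import java.math.BigInteger;

        public class Keyspace {
            public static void main(String[] args) {
                // 96 symbols, 8 positions: 96^8 ~ 7.2e15 candidate passwords.
                BigInteger space = BigInteger.valueOf(96).pow(8);
                System.out.println("keyspace: " + space);
                long[] guessesPerSecond = { 1000000L, 1000000000L }; // 10^6, 10^9
                for (long rate : guessesPerSecond) {
                    BigInteger seconds = space.divide(BigInteger.valueOf(rate));
                    BigInteger days = seconds.divide(BigInteger.valueOf(86400L));
                    System.out.println(rate + "/s -> about " + days + " days");
                }
            }
        }

    At a million guesses per second, exhausting the space takes on the order of centuries; at a billion per second, on the order of months. So whether "years" is the right order of magnitude hinges entirely on the assumed guess rate.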


  • memory alignment within gcc structs

    - by Mumbles
    I am porting an application to an ARM platform in C; the application also runs on an x86 processor and must be backward compatible. I am now having some issues with variable alignment. I have read the gcc manual for __attribute__((aligned(4),packed)). I interpret what is said as: the start of the struct is aligned to the 4-byte boundary, and the inside remains untouched because of the packed statement. Originally I had this, but occasionally it gets placed unaligned with the 4-byte boundary:

        typedef struct
        {
            unsigned int code;
            unsigned int length;
            unsigned int seq;
            unsigned int request;
            unsigned char nonce[16];
            unsigned short crc;
        } __attribute__((packed)) CHALLENGE;

    So I changed it to this:

        typedef struct
        {
            unsigned int code;
            unsigned int length;
            unsigned int seq;
            unsigned int request;
            unsigned char nonce[16];
            unsigned short crc;
        } __attribute__((aligned(4),packed)) CHALLENGE;

    The understanding I stated earlier seems to be incorrect: the struct is now aligned to a 4-byte boundary and the inside data is aligned to a 4-byte boundary, but because of the endianness, the size of the struct has increased from 42 to 44 bytes. This size is critical, as we have other applications that depend on the struct being 42 bytes. Could someone describe to me how to perform the operation that I require? Any help is much appreciated.


  • How to distinguish between two different UDP clients on the same IP address?

    - by Ricket
    I'm writing a UDP server, which is a first for me; I've only done a bit of TCP communication. And I'm having trouble figuring out exactly how to distinguish which user is which, since UDP deals only with packets rather than connections, and I therefore cannot tell exactly who I'm communicating with. Here is pseudocode of my current server loop:

        DatagramPacket p;
        socket.receive(p);
        // now p contains the user's IP and port, and the data
        int key = getKey(p);
        if (key == 0) { // connection request
            key = makeKey(p);
            clients.add(key, p.ip);
            send(p.ip, p.port, key); // give the user his key
        } else { // user has a key
            // verify the key belongs to that IP address
            // look up the user's session data based on the key
            // react to the packet in the context of the session
        }

    When designing this, I kept these points in mind:

    - Multiple users may exist on the same IP address, due to the presence of routers, so users must have a separate identification key.
    - Packets can be spoofed, so the key should be checked against its original IP address and ignored if a different IP tries to use the key.
    - The outbound port on the client side might change among packets.

    Is that third assumption correct, or can I simply assume that one user = one IP+port combination (see the sketch below for that variant)? Is this commonly done, or should I continue to create a special key like I am currently doing? I'm not completely clear on how TCP negotiates a connection, so if you think I should model it on TCP, then please link me to a good tutorial or something on TCP's SYN/SYNACK/ACK mess. Also note, I do have a provision to resend a key if an IP sends a 0 and that IP already has a pending key; I omitted it to keep the snippet simple. I understand that UDP is not guaranteed to arrive, and I plan to add reliability to the main packet-handling code later as well.
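
    For comparison, a minimal sketch (Java) of the one user = one IP+port approach: the datagram's source address doubles as the session key, which holds as long as each client keeps using a single socket. Session here is a hypothetical per-client state holder, not from the post.

        import java.net.DatagramPacket;
        import java.net.DatagramSocket;
        import java.net.InetSocketAddress;
        import java.util.HashMap;
        import java.util.Map;

        public class UdpSessionServer {
            private final Map<InetSocketAddress, Session> sessions =
                    new HashMap<InetSocketAddress, Session>();

            public void serve(DatagramSocket socket) throws Exception {
                byte[] buf = new byte[1500];
                while (true) {
                    DatagramPacket p = new DatagramPacket(buf, buf.length);
                    socket.receive(p);
                    // Source IP+port identifies the sender (breaks if a NAT
                    // remaps the client's port mid-session).
                    InetSocketAddress from = (InetSocketAddress) p.getSocketAddress();
                    Session s = sessions.get(from);
                    if (s == null) { // first packet from this IP+port
                        s = new Session();
                        sessions.put(from, s);
                    }
                    s.handle(p.getData(), p.getLength());
                }
            }

            static class Session {
                void handle(byte[] data, int length) {
                    // per-client state and packet handling goes here
                }
            }
        }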


  • How should I read from a buffered reader?

    - by Roman
    I have the following example of reading from a buffered reader:

        while ((inputLine = input.readLine()) != null) {
            System.out.println("I got a message from a client: " + inputLine);
        }

    The code in the loop (the println) will be executed whenever something appears in the buffered reader (input in this case). In my case, if a client application writes something to the socket, the code in the loop (in the server application) will be executed. But I do not understand how it works: inputLine = input.readLine() waits until something appears in the buffered reader, and when something appears there, the loop condition holds and the code in the loop is executed. But when can null be returned?

    There is another question. The above code was taken from a method which throws Exception, and I use this code in the run method of a Thread. When I try to put throws Exception before the run, the compiler complains: overridden method does not throw exception. Without the throws exception, I have another complaint from the compiler: unreported exception. So, what can I do?
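
    A minimal sketch of the usual resolution (assuming input is a BufferedReader field of the Runnable, and java.io.IOException is imported): readLine() returns null at end of stream - i.e., when the client closes the socket - and since run() cannot declare checked exceptions, the loop is wrapped in try/catch inside run():

        public void run() {
            try {
                String inputLine;
                while ((inputLine = input.readLine()) != null) {
                    System.out.println("I got a message from a client: " + inputLine);
                }
                // readLine() returned null: the client closed the connection.
            } catch (IOException e) {
                e.printStackTrace(); // can't rethrow a checked exception from run()
            }
        }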


  • How can I get bitfields to arrange my bits in the right order?

    - by Jim Hunziker
    To begin with, the application in question is always going to be on the same processor, and the compiler is always gcc, so I'm not concerned about bitfields not being portable. gcc lays out bitfields such that the first listed field corresponds to the least significant bit of a byte. So with the following structure, with a=0, b=1, c=1, d=1, you get a byte of value e0:

        struct Bits {
            unsigned int a:5;
            unsigned int b:1;
            unsigned int c:1;
            unsigned int d:1;
        } __attribute__((__packed__));

    (Actually, this is C++, so I'm talking about g++.) Now let's say I'd like a to be a six-bit integer. Now, I can see why this won't work, but I coded the following structure:

        struct Bits2 {
            unsigned int a:6;
            unsigned int b:1;
            unsigned int c:1;
            unsigned int d:1;
        } __attribute__((__packed__));

    Setting b, c, and d to 1, and a to 0, results in the following two bytes:

        c0 01

    This isn't what I wanted. I was hoping to see this:

        e0 00

    Is there any way to specify a structure that has three bits in the most significant bits of the first byte and six bits spanning the five least significant bits of the first byte and the most significant bit of the second? Please be aware that I have no control over where these bits are supposed to be laid out: it's a layout of bits that is defined by someone else's interface.


  • Trouble using the ref keyword. Very newbie question!

    - by Sergio Tapia
    Here's my class:

        public class UserInformation
        {
            public string Username { get; set; }
            public string ComputerName { get; set; }
            public string Workgroup { get; set; }
            public string OperatingSystem { get; set; }
            public string Processor { get; set; }
            public string RAM { get; set; }
            public string IPAddress { get; set; }

            public UserInformation GetUserInformation()
            {
                var CompleteInformation = new UserInformation();
                GetPersonalDetails(ref CompleteInformation);
                GetMachineDetails(ref CompleteInformation);
                return CompleteInformation;
            }

            private void GetPersonalDetails(ref UserInformation CompleteInformation) { }

            private void GetMachineDetails(ref UserInformation CompleteInformation) { }
        }

    I'm under the impression that the ref keyword tells the compiler to use the same variable and not create a new one. Am I using it correctly? Do I have to use ref on both the calling code line and the actual method implementation?


  • How to do these things in Visio 2007?

    - by user285825
    I am very annoyed with this software. I am unable to do many things with it. In the book 'UML Distilled' many features of UML are discussed which I am not sure how to accomplish with Visio 2007. For instance:

    1) I can't find the unary association. In the UML static structure under the shapes panel on the left, there are a lot of components like package, class, blah, blah, binary association, blah, blah, association class. But where is the unary association?

    2) For a sequence diagram, I created a message (any of sync, async, call-type message, ordinary message), then tried to incorporate some parameter information. I went to the properties. There was a category called arguments. Selecting it shows a table where arguments are supposed to be shown, but all are disabled.

    3) For a sequence diagram, there is no delete component (the big X) in the UML Sequence shapes.

    4) For a class diagram, there is supposed to be a comment compartment where comments, like mentioning the responsibilities, are allowed using a comment prefixed with "--". But I am not sure how to accomplish this.

    5) There is also supposed to be a way to indicate static properties of a class, but I am not sure how to do that in Visio.

    6) There is supposed to be a stereotype for class <. But in the stereotype drop-down there is no such stereotype.

    7) Where is the ball-and-socket component?


  • Primefaces push encoding

    - by kalaoke
    I use PrimeFaces push to write a message in a p:notificationBar. If I push a message with special characters (like Russian characters), I get '?' in my message. How can I fix this problem? Thanks for your help. (Config: PrimeFaces 3.4 and JSF 2; all my HTML pages are UTF-8 encoded.) Here is my view code:

        <p:notificationBar id="bar" widgetVar="pushNotifBar" position="bottom"
                           style="z-index: 3;border: 8px outset #AB1700;width: 97%;background: none repeat scroll 0 0 #CA837D;">
            <h:form prependId="false">
                <p:commandButton icon="ui-icon-close" title="#{messages['gen.close']}"
                                 styleClass="ui-notificationbar-close" type="button"
                                 onclick="pushNotifBar.hide();"/>
            </h:form>
            <h:panelGrid columns="1" style="width: 100%; text-align: center;">
                <h:outputText id="pushNotifSummary" value="#{growlBean.summary}" style="font-size:36px;text-align:center;"/>
                <h:outputText id="pushNotifDetail" value="#{growlBean.detail}" style="font-size: 20px; float: left;" />
            </h:panelGrid>
        </p:notificationBar>
        <p:socket onMessage="handleMessage" channel="/notifications"/>

        <script type="text/javascript">
            function handleMessage(data) {
                var substr = data.split(' %% ');
                $('#pushNotifSummary').html(substr[0]);
                $('#pushNotifDetail').html(substr[1]);
                pushNotifBar.show();
            }
        </script>

    and my bean code (the closing brace was missing in the original snippet):

        public void send() {
            PushContext pushContext = PushContextFactory.getDefault().getPushContext();
            String var = summary + " %% " + detail;
            pushContext.push("/notifications", var);
        }
