Search Results

Search found 5662 results on 227 pages for 'processor socket'.

Page 199/227

  • Simple, fast SQL queries for flat files.

    - by plinehan
    Does anyone know of any tools to provide simple, fast queries of flat files using a SQL-like declarative query language? I'd rather not pay the overhead of loading the file into a DB, since the input data is typically thrown out almost immediately after the query is run. Consider the data file, "animals.txt":

        dog 15
        cat 20
        dog 10
        cat 30
        dog 5
        cat 40

    Suppose I want to extract the highest value for each unique animal. I would like to write something like:

        cat animals.txt | foo "select $1, max(convert($2 using decimal)) group by $1"

    I can get nearly the same result using sort:

        cat animals.txt | sort -t " " -k1,1 -k2,2nr

    And I can always drop into awk from there, but this all feels a bit awkward (couldn't resist) when a SQL-like language would seem to solve the problem so cleanly. I've considered writing a wrapper for SQLite that would automatically create a table based on the input data, and I've looked into using Hive in single-processor mode, but I can't help but feel this problem has been solved before. Am I missing something? Is this functionality already implemented by another standard tool? Halp!
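
    For what it's worth, the sort-plus-awk route can be collapsed into a single awk pass, and sqlite3 itself can act as the SQLite wrapper described above. Both are sketches, assuming a stock gawk/sqlite3 install and the two-column format shown:

        awk '{ if (!($1 in max) || $2 > max[$1]) max[$1] = $2 }
             END { for (a in max) print a, max[a] }' animals.txt

        sqlite3 <<'EOF'
        CREATE TABLE animals (name TEXT, value INTEGER);
        .separator " "
        .import animals.txt animals
        SELECT name, MAX(value) FROM animals GROUP BY name;
        EOF

    The sqlite3 variant uses a transient in-memory database, so nothing has to be cleaned up after the query runs.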


  • Unable to load huge XML document (perhaps incorrectly supposing it's due to the XSLT processing)

    - by krisvandenbergh
    I'm trying to match certain elements using XSLT. My input document is very large, and the source XML fails to load after processing the following code (consider especially the first line).

        <xsl:template match="XMI/XMI.content/Model_Management.Model/Foundation.Core.Namespace.ownedElement/Model_Management.Package/Foundation.Core.Namespace.ownedElement">
          <rdf:RDF>
            <rdf:Description rdf:about="">
              <xsl:for-each select="Foundation.Core.Class">
                <xsl:for-each select="Foundation.Core.ModelElement.name">
                  <owl:Class rdf:ID="@Foundation.Core.ModelElement.name" />
                </xsl:for-each>
              </xsl:for-each>
            </rdf:Description>
          </rdf:RDF>
        </xsl:template>

    Apparently the XSLT fails to load after "Model_Management.Model". The PHP code is as follows:

        if ($xml->loadXML($source_xml) == false) {
            die('Failed to load source XML: ' . $http_file);
        }

    It then fails to perform loadXML and immediately dies. I think there are two options now. 1) Set a maximum execution time. Frankly, I don't know how to do this for the built-in PHP 5 XSLT processor. 2) Think about another way to match. What would be the best way to deal with this? The input document can be found at http://krisvandenbergh.be/uml_pricing.xml Any help would be appreciated! Thanks.
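
    On the first option: the time limit is a PHP setting rather than an XSLT-processor one, and very large documents can also trip libxml's built-in size limits before the transform even starts. A hedged sketch (LIBXML_PARSEHUGE needs PHP 5.3+ with libxml 2.7+, so it may not apply to this exact setup):

        // lift PHP's script time limit for this request
        set_time_limit(0);

        // ask libxml to accept very large documents, then load as before
        if ($xml->loadXML($source_xml, LIBXML_PARSEHUGE) === false) {
            die('Failed to load source XML: ' . $http_file);
        }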


  • parameterized doctype in xsl:stylesheet

    - by Flavius
    How could I inject a --stringparam (xsltproc) into the DOCTYPE of an XSL stylesheet? The --stringparam is specified from the command line. I have several books in docbook5 format that I want to process with the same customization layer, each book having a unique identifier, here "demo", so I'm running something like

        xsltproc --stringparam course.name demo ...

    for each book. Obviously the parameter is not recognized as such, but as verbatim text, giving the error:

        warning: failed to load external entity "http://edu.yet-another-project.com/course/$(course.name)/entities.ent"

    Here is what I've tried, which doesn't work:

        <?xml version='1.0'?>
        <!DOCTYPE stylesheet [
          <!ENTITY % myent SYSTEM "http://edu.yet-another-project.com/course/$(course.name)/entities.ent">
          %myent;
        ]>
        <xsl:stylesheet xmlns:xsl="http://www.w3.org/1999/XSL/Transform" version="1.0">
          <!-- the docbook template used -->
          <xsl:import href="http://docbook.org/ns/docbook/xhtml/chunk.xsl"/>
          <!-- processor parameters -->
          <xsl:param name="html.stylesheet">default.css</xsl:param>
          <xsl:param name="use.id.as.filename">1</xsl:param>
          <xsl:param name="chunker.output.encoding">UTF-8</xsl:param>
          <xsl:param name="chunker.output.indent">yes</xsl:param>
          <xsl:param name="navig.graphics">1</xsl:param>
          <xsl:param name="generate.revhistory.link">1</xsl:param>
          <xsl:param name="admon.graphics">1</xsl:param>
          <!-- here more stuff -->
        </xsl:stylesheet>

    Ideas?
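
    One caveat worth spelling out: the entity reference in the DOCTYPE is resolved by the XML parser before xsltproc ever looks at --stringparam, so a stylesheet parameter can never reach it. A common workaround (a sketch only, with hypothetical file names) is to treat the stylesheet as a template and substitute the course identifier just before invoking xsltproc:

        sed "s/@COURSE@/demo/g" customization.xsl.in > customization.xsl
        xsltproc customization.xsl book.xml

    The same substitution step fits naturally into a Makefile rule that runs once per book.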


  • ARMv6 FIQ, acknowledge interrupt

    - by fastmonkeywheels
    I'm working with an i.MX35 ARMv6 core processor. I have Interrupt 62 configured as a FIQ with my handler installed and being called. My handler at the moment just toggles an output pin so I can test latency with a scope. With the code below, once I trigger the FIQ it continues forever as fast as it can, apparently not being acknowledged. I'm triggering the FIQ by means of the Interrupt Force Register so I'm assured that the source isn't triggering it this fast. If I disable Interrupt 62 in the AVIC in my FIQ routine the interrupt only triggers once. I have read the sections on the VIC Port in the ARM1136JF-S and ARM1136J-S Technical Reference Manual and it covers proper exit procedure. I'm only having one FIQ handler so I have no need to branch. The line that I don't understand is:

        STR R0, [R8,#AckFinished]

    I'm not sure what AckFinished is supposed to be or what this command is supposed to do. My FIQ handler is below:

        ldr r9, IOMUX_ADDR12
        ldr r8, [r9]
        orr r8, #0x08        @ top LED
        str r8, [r9]         @ turn on LED
        bic r8, #0x08        @ top LED
        str r8, [r9]         @ turn off LED
        subs pc, r14, #4

        IOMUX_ADDR12:
            .word 0xFC2A4000 @ remapped IOMUX addr

    My handler returns just fine and normal system operation resumes if I disable it after the first go, otherwise it triggers constantly and the system appears to hang. Do you think my assumption is right that the core isn't acknowledging the AVIC or could there be another cause of this FIQ triggering?


  • What vertex shader code should be used for a pixel shader used for simple 2D SpriteBatch drawing in XNA?

    - by Michael
    Preface

    First of all, why is a vertex shader required for a SilverlightEffect (.slfx file) in Silverlight 5? I'm trying to port a simple 2D XNA game to Silverlight 5 RC, and I would like to use a basic pixel shader. This shader works great in XNA for Windows and Xbox, but I can't get it to compile with Silverlight as a SilverlightEffect. The MS blog for the Silverlight Toolkit says that "there is no difference between .slfx and .fx", but apparently this isn't quite true -- or at least SpriteBatch is working some magic for us in "regular XNA", and it isn't in "Silverlight XNA". If I try to directly copy my pixel shader file into a Silverlight project (and change it to the supported "Effect - Silverlight" importer/processor), when I try to compile I see the following error message:

        Invalid effect file. Unable to find vertex shader in pass "P0"

    Indeed, there isn't a vertex shader in my pixel shader file. I haven't needed one with my other 2D XNA apps since I'm just doing basic SpriteBatch drawing. I tried adding a vertex shader to my shader file, using Remi Gillig's comment on this Shawn Hargreaves blog post for guidance, but it doesn't quite work. The shader file successfully compiles, and I see some semblance of my game on screen, but it's tiny, twisted, repeated, and all jumbled up. So clearly something's not quite right.

    The Real Question

    So that brings me to my real question: Since a vertex shader is required, is there a basic vertex shader function that works for simple 2D SpriteBatch drawing? And if the vertex shader requires world/view/projection matrices as parameters, what values am I supposed to use for a 2D game? Can any shader pros help? Thanks!
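
    For reference, the vertex shader SpriteBatch supplies internally boils down to a single transform. A sketch along the lines of the SpriteEffect shown on Shawn Hargreaves' blog (names are illustrative, not the exact Silverlight 5 toolkit API):

        float4x4 MatrixTransform;

        void SpriteVertexShader(inout float4 color    : COLOR0,
                                inout float2 texCoord : TEXCOORD0,
                                inout float4 position : POSITION0)
        {
            // SpriteBatch vertices are already in screen space,
            // so one projection matrix is the only transform needed
            position = mul(position, MatrixTransform);
        }

    For a 2D game the world and view matrices can stay identity; the projection is typically an off-center orthographic matrix built from the viewport, e.g. Matrix.CreateOrthographicOffCenter(0, viewport.Width, viewport.Height, 0, 0, 1), usually combined with a half-pixel offset on DirectX 9 class hardware.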


  • Locate modules by stack address

    - by PaulH
    I have a Windows Mobile 6.1 application running on an ARMV4I processor. Given a stack address (from unwinding an exception), I'd like to determine which module owns that address. Using the ToolHelp API, I'm able to determine most modules using the following method:

        HANDLE snapshot = ::CreateToolhelp32Snapshot( TH32CS_SNAPMODULE | TH32CS_GETALLMODS, 0 );
        if( INVALID_HANDLE_VALUE != snapshot )
        {
            MODULEENTRY32 mod = { 0 };
            mod.dwSize = sizeof( mod );
            if( ::Module32First( snapshot, &mod ) )
            {
                do
                {
                    if( stack_address > (DWORD)mod.modBaseAddr &&
                        stack_address < (DWORD)( mod.modBaseAddr + mod.modBaseSize ) )
                    {
                        // Found the module!
                        // offset = stack_address - mod.modBaseAddr
                        break;
                    }
                } while( ::Module32Next( snapshot, &mod ) );
            }
            ::CloseToolhelp32Snapshot( snapshot );
        }

    But, I don't always seem to be able to find a module that matches an address. For example:

        stack address   module        offset
        0x03f65bd8      coredll.dll + 0x0001bbd8
        0x785cab1c      mylib.dll   + 0x0002ab1c
        0x785ca9e8      mylib.dll   + 0x0002a9e8
        0x785ca0a0      mylib.dll   + 0x0002a0a0
        0x785c8144      mylib.dll   + 0x00028144
        0x3001d95c      ???
        0x3001dd44      ???
        0x3001db90      ???
        0x03f88030      coredll.dll + 0x0003e030
        0x03f8e46c      coredll.dll + 0x0004446c
        0x801087c4      ???
        0x801367b4      ???
        0x8010ce78      ???
        0x801086dc      ???
        0x03f8e588      coredll.dll + 0x00044588
        0x785a56a4      mylib.dll   + 0x000056a4
        0x785bdd60      mylib.dll   + 0x0001dd60
        0x785bbd0c      mylib.dll   + 0x0001bd0c
        0x785bdb38      mylib.dll   + 0x0001db38
        0x3001db20      ???
        0x3001dc40      ???
        0x3001a8a4      ???
        0x3001a79c      ???
        0x03f67348      coredll.dll + 0x0001d348

    Where do I find those stack addresses that are missing? Any suggestions? Thanks, PaulH


  • Non-blocking TCP buffer issues.

    - by Poni
    Hi! I think I'm in a problem. I have two TCP apps connected to each other which use winsock I/O completion ports to send/receive data (non-blocking sockets). Everything works just fine until there's a data transfer burst. The sender starts sending incorrect/malformed data. I allocate the buffers I'm sending on the stack, and if I understand correctly, that's the wrong thing to do, because these buffers should remain as I sent them until I get the "write complete" notification from IOCP. Take this for example:

        void some_function()
        {
            char cBuff[1024];

            // filling cBuff with some data
            WSASend(...); // sending cBuff, non-blocking mode

            // filling cBuff with other data
            WSASend(...); // again, sending cBuff

            // ..... and so forth!
        }

    If I understand correctly, each of these WSASend() calls should have its own unique buffer, and that buffer can be reused only when the send completes. Correct? Now, what strategies can I implement in order to maintain a big sack of such buffers, how should I handle them, how can I avoid a performance penalty, etc.? And, if I am to use buffers, that means I should copy the data to be sent from the source buffer to the temporary one, thus I'd set SO_SNDBUF on each socket to zero, so the system will not re-copy what I already copied. Are you with me? Please let me know if I wasn't clear.
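
    The usual IOCP pattern is indeed one heap-allocated buffer per outstanding WSASend, carried alongside the OVERLAPPED so it can be found and released on completion. A minimal sketch (error handling and pooling omitted, names are illustrative):

        struct SendContext {
            WSAOVERLAPPED overlapped;   // first member, so the completion key pointer casts back cleanly
            WSABUF        wsaBuf;
            char          data[1024];
        };

        void queue_send(SOCKET s, const char* src, size_t len)
        {
            SendContext* ctx = new SendContext();
            ZeroMemory(&ctx->overlapped, sizeof(ctx->overlapped));
            memcpy(ctx->data, src, len);
            ctx->wsaBuf.buf = ctx->data;
            ctx->wsaBuf.len = static_cast<ULONG>(len);
            WSASend(s, &ctx->wsaBuf, 1, NULL, 0, &ctx->overlapped, NULL);
            // delete ctx only when GetQueuedCompletionStatus reports this overlapped as complete
        }

    A free list of such contexts avoids the per-send allocation, which usually keeps the extra copy cheap whether or not SO_SNDBUF is set to zero.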


  • How do I fix my "Stream closed" error in spring-ws?

    - by mcherm
    I have working code using the spring-ws library to respond to soap requests. I moved this code to a different project (I'm merging projects) and now it is failing. I would like to figure out the reason for the failure. The symptom I get is this: when the HTTP request arrives, spring begins handling the call. Then I get the following exception:

        org.springframework.ws.soap.saaj.SaajSoapEnvelopeException: Could not access envelope: java.io.IOException: Stream closed; nested exception is javax.xml.soap.SOAPException: java.io.IOException: Stream closed
            at org.springframework.ws.soap.saaj.SaajSoapMessage.getEnvelope(SaajSoapMessage.java:109)
            <<more lines that don't matter>>
        Caused by: java.io.IOException: Stream closed
            at java.io.PushbackInputStream.ensureOpen(PushbackInputStream.java:57)
            at java.io.PushbackInputStream.read(PushbackInputStream.java:116)
            at org.apache.xerces.impl.XMLEntityManager$RewindableInputStream.read(Unknown Source)
            at org.apache.xerces.impl.XMLEntityManager.setupCurrentEntity(Unknown Source)
            at org.apache.xerces.impl.XMLVersionDetector.determineDocVersion(Unknown Source)
            at org.apache.xerces.parsers.XML11Configuration.parse(Unknown Source)
            at org.apache.xerces.parsers.XML11Configuration.parse(Unknown Source)
            at org.apache.xerces.parsers.XMLParser.parse(Unknown Source)
            at org.apache.xerces.parsers.AbstractSAXParser.parse(Unknown Source)
            at org.apache.xerces.jaxp.SAXParserImpl$JAXPSAXParser.parse(Unknown Source)
            at org.apache.xerces.jaxp.SAXParserImpl.parse(Unknown Source)
            at org.apache.axis.encoding.DeserializationContext.parse(DeserializationContext.java:227)
            at org.apache.axis.SOAPPart.getAsSOAPEnvelope(SOAPPart.java:696)
            ... 30 more

    Examining it in a debugger, it appears that spring successfully handles HTTP headers, but then when it begins to process the contents of the SOAP message itself, it chokes when reading the very first character of the body. Some googling for the error message suggests that the problem is that a PushbackInputStream, which is apparently used for reading from the socket, is read twice or perhaps has close() called and then is read afterward. It is happening inside of spring-ws, not my code, and since it worked fine before I moved the code to a new project it must have something to do with versions of spring, or something it uses like axis or xerces. But I can't find any differences in versions of these! Has anyone encountered this error before? Or do you have any suggestions of approaches I could take in troubleshooting this?


  • Automatically release resources RAII-style in Perl

    - by Philip Potter
    Say I have a resource (e.g. a filehandle or network socket) which has to be freed:

        open my $fh, "<", "filename" or die "Couldn't open filename: $!";
        process($fh);
        close $fh or die "Couldn't close filename: $!";

    Suppose that process might die. Then the code block exits early, and $fh doesn't get closed. I could explicitly check for errors:

        open my $fh, "<", "filename" or die "Couldn't open filename: $!";
        eval {process($fh)};
        my $saved_error = $@;
        close $fh or die "Couldn't close filename: $!";
        die $saved_error if $saved_error;

    but this kind of code is notoriously difficult to get right, and only gets more complicated when you add more resources. In C++ I would use RAII to create an object which owns the resource, and whose destructor would free it. That way, I don't have to remember to free the resource, and resource cleanup happens correctly as soon as the RAII object goes out of scope - even if an exception is thrown. Unfortunately in Perl a DESTROY method is unsuitable for this purpose as there are no guarantees for when it will be called. Is there a Perlish way to ensure resources are automatically freed like this even in the presence of exceptions? Or is explicit error checking the only option?
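
    One idiom that gets close to RAII here is a guard object whose cleanup code runs at scope exit. A sketch assuming Scope::Guard from CPAN (a hand-rolled object with a DESTROY that closes the handle behaves the same way outside of global destruction, since Perl's reference counting destroys it deterministically at scope exit):

        use Scope::Guard;

        open my $fh, "<", "filename" or die "Couldn't open filename: $!";
        my $guard = Scope::Guard->new(sub {
            close $fh or warn "Couldn't close filename: $!";
        });
        process($fh);
        # the guard's callback runs when $guard goes out of scope,
        # even if process() dies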


  • java BufferedReader specific length returns NUL characters

    - by Bastien
    I have a TCP socket client receiving messages (data) from a server. Messages are of the form length (2 bytes) + data (length bytes), delimited by STX & ETX characters. I'm using a BufferedReader to retrieve the first two bytes, decode the length, then read again from the same BufferedReader the appropriate length and put the result in a char array. Most of the time I have no problem, but SOMETIMES (1 out of thousands of messages received), when attempting to read (length) bytes from the reader, I get only part of it, the rest of my array being filled with "NUL" characters. I imagine it's because the buffer has not yet been filled.

        char[] bufLen = new char[2];
        _bufferedReader.read(bufLen);
        int len = decodeLength(bufLen);
        char[] _rawMsg = new char[len];
        _bufferedReader.read(_rawMsg);
        return _rawMsg;

    I solved the problem in several iterative ways. First I tested the last char of my array: if it wasn't ETX I would read chars from the BufferedReader one by one until I reached ETX, then start over my regular routine; the consequence is that I would basically DROP one message. Then, in order to still retrieve that message, I would find the first occurrence of the NUL char in my "truncated" message, read and store additional characters one at a time until I reached ETX, and append them to my "truncated" message, confirming the length is ok. It also works, but I'm really thinking there's something I could do better, like checking if the total number of characters I need are available in the buffer before reading it, but I can't find the right way to do it... any idea / pointer? Thanks!
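
    The underlying issue is that Reader.read(char[]) is allowed to return fewer characters than requested; the usual fix is to loop until the full length has arrived. A minimal sketch of that loop:

        char[] rawMsg = new char[len];
        int off = 0;
        while (off < len) {
            int n = _bufferedReader.read(rawMsg, off, len - off);
            if (n == -1) {
                // stream ended before the full message arrived
                throw new java.io.EOFException("stream closed mid-message");
            }
            off += n;   // keep reading until the whole message is in the array
        }
        return rawMsg;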


  • Floating point precision in Visual C++

    - by Luigi Giaccari
    Hi, I am trying to use the robust predicates for computational geometry from Jonathan Richard Shewchuk. I am not a programmer, so I am not even sure of what I am saying; I may be making some basic mistake. The point is that the predicates should allow for precise arithmetic with adaptive floating point precision. On my computer, an Asus Pro31/S (Core Duo Centrino processor), they do not work. The problem may lie in the fact that my computer may use some floating point precision features that conflict with the ones used by Shewchuk. The author says:

        /* On some machines, the exact arithmetic routines might be defeated by the */
        /* use of internal extended precision floating-point registers. Sometimes   */
        /* this problem can be fixed by defining certain values to be volatile,     */
        /* thus forcing them to be stored to memory and rounded off. This isn't     */
        /* a great solution, though, as it slows the arithmetic down.               */

    Now what I would like to know is whether there is a way, maybe some compiler option, to turn off the internal extended precision floating-point registers. I really appreciate your help.
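
    For what it's worth, on 32-bit Visual C++ the usual remedies are either compiling with /arch:SSE2 (plus /fp:strict), so the x87 extended-precision registers are not used at all, or forcing the x87 control word down to 53-bit double precision at startup. A hedged sketch of the latter:

        #include <float.h>

        int main()
        {
            unsigned int cw;
            // round intermediate x87 results to 53-bit double precision;
            // this has no effect on x64 builds, which use SSE2 arithmetic anyway
            _controlfp_s(&cw, _PC_53, _MCW_PC);

            // ... call Shewchuk's exactinit() and then the predicates here ...
            return 0;
        }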


  • image analysis and 64bit OS

    - by picciopiccio
    I developed a C# application that makes use of the Cognex vision library (VPro). My application is developed with Visual Studio 2008 Pro on a 32-bit Windows PC with 3GB of RAM. During the startup of the application I see that a large amount of memory is allocated. So far so good, but when I add more and more vision operations, the memory allocation increases and part of the application (only the Cognex OCX) stops working well. The rest of the application still works (worker threads, comm on sockets...). I did whatever I could to save memory, but when the memory allocated is about 700MB I begin to have problems. A note in the documentation of the Cognex library says that /LARGEADDRESSAWARE is not supported. Anyway, I'm thinking of trying to migrate my app to Win64, but what do I have to do? Can I simply use a 64-bit processor and 64-bit Windows without recompiling my application (which would remain a 32-bit application) and still take advantage of 64 bits? Or should I recompile my application? If I don't need to recompile my application, can I link it with the 64-bit Cognex library? If I have to recompile my application, is it possible to cross-compile it so that my development suite stays on a 32-bit PC? Any help will be very appreciated!! Thanks in advance


  • Map element position in data file to class property

    - by Augusto
    I need to read/write files, following a format provided by a third party specification. The specification itself is pretty simple: it says the position and the size of the data that will be saved in the file. For example:

        Position  Size  Description
        --------------------------------------------------
        0001      10    Device serial number
        0011      02    Hour
        0013      02    Minute
        0015      02    Second
        0017      02    Day
        0019      02    Month
        0021      02    Year

    The list is very long, it has about 400 elements. But lots of them can be combined. For example, hour, minute, second, day, month and year can be combined in a single DateTime object. I've split the elements into about 4 categories and created separate classes for holding the data. So, instead of a big structure representing the data, I have some smaller classes. I've also created different classes for reading and writing the data. The problem is: how to map the positions in the file to the objects' properties, so that I don't need to repeat the values in the reading/writing class? I could use some custom attributes and retrieve them via reflection. But since the code will be running on devices with a small memory and processor, it would be nice to find another way. My current read code looks like this:

        public void Read()
        {
            DataFile dataFile = new DataFile();
            // the arguments are: position, size
            dataFile.SerialNumber = ReadLong(1, 10);
            //...
        }

    Any ideas on this one? Thanks!
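
    One reflection-free option is to keep the layout in a single static table that both the reader and the writer walk, so positions and sizes live in exactly one place. A sketch with hypothetical names (the real holder classes are not shown in the question, and delegate support on a given Compact Framework version would need checking):

        struct FieldDef
        {
            public int Position;
            public int Size;
            public Action<DataFile, string> Assign;

            public FieldDef(int position, int size, Action<DataFile, string> assign)
            {
                Position = position; Size = size; Assign = assign;
            }
        }

        static readonly FieldDef[] Layout =
        {
            new FieldDef(1, 10, (d, v) => d.SerialNumber = long.Parse(v)),
            new FieldDef(11, 2, (d, v) => d.Hour = int.Parse(v)),
            // ... one entry per field, positions and sizes copied straight from the spec
        };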


  • EPIPE blocks server

    - by timn
    I have written a single-threaded asynchronous server in C running on Linux: the socket is non-blocking and, as for polling, I am using epoll. Benchmarks show that the server performs fine and, according to Valgrind, there are no memory leaks or other problems. The only problem is that when a write() command is interrupted (because the client closed the connection), the server will encounter a SIGPIPE. I am causing the interruption artificially by running the benchmarking utility "siege" with the parameter -b. It does lots of requests in a row which all work perfectly. Now I press CTRL-C and restart "siege". Sometimes I am lucky and the server does not manage to send the full response because the client's fd is invalid. As expected, errno is set to EPIPE. I handle this situation, execute close() on the fd and then free the memory related to the connection. Now the problem is that the server blocks and does not answer properly anymore. Here is the strace output:

        accept(3, {sa_family=AF_INET, sin_port=htons(50611), sin_addr=inet_addr("127.0.0.1")}, [16]) = 5
        fcntl64(5, F_GETFD) = 0
        fcntl64(5, F_SETFL, O_RDONLY|O_NONBLOCK) = 0
        epoll_ctl(4, EPOLL_CTL_ADD, 5, {EPOLLIN|EPOLLERR|EPOLLHUP|EPOLLET, {u32=158310248, u64=158310248}}) = 0
        epoll_wait(4, {{EPOLLIN, {u32=158310248, u64=158310248}}}, 128, -1) = 1
        read(5, "GET /user/register HTTP/1.1\r\nHos"..., 4096) = 161
        write(5, "HTTP/1.1 200 OK\r\nContent-Type: t"..., 106) = 106          <<<<<
        write(5, "00001000\r\n", 10) = -1 EPIPE (Broken pipe)                  <<<<< Why did the previous write() work fine but not this one?
        --- SIGPIPE (Broken pipe) @ 0 (0) ---

    As you can see, the client establishes a new connection, which consequently is accepted. Then it's added to the epoll queue. epoll_wait() signals that the client sent data (EPOLLIN). The request is parsed and a response is composed. Sending the headers works fine, but when it comes to the body, write() results in an EPIPE. It is not a bug in "siege" because it blocks any incoming connections, no matter from which client.
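
    One piece of the picture, hedged since the rest of the event loop is not shown: by default SIGPIPE's disposition is to terminate the process, so a single-threaded server normally either ignores it globally or asks the kernel not to raise it per send, and then relies on the EPIPE return value alone. A minimal sketch:

        #include <signal.h>
        #include <sys/socket.h>

        /* once, at startup: never deliver SIGPIPE, just report EPIPE from write()/send() */
        signal(SIGPIPE, SIG_IGN);

        /* or per call, if writes go through send() instead of write() */
        send(fd, buf, len, MSG_NOSIGNAL);

    With SIGPIPE out of the way, the EPIPE branch can close the fd (which also removes it from the epoll set) and move on without disturbing other connections.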


  • Node.js fetching Twitter Streaming API - EADDRNOTAVAIL

    - by Jordan Scales
    I have the following code written in node.js to access the Twitter Streaming API. If I curl the URL below, it works. However, I cannot get it to work with my code.

        var https = require('https');
        https.request('https://USERNAME:PASSWORD@stream.twitter.com/1.1/statuses/sample.json', function(res) {
          res.on('data', function(chunk) {
            var d = JSON.parse(chunk);
            console.log(d);
          });
        });

    But I receive the following:

        node.js:201
                throw e; // process.nextTick error, or 'error' event on first tick
                      ^
        Error: connect EADDRNOTAVAIL
            at errnoException (net.js:642:11)
            at connect (net.js:525:18)
            at Socket.connect (net.js:589:5)
            at Object.<anonymous> (net.js:77:12)
            at new ClientRequest (http.js:1073:25)
            at Object.request (https.js:80:10)
            at Object.<anonymous> (/Users/jordan/Projects/twitter-stream/app.js:3:7)
            at Module._compile (module.js:441:26)
            at Object..js (module.js:459:10)
            at Module.load (module.js:348:31)

    If anyone can offer an alternative solution, or explain to me why this doesn't work, I would be very grateful.
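
    Two things commonly bite here, offered as a sketch rather than a definitive diagnosis: https.request of that era expects an options object rather than a URL with embedded credentials, and the request is not actually sent until end() is called. Credentials stay placeholders, and the streaming endpoint has since moved to OAuth:

        var https = require('https');

        var req = https.request({
          hostname: 'stream.twitter.com',
          path: '/1.1/statuses/sample.json',
          auth: 'USERNAME:PASSWORD'          // basic auth placeholder; current Twitter APIs require OAuth
        }, function(res) {
          res.on('data', function(chunk) {
            // chunks are not aligned to JSON object boundaries, so buffer
            // up to a newline before calling JSON.parse on a full object
            console.log(chunk.toString());
          });
        });

        req.on('error', function(err) { console.error(err); });
        req.end();   // nothing goes on the wire until end() is called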


  • Is it possible to make a persistent connection between a Python web service and a .Net WCF Client?

    - by Ad Hock
    I have a .Net 3.5 SOAP client written in C# using WCF. It's expecting basicHttpBinding and a persistent connection with HTTP/1.1. I'm trying to create a Python 2.6 application that will act as a web service for the client. My problem is that the client keeps closing the connection and opening a new one for every command to the web service. How does the .Net WCF client know to stay open when connecting with a .Net service? When I create a dummy .Net web service, the client connects fine and the connection remains persistent. From what I can tell, when connected to a .Net server, there are no special HTTP headers being sent; that makes sense since HTTP/1.1 assumes a persistent connection unless otherwise specified (right?). However, with the Python web service I accept/open a connection and eventually the client will send a TCP FIN and the connection will close (the client never sends a FIN or RST when connecting to a .Net service). The communication goes something like this:

        Incoming -- HTTP Header for SOAP Command #1
        Outgoing -- HTTP Header with a Continue
        Incoming -- Body of Command #1
        Outgoing -- ACK Command #1 (HTTP headers and body)
        Incoming -- HTTP Header for SOAP Command #2
        Outgoing -- HTTP Header with a Continue
        Incoming -- TCP FIN
        <Connection closes>
        <New connection opens and SOAP command #2 (with full HTTP headers) is sent>

    I'm using a SocketServer.ThreadingTCPServer as the server and a BaseHTTPServer.BaseHTTPRequestHandler for any requests. The handler is actually a derived class of that with a do_POST method to handle the HTTP headers. I've looked at WireShark captures and I'm stumped. I've tried setting socket options to SO_KEEPALIVE and SO_REUSEADDR in the server but that didn't seem to change anything. What am I missing?
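
    One detail that often explains this with BaseHTTPRequestHandler: it speaks HTTP/1.0 by default, and without a Content-Length the only way to end a response is to close the socket, which forces the client to reconnect. A sketch of a handler that advertises HTTP/1.1 and sends Content-Length (the dispatch function is hypothetical):

        from BaseHTTPServer import BaseHTTPRequestHandler

        class SoapHandler(BaseHTTPRequestHandler):
            protocol_version = "HTTP/1.1"   # keep-alive becomes the default

            def do_POST(self):
                length = int(self.headers.get("Content-Length", 0))
                request_xml = self.rfile.read(length)
                response_xml = handle_soap(request_xml)      # hypothetical dispatcher
                self.send_response(200)
                self.send_header("Content-Type", "text/xml; charset=utf-8")
                self.send_header("Content-Length", str(len(response_xml)))
                self.end_headers()
                self.wfile.write(response_xml)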


  • ASP.NET MVC intermittent slow response

    - by arehman
    Problem: In our production environment, the system occasionally delays the page response of an ASP.NET MVC application by up to 30 seconds or so, even though the same page renders in 2-3 seconds most of the time. This happens randomly, with any arbitrary page, and with GET or POST type requests. For example, log files indicate the system took 15 seconds to complete a request for a jquery script file, or 10 seconds for another small css file.

    Similar problems: Random Slow Downs

    Production environment:

        Windows Server 2008 - Standard (32-bit) - App Pool running in integrated mode.
        ASP.NET MVC 1.0

    What we have tried / observed:

        Moved the application to a stand-alone web server, but it didn't help.
        We didn't ever notice the same issue on the server for any ASP.NET application.
        App Pool settings are fine. No abrupt recycles/shutdowns.
        No CPU spikes or memory problems.
        No delays due to SQL queries or the like.

    It seems as if something is causing a delay along the HTTP pipeline, or the worker process is seeing the request late. Looking for other suggestions. -- Thanks


  • Multithreading and Interrupts

    - by Nicholas Flynt
    I'm doing some work on the input buffers for my kernel, and I had some questions. On dual core machines, I know that more than one "process" can be running simultaneously. What I don't know is how the OS and the individual programs work to protect against collisions in data. There are two things I'd like to know on this topic:

    (1) Where do interrupts occur? Are they guaranteed to occur on one core and not the other, and could this be used to make sure that real-time operations on one core were not interrupted by, say, file IO which could be handled on the other core? (I'd logically assume that the interrupts would happen on the 1st core, but is that always true, and how would you tell? Or perhaps does each core have its own settings for interrupts? Wouldn't that lead to a scenario where each core could react simultaneously to the same interrupt, possibly in different ways?)

    (2) How does the dual core processor handle opcode memory collisions? If one core is reading an address in memory at exactly the same time that another core is writing to that same address, what happens? Is an exception thrown, or is a value read? (I'd assume the write would work either way.) If a value is read, is it guaranteed to be either the old or the new value at the time of the collision?

    I understand that programs should ideally be written to avoid these kinds of complications, but the OS certainly can't expect that, and will need to be able to handle such events without choking on itself.


  • Help with Neuroph neural network

    - by user359708
    For my graduate research I am creating a neural network that trains to recognize images. I am going much more complex than just taking a grid of RGB values, downsampling, and sending them to the input of the network, like many examples do. I actually use over 100 independently trained neural networks that detect features, such as lines, shading patterns, etc. Much more like the human eye, and it works really well so far! The problem is I have quite a bit of training data. I show it over 100 examples of what a car looks like. Then 100 examples of what a person looks like. Then over 100 of what a dog looks like, etc. This is quite a bit of training data! Currently I am running at about one week to train the network. This is kind of killing my progress, as I need to adjust and retrain. I am using Neuroph as the low-level neural network API. I am running a dual-quadcore machine (16 cores with hyperthreading), so this should be fast. My processor utilization is at only 5%. Are there any tricks for Neuroph performance? Or Java performance in general? Suggestions? I am a cognitive psych doctoral student, and I am decent as a programmer, but do not know a great deal about performance programming.


  • Performance of java on different hardware?

    - by tangens
    In another SO question I asked why my Java programs run faster on AMD than on Intel machines. But it seems that I'm the only one who has observed this. Now I would like to invite you to share the numbers of your local Java performance with the SO community. I observed a big performance difference when watching the startup of JBoss on different hardware, so I set this program as the base for this comparison. For participation please download JBoss 5.1.0.GA and run:

        jboss-5.1.0.GA/bin/run.sh   (or run.bat)

    This starts a standard configuration of JBoss without any extra applications. Then look for the last line of the start procedure, which looks like this:

        [ServerImpl] JBoss (Microcontainer) [5.1.0.GA (build: SVNTag=JBoss_5_1_0_GA date=200905221634)] Started in 25s:264ms

    Please repeat this procedure until the printed time is somewhat stable, and post this line together with some comments on your hardware (I used cpu-z to get the infos) and operating system, like this:

        java version: 1.6.0_13
        OS: Windows XP
        Board: ASUS M4A78T-E
        Processor: AMD Phenom II X3 720, 2.8 GHz
        RAM: 2*2 GB DDR3 (labeled 1333 MHz)
        GPU: NVIDIA GeForce 9400 GT
        disc: Seagate 1.5 TB (ST31500341AS)

    Use your votes to bring the fastest configuration to the top. I'm very curious about the results. EDIT: Up to now only a few members have shared their results. I'd really be interested in the results obtained with some other architectures. If someone works with a Mac (desktop) or runs an Intel i7 with less than 3 GHz, please start JBoss once and share your results. It will only take a few minutes.


  • what webserver / mod / technique should I use to serve everything from memory?

    - by reinier
    I have lots of lookup tables from which I'll generate my web responses. I think IIS with ASP.NET enables me to keep static lookup tables in memory which I can use to serve up my responses very fast. Are there, however, also non-.NET solutions which can do the same? I've looked at FastCGI, but I think this starts X processes, each of which can handle Y requests. But the processes are by definition shielded from each other. I could configure FastCGI to use just 1 process, but does this have scalability implications? Anything using PHP or any other interpreted language won't fly because it is also CGI- or FastCGI-bound, right? I understand memcached could be an option, though this would require another (local) socket connection, which I'd rather avoid since everything in memory would be much faster. The solution can work under Windows or Unix... it doesn't matter too much. The only thing which matters is that there will be a lot of requests (100/sec now and growing to 500/sec in a year), and I want to reduce the number of web servers needed to process them. The current solution is done using PHP and memcached (and the occasional hit to the SQL server backend). Although it is fast (for PHP anyway), Apache has real problems once 50/sec is passed. I've put a bounty on this question since I've not seen enough responses to make a wise choice. At the moment I'm considering either ASP.NET or FastCGI with C(++).


  • Load Balancing of PHP/MYSQL script without big code changes

    - by DR.GEWA
    Sorry for my dummy question, but... I am building a script on PHP/MySQL (CodeIgniter) and I am extremely interested in knowing whether there is a way, without big architectural changes to the script, to do load balancing. I mean that, for example, now I will rent a medium dedicated server with 2 GB RAM, 200 GB of storage and a good processor, and this will be enough for, let's say, half a year for the users who will come. But when they become more and more numerous (it's a social network, and at night the server can expect 500-1500 or 5000-8000 users online), I wonder if there is a way to, let's say, just add a second server with some configuration that will bear the next load. And after that another one, and so on...?

        <?
        if($answer=YES) {
            how(??);
        } else {
            whatToDo(??);
        }
        ?>

    If there is no way, then maybe you could point me to the easiest load balancing solution... I will be extremely thankful if you can tell me whether, for such purposes, I should move to, let's say, PostgreSQL or Firebird. Which of them will be easier to handle in the future? On the mysite.com/users/show/$userId page I am getting something like 60 queries for all the data... maybe too much, but anyway... after some optimization it can be 20-30.


  • XSP2 crashes serving static images

    - by Dawkins
    Hi, Requesting a simple HTML page with a jpg image makes XSP2 crash. If I remove the image from the HTML then the page is served OK all the time. The version is XSP2 2.0 with Mono 2.6.1; version 2.4.2.2 on the same machine works fine. I have tested it on two different machines, both Windows Vista Business SP1. Has anyone experienced the same? Any clue what the problem might be? Below is the stack trace displayed by the console. (The line in Spanish says that an existing connection was forcibly closed by the remote host.) EDIT: since another user is having the same problem I have submitted a bug to Novell and created a little zip to reproduce the problem: https://bugzilla.novell.com/show_bug.cgi?id=582162

        Peer unexpectedly closed the connection on write. Closing our end.
        System.IO.IOException: Write failure ---> System.Net.Sockets.SocketException: Se ha forzado la interrupción de una conexión existente por el host remoto.
          at System.Net.Sockets.Socket.Send (System.Byte[] buf, Int32 offset, Int32 size, SocketFlags flags) [0x00000] in <filename unknown>:0
          at System.Net.Sockets.NetworkStream.Write (System.Byte[] buffer, Int32 offset, Int32 size) [0x00000] in <filename unknown>:0
          --- End of inner exception stack trace ---
          at System.Net.Sockets.NetworkStream.Write (System.Byte[] buffer, Int32 offset, Int32 size) [0x00000] in <filename unknown>:0
          at Mono.WebServer.XSPWorker.Write (System.Byte[] buffer, Int32 position, Int32 size) [0x00000] in <filename unknown>:0
        Peer unexpectedly closed the connection on write. Closing our end.
        System.ObjectDisposedException: The object was used after being disposed.
          at System.Net.Sockets.NetworkStream.CheckDisposed () [0x00000] in <filename unknown>:0
          at System.Net.Sockets.NetworkStream.Write (System.Byte[] buffer, Int32 offset, Int32 size) [0x00000] in <filename unknown>:0
          at Mono.WebServer.XSPWorker.Write (System.Byte[] buffer, Int32 position, Int32 size) [0x00000] in <filename unknown>:0

    Thank you.


  • Triangle numbers problem....show within 4 seconds

    - by Daredevil
    The sequence of triangle numbers is generated by adding the natural numbers. So the 7th triangle number would be 1 + 2 + 3 + 4 + 5 + 6 + 7 = 28. The first ten terms would be: 1, 3, 6, 10, 15, 21, 28, 36, 45, 55, ... Let us list the factors of the first seven triangle numbers:

        1:  1
        3:  1, 3
        6:  1, 2, 3, 6
        10: 1, 2, 5, 10
        15: 1, 3, 5, 15
        21: 1, 3, 7, 21
        28: 1, 2, 4, 7, 14, 28

    We can see that 28 is the first triangle number to have over five divisors. Given an integer n, display the first triangle number having at least n divisors.

        Sample Input: 5
        Output: 28

    Input constraints: 1 <= n <= 320

    I was obviously able to do this question, but I used a naive algorithm: Get n. Find triangle numbers and check their number of factors using the mod operator. But the challenge was to show the output within 4 seconds of input. On high inputs like 190 and above it took almost 15-16 seconds. Then I tried to put the triangle numbers and their number of factors in a 2D array first and then get the input from the user and search the array. But somehow I couldn't do it: I got a lot of processor faults. Please try doing it with this method and paste the code. Or if there are any better ways, please tell me.
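
    One standard speed-up, sketched in Python for clarity (the asker's language isn't stated): the k-th triangle number is k(k+1)/2, and since k and k+1 are coprime, its divisor count is the product of the divisor counts of the two halves, so trial division only ever runs on numbers around k rather than on the triangle number itself.

        def divisor_count(n):
            count, d = 1, 2
            while d * d <= n:
                exp = 0
                while n % d == 0:
                    n //= d
                    exp += 1
                count *= exp + 1
                d += 1
            if n > 1:
                count *= 2          # remaining prime factor
            return count

        def first_triangle_with_divisors(n):
            k = 1
            while True:
                if k % 2 == 0:
                    d = divisor_count(k // 2) * divisor_count(k + 1)
                else:
                    d = divisor_count(k) * divisor_count((k + 1) // 2)
                if d >= n:          # "at least n divisors" per the statement
                    return k * (k + 1) // 2
                k += 1

        print(first_triangle_with_divisors(5))   # 28

    Precomputing the answers for n = 1..320 once and then answering each query by lookup also fits comfortably inside the 4-second limit.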


  • lightweight webserver to integrate on client end.

    - by Gopal
    Hi... I need to create a Python module that will be installed on end-user machines. One of the scripts in that module should be able to receive HTTP POSTs (usually with some JSON-formatted data in the body) and then pass that data on to an appropriate Python script. I can think of two ways to do this:

    a) Open a listening server socket on port 80, wait for an HTTP request to come in, parse it and then pass the data to another Python script depending on the URL that arrived. This method will not require the end user to install a web server; the end user only has to install the Python module.

    b) Have a mini web server installed along with the Python module. The web server will do the same job as [a] via CGI without me having to write the CGI functionality myself. But then the user will have to install the web server (i.e., the hassle of yet another install), which I would like to avoid if possible.

    If [b] is the easier option, what is the smallest, simplest web server there is (preferably one that can be packaged as part of the Python module itself so that it does not have to be separately installed)? Must be open source, of course. regards Gopal
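
    Worth noting as a middle ground: the Python standard library already ships a small HTTP server, so option b) need not mean a separate install at all. A sketch using wsgiref (the dispatch function is hypothetical, and binding port 80 needs elevated privileges on Unix):

        from wsgiref.simple_server import make_server
        import json

        def app(environ, start_response):
            length = int(environ.get("CONTENT_LENGTH") or 0)
            payload = json.loads(environ["wsgi.input"].read(length) or "{}")
            result = dispatch(environ["PATH_INFO"], payload)   # hypothetical router to your scripts
            body = json.dumps(result)
            start_response("200 OK", [("Content-Type", "application/json"),
                                      ("Content-Length", str(len(body)))])
            return [body]

        make_server("", 80, app).serve_forever()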

