Search Results

Search found 11380 results on 456 pages for 'cpu speed'.


  • Looking for ideas how to refactor my (complex) algorithm

    - by _simon_
    I am trying to write my own Game of Life with my own set of rules. The first 'concept' I would like to apply is socialization (which basically means whether a cell wants to be alone or in a group with other cells). The data structure is a 2-dimensional array (for now).

    In order to be able to move a cell to/away from a group of other cells, I need to determine where to move it. The idea is that I evaluate all the cells in the area (the neighbours) and get a vector which tells me where to move the cell. The size of the vector is 0 or 1 (don't move or move) and the angle is taken from an array of directions (up, down, right, left).

    [Image: representation of the forces acting on a cell, as I imagined it (the reach could be more than 5).]

    Taking the example in the picture:

        Forces from the lower-left neighbour: down (0), up (2), right (2), left (0)
        Forces from the right neighbour:      down (0), up (0), right (0), left (2)
        Sum:                                  down (0), up (2), right (0), left (0)

    So the cell should go up. I could write an algorithm with a lot of if statements and check all the cells in the neighbourhood. That algorithm would be easiest if the 'reach' parameter were set to 1 (first column on picture 1). But what if I change the reach parameter to 10, for example? I would need to write an algorithm for each reach parameter in advance. How can I avoid this (note that the force grows exponentially: 1, 2, 4, 8, 16, 32, ...)? Is there a specific design pattern for this problem? Also: the most important thing is not speed, but being able to extend the initial logic.

    Things to take into consideration (see the sketch below):
      - reach should be passed as a parameter
      - I would like to be able to change the function which calculates the force (powers of two, Fibonacci, ...)
      - a cell can go to a new place only if that place is not populated
      - watch for corners (you can't evaluate the right and top neighbours in the top-right corner, for example)
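    One way to avoid writing a separate algorithm per reach is a single pair of loops over the (2*reach+1) x (2*reach+1) neighbourhood, with a bounds check standing in for the corner cases and the force function kept pluggable. A minimal Java sketch, assuming a boolean occupancy grid and a push-away rule matching the example above (all names are illustrative, not from the post):

        // Sums the four directional forces on the cell at (cx, cy).
        // Returns {up, down, left, right}; move toward the largest component.
        static int[] evaluate(boolean[][] grid, int cx, int cy, int reach) {
            int[] sums = new int[4];
            for (int dy = -reach; dy <= reach; dy++) {
                for (int dx = -reach; dx <= reach; dx++) {
                    if (dx == 0 && dy == 0) continue;
                    int x = cx + dx, y = cy + dy;
                    // the bounds check replaces per-corner special cases
                    if (y < 0 || y >= grid.length || x < 0 || x >= grid[0].length) continue;
                    if (!grid[y][x]) continue;
                    int dist = Math.max(Math.abs(dx), Math.abs(dy));
                    int f = force(reach, dist);   // pluggable: powers of two, Fibonacci, ...
                    if (dy > 0) sums[0] += f;     // neighbour below pushes the cell up
                    if (dy < 0) sums[1] += f;     // neighbour above pushes the cell down
                    if (dx > 0) sums[2] += f;     // neighbour to the right pushes it left
                    if (dx < 0) sums[3] += f;     // neighbour to the left pushes it right
                }
            }
            return sums;
        }

        static int force(int reach, int dist) {
            return 1 << (reach - dist);           // 1, 2, 4, 8, ... the closer, the stronger
        }

    Swapping the growth law then means changing only force(), and reach stays a plain runtime parameter.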

    Read the article

  • How to fix RapidXML String ownership concerns?

    - by Roddy
    RapidXML is a fast, lightweight C++ XML DOM parser, but it has some quirks. The worst of these, to my mind, is this:

        3.2 Ownership Of Strings. Nodes and attributes produced by RapidXml do not own their name and value strings. They merely hold the pointers to them. This means you have to be careful when setting these values manually, by using xml_base::name(const Ch *) or xml_base::value(const Ch *) functions. Care must be taken to ensure that lifetime of the string passed is at least as long as lifetime of the node/attribute. The easiest way to achieve it is to allocate the string from memory_pool owned by the document. Use memory_pool::allocate_string() function for this purpose.

    Now, I understand it's done this way for speed, but this feels like a car crash waiting to happen. The following code looks innocuous, but 'name' and 'value' are out of scope when foo returns, so the document's contents are undefined:

        void foo()
        {
            char name[]  = "Name";
            char value[] = "Value";
            doc.append_node(doc.allocate_node(node_element, name, value));
        }

    The suggestion of using allocate_string() as per the manual works, but it's so easy to forget. Has anyone 'enhanced' RapidXML to avoid this issue?
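    For comparison, one low-tech guard is a small helper that makes copying the default, so nodes are only ever built through the document's pool; a sketch (the make_node wrapper is mine, not part of RapidXML):

        #include "rapidxml.hpp"

        // Always copies name/value into the document's memory pool first.
        rapidxml::xml_node<>* make_node(rapidxml::xml_document<>& doc,
                                        const char* name, const char* value)
        {
            return doc.allocate_node(rapidxml::node_element,
                                     doc.allocate_string(name),
                                     doc.allocate_string(value));
        }

        void foo(rapidxml::xml_document<>& doc)
        {
            char name[]  = "Name";
            char value[] = "Value";
            doc.append_node(make_node(doc, name, value)); // still valid after foo returns
        }

    It doesn't stop anyone from calling allocate_node() directly, but it shrinks the surface where the lifetime bug can hide.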

    Read the article

  • Flash/Flex: "Warning: filter will not render" problem

    - by davidemm
    In my Flex application, I have a custom TitleWindow that pops up in modal fashion. When I resize the browser window, I get this warning:

        Warning: Filter will not render. The DisplayObject's filtered dimensions (1286, 107374879) are too large to be drawn.

    Clearly, I have nothing set with a height of 107374879. After that, any time I mouse over anything in the Flash Player (v. 10), the CPU churns at 100%. When I close the TitleWindow, the problem subsides. Sadly, the warning doesn't indicate which DisplayObject is too large to draw. I've tried attaching explicit heights/widths to the TitleWindow and the components within, but still no luck.

    [Edit] The plot thickens: I found that the problem only occurs when I set the PopUpManager's createPopUp modal parameter to true. I don't see the behavior when modal is set to false. It's failing while applying the graying filter that modality puts on the other components. Any ideas how I can track down the one object that has not been initialized but is being filtered during the modal phase? Thanks for reading.

    Read the article

  • Linux kernel wait_for_completion_timeout not woken up by complete

    - by Jun Li
    I am working on a strange issue with the i2c-omap driver. I am not sure whether the problem happens at other times, but it happens in around 5% of my attempts to power off the system. During system power off, I write to some registers in the PMIC via I2C. In i2c-omap.c, I can see that the calling thread is waiting on wait_for_completion_timeout with a timeout value set to 1 second, and I can see the IRQ handler call complete (I added a printk AFTER complete).

    However, after complete is called, wait_for_completion_timeout does not return. Instead, it takes up to 5 MINUTES before it returns. And the return value of wait_for_completion_timeout is positive, indicating that there was no timeout, and the whole I2C transaction is successful. In the meantime, I can see printk messages from other drivers, and the serial console still works.

    This is on Android, and if I use top I can see system_server taking about 95% of the CPU. Killing system_server makes wait_for_completion_timeout return immediately. So my question is: what could a user-space app (system_server) do to prevent a kernel wait_for_completion_timeout from waking up? Thanks!
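    For reference, the pattern in question looks roughly like this (a sketch of the completion API, not the actual i2c-omap source):

        #include <linux/completion.h>
        #include <linux/interrupt.h>
        #include <linux/errno.h>

        static DECLARE_COMPLETION(xfer_done);

        static int do_transfer(void)
        {
                unsigned long remaining;

                /* ... start the I2C transfer ... */
                remaining = wait_for_completion_timeout(&xfer_done, HZ); /* 1 s */
                if (remaining == 0)
                        return -ETIMEDOUT;  /* complete() was never seen */
                return 0;                   /* woken; 'remaining' jiffies were left */
        }

        static irqreturn_t xfer_irq(int irq, void *dev)
        {
                /* ... ack the hardware ... */
                complete(&xfer_done);       /* should wake the waiter promptly */
                return IRQ_HANDLED;
        }

    complete() only makes the waiter runnable; when it actually gets the CPU again is up to the scheduler, which is why a runaway user-space process can stretch the gap between the two printks.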

    Read the article

  • Secure hash and salt for PHP passwords

    - by luiscubal
    It is currently said that MD5 is partially unsafe. Taking this into consideration, I'd like to know which mechanism to use for password protection.

    "Is 'double hashing' a password less secure than just hashing it once?" suggests that hashing multiple times may be a good idea; "How to implement password protection for individual files?" suggests using salt.

    I'm using PHP. I want a safe and fast password encryption system. Hashing a password a million times may be safer, but also slower. How do I achieve a good balance between speed and safety? Also, I'd prefer the result to have a constant number of characters.

      - The hashing mechanism must be available in PHP
      - It must be safe
      - It can use salt (in this case, are all salts equally good? Is there any way to generate good salts?)

    Also, should I store two fields in the database (one using MD5 and another one using SHA, for example)? Would it make it safer or less safe?

    In case I wasn't clear enough, I want to know which hashing function(s) to use and how to pick a good salt in order to have a safe and fast password protection mechanism.

    EDIT: The website shouldn't contain anything too sensitive, but still I want it to be secure.

    EDIT2: Thank you all for your replies. I'm using hash("sha256", $salt . ":" . $password . ":" . $id)

    Questions that didn't help:
      - What's the difference between SHA and MD5 in PHP
      - Simple Password Encryption
      - Secure methods of storing keys, passwords for asp.net
      - How would you implement salted passwords in Tomcat 5.5
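    A minimal PHP sketch of the scheme the poster settled on in EDIT2: a random per-user salt stored alongside a SHA-256 hash (openssl_random_pseudo_bytes is assumed available; on current PHP, the built-in password_hash()/password_verify() pair is the usual choice instead):

        <?php
        function make_password_record($password, $id) {
            // 16 random bytes -> 32 hex chars; a unique salt per user
            $salt = bin2hex(openssl_random_pseudo_bytes(16));
            $hash = hash('sha256', $salt . ':' . $password . ':' . $id);
            return $salt . ':' . $hash;   // constant length: 32 + 1 + 64 chars
        }

        function check_password($record, $password, $id) {
            list($salt, $stored) = explode(':', $record, 2);
            return hash('sha256', $salt . ':' . $password . ':' . $id) === $stored;
        }
        ?>

    A fresh random salt per user is the main point: it defeats precomputed (rainbow) tables and makes identical passwords hash differently.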

    Read the article

  • Windows system monitoring and profiling

    - by Aris
    I have several dozen 64-bit Windows 2003 servers in a high-performance environment with very bursty system utilization. I am looking for a tool (or tools) to monitor and analyze system performance (e.g., CPU utilization, bandwidth, etc.). The tool can either query servers from a central location (SNMP) or require installation of a component on each server. It should poll on a 1-second interval, and it should be able to generate pretty graphs which show trends over time. As a nice-to-have, it should be able to send out emails or IMs when certain thresholds are exceeded.

    The tools I have investigated so far, including SolarWinds and PRTG, are not designed to poll this frequently. They seem to be designed for a ~30-second interval: SolarWinds wouldn't go lower than 3 seconds, PRTG chokes at 1 second, and both default to 1 minute. These tools also seem more focused on outage monitoring and reporting than on metric collection. Given the bursty nature of our applications, such infrequent polling would give a very inaccurate picture of performance.

    I am considering rolling my own solution using perfmon. This would be a lot of work, and it feels like reinventing the wheel. Are there any tools out there that meet my needs?
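    For the roll-your-own route, Windows also ships a command-line counter logger that can poll at 1-second intervals; a hypothetical invocation (counter paths vary by OS version and locale):

        typeperf "\Processor(_Total)\% Processor Time" "\Network Interface(*)\Bytes Total/sec" -si 1 -o perf_log.csv -f CSV

    That covers collection only; graphing and threshold alerts would still have to be built on top of the CSV output.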

    Read the article

  • How to implement a counter when using golang's goroutines?

    - by MrROY
    I'm trying to make a queue struct that has push and pop functions. I need to use 10 goroutines to push and another 10 goroutines to pop data, just like I did in the code below. Questions:

      1. I need to print out how much I have pushed/popped, but I don't know how to do that.
      2. Is there any way to speed up my code? It is too slow for me.

        package main

        import (
            "runtime"
            "time"
        )

        const (
            DATA_SIZE_PER_THREAD = 10000000
        )

        type Queue struct {
            records string
        }

        func (self Queue) push(record chan interface{}) {
            // need push counter
            record <- time.Now()
        }

        func (self Queue) pop(record chan interface{}) {
            // need pop counter
            <-record
        }

        func main() {
            runtime.GOMAXPROCS(runtime.NumCPU())

            // record chan
            record := make(chan interface{}, 1000000)
            // finish flag chan
            finish := make(chan bool)
            queue := new(Queue)

            for i := 0; i < 10; i++ {
                go func() {
                    for j := 0; j < DATA_SIZE_PER_THREAD; j++ {
                        queue.push(record)
                    }
                    finish <- true
                }()
            }

            for i := 0; i < 10; i++ {
                go func() {
                    for j := 0; j < DATA_SIZE_PER_THREAD; j++ {
                        queue.pop(record)
                    }
                    finish <- true
                }()
            }

            for i := 0; i < 20; i++ {
                <-finish
            }
        }
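    One way to add the counters (a sketch against the code above) is a pair of package-level counters bumped through sync/atomic, so the ten pushers and ten poppers don't race; "sync/atomic" and "fmt" would need to be imported:

        var pushed, popped int64

        func (self Queue) push(record chan interface{}) {
            record <- time.Now()
            atomic.AddInt64(&pushed, 1)
        }

        func (self Queue) pop(record chan interface{}) {
            <-record
            atomic.AddInt64(&popped, 1)
        }

        // at the end of main, after the 20 goroutines report on finish:
        fmt.Printf("pushed=%d popped=%d\n",
            atomic.LoadInt64(&pushed), atomic.LoadInt64(&popped))

    On speed: most of the cost here is channel traffic on 200 million tiny sends of a boxed interface{} value; batching several records per send (or sending a concrete type instead of interface{}) is the usual first lever.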

    Read the article

  • How to keep unreachable code?

    - by Gabriel
    I'd like to write a function with some optional code that executes or not depending on user settings. The function is CPU-intensive, and having ifs in it would be slow, since the branch predictor is not that good. My idea is to make a copy of the function in memory and replace NOPs with jumps when I don't want to execute some code. My working example goes like this:

        int Test()
        {
            int x = 2;
            for (int i = 0; i < 10; i++)
            {
                x *= 2;
                __asm { NOP }; // to skip it, replace this
                __asm { NOP }; // by JMP 2
                x *= 2;        // op to skip or not
                x *= 2;
            }
            return x;
        }

    In my test's main, I copy this function into newly allocated executable memory and replace the NOPs by a JMP 2 so that the following x *= 2 is not executed. The problem is that I would have to change the JMP operand every time I change the code to be skipped. An alternative that would fix this problem would be:

        __asm { NOP }; // to skip it, replace this
        __asm { NOP }; // by JMP 2 (after the goto)
        goto dont_do_it;
        x *= 2;        // op to skip or not
        dont_do_it:
        x *= 2;

    This way, since the goto compiles to 2 bytes of binary, I would be able to replace the NOPs by a fixed JMP of always 2 in order to skip the goto. Unfortunately, in full optimization mode, the goto and the x *= 2 are removed because they are unreachable at compile time. Hence the need to keep that dead code.
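    A common way to keep the optimizer from proving the block unreachable, while preserving the fixed-size patchable jump, is to hide the condition behind a volatile flag; a sketch (MSVC-flavoured, to match the inline asm above):

        static volatile int g_skip = 0;  // never written, but the compiler must assume it could be

        int Test()
        {
            int x = 2;
            for (int i = 0; i < 10; i++)
            {
                x *= 2;
                if (g_skip)          // volatile read: the branch and its body are kept alive
                    goto dont_do_it;
                x *= 2;              // op to skip or not
            dont_do_it:
                x *= 2;
            }
            return x;
        }

    The volatile read can't be folded away, so the x *= 2 stays in the binary; patching then targets the test/jump pair instead of hand-placed NOPs.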

    Read the article

  • Stack and Hash joint

    - by Alexandru
    I'm trying to write a data structure which is a combination of Stack and HashSet with fast push/pop/membership (I'm looking for constant-time operations). Think of Python's OrderedDict. I tried a few things and I came up with the following code: HashInt and SetInt. I need to add some documentation to the source, but basically I use a hash with linear probing to store indices into a vector of the keys. Since linear probing always puts the last element at the end of a continuous range of already-filled cells, pop() can be implemented very easily without a sophisticated remove operation.

    I have the following problems:
      - the data structure consumes a lot of memory (some improvement is obvious: stackKeys is larger than needed)
      - some operations are slower than if I had used fastutil (e.g. pop(), and even push() in some scenarios)

    I tried rewriting the classes using fastutil and trove4j, but the overall speed of my application halved. What performance improvements would you suggest for my code? What open-source library/code do you know that I can try?
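    For comparison, the textbook version of this structure pairs an array-backed stack with a value-to-index map; a Java sketch (boxed Integers for brevity, which is exactly the overhead fastutil/trove-style primitive maps remove):

        import java.util.Arrays;
        import java.util.HashMap;

        class IntStackSet {
            private int[] stack = new int[16];
            private int size = 0;
            private final HashMap<Integer, Integer> index = new HashMap<Integer, Integer>();

            boolean push(int v) {              // O(1) amortized
                if (index.containsKey(v)) return false;
                if (size == stack.length) stack = Arrays.copyOf(stack, size * 2);
                index.put(v, size);
                stack[size++] = v;
                return true;
            }

            int pop() {                        // O(1)
                int v = stack[--size];
                index.remove(v);
                return v;
            }

            boolean contains(int v) {          // O(1) expected
                return index.containsKey(v);
            }
        }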

    Read the article

  • Problem with non blocking fifo in bash

    - by timdel
    Hi! I'm running a few Team Fortress 2 servers and I want to write a little management script. The TF2 server is a foreground process which provides a server console, so I can start the server, type status and get an answer from it:

        ***@purple:~/tf2$ ./start_server_testing
        Auto detecting CPU
        Using AMD Optimised binary.
        Server will auto-restart if there is a crash.
        Console initialized.
        [bla bla bla]
        Connection to Steam servers successful.
        VAC secure mode is activated.
        status
        hostname: Team Fortress
        version : 1.0.6.1/15 3883 secure
        udp/ip  : ***.***.133.31:27600
        map     : ctf_2fort at: 0 x, 0 y, 0 z
        players : 0 (2 max)
        # userid name uniqueid connected ping loss state adr

    Great. Now I want to create a script which sends the command sm_reloadadmins to all my servers. The best way I found to do this is using a named pipe (fifo). What I want is for this pipe to be read-only and non-blocking for the server process, so that I can write into the pipe and the server executes it, but I can still type on the server console directly: if I switch back to the foreground process of the server and type status, I want an answer printed.

    I tried this (assuming serverfifo was created with mkfifo serverfifo):

        ./start_server_testing < serverfifo

    Not working; the server won't start until something is written to the pipe.

        ./start_server_testing <> serverfifo

    That actually works pretty well: I can see the console output of the server, and I can write to the fifo and the server executes the commands. But I can't write to the server via the console anymore. Also, if I write 'exit' to the pipe (which should end the server) while running it in a screen, the screen window gets killed for some reason (why?).

    I only need the server to read the fifo without blocking, AND all my keyboard input on the server itself should be sent to the server, AND all server output should be written to the console. Is that possible? If yes, how?
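    For the EOF half of the problem, a common trick (a sketch, untested against TF2's server) is to park a dummy writer on the pipe so the server's stdin never sees end-of-file, and feed commands in from elsewhere:

        mkfifo serverfifo
        sleep 1000000 > serverfifo &     # hold the write end open so reads never hit EOF
        ./start_server_testing < serverfifo

        # later, from the management script:
        echo "sm_reloadadmins" > serverfifo

    This keeps the fifo readable without blocking at startup, but it does not by itself merge the interactive keyboard into the same stdin; that part usually ends up with a tool like screen's "stuff" command or tmux send-keys instead of a raw fifo.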

    Read the article

  • Could not load file or assembly 'GMap.NET.Core' or one of its dependencies. An attempt was made to load a program with an incorrect format.

    - by Sam M
    I have a WCF service application in VS2010. My local machine is a 32-bit OS, whereas the server is 64-bit. There are around 6 services in my solution. I'm successfully able to host the application on IIS on my local machine, and it works fine. But when I try to host that service application on the server, I get the error below:

        Could not load file or assembly 'GMap.NET.Core' or one of its dependencies. An attempt was made to load a program with an incorrect format.

    I do have a reference added in my solution for GMap.NET.Core. I have tried setting the platform target in my solution to Any CPU. In the application pool I have set Enable 32-Bit Applications to true, and I have also set Copy Local to true in my solution before publishing. When I build the source through my solution I don't get any errors, and the solution builds successfully. What else can I try to get my services successfully hosted on the server and accessible through my application?

    Read the article

  • Debugging SQL Server Slowness: Same Database, Different Servers

    - by Craig Walker
    For a while now we've been having anecdotal slowness on our newly minted (VMware-based) SQL Server 2005 database servers. Recently the problem has come to a head and I've started looking for the root cause of the issue.

    Here's the weird part: on the stored procedure that I'm using as a performance test case, I get a 30x difference in execution speed depending on which DB server I run it on. This is using the same database (mdf) and log (ldf) files, detached, copied, and reattached from the slow server to the fast one. This doesn't appear to be a (virtualized) hardware issue: the slow server has 4x the CPU capacity and 2x the memory of the fast one.

    As best as I can tell, the problem lies in the environment/configuration of the servers (either the operating system or the SQL Server installation). However, I've checked a bunch of variables (SQL Server config options, running services, disk fragmentation) and found nothing that makes a difference in testing. What things should I be looking at? What tools can I use to investigate why this is happening?

    Read the article

  • Does REPLACE INTO have a WHERE clause?

    - by Lajos Arpad
    I'm writing an application and I'm using MySQL as the DBMS. We are downloading property offers, and there were some performance issues. The old architecture looked like this:

      1. A property is updated. If the number of affected rows is not 1, the update is not considered successful; otherwise the update query solves our problem.
      2. If the update was not successful and the number of affected rows is more than 1, we have duplicates and we delete all of them.
      3. After we deleted duplicates (if needed), if the update was not successful, an insert happens.

    This architecture was working well, but there were some speed issues, because properties are deleted if they have not been updated for 15 days. Theoretically the main problem is deleting properties, because some properties are alive for months and the indexes are very far from each other (we are talking about 500,000+ properties).

    Our host told me to use REPLACE INTO instead of deleting properties, and all deprecated properties should be marked as DEAD. I've done this, but problems started to occur because of a syntax error, and I couldn't find an example anywhere of REPLACE INTO with a WHERE clause (I'd like to replace a DEAD property with the new property, instead of deleting the old property and inserting a new one, to preserve the optimization). My query looked like this:

        REPLACE INTO table_name (column1, ..., columnN)
        VALUES (value1, ..., valueN)
        WHERE ID = idValue

    Of course, I've calculated idValue and handled everything, but I had a syntax error. I would like to know whether I'm wrong and there is a WHERE clause for REPLACE INTO. I've found an alternative solution, which is even better than REPLACE INTO (simply an update query), because deletes happen behind the curtains if I use REPLACE INTO, but I would like to know if I'm wrong when I say that REPLACE INTO doesn't have a WHERE clause. For more reference, see this link: http://dev.mysql.com/doc/refman/5.0/en/replace.html

    Thank you for your answers in advance,
    Lajos Árpád
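    For reference, REPLACE INTO indeed has no WHERE clause: it decides which row to overwrite purely through the PRIMARY KEY or a UNIQUE index. A sketch of the two WHERE-less forms (table and columns hypothetical):

        -- REPLACE: deletes any row with the same unique key, then inserts (two writes)
        REPLACE INTO properties (ID, status, price)
        VALUES (42, 'ALIVE', 100000);

        -- INSERT ... ON DUPLICATE KEY UPDATE: updates in place when the key exists
        INSERT INTO properties (ID, status, price)
        VALUES (42, 'ALIVE', 100000)
        ON DUPLICATE KEY UPDATE status = VALUES(status), price = VALUES(price);

    The second form avoids the hidden delete-plus-insert, which is consistent with the poster's finding that a plain UPDATE behaved better.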

    Read the article

  • Unhandled exception with C++ app on Visual Studio 2008 release build - occurs when returning from function

    - by Rich
    Hi, I have a (rather large) application that I have written in C++, and until recently it was running fine outside of Visual Studio from the release build. However, now whenever I run it, it says "Unhandled exception at 0x77cf205b in myprog.exe: 0xC0000005: Access violation writing location 0x45000200", and leads me to crtexe.c at line 582 ("mainret = main(argc, argv, envp);") if I attempt to debug it. Note that this problem never shows up if I run the debug executable outside of Visual Studio, or if I run the debug or release build within Visual Studio. It only happens when running the release build outside of Visual Studio.

    I have been through and put plenty of printfs and a couple of while(1)s in it to see when it actually crashes, and found that the access violation occurs at exactly the point that the value is returned from the function (I'm returning a pointer to an object). I don't fully understand why I would get an access violation at the point of return, and it doesn't seem to matter what I return, as the crash still occurs when I return 0.

    The point it started crashing was when I added a function which does a lot of reading from a file using ifstream. I open the stream every time I attempt to read a new file and close it when I finish reading. If I keep attempting to run it, it will run once in about 20 tries. It seems a lot more reliable if I run it off my pen drive (it seems to crash the first 3 or 4 times, then run fine after that; maybe it's due to the slower read speed).

    Thanks for your help, and if I've missed anything, let me know.

    Read the article

  • Separation of multipage tiff with compression "CCITT T.6" very slow

    - by Alex
    I need to separate multiframe TIFF files, and I use the following method:

        public static Image[] GetFrames(Image sourceImage)
        {
            Guid objGuid = sourceImage.FrameDimensionsList[0];
            FrameDimension objDimension = new FrameDimension(objGuid);
            int frameCount = sourceImage.GetFrameCount(objDimension);
            Image[] images = new Image[frameCount];
            for (int i = 0; i < frameCount; i++)
            {
                MemoryStream ms = new MemoryStream();
                sourceImage.SelectActiveFrame(objDimension, i);
                sourceImage.Save(ms, ImageFormat.Tiff);
                images[i] = Image.FromStream(ms);
            }
            return images;
        }

    It works fine, but if the source image was encoded using CCITT T.6 compression, separating a 20-frame file takes up to 15 seconds on my 2.5 GHz CPU (edit: one core is at 100% during the process). When I afterwards save the images to a single file using standard compression (LZW), separating the LZW file takes under 1 second. Saving with CCITT compression also takes very long. Is there a way to speed up the process?

    Read the article

  • Transition from 2D to 3D later in game development

    - by Axarydax
    Hi, I'd like to work on a game, but to prototype it rapidly I'd like to keep it as simple as possible, so I'd do everything in top-down 2D in GDI+ and WinForms (hey, I like them!), so I can concentrate on the logic and architecture of the game itself.

    I'm thinking about having the whole game logic (server) in one assembly, where the WinForms app would be a client to that game, and if/when the time is right, I'd write a 3D client. I am tempted to use XNA, but I haven't really looked into it, so I don't know if it would take too much time to get up to speed. I really don't want to spend much time on anything other than the game logic, at least while I have the inspiration, but this way I wouldn't have to abandon everything and transfer to a new platform when transitioning from 2D to 3D.

    Another idea is just to get over it and learn XNA/Unity/SDL/something, at least to the level where I can make the same 2D version I could in GDI+, so I won't have to worry about switching frameworks anymore.

    Let's just say the game is the kind where you watch a dude from behind, run around the game world and interact with objects, so the bird's-eye perspective could be doable for now. Thanks.

    Read the article

  • Oracle rownum in db2 - Java data archiving

    - by HonorGod
    I have a data-archiving process in Java that moves data between DB2 and Sybase. (FYI: this is not done through any import/export process, because there are several conditions on each table that are only available at run-time, so this process was developed in Java.)

    Right now I have a single DatabaseReader and DatabaseWriter defined for each source/destination combination, so that data is moved in multiple threads. I want to expand this further so I can have multiple DatabaseReaders and multiple DatabaseWriters defined for each source/destination combination. For example, if the source data is about 100 rows and I define 10 readers and 10 writers, each reader will read 10 rows and hand them to a writer. I hope this will give me extreme performance, depending on the resources available on the server (CPU, memory, etc.).

    But these source tables do not have primary keys, and it is extremely difficult to grab rows in multiple sets. Oracle provides the rownum concept, and I guess life is much simpler there... but what about DB2? How can I achieve this behavior with DB2? Is there a way to say "fetch the first 10 records, then fetch the next 10 records", and so on? Any suggestions/ideas?

    DB2 version: DB2 v8.1.0.144, Fix Pack 16, Linux
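    For reference, DB2 has an equivalent of Oracle's rownum in the OLAP function ROW_NUMBER(), which DB2 v8 already supports; a sketch of range-based chunking (table name hypothetical; without a deterministic ORDER BY, the assignment of rows to chunks is not guaranteed to be stable across queries):

        SELECT *
        FROM (
            SELECT t.*, ROW_NUMBER() OVER () AS rn
            FROM source_table t
        ) AS numbered
        WHERE rn BETWEEN 11 AND 20;

    Each reader would then be handed its own BETWEEN range; alternatively, a single reading pass that deals rows round-robin to the writers sidesteps the stability caveat entirely.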

    Read the article

  • Find unique vertices from a 'triangle-soup'

    - by sum1stolemyname
    I am building a CAD-file converter on top of two libraries (Opencascade and DWF Toolkit). However, my question is platform-agnostic.

    Given: I have generated a mesh, as a list of triangular faces, from a model constructed through my application. Each triangle is defined through three vertices, each of which consists of three floats (x, y & z coordinates). Since the triangles form a mesh, most of the vertices are shared by more than one triangle.

    Goal: I need to find the list of unique vertices, and to generate an array of faces consisting of tuples of three indices into this list.

    What I want to do is this:

        // step 1: build a list of unique vertices
        for each triangle
            for each vertex in triangle
                if not vertex in listOfVertices
                    add vertex to listOfVertices

        // step 2: build a list of faces
        for each triangle
            for each vertex in triangle
                get vertex index from listOfVertices
                addToMap(vertex index, triangle)

    While I do have an implementation which does this, step 1 (the generation of the list of unique vertices) is really slow, on the order of O(n²), since each vertex is compared to all vertices already in the list. I thought, "Hey, let's build a hashmap of my vertices' components using std::map, that ought to speed things up!", only to find that generating a unique key from three floating-point values is not a trivial task.

    Here, the experts of Stack Overflow come into play: I need some kind of hash function which works on 3 floats, or any other function generating a unique value from a 3D vertex position.
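    Since the duplicate vertices come from the very same source floats, exact bitwise equality is enough, and the raw bit patterns can feed the hash; a C++11 sketch (all names illustrative):

        #include <cstdint>
        #include <cstring>
        #include <unordered_map>
        #include <vector>

        struct Vertex { float x, y, z; };

        // Hash the raw bit patterns; exact duplicates hash identically.
        struct VertexHash {
            std::size_t operator()(const Vertex& v) const {
                uint32_t b[3];
                std::memcpy(b, &v, sizeof b);      // 3 floats -> 3 words
                std::size_t h = 0;
                for (uint32_t w : b)
                    h = h * 1000003u ^ w;          // simple hash combine
                return h;
            }
        };
        struct VertexEq {
            bool operator()(const Vertex& a, const Vertex& b) const {
                return a.x == b.x && a.y == b.y && a.z == b.z;
            }
        };

        // Returns the index of v, inserting it on first sight: O(1) average,
        // so step 1 becomes linear in the number of triangle corners.
        std::size_t indexOf(const Vertex& v,
                            std::vector<Vertex>& verts,
                            std::unordered_map<Vertex, std::size_t,
                                               VertexHash, VertexEq>& seen)
        {
            auto it = seen.find(v);
            if (it != seen.end()) return it->second;
            verts.push_back(v);
            seen.emplace(v, verts.size() - 1);
            return verts.size() - 1;
        }

    An ordered std::map keyed on the (x, y, z) tuple works too, at O(log n) per lookup; the bit-pattern trick only breaks down if the mesh contains vertices that are nearly but not exactly equal, in which case the coordinates would have to be quantized first.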

    Read the article

  • Rails apps blew up on mediatemple's (dv) server

    - by BandsOnABudget
    I managed to fix this issue, but I wanted to document it here for any others who might have similar problems.

    I'm running a MediaTemple (dv) Rage server. monit started sending me alerts that I was having resource limitations on the server. I logged into Plesk and the CPU was pinned at 99.9%. Rebooted the server, catastrophe avoided... Not quite: all my Rails apps were not loading.

    My setup:
      - Ruby 1.8.6
      - Rails 2.3.5 with Passenger installed as an Apache module

    I noticed a defunct Ruby process, so I killed it and rebooted the server, but Ruby continued to come back as defunct. I started trawling through the Apache log and saw errors related to updating RubyGems. I updated to the latest version but then continued to get gem errors. Basically, I had to go through all my apps and update any gems manually, then reboot Apache, and all was restored. I'm not really sure of the cause of the issue, but I wanted to note it for posterity. Has anybody else in the community ever had similar issues?

    Read the article

  • Static Data Structures on Embedded Devices (Android in particular)

    - by Mark
    I've started working on some Android applications and have a question regarding how people normally deal with situations where you have a static data set and an application where that data is needed in memory as one of the standard Java collections or as an array.

    In my current case I have a spreadsheet with some pre-calculated data. It consists of ~100 rows and 3 columns: one column is a string, one is a float, one is an integer. I need access to this data as arrays in Java. It seems like I could:

      1. Encode it in XML: decoding this would be CPU-intensive, in my experience.
      2. Build it into a SQLite database: seems like a lot of overhead for static data I only need array-style access to in RAM.
      3. Build it into a binary blob and read it in (never done this in Java; I miss void *).
      4. Write a Python script that takes the CSV version of my data and spits out a Java function that adds the values to my desired structure as hard-coded values.
      5. Store a string array via Android's resource mechanism and compute the other 2 columns at application load. In my case the computation would require a lot of calls to Math.log, Math.pow and Math.floor, which I'd rather not do, for load-time and battery-usage reasons.

    I mostly work on low-power embedded applications in C, and as such #4 is what I'm used to doing in these situations (see the sketch below). It just seems like it should be far easier to gain access to static data structures in Java/Android. Perhaps I'm just being too battery-conscious, and in my single case I imagine the answer is that it doesn't matter much, but if every application took that stance it could begin to matter. What approaches do people usually take in this situation? Anything I missed?
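    For option 4, the generator's output might look like this (a sketch; every name and value shown is illustrative): three parallel static arrays compiled straight into the app, with no parsing or computation at load time.

        // Generated file - do not edit; produced from precalc.csv
        final class PrecalcTable {
            static final String[] NAMES  = { "alpha", "beta", "gamma" /* ... ~100 rows */ };
            static final float[]  VALUES = { 0.6931f, 1.0986f, 1.3863f /* ... */ };
            static final int[]    CODES  = { 1, 2, 4 /* ... */ };

            private PrecalcTable() {}   // no instances; a static lookup table only
        }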

    Read the article

  • What is the fastest way to pull a few element values out of XML files in Perl?

    - by Anon Guy
    I have a bunch of XML files that are about 1-2 megabytes in size. Actually, more than a bunch: there are millions. They're all well-formed, and many are even validated against their schema (confirmed with libxml2). All were created by the same app, so they're in a consistent format (though this could theoretically change in the future).

    I want to check the value of one element in each file from within a Perl script. Speed is important (I'd like to take less than a second per file), and as noted, I already know the files are well-formed.

    I am sorely tempted to simply open the files in Perl and scan through until I see the element I am looking for, grab the value (which is near the start of the file), and close the file. On the other hand, I could use an XML parser (which might protect me from future changes to the XML formatting), but I suspect it will be slower than I'd like. Can anyone recommend an appropriate approach and/or parser? Thanks in advance.
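    A sketch of the scan-and-bail approach the poster is tempted by (element name hypothetical; it only stays safe while the producing app keeps the element on one line and unnested):

        use strict;
        use warnings;

        sub grab_element {
            my ($path, $elem) = @_;
            open my $fh, '<', $path or die "open $path: $!";
            while (my $line = <$fh>) {
                if ($line =~ m{<\Q$elem\E>([^<]*)</\Q$elem\E>}) {
                    close $fh;
                    return $1;   # the element sits near the start, so we bail out early
                }
            }
            close $fh;
            return undef;
        }

        my $value = grab_element('some_file.xml', 'status');

    Because the value is near the start of each file, this reads only a few kilobytes per file instead of parsing 1-2 MB, which is where the per-file time would otherwise go.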

    Read the article

  • Uploadify and rails 3 authenticity tokens

    - by Ceilingfish
    Hi chaps, I'm trying to get a file upload progress bar working in a Rails 3 app using Uploadify (http://www.uploadify.com), and I'm stuck at authenticity tokens. My current Uploadify config looks like:

        <script type="text/javascript" charset="utf-8">
          $(document).ready(function() {
            $("#zip_input").uploadify({
              'uploader': '/flash/uploadify.swf',
              'script': $("#upload").attr('action'),
              'scriptData': {
                'format': 'json',
                'authenticity_token': encodeURIComponent('<%= form_authenticity_token if protect_against_forgery? %>')
              },
              'fileDataName': "world[zip]",
              //'scriptAccess': 'always', // Uncomment this if for some reason it doesn't work
              'auto': true,
              'fileDesc': 'Zip files only',
              'fileExt': '*.zip',
              'width': 120,
              'height': 24,
              'cancelImg': '/images/cancel.png',
              // We assume that we can refresh the list by doing a JS get on the current page
              'onComplete': function(event, data) { $.getScript(location.href) },
              'displayData': 'speed'
            });
          });
        </script>

    But I am getting this response from Rails:

        Started POST "/worlds" for 127.0.0.1 at 2010-04-22 12:39:44

        ActionController::InvalidAuthenticityToken (ActionController::InvalidAuthenticityToken):

        Rendered /opt/local/lib/ruby/gems/1.8/gems/actionpack-3.0.0.beta3/lib/action_dispatch/middleware/templates/rescues/_trace.erb (1.0ms)
        Rendered /opt/local/lib/ruby/gems/1.8/gems/actionpack-3.0.0.beta3/lib/action_dispatch/middleware/templates/rescues/_request_and_response.erb (6.6ms)
        Rendered /opt/local/lib/ruby/gems/1.8/gems/actionpack-3.0.0.beta3/lib/action_dispatch/middleware/templates/rescues/diagnostics.erb within rescues/layout (12.2ms)

    This appears to be because I'm not sending the authentication cookie along with the request. Does anyone know how I can get the values I should be sending there, and how I can make Rails read them from the HTTP POST rather than trying to find them in a cookie?

    Read the article

  • How do I compress a Json result from ASP.NET MVC with IIS 7.5

    - by Gareth Saul
    I'm having difficulty making IIS 7 correctly compress a JSON result from ASP.NET MVC. I've enabled static and dynamic compression in IIS. I can verify with Fiddler that normal text/html and similar responses are compressed. Viewing the request, the accept-encoding gzip header is present. The response has the MIME type "application/json", but is not compressed.

    I've identified that the issue appears to relate to the MIME type. When I include mimeType="*/*", I can see that the response is correctly gzipped. How can I get IIS to compress WITHOUT using a wildcard mimeType? I assume this issue has something to do with the way ASP.NET MVC generates content-type headers.

    The CPU usage is well below the dynamic throttling threshold. When I examine the trace logs from IIS, I can see that it fails to compress due to not finding a matching MIME type.

        <httpCompression directory="%SystemDrive%\inetpub\temp\IIS Temporary Compressed Files"
                         noCompressionForProxies="false">
          <scheme name="gzip" dll="%Windir%\system32\inetsrv\gzip.dll" />
          <dynamicTypes>
            <add mimeType="text/*" enabled="true" />
            <add mimeType="message/*" enabled="true" />
            <add mimeType="application/x-javascript" enabled="true" />
            <add mimeType="application/json" enabled="true" />
          </dynamicTypes>
          <staticTypes>
            <add mimeType="text/*" enabled="true" />
            <add mimeType="message/*" enabled="true" />
            <add mimeType="application/x-javascript" enabled="true" />
            <add mimeType="application/atom+xml" enabled="true" />
            <add mimeType="application/xaml+xml" enabled="true" />
            <add mimeType="application/json" enabled="true" />
          </staticTypes>
        </httpCompression>

    Read the article

  • Setting up a "cookieless domain" to improve site performance

    - by Django Reinhardt
    I was reading Google's documentation about improving site speed. One of their recommendations is serving static content (images, CSS, JS, etc.) from a "cookieless domain":

        Static content, such as images, JS and CSS files, don't need to be accompanied by cookies, as there is no user interaction with these resources. You can decrease request latency by serving static resources from a domain that doesn't serve cookies.

    Google then says that the best way to do this is to buy a new domain and point it at your current one:

        To reserve a cookieless domain for serving static content, register a new domain name and configure your DNS database with a CNAME record that points the new domain to your existing domain A record. Configure your web server to serve static resources from the new domain, and do not allow any cookies to be set anywhere on this domain. In your web pages, reference the domain name in the URLs for the static resources.

    This is pretty straightforward stuff, except for the bit where it says to "configure your web server to serve static resources from the new domain, and do not allow any cookies to be set anywhere on this domain". From what I've read, there's no setting in IIS that lets you say "serve static resources", so how do I prevent ASP.NET from setting cookies on this new domain? At present, even if I'm just requesting a .jpg from the new domain, it sets a cookie on my browser, even though our application's cookies are set on our old domain. For example, ASP.NET sets an ".ASPXANONYMOUS" cookie that (as far as I'm aware) we're not telling it to set.

    Apologies if this is a real newb question, I'm new at this! Thanks.
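    For what it's worth, the .ASPXANONYMOUS cookie comes from ASP.NET's anonymous identification feature, so the usual approach is to switch off every cookie-producing feature in the static domain's own configuration; a hedged web.config sketch, assuming the cookieless domain runs as its own IIS site/application:

        <system.web>
          <!-- turn off the features that set cookies on this host -->
          <anonymousIdentification enabled="false" />
          <sessionState mode="Off" />
          <authentication mode="None" />
          <roleManager enabled="false" />
        </system.web>

    With those off, static requests to the new host should carry no Set-Cookie headers, which is the whole point of the separate domain.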

    Read the article

  • java distributed cache for low latency, high availability

    - by Shahbaz
    I've never used distributed caches/DHTs like memcached, JBoss Cache, Ehcache, etc. I'm wondering which, if any, is appropriate for my use.

    First, I'm not doing web applications (most of these projects seem to be geared towards web apps). I write servers (order management systems, actually) for financial trading firms. The servers themselves are not too complicated. They need to receive information (market data, orders, executions, etc.) and route it to its destination, while possibly transforming some of the messages. I am looking at these products to solve the following problems:

      - A safe repository for the state of the server. I'd rather build the logic of my application as a bunch of transformers (similar to Apache Camel) and store the state in a 'safe' place.
      - This repository should be distributed: in case one of these data stores crashes, one or two more should be up, and I should be able to switch to them seamlessly.
      - This repository should be fast. Single-digit milliseconds count here; in other words, the systems which consume/process this data are automated systems, not humans clicking on links. The system needs high throughput and low latency. By sending my data outside the process I am necessarily slowing performance, but I am trying to balance absolute raw speed against absolute protection of data.
      - This repository should be safe. Similar to the point about several online backups, this system needs to write data to disk (potentially more than one disk).

    I'd really like to stop writing my own 'transaction servers.' Am I correct to be looking into projects such as JBoss Cache, Ehcache, etc.? Thanks

    Read the article
