Search Results

Search found 9889 results on 396 pages for 'pointer speed'.


  • c - dereferencing issue

    - by Joe
    Hi, I have simplified an issue that I've been having in order to isolate the problem, but it is not helping. I have a two-dimensional char array to represent memory. I want to pass a reference to that simulation of memory to a function. In the function, to test the contents of the memory, I just want to iterate through the memory and print out the contents of each row. The program prints out the first row and then I get a seg fault. My program is as follows:

        #include <stdio.h>
        #include <stdlib.h>
        #include <ctype.h>
        #include <string.h>

        void test_memory(char*** memory_ref) {
            int i;
            for(i = 0; i < 3; i++) {
                printf("%s\n", *memory_ref[i]);
            }
        }

        int main() {
            char** memory;
            int i;
            memory = calloc(sizeof(char*), 20);
            for(i = 0; i < 20; i++) {
                memory[i] = calloc(sizeof(char), 33);
            }
            memory[0] = "Mem 0";
            memory[1] = "Mem 1";
            memory[2] = "Mem 2";
            printf("memory[1] = %s\n", memory[1]);
            test_memory(&memory);
            return 0;
        }

    This gives me the output:

        memory[1] = Mem 1
        Mem 0
        Segmentation fault

    If I change the program and create a local version of the memory in the function by dereferencing memory_ref, then I get the right output. The program is identical except that test_memory becomes:

        void test_memory(char*** memory_ref) {
            char** memory = *memory_ref;
            int i;
            for(i = 0; i < 3; i++) {
                printf("%s\n", memory[i]);
            }
        }

    and it gives me the following output:

        memory[1] = Mem 1
        Mem 0
        Mem 1
        Mem 2

    which is what I want. But making a local version of the memory is useless, because I need to be able to change the values of the original memory from the function, which I can only do by dereferencing the pointer to the original 2D char array. I don't understand why I should get a seg fault the second time round, and I'd be grateful for any advice. Many thanks, Joe
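    The crash comes down to operator precedence, not to the reference-passing itself: [] binds tighter than unary *, so *memory_ref[i] parses as *(memory_ref[i]). For i = 0 that happens to equal (*memory_ref)[0], which is why the first row prints, but for i > 0 it indexes past the single char** that was passed in, which is undefined behavior. A minimal sketch of the corrected program follows (the parenthesized dereference is the key change; the strcpy calls are also worth noting, since assigning string literals over the calloc'd rows leaks them and leaves the array pointing at read-only memory):

        #include <stdio.h>
        #include <stdlib.h>
        #include <string.h>

        void test_memory(char*** memory_ref) {
            int i;
            for(i = 0; i < 3; i++) {
                /* (*memory_ref)[i], not *(memory_ref[i]) */
                printf("%s\n", (*memory_ref)[i]);
            }
        }

        int main(void) {
            char** memory = (char**)calloc(20, sizeof(char*));
            int i;
            for(i = 0; i < 20; i++) {
                memory[i] = (char*)calloc(33, sizeof(char));
            }
            strcpy(memory[0], "Mem 0");  /* copy into the allocated rows  */
            strcpy(memory[1], "Mem 1");  /* rather than overwriting the   */
            strcpy(memory[2], "Mem 2");  /* row pointers with literals    */
            test_memory(&memory);
            return 0;
        }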


  • Slide div inside another div effect

    - by mariki
    I am trying to create a sliding-images effect (online demo link). The problem is that when you click rapidly and randomly on the buttons, without waiting for the animation to finish, you get unpredictable results, and the functionality can even stop working (once it stopped in the middle of two images...). How can I make it work without bugs? I know that there are other tools for such sliding, like jQuery, and other approaches, like changing the position attribute instead of the scrollLeft attribute. But I want to do it as it is, with scrollLeft, if that's possible of course. The code:

        <html>
        <head>
        <meta http-equiv="Content-Type" content="text/html; charset=iso-8859-1">
        <title>Untitled Document</title>
        <style type="text/css">
        body { padding:0px; margin:0px; }
        #imageContainer { width: 500px; position: relative; }
        #div { overflow: hidden; white-space: nowrap; }
        #div img { cursor: pointer; vertical-align: bottom; width: 500px; padding:0px; margin:0px; display:inline; }
        </style>
        <script type="text/javascript">
        var scrollIntervalID;
        var currentIndex = 0;
        function Scroll(ind) {
            var dir = ind == currentIndex ? 0 : ind < currentIndex ? -1 : 1;
            var steps = Math.abs(ind - currentIndex);
            scrollIntervalID = setInterval(function() {
                var i = (steps * 10) - 1;
                return function() {
                    if (i <= 0) clearInterval(scrollIntervalID);
                    var elm = document.getElementById("div");
                    elm.scrollLeft += dir * 50;
                    i--;
                    document.getElementById("span").innerHTML = elm.scrollLeft;
                }
            }(), 15);
            currentIndex = ind;
        }
        </script>
        </head>
        <body>
        <div id="imageContainer">
            <div id="div">
                <img id="image1" src="Images/pic1.jpg" width="500px"/><img id="image2" src="Images/pic2.jpg" width="500px"/><img id="image3" src="Images/pic3.jpg" width="500px"/><img id="image4" src="Images/pic4.jpg" width="500px"/>
            </div>
        </div>
        <div>
            <input type="button" value="1" onclick="Scroll(0);"/>
            <input type="button" value="2" onclick="Scroll(1);"/>
            <input type="button" value="3" onclick="Scroll(2);"/>
            <input type="button" value="4" onclick="Scroll(3);"/>
        </div>
        <span id="span"></span>
        </body>
        </html>


  • swf listens to XML, AS3

    - by VideoDnd
    My swf listens to XML from a socket and from an XML document. How do I get 'my variables' to grab XML from the socket instead of the XML document?

    PURPOSE: My purpose is to control variables with an XML socket server. I hope my question is clear; please ask if there are any questions.

    EXAMPLE

    Flash file:

        import flash.net.*;
        import flash.display.*;
        import flash.events.*;
        import flash.system.Security;
        import flash.utils.Timer;
        import flash.events.TimerEvent;

        //MY VARIABLES, LINE 8-12
        var timer:Timer = new Timer(10);
        var myString:String = "";
        var count:int = 0;
        var myStg:String = "";
        var fcount:int = 0;

        var xml_s = new XMLSocket();
        xml_s.addEventListener(Event.CONNECT, socket_event_catcher); //OnConnect//
        xml_s.addEventListener(Event.CLOSE, socket_event_catcher); //OnDisconnect//
        xml_s.addEventListener(IOErrorEvent.IO_ERROR, socket_event_catcher); //Unable To Connect//
        xml_s.addEventListener(DataEvent.DATA, socket_event_catcher); //OnData//
        xml_s.connect("localhost", 1999);

        function socket_event_catcher(Event):void {
            switch (Event.type) {
                case 'ioError':
                    trace("ioError: " + Event.text); //Unable to Connect :(//
                    break;
                case 'connect':
                    trace("Connection Established!"); //Connected :)//
                    break;
                case 'data':
                    trace("Received Data: " + Event.data);
                    //var myXML:XML = new XML(Event.data);
                    //trace("myXML.body.file: " + myXML.body.file);
                    //var myLoader:String = myXML.body.file;
                    //var urlReq:URLRequest = new URLRequest(myLoader);
                    //trace("file to load URL: " + urlReq);
                    break;
                case 'close':
                    trace("Connection Closed!"); //OnDisconnect :( //
                    xml_s.close();
                    break;
            }
        }

        //LOAD XML
        var myXML:XML;
        var myLoader:URLLoader = new URLLoader();
        myLoader.load(new URLRequest("time.xml"));
        myLoader.addEventListener(Event.COMPLETE, processXML);
        //var myXML:XML = new XML(Event.data);
        //trace("myXML.body.file: " + myXML.body.file);
        //var DOINK:String = myXML.body.file;
        //var urlReq:URLRequest = new URLRequest(DOINK);
        //trace("file to load URL: " + urlReq);

        //PARSE XML
        function processXML(e:Event):void {
            myXML = new XML(e.target.data);
            trace(myXML.COUNT.text()); //-77777
            //grab the data as a string
            myString = myXML.COUNT.text()
            //grab the data as an int
            count = int(myXML.COUNT.text());
            //grab the data as a string
            myString = myXML.COUNT.text()
            //grab the data as an int
            count = int(myXML.COUNT.text());
            //grab the data as a string
            myStg = myXML.COUNT.text()
            //grab the data as an int
            fcount = int(myXML.COUNT.text());
            //grab the data as a string
            myStg = myXML.COUNT.text()
            //grab the data as an int
            fcount = int(myXML.COUNT.text());
            trace("String: ", myString);
            trace("Int: ", count);
            trace(count - 1); //just to show you that it's a number that you can do math with (-77778)

            //TEXT
            var text:TextField = new TextField();
            text.text = myString;
            addChild(text);
        }

    Ruby server 'snippet':

        msg1 = {"msg" => {"head" => {"type" => "frctl", "seq_no" => seq_no, "version" => 1.0},
                "SESSION" => {"text" => "88888", "timer" => -1000, "count" => 10, "fcount" => "10"}}}

    XML:

        <?xml version="1.0" encoding="utf-8"?>
        <SESSION>
          <TIMER TITLE="speed">100</TIMER>
          <COUNT TITLE="starting position">88888</COUNT>
          <FCOUNT TITLE="ramp">1000</FCOUNT>
        </SESSION>

    ENVIRONMENT: AS3, Ruby 186-25

    PROBLEM: Coding errors, "coercion of value"


  • Understanding the code: how the heap is written in process migration in Solaris

    - by akshay
    Hi guys, I need help understanding what this piece of code actually does, as it is part of my project and I am stuck here. The code is from libckpt on Solaris.

        /**********************************
         * function: write_heap
         * args: map_fd -- file descriptor for map file
         *       data_fd -- file descriptor for data file
         * returns: no. of chunks written on success, -1 on failure
         * side effects: writes all included segments of the heap to ckpt files
         * misc.: If we are forking and copyonwrite is set, we will write the heap
         *        from bottom to top, moving the brk pointer up each time so that
         *        we don't get a page copied if the
         * called from: take_ckpt()
         ***********************************/
        static int write_heap(int map_fd, int data_fd)
        {
            Dlist curptr, endptr;
            int no_chunks = 0, pn;
            long size;
            caddr_t stop, addr;

            if (ckptflags.incremental) {   /*-- incremental checkpointing on? --*/
                endptr = ckptglobals.inc_list->main->flink;

                /*-- for each included chunk of the heap --*/
                for (curptr = ckptglobals.inc_list->main->blink->blink; curptr != endptr; curptr = curptr->blink) {

                    /*-- write out the last page in the included chunk --*/
                    stop = curptr->addr;
                    pn = ((long)curptr->stop - (long)sys.DATASTART) / PAGESIZE;
                    if (isdirty(pn)) {
                        addr = (caddr_t)max((long)curptr->addr, (long)((pn * PAGESIZE) + sys.DATASTART));
                        size = (long)curptr->stop - (long)addr;
                        debug(stderr, "DEBUG: Writing heap from 0x%x to 0x%x, pn = %d\n", addr, addr+size, pn);
                        if (write_chunk(addr, size, map_fd, data_fd) == -1) {
                            return -1;
                        }
                        if ((int)addr > (int)(&end) && ckptflags.enhanced_fork) {
                            brk(addr);
                        }
                        no_chunks++;
                    }

                    /*-- write out all the whole pages in the middle of the chunk --*/
                    for (pn--; pn * PAGESIZE + sys.DATASTART >= stop; pn--) {
                        if (isdirty(pn)) {
                            addr = (caddr_t)((pn * PAGESIZE) + sys.DATASTART);
                            debug(stderr, "DEBUG: Writing heap from 0x%x to 0x%x, pn = %d\n", addr, addr+PAGESIZE, pn);
                            if (write_chunk(addr, PAGESIZE, map_fd, data_fd) == -1) {
                                return -1;
                            }
                            if ((int)addr > (int)(&end) && ckptflags.enhanced_fork) {
                                brk(addr);
                            }
                            no_chunks++;
                        }
                    }

                    /*-- write out the first page in the included chunk --*/
                    addr = curptr->addr;
                    size = ((pn+1) * PAGESIZE + sys.DATASTART) - addr;
                    if (size > 0 && (isdirty(pn))) {
                        debug(stderr, "DEBUG: Writing heap from 0x%x to 0x%x\n", addr, addr+size);
                        if (write_chunk(addr, size, map_fd, data_fd) == -1) {
                            return -1;
                        }
                        if ((int)addr > (int)(&end) && ckptflags.enhanced_fork) {
                            brk(addr);
                        }
                        no_chunks++;
                    }
                }
            }
            else {   /*-- incremental checkpointing off! --*/
                endptr = ckptglobals.inc_list->main->blink;

                /*-- for each included chunk of the heap --*/
                for (curptr = ckptglobals.inc_list->main->flink->flink; curptr != endptr; curptr = curptr->flink) {
                    debug(stderr, "DEBUG: saving memory from 0x%x to 0x%x\n", curptr->addr, curptr->addr+curptr->size);
                    if (write_chunk(curptr->addr, curptr->size, map_fd, data_fd) == -1) {
                        return -1;
                    }
                    if ((int)addr > (int)(&end) && ckptflags.enhanced_fork) {
                        brk(addr);
                    }
                    no_chunks++;
                }
            }
            return no_chunks;
        }
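    Reading the control flow: in the incremental branch, each included chunk is walked from its last page down to its first, and only pages whose dirty bit is set (isdirty(pn)) are written; per the header comment, when forking with copy-on-write enabled, brk() is moved past what has already been written so those pages never get copied into the child. Everything else is arithmetic converting between heap addresses and page numbers relative to the start of the data segment. A minimal sketch of that mapping, with assumed stand-ins for sys.DATASTART and PAGESIZE:

        #define PAGESIZE_  4096UL        /* assumed page size          */
        #define DATASTART_ 0x20000000UL  /* assumed data-segment base  */

        /* page number containing a given heap address */
        long page_of(const char *a) {
            return (long)(((unsigned long)a - DATASTART_) / PAGESIZE_);
        }

        /* first address of a given page */
        char *addr_of(long pn) {
            return (char *)((unsigned long)pn * PAGESIZE_ + DATASTART_);
        }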


  • C++ std::vector problems

    - by Faur Ioan-Aurel
    For two days I have tried to explain to myself some of the things that are happening in my C++ code, and I can't find a good explanation. I should say that I'm more of a Java programmer. A long time ago I used C quite a bit, but I guess Java erased those skills, and now I'm hitting a wall trying to port a few classes from Java to C++. So let's say that we have these two classes:

        class ForwardNetwork {
        protected:
            ForwardLayer* inputLayer;
            ForwardLayer* outputLayer;
            vector<ForwardLayer*> layers;
        public:
            void ForwardNetwork::getLayers(std::vector<ForwardLayer*>& result) {
                for(int i = 0; i < layers.size(); i++) {
                    ForwardLayer* lay = dynamic_cast<ForwardLayer*>(this->layers.at(i));
                    if(lay != NULL)
                        result.push_back(lay);
                    else
                        cout << "Layer at#" << i << " is null" << endl;
                }
            }

            void ForwardNetwork::addLayer(ForwardLayer* layer) {
                if(layer != NULL)
                    cout << "Before push layer is not null" << endl;
                //setup the forward and back pointer
                if(this->outputLayer != NULL) {
                    layer->setPrevious(this->outputLayer);
                    this->outputLayer->setNext(layer);
                }
                //update the input layer and outputLayer variables
                if(this->layers.size() == 0)
                    this->inputLayer = this->outputLayer = layer;
                else
                    this->outputLayer = layer;
                //push layer in vector
                this->layers.push_back(layer);
                for(int i = 0; i < layers.size(); i++)
                    if(layers[i] != NULL)
                        cout << "Check::Layer[" << i << "] is not null!" << endl;
            }
        };

    Second class:

        class Backpropagation : public Train {
        public:
            Backpropagation::Backpropagation(FeedForwardNetwork* network) {
                this->network = network;
                vector<FeedforwardLayer*> vec;
                network->getLayers(vec);
            }
        };

    Now, if I add some layers into the network from main() via the addLayer(..) method, it's all good: my vector is just as it should be. But after I call the Backpropagation() constructor with a network object, when I enter getLayers(), some of my objects in the vector have their address set to NULL. They are randomly chosen: for example, if I run my app once with three layers in the vector, the first object in the vector is null. If I run it a second time, the first two objects are null; a third time, just the first object is null; and so on. Now I can't explain why this is happening. I should say that all the objects that should be in the vector also live inside the network, and those are not NULL. This happens everywhere after I am done with addLayer(), so not just in getLayers(). I can't get a good grasp on this problem. I thought at first that I might be modifying my vector somewhere, but I can't find any such thing. Also, why, if the reference in the vector is NULL, is the reference that lives inside ForwardNetwork as a linked list (inputLayer and outputLayer) not NULL? I must ask for your help. Please, if you have some advice, don't hesitate! PS: as compiler I use g++, part of gcc 4.6.1, under Ubuntu 11.10.
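    Two things stand out in the code as shown, offered as things to rule out rather than a diagnosis. First, ForwardNetwork never initializes inputLayer and outputLayer, and addLayer() reads this->outputLayer before anything has written to it; with a non-zeroed heap that read is undefined behavior. Second, the constructor parameter is a FeedForwardNetwork* while the class shown is ForwardNetwork (likewise FeedforwardLayer vs. ForwardLayer); if those names correspond to two slightly different class definitions visible in different translation units, that is an ODR violation, and "members randomly become null" is exactly how such layout mismatches tend to manifest. A minimal sketch of the first fix:

        class ForwardNetwork {
        protected:
            ForwardLayer* inputLayer;
            ForwardLayer* outputLayer;
            std::vector<ForwardLayer*> layers;
        public:
            // addLayer() tests outputLayer != NULL, so the pointers must be
            // zeroed explicitly; class members are not zero-initialized by default
            ForwardNetwork() : inputLayer(NULL), outputLayer(NULL) {}
        };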


  • How do I solve "Two different CRTLDLLs are loaded" when using packages in C++ Builder 2010?

    - by David M
    Hi, We are trying to split up our monolithic EXE into a combination of an EXE and several packages. So far, we have one package that we're trying to use, and when running the EXE Codeguard shows the following error on startup: CG Error Two different CRTLDLLs are loaded. CG might report false errors (C:\Windows\system32\CC32100MT.DLL) (D:\Projects\Foo\Bar.bpl) OK I read this as two different runtime libraries being loaded - one, the correct one (CC32100MT.dll), one incorrect, which is the package we're trying to use. Continuing to run the program shows odd errors, especially casting between classes or passing a pointer to a class as a parameter in a method that crosses the EXE/DLL boundary. Codeguard itself doesn't show any other errors at all though. How do we solve this? Some more details We've looked at as many things as we (the developer working on this and I) can collectively think of: Each project is built using runtime packages. The EXE host lists Bar in its package list. Each project is set to compile with dynamic RTL. However, changing this does not solve the problem. The package is linked to the EXE via its BPI file, but linking via a LIB makes no difference either. The EXE and BPL are compiled with the same project settings, where the same options exist for both types of project. We think, anyway :) There is only one copy of the BPL and BPI on the system: it's definitely linking to the right one. Examining the EXE and BPL with Depends and TDump show they are both using C:\Windows\system32\CC32100MT.DLL. They should both be using the one RTL. Creating a new project (a plain VCL forms application) and linking to the BPL (via its BPI) works fine. Something in the process of adding all the files and LIBs that make our EXE contain the code it needs to changes this, but we haven't been able to figure out what. The LIBs all either correspond to DLLs we use (flat C interface, usually look as though they were built with MSVC) or are simple projects with lots of related files, compiled to a lib for the purpose of linking into the EXE - these correspond roughly to the areas of the program we want to split to BPLs, by the way. There don't seem to be project options for the LIB projects that would affect RTL linking, unless we've missed them. I have exhaustively hunted through Depends and looked at all RTL and CC32*.dll files the EXE and every single DLL references. All are identical: rtl140.bpl and CC32100MT.DLL. Fully qualified paths show they are the same files, too. Everything should be using the one same run-time library. We're stumped. Absolutely stumped. We've had other problems using BPLs (they seem to be surprisingly tricky things, especially using C++) but have managed to solve them all. This one we've had no luck at all and we'd really appreciate any insights :) We're using C++Builder 2010 (as part of RAD Studio actually, but with little Delphi code apart from components.)


  • Blogger issue with custom background images on Chrome and Safari

    - by hdx
    This is weird. The site in question is blog.andrebatista.info, and it is hosted at blogger.com. I'm trying to make the Blogger template look the same as the one on my main website, www.andrebatista.info. For some reason, if I go directly to the blog address, Chrome and Safari fail to display all of my background images... all of them. However, if I first go to www.andrebatista.info and then go to the blog, it renders just fine. Why? The way I'm customizing it is by adding a link to my main site's stylesheet at the very end of the head tag in the Blogger template. That stylesheet is shown below:

        * { margin:0; padding:0; border:0; }
        html, body { background:#064169 url(http://www.andrebatista.info/images/main_gradient_slice.jpg) repeat-x; font-family: Arial, "MS Trebuchet", sans-serif; font-size:18px; }
        #main_wrapper { margin: 0 auto; width:1024px; }
        #header { background: url(http://www.andrebatista.info/images/header.jpg); height:133px; border:none; margin:0; }
        #menu_wrapper { background: url(http://www.andrebatista.info/images/menu.jpg); height:34px; overflow:hidden; }
        #menu_wrapper .menu_item { color:white; float:left; border: 1px solid red; height:34px; padding-top:10px; text-align:center; width:100px; }
        #menu_wrapper .first { padding-left:240px; float:left; width:100px; }
        #menu_wrapper .active, #menu_wrapper .menu_item:hover { background-color:white; color:Teal; cursor:pointer; }
        #content_area_wrapper { background: url(http://www.andrebatista.info/images/body_bg_slice.jpg) repeat-y; }
        #content_area { min-height:524px; background: url(http://www.andrebatista.info/images/main_content_top.jpg) repeat-x; }
        #main_banner { background: url(http://www.andrebatista.info/images/main_banner.jpg) no-repeat center center; width:662px; height:338px; margin: 0 auto; }
        #main_banner div { color:white; padding-left:47px; padding-right:164px; padding-top:105px; }
        #text_area { overflow:hidden; width:662px; margin:0 auto; padding:14px; }
        #contentList { padding:0 20px 20px 20px; }
        #text_area .left { width:50%; float:left; }
        #text_area .left { width:50%; float:right; }
        #footer2 { background: url(http://www.andrebatista.info/images/footer.jpg); height:62px; }
        #footer { background: url(http://www.andrebatista.info/images/footer.jpg); height:62px; }

    Any ideas on what I could be missing?


  • jQuery fadeIn and fadeOut buggy on hover of image

    - by user186319
    Hi All, I have four images on a page that, on hover, need to replace the main text with text relevant to that image. It's working, but buggy. If I roll over slowly and roll off slowly, I get the desired effect. When I roll over quickly, the content of both divs shows. Here is a thinned-out version of what I need to be able to do:

        <img src="btn-open.gif" class="btn" />
        <div class="mainText">
            <h1>Main text</h1>
            <p>Morbi mollis auctor magna, eu sodales mi posuere elementum. Donec lacus lorem, vestibulum sed luctus ac, tincidunt sit amet eros. Nullam tristique lectus lobortis nibh pharetra placerat. Aliquam quis tellus mauris. Quisque eu convallis elit. Sed vitae libero est. Suspendisse laoreet magna magna, vitae malesuada diam. Cum sociis natoque penatibus et magnis dis parturient montes, nascetur ridiculus mus. Fusce eros ipsum, interdum et volutpat sed, commodo aliquam odio. Maecenas auctor condimentum mi. Maecenas ante eros, tristique nec viverra sed, molestie sit amet nulla. Suspendisse vitae turpis ac felis rutrum interdum.</p>
        </div>
        <div class="replacementText">
            <h1>Replacement text</h1>
            <p>Nulla ac magna nec quam cursus mollis eget a nulla! Vestibulum quis nibh ipsum, ut vehicula leo. Etiam ac felis suscipit mi semper vehicula. Praesent est mi, suscipit sit amet bibendum at, porta quis elit. Integer lectus est, consequat non sodales ac, pharetra sit amet tellus. Suspendisse porttitor massa a dolor suscipit sed ullamcorper ipsum vehicula. In malesuada augue sit amet ante volutpat euismod. Ut vel felis sed enim placerat ultricies. Aliquam erat volutpat. Vivamus rutrum; ante vitae euismod accumsan, felis odio lacinia magna, eu viverra nisl metus non ligula? In metus nisi, viverra vel scelerisque non, ullamcorper sed arcu? In hac habitasse platea dictumst. Donec laoreet dapibus quam vitae pulvinar. Morbi ut purus nisl. Nulla eu velit ipsum; vel mattis magna. Aenean sodales faucibus dapibus.</p>
        </div>

        $(document).ready(function() {
            $(".btn").hover(
                function() {
                    $(this).css({ cursor: "pointer" });
                    $(".mainText").hide();
                    $(".replacementText").slideDown("slow");
                },
                function() {
                    $(".replacementText").hide();
                    $(".mainText").slideDown("slow");
                });
        });


  • Policy-based template design: How to access certain policies of the class?

    - by dehmann
    I have a class that uses several policies that are templated. It is called Dish in the following example. I store many of these Dishes in a vector (using a pointer to a simple base class), but then I'd like to extract and use them. But I don't know their exact types. Here is the code; it's a bit long, but really simple:

        #include <iostream>
        #include <vector>

        struct DishBase {
            int id;
            DishBase(int i) : id(i) {}
        };

        std::ostream& operator<<(std::ostream& out, const DishBase& d) {
            out << d.id;
            return out;
        }

        // Policy-based class:
        template<class Appetizer, class Main, class Dessert>
        class Dish : public DishBase {
            Appetizer appetizer_;
            Main main_;
            Dessert dessert_;
        public:
            Dish(int id) : DishBase(id) {}
            const Appetizer& get_appetizer() { return appetizer_; }
            const Main& get_main() { return main_; }
            const Dessert& get_dessert() { return dessert_; }
        };

        struct Storage {
            typedef DishBase* value_type;
            typedef std::vector<value_type> Container;
            typedef Container::const_iterator const_iterator;

            Container container;

            Storage() {
                container.push_back(new Dish<int,double,float>(0));
                container.push_back(new Dish<double,int,double>(1));
                container.push_back(new Dish<int,int,int>(2));
            }
            ~Storage() {
                // delete objects
            }
            const_iterator begin() { return container.begin(); }
            const_iterator end() { return container.end(); }
        };

        int main() {
            Storage s;
            for(Storage::const_iterator it = s.begin(); it != s.end(); ++it) {
                std::cout << **it << std::endl;
                std::cout << "Dessert: " << *it->get_dessert() << std::endl; // ??
            }
            return 0;
        }

    The tricky part is here, in the main() function:

        std::cout << "Dessert: " << *it->get_dessert() << std::endl; // ??

    How can I access the dessert? I don't even know the Dessert type (it is templated), let alone the complete type of the object that I'm getting from the storage. This is just a toy example, but I think my code reduces to this. I'd just like to pass those Dish classes around, and different parts of the code will access different parts of them (in the example: the appetizer, main dish, or dessert).
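    There are two separate problems in the marked line. The small one is syntax: it is an iterator over DishBase*, so the call would have to be (*it)->get_dessert(). The fundamental one is that once a dish is stored as a DishBase*, the Dessert type is erased, and the base class has no get_dessert() at all; no cast can recover a type that main() never knows. The usual ways out are a virtual function on the base class that does the dessert-specific work inside Dish (where Dessert is still a known type), or a visitor. A minimal sketch of the virtual-function route:

        struct DishBase {
            int id;
            DishBase(int i) : id(i) {}
            virtual ~DishBase() {}
            // main() only sees DishBase, so whatever it needs must be
            // reachable through a virtual call
            virtual void print_dessert(std::ostream& out) const = 0;
        };

        template<class Appetizer, class Main, class Dessert>
        class Dish : public DishBase {
            Appetizer appetizer_;
            Main main_;
            Dessert dessert_;
        public:
            Dish(int id) : DishBase(id) {}
            void print_dessert(std::ostream& out) const {
                out << dessert_;   // Dessert's concrete type is known here
            }
        };

    with the loop body becoming (*it)->print_dessert(std::cout);. The virtual destructor also matters once objects are deleted through the DishBase* stored in the vector.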


  • jQuery help: slideToggle() is not working properly

    - by John Smith
    Hi all, I'm trying to use jQuery's slideToggle functionality, but I am not able to use it properly. The problem I'm currently facing: my div slides down on the click of the slide image icon, but after that the data div (into which the data is loading) suddenly disappears. The slide-down is perfect, but the div into which the data loads (here, #divMain) is not visible after that. I want to achieve a sliding effect in my code like this one has (please see the "Website Design, Redesign Services" slider). Here is my code:

    HTML:

        <div>
            <div class="jquery_inner_mid">
                <div class="main_heading">
                    <a href="#"><img src="features.jpg" alt="" title="" border="0" /></a>
                </div>
                <div class="plus_sign">
                    <img id="imgFeaturesEx" src="images/plus.jpg" alt="" title="" border="0" />
                    <img id="imgFeaturesCol" src="images/minus.jpg" alt="" title="" border="0" />
                </div>
                <div class="toggle_container">
                    <div id="divMain"></div>
                </div>
            </div>
            <div class="jquery_inner_mid">
                <div class="main_heading">
                    <img src="About.jpg" alt="" title="" />
                </div>
                <div class="plus_sign">
                    <img id="imgTechnoEx" src="images/plus.jpg" alt="" title="" border="0" />
                    <img id="imgTechnoCol" src="images/minus.jpg" alt="" title="" border="0" />
                </div>
                <div class="toggle_container">
                    <div id="divTechnossus"></div>
                </div>
            </div>
        </div>

    jQuery:

        $(function() {
            $('#imgFeaturesCol').hide();
            $('#imgTechnoCol').hide();

            $('#imgFeaturesEx').click(function() {
                $.getJSON("/Visitor/GetFeatureInfo", null, function(strInfo) {
                    $('#divMain').html(strInfo).slideToggle("slow");
                    LoadDiv();
                });
                $('#imgFeaturesEx').hide();
                $('#imgFeaturesCol').show();
            });

            $('#imgFeaturesCol').click(function() {
                $('#divMain').html("").slideToggle("slow");
                $('#imgFeaturesCol').hide();
                $('#imgFeaturesEx').show();
            });

            $('#imgTechnoEx').click(function() {
                $.getJSON("/Visitor/GetTechnossusInfo", null, function(strInfo) {
                    $('#divTechnossus').html(strInfo).slideToggle("slow");
                });
                $('#imgTechnoEx').hide();
                $('#imgTechnoCol').show();
            });

            $('#imgTechnoCol').click(function() {
                $('#divTechnossus').html("").slideToggle("slow");
                $('#imgTechnoCol').hide();
                $('#imgTechnoEx').show();
            });
        })();

        function LoadDiv() {
            $('#s4').cycle({
                speed: 200,
                timeout: 0,
                pager: '#nav'
            });
        }


  • ublas::bounded_vector<> being resized?

    - by n2liquid
    Now, seriously... I'll refrain from using bad words here because we're talking about the Boost fellows. It MUST be my mistake to see things this way, but I can't understand why, so I'll ask it here; maybe someone can enlighten me on this matter. Here goes: uBLAS has this nice class template called bounded_vector<> that's used to create fixed-size vectors (or so I thought). From the Effective uBLAS wiki (http://www.crystalclearsoftware.com/cgi-bin/boost_wiki/wiki.pl?Effective_UBLAS):

        The default uBLAS vector and matrix types are of variable size. Many linear algebra problems involve vectors with fixed size. 2 and 3 elements are common in geometry! Fixed size storage (akin to C arrays) can be implemented efficiently as it does not involve the overheads (heap management) associated with dynamic storage. uBLAS implements fixed sizes by changing the underlying storage of a vector/matrix to a "bounded_array" from the default "unbounded_array".

    Alright, this bounded_vector<> thing is used to free you from having to specify the underlying storage of the vector as a bounded_array<> of the specified size yourself. Here I ask you: doesn't it look to you like this bounded vector thing has a fixed size? Well, it doesn't. At first I felt betrayed by the wiki, but then I reconsidered the meaning of "bounded" and I think I can let it pass. But in case you, like me (I'm still uncertain), are wondering whether this makes sense: what I found out is that bounded_vector<> actually can be resized; it just may not be resized to more than the size specified as the template parameter.

    So, first off, do you think they had a good reason not to make a real fixed-size vector or matrix type? Do you think it's okay to "sell" this bounded (as opposed to fixed-size) vector to the users of my library as a "fixed-size" vector replacement, even named "Vector3" or "Vector2", like the Effective uBLAS wiki did? Do you think I should somehow implement a vector with a truly fixed size for this purpose? If so, how? (Sorry, but I'm really new to uBLAS; I just tried it today.) I am developing a 3D game. Should uBLAS be used for the calculations involved in this ("hey, geometry!", per the Effective uBLAS wiki)? What replacement would you suggest, if not?

    -- edit

    And just in case, yes, I've read this warning:

        It should be noted that this only changes the storage uBLAS uses for the vector3. uBLAS will still use all the same algorithms (which assume a variable size) to manipulate the vector3. In practice this seems to have no negative impact on speed. The above runs just as quickly as a hand crafted vector3 which does not use uBLAS. The only negative impact is that the vector3 always stores a "size" member which in this case is redundant [or isn't it? I mean......].

    I see it uses the same algorithms, assuming a variable size, but if an operation were to actually change its size, shouldn't it be stopped (an assertion)?

        ublas::bounded_vector<float,3> v3;
        ublas::bounded_vector<float,2> v2;
        v3 = v2;
        std::cout << v3.size() << '\n'; // prints 2

    Oh, come on, isn't this just plain betrayal?
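    For what the question actually needs, a 2/3-element geometry vector whose size is a compile-time fact, a thin aggregate is enough, and it turns the silent resize above into a compile error. A minimal sketch, independent of uBLAS (so you give up its expression templates and operators, and write your own dot/cross/norm):

        #include <cstddef>

        template<typename T, std::size_t N>
        struct FixedVector {
            T data[N];   // no size member, no capacity to shrink
            T& operator[](std::size_t i)             { return data[i]; }
            const T& operator[](std::size_t i) const { return data[i]; }
            static std::size_t size() { return N; }
        };

        typedef FixedVector<float, 3> Vector3;
        typedef FixedVector<float, 2> Vector2;

        // Vector3 v3; Vector2 v2; v3 = v2;   // now a compile-time error

    For game-scale geometry work this kind of hand-rolled type (or a dedicated small-vector math library) is a common choice; uBLAS is aimed more at general linear algebra than at 3D-game math.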


  • My server app works strangely. What could be the reason(s)?

    - by Poni
    Hi! I've written a server app (two parts actually: a proxy server and a game server) in C++ (a board game). It uses IOCP as the sockets interface. For that app I've also written a "client simulator" (hereafter "client") app that spawns many client connections, where each of them plays at very high speed, getting the CPU to 100% utilization. So, that's how it goes in terms of topology:

    Game server - holds the game state. Real players do not connect to it directly but through the proxy server. When a player joins a game, the proxy actually asks for it on behalf of that player, and the game server spawns a "player instance" for that player; from then on, every notification between the game server and the player is passed through the proxy.

    Proxy server - holds the TCP connections with the real players. Players communicate with the game server through it only.

    Client simulator - connects to the proxy only.

    When running the servers (again, it's actually two server apps) and the client locally, it all works just fine. I'm talking about 40k+ player instances, all of them active in a game. On the other hand, when running the server remotely with, say, 1000 clients playing, things get strange. For example, I run it as said above. Then with Task Manager I kill the client simulator app ("End Process Tree"). Then it seems like the buffer of the remote server got modified by another thread, or in other words, that a memory corruption has occurred. The server crashes because it got an unknown message id (it's a custom protocol where each message has its own unique number).

    To make things clear, here is how I run the apps: PC1 - game server and client simulator (because the clients will connect to the proxy). PC2 - proxy server.

    The strangest thing is this: only the remote side gets "corrupted". Remote in the sense that it's not the PC I use to code the app (VC++ 2008); let's call the PC I use to code the apps "PC1". Now for example, if this time I run the game server on PC1 (meaning the proxy server is on PC2 and the client simulator on PC1), then the proxy server crashes with an "unknown message id" error. Another variation is when I run the proxy server on PC1 (again, the dev machine) and the game server and client simulator on PC2; then the game server on PC2 crashes.

    As for the IOCP config: the servers' internal connections use the default receive/send buffer sizes. I tried even setting them to 1MB, but no luck.

    I have three PCs in total: 2 x Vista 64-bit (one of those is the dev machine; the other is connected through WiFi) and 1 x WinXP 32-bit. They're all connected in a "full duplex" manner.

    What could be the reason? I've tried about everything: stack tracing, recording some actions (like read/write logging)... I want to stress that only the PC I'm not using to code the apps crashes (actually the server app "role" which is running on it - sometimes the game server and sometimes the proxy server). At first I thought that maybe the wireless PC has problems (it's wireless...) but: TCP has its own mechanisms to make sure the packet is delivered properly. Also, a crash also happens when trying it with the two PCs that are physically connected (Vista vs. XP). Another option is that the Windows DLL versions might have problems, but then again, one of the tests is Vista vs. Vista, and the other is Vista vs. XP. Any idea?
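    One cause worth ruling out before suspecting memory corruption, since it produces exactly this signature (a custom protocol desyncing into "unknown message id" only under remote latency or an abrupt client kill): TCP is a byte stream, not a message stream. Locally, each receive completion tends to line up with exactly one message; over a real link, a completion can deliver half a message or one and a half, and an aborted connection can cut a message mid-header. Unless every completion is appended to a per-connection buffer and messages are dispatched only once complete, the parser ends up reading a "message id" out of the middle of a payload. A sketch of that reassembly, assuming a [4-byte length][payload] wire format rather than your protocol's actual header layout:

        #include <cstddef>
        #include <cstring>
        #include <vector>

        struct Conn {
            std::vector<char> pending;   // bytes received but not yet parsed
        };

        // Feed in whatever each completed recv/WSARecv delivered, in order.
        // handle() fires exactly once per complete frame.
        void on_bytes(Conn& c, const char* buf, std::size_t n,
                      void (*handle)(const char* body, unsigned int len)) {
            c.pending.insert(c.pending.end(), buf, buf + n);
            for (;;) {
                if (c.pending.size() < 4) return;          // header incomplete
                unsigned int len;                          // assumed 32-bit, host order
                std::memcpy(&len, &c.pending[0], 4);
                if (c.pending.size() < 4 + (std::size_t)len) return; // body incomplete
                handle(&c.pending[4], len);
                c.pending.erase(c.pending.begin(),
                                c.pending.begin() + 4 + len);
            }
        }

    A related IOCP-specific hazard: if more than one WSARecv is outstanding on the same socket, completions can be dequeued by different worker threads and processed out of order, which scrambles the stream in the same way; keeping a single outstanding receive per socket (or sequencing completions) rules that out.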


  • What is the best algorithm for this array-comparison problem?

    - by mark
    What is the most speed-efficient algorithm to solve the following problem? Given 6 arrays, D1, D2, D3, D4, D5 and D6, each containing 6 numbers:

        D1[0] = number    D2[0] = number    ...    D6[0] = number
        D1[1] = another number    D2[1] = another number    ...
        ...
        D1[5] = yet another number    ...

    Given a second array ST1, containing 1 number:

        ST1[0] = 6

    Given a third array ans, containing 6 numbers:

        ans[0] = 3, ans[1] = 4, ans[2] = 5, ... ans[5] = 8

    Using as index for the arrays D1..D6 a number that goes from 0 to the number stored in ST1[0] minus one (6 in this example, so from 0 to 6-1), compare the ans array against each D array. My algorithm so far is below; I tried to keep everything unlooped as much as possible.

        EML := ST1[0]   //number contained in ST1[0]
        EML1 := 0       //start index for the D arrays
        While EML1 < EML
            if D1[EML1] = ans[0] goto two
            if D2[EML1] = ans[0] goto two
            if D3[EML1] = ans[0] goto two
            if D4[EML1] = ans[0] goto two
            if D5[EML1] = ans[0] goto two
            if D6[EML1] = ans[0] goto two
            EML1 = EML1 + 1
        return 0   //ans[0] was not found in any of D1[0-5] ... D6[0-5]; return 0, which excludes the ans numbers

        two:
        EML1 := 0
        While EML1 < EML
            if D1[EML1] = ans[1] goto three
            if D2[EML1] = ans[1] goto three
            if D3[EML1] = ans[1] goto three
            if D4[EML1] = ans[1] goto three
            if D5[EML1] = ans[1] goto three
            if D6[EML1] = ans[1] goto three
            EML1 = EML1 + 1
        return 0

        three: (same block as "two", testing ans[2], goto four)

        four: (same block, testing ans[3], goto five)

        five: (same block, testing ans[4], goto six)

        six:
        EML1 := 0
        While EML1 < EML
            if D1[EML1] = ans[5] return 1   //ans[5] found: include the ans numbers and return 1
            if D2[EML1] = ans[5] return 1
            if D3[EML1] = ans[5] return 1
            if D4[EML1] = ans[5] return 1
            if D5[EML1] = ans[5] return 1
            if D6[EML1] = ans[5] return 1
            EML1 = EML1 + 1
        return 0

    As the language of choice, it would be pure C.
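    For reference, the six unrolled blocks compute one thing: is every ans[k] present in some D array at an index below ST1[0]? Written as loops it is much shorter and, at this data size (at most 6 x 6 x 6 = 216 comparisons), essentially free. A sketch, assuming the six arrays are packed as the rows of one D[6][6] (with six separate arrays, an array of six pointers works the same way):

        /* Returns 1 if every ans[k] appears in some D row at a column
           below ST1[0]; equivalent to the unrolled goto version above. */
        int check(int D[6][6], const int ST1[1], const int ans[6])
        {
            int k, col, row;
            for (k = 0; k < 6; k++) {
                int found = 0;
                for (col = 0; col < ST1[0] && !found; col++)
                    for (row = 0; row < 6 && !found; row++)
                        if (D[row][col] == ans[k])
                            found = 1;
                if (!found)
                    return 0;   /* ans[k] is nowhere in the D arrays */
            }
            return 1;
        }

    If it ever does need to be faster, build the membership set once (a sorted array, bitmap, or hash of the up-to-36 D values) and test the six ans values against it, turning 216 comparisons into roughly 36 insertions plus 6 lookups.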


  • Using label tags in a validation summary error list?

    - by patridge
    I was thinking about making use of <label> tags in my validation error summary on a failed form submit and I can't figure out if it is going to get me in trouble down the line. Can anyone think of a good reason to avoid this approach? Usability, functionality, design, or other issues are all helpful. I really like the idea of clicking a line item in the error list and being jumped to the offending input element, especially in a mobile HTML scenario where vertical orientation is more common and scrolling is a pain. So far the only problem I can find is that labels don't navigate for radio buttons or checkboxes without individual IDs (Clicking a label for a single ID-tagged radio/checkbox element alters its selection). It doesn't make it any worse than no label, though. Here is a stripped down HTML test sample of this idea (CSS omitted for simplicity). <div class="validation-errors"> <p>There was a problem saving your form.</p> <ul> <li><label for="select1">Select 1 is invalid.</label></li> <li><label for="text1">Text 1 is invalid.</label></li> <li><label for="textarea1">TextArea 1 is invalid.</label></li> <li><label for="radio1">Radio 1 is invalid.</label></li> <li><label for="checkbox1">Checkbox 1 is invalid.</label></li> </ul> </div> <form action="/somewhere"> <fieldset><legend>Some Form</legend> <ol> <li><label for="select1">select1</label> <select id="select1" name="select1"> <option value="value1">Value 1</option> <option value="value2">Value 2</option> <option selected="selected" value="value3">Value 3</option> </select></li> <li><label for="text1">text1</label> <input id="text1" name="text1" type="text" value="sometext" /></li> <li><label for="textarea1">textarea1</label> <textarea id="textarea1" name="textarea1" rows="5" cols="25">sometext</textarea></li> <li><ul> <li><label><input type="radio" name="radio1" value="value1" />Value 1</label></li> <li><label><input type="radio" name="radio1" value="value2" checked="checked" />Value 2</label></li> <li><label><input type="radio" name="radio1" value="value3" />Value 3</label></li> </ul></li> <li><ul> <li><label><input type="checkbox" name="checkbox1" value="value1" checked="checked" />Value 1</label></li> <li><label><input type="checkbox" name="checkbox1" value="value2" />Value 2</label></li> <li><label><input type="checkbox" name="checkbox1" value="value3" checked="checked" />Value 3</label></li> </ul></li> <li><input type="submit" value="Save &amp; Continue" /></li> </ol> </fieldset> </form> The only thing I have added to make the click-capable behavior more obvious is to add a CSS rule for the labels. .validation-errors label { text-decoration: underline; cursor: pointer; }


  • Why is my mdadm raid-1 recovery so slow?

    - by dimmer
    On a system I'm running Ubuntu 10.04. My raid-1 restore started out fast but quickly became ridiculously slow (at this rate the restore will take 150 days!): dimmer@paimon:~$ cat /proc/mdstat Personalities : [linear] [multipath] [raid0] [raid1] [raid6] [raid5] [raid4] [raid10] md0 : active raid1 sdc1[2] sdb1[1] 1953513408 blocks [2/1] [_U] [====>................] recovery = 24.4% (477497344/1953513408) finish=217368.0min speed=113K/sec unused devices: <none> Eventhough I have set the kernel variables to reasonably quick values: dimmer@paimon:~$ cat /proc/sys/dev/raid/speed_limit_min 1000000 dimmer@paimon:~$ cat /proc/sys/dev/raid/speed_limit_max 100000000 I am using 2 2.0TB Western Digital Hard Disks, WDC WD20EARS-00M and WDC WD20EARS-00J. I believe they have been partitioned such that their sectors are aligned. dimmer@paimon:/sys$ sudo parted /dev/sdb GNU Parted 2.2 Using /dev/sdb Welcome to GNU Parted! Type 'help' to view a list of commands. (parted) p Model: ATA WDC WD20EARS-00M (scsi) Disk /dev/sdb: 2000GB Sector size (logical/physical): 512B/512B Partition Table: gpt Number Start End Size File system Name Flags 1 1049kB 2000GB 2000GB ext4 (parted) unit s (parted) p Number Start End Size File system Name Flags 1 2048s 3907028991s 3907026944s ext4 (parted) q dimmer@paimon:/sys$ sudo parted /dev/sdc GNU Parted 2.2 Using /dev/sdc Welcome to GNU Parted! Type 'help' to view a list of commands. (parted) p Model: ATA WDC WD20EARS-00J (scsi) Disk /dev/sdc: 2000GB Sector size (logical/physical): 512B/4096B Partition Table: gpt Number Start End Size File system Name Flags 1 1049kB 2000GB 2000GB ext4 I am beginning to think that I have a hardware problem, otherwise I can't imagine why the mdadm restore should be so slow. I have done a benchmark on /dev/sdc using Ubuntu's disk utility GUI app, and the results looked normal so I know that sdc has the capability to write faster than this. I also had the same problem on a similar WD drive that I RMAd because of bad sectors. I suppose it's possible they sent me a replacement with bad sectors too, although there are no SMART values showing them yet. Any ideas? Thanks. As requested, output of top sorted by cpu usage (notice there is ~0 cpu usage). iowait is also zero which seems strange: top - 11:35:13 up 2 days, 9:40, 3 users, load average: 2.87, 2.58, 2.30 Tasks: 142 total, 1 running, 141 sleeping, 0 stopped, 0 zombie Cpu(s): 0.0%us, 0.2%sy, 0.0%ni, 99.8%id, 0.0%wa, 0.0%hi, 0.0%si, 0.0%st Mem: 3096304k total, 1482164k used, 1614140k free, 617672k buffers Swap: 1526132k total, 0k used, 1526132k free, 535416k cached PID USER PR NI VIRT RES SHR S %CPU %MEM TIME+ COMMAND 45 root 20 0 0 0 0 S 0 0.0 2:17.02 scsi_eh_0 1 root 20 0 2808 1752 1204 S 0 0.1 0:00.46 init 2 root 20 0 0 0 0 S 0 0.0 0:00.00 kthreadd 3 root RT 0 0 0 0 S 0 0.0 0:00.02 migration/0 4 root 20 0 0 0 0 S 0 0.0 0:00.17 ksoftirqd/0 5 root RT 0 0 0 0 S 0 0.0 0:00.00 watchdog/0 6 root RT 0 0 0 0 S 0 0.0 0:00.02 migration/1 ... 
dmesg errors, definitely looking like hardware: [202884.000157] ata5.00: exception Emask 0x0 SAct 0x0 SErr 0x0 action 0x6 frozen [202884.007015] ata5.00: failed command: FLUSH CACHE EXT [202884.013728] ata5.00: cmd ea/00:00:00:00:00/00:00:00:00:00/a0 tag 0 [202884.013730] res 40/00:00:ff:59:2e/00:00:35:00:00/e0 Emask 0x4 (timeout) [202884.033667] ata5.00: status: { DRDY } [202884.040329] ata5: hard resetting link [202889.400050] ata5: link is slow to respond, please be patient (ready=0) [202894.048087] ata5: COMRESET failed (errno=-16) [202894.054663] ata5: hard resetting link [202899.412049] ata5: link is slow to respond, please be patient (ready=0) [202904.060107] ata5: COMRESET failed (errno=-16) [202904.066646] ata5: hard resetting link [202905.840056] ata5: SATA link up 3.0 Gbps (SStatus 123 SControl 300) [202905.849178] ata5.00: configured for UDMA/133 [202905.849188] ata5: EH complete [203899.000292] ata5.00: exception Emask 0x0 SAct 0x0 SErr 0x0 action 0x6 frozen [203899.007096] ata5.00: failed command: IDENTIFY DEVICE [203899.013841] ata5.00: cmd ec/00:01:00:00:00/00:00:00:00:00/00 tag 0 pio 512 in [203899.013843] res 40/00:00:ff:f9:f6/00:00:38:00:00/e0 Emask 0x4 (timeout) [203899.041232] ata5.00: status: { DRDY } [203899.048133] ata5: hard resetting link [203899.816134] ata5: SATA link up 3.0 Gbps (SStatus 123 SControl 300) [203899.826062] ata5.00: configured for UDMA/133 [203899.826079] ata5: EH complete [204375.000200] ata5.00: exception Emask 0x0 SAct 0x0 SErr 0x0 action 0x6 frozen [204375.007421] ata5.00: failed command: IDENTIFY DEVICE [204375.014799] ata5.00: cmd ec/00:01:00:00:00/00:00:00:00:00/00 tag 0 pio 512 in [204375.014800] res 40/00:00:ff:0c:0f/00:00:39:00:00/e0 Emask 0x4 (timeout) [204375.044374] ata5.00: status: { DRDY } [204375.051842] ata5: hard resetting link [204380.408049] ata5: link is slow to respond, please be patient (ready=0) [204384.440076] ata5: SATA link up 3.0 Gbps (SStatus 123 SControl 300) [204384.449938] ata5.00: configured for UDMA/133 [204384.449955] ata5: EH complete [204395.988135] ata5.00: exception Emask 0x0 SAct 0x0 SErr 0x0 action 0x6 frozen [204395.988140] ata5.00: failed command: IDENTIFY DEVICE [204395.988147] ata5.00: cmd ec/00:01:00:00:00/00:00:00:00:00/00 tag 0 pio 512 in [204395.988149] res 40/00:00:ff:0c:0f/00:00:39:00:00/e0 Emask 0x4 (timeout) [204395.988151] ata5.00: status: { DRDY } [204395.988156] ata5: hard resetting link [204399.320075] ata5: SATA link up 3.0 Gbps (SStatus 123 SControl 300) [204399.330487] ata5.00: configured for UDMA/133 [204399.330503] ata5: EH complete


  • PHP MINISERVER DOWNLOAD RESUME-ERROR! Resource id # 4

    - by snikolov
        $httpsock = @socket_create_listen("9090");
        if (!$httpsock) {
            print "Socket creation failed!\n";
            exit;
        }
        while (1) {
            $client = socket_accept($httpsock);
            $input = trim(socket_read($client, 4096));
            $input = explode(" ", $input);
            $range = $input[12];
            $input = $input[1];
            $fileinfo = pathinfo($input);
            switch ($fileinfo['extension']) {
                default:
                    $mime = "text/html";
            }
            if ($input == "/") {
                $input = "index.html";
            }
            $input = ".$input";
            if (file_exists($input) && is_readable($input)) {
                echo "Serving $input\n";
                $contents = file_get_contents($input);
                $output = "HTTP/1.0 200 OK\r\nServer: APatchyServer\r\nConnection: close\r\nContent-Type: $mime\r\n\r\n$contents";
            } else {
                //$contents = "The file you requested doesn't exist. Sorry!";
                //$output = "HTTP/1.0 404 OBJECT NOT FOUND\r\nServer: BabyHTTP\r\nConnection: close\r\nContent-Type: text/html\r\n\r\n$contents";
                if (isset($range)) {
                    list($a, $range) = explode("=", $range);
                    str_replace($range, "-", $range);
                    $size2 = $size - 1;
                    $new_length = $size - $range;
                    $output = "HTTP/1.1 206 Partial Content\r\n";
                    $output .= "Content-Length: $new_length\r\n";
                    $output .= "Content-Range: bytes $range$size2/$size\r\n";
                } else {
                    $size2 = $size - 1;
                    $output .= "Content-Length: $new_length\r\n";
                }
                $chunksize = 1 * (1024 * 1024);
                $bytes_send = 0;
                $file = "a.mp3";
                $filesize = filesize($file);
                if ($file = fopen($file, 'r')) {
                    if (isset($range))
                        $output = 'HTTP/1.0 200 OK\r\n';
                    $output .= "Content-type: application/octet-stream\r\n";
                    $output .= "Content-Length: $filesize\r\n";
                    $output .= 'Content-Disposition: attachment; filename="'.$file.'"\r\n';
                    $output .= "Accept-Ranges: bytes\r\n";
                    $output .= "Cache-Control: private\n\n";
                    fseek($file, $range);
                    $download_rate = 1000;
                    while (!feof($file) and (connection_status() == 0)) {
                        $var_stat = fread($file, round($download_rate * 1024));
                        $output .= $var_stat; //echo($buffer); // is also possible
                        flush();
                        sleep(1); //// decrease download speed
                    }
                    fclose($file);
                }
                /**
                $filename = "dada";
                $file = fopen($filename, 'r');
                $filesize = filesize($filename);
                $buffer = fread($file, $filesize);
                $send = array("Output" => $buffer, "filesize" => $filesize, "filename" => $filename);
                $file = $send['filename'];
                */
                //@ob_end_clean();
                // $output .= "Content-Transfer-Encoding: binary";
                //$output .= "Connection: Keep-Alive\r\n";
            }
            socket_write($client, $output);
            socket_close($client);
        }
        socket_close($httpsock);

    Hey guys, I have created a mini-webserver downloader; it can download files from your server. However, I am unable to resume my download: when I download the file I get "Resource id # 4", and I can't resume the download. I would also like to know how I can monitor and record the client output, i.e. how much bandwidth each client has downloaded. Perl has something like this, but it's hardcore. If possible, kindly provide me with some pointers. Thank you :)


  • How do I get through proxy server environments for non-standard services?

    - by Ripred
    I'm not real hip on exactly what role(s) today's proxy servers can play and I'm learning so go easy on me :-) I have a client/server system I have written using a homegrown protocol and need to enhance the client side to negotiate its way out of a proxy environment. I have an existing client and server system written in C and C++ for the speed and a small amount of MFC in the client to handle the user interface. I have written both the server and client side of the system on Windows (the people I work for are mainly web developers using Windows everything - not a choice) sticking to Berkeley Sockets as it were via wsock32 for efficiency. The clients connect to the server through a nonstandard port (even though using port 80 is an option to get out of some environments but the protocol that goes over it isn't HTTP). The TCP connection(s) stay open for the duration of the clients participation in real time conferences. Our customer base is expanding to all kinds of networked environments. I have been able to solve a lot of problems by adding the ability to connect securely over port 443 and using secure sockets which allows the protocol to pass through a lot environments since the internal packets can't be sniffed. But more and more of our customers are behind a proxy server environment and my direct connections don't make it through. My old school understanding of proxy servers is that they act as a proxy for external HTML content over HTTP, possibly locally caching popular material for faster local access, and also allowing their IT staff to blacklist certain destination sites. Customer are complaining that my software doesn't recognize and easily navigate its way through their proxy environments but I'm finding it difficult to decide what my "best fit" solution should be. My software doesn't tear down the connection after each client request, and on top of that packets can come from either side at any time, basically your typical custom client/server system for a specific niche. My first reaction is "why can't they just add my servers addresses to their white list" but if there is a programmatic way I can get through without requiring their IT staff to help it is politically better and arguably a better solution anyway. Plus maybe I'm still not understanding the role and purpose of what proxy servers and environments have grown to be these days. My first attempt at a solution was to use WinInet with its various proxy capabilities to establish a connection over port 80 to my non-standard protocol server (which knows enough to recognize and answer a simple HTTP-looking GET request and answer it with a simple HTTP response page to get around some environments that employ initial packet sniffing (DPI)). I retrieved the actual SOCKET handle behind WinInet's HINTERNET request object and had hoped to use that in place of my software's existing SOCKET connection and hopefully not need to change much more on the client side. It initially seemed to be my solution but on further inspection it seems that the OS gets first-chance at the received data on this socket since when I get notified of events via the standard select(...) statement on the socket and query the size of the data available via ioctlsocket the call succeeds but returns 0 bytes available, the reads don't work and it goes downhill from there. Can someone tell me of a client-side library (commercial is fine) will let me get past these proxy server environments with as little user and IT staff help as possible? 
From what I read it has grown past SOCKS and I figure someone has to have solved this problem before me. Thanks for reading my long-winded question, Ripred
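    One programmatic road worth knowing about, since it matches these constraints (a persistent, bidirectional, non-HTTP TCP session): the HTTP CONNECT method. It is the same tunnel browsers use for HTTPS through a proxy, which is likely why the port-443/TLS variant already escapes some environments, and after the proxy answers 200, the connection is a raw byte pipe to your server with the socket entirely under your control (unlike WinInet, which owns the HTTP state machine and the receive path, hence the zero-byte ioctlsocket reads). A minimal sketch with Winsock, assuming s is already connected to the proxy; real code needs timeouts, a read loop until the blank line ends the response headers, and possibly a Proxy-Authorization header:

        #include <winsock2.h>
        #include <cstdio>
        #include <cstring>

        bool open_tunnel(SOCKET s, const char* host, int port)
        {
            char req[256];
            _snprintf(req, sizeof(req),
                      "CONNECT %s:%d HTTP/1.1\r\nHost: %s:%d\r\n\r\n",
                      host, port, host, port);
            if (send(s, req, (int)strlen(req), 0) <= 0)
                return false;

            char resp[1024];
            int n = recv(s, resp, sizeof(resp) - 1, 0);  // naive single read
            if (n <= 0)
                return false;
            resp[n] = '\0';
            // any "HTTP/1.x 200 ..." status means the proxy is now
            // relaying raw bytes between us and host:port
            return strstr(resp, " 200 ") != 0;
        }

    Proxies are commonly configured to allow CONNECT only to port 443, so running the service there helps; environments that deep-inspect the tunnel still require the TLS wrapping already described. SOCKS (v4/v5) remains the other standard option where the site runs a SOCKS gateway.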


  • My D-Link's Ethernet bridge downlink just got 10-30x slower?

    - by Jay Levitt
    TL;DR: I unplugged my network to move my desk, and now downloading via my DIR-655's Ethernet LAN bridge is 10-30x slower than the Ethernet switch it's plugged into. Background My network is SMC cable modem <-> Cisco firewall <-> Netgear switch <-> D-Link WiFi† | | | | SMC8014 ASA-5505 GS608v2 gigE DIR-655 rev A3 gigE †The DIR-655 is used as an access point, not a router (although what D-Link calls an access point, I'd call a bridge). The "WAN" port is unused; the Netgear connects to the built-in 4-port Ethernet LAN switch, inside the built- in router/firewall. Endpoints: MacBook Pro 17" mid-2010 iPhone 4S Fedora 12 Linux server running reasonably fast dual-Athlon X2, VelociRaptors, etc. All cables are <10 feet, mostly CAT-5e, some CAT-6, all premade. All WiFi endpoints are within three feet of the D-Link. Yesterday I unplugged and rearranged stuff, and now connecting via the D-Link - even through the wired switch, right next to the incoming network cable - is 30x slower than connecting directly to the Netgear switch, on both my MacBook and iPhone. How I'm measuring "slower" I'm mostly using http://speedtest.net, which of course only really measures broadband speeds. I've also installed http://www.speedtest.net/mini.php on my local server, but can't test the iPhone with that. Results Speedtest.net, closest server over Comcast business-class: CONFIG | PING (ms) | DOWN (Mbps) | UP (Mbps) Mac <-> Ethernet <-> Netgear | 9 | 31.6 | 6.8 Mac <-> Ethernet <-> D-Link | 8 | 4.1 | 6.0 Mac <-> WiFi <-> D-Link | 9 | 1.4 | 2.9 iPhone <-> WiFi <-> D-Link | 67 | 0.4 | 1.6 Speedtest Mini on Linux PC: CONFIG | DOWN (Mbps) | UP (Mbps) Mac <-> Ethernet <-> NetGear | 97.2 | 76.9 Mac <-> Ethernet <-> D-Link | 8.2 | 24.2 Mac <-> WiFi <-> D-Link | 1.0 | 8.6 Slow typing in SSH: Mac <-> Ethernet <-> Netgear <-> Linux PC: smooth Mac <-> Ethernet <-> D-Link <-> Linux PC: choppy Note that D-Link upload speeds are normal on broadband, slower locally (but I'd believe that's a D-Link limitation), and always faster than the downloads! Since ssh is choppy just with slow typing, I don't believe it's a throttling-type problem either; that's not a lot of bandwidth. What I've tried Swapping all "good" and "bad" cables Re-plugging "bad" cable from D-Link to Netgear and watching it be the "good" cable pulling cables away from power lines Verify that the Mac auto-detects the D-Link as gigE Try to verify the link speed of the D-Link <- Netgear connection, but the firmware doesn't report that Verify that the D-Link sees no TX/RX errors or collisions Use different Ethernet ports on both Netgear and D-Link Reset the D-Link to factory settings Upgrade the D-Link firmware from 1.21 to 1.35NA, 2010/11/12, the latest Reboot everything at least once On the Mac, disable Wi-Fi during the Ethernet tests, and unplug Ethernet during the Wi-Fi tests Using iStumbler, verify that the D-Link isn't picking overloaded Wi-Fi channels (usually just 1-5 neighbors on my and adjacent channels, average for my apt building) Verify that the only client connected to the Wi-Fi was the iPhone Verify that nothing was being chatty on my network according to the WISH log Enable and disable all sorts of D-Link settings, including forcing WAN auto-detect to gigE So. I don't mind buying a new access point—I wouldn't mind having a dual-link network—but as a guy who's been networking since gated v4 was a drastic rewrite, and who often used physical sniffers in the days before Wireshark, I'm baffled. I hate being baffled. 
What could I possibly have changed that would result in this? How can I measure it? All I can think of is a static zap—thick carpet, socks, HVAC—but I didn't feel one, and does that really happen anymore? Can I test if it's Ethernet vs. TCP layer slowness? I'm not familiar with modern network utilities; it's hard to Google without hitting "Q: Why is my network slow? A: Is your microwave on?" If I don't get an answer here, will someone big and powerful help me migrate it to serverfault without getting screamed back here? In the words of Inigo Montoya, "I must know." Don't get all Dread Pirate Roberts on me.


  • Optimizing Python code with many attribute and dictionary lookups

    - by gotgenes
    I have written a program in Python which spends a large amount of time looking up attributes of objects and values from dictionary keys. I would like to know if there's any way I can optimize these lookup times, potentially with a C extension, to reduce the time of execution, or if I need to simply re-implement the program in a compiled language. The program implements some algorithms using a graph. It runs prohibitively slowly on our data sets, so I profiled the code with cProfile using a reduced data set that could actually complete. The vast majority of the time is being burned in one function, and specifically in two statements, generator expressions, within the function: The generator expression at line 202 is neighbors_in_selected_nodes = (neighbor for neighbor in node_neighbors if neighbor in selected_nodes) and the generator expression at line 204 is neighbor_z_scores = (interaction_graph.node[neighbor]['weight'] for neighbor in neighbors_in_selected_nodes) The source code for this function of context provided below. selected_nodes is a set of nodes in the interaction_graph, which is a NetworkX Graph instance. node_neighbors is an iterator from Graph.neighbors_iter(). Graph itself uses dictionaries for storing nodes and edges. Its Graph.node attribute is a dictionary which stores nodes and their attributes (e.g., 'weight') in dictionaries belonging to each node. Each of these lookups should be amortized constant time (i.e., O(1)), however, I am still paying a large penalty for the lookups. Is there some way which I can speed up these lookups (e.g., by writing parts of this as a C extension), or do I need to move the program to a compiled language? Below is the full source code for the function that provides the context; the vast majority of execution time is spent within this function. def calculate_node_z_prime( node, interaction_graph, selected_nodes ): """Calculates a z'-score for a given node. The z'-score is based on the z-scores (weights) of the neighbors of the given node, and proportional to the z-score (weight) of the given node. Specifically, we find the maximum z-score of all neighbors of the given node that are also members of the given set of selected nodes, multiply this z-score by the z-score of the given node, and return this value as the z'-score for the given node. If the given node has no neighbors in the interaction graph, the z'-score is defined as zero. Returns the z'-score as zero or a positive floating point value. :Parameters: - `node`: the node for which to compute the z-prime score - `interaction_graph`: graph containing the gene-gene or gene product-gene product interactions - `selected_nodes`: a `set` of nodes fitting some criterion of interest (e.g., annotated with a term of interest) """ node_neighbors = interaction_graph.neighbors_iter(node) neighbors_in_selected_nodes = (neighbor for neighbor in node_neighbors if neighbor in selected_nodes) neighbor_z_scores = (interaction_graph.node[neighbor]['weight'] for neighbor in neighbors_in_selected_nodes) try: max_z_score = max(neighbor_z_scores) # max() throws a ValueError if its argument has no elements; in this # case, we need to set the max_z_score to zero except ValueError, e: # Check to make certain max() raised this error if 'max()' in e.args[0]: max_z_score = 0 else: raise e z_prime = interaction_graph.node[node]['weight'] * max_z_score return z_prime Here are the top couple of calls according to cProfiler, sorted by time. 
        ncalls     tottime  percall  cumtime  percall  filename:lineno(function)
        156067701  352.313    0.000  642.072    0.000  bpln_contextual.py:204(<genexpr>)
        156067701  289.759    0.000  289.759    0.000  bpln_contextual.py:202(<genexpr>)
         13963893  174.047    0.000  816.119    0.000  {max}
         13963885   69.804    0.000  936.754    0.000  bpln_contextual.py:171(calculate_node_z_prime)
          7116883   61.982    0.000   61.982    0.000  {method 'update' of 'set' objects}
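
    For what it's worth, here is a minimal sketch of two standard CPython micro-optimizations applied to this function: fusing the two generator expressions into one, and hoisting the interaction_graph.node attribute lookup into a local variable (local names resolve faster than attribute chains in CPython). This is only a sketch, not a guaranteed win; the empty-iterable check from the original is simplified, and whether the savings are meaningful would have to be confirmed by re-profiling.

        def calculate_node_z_prime(node, interaction_graph, selected_nodes):
            # Hoist the attribute lookup out of the hot loop; a local name
            # is cheaper to resolve than an attribute chain in CPython.
            node_attrs = interaction_graph.node
            # One fused generator instead of one chained through another.
            neighbor_z_scores = (
                node_attrs[neighbor]['weight']
                for neighbor in interaction_graph.neighbors_iter(node)
                if neighbor in selected_nodes)
            try:
                max_z_score = max(neighbor_z_scores)
            except ValueError:
                # max() raises ValueError on an empty iterable; the
                # original's check that max() itself raised it is dropped
                # here for brevity.
                max_z_score = 0
            return node_attrs[node]['weight'] * max_z_score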

    Read the article

  • Spring MVC working with Web Designers

    - by jboyd
    When working with web designers in a Spring MVC and JSP environment, are there any tools or helpful patterns that reduce the pain of going back and forth from HTML to JSP and back? Project requirements dictate that views will change often, and it can be difficult to make changes efficiently because of the amount of Java code that leaks into the view layer. Ideally I'd like to remove almost all Java code from the view, but that does not seem to fit the Spring/JSP philosophy, where the usual way to remove Java code is to replace it with tag libraries, which still suffer from a similar problem. To give my question a little clarity, here is some existing code (inherited by me) that shows the kinds of problems I'm likely to face when changing the look of our views:

        <%-- Begin looping through results --%>
        <%
        List memberList = memberSearchResults.getResults();
        for(int i = start - 1; i < memberList.size() && i < end; i++) {
            Profile profile = (Profile)memberList.get(i);
            long profileId = profile.getProfileId();
            String nickname = profile.getNickname();
            String description = profile.getDescription();
            Image image = profile.getAvatarImage();
            String avatarImageSrc = null;
            int avatarImageWidthNum = 0;
            int avatarImageHeightNum = 0;
            if(null != image) {
                avatarImageSrc = image.getSrc();
                avatarImageWidthNum = image.getWidth();
                avatarImageHeightNum = image.getHeight();
            }
            String bgColor = i % 2 == 1 ? "background-color:#FFF" : "";
        %>
        <div style="float:left;clear:both;padding:5px 0 5px 5px;cursor:pointer;<%= bgColor %>"
             onclick='window.location="profile.sp?profileId=<%= profileId %>"'>
            <div style="float:right;clear:right;padding-left:10px;width:515px;font-size:10px;color:#7e7e7e">
                <h6><%= nickname %></h6>
                <%= description %>
            </div>
            <img style="float:left;clear:left;" alt="Avatar Image"
                 src="<%= null != avatarImageSrc && avatarImageSrc.trim().length() > 0 ? avatarImageSrc : "images/defaultUserImage.png" %>"
                 <%= avatarImageWidthNum < avatarImageHeightNum ? "height='59'" : "width='92'" %> />
        </div>
        <% } // End loop %>

    Now, ignoring some of the code smells there, it's obvious that if someone wants to change the look of that DIV, all of the Java/JSP code has to be re-placed into the new HTML given to us (the designers don't work in JSP files; they have their own HTML versions of the website). That is tedious and prone to errors.
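
    For contrast, a minimal sketch of the same loop in JSTL/EL, which keeps the view close to the plain HTML a designer would hand over. It assumes the controller exposes the result list as a model attribute named profiles and that the inline styles move into CSS classes; the attribute name and the member-row/striped class names are illustrative only, not part of the original code.

        <%@ taglib prefix="c" uri="http://java.sun.com/jsp/jstl/core" %>
        <c:forEach var="profile" items="${profiles}" varStatus="row">
            <%-- EL navigates a null avatarImage gracefully, so no scriptlet guards --%>
            <div class="member-row ${row.index % 2 == 1 ? 'striped' : ''}"
                 onclick="window.location='profile.sp?profileId=${profile.profileId}'">
                <div class="member-text">
                    <h6>${profile.nickname}</h6>
                    ${profile.description}
                </div>
                <img class="member-avatar" alt="Avatar Image"
                     src="${empty profile.avatarImage.src
                            ? 'images/defaultUserImage.png'
                            : profile.avatarImage.src}" />
            </div>
        </c:forEach>

    The width-versus-height swap on the avatar is the sort of presentation rule that can move into a CSS class as well, which is much friendlier to designers working from plain HTML.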

    Read the article

  • Cisco: unable to negotiate IP using IPCP with Windows server

    - by lnk
    I am connecting to a Windows server using PPP (for VPN). The connection is established, but the server never responds to my address requests:

        *Mar 23 00:40:06.055: Vi1 MS-CHAP-V2: I CHALLENGE id 0 len 25 from "MSDC"
        *Mar 23 00:40:06.063: Vi1 MS CHAP V2: Using hostname from interface CHAP
        *Mar 23 00:40:06.063: Vi1 MS CHAP V2: Using password from interface CHAP
        *Mar 23 00:40:06.067: Vi1 MS-CHAP-V2: O RESPONSE id 0 len 69 from "XXX"
        *Mar 23 00:40:06.087: Vi1 PPP: I pkt type 0xC223, datagramsize 50 link[ppp]
        *Mar 23 00:40:06.087: Vi1 MS-CHAP-V2: I SUCCESS id 0 len 46 msg is "S=XXX"
        *Mar 23 00:40:06.087: Vi1 MS CHAP V2 No Password found for : XXX
        *Mar 23 00:40:06.091: Vi1 MS CHAP V2 Check AuthenticatorResponse Success for : XXX
        *Mar 23 00:40:06.091: Vi1 IPCP: O CONFREQ [Closed] id 1 len 20
        *Mar 23 00:40:06.091: Vi1 IPCP:    VSO OUI 0x00000C kind 1 (0x000A00000C0100000000)
        *Mar 23 00:40:06.091: Vi1 IPCP:    Address 0.0.0.0 (0x030600000000)
        *Mar 23 00:40:07.091: %LINEPROTO-5-UPDOWN: Line protocol on Interface Virtual-Access1, changed state to up
        *Mar 23 00:40:07.091: Vi1 LCP: O ECHOREQ [Open] id 1 len 12 magic 0x194CAFCF
        *Mar 23 00:40:07.103: Vi1 LCP-FS: I ECHOREP [Open] id 1 len 12 magic 0x361B62E5
        *Mar 23 00:40:07.103: Vi1 LCP-FS: Received id 1, sent id 1, line up
        *Mar 23 00:40:08.083: Vi1 IPCP: TIMEout: State REQsent
        *Mar 23 00:40:08.083: Vi1 IPCP: O CONFREQ [REQsent] id 2 len 20
        *Mar 23 00:40:08.083: Vi1 IPCP:    VSO OUI 0x00000C kind 1 (0x000A00000C0100000000)
        *Mar 23 00:40:08.083: Vi1 IPCP:    Address 0.0.0.0 (0x030600000000)
        *Mar 23 00:40:10.099: Vi1 IPCP: TIMEout: State REQsent
        *Mar 23 00:40:10.099: Vi1 IPCP: O CONFREQ [REQsent] id 3 len 20
        *Mar 23 00:40:10.099: Vi1 IPCP:    VSO OUI 0x00000C kind 1 (0x000A00000C0100000000)
        *Mar 23 00:40:10.099: Vi1 IPCP:    Address 0.0.0.0 (0x030600000000)
        *Mar 23 00:40:12.115: Vi1 IPCP: TIMEout: State REQsent
        *Mar 23 00:40:12.115: Vi1 IPCP: O CONFREQ [REQsent] id 4 len 20
        *Mar 23 00:40:12.115: Vi1 IPCP:    VSO OUI 0x00000C kind 1 (0x000A00000C0100000000)
        *Mar 23 00:40:12.115: Vi1 IPCP:    Address 0.0.0.0 (0x030600000000)
        *Mar 23 00:40:12.211: Vi1 LCP: O ECHOREQ [Open] id 2 len 12 magic 0x194CAFCF
        *Mar 23 00:40:12.219: Vi1 LCP-FS: I ECHOREP [Open] id 2 len 12 magic 0x361B62E5
        *Mar 23 00:40:12.219: Vi1 LCP-FS: Received id 2, sent id 2, line up
        *Mar 23 00:40:14.131: Vi1 IPCP: TIMEout: State REQsent
        *Mar 23 00:40:14.131: Vi1 IPCP: O CONFREQ [REQsent] id 5 len 20
        *Mar 23 00:40:14.131: Vi1 IPCP:    VSO OUI 0x00000C kind 1 (0x000A00000C0100000000)
        *Mar 23 00:40:14.131: Vi1 IPCP:    Address 0.0.0.0 (0x030600000000)
        *Mar 23 00:40:16.147: Vi1 IPCP: TIMEout: State REQsent
        *Mar 23 00:40:16.147: Vi1 IPCP: O CONFREQ [REQsent] id 6 len 20
        *Mar 23 00:40:16.147: Vi1 IPCP:    VSO OUI 0x00000C kind 1 (0x000A00000C0100000000)
        *Mar 23 00:40:16.147: Vi1 IPCP:    Address 0.0.0.0 (0x030600000000)
        *Mar 23 00:40:17.331: Vi1 LCP: O ECHOREQ [Open] id 3 len 12 magic 0x194CAFCF
        *Mar 23 00:40:17.343: Vi1 LCP-FS: I ECHOREP [Open] id 3 len 12 magic 0x361B62E5
        *Mar 23 00:40:17.343: Vi1 LCP-FS: Received id 3, sent id 3, line up

    As you can see, my router asks for an address, but only keepalives appear on the line. Yet the same server works with a Windows client! Here is the router configuration:

        !
        version 12.4
        no service pad
        service timestamps debug datetime msec
        service timestamps log datetime msec
        no service password-encryption
        service internal
        !
        hostname Router
        !
        boot-start-marker
        boot-end-marker
        !
        !
        no aaa new-model
        !
        resource policy
        !
        ip subnet-zero
        !
        !
        ip cef
        vpdn enable
        !
        vpdn-group pptp
         request-dialin
          protocol pptp
          pool-member 1
         initiate-to ip XXXX
        !
        !
        !
        !
        !
        !
        !
        bridge irb
        !
        !
        interface ATM0
         no ip address
         shutdown
         no atm ilmi-keepalive
         dsl operating-mode auto
        !
        interface FastEthernet0
        !
        interface FastEthernet1
        !
        interface FastEthernet2
        !
        interface FastEthernet3
        !
        interface Dot11Radio0
         no ip address
         shutdown
         speed basic-1.0 basic-2.0 basic-5.5 6.0 9.0 basic-11.0 12.0 18.0 24.0 36.0 48.0 54.0
         station-role root
        !
        interface Vlan1
         no ip address
         bridge-group 1
        !
        interface Dialer0
         ip address negotiated
         encapsulation ppp
         dialer pool 1
         dialer idle-timeout 0
         dialer string XXX
         dialer persistent
         dialer vpdn
         dialer-group 1
         keepalive 5 3
         no cdp enable
         ppp authentication ms-chap-v2 optional
         ppp eap refuse
         ppp chap hostname XXX
         ppp chap password 0 XXX
         ppp ipcp mask request
         ppp ipcp ignore-map
         ppp ipcp address accept
        !
        interface BVI1
         mac-address XXX.XXX.XXX
         ip address dhcp
        !
        ip classless
        ip route 172.0.0.0 255.0.0.0 Dialer0
        !
        no ip http server
        no ip http secure-server
        !
        dialer-list 1 protocol ip permit
        !
        control-plane
        !
        bridge 1 protocol vlan-bridge
        bridge 1 route ip
        !
        line con 0
         no modem enable
        line aux 0
        line vty 0 4
         login
        !
        scheduler max-task-time 5000
        end
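
    An aside for readers decoding the hex in the debug output: 0x030600000000 in the "Address 0.0.0.0" lines is the IPCP IP-Address configuration option from RFC 1332, that is, type 3, length 6, followed by the four address bytes, with 0.0.0.0 meaning "please assign me an address". A small illustrative C sketch of that layout (not part of the original question):

        #include <stdint.h>
        #include <stdio.h>

        /* IPCP IP-Address configuration option (RFC 1332, option type 3). */
        struct ipcp_ip_address_option {
            uint8_t type;    /* 0x03 = IP-Address */
            uint8_t length;  /* 0x06 = whole option, header included */
            uint8_t addr[4]; /* 0.0.0.0 asks the peer to assign one */
        };

        int main(void) {
            /* The bytes behind "Address 0.0.0.0 (0x030600000000)" above. */
            struct ipcp_ip_address_option opt = { 0x03, 0x06, { 0, 0, 0, 0 } };
            printf("type=%u len=%u addr=%u.%u.%u.%u\n", opt.type, opt.length,
                   opt.addr[0], opt.addr[1], opt.addr[2], opt.addr[3]);
            return 0;
        }

    The repeated CONFREQs with no CONFACK or CONFNAK in reply show the negotiation timing out at exactly this step: the server never answers the request encoded by those bytes.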

    Read the article

  • Windows 7: Search indexing is stuck

    - by Ricket
    When I open Indexing Options, it says:

        4,317 items indexed
        Indexing in progress. Search results might not be complete during this time.

    It's stuck at 4,317, though; no more items have been indexed. Worst of all, SearchIndexer.exe is taking up 100% CPU (well, 50%, but I have a dual-core CPU, so it's taking all the processing power it can). It is not causing hard drive activity, though. I tried clicking "Troubleshoot search and indexing" at the bottom of the Indexing Options window, but it couldn't find any problem. I've also tried the registry repair that several websites suggest: I set the SetupCompletedSuccessfully value under HKLM\SOFTWARE\Microsoft\Windows Search to 0 and restarted the computer, and the repair apparently ran because the value flipped back to 1, but the same problem continues. It's reducing the battery life of my laptop and making it so hot that the fans run all the time, so I've had to disable the Windows Search service. How can I fix this? Do I need to just flat-out reformat my computer?

    Update: I've tried rebuilding the index a couple of times. There's nothing unusual about the locations I have it index, and I don't have any downloads in progress or anything like that. I don't see any reason why it stopped, and I noticed it much too late to do a system restore. At this point, I'm hoping someone will offer up some secret answer that will fix the problem, thus the bounty.

    Another update: I started the service again, just to let it try yet again. It seemed okay at first (Indexing Options showed it operating at reduced speed due to user activity, and the number of files was going up). A while later I checked, and the service had stopped. Event Viewer revealed errors like this:

        Log Name:      Application
        Source:        Application Error
        Date:          2/1/2010 7:34:23 PM
        Event ID:      1000
        Task Category: (100)
        Level:         Error
        Keywords:      Classic
        User:          N/A
        Computer:      ricky-win7
        Description:
        Faulting application name: SearchIndexer.exe, version: 7.0.7600.16385, time stamp: 0x4a5bcdd0
        Faulting module name: NLSData0007.dll, version: 6.1.7600.16385, time stamp: 0x4a5bda88
        Exception code: 0xc0000005
        Fault offset: 0x002141ba
        Faulting process id: 0x13a0
        Faulting application start time: 0x01caa39f2a70ec02
        Faulting application path: C:\Windows\system32\SearchIndexer.exe
        Faulting module path: C:\Windows\System32\NLSData0007.dll
        Report Id: b4f7a7ae-0f92-11df-87fc-e5d65d8794c2
        Event Xml:
        <Event xmlns="http://schemas.microsoft.com/win/2004/08/events/event">
          <System>
            <Provider Name="Application Error" />
            <EventID Qualifiers="0">1000</EventID>
            <Level>2</Level>
            <Task>100</Task>
            <Keywords>0x80000000000000</Keywords>
            <TimeCreated SystemTime="2010-02-02T00:34:23.000000000Z" />
            <EventRecordID>10689</EventRecordID>
            <Channel>Application</Channel>
            <Computer>ricky-win7</Computer>
            <Security />
          </System>
          <EventData>
            <Data>SearchIndexer.exe</Data>
            <Data>7.0.7600.16385</Data>
            <Data>4a5bcdd0</Data>
            <Data>NLSData0007.dll</Data>
            <Data>6.1.7600.16385</Data>
            <Data>4a5bda88</Data>
            <Data>c0000005</Data>
            <Data>002141ba</Data>
            <Data>13a0</Data>
            <Data>01caa39f2a70ec02</Data>
            <Data>C:\Windows\system32\SearchIndexer.exe</Data>
            <Data>C:\Windows\System32\NLSData0007.dll</Data>
            <Data>b4f7a7ae-0f92-11df-87fc-e5d65d8794c2</Data>
          </EventData>
        </Event>

    If you are having the same error and arrived here from a Google search, please comment or add an answer detailing your progress on this, if any...
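
    For reference, the registry repair described above can be scripted from an elevated command prompt. This merely restates the manual procedure from the question (the key path and value name are taken from it), and, as noted there, it did not cure this particular problem:

        reg add "HKLM\SOFTWARE\Microsoft\Windows Search" /v SetupCompletedSuccessfully /t REG_DWORD /d 0 /f
        rem The asker rebooted instead; bouncing the service should also retrigger setup:
        net stop WSearch
        net start WSearch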

    Read the article

  • Search Form in Responsive Design - Remove Search button on Mobile

    - by Kevin
    I'm working with a search box in the header of a responsive website. At desktop/tablet widths there is a search input field with a styled 'SEARCH' button to its right. You can type a search term and either click the button or just hit Enter, with the same result. When you scale down to mobile widths, the search input field fills the width of the screen and the submit button drops below it.

    On an actual iPhone, you can tap the 'SEARCH' button, but the native keyboard that rises from the bottom of the screen also has a Search key where the Enter/Return key would normally be. The phone seems to know I'm in a form and automatically offers to kick off the search from what is basically the Enter key position, except it says SEARCH. So far so good. I figured I don't need the button in the header on mobile since it's already on the keyboard; hiding it would tighten things up and look better. So I added this to my CSS to hide it on mobile:

        #search-button { display: none; }

    But now the search doesn't work at all. On mobile, the keyboard option that showed up before is gone, and hitting Enter does nothing. On desktop at mobile width, hitting Enter no longer works either. Clearly, hiding the submit/search button took away the native way to run the search, and even hitting Enter inside the search input box fails to launch it.

    Here's my search box:

        <form id="search-form" method="get" accept-charset="UTF-8, utf-8" action="search.php">
            <fieldset>
                <div id="search-wrapper">
                    <label id="search-label" for="search">Item:</label>
                    <input id="search" class="placeholder-color" name="item" type="text"
                           placeholder="Item Number or Description" />
                    <button id="search-button" title="Go" type="submit"><span class="search-icon"></span></button>
                </div>
            </fieldset>
        </form>

    Here's what my CSS looks like:

        #search-wrapper {
            float: left;
            display: inline-block;
            position: relative;
            white-space: nowrap;
        }

        #search-button {
            display: inline-block;
            cursor: pointer;
            vertical-align: top;
            height: 30px;
            width: 40px;
        }

        @media only screen and (max-width: 639px) {
            #search-wrapper {
                display: block;
                margin-bottom: 7px;
            }
            #search-button {
                /* this didn't work: it hid the button, but the search failed to run
                display: none; */
            }
        }

    So... how can I hide this submit button on a mobile screen but still let the search run from the mobile keyboard, or just by hitting Enter in the search input box? I was sure that putting display:none on the search button at mobile width would do the trick, but apparently not. Thanks...
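
    A commonly suggested workaround (a sketch, not something verified against this exact markup) is to keep the button in the DOM so the browser still treats the form as submittable, but hide it visually instead of using display:none:

        @media only screen and (max-width: 639px) {
            #search-button {
                /* Keep the submit button in the form's submission logic
                   and accessibility tree, but move it off-screen. */
                position: absolute;
                left: -9999px;
                width: 1px;
                height: 1px;
                overflow: hidden;
            }
        }

    Whether display:none on the default submit button disables implicit form submission varies by browser, which is consistent with the behavior described above; off-screen positioning sidesteps the question entirely.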

    Read the article

  • Languages and VMs: Features that are hard to optimize and why

    - by mrjoltcola
    I'm doing a survey of features in preparation for a research project. Name a mainstream language or language feature that is hard to optimize, and explain why the feature is or isn't worth the price paid; or, instead, just debunk my theories below with anecdotal evidence. Before anyone flags this as subjective: I am asking for specific examples of languages or features, ideas for optimizing those features, important features I haven't considered, and references to implementations that prove my theories right or wrong.

    Top of my list of hard-to-optimize features, and my theories about them (some of the theories are untested and based on thought experiments):

    1) Runtime method overloading (aka multi-method dispatch or signature-based dispatch). Is it hard to optimize only when combined with features that allow runtime recompilation or method addition, or is it just hard anyway? Call-site caching is a common optimization in many runtime systems, but multi-methods add complexity and also make it less practical to inline methods.

    2) Type morphing / variants (aka value-based typing, as opposed to variable-based typing). Traditional optimizations simply cannot be applied when you don't know whether the type of something can change within a basic block. Combined with multi-methods, inlining must be done carefully, if at all, and probably only below some threshold of callee size; i.e., inlining simple property fetches (getters/setters) is easy to justify, but inlining complex methods may cause code bloat. The other issue is that I cannot just assign a variant to a register and JIT it to native instructions, because I have to carry the type info around, so every variable needs two registers instead of one (a sketch of this follows the list). On IA-32 this is inconvenient, even if x64's extra registers improve matters. This is probably my favorite feature of dynamic languages, though, as it simplifies so many things from the programmer's perspective.

    3) First-class continuations. There are multiple ways to implement them, and I have done so with both of the most common approaches: stack copying, and building the runtime on continuation-passing style with cactus stacks, copy-on-write stack frames, and garbage collection. First-class continuations have resource-management issues: we must save everything in case the continuation is resumed, and I'm not aware of any language that supports leaving a continuation with "intent" (i.e., "I am not coming back here, so you may discard this copy of the world"). Having programmed in both the threading model and the continuation model, I know both can accomplish the same things, but the elegance of continuations imposes considerable complexity on the runtime and may also hurt cache efficiency (stack locality changes more with continuations and coroutines). The other issue is that they just don't map to hardware. Optimizing continuations is optimizing the less common case, and as we know, the common case should be fast and the less common cases should be correct.

    4) Pointer arithmetic and the ability to mask pointers (e.g., storing them in integers). I had to throw this in, but I could actually live without it quite easily.

    My feeling is that many high-level features, particularly in dynamic languages, just don't map to hardware.
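
    As a concrete illustration of point 2, here is a minimal C sketch of a tagged variant value (the names and the two-type arithmetic are illustrative only). The tag travels with every value, so even a trivial add must branch on the runtime types before it can touch the payload, which is precisely the check a statically typed compiler discards, and why a variant no longer fits the one-value-one-register model:

        #include <stdint.h>
        #include <stdio.h>

        typedef enum { T_INT, T_DOUBLE, T_STRING } Tag;

        typedef struct {
            Tag tag;           /* the type info that must be carried around */
            union {
                int64_t i;
                double  d;
                char   *s;
            } as;              /* the payload alone no longer identifies itself */
        } Variant;

        /* Even trivial arithmetic pays for a dispatch on the runtime tags.
           (String operands are omitted for brevity.) */
        Variant variant_add(Variant a, Variant b) {
            Variant r;
            if (a.tag == T_INT && b.tag == T_INT) {
                r.tag = T_INT;
                r.as.i = a.as.i + b.as.i;
            } else {
                r.tag = T_DOUBLE;   /* promote mixed int/double operands */
                r.as.d = (a.tag == T_INT ? (double)a.as.i : a.as.d)
                       + (b.tag == T_INT ? (double)b.as.i : b.as.d);
            }
            return r;
        }

        int main(void) {
            Variant a = { T_INT,    { .i = 2 } };
            Variant b = { T_DOUBLE, { .d = 0.5 } };
            printf("%f\n", variant_add(a, b).as.d); /* prints 2.500000 */
            return 0;
        }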
    Microprocessor implementations have billions of dollars of research behind their on-chip optimizations, yet the choice of language features can marginalize many of them (caching, aliasing the top of the stack to a register, instruction-level parallelism, return-address buffers, loop buffers, branch prediction). Macro-applications of micro-features don't necessarily pan out the way some developers like to think. Implementing many languages on a VM ends up mapping native operations onto function calls: the more dynamic a language is, the more must be looked up or cached at runtime, nothing can be assumed, and so the instruction mix contains a higher percentage of non-local branching than traditional, statically compiled code. The only thing we can really JIT well is expression evaluation over non-dynamic types and operations on constant or immediate types. My gut feeling is that bytecode virtual machines and JIT cores are therefore not always justified for certain languages. I welcome your answers.

    Read the article

  • <div> of images, retrieved by getJSON, disappear after append()

    - by Midiane
    Hi all, I'm working on a GMaps application that retrieves images via getJSON() and uses them to populate a popup marker. The following is the markup I add to the marker dynamically:

        <div id="images"></div>
        <div id="CampWindow" style="display:none;width:550px;height:500px;">
            <h4 id="camp-title"></h4>
            <p>View... (all links open in new windows)</p>
            <ul>
                <li><a id="camp-hp-link" target="_blank" href="">camp home page</a></li>
                <li>information: <a id="camp-av-link" target="_blank" href="">availability</a> |
                    <a id="camp-vi-link" target="_blank" href="">vital information</a></li>
            </ul>
            <p id="message"></p>
        </div>

    I've been clawing out my eyes and woohoo for the past couple of days trying to get the images to show inside the CampWindow div. Then I decided to think laterally and check whether the images were being retrieved at all. I moved the images div outside CampWindow and, sure as Bob (Hope), the images were being retrieved and refreshed with every click. So I decided to keep the images div outside and, once the images are loaded, append it to the CampWindow div. It still doesn't work: when I append the div to the main CampWindow div, the images won't show. When I inspect with the Firebug element picker, it shows the images div as empty; with the div outside, it shows the images. I've tried both append() and appendTo() with no success. Am I missing something here? I have no more woohoo to claw out. Please, please help.

        marker.clicked = function(marker){
            $("#images").html('');
            $('#camp-title').text(this.name);
            $('#camp-hp-link').attr('href', this.url);
            $('#camp-av-link').attr('href', this.url + '/tourism/availability.php');
            $('#camp-vi-link').attr('href', this.url + '/tourism/general.php');

            // get resort images via jQuery AJAX call - includes/GetResortImages.inc.php
            $.getJSON('./includes/GetResortImages.inc.php',
                      { park: this.park_name, camp: this.camp_name },
                      RetrieveImages);

            function RetrieveImages(data) {
                if ('failed' == data.status) {
                    $('#messages').append("<em>We don't have any images for this rest camp right now!</em>");
                } else {
                    if ('' != data.camp) {
                        $.each(data, function(key, value){
                            $("<img/>").attr("src", value).appendTo('#images');
                        });
                    }
                }
            }
            //.append($("#images"));

            $("#CampWindow").show();
            var windowContent = $("<html />");
            $("#CampWindow").appendTo(windowContent);
            var infoWindowAnchor = marker.getIcon().infoWindowAnchor;
            var iconAnchor = marker.getIcon().iconAnchor;
            var offset = new google.maps.Size(infoWindowAnchor.x - iconAnchor.x,
                                              infoWindowAnchor.y - iconAnchor.y);
            map.openInfoWindowHtml(marker.getLatLng(), windowContent.html(), {pixelOffset: offset});
        }
        markers.push(marker);
        });
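
    One likely culprit, offered as a guess with names reused from the question: $.getJSON() is asynchronous, so windowContent.html() serializes CampWindow before RetrieveImages has appended a single <img>. A sketch of one way around it is to open the info window from inside the callback, after the images exist in the DOM:

        marker.clicked = function (marker) {
            $("#images").html('');
            // ...populate #camp-title and the three links exactly as before...
            var infoWindowAnchor = marker.getIcon().infoWindowAnchor;
            var iconAnchor = marker.getIcon().iconAnchor;
            var offset = new google.maps.Size(infoWindowAnchor.x - iconAnchor.x,
                                              infoWindowAnchor.y - iconAnchor.y);
            $.getJSON('./includes/GetResortImages.inc.php',
                      { park: this.park_name, camp: this.camp_name },
                      function (data) {
                if ('failed' != data.status && '' != data.camp) {
                    $.each(data, function (key, value) {
                        $("<img/>").attr("src", value).appendTo('#images');
                    });
                }
                // Move the images in and serialize only once they exist.
                $("#images").appendTo("#CampWindow");
                map.openInfoWindowHtml(marker.getLatLng(), $("#CampWindow").html(),
                                       { pixelOffset: offset });
            });
        };

    Note that openInfoWindowHtml() takes a string, so anything appended to #CampWindow after that call can never show up in the already-rendered window.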

    Read the article

< Previous Page | 373 374 375 376 377 378 379 380 381 382 383 384  | Next Page >