Search Results

Search found 22065 results on 883 pages for 'performance testing'.


  • Programmatically enable/disable menuBar buttons in Flex 4

    - by Hamid
     I have the following XML in my Flex4 (AIR) project that defines the start of my menu interface: <mx:MenuBar x="0" y="0" width="100%" id="myMenuBar" labelField="@label" itemClick="menuChange(event)"> <mx:dataProvider> <s:XMLListCollection> <fx:XMLList xmlns=""> <menu label="File"> <item label="New"/> <item label="Load"/> <item label="Save" enabled="false"/> </menu> <menu label="Help"> <item label="About"/> </menu> </fx:XMLList> </s:XMLListCollection> </mx:dataProvider> </mx:MenuBar> I am trying to find the syntax that will let me set the Save item to enabled=true after a file has been loaded by clicking "Load"; however, I can't figure it out. Can someone make a suggestion, please? Currently, button clicks are detected by a switch/case testing the String result of the MenuEvent's event.item.@label. Maybe this isn't the best way?


  • OptimisticLockException in inner transaction ruins outer transaction

    - by Pace
     I have the following code (OLE = OptimisticLockException)... public void outer() { try { middle(); } catch (OLE) { updateEntities(); outer(); } } @Transactional public void middle() { try { inner(); } catch (OLE) { updateEntities(); middle(); } } @Transactional public void inner() { //Do DB operation } inner() is called by other non-transactional methods, which is why both middle() and inner() are transactional. As you can see, I deal with OLEs by updating the entities and retrying the operation. The problem I'm having is that when I designed things this way I was assuming that the only time one could get an OLE was when a transaction closed. This is apparently not the case, as the call to inner() is throwing an OLE even when the stack is outer()->middle()->inner(). Now, middle() is properly handling the OLE and the retry succeeds, but when it comes time to close the transaction it has been marked rollbackOnly by Spring. When the middle() method call finally returns, the closing aspect throws an exception because it can't commit a transaction marked rollbackOnly. I'm uncertain what to do here. I can't clear the rollbackOnly state. I don't want to force-create a transaction on every call to inner() because that kills my performance. Am I missing something, or can anyone see a way I can structure this differently? EDIT: To clarify what I'm asking, let me explain my main question. Is it possible to catch and handle an OLE if you are inside an @Transactional method? FYI: The transaction manager is a JpaTransactionManager and the JPA provider is Hibernate.
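
     One pattern that sidesteps the rollback-only problem is to keep the retry loop outside any transaction and give each attempt its own transaction, so a failed attempt rolls back cleanly without marking anything the caller owns. Below is a rough Spring/JPA sketch of that idea; the class and method names are invented for illustration, and whether the cost of a fresh transaction per attempt is acceptable is exactly the trade-off raised in the question.

         import javax.persistence.OptimisticLockException;

         import org.springframework.beans.factory.annotation.Autowired;
         import org.springframework.stereotype.Service;
         import org.springframework.transaction.annotation.Propagation;
         import org.springframework.transaction.annotation.Transactional;

         // Non-transactional retry loop: each attempt runs in its own transaction
         // inside a separate bean, so a rolled-back attempt cannot poison an outer one.
         @Service
         public class RetryingWriter {

             private final TransactionalWorker worker;

             @Autowired
             public RetryingWriter(TransactionalWorker worker) {
                 this.worker = worker;
             }

             public void writeWithRetry(int maxAttempts) {
                 for (int attempt = 1; attempt <= maxAttempts; attempt++) {
                     try {
                         worker.attemptOnce();
                         return;
                     } catch (OptimisticLockException e) {
                         // Depending on configuration, Spring may surface this as
                         // ObjectOptimisticLockingFailureException instead.
                         // Re-read the stale entities here before the next attempt.
                     }
                 }
                 throw new IllegalStateException("gave up after " + maxAttempts + " attempts");
             }
         }

         // The transactional step lives in its own bean so the @Transactional proxy applies.
         @Service
         class TransactionalWorker {

             // REQUIRES_NEW: the attempt commits or rolls back entirely on its own,
             // so its rollback-only flag never reaches the caller.
             @Transactional(propagation = Propagation.REQUIRES_NEW)
             public void attemptOnce() {
                 // The actual DB operation (the body of inner() in the question).
             }
         }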


  • PreparedStatement question in Java against Oracle.

    - by fardon57
     Hi everyone, I'm working on modifying some code to use PreparedStatement instead of a normal Statement, for security and performance reasons. Our application currently stores information in an embedded Derby database, but we are going to move to Oracle soon. I've found two things about Oracle and PreparedStatement that I need your help with: 1- I've found this document saying that Oracle doesn't handle bind parameters in IN clauses, so we cannot supply a query like: Select pokemon from pokemonTable where capacity in (?,?,?,?) Is that true? Is there any workaround? ... Why? 2- We have some fields which are of type TIMESTAMP. So with our current Statement, the query looks like this: Select raichu from pokemonTable where evolution = TO_TIMESTAMP('2500-12-31 00:00:00.000', 'YYYY-MM-DD HH24:MI:SS.FF') What should be done for a PreparedStatement? Should I put into the array of parameters: 2500-12-31 or TO_TIMESTAMP('2500-12-31 00:00:00.000', 'YYYY-MM-DD HH24:MI:SS.FF')? Thanks for your help, I hope my questions are clear! Regards,
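
     For what it's worth, a common workaround for the IN-clause issue is to generate one ? placeholder per value and bind them in a loop, and for a TIMESTAMP column the usual approach is to bind a java.sql.Timestamp rather than embedding TO_TIMESTAMP in the SQL text. A rough plain-JDBC sketch, reusing the table and column names from the question and assuming a non-empty list and an integer capacity column (both assumptions):

         import java.sql.Connection;
         import java.sql.PreparedStatement;
         import java.sql.ResultSet;
         import java.sql.SQLException;
         import java.sql.Timestamp;
         import java.util.List;

         public class PokemonQueries {

             // 1) IN clause: build one "?" per value, then bind each value by position.
             public static ResultSet findByCapacities(Connection conn, List<Integer> capacities) throws SQLException {
                 StringBuilder placeholders = new StringBuilder();
                 for (int i = 0; i < capacities.size(); i++) {
                     placeholders.append(i == 0 ? "?" : ", ?");
                 }
                 PreparedStatement ps = conn.prepareStatement(
                         "SELECT pokemon FROM pokemonTable WHERE capacity IN (" + placeholders + ")");
                 for (int i = 0; i < capacities.size(); i++) {
                     ps.setInt(i + 1, capacities.get(i)); // bind parameters are 1-based
                 }
                 return ps.executeQuery(); // real code should close the statement, e.g. via try-with-resources
             }

             // 2) TIMESTAMP column: bind a java.sql.Timestamp directly; no TO_TIMESTAMP in the SQL text.
             public static ResultSet findByEvolution(Connection conn, Timestamp evolution) throws SQLException {
                 PreparedStatement ps = conn.prepareStatement(
                         "SELECT raichu FROM pokemonTable WHERE evolution = ?");
                 ps.setTimestamp(1, evolution);
                 return ps.executeQuery();
             }
         }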


  • Producer consumer with qualifications

    - by tgguy
     I am new to Clojure and am trying to understand how to properly use its concurrency features, so any critique/suggestions are appreciated. I am trying to write a small test program in Clojure that works as follows: there are 5 producers and 2 consumers; a producer waits for a random time and then pushes a number onto a shared queue; a consumer should pull a number off the queue as soon as the queue is nonempty and then sleep for a short time to simulate doing work; the consumers should block when the queue is empty; producers should block when the queue has more than 4 items in it to prevent it from growing huge. Here is my plan for each step above: the producers and consumers will be agents that don't really care for their state (just nil values or something); I just use the agents to send-off a "consumer" or "producer" function to do at some time. Then the shared queue will be (def queue (ref [])). Perhaps this should be an atom though? In the "producer" agent function, simply (Thread/sleep (rand-int 1000)) and then (dosync (alter queue conj (rand-int 100))) to push onto the queue. I am thinking of making the consumer agents watch the queue for changes with add-watcher. Not sure about this though... it will wake up the consumers on any change, even if the change came from a consumer pulling something off (possibly making it empty). Perhaps checking for this in the watcher function is sufficient. Another problem I see is that if all consumers are busy, then what happens when a producer adds something new to the queue? Does the watched event get queued up on some consumer agent or does it disappear? See above. I really don't know how to do this. I heard that Clojure's seque may be useful, but I couldn't find enough documentation on how to use it, and my initial testing didn't seem to work (sorry, I don't have the code on me anymore).
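
     Since Clojure runs on the JVM, the blocking behaviour described above maps directly onto a bounded java.util.concurrent.BlockingQueue (clojure's seque is built on the same machinery, and the class can also be used from Clojure via interop). A minimal Java sketch of those semantics, with the thread counts and random waits taken from the question and everything else invented for illustration:

         import java.util.Random;
         import java.util.concurrent.ArrayBlockingQueue;
         import java.util.concurrent.BlockingQueue;
         import java.util.concurrent.ExecutorService;
         import java.util.concurrent.Executors;

         public class ProducerConsumerDemo {
             public static void main(String[] args) {
                 // Capacity 4: put() blocks while 4 items are waiting, so the queue never grows past 4.
                 BlockingQueue<Integer> queue = new ArrayBlockingQueue<>(4);
                 ExecutorService pool = Executors.newFixedThreadPool(7);
                 Random rnd = new Random();

                 for (int p = 0; p < 5; p++) {                      // 5 producers
                     pool.execute(() -> {
                         try {
                             while (true) {
                                 Thread.sleep(rnd.nextInt(1000));   // wait a random time
                                 queue.put(rnd.nextInt(100));       // blocks if the queue is full
                             }
                         } catch (InterruptedException e) {
                             Thread.currentThread().interrupt();    // runs until interrupted
                         }
                     });
                 }
                 for (int c = 0; c < 2; c++) {                      // 2 consumers
                     pool.execute(() -> {
                         try {
                             while (true) {
                                 int n = queue.take();              // blocks if the queue is empty
                                 Thread.sleep(100);                 // simulate doing work
                                 System.out.println("consumed " + n);
                             }
                         } catch (InterruptedException e) {
                             Thread.currentThread().interrupt();
                         }
                     });
                 }
             }
         }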


  • Can I commit changes to the actual database while debugging C# in Visual Studio?

    - by nathant23
    I am creating a C# application using Visual Studio that uses an SQLExpress database. When I hit f5 to debug the application and make changes to the database I believe what is happening is there is a copy of the database in the bin/debug folder that changes are being made to. However, when I stop the debugging and then hit f5 the next time a new copy of the database is being put in the bin/debug folder so that all the changes made the last time are gone. My question is: Is there a way that when I am debugging the application I can have it make changes to the actual database and those changes are actually saved or will it only make changes to the copy in the bin/debug folder (if that is what is actually happening)? I've seen similar questions, but I couldn't find an answer that said if it's possible to make those changes persistent in the actual .mdf file. The reason I ask is because as I build this application I am continuously adding pieces and testing to make sure they all work together. When I put in test data I am using actual data that I would like to stay in the database. This would just help me not have to reenter the data later. Thanks in advance for any help or information that could help me better understand the process.


  • Converting ntext to nvarchar(max) - Getting around size limitation

    - by Overflew
     Hi all, I'm trying to change an existing SQL ntext column to nvarchar(max), and encountering an error on the size limit. There's a large amount of existing data, some of which is more than the 8k limit, I believe. We're looking to convert this so that the field is searchable in LINQ. The two SQL statements I've tried are: update Table set dataNVarChar = convert(nvarchar(max), dataNtext) where dataNtext is not null update Table set dataNVarChar = cast(dataNtext as nvarchar(max)) where dataNtext is not null And the error I get is: Cannot create a row of size 8086 which is greater than the allowable maximum row size of 8060. This is using SQL Server 2008. Any help appreciated, thanks. Update / Solution: The marked answer below is correct, and SQL 2008 can change the column to the correct data type in my situation, and there are no dramas with the LINQ-utilising application we use on top of it: alter table [TBL] alter column [COL] nvarchar(max) I've also been advised to follow it up with: update [TBL] set [COL] = [COL] which completes the conversion by moving the data from the LOB structure to the table (if the length is less than 8k), which improves performance / keeps things proper.


  • How do I resize an iframe with dynamic content?

    - by Middletone
    I'm populating an iframe with some contents that are injected into it but I'm unable to size it correctly. I've tried a variety of methods but here is the most recent code. After populating the iframe with an html based mail message it doesn't seem to register the scroll height. <iframe scrolling="no" width="100%" onload="javascript:(LoadIFrame(this));" > HTML content goes here. Make sure it's long enough to need to scroll before testing. </iframe> <script type="text/javascript"> function LoadIFrame(iframe) { $("iframe").each(function() { $(this).load(function() { var doc = this.document; if (this.contentDocument) { doc = this.contentDocument; } else if (this.contentWindow) { doc = this.contentWindow.document; } else if (this.document) { doc = this.document; } this.height = (doc.body.scrollHeight + 50) + 'px'; this.onload = ''; }); txt = $(this).text(); var doc = this.document; if (this.contentDocument) { doc = this.contentDocument; } else if (this.contentWindow) { doc = this.contentWindow.document; } else if (this.document) { doc = this.document; } doc.open(); doc.writeln(txt); doc.close(); }); } </script>


  • How to justify using a scripting language as part of a project

    - by sylvanaar
     I have a specific project in which I want to use either a scripting language + C, or, as an alternative, a 100% Java solution. The program adapts a legacy system for use with other modern systems. Basically, I have few choices as to what language I can use. I have C/C++, Java 1.4, and I have also compiled Lua for this environment. The program does 'screen scraping' and has to deal with a lot of strings. That part of the code is highly variable. Most of the developers at my company use C, so my original design was to write some portions in C, and use Lua for the part that dealt with strings and changed frequently. I was told 'You have to justify your use of the scripting language.' So I reworked my design using 100% Java, and was told Java won't have enough performance; you should do the whole thing in C. I'm not controlling lasers or doing image processing - just some screen scraping. I still have to provide justification for using anything but C - so what justification can I provide?


  • What is different about C++ math.h abs() compared to my abs()

    - by moka
    I am currently writing some glsl like vector math classes in c++, and I just implemented an abs() function like this: template<class T> static inline T abs(T _a) { return _a < 0 ? -_a : _a; } I compared its speed to the default c++ abs from math.h like this: clock_t begin = clock(); for(int i=0; i<10000000; ++i) { float a = abs(-1.25); }; clock_t end = clock(); unsigned long time1 = (unsigned long)((float)(end-begin) / ((float)CLOCKS_PER_SEC/1000.0)); begin = clock(); for(int i=0; i<10000000; ++i) { float a = myMath::abs(-1.25); }; end = clock(); unsigned long time2 = (unsigned long)((float)(end-begin) / ((float)CLOCKS_PER_SEC/1000.0)); std::cout<<time1<<std::endl; std::cout<<time2<<std::endl; Now the default abs takes about 25ms while mine takes 60. I guess there is some low level optimisation going on. Does anybody know how math.h abs works internally? The performance difference is nothing dramatic, but I am just curious!


  • String or binary data would be truncated -- Heisenberg problem

    - by harpo
    When you get this error, the first thing you ask is, which column? Unfortunately, SQL Server is no help here. So you start doing trial and error. Well, right now I have a statement like: INSERT tbl (A, B, C, D, E, F, G) SELECT A, B * 2, C, D, E, q.F, G FROM tbl ,othertable q WHERE etc etc Note that Some values are modified or linked in from another table, but most values are coming from the original table, so they can't really cause truncation going back to the same field (that I know of). Eliminating fields one at a time eventually makes the error go away, if I do it cumulatively, but — and here's the kicker — it doesn't matter which fields I eliminate. It's as if SQL Server is objecting to the total length of the row, which I doubt, since there are only about 40 fields in all, and nothing large. Anyone ever seen this before? Thanks. UPDATE: I have also done "horizontal" testing, by filtering out the SELECT, with much the same result. In other words, if I say WHERE id BETWEEN 1 AND 100: Error WHERE id BETWEEN 1 AND 50: No error WHERE id BETWEEN 50 AND 100: No error I tried many combinations, and it cannot be limited to a single row.


  • I'm having an issue using GLshort to represent vertices and normals.

    - by Xylopia
     As my project gets close to the optimization stage, I notice that reducing vertex metadata could vastly improve the performance of 3D rendering. Eventually, I searched around and found the following advice on Stack Overflow: "Using GL_SHORT instead of GL_FLOAT in an OpenGL ES vertex array", "How do you represent a normal or texture coordinate using GLshorts?", and "Advice on speeding up OpenGL ES 1.1 on the iPhone". Simple experiments show that switching from "FLOAT" to "SHORT" for vertex and normal isn't tough, but what troubles me is that when you scale the vertices back to their original size (with glScalef), normals are multiplied by the reciprocal of the scale. The natural remedy for this is to multiply the normals by the scale before you submit them to the GPU. Then my short normals almost become 0, because the scale factor is usually smaller than 1. Duh! How do you use "short" for both vertex and normal at the same time? I've been trying this and that for about a full day, but I could only go for "float vertex w/ byte normal" or "short vertex w/ float normal" so far. Your help would be truly appreciated.


  • ReaderWriterLockSlim and Pulse/Wait

    - by Jono
    Is there an equivalent of Monitor.Pulse and Monitor.Wait that I can use in conjunction with a ReaderWriterLockSlim? I have a class where I've encapsulated multi-threaded access to an underlying queue. To enqueue something, I acquire a lock that protects the underlying queue (and a couple of other objects) then add the item and Monitor.Pulse the locked object to signal that something was added to the queue. public void Enqueue(ITask task) { lock (mutex) { underlying.Enqueue(task); Monitor.Pulse(mutex); } } On the other end of the queue, I have a single background thread that continuously processes messages as they arrive on the queue. It uses Monitor.Wait when there are no items in the queue, to avoid unnecessary polling. (I consider this to be good design, but any flames (within reason) are welcome if they help me learn otherwise.) private void DequeueForProcessing(object state) { while (true) { ITask task; lock (mutex) { while (underlying.Count == 0) { Monitor.Wait(mutex); } task = underlying.Dequeue(); } Process(task); } } As more operations are added to this class (requiring read-only access to the lock protected underlying), someone suggested using ReaderWriterLockSlim. I've never used the class before, and assuming it can offer some performance benefit, I'm not against it, but only if I can keep the Pulse/Wait design.


  • Strange problem with simple multithreading program in Java

    - by Elizabeth
     Hello, I am just starting to play with multithreaded programming. I would like my program to show the characters '-' and '+' alternately, but it doesn't. My task is to use the synchronized keyword. So far I have: class FunnyStringGenerator{ private char c; public FunnyStringGenerator(){ c = '-'; } public synchronized char next(){ if(c == '-'){ c = '+'; } else{ c = '-'; } return c; } } class ThreadToGenerateStr implements Runnable{ FunnyStringGenerator gen; public ThreadToGenerateStr(FunnyStringGenerator fsg){ gen = fsg; } @Override public void run() { for(int i = 0; i < 10; i++){ System.out.print(gen.next()); } } } public class Main{ public static void main(String[] args) throws IOException { FunnyStringGenerator FSG = new FunnyStringGenerator(); ExecutorService exec = Executors.newCachedThreadPool(); for(int i = 0; i < 20; i++){ exec.execute(new ThreadToGenerateStr(FSG)); } } } EDIT: I am also testing Thread.sleep in the run method instead of the for loop.
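
     One detail worth noting in the code above: next() is synchronized, but the System.out.print call in run() is not inside the same lock, so two threads can receive '+' and '-' in the correct order and still print them in the wrong order. A small sketch of one way around that, keeping the synchronized requirement by making the flip-and-print step a single synchronized method (printNext is an invented name, and shutting the executor down lets the program exit):

         class FunnyStringGenerator {
             private char c = '-';

             // Flipping the character and printing it happen under the same lock,
             // so no other thread can interleave between the two steps.
             public synchronized void printNext() {
                 c = (c == '-') ? '+' : '-';
                 System.out.print(c);
             }
         }

         class ThreadToGenerateStr implements Runnable {
             private final FunnyStringGenerator gen;

             ThreadToGenerateStr(FunnyStringGenerator fsg) {
                 gen = fsg;
             }

             @Override
             public void run() {
                 for (int i = 0; i < 10; i++) {
                     gen.printNext();
                 }
             }
         }

         public class Main {
             public static void main(String[] args) {
                 FunnyStringGenerator fsg = new FunnyStringGenerator();
                 java.util.concurrent.ExecutorService exec =
                         java.util.concurrent.Executors.newCachedThreadPool();
                 for (int i = 0; i < 20; i++) {
                     exec.execute(new ThreadToGenerateStr(fsg));
                 }
                 exec.shutdown(); // let the submitted tasks finish, then let the JVM exit
             }
         }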


  • OpenGLES - Rendering a background image only once and not wiping it

    - by chaosbeaker
     Hello, first time asking a question here, but I've been watching others' answers for a while. My question is about improving the performance of my program. Currently I'm wiping the viewFrameBuffer on each pass through my program and then rendering the background image first, followed by the rest of my scene. I was wondering how I go about rendering the background image once, and only wiping the rest of the scene for updating/re-rendering. I tried using a separate buffer but I'm not sure how to present this new buffer to the render buffer. // Set the current EAGLContext and bind to the framebuffer. This will direct all OGL commands to the // framebuffer and the associated renderbuffer attachment which is where our scene will be rendered [EAGLContext setCurrentContext:context]; glBindFramebufferOES(GL_FRAMEBUFFER_OES, viewFramebuffer); // Define the viewport. Changing the settings for the viewport can allow you to scale the viewport // as well as the dimensions etc and so I'm setting it for each frame in case we want to change it glViewport(0, 0, screenBounds.size.width , screenBounds.size.height); // Clear the screen. If we are going to draw a background image then this clear is not necessary // as drawing the background image will destroy the previous image glClearColor(0.0f, 1.0f, 0.0f, 1.0f); glClear(GL_COLOR_BUFFER_BIT); // Setup how the images are to be blended when rendered. This could be changed at different points during your // render process if you wanted to apply different effects glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA); switch (currentViewInt) { case 1: { [background render:CGPointMake(240, 0) fromTopLeftBottomRightCenter:@"Bottom"]; // Other Rendering Code }} // Bind to the renderbuffer and then present this image to the current context glBindRenderbufferOES(GL_RENDERBUFFER_OES, viewRenderbuffer); [context presentRenderbuffer:GL_RENDERBUFFER_OES]; Hopefully by solving this I'll also be able to implement another buffer just for rendering particles, as I can set them to always use a black background as their alpha source. Any help is greatly appreciated.


  • How can I have sub-elements of a complex/mixed type with unrestricted order and count?

    - by mbmcavoy
    I am working with XML where some elements will contain text with additional markup. This is similar to this example at W3Schools. However, I need the markup tags to be able to appear in any order and possibly more than once. To modify their example for illustration: <letter> Dear Mr.<name>John Smith</name>. Your order <orderid>1032</orderid> will be shipped on <shipdate>2001-07-13</shipdate>. Thank you, <name>Bob Adams</name> </letter> None of the options presented by W3Schools (on the page following the linked example) allow this XML due to the second <name> element. Their explanation of the "indicators" and my testing are consistent. <xs:sequence> - violates the element order <xs:choice> - more than one kind of element is used. <xs:all> - maxOccurs is restricted to "1". This seems like it should be basic, after all, XHTML allows such things. How do I define my schema to allow this?


  • Heavy Mysql operation & Time Constraints [closed]

    - by Rahul Jha
     There is a performance issue that I am stuck on in my application, which is based on PHP & MySQL. The application is for data migration, where data has to be uploaded and, after various processes (cleaning of foreign characters, duplicate check, id generation), inserted into one central table and then into 5 different tables. There, an id is generated and that id has to be updated in the central table. There are different sets of records and validation rules. The problem I am facing is that when I insert, say, a 4K-row file (containing 20 columns), it works fine: within 15 minutes it gets inserted everywhere. But when I insert the same records again, it takes one hour to insert (ideally it should get inserted by marking the earlier inserted data as duplicate). After going through the log file, what I noticed is that there is a MySQL select statement where I am checking for duplicates and getting the IDs which are duplicates. Then I am calling a function inside a for loop which basically inserts records into 5 tables and updates the id in the central table. This called function takes the majority of the whole process time. P.S. The records have to be inserted record by record. Kindly suggest some solution. //This is the sample code $query=mysql_query("SELECT DISTINCT p1.ID FROM table1 p1, table2 p2, table3 a WHERE p2.datatype =0 AND (p1.datatype =1 || p1.datatype=2) AND p2.ID =0 AND p1.ID = a.ID AND p1.coulmn1 = p2.column1 AND p1.coulmn2 = p2.coulmn2 AND a.coulmn3 = p2.column3"); $num=mysql_num_rows($query); for($i=0;$i<$num;$i++) { $f=mysql_result($query,$i,"ID"); //calling function RecordInsert($f); }


  • Python optimization problem?

    - by user342079
     Alright, I had this homework recently (don't worry, I've already done it, but in C++), and I got curious how I could do it in Python. The problem is about 2 light sources that emit light. I won't get into details though. Here's the code (which I've managed to optimize a bit in the latter part): import math, array import numpy as np from PIL import Image size = (800,800) width, height = size s1x = width * 1./8 s1y = height * 1./8 s2x = width * 7./8 s2y = height * 7./8 r,g,b = (255,255,255) arr = np.zeros((width,height,3)) hy = math.hypot print 'computing distances (%s by %s)'%size, for i in xrange(width): if i%(width/10)==0: print i, if i%20==0: print '.', for j in xrange(height): d1 = hy(i-s1x,j-s1y) d2 = hy(i-s2x,j-s2y) arr[i][j] = abs(d1-d2) print '' arr2 = np.zeros((width,height,3),dtype="uint8") for ld in [200,116,100,84,68,52,36,20,8,4,2]: print 'now computing image for ld = '+str(ld) arr2 *= 0 arr2 += abs(arr%ld-ld/2)*(r,g,b)/(ld/2) print 'saving image...' ar2img = Image.fromarray(arr2) ar2img.save('ld'+str(ld).rjust(4,'0')+'.png') print 'saved as ld'+str(ld).rjust(4,'0')+'.png' I have managed to optimize most of it, but there's still a huge performance gap in the part with the two for loops, and I can't seem to think of a way to bypass that using common array operations... I'm open to suggestions :D


  • Fake serial communication under Linux

    - by kigurai
     I have an application where I want to simulate the connection between a device and a "modem". The device will be connected to a serial port and will talk to the software modem through that. For testing purposes I want to be able to use a mock software device to test sending and receiving data. Example Python code: device = Device() modem = Modem() device.connect(modem) device.write("Hello") modem_reply = device.read() Now, in my final app I will just pass /dev/ttyS1 or COM1 or whatever for the application to use. But how can I do this in software? I am running Linux and the application is written in Python. I have tried making a FIFO (mkfifo ~/my_fifo) and that does work, but then I'll need one FIFO for writing and one for reading. What I want is to open ~/my_fake_serial_port and read and write to that. I have also played with the pty module, but can't get that to work either. I can get a master and slave file descriptor from pty.openpty(), but trying to read or write to them only causes an IOError: Bad File Descriptor error message.


  • AppFabric caching's local cache isn't working for us... What are we doing wrong?

    - by Olly
     We are using AppFabric as the second-level cache for an NHibernate ASP.NET application comprising a customer-facing website and an admin website. They are both connected to the same cache, so when admin updates something, the customer-facing site is updated. It seems to be working OK - we have a CacheCluster on a separate server and all is well, but we want to enable local cache to get better performance; however, it doesn't seem to be working. We have enabled it like this... bool UseLocalCache = int LocalCacheObjectCount = int.MaxValue; TimeSpan LocalCacheDefaultTimeout = TimeSpan.FromMinutes(3); DataCacheLocalCacheInvalidationPolicy LocalCacheInvalidationPolicy = DataCacheLocalCacheInvalidationPolicy.TimeoutBased; if (UseLocalCache) { configuration.LocalCacheProperties = new DataCacheLocalCacheProperties( LocalCacheObjectCount, LocalCacheDefaultTimeout, LocalCacheInvalidationPolicy ); // configuration.NotificationProperties = new DataCacheNotificationProperties(500, TimeSpan.FromSeconds(300)); } Initially we tried using a timeout invalidation policy (3 mins) and our app felt like it was running faster. HOWEVER, we noticed that if we changed something in the admin site, it was immediately updated in the live site. As we are using timeouts, not notifications, this demonstrates that the local cache isn't being queried (or is, but is always missing). cache.GetType().Name returns "LocalCache" - so the factory has made a local cache. Running "Get-Cache-Statistics MyCache" in PowerShell on my dev environment (ASP.NET app running locally from VS2008, cache cluster running on a separate W2K8 machine) shows a handful of Request Counts. However, in the production environment, the Request Count increases dramatically. We tried following the method here to see the cache client-server traffic... http://blogs.msdn.com/b/appfabriccat/archive/2010/09/20/appfabric-cache-peeking-into-client-amp-server-wcf-communication.aspx but the log file had nothing but the initial header in it - i.e. no logging either. I can't find anything on SO or Google. Have we done something wrong? Have we got a screwy install of AppFabric - we installed it via Web Platform Installer, I think? (Note: the IIS box running ASP.NET isn't in the cluster - it is just the client.) Any insights gratefully received!


  • Android: Help with tabs view

    - by James
     So I'm trying to build a tabs view for an Android app, and for some reason I get a force close every time I try to run it on the emulator. When I run the examples, everything shows fine, so I went as far as to just about copy most of the layout from the examples (a mix of Tabs2.java and Tabs3.java), but for some reason it still won't run. Any ideas? Here is my code (List1.class is a copy from the examples for testing purposes). It all compiles fine, it just gets a force close the second it starts: package com.jvavrik.gcm; import android.app.TabActivity; import android.content.Intent; import android.os.Bundle; import android.view.View; import android.widget.TabHost; import android.widget.TextView; public class GCM extends TabActivity { /** Called when the activity is first created. */ @Override public void onCreate(Bundle savedInstanceState) { super.onCreate(savedInstanceState); final TabHost tabHost = getTabHost(); tabHost.addTab(tabHost.newTabSpec("tab1") .setIndicator("g", getResources().getDrawable(R.drawable.star_big_on)) .setContent(new Intent(this, List1.class))); tabHost.addTab(tabHost.newTabSpec("tab2") .setIndicator("C") .setContent(new Intent(this, List1.class)) ); tabHost.addTab(tabHost.newTabSpec("tab3") .setIndicator("S") .setContent(new Intent(this, List1.class)) ); tabHost.addTab(tabHost.newTabSpec("tab4") .setIndicator("A") .setContent(new Intent(this, List1.class)) ); } }


  • Mootools Javascript can't push to array

    - by nona
    I have an array set with the heights of each hidden div, but when I use it, the div instantly jumps down, rather than slowly sliding as when there is a literal number. EDIT: testing seems to reveal that it's a problem with the push method, as content_height.push(item.getElement('.moreInfo').offsetHeight);alert(content_height[i]);gives undefined, but alert(item.getElement('.moreInfo').offsetHeight); gives the correct values Javascript: window.addEvent('domready', function(){ var content_height = [];i=0; $$( '.bio_accordion' ).each(function(item){ i++; content_height.push( item.getElement('.moreInfo').offsetHeight); var thisSlider = new Fx.Slide( item.getElement( '.moreInfo' ), { mode: 'horizontal' } ); thisSlider.hide(); item.getElement('.moreInfo').set('tween').tween('height', '0px'); var morph = new Fx.Morph(item.getElement( '.divToggle' )); var selected = 0; item.getElement( '.divToggle' ).addEvents({ 'mouseenter': function(){ if(!selected) this.morph('.div_highlight'); }, 'mouseleave': function(){ if(!selected) { this.morph('.divToggle'); } }, 'click': function(){ if (!selected){ if (this.getElement('.symbol').innerHTML == '+') this.getElement('.symbol').innerHTML = '-'; else this.getElement('.symbol').innerHTML = '+'; item.getElement('.moreInfo').set('tween', { duration: 1500, transition: Fx.Transitions.Bounce.easeOut }).tween('height', content_height[i]); //replacing this with '650' keeps it smooth selected = 1; thisSlider.slideIn(); } else{ if (this.getElement('.symbol').innerHTML == '+') this.getElement('.symbol').innerHTML = '-'; else this.getElement('.symbol').innerHTML = '+'; thisSlider.slideOut(); item.getElement('.moreInfo').set('tween', { duration: 1000, transition: Fx.Transitions.Bounce.easeOut }).tween('height', '0px'); selected = 0; } } }); } ); }); Why could this be? Thanks so much!


  • Why can't I simply copy installed Perl modules to other machines?

    - by pistacchio
     Being very new to Perl but not to dynamic languages, I'm a bit surprised at how awkward managing modules is. Sure, cpan X does theoretically work, but I'm working on the same project from three different machines and OSs (at work, at home, and testing in an external environment). At work (Windows 7) I have problems using cpan because our firewall makes FTP unusable. At home (Mac OS X) it does work. In the external environment (Linux CentOS) it worked after hours, because I don't have root access and I had to configure cpan to operate as a non-root user. I've also tried on another server where I have access. While the previous external environment is a VPS, so I have shell access, this other one is cheap shared hosting where I have no way to install new modules other than the pre-installed ones. At the moment I still can't install Template under Windows. I've seen that as an alternative I could compile it, and I've also tried ActiveState's PPM, but the module doesn't exist there. Now, my puzzlement is about Perl being a dynamic language. I've had all these kinds of problems while working, for example, with C, where I had to compile all the libraries for every platform, but I thought that with Perl the approach would be very similar to Python's or PHP's, where in 90% of cases copying the module into a directory and importing it simply works. So, my question: if Perl's modules are written in Perl, why does the copy/paste approach not work? If some modules (or parts of them) have to be compiled, how can I see on CPAN whether a module is Perl-only or relies on compiled libraries? Isn't there a way to download the module (tar, zip...) and use cpan to deploy it? This would solve my problem under Windows.


  • Namespace constraint with generic class declaration

    - by SomeGuy
     Good afternoon people, I would like to know if (and if so, how) it is possible to define a namespace as a constraint parameter in a generic class declaration. What I have is this: namespace MyProject.Models.Entities <-- Contains my classes to be persisted in db namespace MyProject.Tests.BaseTest <-- Obvious, I think Now the declaration of my 'BaseTest' class looks like so: public class BaseTest<T> This BaseTest does little more (at the time of writing) than remove all entities that were added to the database during testing. So typically I will have a test class declared as: public class MyEntityRepositoryTest : BaseTest<MyEntity> What I would LIKE to do is something similar to the following: public class BaseTest<T> where T : <is of the MyProject.Models.Entities namespace> Now I am aware that it would be entirely possible to simply declare a 'BaseEntity' class from which all entities created within the MyProject.Models.Entities namespace will inherit: public class BaseTest<T> where T : MyBaseEntity but... I don't actually need to, or want to. Plus I am using an ORM, and mapping entities with inheritance, although possible, adds a layer of complexity that is not required. So, is it possible to constrain a generic class parameter to a namespace and not a specific type? Thank you for your time.


  • HTML5 audio object doesn't play on iPad (when called from a setTimeout)

    - by Dan Halliday
    I have a page with a hidden <audio> object which is being started and stopped using a custom button via javascript. (The reason being I want to customise the button, and that drawing an audio player seems to destroy rendering performance on iPad anyway). A simplified example (in coffeescript): // Works fine on all browsers constructor: (@_button, @_audio) -> @_button.on 'click', @_play // Bind button's click event with jQuery _play: (e) => @_audio[0].play() // Call play() on audio element The audio plays fine when triggered from a function bound to a click event, but I actually want an animation to complete before the file plays so I put .play() inside a setTimeout. However I just can't get this to work: // Will not play on iPad constructor: (@_button, @_audio) -> @_button.on 'click', @_play // Bind button's click event with jQuery _play: (e) => setTimeout (=> // Declare a 300ms timeout @_audio[0].play() // Call play() on audio element ), 300 I've checked that @_audio (this._audio) is in scope and that its play() method exists. Why doesn't this work on iPad?


  • The CLR has been unable to transition from COM context [...] for 60 seconds

    - by BlueRaja The Green Unicorn
    I am getting this error on code that used to work. I have not changed the code. Here is the full error: The CLR has been unable to transition from COM context 0x3322d98 to COM context 0x3322f08 for 60 seconds. The thread that owns the destination context/apartment is most likely either doing a non pumping wait or processing a very long running operation without pumping Windows messages. This situation generally has a negative performance impact and may even lead to the application becoming non responsive or memory usage accumulating continually over time. To avoid this problem, all single threaded apartment (STA) threads should use pumping wait primitives (such as CoWaitForMultipleHandles) and routinely pump messages during long running operations. And here is the code that caused it: var openFileDialog1 = new System.Windows.Forms.OpenFileDialog(); openFileDialog1.DefaultExt = "mdb"; openFileDialog1.Filter = "Management Database (manage.mdb)|manage.mdb"; //Stalls indefinitely on the following line, then gives the CLR error //one minute later. The dialog never opens. if(openFileDialog1.ShowDialog() == DialogResult.OK) { .... } Yes, I am sure the dialog is not open in the background, and no, I don't have any explicit COM code or unmanaged marshalling or multithreading. I have no idea why the OpenFileDialog won't open - any ideas?

