Search Results

Search found 9410 results on 377 pages for 'simulator difference'.

Page 309 of 377

  • Installing SQL Server 2005 on Windows 7 64-bit

    - by Mostafa
    Hi, I've been trying for three days to install SQL Server 2005 under Windows 7 64-bit on my machine. First let me tell you what I've done and what I've got so far.
    1. I installed Windows 7 64-bit on my computer.
    2. I tried to install SQL Server 2005 Developer Edition.
    2.1 On the "System Configuration Check" page I received two warnings, one for "IIS Feature Requirement" and another for "ASP.NET Version Registration Required".
    2.1.1 I installed Internet Information Services from the "Turn Windows features on or off" section of Control Panel.
    2.1.2 I enabled 32-bit support via Inetpub > AdminScripts > adsutil.vbs.
    2.2 At this stage there were no warnings in the System Configuration Check.
    3. So I installed SQL Server 2005 Developer Edition with all default settings.
    4. I installed SQL Server 2005 Service Pack 3 64-bit.
    Now when I run Management Studio there is no name in the "Server name" box. I typed my computer name, or ".", and got this error:
        "A network-related or instance-specific error occurred while establishing a connection to SQL Server. The server was not found or was not accessible. Verify that the instance name is correct and that SQL Server is configured to allow remote connections. (provider: Named Pipes Provider, error: 40 - Could not open a connection to SQL Server) (Microsoft SQL Server, Error: 2)"
    I googled this error, and some people said to follow this path: Start > SQL Server 2005 > Configuration Tools > SQL Server Surface Area Configuration > Surface Area Configuration for Services and Connections. But there I got this error:
        "No SQL Server 2005 components were found on the specified computer. Either no components are installed, or you are not an administrator on this computer. (SQLSAC)"
    I'm really tired because of this, and I don't know what's wrong. Some more information:
    - I have no additional software on my computer, like antivirus or a proxy.
    - I tried all the steps with Standard Edition as well, but no difference.
    - My user is an administrator.
    - I have been through all those steps, including re-installing Windows 7, more than five times.
    Please help me, I'm losing all my hair.

    Read the article

  • AbstractMethodError when invoking createArrayOf with PostgreSQL 8.4 JDBC4 and JBoss 5.1 GA

    - by Francesco
    Hi, when using this method:

        public List<Field> getFieldWithoutId(List<Integer> idSections) throws Exception {
            try {
                Connection conn = this.getConnection();
                Array arraySections = conn.createArrayOf("int4", idSections.toArray());
                this.log.info("Recupero field");
                List<Field> fields = this.getJdbcTemplate().query(
                        getFieldWithoutIdQuery,
                        new Object[] { arraySections },
                        ParameterizedBeanPropertyRowMapper.newInstance(Field.class));
                /* if (!conn.isClosed()) conn.close(); */
                releaseConnection(conn);
                return fields;
            } catch (Exception e) {
                e.printStackTrace();
                throw new Exception("Errore.");
            }
        }

    I get an exception at conn.createArrayOf("int4", idSections.toArray()). The exception is:

        javax.ejb.EJBException: Unexpected Error
        java.lang.AbstractMethodError: org.jboss.resource.adapter.jdbc.jdk5.WrappedConnectionJDK5.createArrayOf(Ljava/lang/String;[Ljava/lang/Object;)Ljava/sql/Array;

    postgresql-8.4-701.jdbc4.jar is in the jboss/server/all/lib dir. The application is Spring-based with EJB3. When working locally with the same setup everything is fine; this only happens in a preproduction environment. The only difference is that locally JBoss runs in the default configuration, while the other environment has two JBoss instances in the all configuration. I can't track down the cause of this error. Could someone help me, please?
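
    A workaround often suggested for this kind of wrapper error (a sketch under the assumption that the JBoss 5 WrappedConnection class is on the classpath; not verified against this exact setup) is to unwrap the pooled connection and call createArrayOf on the driver's own connection, which does implement the JDBC4 method:

        import java.sql.Array;
        import java.sql.Connection;
        import java.sql.SQLException;
        import org.jboss.resource.adapter.jdbc.WrappedConnection;

        static Array toSqlArray(Connection conn, Object[] values) throws SQLException {
            Connection target = conn;
            if (conn instanceof WrappedConnection) {
                // Bypass the pool's JDK5-era proxy and reach the native
                // PostgreSQL connection, which implements createArrayOf.
                target = ((WrappedConnection) conn).getUnderlyingConnection();
            }
            return target.createArrayOf("int4", values);
        }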

    Read the article

  • OpenGL-ES: Change (multiply) color when using color arrays?

    - by arberg
    Following the ideas in "OpenGL ES iPhone - drawing anti aliased lines", I am trying to draw stroked anti-aliased lines, and I have been successful so far. After a line is drawn by the finger, I want to fade the path, that is, I need to change the opacity (color) of the entire path. I have computed a large array of vertex positions, vertex colors, texture coordinates, and indices, and I hand these to OpenGL, but I would like to reduce the opacity of all the drawn triangles without having to change each of the color coordinates. Normally I would use glColor4f(r, g, b, a) before calling drawElements, but it has no effect because of the color array. I am working on Android, but I believe that shouldn't make a big difference, as long as it is OpenGL ES 1.1 (or 1.0). I have the following code:

        gl.glEnable(GL10.GL_BLEND);
        gl.glBlendFunc(GL10.GL_ONE, GL10.GL_ONE_MINUS_SRC_ALPHA);
        gl.glEnableClientState(GL10.GL_COLOR_ARRAY);
        gl.glShadeModel(GL10.GL_SMOOTH);
        gl.glEnableClientState(GL10.GL_VERTEX_ARRAY);
        gl.glEnableClientState(GL10.GL_TEXTURE_COORD_ARRAY);
        gl.glEnable(GL10.GL_TEXTURE_2D);
        // Should set rgb to greyish and alpha to half-transparent; the greyish is
        // just there to make the question more general, it's the alpha I'm interested in
        gl.glColor4f(.75f, .75f, .75f, 0.5f);
        gl.glVertexPointer(mVertexSize, GL10.GL_FLOAT, 0, mVertexBuffer);
        gl.glColorPointer(4, GL10.GL_FLOAT, 0, mColorBuffer);
        gl.glTexCoordPointer(2, GL10.GL_FLOAT, 0, mTexCoordBuffer);
        gl.glDrawElements(GL10.GL_TRIANGLES, indexCount, GL10.GL_UNSIGNED_SHORT,
                mIndexBuffer.position(startIndex));

    If I disable the color array with gl.glDisableClientState(GL10.GL_COLOR_ARRAY), then glColor4f works; if I enable the color array, it does nothing. Is there any way in OpenGL ES to change the coloring without changing all the color coordinates? In desktop OpenGL one might use a fragment shader, but OpenGL ES 1.1 does not have fragment shaders (not that I know how to use one).
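
    ES 1.1's fixed pipeline offers no per-draw constant multiplier once the color array is enabled, so one pragmatic fallback (a sketch, assuming mColorBuffer is a FloatBuffer of interleaved RGBA floats) is to scale the stored colors once per fade step rather than per frame. Because the draw call uses glBlendFunc(GL_ONE, GL_ONE_MINUS_SRC_ALPHA), i.e. premultiplied alpha, all four components get scaled, not just alpha:

        import java.nio.FloatBuffer;

        // Scale every float in the RGBA color buffer by 'fade' (0..1).
        // With premultiplied-alpha blending, scaling RGB and A together
        // dims the whole path uniformly.
        static void fadePath(FloatBuffer colorBuffer, int floatCount, float fade) {
            for (int i = 0; i < floatCount; i++) {
                colorBuffer.put(i, colorBuffer.get(i) * fade);
            }
        }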

    Read the article

  • Indy Write Buffering / Efficient TCP communication

    - by Smasher
    I know, I'm asking a lot of questions... but as a new Delphi developer I keep running into them :) This one deals with TCP communication using Indy 10. To make communication efficient, I encode a client operation request as a single byte (in most scenarios followed by other data bytes, of course, but in this case only one single byte). The problem is that

        var
          Bytes: TBytes;
        ...
        SetLength(Bytes, 1);
        Bytes[0] := OpCode;
        FConnection.IOHandler.Write(Bytes, 1);
        ErrorCode := Connection.IOHandler.ReadByte;

    does not send that byte immediately (at least the server's execute handler is not invoked). If I change the '1' to a '9', for example, everything works fine. I assumed that Indy buffers the outgoing bytes and tried to disable write buffering with FConnection.IOHandler.WriteBufferClose, but it did not help. How can I send a single byte and make sure that it is immediately sent? And, to add another little question here: what is the best way to send an integer using Indy? Unfortunately I can't find a function like WriteInteger in the IOHandler of TIdTCPServer... and WriteLn(IntToStr(SomeIntVal)) seems not very efficient to me. Does it make a difference whether I use multiple write commands in a row or pack things together in a byte array and send that once? Thanks for any answers! EDIT: I added a note that I'm using Indy 10, since there seem to be major changes concerning the read and write procedures.

    Read the article

  • MySQL performance

    - by kapil.israni
    Hi, I have a LAMP application with about 900k rows in MySQL, and I am having some performance issues. Background: apart from the LAMP stack, there's also a multi-threaded Java process that runs in its own JVM; together, LAMP and Java form the complete solution. The Java process is responsible for inserts/updates and a few selects as well. These inserts/updates are usually in bulk/batch, anywhere between 5 and 150 rows. The PHP front-end code only does SELECTs. The issue: the PHP SELECT queries become very slow when the Java process is running. When the Java process is stopped, SELECTs perform fine; the performance difference is huge. When the Java process is running, any action performed on the PHP front end results in 80% or more CPU usage for the mysqld process. Any help would be appreciated. MySQL is running with default parameters and settings. Software stack: Apache 2.2.x, MySQL 5.1.37-1ubuntu5, PHP 5.2.10, Java 1.6.0_15, OS Ubuntu 9.10 (karmic).
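
    One hedged guess worth checking first: with default settings, MySQL 5.1 tables are MyISAM, and MyISAM takes table-level write locks, so a steady stream of small insert/update statements from the Java process can stall every concurrent SELECT (converting the hot tables to InnoDB, which locks rows, often helps). On the Java side, grouping the 5-150 row writes into a single JDBC batch also shrinks the lock window; a sketch, with table and column names invented for illustration:

        import java.sql.Connection;
        import java.sql.PreparedStatement;
        import java.sql.SQLException;
        import java.util.List;

        // Placeholder schema; 'Reading' is a hypothetical DTO, not from the post.
        static void insertBatch(Connection connection, List<Reading> batch) throws SQLException {
            try (PreparedStatement ps = connection.prepareStatement(
                    "INSERT INTO readings (device_id, reading_value) VALUES (?, ?)")) {
                for (Reading r : batch) {
                    ps.setInt(1, r.deviceId);
                    ps.setDouble(2, r.value);
                    ps.addBatch();
                }
                ps.executeBatch();   // one round trip, one short lock window
            }
        }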

    Read the article

  • How to resolve deprecation warnings for OpenSSL::Cipher::Cipher#encrypt

    - by Olly
    I've just upgraded my Mac to Snow Leopard and got my Rails environment up and running. The only difference with my previous install, OS X aside, is that I'm now running ruby 1.8.7 (2008-08-11 patchlevel 72) [universal-darwin10.0] (the Snow Leopard default) rather than 1.8.6. I'm now seeing deprecation warnings relating to OpenSSL when I run my code:

        warning: arguments for OpenSSL::Cipher::Cipher#encrypt and OpenSSL::Cipher::Cipher#decrypt were deprecated; use OpenSSL::Cipher::Cipher#pkcs5_keyivgen to derive key and IV

    An example of my code which causes these warnings (it decodes an encrypted string) on line 4:

        1. def decrypt(data)
        2.   encryptor = OpenSSL::Cipher::Cipher.new('DES-EDE3-CBC')
        3.   key = "my key"
        4.   encryptor.decrypt(key)
        5.   text = encryptor.update(data)
        6.   text << encryptor.final
        7. end

    I'm struggling to understand how to resolve this, and Google isn't really helping. Should I try to downgrade to Ruby 1.8.6 (and if so, what's the best way of doing that)? Should I just hide the warnings (bury my head in the sand?!)? Or is there an easy fix I can apply in the code?

    Read the article

  • Can't connect to SQL Server 2008 - looks like Shared Memory problem

    - by Proposition Joe
    I am unable to connect to my local instance of SQL Server 2008 Express using SQL Server Management Studio. I believe the problem is related to a change I made to the connection protocols. Before the error occurred, I had Shared Memory enabled and Named Pipes and TCP/IP disabled. I then enabled both Named Pipes and TCP/IP, and this is when I started experiencing the problem. When I try to connect to the server with SSMS (with either my SQL Server sysadmin login or with Windows authentication), I get the following error message:

        A connection was successfully established with the server, but then an error occurred during the login process. (provider: Named Pipes Provider, error: 0 - No process is on the other end of the pipe.) (Microsoft SQL Server, Error: 233)

    Why is it returning a Named Pipes error? Why would it not just use Shared Memory, which has a higher priority in the list of connection protocols? It seems like it is not listening on Shared Memory for some reason. When I set Named Pipes to enabled and try to connect, I get the same error message. My Windows account does not have administrator privileges on my computer; perhaps this makes a difference in some way (as some of the discussion in this post about a "SuperSocketNetLib\Lpc" registry key seems to suggest).

    Read the article

  • Advice for Future Programmers?

    - by Nate Zaugg
    I have a buddy who is going to be giving some presentations to high-schoolers. Specifically he asked: what would you be looking for if they approached you about work? Perhaps you are in that age group right now: what do you want to know? Perhaps you are just a few years into the workforce: what do you wish someone had told you but never did? Perhaps you have children, relatives or friends in (or soon to be in) that age group: what are you worried they don't know about? I'm sure there are other perspectives and questions I'm not even thinking about, and I'd like to hear what you have to say about it. Here was my list:
    - Don't be afraid to try! Don't let the perception that something is too difficult stop you from experimenting. Curiosity may have killed the cat, but an un-inquisitive person is mostly useless.
    - Stolen from Einstein: you don't really understand something until you can explain it to your grandmother.
    - It's never enough to be smart; you also have to work well with others.
    - Before you can be really smart, you must learn how to learn.
    - There will always be someone smarter than you are -- become their buddy! Get to know great minds and learn all you can. Some knowledge can only be expressed this way.
    - Communication, communication, communication! Projects rarely fail for technical reasons, and the difference between good programmers and outstanding programmers is how well they communicate.
    - A good work ethic never goes unnoticed.
    - Know when to ask for help and when to figure something out for yourself.

    Read the article

  • Worse is better. Is there an example?

    - by J.F. Sebastian
    Is there a widely used algorithm that has time complexity worse than another known algorithm but is a better choice in all practical situations (worse complexity but better otherwise)? An acceptable answer might be of the form: "There are algorithms A and B with O(N**2) and O(N) time complexity respectively, but B has such a big constant that it has no advantage over A for inputs smaller than the number of atoms in the Universe." Example highlights from the answers:
    - The simplex algorithm (worst-case exponential time) vs. known polynomial-time algorithms for convex optimization problems.
    - A naive median-of-medians algorithm (worst-case O(N**2)) vs. a known O(N) algorithm.
    - Backtracking regex engines (worst-case exponential) vs. O(N) Thompson-NFA-based engines.
    All these examples exploit worst-case vs. average-case scenarios. Are there examples that do not rely on the difference between the worst case and the average case? Related: "The Rise of 'Worse is Better'" (for the purposes of this question, the "worse is better" phrase is used in a narrower, namely algorithmic time-complexity, sense than in the article). From Python's design philosophy: "The ABC group strived for perfection. For example, they used tree-based data structure algorithms that were proven to be optimal for asymptotically large collections (but were not so great for small collections)." This example would be the answer if there were no computers capable of storing such large collections (in other words, "large" is not large enough in this case). The Coppersmith–Winograd algorithm for square matrix multiplication is a good example: it is asymptotically the fastest known (2008), yet it is inferior in practice to asymptotically worse algorithms. From the Wikipedia article: "It is not used in practice because it only provides an advantage for matrices so large that they cannot be processed by modern hardware (Robinson 2005)." Any others?
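
    A small, everyday instance of the same trade-off, sketched in Java: insertion sort is O(N**2), yet for very small inputs its tiny constant factor beats O(N log N) sorts, which is why hybrid library sorts commonly switch to it below a cutoff of a few dozen elements:

        // O(N^2) insertion sort: low overhead and cache-friendly, so it wins
        // on tiny arrays despite the worse asymptotic bound.
        static void insertionSort(int[] a) {
            for (int i = 1; i < a.length; i++) {
                int key = a[i];
                int j = i - 1;
                while (j >= 0 && a[j] > key) {
                    a[j + 1] = a[j];   // shift larger elements right
                    j--;
                }
                a[j + 1] = key;
            }
        }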

    Read the article

  • How Do You Get the bufspec While Using Vimdiff Through Git

    - by Elizabeth Buckwalter
    I've read "Vimdiff" and "Viewing differences with Vimdiff", plus done various Google searches using things like "vimdiff multiple", "vimdiff git", "vimdiff commands", etc. When using do or diffg I get the error "More than two buffers in diff mode, don't know which one to use". When using diffg v:fname_in I get "No matching buffer for v:fname_in". From the vimdiff documentation:

        :[range]diffg[et] [bufspec]
            Modify the current buffer to undo difference with another buffer. If
            [bufspec] is given, that buffer is used. If [bufspec] refers to the
            current buffer then nothing happens. Otherwise this only works if
            there is one other buffer in diff mode.

    and more:

        When 'diffexpr' is not empty, Vim evaluates it to obtain a diff file in
        the format mentioned. These variables are set to the file names used:
            v:fname_in    original file
            v:fname_new   new version of the same file
            v:fname_out   resulting diff file

    So I need to get the name of the bufspec, but the default variables (fname_in, fname_new, and fname_out) aren't set. I ran git mergetool on a Linux box through a terminal. [Edit] A partial solution that bred more questions: I used the "filename" at the bottom of the buffer. It's only half an answer, because occasionally I get a "file does not exist" error. I believe it's consistently the remote version of the file that "does not exist", and I suspect this has something to do with git and indexing. How do you get the bufspec value consistently while using vimdiff through git mergetool?

    Read the article

  • Is there a better way to create a generic convert string to enum method or enum extension?

    - by Kelsey
    I have the following methods in an enum helper class (I have simplified it for the purpose of the question):

        static class EnumHelper
        {
            public enum EnumType1 : int { Unknown = 0, Yes = 1, No = 2 }
            public enum EnumType2 : int { Unknown = 0, Dog = 1, Cat = 2, Bird = 3 }
            public enum EnumType3 : int { Unknown = 0, iPhone = 1, Andriod = 2, WindowsPhone7 = 3, Palm = 4 }

            public static EnumType1 ConvertToEnumType1(string value)
            {
                return (string.IsNullOrEmpty(value))
                    ? EnumType1.Unknown
                    : (EnumType1)(Enum.Parse(typeof(EnumType1), value, true));
            }

            public static EnumType2 ConvertToEnumType2(string value)
            {
                return (string.IsNullOrEmpty(value))
                    ? EnumType2.Unknown
                    : (EnumType2)(Enum.Parse(typeof(EnumType2), value, true));
            }

            public static EnumType3 ConvertToEnumType3(string value)
            {
                return (string.IsNullOrEmpty(value))
                    ? EnumType3.Unknown
                    : (EnumType3)(Enum.Parse(typeof(EnumType3), value, true));
            }
        }

    So the question is: can I trim this down to an enum extension method, or maybe some type of single method that can handle any enum type? I have found some examples that do this with basic enums, but the difference in my case is that all the enums have an Unknown item that I need returned if the string is null or empty (if no match is found, I want it to fail). I'm looking for something like the following:

        EnumType1 value = EnumType1.Convert("Yes");
        // or
        EnumType1 value = EnumHelper.Convert(EnumType1, "Yes");

    One function to do it all... how to handle the Unknown element is the part I am hung up on.
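
    For comparison, here is the usual shape of the generic answer, sketched in Java rather than C# (Unknown fallback on null/empty input, case-insensitive matching, and a loud failure when nothing matches):

        // Generic over any enum type; the 'unknown' argument supplies the
        // per-enum fallback that a one-size-fits-all constraint cannot.
        static <T extends Enum<T>> T convert(Class<T> type, String value, T unknown) {
            if (value == null || value.isEmpty()) {
                return unknown;
            }
            for (T constant : type.getEnumConstants()) {
                if (constant.name().equalsIgnoreCase(value)) {
                    return constant;
                }
            }
            throw new IllegalArgumentException("No " + type.getSimpleName() + " named " + value);
        }

        // Usage: EnumType1 v = convert(EnumType1.class, "Yes", EnumType1.Unknown);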

    Read the article

  • A question about the .NET Rfc2898DeriveBytes class

    - by IbrarMumtaz
    What is the difference with this class, as opposed to just using Encoding.ASCII.GetBytes(string)? I have had relative success with either approach; the former is more long-winded, whereas the latter is simple and to the point. Both seem to let you do the same thing eventually, but I am struggling to see the point of using the former over the latter. The basic concept I have grasped is that you can convert string passwords into byte arrays to be used by, e.g., a symmetric encryption class such as AesManaged. With the RFC class you get to use salt values and a password when creating your object. I assume it's more secure, but still, that's an uneducated guess at best! Also, it allows you to return byte arrays of a certain size, or something like that. Here are a few examples to show where I am coming from:

        byte[] myPassinBytes = Encoding.ASCII.GetBytes("some password");

    or

        string password = "P@%5w0r]>";
        byte[] saltArray = Encoding.ASCII.GetBytes("this is my salt");
        Rfc2898DeriveBytes rfcKey = new Rfc2898DeriveBytes(password, saltArray);

    The rfcKey object can now be used to set up the .Key or .IV properties on a symmetric encryption algorithm class, i.e.:

        RijndaelManaged rj = new RijndaelManaged();
        rj.Key = rfcKey.GetBytes(rj.KeySize / 8);
        rj.IV = rfcKey.GetBytes(rj.BlockSize / 8);

    rj should be ready to go! The confusing part: rather than using the rfcKey object, can I not just use my myPassinBytes array to help set up my rj object? I have tried doing this in VS 2008 and the immediate answer is NO! But do you have a better-educated answer as to why the RFC class is used over the other alternative I mentioned above, and why?
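
    The one-line answer is that Rfc2898DeriveBytes implements PBKDF2, a salted, deliberately slow key-derivation function, whereas Encoding.ASCII.GetBytes is just a re-encoding of the characters: no salt, no work factor, and key material limited to the password's literal bytes. The same contrast in Java's standard library, as a neutral sketch:

        import javax.crypto.SecretKeyFactory;
        import javax.crypto.spec.PBEKeySpec;

        // PBKDF2 (the algorithm behind Rfc2898DeriveBytes): salt plus many
        // iterations turn a short password into key material that is slow
        // to brute-force; 'bits' picks the output size (e.g. 256 for AES).
        static byte[] deriveKey(char[] password, byte[] salt, int bits) throws Exception {
            PBEKeySpec spec = new PBEKeySpec(password, salt, 10000, bits);
            return SecretKeyFactory.getInstance("PBKDF2WithHmacSHA1")
                                   .generateSecret(spec)
                                   .getEncoded();
        }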

    Read the article

  • How to prevent LaTeX memory overflow

    - by drasto
    I've got a LaTeX macro that makes small pictures. In each picture I need to draw an area whose borders are quadratic Bézier curves, and the area is to be filled. I did not know how to do that, so currently I am "filling" the area by drawing plenty of Bézier curves inside it... This slows down typesetting, and when the macro is used many times (so TeX is drawing really a lot of quadratic Bézier curves) it produces the following error:

        ! TeX capacity exceeded, sorry [main memory size=3000000].

    How can I prevent this error (by freeing memory after the macro, or the like)? Or, even better, how do I fill the area bounded by two quadratic Bézier curves? Code that produces the error:

        \usepackage{forloop}
        \usepackage{picture}
        \usepackage{eepic}
        ...
        \linethickness{\lineThickness\unitlength}%
        \forloop[\lineThickness]{cy}{\cymin}{\value{cy} < \cymax}{%
          \qbezier(\ax, \ay)(\cx, \value{cy})(\bx, \by)%
        }%

    Here are some example values for the variables:

        \setlength{\unitlength}{0.01pt}
        \lineThickness=20
        % cy is just a counter - the initial value is not important
        \cymin=450
        \cymax=900
        % of the following, only the difference between \ax and \bx matters
        \ax=0
        \ay=0
        \bx=550
        \by=0

    Note: to reproduce the error this code has to execute approximately 150 times (could be more, depending on your LaTeX memory settings). Thanks a lot for any help.
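
    On the second question: the usual escape is to let PGF/TikZ fill the region as a single path instead of simulating a fill with hundreds of \qbezier strokes, which also removes the memory pressure. TikZ paths use cubic Bézier segments; a quadratic curve with control point Q is the cubic whose two controls lie 2/3 of the way from each endpoint toward Q. A sketch with made-up coordinates (endpoints (0,0) and (5.5,0), upper control (2.75,9), lower control (2.75,4.5)):

        \usepackage{tikz}
        ...
        \begin{tikzpicture}
          % upper quadratic, control (2.75,9):   cubic controls (1.833,6) and (3.667,6)
          % lower quadratic, control (2.75,4.5): cubic controls (3.667,3) and (1.833,3)
          \fill[gray]
            (0,0) .. controls (1.833,6) and (3.667,6) .. (5.5,0)
                  .. controls (3.667,3) and (1.833,3) .. (0,0);
        \end{tikzpicture}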

    Read the article

  • Using the Oracle hint "FIRST_ROWS" to improve Oracle database performance

    - by bobetko
    I have a statement that runs on an Oracle database server. The statement has about 5 joins, and there is nothing unusual in it. It looks pretty much like this:

        SELECT field1, field2, field3, ...
        FROM table1, table2, table3, table4, table5
        WHERE table1.id = table2.id
          AND table2.id = table3.id
          AND ...
          table5.userid = 1

    The problem (and what is interesting) is that the statement for userid = 1 takes 1 second to return 590 records, while the statement for userid = 2 takes around 30 seconds to return 70 records. I don't understand why the difference is so big. It seems that a different execution plan is chosen for the statement with userid = 1 than for userid = 2. After I added the Oracle hint FIRST_ROWS, performance became significantly better: both statements (for ids 1 and 2) return in under 1 second.

        SELECT /*+ FIRST_ROWS */ field1, field2, field3, ...
        FROM table1, table2, table3, table4, table5
        WHERE table1.id = table2.id
          AND table2.id = table3.id
          AND ...
          table5.userid = 1

    Questions: 1) What are possible reasons for the bad performance with userid = 2 (when the hint is not used)? 2) Why would the execution plan differ from one statement to the other (when the hint is not used)? 3) Is there anything I should be careful about when deciding to add this hint to my queries? Thanks

    Read the article

  • Accessing Current URL using Prototype

    - by Jason Nerer
    Hi folks, following Ryan Bates's screencast #114 I'm trying to generate endless pages using Prototype. Unlike Ryan's showcase, the URL called via the AJAX request is built dynamically, because I do not always call the same URL when the user reaches the end of the page. So the JS running in the background looks like this and uses document.location.href instead of a fixed URL:

        var currentPage = 1;

        function checkScroll() {
            if (nearBottomOfPage()) {
                currentPage++;
                new Ajax.Request(document.location.href + '?page=' + currentPage,
                    {asynchronous: true, evalScripts: true, method: 'get'});
            } else {
                setTimeout("checkScroll()", 250);
            }
        }

        function nearBottomOfPage() {
            return scrollDistanceFromBottom() < 10;
        }

        function scrollDistanceFromBottom(argument) {
            return pageHeight() - (window.pageYOffset + self.innerHeight);
        }

        function pageHeight() {
            return Math.max(document.body.scrollHeight, document.body.offsetHeight);
        }

        document.observe('dom:loaded', checkScroll);

    The question is: the code seems to work in Safari but fails in FF 3.6. It seems that FF calculates scrollHeight or offsetHeight differently. How can I deal with that? Thanks in advance. Jason

    Read the article

  • Secure hash and salt for PHP passwords

    - by luiscubal
    It is currently said that MD5 is partially unsafe. Taking this into consideration, I'd like to know which mechanism to use for password protection. The question "Is 'double hashing' a password less secure than just hashing it once?" suggests that hashing multiple times may be a good idea, and "How to implement password protection for individual files?" suggests using salt. I'm using PHP. I want a safe and fast password encryption system. Hashing a password a million times may be safer, but also slower. How do I achieve a good balance between speed and safety? Also, I'd prefer the result to have a constant number of characters. Requirements:
    - The hashing mechanism must be available in PHP.
    - It must be safe.
    - It can use salt (in this case, are all salts equally good? Is there any way to generate good salts?)
    Also, should I store two fields in the database (one using MD5 and another one using SHA, for example)? Would that make it safer or less safe? In case I wasn't clear enough: I want to know which hashing function(s) to use and how to pick a good salt, in order to have a safe and fast password protection mechanism. EDIT: The website shouldn't contain anything too sensitive, but still I want it to be secure. EDIT2: Thank you all for your replies; I'm using hash("sha256", $salt . ":" . $password . ":" . $id). Questions that didn't help:
    - What's the difference between SHA and MD5 in PHP
    - Simple Password Encryption
    - Secure methods of storing keys, passwords for asp.net
    - How would you implement salted passwords in Tomcat 5.5
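
    On "is there any way to generate good salts": a good salt just needs to be unique per user and unpredictable, so read it from a cryptographic RNG rather than deriving it from the username or a site-wide constant, and store it next to the hash. The idea sketched in Java (PHP exposes equivalent CSPRNG sources):

        import java.security.SecureRandom;

        // 16 random bytes per user: uniqueness defeats precomputed rainbow
        // tables, unpredictability defeats targeted precomputation.
        static byte[] newSalt() {
            byte[] salt = new byte[16];
            new SecureRandom().nextBytes(salt);
            return salt;
        }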

    Read the article

  • Mathematica regular expressions on Unicode strings

    - by dreeves
    This was a fascinating debugging experience. Can you spot the difference between the following two lines? StringReplace["–", RegularExpression@"[\\s\\S]" -> "abc"] StringReplace["-", RegularExpression@"[\\s\\S]" -> "abc"] They do very different things when you evaluate them. It turns out it's because the string being replaced in the first line consists of a unicode en dash, as opposed to a plain old ascii dash in the second line. In the case of the unicode string, the regular expression doesn't match. I meant the regex "[\s\S]" to mean "match any character (including newline)" but Mathematica apparently treats it as "match any ascii character". How can I fix the regular expression so the first line above evaluates the same as the second? Alternatively, is there an asciify filter I can apply to the strings first? PS: The Mathematica documentation says that its string pattern matching is built on top of the Perl-Compatible Regular Expressions library (http://pcre.org) so the problem I'm having may not be specific to Mathematica.

    Read the article

  • Can I use a plaintext diff algorithm for tracking XML changes?

    - by rinogo
    Hi all! Interesting question for you here. I'm working in Flex/AS3 on (for simplicity) an XML editor. I need to provide undo/redo functionality. Of course, one solution is to store the entire source text with each edit. However, to conserve memory, I'd like to store the diffs instead (these diffs will also be used to transmit updates to the server for auto-saving). My question is: can I use a plaintext diff algorithm for tracking these XML changes? My research on the internet indicates that I cannot, but I'm obviously missing something. Plaintext diff purportedly provides:

        diff(text, text')  -> diffs
        patch(text, diffs) -> text'

    XML is simply text, so why can't I just use diff() and patch() to transform the text reliably? For example: let's say that I'm a poet. When I write poetry, I use lots of funky punctuation... you know, like <, /, and >. (You might see where I'm going with this...) If I'm writing my poetry in an application that uses diffs to provide undo/redo functionality, does my poetry become garbled when I undo/redo my edits? It's just text! Why does it make a difference to the algorithm? I obviously don't get something here... Thanks for explaining! :) -Rich

    Read the article

  • How does 'lazy' work?

    - by Matt Fenwick
    What is the difference between these two functions? I see that lazy is intended to be lazy, but I don't understand how that is accomplished.

        -- | Identity function.
        id :: a -> a
        id x = x

        -- | The call '(lazy e)' means the same as 'e', but 'lazy' has a
        -- magical strictness property: it is lazy in its first argument,
        -- even though its semantics is strict.
        lazy :: a -> a
        lazy x = x

        -- Implementation note: its strictness and unfolding are over-ridden
        -- by the definition in MkId.lhs; in both cases to nothing at all.
        -- That way, 'lazy' does not get inlined, and the strictness analyser
        -- sees it as lazy. Then the worker/wrapper phase inlines it.
        -- Result: happiness

    Tracking down the note in MkId.lhs (hopefully this is the right note and version, sorry if it's not):

        Note [lazyId magic]
        ~~~~~~~~~~~~~~~~~~~
        lazy :: forall a?. a? -> a? (i.e. works for unboxed types too)

        Used to lazify pseq: pseq a b = a `seq` lazy b

        Also, no strictness: by being a built-in Id, all the info about lazyId
        comes from here, not from GHC.Base.hi. This is important, because the
        strictness analyser will spot it as strict!

        Also no unfolding in lazyId: it gets "inlined" by a HACK in CorePrep.
        It's very important to do this inlining after unfoldings are exposed
        in the interface file. Otherwise, the unfolding for (say) pseq in the
        interface file will not mention 'lazy', so if we inline 'pseq' we'll
        totally miss the very thing that 'lazy' was there for in the first
        place. See Trac #3259 for a real world example.

        lazyId is defined in GHC.Base, so we don't have to inline it. If it
        appears un-applied, we'll end up just calling it.

    I don't understand that, because it refers to lazyId instead of lazy. How does lazy work?

    Read the article

  • How to setup matlabpool for multiple processors?

    - by JohnIdol
    I just set up a High-CPU Extra Large EC2 instance to throw at my genetic algorithms problem, hoping to speed things up. This instance has 8 Intel Xeon processors (around 2.4 GHz each) and 7 GB of RAM. On my own machine I have an Intel Core Duo, and MATLAB works with my two cores just fine by running:

        matlabpool open 2

    On the EC2 instance, though, MATLAB is only capable of detecting 1 of the 8 processors, and if I try running:

        matlabpool open 8

    I get an error saying that the ClusterSize is 1 since there's only 1 core on my CPU. True, there is only 1 core on each CPU, but I have 8 CPUs on the given EC2 instance! So the difference between my machine and the EC2 instance is that locally I have my 2 cores on a single processor, while the EC2 instance has 8 distinct processors. My question is: how do I get MATLAB to work with those 8 processors? I found this paper, but it seems to deal with setting up MATLAB across multiple EC2 instances (not with multiple processors on the same instance, EC2 or not), which is not my problem. Any help appreciated!

    Read the article

  • Add KO "data-bind" attribute on $(document).ready

    - by M.Babcock
    Preface: I've rarely ever been a JS developer, and this is my first attempt at doing something with Knockout.js. The question to follow likely illustrates both points. Background: I have a fairly complex MVC3 application that I'm trying to get to work with KO (v2.0.0.0). My MVC app is designed to control generically which fields appear in the view (and how they are added to the view). It makes use of partial views to decide what to draw in the view based on the user's permissions (if the user is in group A then show control A; if the user is in group B then show control B; or possibly, if the user is in group A, don't include the control at all). Also, my model is very flat, so I'm not sure the built-in ability to apply my ViewModel to a specific portion of the view will help. My solution is to provide an action in my controller that responds with a JSON object containing the jQuery selector and the content to assign to the "data-bind" attribute, and to bind the ViewModel to the View in the $(document).ready event using the values provided. Failed proof of concept: my first attempt at proving that this works doesn't actually seem to work, and by "doesn't work" I mean it just doesn't bind the values at all (as can be seen in this jsfiddle). I've tried it with applyBindings inside the ready event and outside it, but it doesn't seem to make any difference. Question: what am I doing wrong? Or is this just not something that can work with KO (though I've seen at least one example online doing the same thing, and it supposedly works)? Like I said in the preface, I've only ever pretended to be a JS developer (though I've generally gotten it to work in the past), so I'm at a loss as to where to start figuring out what I'm doing wrong. Hopefully this isn't a real noob question.

    Read the article

  • Adding an AJAX call to a function triggered the popup blocker

    - by jerrygarciuh
    Hi folks, I have a client who wants to open variously sized images in a centered popup. I tried to get them to use FancyBox, but they don't want interstitial presentation, so... I initially opened a generic popup which resized and centered itself onload based on the image size, but they don't like the shift, so I added a PHP script to echo the sizes and used jQuery to fetch the size info to feed into the popup call. But it appears the delay this causes sets off all popup blockers. Here is the JS:

        $("#portfolioBigPic").click(function () {
            var src = $("#portfolioBigPic").attr('src');
            var ar = src.split('/');
            var fname = ar.pop();
            fname = '/g/portfolio/clients/big/' + fname;
            $.get("imgsize.php", { i: fname }, function (data) {
                var dim = data.split(",");
                popit(fname, dim[0], dim[1]);
            });
        });

        function popit(img, w, h) {
            var features = 'width=' + w + ',height=' + h +
                ', toolbar=0, location=0, directories=0, status=0, menubar=0, scrollbars=0, resizable=1,';
            var left = (screen.width / 2) - (w / 2);
            var top = 0;
            features += 'top=' + top + ',left=' + left;
            bigpic = window.open('portfolioBigPic.php?img=' + img, 'bigpic', features);
            bigpic.focus();
        }

    The only difference between dodging the blockers and failing is that I added the AJAX .get and use it to specify w and h. Any thoughts on how to avoid this? Maybe I should use PHP to get the widths and heights of all the big pics and write a JS array of them when the page loads? Am I right that the delay caused by fetching the data is tripping the blockers? Thoughts? Any advice much appreciated. JG

    Read the article

  • Relation between HTTP Keep Alive duration and TCP timeout duration

    - by Suresh Kumar
    I am trying to understand the relation between TCP/IP and HTTP timeout values. Are these two timeout values different or the same? Most web servers allow users to set the HTTP keep-alive timeout value through some configuration. How is this value used by the web server? Is it just set on the underlying TCP/IP socket, i.e. are the HTTP keep-alive timeout and the TCP keep-alive timeout the same, or are they treated differently? My understanding (maybe incorrect) is: the web server uses the default timeout on the underlying TCP socket (i.e. indefinite), regardless of the configured HTTP keep-alive timeout, and creates a worker thread that counts down the specified HTTP timeout interval. When the worker thread hits zero, it closes the connection. EDIT: My question is about the relation (or difference) between the two timeout durations, i.e. what will happen when the HTTP keep-alive timeout duration differs from the timeout on the socket (SO_TIMEOUT) which the web server uses? Should I even worry about whether these two are the same or not?
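
    One concrete way to see the difference: an HTTP keep-alive timeout is typically enforced as an idle read timeout (or an equivalent countdown) on the accepted socket, while TCP keepalive is a separate OS-level probe mechanism with a default idle period of roughly two hours, meant to detect dead peers rather than idle clients. A sketch in Java:

        import java.io.IOException;
        import java.net.ServerSocket;
        import java.net.Socket;
        import java.net.SocketTimeoutException;

        static void serveOneConnection() throws IOException {
            try (ServerSocket server = new ServerSocket(8080)) {
                Socket client = server.accept();
                client.setSoTimeout(15000);   // HTTP-style: give up if idle for 15 s
                client.setKeepAlive(true);    // TCP-style: OS probes a long-dead peer
                try {
                    int firstByte = client.getInputStream().read();  // blocks <= 15 s
                    // ... parse the request, write a response, loop for keep-alive ...
                } catch (SocketTimeoutException idle) {
                    client.close();           // keep-alive window expired
                }
            }
        }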

    Read the article

  • ASPX FormsAuthentication.RedirectFromLoginPage function is not working anymore

    - by Mike Webb
    Here is my issue. I have an ASPX web site and I have code in there to redirect from the login page with the call to "FormsAuthentication.RedirectFromLoginPage(username, false);" This sends the user from the root website folder to 'website/Admin/'. I have a 'default.aspx' page in 'website/Admin/' and the call to redirect works on a previous version of the website we have running currently, but the one that I am updating on a separate test server is not working. It gives me the error "Directory Listing Denied. This Virtual Directory does not allow contents to be listed." I have this in the config file: <authorization> <allow users="*" /> </authorization> under the "authentication" option and... <location path="Admin"> <system.web> <authorization> <deny users="?" /> </authorization> </system.web> </location> for the location of Admin. Also, there is no difference in the code between the web.config, Login.aspx, or the default.aspx files on the current server and the one on the test server, so I am confused as to why the redirect will not work on both. It even works in the Visual Studio server environment, for which the code is also identical. Any suggestions and help is appreciated.

    Read the article

  • Java AtomicInteger: what are the differences between compareAndSet and weakCompareAndSet?

    - by WizardOfOdds
    (Note that this question is not about CAS; it's about the "May fail spuriously" Javadoc.) The only difference in the Javadoc between these two methods of the AtomicInteger class is that weakCompareAndSet carries the comment "May fail spuriously". Now, unless my eyes are cheated by some spell, both methods look to be doing exactly the same thing:

        public final boolean compareAndSet(int expect, int update) {
            return unsafe.compareAndSwapInt(this, valueOffset, expect, update);
        }

        /* ...
         * May fail spuriously.
         */
        public final boolean weakCompareAndSet(int expect, int update) {
            return unsafe.compareAndSwapInt(this, valueOffset, expect, update);
        }

    So I realize that "may" doesn't mean "must", but then why don't we all start adding this to our codebase:

        public void doIt() {
            a();
        }

        /**
         * May fail spuriously
         */
        public void weakDoIt() {
            a();
        }

    I'm really confused by weakCompareAndSet(), which appears to do the same thing as compareAndSet() yet "may fail spuriously" while the other can't. Apparently the "weak" and the "spurious fail" are somehow related to "happens-before" ordering, but I'm still very confused by these two AtomicInteger (and AtomicLong, etc.) methods, because apparently they call exactly the same unsafe.compareAndSwapInt method. I'm particularly confused because AtomicInteger was introduced in Java 1.5, after the Java Memory Model change (so it is obviously not something that could "fail spuriously" in 1.4 but whose behavior changed to "shall not fail spuriously" in 1.5).
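
    The practical reading: both methods are meant to be called in a retry loop, and a loop absorbs a spurious failure as one extra iteration, so on hardware whose native CAS (e.g. LL/SC architectures) can fail spuriously, the weak form is allowed to pass that failure through instead of retrying and fencing internally; on x86 the two bodies can legitimately be identical. A typical caller, sketched:

        import java.util.concurrent.atomic.AtomicInteger;

        static int increment(AtomicInteger counter) {
            while (true) {
                int current = counter.get();
                // A spurious failure just costs one more trip around the loop,
                // which is why the Javadoc can permit it without breaking callers.
                if (counter.weakCompareAndSet(current, current + 1)) {
                    return current + 1;
                }
            }
        }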

    Read the article
