Search Results

Search found 9410 results on 377 pages for 'simulator difference'.

Page 265/377

  • What is the purpose of OCaml's Lazy.lazy_from_val?

    - by Ricardo
    The doc of Lazy.lazy_from_val states that this function is for special cases:

        val lazy_from_val : 'a -> 'a t
        lazy_from_val v returns an already-forced suspension of v.
        This is for special purposes only and should not be confused with lazy (v).

    Which cases are they talking about? If I create a pair of suspended computations from a value like:

        let l1 = lazy 123
        let l2 = Lazy.lazy_from_val 123

    what is the difference between the two? Lazy.lazy_is_val l1 and Lazy.lazy_is_val l2 both return true, saying that the value is already forced!
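    For intuition, here is a rough analogy in C++ rather than OCaml (a hypothetical Lazy class, not OCaml's runtime representation): lazy e normally stores a thunk to run on the first force, while Lazy.lazy_from_val v stores the value itself, already forced. That l1 also reports as forced is likely the OCaml compiler shortcutting lazy on a simple constant, so the suspension is optimized away.

        #include <functional>
        #include <iostream>
        #include <optional>

        // Minimal sketch of a lazy cell: either a pending thunk or a forced value.
        template <typename T>
        class Lazy {
            std::optional<T> value;      // set once the cell has been forced
            std::function<T()> thunk;    // pending computation, if any
        public:
            explicit Lazy(std::function<T()> f) : thunk(std::move(f)) {}  // like `lazy e`
            static Lazy from_val(T v) {                 // like Lazy.lazy_from_val
                Lazy l([] { return T{}; });
                l.value = std::move(v);
                return l;
            }
            bool is_val() const { return value.has_value(); }  // like Lazy.lazy_is_val
            T& force() {                                       // like Lazy.force
                if (!value) value = thunk();
                return *value;
            }
        };

        int main() {
            Lazy<int> l1([] { return 123; });         // genuine suspension
            Lazy<int> l2 = Lazy<int>::from_val(123);  // already forced
            std::cout << l1.is_val() << ' ' << l2.is_val() << '\n';  // prints: 0 1
            std::cout << l1.force() << ' ' << l2.force() << '\n';    // prints: 123 123
        }

    In the sketch, l1.is_val() stays false until force() runs, which is the behaviour the distinction is really about.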

    Read the article

  • How can I add a field with an array value to my Perl object?

    - by superstar
    What's the difference between these two constructors in Perl?

    1)

        sub new {
            my $class = shift;
            my $self = {};
            $self->{firstName} = undef;
            $self->{lastName}  = undef;
            $self->{PEERS}     = [];
            bless ($self, $class);
            return $self;
        }

    2)

        sub new {
            my $class = shift;
            my $self = {
                _firstName => shift,
                _lastName  => shift,
                _ssn       => shift,
            };
            bless $self, $class;
            return $self;
        }

    I have been using the second one so far, but now I need to implement the PEERS array in it. How do I do that with the second constructor, and how can I use get and set methods on those array attributes?

    Read the article

  • How do I compile for Windows XP under Windows 7 / Visual Studio 2008?

    - by Jon Cage
    I'm running Windows 7 and Visual Studio 2008 Pro and trying to get my application to work on Windows XP SP3. It's a really minimal command-line program, so it shouldn't have any ridiculous dependencies:

        // XPBuild.cpp : Defines the entry point for the console application.
        #include "stdafx.h"

        int _tmain(int argc, _TCHAR* argv[])
        {
            printf("Hello world");
            getchar();
            return 0;
        }

    I read somewhere that defining several constants such as WINVER should allow me to compile for other platforms. I've tried adding the following to my /D compiler options:

        ;WINVER=0x0501;_WIN32_WINNT 0x0501;NTDDI_VERSION=NTDDI_WINXP

    But that made no difference. When I run it on my Windows XP machine (actually running in a VirtualBox) I get the following error:

        This application has failed to start because the application configuration is incorrect. Reinstalling the application may fix this problem.

    So what have I missed? Is there something else required to run MSVC-compiled programs, a different compiler option, or something else?
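    Two hedged notes that may help. First, in the /D list as quoted, _WIN32_WINNT is missing its '=' (it reads _WIN32_WINNT 0x0501), so that macro may never have been set. The version macros can also simply be defined in code before any Windows headers, for example at the top of stdafx.h; a minimal sketch (macro values as in the Windows SDK's sdkddkver.h, worth double-checking there):

        // stdafx.h : a sketch; define the target-version macros *before*
        // any Windows headers are included, so they actually take effect.
        #define WINVER        0x0501      // Windows XP
        #define _WIN32_WINNT  0x0501      // as a /D option this needs the '='
        #define NTDDI_VERSION 0x05010300  // NTDDI_WINXPSP3

        #include <stdio.h>
        #include <tchar.h>

    Second, the "application configuration is incorrect" message on XP is classically a missing Visual C++ 2008 runtime rather than a WINVER problem: installing the VC++ 2008 Redistributable on the XP machine, or linking the CRT statically with /MT, typically makes it go away.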

    Read the article

  • Mozilla Firefox border rendering

    - by zA
    Hi, I've come across a strange thing in Firefox which has become a problem for me. It seems that Firefox renders borders thinner than other browsers do. For example, I have just a simple empty div element, and nothing else on the webpage, with its border set to a width of 3px. In all other browsers, such as IE, Opera, Chrome and Safari, the width looks the same and is in fact 3px wide. But in Firefox I noticed that the border seemed thinner, so I checked the border width with Firebug, under the Computed tab's Box model. And yes, as I suspected, the rendered border in Firefox is thinner: the border width that Firefox rendered is actually 2.2px, not the expected 3px. This small difference completely messes up my design. Has anyone else noticed this? Does anyone have a solution? Thanks in advance!

    Read the article

  • Why is distributed source control considered harder?

    - by Will Robertson
    It seems rather common (around here, at least) for people to recommend SVN to newcomers to source control because it's "easier" than one of the distributed options. As a very casual user of SVN before switching to Git for many of my projects, I found this not to be the case at all. It is conceptually easier to set up a DVCS repository with git init (or whichever), without the problem of having to set up an external repository as in the case of SVN. And the base functionality of SVN, Git, Mercurial and Bazaar all uses essentially identical commands to commit, view diffs, and so on, which is all a newcomer is really going to be doing. The small difference in the way Git requires changes to be explicitly added before they're committed, as opposed to SVN's "commit everything" policy, is conceptually simple and, unless I'm mistaken, not even an issue when using Mercurial or Bazaar. So why is SVN considered easier? I would argue that this is simply not true.

    Read the article

  • Can I safely store a UInt32 in an NSUInteger?

    - by mystify
    In the header, it is defined like this:

        #if __LP64__ || (TARGET_OS_EMBEDDED && !TARGET_OS_IPHONE) || TARGET_OS_WIN32 || NS_BUILD_32_LIKE_64
        typedef long NSInteger;
        typedef unsigned long NSUInteger;
        #else
        typedef int NSInteger;
        typedef unsigned int NSUInteger;
        #endif

    So does a UInt32 fit without problems into an NSUInteger (an unsigned int)? What's the difference between UInt32 and unsigned int? And I assume that an unsigned long is bigger than an unsigned int?
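    A small compile-time check makes the size relationships concrete. A sketch, using uint32_t as a stand-in for Apple's UInt32 typedef, with NSUInteger modeled as in the header quoted above:

        #include <cstdint>

        // NSUInteger modeled exactly as in the quoted header.
        #if defined(__LP64__)
        typedef unsigned long NSUInteger;   // 64 bits on LP64 platforms
        #else
        typedef unsigned int NSUInteger;    // 32 bits otherwise
        #endif

        static_assert(sizeof(NSUInteger) >= sizeof(std::uint32_t),
                      "a UInt32 always fits in an NSUInteger");

        int main() { return 0; }

    UInt32 is fixed at exactly 32 bits, while the standards only guarantee that unsigned int is at least 16 bits, unsigned long at least 32 bits, and long >= int; on 32-bit platforms both are typically 32 bits, and on LP64 unsigned long widens to 64. So a UInt32 always fits in an NSUInteger, though round-tripping the other way can truncate on 64-bit targets.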

    Read the article

  • Determining idle network transfer bandwidth

    - by rwmnau
    I'm building an application that will move around some potentially large files, but I want to do it without disturbing the user's network connection by flooding it. I know that Windows BITS has this kind of functionality, and that's essentially what I'm looking to replicate (as far as the throttling goes). I know BITS has other functionality as well that I'm not interested in, and I also have the option to consume it from .NET, but I'm interested in how it works. I've looked online, and I haven't found a clear explanation of how exactly BITS determines how much bandwidth to consume, aside from a vague "BITS polls activity to watch for a drop in the bandwidth used by other programs." What does this mean? Bandwidth consumed by other programs can drop for a number of other reasons as well - can BITS tell the difference? If I was looking for a process that replicated this "stay just under the radar, where the user won't notice the transfers" functionality, how would I go about doing it?
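    One plausible way to replicate the behaviour, as an assumption about the approach rather than a description of BITS internals: sample the NIC's total byte counters periodically, estimate foreign traffic by subtracting your own transfer's bytes, and back off when that foreign share rises. A minimal sketch of such a control loop (the two platform hooks are hypothetical stubs; on Windows the counters could come from GetIfEntry):

        #include <algorithm>
        #include <chrono>
        #include <cstdint>
        #include <thread>

        // Hypothetical platform hooks: stubs for illustration, not a real API.
        std::uint64_t total_nic_bytes() { return 0; }  // all traffic through the NIC
        std::uint64_t my_bytes_sent()   { return 0; }  // bytes our transfer pushed

        // Adaptive throttle: sample once a second; if traffic we did not
        // generate grows, halve our rate cap; if the link looks idle, creep up.
        void throttle_loop(std::uint64_t& rate_cap, std::uint64_t link_capacity) {
            std::uint64_t prev_total = total_nic_bytes();
            std::uint64_t prev_mine  = my_bytes_sent();
            for (;;) {
                std::this_thread::sleep_for(std::chrono::seconds(1));
                const std::uint64_t total   = total_nic_bytes();
                const std::uint64_t mine    = my_bytes_sent();
                const std::uint64_t foreign = (total - prev_total) - (mine - prev_mine);
                if (foreign > link_capacity / 10)   // others are active: back off hard
                    rate_cap /= 2;
                else                                // link looks idle: recover slowly
                    rate_cap = std::min(link_capacity, rate_cap + link_capacity / 20);
                prev_total = total;
                prev_mine  = mine;
            }
        }

    A heuristic like this cannot truly distinguish "other programs slowed down" from "the link degraded", which matches the vagueness you found: it only ever sees aggregate counters.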

    Read the article

  • C++/CLI value class constraint won't compile. Why?

    - by Simon
    Hello, a few weeks ago a co-worker of mine spent about two hours finding out why this piece of C++/CLI code won't compile with Visual Studio 2008 (I just tested it with Visual Studio 2010... same story).

        public ref class Test
        {
            generic<class T> where T : value class
            void MyMethod(Nullable<T> nullable)
            {
            }
        };

    The compiler says:

        Error 1 error C3214: 'T' : invalid type argument for generic parameter 'T' of generic 'System::Nullable', does not meet constraint 'System::ValueType ^'  C:\Users\Simon\Desktop\Projektdokumentation\GridLayoutPanel\Generics\Generics.cpp

    Adding ValueType will make the code compile:

        public ref class Test
        {
            generic<class T> where T : value class, ValueType
            void MyMethod(Nullable<T> nullable)
            {
            }
        };

    My question is now: why? What is the difference between value class and ValueType?

    Read the article

  • Database Modelling - Conceptually different entities with near identical fields

    - by Andrew Shepherd
    Suppose you have two sets of conceptual entities:

        MarketPriceDataSet, which has multiple ForwardPriceEntries
        PoolPriceForecastDataSet, which has multiple PoolPriceForecastEntries

    The two child objects have near-identical fields. ForwardPriceEntry has:

        StartDate
        EndDate
        SimulationItemId
        ForwardPrice
        MarketPriceDataSetId (foreign key to parent table)

    PoolPriceForecastEntry has:

        StartDate
        EndDate
        SimulationItemId
        ForecastPoolPrice
        PoolPriceForecastDataSetId (foreign key to parent table)

    If I modelled them as separate tables, the only differences would be the foreign key and the name of the price field. There has been a debate as to whether the two near-identical tables should be merged into one. The options I've thought of to model this are:

        1. Just keep them as two independent, separate tables.
        2. Have both sets in the one table with an additional "type" field, and a parent_id equalling a foreign key to either parent table. This would sacrifice referential integrity checks.
        3. Have both sets in the one table with an additional "type" field, and create a complicated sequence of joining tables to maintain referential integrity.

    What do you think I should do, and why?

    Read the article

  • ActionScript 3 SVG XML parsing bug?

    - by Mahir
    Hey, I get two different results when using the for each loop below. As far as I can tell there's no difference aside from the attributes in the two XML literals.

        for each (var pathXML:XML in svg.path)
        {
            // do stuff...
            trace(pathXML.@stroke);
        }

        // This one works; the loop iterates once over the single path element.
        var svg:XML =
            <svg>
                <path stroke="#00FF00" />
            </svg>

        // This one doesn't; the loop just exits.
        var svg:XML =
            <svg version="1.1" id="Layer_1" xmlns="http://www.w3.org/2000/svg"
                 xmlns:xlink="http://www.w3.org/1999/xlink" x="0px" y="0px"
                 width="612px" height="792px" viewBox="0 0 612 792"
                 enable-background="new 0 0 612 792" xml:space="preserve">
                <path fill="#FFFFFF" stroke="#000000" d="M160.333,372.444c0,0,17.778-115.555,60-63.333s27.778-106.666,78.889,40" />
            </svg>

    Read the article

  • MySQL encoding problem

    - by heffaklump
    I use Java and JDBC to save Japanese characters, and it works perfectly on my local MySQL. But when I tried doing the same thing on my web host's MySQL, I get ????? instead of Japanese characters. I have made the exact same tables and use the exact same code. The only difference I have found is in SHOW VARIABLES LIKE 'CHAR%':

        character_set_client      utf8
        character_set_connection  utf8
        character_set_database    latin1
        character_set_filesystem  binary
        character_set_results     utf8
        character_set_server      latin1
        character_set_system      utf8
        character_sets_dir        /s/usr-local/share/mysql/charsets/

    character_set_database is set to latin1, but I can't change it! Any tips?

    Read the article

  • CLI design and implementation?

    - by Majid
    I am developing a time management tool for my personal use. I prefer using the keyboard over the mouse, and the interface has a general-purpose text box which will act like a command line. I have just started thinking about what commands I need, what to use for the command names, how to pass in switches and parameters, and so forth. I wonder if some of you have come across a good read along these lines; something that describes the choices you have when designing a CLI, and how those affect the complexity of the interpreter and the extendability of the commands. It makes no difference if the descriptions are language-specific or in general terms. However, my implementation will be with JavaScript. Thank you.
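    Since language-agnostic descriptions are welcome here, a minimal sketch of the most common design (in C++ for concreteness, though the shape carries straight over to JavaScript; the command names are made up): tokenize the line, look the first token up in a table, and pass the rest to a handler. The interpreter never changes; extending the CLI means adding one table entry.

        #include <functional>
        #include <iostream>
        #include <map>
        #include <sstream>
        #include <string>
        #include <vector>

        // One common CLI shape: "verb arg arg ..." dispatched through a table.
        using Args    = std::vector<std::string>;
        using Handler = std::function<void(const Args&)>;

        int main() {
            std::map<std::string, Handler> commands;
            commands["add"]  = [](const Args& a) {
                std::cout << "adding task: "  << (a.empty() ? std::string() : a[0]) << '\n';
            };
            commands["done"] = [](const Args& a) {
                std::cout << "closing task: " << (a.empty() ? std::string() : a[0]) << '\n';
            };

            std::string line;
            while (std::getline(std::cin, line)) {
                std::istringstream in(line);
                std::string verb;
                if (!(in >> verb)) continue;              // blank line
                Args args;
                for (std::string t; in >> t; ) args.push_back(t);
                auto it = commands.find(verb);
                if (it != commands.end()) it->second(args);
                else std::cout << "unknown command: " << verb << '\n';
            }
        }

    Switches (like -v or --all) can then be peeled out of args before dispatch, which keeps the parsing logic in one place instead of inside every handler.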

    Read the article

  • Calling member method on unmanaged C++ pointer from C# (I think)

    - by Jacob G
    I apologize in advance if this is a trivial question... I'm pretty C++/unmanaged dumb. Here's a simplified analog to my setup.

    In myUnmanagedObject.h in the DLL:

        class myUnmanagedObject
        {
        public:
            virtual void myMethod() {}
        };

    In MyControl.h in Assembly #1:

        #pragma make_public(myUnmanagedObject)

        [event_source(managed)]
        public ref class MyControl : public System::Windows::Forms::UserControl
        {
        public:
            myUnmanagedObject* GetMyUnmanagedObject();
        };

    In C# in Assembly #2:

        unsafe
        {
            MyControl temp = new MyControl();
            myUnmanagedObject* obj = temp.GetMyUnmanagedObject();
            obj->myMethod();
        }

    I get a compile error saying that myUnmanagedObject does not contain a definition for myMethod. Assembly #2 references Assembly #1. Assembly #1 references the DLL. If I compile the DLL with /clr and reference it directly from Assembly #2, it makes no difference. How, from C#, do I execute myMethod?
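    For what it's worth: C# sees a native C++ class only as an opaque type; its member functions are not callable from C#, which is why the compiler reports no definition for myMethod even though #pragma make_public lets the type appear in signatures. The usual route is a small C++/CLI wrapper in Assembly #1 that forwards the call. A sketch (the wrapper name is invented):

        // In Assembly #1 (C++/CLI): wrap the native pointer in a ref class so
        // managed callers never touch myUnmanagedObject directly.
        public ref class MyUnmanagedObjectWrapper
        {
            myUnmanagedObject* native;   // non-owning; lifetime managed elsewhere
        public:
            MyUnmanagedObjectWrapper(myUnmanagedObject* p) : native(p) {}
            void MyMethod() { native->myMethod(); }   // forward across the boundary
        };

    Assembly #2 can then call the wrapper's MyMethod() from ordinary C#, with no unsafe code at all.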

    Read the article

  • Functions in F#... why is it not compiling?

    - by Tanmoy
    Hi, I have written two versions of some code. The first one works as expected and prints "hi". The second one gives me the error "block following this let is unfinished".

    1st version:

        #light
        let samplefn() =
            let z = 2
            let z = z * 2
            printfn "hi"
        samplefn()

    2nd version:

        #light
        let samplefn() =
            let z = 2
            let z = z * 2
        samplefn()

    The only difference is that the printfn is absent in the second version. I am using Visual Studio 2010 as my IDE. I am very new to F#, but this error seems very strange to me. I guess I am missing some very important concept. Please explain.

    Edit: Also, if I do it outside the function, I get an error even with the first version of the code:

        #light
        let z = 2
        let z = z * 2

    which fails with: Error: Duplicate definition of value 'z'

    Read the article

  • When should an array name be treated as a pointer and when does it just represent the array itself? [duplicate]

    - by user1087373
    This question already has an answer here: "When is an array name or a function name 'converted' into a pointer? (in C)". I just made a test program after reading the book, and the result turned out confusing:

        #include <stdio.h>

        int main(void)
        {
            char text[] = "hello!";
            printf("sizeof(text):%d sizeof(text+2):%d sizeof(text[0]):%d \n",
                   (int)sizeof(text), sizeof(text+2), sizeof(text[0]));
            printf("text:%p sizeof(text):%d &text:%p sizeof(&text):%d \n",
                   text, sizeof(text), &text, sizeof(&text));
            printf("text+1:%p &text+1:%p \n", text+1, &text+1);
            return 0;
        }

    The result:

        sizeof(text):7 sizeof(text+2):4 sizeof(text[0]):1
        text:0xbfc8769d sizeof(text):7 &text:0xbfc8769d sizeof(&text):4
        text+1:0xbfc8769e &text+1:0xbfc876a4

    What makes me feel confused is: why is the value of sizeof(text) 7, whereas sizeof(text+2) is 4? And what's the difference between text and &text?
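    The pattern behind both confusions, as a hedged summary: an array name decays to a pointer to its first element in most expressions, but not when it is the operand of sizeof or &. So sizeof(text) measures the whole char[7] (six letters plus the terminator), text+2 has already decayed to a char* (4 bytes on this 32-bit run), and &text is a char (*)[7]: the same address as text, but with a stride of the whole array. A small illustration:

        #include <cstdio>

        int main() {
            char text[] = "hello!";            // 6 chars + '\0' = 7 bytes

            // No decay: sizeof sees the array type char[7].
            std::printf("%zu\n", sizeof text);          // 7

            // Decay: text+2 is a char*, so this is a pointer's size.
            std::printf("%zu\n", sizeof(text + 2));     // sizeof(char*)

            // &text has type char (*)[7]: same address, different stride.
            std::printf("%p %p\n", (void*)(text + 1),   // advances 1 byte
                                   (void*)(&text + 1)); // advances 7 bytes
            return 0;
        }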

    Read the article

  • xgettext vs gettext

    - by Kentor
    I have a few questions:

        1. I know what gettext is. I've read a few posts where they mentioned xgettext, and I was curious what the difference between the two is.
        2. How can I install xgettext on Windows?
        3. Finally, does anybody have a tutorial on how to install the php-gettext library (http://savannah.nongnu.org/projects/php-gettext/)? This one usually doesn't come with PHP. I've read about it in an article, but I'm not sure how to get it working on Windows. The thing is, sometimes when you make changes you need to restart Apache to see the new data with the gettext that comes with PHP (with the library you don't need to restart it), so I wanted to use the library for development.

    Thanks!

    Read the article

  • SSRS: one static-row matrix, multiple columns will not filter out nulls

    - by pbarton99
    Using an SSRS 2005 matrix, client-side. I want to list the multiple addresses of one person, hence one row, multiple columns. The column field is =Fields!StreetName.Value. The data details field is =First(Fields!StreetPrefix.Value) & " " & First(Fields!StreetName.Value). The datasource has a row for each address; however, some rows will have nulls, since the datasource is composed of outer joins. The column grouping works, but the first column is always empty (the first 2 rows of the datasource are null); addresses appear only after the empty column. I want to filter out nulls on the matrix, but it's like the filter is ignored. I have also tried having the dataset return an empty string for a null street name and setting the filter to =Fields!StreetName.Value != ="" but no difference. What am I missing?

    Read the article

  • Move Global.asax to IHttpModule when using ASP.NET MVC

    - by rockinthesixstring
    I have successfully created an IHttpModule to replace my Global.asax file in many Web Forms applications. But looking at the Global.asax file in my MVC application, the methods are totally different. I'm wondering if it is still possible to create the same thing in an MVC app. I know it's not necessary, and the Global.asax works just fine; I suppose I just want to have nothing but the web.config in the root directory of my application. Also, I am putting all of my classes in a separate class library project instead of a folder in my MVC application. Not sure if this makes a difference or not.

    Read the article

  • iPhone Development Profile Expired

    - by theiphoneguy
    I really combed this site and others. I read and re-read the related links here and the Apple docs. I'm sorry, but either I am obviously missing something right under my nose, or this Apple profile/certificate stuff is a bit convoluted. Here it is: I have a product in the App Store. I have updated it several times and users like it. My development profile recently expired just when I was improving the app for its next release. I can run the app in the simulator. I can compile and put the distribution build on my iPhone just fine. I went to the Apple portal and renewed the development profile. I downloaded it and installed it in Xcode. I see it in the Organize window. I see it on my iPhone. I CANNOT put the debug build on my iPhone to debug or run with Instruments. The message is that either there is not a valid signed profile or it is untrusted. I subsequently tried to download and install the certificate to my Mac's keychain. Still no success. I checked the code signing section of Project settings and also for the target and the root. All appears to indicate that it is using the expected development profile for debug. Yes, I had deleted the old profile from my iPhone, from the Organizer. I cleaned the Xcode cache and all targets. I have done all of this several times and in varying sequences to try to cover every possibility. I am ready to do anything to be able to debug with Instruments in order to check for leaks or high memory usage. Even though the distribution compile runs fine on my iPhone and plays well with other running processes, I will not release anything without a leaks/memory test. Any ideas will be appreciated. If I missed something obvious, please forgive me - it was not due to just posting a question without searching for similar postings. Thanks!

    Read the article

  • PHP/MySQL: Storing and retrieving UUIDS

    - by Greg
    I'm trying to add UUIDs to a couple of tables, but I'm not sure what the best way to store/retrieve these would be. I understand it's far more efficient to use BINARY(16) instead of VARCHAR(36). After doing a bit of research, I also found that you can convert a UUID string to binary with:

        UNHEX(REPLACE(UUID(),'-',''))

    Pardon my ignorance, but is there an easy way to do this with PHP and then turn it back into a string, when needed, for readability? Also, would it make much difference if I used this as a primary key instead of auto_increment?

    EDIT: Found part of the answer:

        $bin = pack("h*", str_replace('-', '', $guid));

    How would you unpack it?
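    As a language-neutral illustration of what the pack/UNHEX pair is doing (a C++ sketch, not the PHP answer): drop the dashes, turn each pair of hex digits into one byte, and reverse the process for display. One caution: PHP's "H*" format (high nibble first) is the one that matches UNHEX's ordering; "h*" is low-nibble-first, so it is worth verifying which one your data actually needs.

        #include <array>
        #include <cassert>
        #include <cctype>
        #include <cstdint>
        #include <string>

        // "xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx" (36 chars) -> 16 raw bytes,
        // i.e. what UNHEX(REPLACE(uuid,'-','')) produces on the MySQL side.
        std::array<std::uint8_t, 16> uuid_to_bin(const std::string& uuid) {
            auto hexval = [](char c) {
                return std::isdigit((unsigned char)c)
                    ? c - '0'
                    : std::tolower((unsigned char)c) - 'a' + 10;
            };
            std::array<std::uint8_t, 16> out{};
            int i = 0, hi = -1;
            for (char c : uuid) {
                if (c == '-') continue;
                if (hi < 0) hi = hexval(c);   // first (high) nibble of the pair
                else {
                    out[i++] = (std::uint8_t)((hi << 4) | hexval(c));
                    hi = -1;
                }
            }
            return out;
        }

        // ...and back to the printable, dashed form for readability.
        std::string bin_to_uuid(const std::array<std::uint8_t, 16>& b) {
            static const char* hex = "0123456789abcdef";
            std::string s;
            for (int i = 0; i < 16; ++i) {
                if (i == 4 || i == 6 || i == 8 || i == 10) s += '-';
                s += hex[b[i] >> 4];
                s += hex[b[i] & 0x0F];
            }
            return s;
        }

        int main() {
            std::string u = "550e8400-e29b-41d4-a716-446655440000";
            assert(bin_to_uuid(uuid_to_bin(u)) == u);   // round-trips cleanly
        }

    On the primary-key question: a 16-byte UUID key works, but unlike auto_increment it is not monotonically increasing, which can hurt insert locality in a clustered index such as InnoDB's.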

    Read the article

  • Odd optimization problem under MSVC

    - by Goz
    I've seen this blog: http://igoro.com/archive/gallery-of-processor-cache-effects/

    The "weirdness" in part 7 is what caught my interest. My first thought was "that's just C# being weird". It's not: I wrote the following C++ code.

        volatile int* p = (volatile int*)_aligned_malloc( sizeof( int ) * 8, 64 );
        memset( (void*)p, 0, sizeof( int ) * 8 );

        double dStart = t.GetTime();

        for (int i = 0; i < 200000000; i++)
        {
            //p[0]++;p[1]++;p[2]++;p[3]++;   // Option 1
            //p[0]++;p[2]++;p[4]++;p[6]++;   // Option 2
            p[0]++;p[2]++;                   // Option 3
        }

        double dTime = t.GetTime() - dStart;

    The timings I get on my 2.4 GHz Core 2 Quad are as follows:

        Option 1 = ~8 cycles per loop.
        Option 2 = ~4 cycles per loop.
        Option 3 = ~6 cycles per loop.

    Now this is confusing. My reasoning behind the difference comes down to the cache write latency (3 cycles) on my chip and an assumption that the cache has a 128-bit write port (this is pure guesswork on my part). On that basis:

    In Option 1: it will increment p[0] (1 cycle), then increment p[2] (1 cycle), then it has to wait 1 cycle (for cache), then p[1] (1 cycle), then wait 1 cycle (for cache), then p[3] (1 cycle). Finally, 2 cycles for increment and jump (though it's usually implemented as decrement and jump). This gives a total of 8 cycles.

    In Option 2: it can increment p[0] and p[4] in one cycle, then increment p[2] and p[6] in another cycle. Then 2 cycles for subtract and jump. No waits needed on cache. Total: 4 cycles.

    In Option 3: it can increment p[0], then has to wait 2 cycles, then increment p[2], then subtract and jump. The problem is, if you set case 3 to increment p[0] and p[4], it STILL takes 6 cycles (which kinda blows my 128-bit read/write port out of the water).

    So... can anyone tell me what the hell is going on here? Why DOES case 3 take longer? Also, I'd love to know what I've got wrong in my thinking above, as I obviously have something wrong! Any ideas would be much appreciated! :) It'd also be interesting to see how GCC or any other compiler copes with it as well!

    Edit: Jerry Coffin's idea gave me some thoughts. I've done some more tests (on a different machine, so forgive the change in timings) with and without nops, and with different counts of nops:

        case 2 - 0.46   00401ABD jne (401AB0h)

        0 nops - 0.68   00401AB7 jne (401AB0h)
        1 nop  - 0.61   00401AB8 jne (401AB0h)
        2 nops - 0.636  00401AB9 jne (401AB0h)
        3 nops - 0.632  00401ABA jne (401AB0h)
        4 nops - 0.66   00401ABB jne (401AB0h)
        5 nops - 0.52   00401ABC jne (401AB0h)
        6 nops - 0.46   00401ABD jne (401AB0h)
        7 nops - 0.46   00401ABE jne (401AB0h)
        8 nops - 0.46   00401ABF jne (401AB0h)
        9 nops - 0.55   00401AC0 jne (401AB0h)

    I've included the jump statements so you can see that the source and destination are in one cache line. You can also see that we start to get a difference when we are 13 bytes or more apart. Until we hit 16... then it all goes wrong. So Jerry isn't right (though his suggestion DOES help a bit); however, something IS going on. I'm more and more intrigued to try and figure out what it is now. It appears to be some sort of memory-alignment oddity rather than an instruction-throughput oddity. Anyone want to explain this for an inquisitive mind? :D

    Edit 3: Interjay has a point on the unrolling that blows the previous edit out of the water. With an unrolled loop the performance does not improve. You need to add a nop in to make the gap between jump source and destination the same as for my good nop count above. Performance still sucks. It's interesting that I need 6 nops to improve performance, though. I wonder how many nops the processor can issue per cycle? If it's 3, then that accounts for the cache write latency... but if that's it, why is the latency occurring? Curiouser and curiouser...

    Read the article

  • What Use are Threads Outside of Parallel Problems on MultiCore Systems?

    - by Robert S. Barnes
    Threads make the design, implementation and debugging of a program significantly more difficult. Yet many people seem to think that every task in a program that can be threaded should be threaded, even on a single-core system. I can understand threading something like an MPEG2 decoder that's going to run on a multicore CPU (which I've done), but what can justify the significant development costs threading entails when you're talking about a single-core system, or even a multicore system if your task doesn't gain significant performance from a parallel implementation? Or, more succinctly: what kinds of non-performance-related problems justify threading? Edit: Well, I just ran across one instance that's not CPU-limited but where threads make a big difference: TCP, HTTP and the Multi-Threading Sweet Spot. Multiple threads are pretty useful when trying to max out your bandwidth to another peer over a high-latency network connection. Non-blocking I/O would use significantly less local CPU resources, but would be much more difficult to design and implement.
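    A minimal sketch of that sweet-spot pattern (the fetch helper is a stand-in that just sleeps to simulate one high-latency request): with one thread, each request waits out the full round trip; with N threads, N requests are in flight at once, so wall-clock time drops even though no extra CPU work is done.

        #include <chrono>
        #include <cstdint>
        #include <iostream>
        #include <thread>
        #include <vector>

        // Stand-in for a real ranged HTTP request; hypothetical, for
        // illustration only: sleeping models the round-trip latency.
        void fetch_range(std::uint64_t offset, std::uint64_t len) {
            std::this_thread::sleep_for(std::chrono::milliseconds(200)); // "RTT"
            std::cout << "fetched " << len << " bytes at " << offset << '\n';
        }

        int main() {
            const std::uint64_t size = 1 << 20;   // pretend 1 MiB download
            const unsigned n = 4;                 // four requests in flight
            std::vector<std::thread> workers;
            const std::uint64_t chunk = size / n;
            for (unsigned i = 0; i < n; ++i) {
                std::uint64_t off = i * chunk;
                std::uint64_t len = (i == n - 1) ? size - off : chunk;
                workers.emplace_back(fetch_range, off, len);
            }
            for (auto& t : workers) t.join();     // all four "requests" overlap
        }

    Here the four simulated fetches complete in roughly one round trip instead of four, which is the whole argument: the threads buy concurrency of waiting, not of computation.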

    Read the article

  • The real difference between "->" and "."

    - by fsdfa
    I always wanted to know what the real difference is in how the compiler sees a pointer to a struct (in C, suppose) and a struct itself.

        struct person p;
        struct person *pp;

    For pp->age, I always imagined that the compiler does: "value of pp + offset of the attribute age in the struct". But what does it do with p.age? It would be almost the same. For me, "the programmer", p is not a memory address, it's like "the structure itself", but of course this is not how the compiler deals with it. My guess is that it's more of a syntactic thing, and the compiler always does (&p)->age. Am I correct?
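    A quick way to convince yourself (a sketch; the actual codegen varies by compiler): both forms reduce to "base address + field offset", and the only difference is where the base comes from, namely the runtime value stored in pp versus the statically known address of p. So treating p.age as (&p)->age is essentially right, except that &p needs no memory load.

        #include <cstdio>
        #include <cstddef>   // offsetof

        struct person { int id; int age; };

        int main() {
            person p = {1, 30};
            person* pp = &p;

            // Both accesses are "base + offsetof(person, age)":
            // for p.age the base is &p (known statically, e.g. on the stack);
            // for pp->age the base is the runtime value stored in pp.
            std::printf("offset of age: %zu\n", offsetof(person, age));

            int* via_struct  = (int*)((char*)&p + offsetof(person, age));
            int* via_pointer = (int*)((char*)pp + offsetof(person, age));
            std::printf("%d %d %d %d\n",
                        p.age, pp->age, *via_struct, *via_pointer); // 30 30 30 30
        }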

    Read the article

  • Do I need to invoke MessageBox calls?

    - by mafutrct
    To pop up a message box, I'm using MessageBox.Show(...). I usually wrap this call in an Invoke:

        BeginInvoke(new Action(() => { MessageBox.Show(...); }));

    I've got 2 questions:

        1. Do I always need to wrap the MessageBox call in an Invoke if I'm calling from a non-GUI thread?
        2. If so, should I use BeginInvoke or Invoke? I found not much difference in my tests; BeginInvoke is, as expected (and unlike Invoke), displayed with a slight delay.

    Read the article

  • How can I tell Visual Studio to not catch a particular exception?

    - by Noel Kennedy
    I have a particular type of exception that I would like Visual Studio to not catch with the Exception Assistant. Essentially I would like it to just let my normal exception handling infrastructure deal with it. The exception is an inheritor of System.Exception which I wrote and have the source code for. Anywhere this is thrown, I want VS to not catch it; i.e. it is not useful to just suppress a single throw new BlahException(); in code. This is because the exception is thrown a lot, and I don't want to have to suppress every single instance individually. In case it makes a difference, I am on Visual Studio 2010 Ultimate, Framework 3.5 SP1.

    Read the article
