Search Results

Search found 33477 results on 1340 pages for 'static vs non static'.


  • DirectX 9 HLSL vs. DirectX 10 HLSL: syntax the same?

    - by numerical25
    For the past month or so, I have been busting my behind trying to learn DirectX, so I've been mixing back and forth between DirectX 9 and 10. One of the major changes I've seen between the two is how you process vectors on the graphics card, and one of the most drastic changes I notice is how you get the GPU to recognize your structs. In DirectX 9, you define a Flexible Vertex Format; your typical setup would be like this:

        #define CUSTOMFVF (D3DFVF_XYZRHW | D3DFVF_DIFFUSE)

    In DirectX 10, I believe the equivalent is the input vertex description:

        D3D10_INPUT_ELEMENT_DESC layout[] =
        {
            { "POSITION", 0, DXGI_FORMAT_R32G32B32_FLOAT,    0,  0, D3D10_INPUT_PER_VERTEX_DATA, 0 },
            { "COLOR",    0, DXGI_FORMAT_R32G32B32A32_FLOAT, 0, 12, D3D10_INPUT_PER_VERTEX_DATA, 0 }
        };

    I notice that DirectX 10 is more descriptive. Besides this, what are some of the other drastic changes, and is the HLSL syntax the same for both?
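
    As a point of comparison, here is a minimal sketch (my illustration, not from the question) of the HLSL vertex-input struct that would line up with the D3D10 layout above; the semantic names are what tie the C++ description to the shader:

        // HLSL: the POSITION/COLOR semantics match the input-layout entries
        struct VS_INPUT
        {
            float3 pos   : POSITION;
            float4 color : COLOR;
        };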

    Read the article

  • Java Spring 3.0 MVC Annotation vs COC. What's the preferred method in the Java community?

    - by Athens
    I am using Spring's MVC framework for an application I'm hosting on Google's App Engine. So far, my controllers are registered via the @Controller annotation; however, prior to getting into Spring, I evaluated ASP.NET MVC 2, which requires no configuration and is based on convention. Is convention over configuration (COC) the current and preferred method in the Java community for implementing MVC with Spring? Also, this may be a result of my limited knowledge so far, but I noticed that I could only instantiate my controllers with the required constructor injection if I use the COC method via ControllerClassNameHandlerMapping. For instance, the following controller bean config fails if I use DefaultAnnotationHandlerMapping:

        <bean id="c" class="com.domain.TestController">
            <constructor-arg ref="service" />
        </bean>
        <bean id="service" class="com.domain.Service" />

    My com.domain.TestController controller works fine if I use ControllerClassNameHandlerMapping/COC, but it results in an error when I use DefaultAnnotationHandlerMapping/annotations.
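
    For comparison, a minimal sketch of the annotation-style equivalent with constructor injection (my illustration, assuming component scanning is enabled and a Service bean is registered; the class names are the poster's):

        import org.springframework.beans.factory.annotation.Autowired;
        import org.springframework.stereotype.Controller;
        import org.springframework.web.bind.annotation.RequestMapping;

        @Controller
        public class TestController {

            private final Service service;

            @Autowired // replaces the XML <constructor-arg> wiring
            public TestController(Service service) {
                this.service = service;
            }

            @RequestMapping("/test")
            public String handle() {
                return "test"; // logical view name
            }
        }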

    Read the article

  • SQL Server 2000: Why is this query w/ variables so slow vs w/o variables?

    - by William DiStefano
    I can't figure out why this query is so slow with variables versus without them. I read somewhere that I need to enable "Dynamic Parameters", but I cannot find where to do this.

        DECLARE @BeginDate AS DATETIME,
                @EndDate   AS DATETIME

        SELECT @BeginDate = '2010-05-20',
               @EndDate   = '2010-05-25'

        -- Fix date range to include time values
        SET @BeginDate = CONVERT(VARCHAR(10), ISNULL(@BeginDate, '01/01/1990'), 101) + ' 00:00'
        SET @EndDate   = CONVERT(VARCHAR(10), ISNULL(@EndDate,  '12/31/2099'), 101) + ' 23:59'

        SELECT *
        FROM claim c
        WHERE (c.Received_Date BETWEEN @BeginDate AND @EndDate)          -- this is much slower
        --WHERE (c.Received_Date BETWEEN '2010-05-20' AND '2010-05-25')  -- this is much faster
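
    For reference, one workaround often suggested for this pattern is to pass the dates as real parameters through sp_executesql, so the optimizer can use their values when building the plan. This is a hedged sketch, not a confirmed fix for this poster's server; the table and column names are taken from the question:

        EXEC sp_executesql
            N'SELECT * FROM claim c WHERE c.Received_Date BETWEEN @b AND @e',
            N'@b DATETIME, @e DATETIME',
            @b = @BeginDate,
            @e = @EndDate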

    Read the article

  • Using AND vs && in a for loop (Not related to precedence?)

    - by Peter
    Why is it that this code prints "Hello!" four times and then prints "1":

        <?php
        for ($i = 1 AND $blah = 1; $i < 5; $i++)
            echo("Hello!");
        echo($blah);
        ?>

    While this doesn't print "Hello!" at all and then prints "1":

        <?php
        for ($i = 1 && $blah = 1; $i < 5; $i++)
            echo("Hello!");
        echo($blah);
        ?>

    I know AND and && have different precedences, but that doesn't seem to apply here. What am I missing? (I'm using a variant of the code above, since I will use $blah within the for loop, and I want to set its value.) Thanks for any help!
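
    A sketch of how PHP's precedence rules actually group the two initializers (my annotation; this is exactly where the difference comes from):

        <?php
        // AND binds more loosely than =, so this is two separate assignments:
        $i = 1 AND $blah = 1;   // parsed as (($i = 1) and ($blah = 1)); $i is int(1)
        var_dump($i);           // int(1)

        // && binds more tightly than =, so the right-hand side is evaluated first:
        $i = 1 && $blah = 1;    // parsed as $i = (1 && ($blah = 1)); $i is bool(true)
        var_dump($i);           // bool(true)

        // In the loop test, true < 5 casts 5 to bool, and true < true is false,
        // so the && version never enters the loop body at all.
        ?>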

    Read the article

  • ADF vs. EJB/Spring: Where should I invest my time?

    - by Arthur Huxley
    I am a junior Java SE developer, planning to become a Java Standard Edition professional. Which technologies/frameworks would be the smartest for me to learn? I will invest a lot of time and energy in the technologies I eventually choose, and they will be the basis for my career, so I need to choose carefully. I have one question in particular regarding Oracle ADF: how can it be better than Spring or EJB 3.x? No offense to the ADF developers - and please excuse my ignorance - but is there a reason for using ADF other than locking customers into Oracle products? If ADF is an inferior technology, I fear I would be making a mistake by choosing to specialize in it.

    Read the article

  • JavaScript socket vs. Flash socket?

    - by Dr.Dredel
    Steve Jobs just posted this article on why Apple rejects Flash: http://www.apple.com/hotnews/thoughts-on-flash/ I agree that JavaScript and CSS can be used to replicate some of Flash's animation, though Flash does all sorts of scaling and tweening that is incredibly powerful, and I'm not sure there's anything comparable in JavaScript; if there is, I certainly haven't seen it. However, my question is about the socket. Flash has an incredibly powerful openSocket class that allows you to connect to a server and have the server and the client talk back and forth to one another. As far as I know there is no equivalent class in JavaScript. Am I mistaken? Is there some secret mystery Ajax class that replicates the openSocket? If not, then that feature alone makes Flash an invaluable tool. I'm interested in all answers though... and yes, this IS a programming question! :)
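
    For context, the closest JavaScript counterpart is the HTML5 WebSocket API, which was only just emerging when this was asked. A minimal sketch (my illustration; the server URL is a placeholder):

        // Browser-side WebSocket: bidirectional client/server messaging
        var socket = new WebSocket("ws://example.com/echo"); // placeholder URL

        socket.onopen = function () {
            socket.send("hello from the browser");
        };

        socket.onmessage = function (event) {
            console.log("server said: " + event.data);
        };

        socket.onclose = function () {
            console.log("connection closed");
        };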

    Read the article

  • Pascal's repeat... until vs. C's do... while

    - by Bob
    In C there is a do...while loop, and Pascal's (almost) equivalent is the repeat...until loop. There is a small difference between the two: both structures iterate at least once and check whether they need to loop again only at the end, but in Pascal you write the condition that must be met to terminate the loop (REPEAT ... UNTIL something), while in C you write the condition that must be met to continue the loop (DO ... WHILE something). Is there a reason for this difference, or is it just an arbitrary decision?
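
    A minimal side-by-side sketch of the inversion, written in C with the Pascal form shown in a comment (my example):

        #include <stdio.h>

        int main(void) {
            int i = 0;

            /* Pascal:  REPEAT i := i + 1 UNTIL i >= 5;   (condition to stop)
               C: the same loop states the condition to keep going. */
            do {
                i++;
            } while (i < 5);

            printf("%d\n", i); /* prints 5 either way */
            return 0;
        }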

    Read the article

  • Add xcode-select to PATH vs. Install Xcode Command Line Tools?

    - by MattDiPasquale
    Now with Xcode 4.5, is it OK to just add the following line to my ~/.bash_profile rather than installing the Xcode Command Line Tools?

        export PATH="$PATH:`xcode-select -print-path`/usr/bin:`xcode-select -print-path`/Toolchains/XcodeDefault.xctoolchain/usr/bin"

    Note: Xcode says the following about Command Line Tools:

        Before installing, note that from within Terminal you can use the XCRUN tool
        to launch compilers and other tools embedded within the Xcode application.
        Use the XCODE-SELECT tool to define which version of Xcode is active. Type
        "man xcrun" from within Terminal to find out more.

    Read the article

  • Cucumber vs. built-in testing? [Rails]

    - by yuval
    I asked a question about different testing frameworks yesterday. This question can be found here. Now that I have a better understanding of the different frameworks, I have a very simple question: with a basic understanding of, but very limited experience with, writing tests in Rails' built-in testing framework (basic assertions), would it be okay for me to jump directly to testing with RSpec, Webrat, and Cucumber? Thank you! As a side note: yes, this is an opinion-based question, but I feel that the input received here is valuable enough to the community to keep this question open. Thanks.

    Read the article

  • Why does GCC need extra declarations in templates when VS does not?

    - by Kyle
        template<typename T>
        class Base
        {
        protected:
            Base() {}
            T& get() { return t; }
            T t;
        };

        template<typename T>
        class Derived : public Base<T>
        {
        public:
            Base<T>::get; // Line A
            Base<T>::t;   // Line B

            void foo()
            {
                t = 4;
                get();
            }
        };

        int main() { return 0; }

    If I comment out lines A and B, this code compiles fine under Visual Studio 2008. Yet when I compile under GCC 4.1 with lines A and B commented out, I get these errors:

        In member function ‘void TemplateDerived::foo()’:
        error: ‘t’ was not declared in this scope
        error: there are no arguments to ‘get’ that depend on a template parameter, so a declaration of ‘get’ must be available

    Why would one compiler require lines A and B while the other doesn't? Is there a way to simplify this? In other words, if derived classes use 20 things from the base class, I have to put 20 lines of declarations in every class deriving from Base! Is there a way around this that doesn't require so many declarations?
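
    For what it's worth, a common alternative sketch (this reflects standard C++ two-phase name lookup, not anything specific to this poster's code) is to qualify each use with this->, which makes the names dependent and avoids the per-member declarations:

        template<typename T>
        class Derived2 : public Base<T> // reuses Base from the question
        {
        public:
            void foo()
            {
                this->t = 4;  // dependent name: resolved at instantiation
                this->get();  // no per-member declaration needed
            }
        };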

    Read the article

  • When to alter a function vs when to just write a new one...?

    - by Andrew Heath
    /is n00b Through the gift of knowledge and expertise encoded here, I am doing my best to avoid n00b mistakes as I learn the basics of programming. I use functions when I (think I) can in PHP, and keep them somewhat sorted in different includes. The n00b problem I'm running into now is situations where perhaps 4/5ths of an existing function is relevant to a new need. Maybe there is a slightly different set of inputs, or an additional calculation or two in the series, or the output needs a different format/structure... but the core of the function is still applicable. Is there a good rule of thumb for when one should bolt crap onto an original function and when one should (literally) copy and paste most of it into a new function and tweak it to fit the situation? On the one hand I feel bad duplicating code; on the other I feel bad cluttering up an existing function with stuff that isn't always needed...
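
    One way to picture the middle ground (a hypothetical PHP sketch of my own, not from the question): extract the shared 4/5ths into one core function, and keep each variant as a thin wrapper that adds only its own difference:

        <?php
        // Shared core: the 4/5ths every caller needs (hypothetical example).
        function summarize_orders($orders)
        {
            $total = 0;
            foreach ($orders as $order) {
                $total += $order['amount'];
            }
            return array('count' => count($orders), 'total' => $total);
        }

        // Thin variants: each adds only its own output format.
        function summarize_orders_as_text($orders)
        {
            $s = summarize_orders($orders);
            return $s['count'] . ' orders totalling ' . $s['total'];
        }

        function summarize_orders_as_json($orders)
        {
            return json_encode(summarize_orders($orders));
        }
        ?>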

    Read the article

  • Is there an easy way to sign a C++ CLI assembly in VS 2010?

    - by jyoung
    Right now I am setting the Linker/Advanced/KeyFile option, and I am getting this warning:

        mt.exe : general warning 810100b3: is a strong-name signed assembly and
        embedding a manifest invalidates the signature. You will need to re-sign
        this file to make it a valid assembly.

    Reading around the web, it sounds like I have to set the delay-signing option, download the SDK, and run sn.exe as a post-build event. Surely there must be an easier way to do this common operation in VS 2010?
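
    For reference, the post-build step the poster describes usually boils down to a single re-signing command (a sketch; the key-file name here is a placeholder):

        rem Post-build event: re-sign after mt.exe embeds the manifest
        sn.exe -R "$(TargetPath)" "$(ProjectDir)MyKey.snk"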

    Read the article

  • Implicit vs explicit getters/setters in AS3, which to use and why?

    - by James
    Since the advent of AS3 I have been working like this:

        private var loggy:String;

        public function getLoggy():String
        {
            return loggy;
        }

        public function setLoggy(loggy:String):void
        {
            // checking to make sure loggy's new value is kosher etc...
            this.loggy = loggy;
        }

    and have avoided working like this:

        private var _loggy:String;

        public function get loggy():String
        {
            return _loggy;
        }

        public function set loggy(loggy:String):void
        {
            // checking to make sure loggy's new value is kosher etc...
            _loggy = loggy;
        }

    I have avoided using AS3's implicit getters/setters partly so that I can just start typing "get..." and content assist will give me a list of all my getters, and likewise for my setters. I also dislike underscores in my code, which turned me off the implicit route. Another reason is that I prefer the feel of this:

        whateverObject.setLoggy("loggy's awesome new value!");

    to this:

        whateverObject.loggy = "loggy's awesome new value!";

    I feel that the former better reflects what is actually happening in the code: I am calling functions, not setting values directly. After installing Flash Builder and the great new plugin SourceMate (which helps bring some of the useful features FDT is famous for into FB), I realized that when I use SourceMate's "generate getters and setters" feature, it automatically sets my code up using the implicit route:

        private var _loggy:String;

        public function get loggy():String
        {
            return _loggy;
        }

        public function set loggy(loggy:String):void
        {
            // do whatever is needed to check that loggy is an acceptable value
            _loggy = loggy;
        }

    I figure these SourceMate people must know what they are doing or they wouldn't be writing workflow-enhancement plugins for coding in AS3, so now I am questioning my ways. So my question to you is: can anyone give me a good reason why I should give up my explicit g/s ways, start using the implicit technique, and embrace those stinky little _underscores for my private vars? Or back me up in my reasons for doing things the way that I do?

    Read the article

  • Java singleton instantiation

    - by jurchiks
    I've found three ways of instantiating a singleton, but I have doubts as to whether any of them is the best there is. I'm using them in a multi-threaded environment and prefer lazy instantiation.

    Sample 1:

        private static final ClassName INSTANCE = new ClassName();

        public static ClassName getInstance() {
            return INSTANCE;
        }

    Sample 2:

        private static class SingletonHolder {
            public static final ClassName INSTANCE = new ClassName();
        }

        public static ClassName getInstance() {
            return SingletonHolder.INSTANCE;
        }

    Sample 3:

        private static ClassName INSTANCE;

        public static synchronized ClassName getInstance() {
            if (INSTANCE == null)
                INSTANCE = new ClassName();
            return INSTANCE;
        }

    The project I'm using ATM uses Sample 2 everywhere, but I kind of like Sample 3 more. There is also the enum version, but I just don't get it. The question here is: in which cases should/shouldn't I use any of these variations? I'm not looking for lengthy explanations though (there's plenty of other topics about that, but they all eventually turn into arguing IMO); I'd like it to be understandable in a few words.
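
    Since the poster mentions not getting the enum version, here is the usual sketch of it (the standard Effective Java idiom; ClassName matches the samples above):

        // The enum variant: the JVM guarantees exactly one instance,
        // created lazily when the enum class is first used, thread-safe
        // without locking, and safe against serialization and reflection.
        public enum ClassName {
            INSTANCE;

            public void doSomething() {
                // instance behavior goes here
            }
        }

        // Usage: ClassName.INSTANCE.doSomething();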

    Read the article

  • Team Foundation Server vs. SVN and other source control systems

    - by micha12
    We are currently looking for a version control system to use in our projects. Up to now we have been using VSS, but nowadays more powerful source control systems exist, like TFS and SVN. We are planning to migrate our projects to Visual Studio 2010, so the first idea that comes to mind is to start using TFS 2010. I have never worked with SVN or other version control systems. My question is: how good is TFS compared to other source control systems? Is it a good idea to use it, or should we rather use SVN (or any other system)? Thank you.

    Read the article

  • memcache is not storing data across requests

    - by morpheous
    I am new to using memcache, so I may be doing something wrong. I have written a wrapper class around memcache. The wrapper class has only static methods, so it is a quasi-singleton. The class looks something like this:

        class myCache
        {
            private static $memcache = null;
            private static $initialized = false;

            public static function init()
            {
                if (self::$initialized) return;
                self::$memcache = new Memcache();
                if (self::configure()) // connects to daemon
                {
                    self::store('foo', 'bar');
                }
                else
                {
                    throw new ConnectionError('I barfed');
                }
            }

            public static function store($key, $data, $flag = MEMCACHE_COMPRESSED, $timeout = 86400)
            {
                if (self::$memcache->get($key) !== false)
                    return self::$memcache->replace($key, $data, $flag, $timeout);
                return self::$memcache->set($key, $data, $flag, $timeout);
            }

            public static function fetch($key)
            {
                return self::$memcache->get($key);
            }
        }

        // in my index.php file, I use the class like this
        require_once('myCache.php');
        myCache::init();
        echo 'Stored value is: ' . myCache::fetch('foo');

    The problem is that the myCache::init() method is being executed in full every time a page is requested. I then remembered that static variables do not maintain state across page requests, so I decided instead to store the flag that indicates whether the server contains the startup data (for our purposes, the variable 'foo' with value 'bar') in memcache itself. Storing the status flag in memcache solves the problem of the initialisation data being loaded for every page request (which, quite frankly, defeats the purpose of memcache). However, having solved that problem, when I come to fetch the data from memcache, it is empty. I don't understand what's going on. Can anyone clarify how I can store my data once and retrieve it across page requests? BTW (just to clarify), the get/set is working correctly, and if I allow memcache to load the initialisation data for each page request (which is silly), then the data is available in memcache.
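
    As a point of reference, the usual per-request pattern looks something like this minimal sketch (my illustration; host and port are placeholders). PHP tears down all static state between requests, so the connection is re-made each time, and a get-or-set check decides whether the seed data is already cached:

        <?php
        $mc = new Memcache();
        $mc->connect('127.0.0.1', 11211); // placeholder host/port

        $value = $mc->get('foo');
        if ($value === false) {
            // first request, or the entry expired: seed the cache once
            $value = 'bar';
            $mc->set('foo', $value, MEMCACHE_COMPRESSED, 86400);
        }
        echo 'Stored value is: ' . $value;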

    Read the article

  • Can I get rid of this read lock?

    - by Pieter
    I have the following helper class (simplified):

        public static class Cache
        {
            private static readonly object _syncRoot = new object();
            private static Dictionary<Type, string> _lookup = new Dictionary<Type, string>();

            public static void Add(Type type, string value)
            {
                lock (_syncRoot)
                {
                    _lookup.Add(type, value);
                }
            }

            public static string Lookup(Type type)
            {
                string result;
                lock (_syncRoot)
                {
                    _lookup.TryGetValue(type, out result);
                }
                return result;
            }
        }

    Add will be called roughly 10 to 100 times in the application, and Lookup will be called by many threads, many thousands of times. What I would like is to get rid of the read lock. How do you normally get rid of the read lock in this situation? I have the following ideas:

    1. Require that _lookup is stable before the application starts operating. It could be built up from an attribute, which is applied automatically through the static constructor of the type the attribute is assigned to. Requiring this would mean going through all types that could carry the attribute and calling RuntimeHelpers.RunClassConstructor, which is an expensive operation.

    2. Move to copy-on-write (COW) semantics:

        public static void Add(Type type, string value)
        {
            lock (_syncRoot)
            {
                var lookup = new Dictionary<Type, string>(_lookup);
                lookup.Add(type, value);
                _lookup = lookup;
            }
        }

    (With the lock (_syncRoot) removed from the Lookup method.) The problem is that this uses an unnecessary amount of memory (which might not be a problem), and I would probably make _lookup volatile, but I'm not sure how that should be applied. (Jon Skeet's comment here gives me pause.)

    3. Use ReaderWriterLock. I believe this would make things worse, since the region being locked is small.

    Suggestions are very welcome.
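
    On idea 2, a hedged sketch of how the volatile copy-on-write variant is usually written (my illustration, not a definitive answer to the question):

        using System;
        using System.Collections.Generic;

        public static class CowCache
        {
            private static readonly object _syncRoot = new object();

            // volatile: a reader always sees a fully built dictionary
            // once the reference swap in Add publishes it.
            private static volatile Dictionary<Type, string> _lookup =
                new Dictionary<Type, string>();

            public static void Add(Type type, string value)
            {
                lock (_syncRoot) // writers still serialize with each other
                {
                    var copy = new Dictionary<Type, string>(_lookup);
                    copy.Add(type, value);
                    _lookup = copy; // atomic reference publish
                }
            }

            public static string Lookup(Type type)
            {
                // lock-free read: the snapshot is never mutated after publish
                string result;
                _lookup.TryGetValue(type, out result);
                return result;
            }
        }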

    Read the article

  • #define vs enum in an embedded environment (How do they compile?)

    - by Alexander Kondratskiy
    This question has been done to death, and I would agree that enums are the way to go. However, I am curious as to how enums compile in the final code: #defines are just textual substitutions, but do enums add anything to the compiled binary, or are the two equivalent at that stage? When writing firmware where memory is very limited, is there any advantage, no matter how small, to using #defines? Thanks! EDIT: As requested by the comment below: by embedded, I mean a digital camera. Thanks for the answers! I am all for enums!
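
    As a small illustrative sketch (my example, reflecting typical compiler behavior rather than any specific toolchain): both forms become compile-time integer constants, so neither one reserves storage in the binary by itself:

        #include <stdio.h>

        #define LED_PIN_DEFINE 13      /* token substitution by the preprocessor */

        enum { LED_PIN_ENUM = 13 };    /* integer constant known to the compiler */

        int main(void)
        {
            /* Both typically compile to the same "load immediate 13";
               neither constant occupies RAM or data-segment space. */
            printf("%d %d\n", LED_PIN_DEFINE, LED_PIN_ENUM);
            return 0;
        }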

    Read the article

  • Where can I find a tool to convert a VS solution to a gcc makefile?

    - by Tim
    I know about CMake and Bakefile already, but that is not what I am looking for. Is there a tool that will generate a makefile given a VC project (or at least a first attempt at one), so I don't have to do all the work by hand? Alternatively, is there a tool that makes CMake files? Edit: Following the link below leads me to this: http://www.winehq.org/docs/winemaker - that is a great help. I have not tried it yet.

    Read the article

  • Why won't MSMQ send a space character?

    - by cyclotis04
    I'm exploring MSMQ services, and I wrote a simple console client-server application that sends each of the client's keystrokes to the server. Whenever I hit a control character (DEL, ESC, INS, etc.) the server understandably throws an error. However, whenever I type a space character, the server receives the packet but doesn't throw an error and doesn't display the space.

    Server:

        using System;
        using System.Messaging;
        using System.Xml.Serialization;

        namespace QIM
        {
            class Program
            {
                const string QUEUE = @".\Private$\qim";
                static MessageQueue _mq;
                static readonly object _mqLock = new object();
                static XmlSerializer xs;

                static void Main(string[] args)
                {
                    lock (_mqLock)
                    {
                        if (!MessageQueue.Exists(QUEUE))
                            _mq = MessageQueue.Create(QUEUE);
                        else
                            _mq = new MessageQueue(QUEUE);
                    }

                    xs = new XmlSerializer(typeof(string));
                    _mq.BeginReceive(new TimeSpan(0, 1, 0), new object(), OnReceive);

                    while (Console.ReadKey().Key != ConsoleKey.Escape) { }
                }

                static void OnReceive(IAsyncResult result)
                {
                    Message msg;
                    lock (_mqLock)
                    {
                        try
                        {
                            msg = _mq.EndReceive(result);
                            Console.Write(".");
                            Console.Write(xs.Deserialize(msg.BodyStream));
                        }
                        catch (Exception ex)
                        {
                            Console.Write(ex);
                        }
                    }
                    _mq.BeginReceive(new TimeSpan(0, 1, 0), new object(), OnReceive);
                }
            }
        }

    Client:

        using System;
        using System.Messaging;

        namespace QIM_Client
        {
            class Program
            {
                const string QUEUE = @".\Private$\qim";
                static MessageQueue _mq;

                static void Main(string[] args)
                {
                    if (!MessageQueue.Exists(QUEUE))
                        _mq = MessageQueue.Create(QUEUE);
                    else
                        _mq = new MessageQueue(QUEUE);

                    ConsoleKeyInfo key = new ConsoleKeyInfo();
                    while (key.Key != ConsoleKey.Escape)
                    {
                        key = Console.ReadKey();
                        _mq.Send(key.KeyChar.ToString());
                    }
                }
            }
        }

    Client input:

        Testing, Testing...

    Server output:

        .T.e.s.t.i.n.g.,..T.e.s.t.i.n.g......

    You'll notice that the space character sends a message, but the character isn't displayed.

    Read the article

  • Visual Studio 2010 "Not enough storage is available to process this command"

    - by Daniel Perez
    I'm fighting with VS 2010 and this error, which seems to have been very common in previous versions, but it looks like not everyone is having it in the latest version. I've got VS 2010 SP1 and I'm getting this error quite often. The problem is that it's not even enough to restart VS to make it go away; I usually have to restart my PC, and I'm losing a lot of time doing this (it's quite frequent). I've got Windows 7 32-bit (I can't upgrade to 64-bit, the company doesn't allow it), and I can't do things like creating another solution (please don't suggest that :) ). I've used the command to make devenv.exe LARGEADDRESSAWARE, but the error keeps happening. My virtual memory size is set to automatic, and the weird thing is that VS doesn't even take 2 GB of RAM, so I don't know if the error is really because it's lacking memory or if it's some bug in the program. Any ideas, things to try, anything?
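
    For reference, the LARGEADDRESSAWARE change the poster mentions is normally applied with editbin from a Visual Studio command prompt (a sketch; the devenv.exe path varies by installation, and VS should be closed first):

        rem Default VS 2010 location on 32-bit Windows (adjust as needed)
        editbin /LARGEADDRESSAWARE "C:\Program Files\Microsoft Visual Studio 10.0\Common7\IDE\devenv.exe"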

    Read the article
