Search Results

Search found 54446 results on 2178 pages for 'struct vs class'.


  • Unable to add CRM 2011's Organization service as Service Reference to VS project

    - by Scorpion
    I have a problem accessing the Organization Service when I try to add it as a Service Reference in Visual Studio; however, I can access the service in a browser. I have tried adding the OrganizationData service and there is no issue with that. The error I get is:

        An Error occurred while attempting to find service at 'http://xxxxxxxx/xxxxx/XRMServices/2011/Organization.svc'.
        Error Details:
        There was an error downloading 'http://xxxxxxxx/xxxxx/XRMServices/2011/Organization.svc/_vti_bin/ListData.svc/$metadata'.
        The request failed with HTTP status 400: Bad Request.
        Metadata contains a reference that cannot be resolved: 'http://xxxxxxxx/xxxxx/XRMServices/2011/Organization.svc'.
        Metadata contains a reference that cannot be resolved: 'http://xxxxxxxx/xxxxx/XRMServices/2011/Organization.svc'.
        If the service is defined in the current solution, try building the solution and adding the service reference again.

    Read the article

  • Is VS 2010 SharePoint functionality good enough to replace WSP Builder?

    - by JL
    I have been using WSP builder up until now with VS 2008. I recently upgraded my IDE to VS 2010, and have heard that VS 2010 now includes functionality to work with MOSS directly. If you guys have had any experience with this new MOSS functionality and have come from a WSP builder background I would like to hear what you think. Just to add more focus to my question, I am not interested in ease of deployment at this stage, only the ability to wrap up a WSP package, so I can ship this off to production machines. So can VS 2010 out of the box create WSP packages from class library projects, like WSP Builder does?

    Read the article

  • Rails' page caching vs. HTTP reverse proxy caches

    - by John Topley
    I've been catching up with the Scaling Rails screencasts. In episode 11, which covers advanced HTTP caching (using reverse proxy caches such as Varnish and Squid), they recommend only considering a reverse proxy cache once you've already exhausted the possibilities of page, action and fragment caching within your Rails application (as well as memcached etc., but that's not relevant to this question).

    What I can't quite understand is how an HTTP reverse proxy cache can provide a performance boost for an application that already uses page caching. To simplify matters, let's assume I'm talking about a single host here. This is my understanding of how both techniques work (maybe I'm wrong):

    With page caching, the Rails process is hit initially and then generates a static HTML file that is served directly by the web server for subsequent requests, for as long as the cache for that request is valid. If the cache has expired, Rails is hit again and the static file is regenerated with the updated content, ready for the next request.

    With an HTTP reverse proxy cache, the Rails process is hit when the proxy needs to determine whether the content is stale or not. This is done using various HTTP headers such as ETag and Last-Modified. If the content is fresh, Rails responds to the proxy with an HTTP 304 Not Modified and the proxy serves its cached content to the browser, or even better, responds with its own HTTP 304. If the content is stale, Rails serves the updated content to the proxy, which caches it and then serves it to the browser.

    If my understanding is correct, doesn't page caching result in fewer hits to the Rails process? There isn't all that back and forth to determine whether the content is stale, which should mean better performance than reverse proxy caching. Why might you use both techniques in conjunction?
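    To make the revalidation round trip concrete, here is a minimal sketch in C++ of the decision the application side makes when a reverse proxy asks whether its copy is still fresh. The header names (ETag, If-None-Match) follow the HTTP spec; the Response type and revalidate function are hypothetical, not Rails or proxy code.

        #include <optional>
        #include <string>

        struct Response {
            int status;          // 304 when the proxy's copy is still fresh, 200 otherwise
            std::string body;    // empty for a 304, the regenerated page for a 200
            std::string etag;    // validator the proxy stores alongside the cached body
        };

        // current_etag:  validator for the content as it exists right now
        // if_none_match: value of the If-None-Match header the proxy sent, if any
        Response revalidate(const std::string& current_etag,
                            const std::optional<std::string>& if_none_match,
                            const std::string& fresh_body) {
            if (if_none_match && *if_none_match == current_etag) {
                // Unchanged: no body is rendered; the proxy serves what it already has.
                return {304, "", current_etag};
            }
            // Changed (or nothing cached yet): send the full page plus a fresh validator.
            return {200, fresh_body, current_etag};
        }

        int main() {
            Response first  = revalidate("v2", std::nullopt, "<html>new page</html>");
            Response second = revalidate("v2", std::string("v2"), "<html>new page</html>");
            return (first.status == 200 && second.status == 304) ? 0 : 1;
        }

    Even the cheap 304 path still wakes the application up, which is the back-and-forth the question contrasts with serving a page-cached file straight from the web server.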

    Read the article

  • Using static variable in function vs passing variable from caller

    - by Patrick
    I have a function which spawns various types of threads; one of the thread types needs to be spawned every x seconds. I currently have it like this:

        bool isTime( Time t )
        {
            return t >= now();
        }

        void spawner()
        {
            while( 1 )
            {
                Time t = now();
                if( isTime( t ) ) // isTime is called in more than one place in the real function
                {
                    launchthread();
                    t = now() + offset;
                }
            }
        }

    but I'm thinking of changing it to:

        bool isTime()
        {
            static Time t = now();
            if( t >= now() )
            {
                t = now() + offset;
                return true;
            }
            return false;
        }

        void spawner()
        {
            if( isTime() )
                launchthread();
        }

    I think the second way is neater, but I generally avoid statics in much the same way I avoid global data; does anyone have any thoughts on the different styles?
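    As a point of comparison (a sketch, not from the question; SpawnTimer, the 200 ms offset and the bounded loop are illustrative stand-ins), a common middle ground between the two versions is to let the caller own a small state object, which keeps the hidden static out of isTime() while still keeping spawner() tidy:

        #include <chrono>
        #include <iostream>
        #include <thread>

        using Clock = std::chrono::steady_clock;

        // The caller owns the deadline, so isTime() has no hidden static state and can
        // be driven from tests with whatever start time and offset are convenient.
        struct SpawnTimer {
            Clock::time_point next;
            Clock::duration   offset;

            bool isTime() {
                const auto current = Clock::now();
                if (current >= next) {
                    next = current + offset;   // schedule the next spawn
                    return true;
                }
                return false;
            }
        };

        int main() {
            SpawnTimer timer{Clock::now(), std::chrono::milliseconds(200)};
            for (int spawned = 0; spawned < 3; ) {       // bounded here; the real loop runs forever
                if (timer.isTime()) {
                    std::cout << "launchthread() would be called here\n";
                    ++spawned;
                }
                std::this_thread::sleep_for(std::chrono::milliseconds(10));
            }
            return 0;
        }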

    Read the article

  • CFLAGS vs CPPFLAGS

    - by EB
    I understand that CFLAGS (or CXXFLAGS for C++) are for the compiler, whereas CPPFLAGS is used by the preprocessor. But I still don't understand the difference. I need to specify an include path for a header file that is included with #include -- because #include is a preprocessor directive, is the preprocessor (CPPFLAGS) the only thing I care about? Under what circumstances do I need to give the compiler an extra include path? In general, if the preprocessor finds and includes needed header files, why does it ever need to be told about extra include directories? What use is CFLAGS at all? (In my case, I actually found that BOTH of these allow me to compile my program, which adds to the confusion... I can use CFLAGS OR CPPFLAGS to accomplish my goal (in autoconf context at least). What gives?)
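    A hypothetical illustration of the split (the paths, flag values and the EXTRA_FEATURE macro are made up for the example; the question is about C, but the same convention applies with CXXFLAGS for C++): options consumed by the preprocessor, such as -I and -D, conventionally go in CPPFLAGS, while options for the compiler proper go in CFLAGS or CXXFLAGS.

        // check_flags.cpp -- a trivial translation unit used to show which build
        // variable feeds which stage; the paths and flag values are hypothetical.
        //
        // Conceptually, an autoconf-style build compiles it as:
        //   g++ $(CPPFLAGS) $(CXXFLAGS) -c check_flags.cpp
        // with, for example:
        //   CPPFLAGS = -I/opt/mylib/include -DEXTRA_FEATURE   (preprocessor: include paths, macros)
        //   CXXFLAGS = -O2 -g -Wall                           (compiler proper: codegen, warnings)
        #include <cstdio>

        #ifdef EXTRA_FEATURE            // visible effect of a -D option passed via CPPFLAGS
        #  define FEATURE_STATE "on"
        #else
        #  define FEATURE_STATE "off"
        #endif

        int main() {
            std::printf("extra feature: %s\n", FEATURE_STATE);
            return 0;
        }

    Because the compiler driver runs the preprocessor itself and accepts -I and -D directly on its command line, putting an include path in CFLAGS also happens to work, which is likely why both variables solved the problem in the autoconf setup described above.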

    Read the article

  • Char* vs std::string

    - by Lockyer
    Is there any advantage to using char*'s instead of std::string? I know char*'s are usually defined on the stack, so we know exactly how much memory we'll use. Is that actually a good argument for their use? Or is std::string better in every way?
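    For reference (a sketch, not from the question), the "we know exactly how much memory we'll use" argument really applies to fixed-size char arrays on the stack rather than to char* pointers as such; the trade-off looks like this:

        #include <cstring>
        #include <iostream>
        #include <string>

        int main() {
            // Fixed-size buffer on the stack: capacity is decided up front and the
            // programmer is responsible for staying inside it and terminating it.
            char fixed[16];
            std::strncpy(fixed, "hello", sizeof(fixed) - 1);
            fixed[sizeof(fixed) - 1] = '\0';

            // std::string: owns its memory, grows as needed, knows its own length,
            // and cleans up automatically when it goes out of scope.
            std::string dynamic = "hello";
            dynamic += ", world";

            std::cout << fixed << " (" << std::strlen(fixed) << " chars)\n";
            std::cout << dynamic << " (" << dynamic.size() << " chars)\n";
            return 0;
        }

    The array never allocates, but its capacity is fixed and every operation needs manual length bookkeeping; std::string pays for possible heap allocation with automatic growth, ownership and cleanup, which is why it is the usual default.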

    Read the article

  • Mac vs. Windows Browser Font Height Rendering Issue

    - by cdmckay
    I'm using a custom font and the @font-face rule. In Windows, everything looks great, regardless of whether it's Firefox, Chrome, or IE. On Mac, it's a different story: for some reason, the Mac font renderer thinks the font is a lot shorter than it is. For example, consider this test code (live example here):

        <!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Transitional//EN"
          "http://www.w3.org/TR/xhtml1/DTD/xhtml1-transitional.dtd">
        <html>
        <head>
          <meta http-equiv="Content-Type" content="text/html; charset=UTF-8" />
          <title>Webble</title>
          <style type="text/css">
            @font-face {
              font-family: "Bubbleboy 2";
              src: url("bubbleboy-2.ttf") format('truetype');
            }
            body {
              font-family: "Bubbleboy 2";
              font-size: 30px;
            }
            div {
              background-color: maroon;
              color: yellow;
              height: 100px;
              line-height: 100px;
            }
          </style>
        </head>
        <body>
          <div>The quick brown fox jumped over the lazy dog.</div>
        </body>
        </html>

    Open it in Windows Firefox and in Mac Firefox, then use your mouse to select the text. On Windows, you'll notice it fully selects the font. On Mac, it only selects about half the font, and if you look at what it is selecting, you'll see that that part has been centered instead of spanning the full height of the font. Is there any way to fix this rather large discrepancy?

    Read the article

  • ODBC vs MySQLClient

    - by Matt
    I'm currently using ODBC to connect to my MySQL database, using C#. I've been told that using the MySql Connector would be better and faster, and not dependent on Windows. Can someone shed some light on this, please? I've been unable to find anything on the net so far.

    Read the article

  • lexers vs parsers

    - by Naveen
    Are lexers and parsers really that different in theory? It seems fashionable to hate regular expressions: coding horror, another blog post. However, popular lexing-based tools such as pygments, geshi, and prettify all use regular expressions. They seem to lex anything... When is lexing enough, and when do you need EBNF? Has anyone used the tokens produced by these lexers with bison or antlr parser generators?
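    As an illustration (a sketch, not code from pygments, geshi or prettify), regular expressions are a comfortable fit as long as the job is recognising a flat stream of tokens; it is nesting, such as balanced parentheses, that falls outside regular languages and calls for an EBNF grammar and a parser:

        #include <iostream>
        #include <regex>
        #include <string>
        #include <vector>

        struct Token { std::string kind; std::string text; };

        // A flat token stream is regular: one alternation per token kind is enough.
        std::vector<Token> lex(const std::string& src) {
            static const std::regex token_re(
                R"((\d+)|([A-Za-z_]\w*)|([-+*/()])|(\s+))");
            std::vector<Token> out;
            for (auto it = std::sregex_iterator(src.begin(), src.end(), token_re);
                 it != std::sregex_iterator(); ++it) {
                const std::smatch& m = *it;
                if (m[1].matched)      out.push_back({"number", m.str()});
                else if (m[2].matched) out.push_back({"identifier", m.str()});
                else if (m[3].matched) out.push_back({"operator", m.str()});
                // whitespace (group 4) is simply skipped
            }
            return out;
        }

        int main() {
            // Lexing "price * (1 + rate)" is easy; checking that the parentheses
            // nest correctly is not a regular-language problem and belongs to the
            // parser built from an EBNF grammar.
            for (const Token& t : lex("price * (1 + rate)"))
                std::cout << t.kind << ": " << t.text << '\n';
            return 0;
        }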

    Read the article

  • Bilinear interpolation - DirectX vs. GDI+

    - by holtavolt
    I have a C# app for which I've written GDI+ code that uses Bitmap/TextureBrush rendering to present 2D images, which can have various image processing functions applied. This code is a new path in an application that mimics existing DX9 code, and they share a common library to perform all vector and matrix (e.g. ViewToWorld/WorldToView) operations. My test bed consists of DX9 output images that I compare against the output of the new GDI+ code.

    A simple test case that renders to a viewport that matches the Bitmap dimensions (i.e. no zoom or pan) does match pixel-perfect (no binary diff). But as soon as the image is zoomed up (magnified), I get very minor differences in 5-10% of the pixels. The magnitude of the difference is 1 (occasionally 2)/256. I suspect this is due to interpolation differences.

    Question: For a DX9 ortho projection (and identity world space), with a camera perpendicular and centered on a textured quad, is it reasonable to expect DirectX.Direct3D.TextureFilter.Linear to generate identical output to a GDI+ TextureBrush filled rectangle/polygon when using the System.Drawing.Drawing2D.InterpolationMode.Bilinear setting?

    For this (magnification) case, the DX9 code is using this (MinFilter and MipFilter set similarly):

        Device.SetSamplerState(0, SamplerStageStates.MagFilter, (int)TextureFilter.Linear);

    and the GDI+ path is using:

        g.InterpolationMode = InterpolationMode.Bilinear;

    I thought that "bilinear interpolation" was a fairly specific filter definition, but then I noticed that there is another option in GDI+ for "HighQualityBilinear" (which I've tried, with no difference, which makes sense given the description of "added prefiltering for shrinking").

    Followup question: Is it reasonable to expect pixel-perfect output matching between DirectX and GDI+ (assuming all external coordinates passed in are equal)? If not, why not?

    Finally, there are a number of other APIs I could be using (Direct2D, WPF, GDI, etc.), and this question generally applies to comparing the output of "equivalent" bilinear interpolated output images across any two of these. Thanks!
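    For reference, bilinear filtering itself is well defined, but implementations differ in where the sample point sits relative to texel centers, the precision (often fixed-point) of the weights, and the final rounding step, and any one of those choices can account for a 1/256 discrepancy. Below is a generic sketch (not either API's actual sampler) of the computation, with those knobs marked in comments:

        #include <cmath>
        #include <cstdint>
        #include <cstdio>

        // Sample a single-channel image with bilinear weights. The half-texel
        // centering and the final rounding are exactly the points on which two
        // renderers may reasonably disagree by one code value out of 256.
        uint8_t bilinear(const uint8_t* img, int w, int h, double x, double y) {
            x -= 0.5; y -= 0.5;                      // treat texel centers as (i + 0.5, j + 0.5)
            int x0 = (int)std::floor(x), y0 = (int)std::floor(y);
            double fx = x - x0, fy = y - y0;         // fractional weights
            auto at = [&](int i, int j) {            // clamp addressing at the image edge
                i = i < 0 ? 0 : (i >= w ? w - 1 : i);
                j = j < 0 ? 0 : (j >= h ? h - 1 : j);
                return (double)img[j * w + i];
            };
            double top = at(x0, y0)     * (1 - fx) + at(x0 + 1, y0)     * fx;
            double bot = at(x0, y0 + 1) * (1 - fx) + at(x0 + 1, y0 + 1) * fx;
            return (uint8_t)std::lround(top * (1 - fy) + bot * fy);  // rounding choice matters here
        }

        int main() {
            const uint8_t img[4] = { 10, 200, 60, 90 };           // a 2x2 test image
            std::printf("%d\n", bilinear(img, 2, 2, 1.0, 1.0));   // dead center of the four texels
            return 0;
        }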

    Read the article

  • MS Sql Full-text search vs. LIKE expression

    - by Marks
    Hi. I'm currently looking for a way to search a big database (500MB - 10GB or more across 10 tables) with a lot of different fields (nvarchars and bigints). Many of the fields that should be searched are not in the same table. An example: a search for '5124 Peter' should return all items that...

      - have an ID with 5124 in it,
      - have 'Peter' in the title or description,
      - have an item type id with 5124 in it,
      - were created by a user named 'peter' or a user whose id has 5124 in it,
      - were created by a user with '5124' or 'peter' in his street address.

    How should I do the search? I read that the full-text search of MS SQL is a lot more performant than a query with the LIKE keyword, and I think the syntax is clearer, but I think it can't search on bigint (id) values, and I read that it has performance problems with indexing and therefore slows down inserts to the DB. In my project there will be more inserting than reading, so this could matter. Thanks in advance, Marks

    Read the article

  • Cannot find suitable formatter for custom class object

    - by Ganesha87
    I'm writing messages to a Message Queue in C# as follows:

        ObjMsg objMsg = new ObjMsg(1, "ascii", 20090807);
        Message m = new Message();
        m.Formatter = new BinaryMessageFormatter();
        m.Body = objMsg;
        queue.Send(m);

    and I'm trying to read the messages back like this:

        Message m = new Message();
        m.Formatter = new BinaryMessageFormatter();
        MessageQueue mq = new MessageQueue("./pqueue");
        m = mq.Receive();
        ObjMsg msg = (ObjMsg)m.Body;

    However, I'm getting an error message which says: "Cannot find a formatter capable of reading this message."

    Read the article

  • MSSQL Search Proper Names Full Text Index vs LIKE + SOUNDEX

    - by Matthew Talbert
    I have a database of names of people that currently has 35 million rows, and I need to know the best method for quickly searching these names. The current system (not designed by me) simply has the first and last name columns indexed and uses LIKE queries, with the additional option of using SOUNDEX (though I'm not sure this is actually used much). Performance has always been a problem with this system, so currently the searches are limited to 200 results (which still takes too long to run). So, I have a few questions:

      - Does a full-text index work well for proper names?
      - If so, what is the best way to query proper names? (CONTAINS, FREETEXT, etc.)
      - Is there some other system (like Lucene.net) that would be better?

    Just for reference, I'm using Fluent NHibernate for data access, so methods that work well with that will be preferred. I'm currently using MS SQL 2008.

    Read the article

  • VS 2008 created shortcut doesn't show up in "Send To" menu

    - by Brettski
    I have a WinForms application built using Visual Studio 2008, and I added a Setup Project to the solution to create an installation MSI file. I need the setup project to create a shortcut to the application's executable in the user's Send To menu, so that when someone right-clicks on a file, my application shows up in the Send To list and can be selected. Under the File System settings of the Setup project I figured out how to add a shortcut to the user's Send To menu. The problem is that the shortcut doesn't show up in the Send To menu when you right-click on a file, whereas if I manually create a shortcut to my executable, the application does show in the Send To menu. I have read many suggestions on the web about registry entries required for this to work (there is a VBS file written by Ramesh Srinivasan which inserts them), but on every system I have tried the registry values already existed, so this is not the problem. It seems to be more about the shortcut Visual Studio (or the MSI, anyway) is creating.

    Read the article

  • UI Terminology - Enabled vs. Active

    - by Pamela
    When designing a feature that can be accessed by different user levels, I'm wondering how the use of "enabled" versus "active" will work. If I'm an administrator, it means I have the ability to turn on and off a feature. Does this mean the feature is enabled for me or active? Once I turn this feature on, is it then enabled or active? Terminology is the pits. On the subject, does anyone know of a reference book or site dedicated to questions regarding standard terminology for UIs? Thanks a million!

    Read the article

  • Raudus vs ExtPascal

    - by user193655
    Delphi developers have several tools (several alternatives to ASP.NET) for building web applications. While the No. 1 framework is Intraweb, there is a lot of interest around ExtJS, which has two incarnations: 1) the open source ExtPascal, and 2) the closed source Raudus. The products are different: Raudus never supports the latest ExtJS version (while ExtPascal does, because, as far as I read, it "almost automatically updates itself to the latest ExtJS version"), and Raudus "seems" much more RAD (much more similar to Intraweb from the RAD point of view). Anyway, why choose one or the other? And why can't Raudus, since it is free, become open source?

    Read the article

  • Changing settings in multiple VS project

    - by Danra
    Hey, is there any way to change settings for multiple projects in a Visual Studio 2008 C++ solution? For example, adding a library dependency for all the projects, or ignoring a specific warning. I am aware of being able to change some global settings in the IDE itself, but I'm looking for settings which will be stored in the solution/project files. Thanks, Dan

    Read the article

  • Rails new vs create

    - by Senthil
    Why is there a need to define a new method in a RESTful controller and follow it up with a create method? A Google search didn't provide the answer I was looking for. I understand the difference, but I need to know why they are used the way they are.

    Read the article

  • Oracle: '= ANY()' vs. 'IN ()'

    - by eidylon
    Hi all, I just stumbled upon something in Oracle SQL (not sure if it exists in others) that I am curious about. I am asking here as a wiki, since it's hard to search for symbols in Google... I just found that when checking a value against a set of values you can do

        WHERE x = ANY (a, b, c)

    as opposed to the usual

        WHERE x IN (a, b, c)

    So I'm curious: what is the reasoning for these two syntaxes? Is one standard and one some oddball Oracle syntax, or are they both standard? And is there a preference of one over the other for performance reasons? Just curious what anyone can tell me about that '= ANY' syntax. CheerZ!

    Read the article

  • Checking for empty arrays: count vs empty

    - by Dan McG
    This question on 'How to tell if a PHP array is empty' had me thinking of this question: is there a reason that count should be used instead of empty when determining if an array is empty or not? My personal thought is that if the two are equivalent for the case of empty arrays, you should use empty, because it gives a boolean answer to a boolean question. From the question linked above, it seems that count($var) == 0 is the popular method. To me, while technically correct, it makes no sense. E.g. Q: $var, are you empty? A: 7. Hmmm... Is there a reason I should use count == 0 instead, or is it just a matter of personal taste? As pointed out by others in comments for a now-deleted answer, count will have performance impacts for large arrays because it will have to count all elements, whereas empty can stop as soon as it knows it isn't empty. So, if they give the same results in this case, but count is potentially inefficient, why would we ever use count($var) == 0?

    Read the article

  • Unexpected output from Bubblesort program with MSVC vs TCC

    - by Sujith S Pillai
    One of my friends sent this code to me, saying it doesn't work as expected:

        #include <stdio.h>

        void main()
        {
            int a[10] = {23, 100, 20, 30, 25, 45, 40, 55, 43, 42};
            int sizeOfInput = sizeof(a)/sizeof(int);
            int b, outer, inner, c;

            printf("Size is : %d \n", sizeOfInput);
            printf("Values before bubble sort are : \n");
            for ( b = 0; b < sizeOfInput; b++)
                printf("%d\n", a[b]);
            printf("End of values before bubble sort... \n");

            for ( outer = sizeOfInput; outer > 0; outer-- )
            {
                for ( inner = 0 ; inner < outer ; inner++)
                {
                    printf ( "Comparing positions: %d and %d\n", inner, inner+1);
                    if ( a[inner] > a[inner + 1] )
                    {
                        int tmp = a[inner];
                        a[inner] = a[inner+1];
                        a[inner+1] = tmp;
                    }
                }
                printf ( "Bubble sort total array size after inner loop is %d :\n", sizeOfInput);
                printf ( "Bubble sort sizeOfInput after inner loop is %d :\n", sizeOfInput);
            }

            printf ( "Bubble sort total array size at the end is %d :\n", sizeOfInput);
            for ( c = 0 ; c < sizeOfInput; c++)
                printf("Element: %d\n", a[c]);
        }

    I am using the Microsoft Visual Studio command-line tool for compiling this on a Windows XP machine:

        cl /EHsc bubblesort01.c

    My friend gets the correct output on a dinosaur machine (the code is compiled using TCC there). My output is unexpected: the array mysteriously grows in size in between. If you change the code so that the variable sizeOfInput is renamed to sizeOfInputt, it gives the expected results! A search done at the Microsoft Visual C++ Developer Center doesn't give any results for "sizeOfInput". I am not a C/C++ expert, and am curious to find out why this happens. Any C/C++ experts who can shed some light on this?

    Unrelated note: I seriously thought of rewriting the whole code to use quicksort or merge sort before posting it here. But, after all, it is not Stooge sort...

    Edit: I know the code is not correct (it reads beyond the last element), but I am curious why the variable name makes a difference.
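    For reference, the out-of-bounds access happens on the last inner iteration of the first outer pass, where a[inner + 1] refers to a[10], one element past the end of the array: the comparison reads it and the swap can write it. With MSVC's stack layout in this build, that location apparently overlaps sizeOfInput, and renaming the variable reshuffles the locals so something else gets clobbered instead. A bounds-safe version of the two loops (a sketch, leaving the rest of the program aside) simply stops the inner index one element earlier:

        // Sketch of the fixed loops (not the friend's original): the inner index
        // stops at outer - 1 so that a[inner + 1] never reaches a[10].
        #include <cstdio>

        int main() {
            int a[10] = {23, 100, 20, 30, 25, 45, 40, 55, 43, 42};
            const int sizeOfInput = sizeof(a) / sizeof(a[0]);

            for (int outer = sizeOfInput; outer > 1; outer--) {
                for (int inner = 0; inner < outer - 1; inner++) {   // last pair is (outer-2, outer-1)
                    if (a[inner] > a[inner + 1]) {
                        int tmp = a[inner];
                        a[inner] = a[inner + 1];
                        a[inner + 1] = tmp;
                    }
                }
            }

            for (int c = 0; c < sizeOfInput; c++)
                std::printf("%d\n", a[c]);
            return 0;
        }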

    Read the article

  • Gtk+ vs Qt language bindings

    - by Adam Smith
    Put shortly, for those familiar with language bindings in Qt and Gtk+ (e.g. Python and Ruby): are there any quality or capability differences?

    More background: I know C++ and Qt very well, and have minimal experience with Gtk+. I know C++ is not ideal for language bindings due to the lack of a well defined ABI (application binary interface). I also read that Gtk+ was designed to be bound to other languages, so I wonder how this manifests itself in practice. Are the Gtk+ bindings better maintained, or do they work better in some way than their Qt counterparts? I am presently quite interested in the Go language, and they have started developing Gtk+ bindings; however, C++ bindings are far away. It makes me wonder whether learning Gtk+ is worth it.

    Read the article

  • long vs. short branches in version control

    - by Vincenzo
    I wonder whether anyone knows of research done on the question "What is good/bad about long/short branches in version control?" I'm specifically interested in academic research performed in this field. My questions are:

      - What problems (or conflicts) long branches may produce, and how to deal with them
      - How to split a big task into smaller branches/sub-tasks
      - How to coordinate the changes in multiple short branches related to the same code

    Thanks in advance for links and suggestions!

    Read the article
