Search Results

Search found 4503 results on 181 pages for 'logical operator'.

Page 158 of 181

  • How to draw a part of a window into a memory device context?

    - by Nell
    I'm using simple statements to keep it, er, simple: The screen goes from 0, 0 to 1000, 1000 (screen coordinates). A window goes from 100, 100 to 900, 900 (screen coordinates). I have a memory device context that goes from 0, 0 to 200, 200 (logical coordinates). I need to send a WM_PRINT message to the window. I can pass the device context to the window via WM_PRINT, but I cannot tell it which part of the window it should draw into the device context. Is there some way to alter the device context that will result in the window drawing a specific part of itself into the device context (say, its bottom right portion from 700, 700 to 900, 900)? (This is all plain old GDI in C or C++, and any solution must be as well.) Please note: This problem is part of a larger solution in which the device context size is fixed and speed is crucial, so I cannot draw the window in full into a separate device context and blit the part I want from the resultant full bitmap into my device context.
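
    A minimal sketch of the setup described, assuming SetWindowOrgEx can be used to shift the memory device context's logical origin so that the desired sub-rectangle lands at its top-left; the 600, 600 offset is hypothetical (the screen region 700, 700 to 900, 900 expressed in this window's client coordinates), and whether a given window honours the shifted origin during WM_PRINT depends on how it paints:

        #include <windows.h>

        // Build a 200x200 memory DC and ask the window to paint into it with its
        // origin shifted, so the client-area region starting at (600, 600) lands
        // at the top-left of the bitmap. (600, 600 is hypothetical -- the screen
        // region 700..900 expressed in this window's client coordinates.)
        void CaptureBottomRight(HWND hwnd, HDC hdcScreen)
        {
            HDC     hdcMem = CreateCompatibleDC(hdcScreen);
            HBITMAP bmp    = CreateCompatibleBitmap(hdcScreen, 200, 200);
            HGDIOBJ old    = SelectObject(hdcMem, bmp);

            // Map logical point (600, 600) to device point (0, 0) in the memory DC.
            SetWindowOrgEx(hdcMem, 600, 600, NULL);

            // Ask the window to render itself into the offset DC.
            SendMessage(hwnd, WM_PRINT, (WPARAM)hdcMem,
                        PRF_CLIENT | PRF_CHILDREN | PRF_ERASEBKGND);

            // ... use the 200x200 bitmap here ...

            SelectObject(hdcMem, old);
            DeleteObject(bmp);
            DeleteDC(hdcMem);
        }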

    Read the article

  • Why does this code compile in GNU C++ but not in VC++2010 RTM?

    - by volnet
    #include #include #include #include "copy_of_auto_ptr.h" #ifdef _MSC_VER #pragma message("#include ") #include // http://gcc.gnu.org/onlinedocs/gcc/Diagnostic-Pragmas.html#Diagnostic-Pragmas #endif /* cases 1-4 are the requirements of auto_ptr, which come from http://ptgmedia.pearsoncmg.com/images/020163371X/autoptrupdate/auto_ptr_update.html */ /* case 1. (1) Direct-initialization, same type, e.g. */ std::auto_ptr source_int() { // return std::auto_ptr(new int(3)); std::auto_ptr tmp(new int(3)); return tmp; } /* case 2. (2) Copy-initialization, same type, e.g. */ void sink_int(std::auto_ptr p) { std::cout source_derived() { // return std::auto_ptr(new Derived()); std::auto_ptr tmp(new Derived()); return tmp; } /* case 4. (4) Copy-initialization, base-from-derived, e.g. */ void sink_base( std::auto_ptr p) { p->go(); } int main(void) { /* // auto_ptr */ // case 1. // auto_ptr std::auto_ptr p_int(source_int()); std::cout p_derived(source_derived()); p_derived->go(); // case 4. // auto_ptr sink_base(source_derived()); return 0; } In Eclipse (GNU C++, gcc version 3.4.5, mingw-vista special r3) there are two compile errors: Description Resource Path Location Type initializing argument 1 of `void sink_base(std::auto_ptr<Base>)' from result of `std::auto_ptr<_Tp>::operator std::auto_ptr<_Tp1>() [with _Tp1 = Base, _Tp = Derived]' auto_ptr_ref_research.cpp auto_ptr_ref_research/auto_ptr_ref_research 190 C/C++ Problem Description Resource Path Location Type no matching function for call to `std::auto_ptr::auto_ptr(std::auto_ptr)' auto_ptr_ref_research.cpp auto_ptr_ref_research/auto_ptr_ref_research 190 C/C++ Problem But it compiles fine in VS2010 RTM. Questions: Which compiler follows the ISO C++ standard? Is case 4 the problem that auto_ptr and auto_ptr_ref are meant to resolve?

    Read the article

  • Does Socket open another thread? Does it return something?

    - by Roman
    In the client application I call new Socket(serverIP, serverPort). As a result the client application sends a request to the server application to open a socket. Does it start a new thread? I mean, which of the following is true? The client application sends a request and immediately starts to execute the following commands (not waiting for the answer). The client sends the request and waits for the answer; as soon as the answer is obtained, the client application continues to execute the following commands. The second case seems more realistic and logical to me. However, I do not understand what happens if the server does not open a socket and does not say that it does not "want" to open one (this can happen if the server does not exist or the network is broken). What will happen in this case? Will the client wait forever? In general it would be nice for the client to know the result of its request for the socket. For example I can imagine the following situations: The socket is opened by the server. The server refuses to open a socket, so the server exists, it got the request from the client, but it says "no". There is no response from the server. I know that new Socket(serverIP, serverPort) does not "return" this kind of information. But it throws exceptions. One of them is "UnknownHostException". When is it thrown? When the server is not responding for a while (for how long)? ADDED: I just found out that UnknownHostException is thrown to indicate that the IP address of a host could not be determined. So, it is unrelated to the situations described above (server is not responding, server refuses to open a socket).

    Read the article

  • How to iterate properly across a const set?

    - by Jared
    I'm working on a program that's supposed to represent a graph. My issue is in my printAdjacencyList function. Basically, I have a Graph ADT that has a member variable "nodes", which is a map of the nodes of that graph. Each Node has a set of Edge* to the edges it is connected to. I'm trying to iterate across each node in the graph and each edge of a node. void MyGraph::printAdjacencyList() { std::map<std::string, MyNode*>::iterator mit; std::set<MyEdge*>::iterator sit; for (mit = nodes.begin(); mit != nodes.end(); mit++ ) { std::cout << mit->first << ": {"; const std::set<MyEdge*> edges = mit->second->getEdges(); for (sit = edges.begin(); sit != edges.end(); sit++) { std::pair<MyNode*, MyNode*> edgeNodes = *sit->getEndpoints(); } } std::cout << " }" << std::endl; } getEdges is declared as: const std::set<MyEdge*>& getEdges() { return edges; }; and get Endpoints is declared as: const std::pair<MyNode*, MyNode*>& getEndpoints() { return nodes; }; The compiler error I'm getting is: MyGraph.cpp:63: error: request for member `getEndpoints' in `*(&sit)->std::_Rb_tree_const_iterator<_Tp>::operator-> [with _Tp = MyEdge*]()', which is of non-class type `MyEdge* const' MyGraph.cpp:63: warning: unused variable 'edgeNodes' I have figured out that this probably means I'm misusing const somewhere, but I can't figure out where for the life of me. Any information would be appreciated. Thanks!
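
    For reference, a minimal self-contained sketch of walking a const std::set of pointers with a const_iterator; the Edge type below is illustrative, not the poster's MyEdge/MyGraph classes:

        #include <iostream>
        #include <set>
        #include <utility>

        struct Edge {
            std::pair<int, int> endpoints;
            const std::pair<int, int>& getEndpoints() const { return endpoints; }
        };

        // Iterating a const set of pointers: the iterator must be a const_iterator,
        // and *sit is an Edge* const, so member access goes through (*sit)->.
        void printEdges(const std::set<Edge*>& edges) {
            for (std::set<Edge*>::const_iterator sit = edges.begin();
                 sit != edges.end(); ++sit) {
                const std::pair<int, int>& e = (*sit)->getEndpoints();
                std::cout << e.first << " -> " << e.second << '\n';
            }
        }

        int main() {
            Edge a;
            a.endpoints = std::make_pair(1, 2);
            std::set<Edge*> edges;
            edges.insert(&a);
            printEdges(edges);
            return 0;
        }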

    Read the article

  • Using git (or some other VCS) at your company

    - by supercheetah
    Some friends of mine and I were talking recently about version control, and how they were using VSS at their jobs, and were probably going to be moving off of that soon. One of them said that his company will likely be going with Team Foundation Server. Eventually, the conversation did get around to talking about some of the open source VCSes out there, including git and SVN. None of us really knew about any companies that use either of these internally, although we imagined that a number of them did so for SVN, but we weren't too sure about git. I brought up Google and Android using it, but my friend figured that's only for the public facing source code, and that they may use something different for internal projects. Apparently it's more than just SCM that makes TFS so intriguing: Microsoft Sales people and support (although my friend did point out some things to his managers that he thought might be misleading on MS' part) Integration of things beyond SCM, including project management (I'm just finding out that there are tools geared towards the same things for git) Again, it's Microsoft, and the transition from VSS to TFS seems logical (or does it?) I'm not much of a fan of SVN, so I didn't really bring it up much, but I am curious about whether or not git is used at your company for internal projects. Have you thought about it, and decided against it? Any reason why?

    Read the article

  • Using placeholders/variables in a sed command

    - by jesse_galley
    I want to store a specific part of a matched result as a variable to be used for replacement later. I would like to keep this in a one-liner instead of finding the variable I need beforehand. When configuring Apache and using mod_rewrite, you can specify specific parts of patterns to be used as variables, like this: RewriteRule ^www.example.com/page/(.*)$ http://www.example.com/page.php?page=$1 [R=301,L] The part of the pattern match that's contained inside the parentheses is stored as $1 for use later. So if the url was www.example.com/page/home, it would be replaced with www.example.com/page.php?page=home. So the "home" part of the match was saved in $1 because it was the part of the pattern inside the parentheses. I want something like this functionality with a sed command. I need to automatically replace many strings in a SQL dump file, to add DROP TABLE IF EXISTS commands before each CREATE TABLE, but I need to know the table name to do this, so if the dump file contains something like: ... CREATE TABLE `orders` ... I need to run something like: cat dump.sql | sed "s/CREATE TABLE `(.*)`/DROP TABLE IF EXISTS $1\N CREATE TABLE `$1`/g" to get the result of: ... DROP TABLE IF EXISTS `orders` CREATE TABLE `orders` ... I'm using the mod_rewrite syntax in the sed command as a logical example of what I'm trying to do. Any suggestions?

    Read the article

  • database table design

    - by e.b.white
    I designed the tables below for a system that looks like a package delivery system. For example, after the user receives the package, the postman should record it in the system: the state (history table) is "delivered", the operator is this postman, and the current state (state table) is of course "delivered". history table: +---------------+--------------------------+ | Field | Desc | +---------------+--------------------------+ | id | PRIMARY KEY | +---------------+--------------------------+ | package_id | package_tracking_id | +---------------+--------------------------+ | state | package_state | +---------------+--------------------------+ | operators | operators | +---------------+--------------------------+ | create_time| create_time | +---------------+--------------------------+ state table: +---------------+--------------------------+ | Field | Desc | +---------------+--------------------------+ | id | PRIMARY KEY | +---------------+--------------------------+ | package_id | package_tracking_id | +---------------+--------------------------+ | state | latest_package_state | +---------------+--------------------------+ Above is just the basic information to record; some other information (like invoice, destination, ...) should be recorded as well. But there are different service types, like s1 and s2: for s1 the invoice does not need to be recorded but for s2 it does, and maybe s2 needs some other information recorded (like the telephone of the end user). Also, at delivery way stations there is additional information to record, and for different service types the information differs. My questions are: 1. For different service types, do I need to declare different tables (option A) or just one big table that can record all information for all types (option B)? 2. If option A, since the basic information above is a MUST, how can I avoid declaring duplicate fields in the different tables?

    Read the article

  • How to combine multiple LINQ parts into one query?

    - by user2943399
    The operator should be 'AND' and not 'OR'. I am trying to refactor the following code, and I understand that this way of writing a LINQ query may not be correct. Can someone advise me how to combine the following into one query? AllCompany.Where(itm => itm != null).Distinct().ToList(); if (AllCompany.Count > 0) { //COMPANY NAME if (isfldCompanyName) { AllCompany = AllCompany.Where(company => company["Company Name"].StartsWith(fldCompanyName)).ToList(); } //SECTOR if (isfldSector) { AllCompany = AllCompany.Where(company => fldSector.Intersect(company["Sectors"].Split('|')).Any()).ToList(); } //LOCATION if (isfldLocation) { AllCompany = AllCompany.Where(company => fldLocation.Intersect(company["Location"].Split('|')).Any()).ToList(); } //CREATED DATE if (isfldcreatedDate) { AllCompany = AllCompany.Where(company => company.Statistics.Created >= createdDate).ToList(); } //LAST UPDATED DATE if (isfldUpdatedDate) { AllCompany = AllCompany.Where(company => company.Statistics.Updated >= updatedDate).ToList(); } //Allow Placements if (isfldEmployerLevel) { fldEmployerLevel = (fldEmployerLevel == "Yes") ? "1" : ""; AllCompany = AllCompany.Where(company => company["Allow Placements"].ToString() == fldEmployerLevel).ToList(); }

    Read the article

  • Database schema to store AND, OR relation, association

    - by user455387
    Many thanks for your help on this. In order for an enterprise to get a call for tender it must meet certain requirements. For the first example the enterprise must have a minimal class 4, and have qualification 2 in sector 5. Minimal class is always one number. Qualification can be anything (single, or multiple using AND, OR logical operators). I have created tables in order to map each number to its given name. Now I need to store requirements in the database. minimal class 4 Sector Qualification 5.2 minimal class 2 Sector Qualifications 3.9 and 3.10 minimal class 3 Sector Qualifications 6.1 or 6.3 minimal class 1 Sector Qualifications (3.1 and 3.2) or 5.6 class Domain < ActiveRecord::Base has_many :domain_classes has_many :domain_sectors has_many :sector_qualifications, :through => :domain_sectors end class DomainClass < ActiveRecord::Base belongs_to :domain end class DomainSector < ActiveRecord::Base belongs_to :domain has_many :sector_qualifications end class SectorQualification < ActiveRecord::Base belongs_to :domain_sector end create_table "domains", :force => true do |t| t.string "name" end create_table "domain_classes", :force => true do |t| t.integer "number" t.integer "domain_id" end create_table "domain_sectors", :force => true do |t| t.string "name" t.integer "number" t.integer "domain_id" end create_table "sector_qualifications", :force => true do |t| t.string "name" t.integer "number" t.integer "domain_sector_id" end

    Read the article

  • Python: Calling method A from class A within class B?

    - by Tommo
    There are a number of questions that are similar to this, but none of the answers hits the spot - so please bear with me. I am trying my hardest to learn OOP using Python, but I keep running into errors (like this one) which just make me think this is all pointless and it would be easier to just use plain functions. Here is my code: class TheGUI(wx.Frame): def __init__(self, title, size): wx.Frame.__init__(self, None, 1, title, size=size) # The GUI is made ... textbox.TextCtrl(panel1, 1, pos=(67,7), size=(150, 20)) button1.Bind(wx.EVT_BUTTON, self.button1Click) self.Show(True) def button1Click(self, event): #It needs to do the LoadThread function! class WebParser: def LoadThread(self, thread_id): #It needs to get the contents of textbox! TheGUI = TheGUI("Text RPG", (500,500)) TheParser = WebParser TheApp.MainLoop() So the problem I am having is that the GUI class needs to use a function that is in the WebParser class, and the WebParser class needs to get text from a textbox that exists in the GUI class. I know I could do this by passing the objects around as parameters, but that seems utterly pointless; there must be a more logical way to do this that doesn't make using classes seem so pointless. Thanks in advance!

    Read the article

  • content_tag_for a collection in Rails?

    - by Nick
    I have a Post model. Posts have many Comments. I want to generate a <ul> element for post.comments using content_tag_for. Ideally, it'd produce <ul id="comments_post_7" class="comments"> ... </ul> where 7 is the ID of the Post. The closest I can get uses <% content_tag_for :ul, post, :comments do %> which produces <ul id="comments_post_7" class="post"> ... </ul> which is pretty close, except for the class="post". Using :class => :comments in the content_tag_for yields class="post comments", but I just want class="comments". It seems logical that I'd be able to use something like <% content_tag_for :ul, post.comments do %> but, sadly, that yields <ul id="array_2181653100" class="array"> ... </ul> I've searched far and wide. I feel like I'm missing some elegant way to do this. Am I? Because, seriously, <ul id="comments_post_<%= post.id %>" class="comments"> is painful.

    Read the article

  • Best Practices For Secure APIs?

    - by Ferrett Steinmetz
    Let's say I have a website that has a lot of information on our products. I'd like some of our customers (including us!) to be able to look up our products through various methods, including: 1) Pulling data from AJAX calls that return data in cool, JavaScripty-ways 2) Creating iPhone applications that use that data; 3) Having other web applications use that data for their own end. Normally, I'd just create an API and be done with it. However, this data is in fact mildly confidential - which is to say that we don't want our competitors to be able to look up all our products every morning and then automatically set their prices to undercut us. And we also want to be able to look at who might be abusing the system, so if someone's making ten million complex calls to our API a day and bogging down our server, we can cut them off. My next logical step would then be to create a developers' key to restrict access - which would work fine for web apps, but not so much for any AJAX calls. (As I see it, they'd need to provide the key in the JavaScript, which is in plaintext and easily seen, and hence there's actually no security at all. Particularly if we'd be using our own developers' keys on our site to make these AJAX calls.) So my question: after looking around at OAuth and OpenID for some time, I'm not sure there is a solution that would handle all three of the above. Is there some sort of canonical "best practices" for developers' keys, or can OAuth and OpenID handle AJAX calls easily in some fashion I have yet to grok, or am I missing something entirely?

    Read the article

  • Need help getting parent reference to child view controller

    - by Andy
    I've got the following code in one of my view controllers: - (void)tableView:(UITableView *)tableView didSelectRowAtIndexPath:(NSIndexPath *)indexPath { switch (indexPath.section) { case 0: // "days" section tapped { DayPicker *dayPicker = [[DayPicker alloc] initWithStyle:UITableViewStylePlain]; dayPicker.rowLabel = self.activeDaysLabel; dayPicker.selectedDays = self.newRule.activeDays; [self.navigationController pushViewController:dayPicker animated:YES]; [dayPicker release]; break; ... Then, in the DayPicker controller, I do some stuff to the dayPicker.rowLabel property. Now, when the dayPicker is dismissed, I want the value in dayPicker.rowLabel to be used as the cell.textLabel.text property in the cell that called the controller in the first place (i.e., the cell label becomes the option that was selected within the DayPicker controller). I thought that by using the assignment operator to set dayPicker.rowLabel = self.activeDaysLabel, the two pointed to the same object in memory, and that upon dismissing the DayPicker, my first view controller, which uses self.activeDaysLabel as the cell.textLabel.text property for the cell in question, would automagically pick up the new value of the object. But no such luck. Have I missed something basic here, or am I going about this the wrong way? I originally passed a reference to the calling view controller to the child view controller, but several here told me that was likely to cause problems, being a circular reference. That setup worked, though; now I'm not sure how to accomplish the same thing "the right way." As usual, thanks in advance for your help.

    Read the article

  • Which C++ Standard Library wrapper functions do you use?

    - by Neil Butterworth
    This question, asked this morning, made me wonder which features you think are missing from the C++ Standard Library, and how you have gone about filling the gaps with wrapper functions. For example, my own utility library has this function for vector append: template <class T> std::vector<T> & operator += ( std::vector<T> & v1, const std::vector <T> & v2 ) { v1.insert( v1.end(), v2.begin(), v2.end() ); return v1; } and this one for clearing (more or less) any type - particularly useful for things like std::stack: template <class C> void Clear( C & c ) { c = C(); } I have a few more, but I'm interested in which ones you use? Please limit answers to wrapper functions - i.e. no more than a couple of lines of code.
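
    As a quick usage sketch, the two helpers quoted above could be exercised like this (they are reproduced so the example compiles on its own):

        #include <iostream>
        #include <stack>
        #include <vector>

        // The two helpers quoted above, reproduced so the sketch is self-contained.
        template <class T>
        std::vector<T>& operator+=(std::vector<T>& v1, const std::vector<T>& v2) {
            v1.insert(v1.end(), v2.begin(), v2.end());
            return v1;
        }

        template <class C>
        void Clear(C& c) { c = C(); }

        int main() {
            std::vector<int> a(2, 1);       // {1, 1}
            std::vector<int> b(3, 2);       // {2, 2, 2}
            a += b;                         // append b to a
            std::cout << a.size() << '\n';  // prints 5

            std::stack<int> s;
            s.push(42);
            Clear(s);                       // s is empty again
            std::cout << s.size() << '\n';  // prints 0
            return 0;
        }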

    Read the article

  • Nested bind expressions

    - by user328543
    This is a followup question to my previous question. #include <functional> int foo(void) {return 2;} class bar { public: int operator() (void) {return 3;}; int something(int a) {return a;}; }; template <class C> auto func(C&& c) -> decltype(c()) { return c(); } template <class C> int doit(C&& c) { return c();} template <class C> void func_wrapper(C&& c) { func( std::bind(doit<C>, std::forward<C>(c)) ); } int main(int argc, char* argv[]) { // call with a function pointer func(foo); func_wrapper(foo); // error // call with a member function bar b; func(b); func_wrapper(b); // call with a bind expression func(std::bind(&bar::something, b, 42)); func_wrapper(std::bind(&bar::something, b, 42)); // error // call with a lambda expression func( [](void)->int {return 42;} ); func_wrapper( [](void)->int {return 42;} ); return 0; } I'm getting compile errors deep in the C++ headers: functional:1137: error: invalid initialization of reference of type ‘int (&)()’ from expression of type ‘int (*)()’ functional:1137: error: conversion from ‘int’ to non-scalar type ‘std::_Bind(bar, int)’ requested func_wrapper(foo) is supposed to execute func(doit(foo)). In the real code it packages the function for a thread to execute. func would be the function executed by the other thread, doit sits in between to check for unhandled exceptions and to clean up. But the additional bind in func_wrapper messes things up...

    Read the article

  • Porting Perl to C++ `print "\x{2501}" x 12;`

    - by jippie
    I am porting a program from Perl to C++ as a learning objective. I arrived at a routine that draws a table with commands like the following: Perl: print "\x{2501}" x 12; And it draws 12 '━' characters ("box drawings heavy horizontal"). Now I figured out part of the problem already: Perl: \x{}, \x00 Hexadecimal escape sequence; C++: \unnnn To print a single Unicode character: C++: printf( "\u250f\n" ); But does C++ have a smart equivalent for the 'x' operator or would it come down to a for loop? UPDATE Let me include the full source code I am trying to compile with the proposed solution. The compiler throws errors: g++ -Wall -Werror project.cpp -o project project.cpp: In function ‘int main(int, char**)’: project.cpp:38:3: error: ‘string’ is not a member of ‘std’ project.cpp:38:15: error: expected ‘;’ before ‘s’ project.cpp:39:3: error: ‘cout’ is not a member of ‘std’ project.cpp:39:16: error: ‘s’ was not declared in this scope #include <stdlib.h> #include <stdint.h> #include <stdio.h> #include <string.h> int main ( int argc, char *argv[] ) { if ( argc != 2 ) { fprintf( stderr , "usage: %s matrix\n", argv[0] ); exit( 2 ); } else { //std::string s(12, "\u250f" ); std::string s(12, "u" ); std::cout << s; } }
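
    Since U+2501 is more than one byte in UTF-8, std::string's fill constructor (which repeats a single char) cannot repeat it; a minimal sketch of a loop-based equivalent of Perl's x operator, assuming a UTF-8 execution character set and terminal:

        #include <cstddef>
        #include <iostream>
        #include <string>

        // Repeat a (possibly multi-byte UTF-8) string n times -- the closest
        // equivalent to Perl's `"\x{2501}" x 12`. Illustrative helper, not a
        // standard library facility.
        std::string repeat(const std::string &s, std::size_t n) {
            std::string out;
            out.reserve(s.size() * n);
            for (std::size_t i = 0; i < n; ++i)
                out += s;
            return out;
        }

        int main() {
            // <string> and <iostream> are the headers the snippet in the
            // question was missing.
            std::cout << repeat("\u2501", 12) << '\n';
            return 0;
        }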

    Read the article

  • Android lifecycle: Fill in data in activity in onStart() or onResume()?

    - by pjv
    Should you get data via a cursor and fill in the data on the screen, such as setting the window title, in onStart() or onResume()? onStart() would seem the logical place because after onStart() the Activity can already be displayed, albeit in the background. Notably I was having a problem with a managed dialog that made me rethink this. If the user rotates the screen while the dialog is still open, onCreateDialog() and onPrepareDialog() are called between onStart() and onResume(). If the dialog needs to be based on the data you need to have the data before onResume(). If I'm correct about onStart() then why does the Notepad example set a bad example by doing it in onResume()? See http://developer.android.com/resources/samples/NotePad/src/com/example/android/notepad/NoteEditor.html NoteEditor.java line 176 (title = mCursor.getString...). Also, what if my Activity launches another Activity/Dialog that changes the data my cursor is tracking? Even in the simplest case, does that mean that I have to manually update my previous screen (a listener for a dialog in the main activity), or alternatively that I have to register a ContentObserver, since I'm no longer updating the data in onResume() (though I could update it twice of course)? I know it's a basic question but the dialog only recently, to my surprise, made me realize this.

    Read the article

  • in haskell, why do I need to specify type constraints, why can't the compiler figure them out?

    - by Steve
    Consider the function, add a b = a + b This works: *Main> add 1 2 3 However, if I add a type signature specifying that I want to add things of the same type: add :: a -> a -> a add a b = a + b I get an error: test.hs:3:10: Could not deduce (Num a) from the context () arising from a use of `+' at test.hs:3:10-14 Possible fix: add (Num a) to the context of the type signature for `add' In the expression: a + b In the definition of `add': add a b = a + b So GHC clearly can deduce that I need the Num type constraint, since it just told me: add :: Num a => a -> a -> a add a b = a + b Works. Why does GHC require me to add the type constraint? If I'm doing generic programming, why can't it just work for anything that knows how to use the + operator? In C++ template programming, you can do this easily: #include <string> #include <cstdio> using namespace std; template<typename T> T add(T a, T b) { return a + b; } int main() { printf("%d, %f, %s\n", add(1, 2), add(1.0, 3.4), add(string("foo"), string("bar")).c_str()); return 0; } The compiler figures out the types of the arguments to add and generates a version of the function for that type. There seems to be a fundamental difference in Haskell's approach, can you describe it, and discuss the trade-offs? It seems to me like it would be resolved if GHC simply filled in the type constraint for me, since it obviously decided it was needed. Still, why the type constraint at all? Why not just compile successfully as long as the function is only used in a valid context where the arguments are in Num? Thank you.

    Read the article

  • C++ CRTP question

    - by aaa
    The following piece of code does not compile; the problem is that T::rank is not accessible (I think), or not yet known, in the parent template. Can you tell me exactly what the problem is? Is passing rank explicitly the only way, or is there a way to query the tensor class directly? Thank you. #include <boost/utility/enable_if.hpp> template<class T, // size_t N, class enable = void> struct tensor_operator; // template<class T, size_t N> template<class T> struct tensor_operator<T, typename boost::enable_if_c< T::rank == 4>::type > { tensor_operator(T &tensor) : tensor_(tensor) {} T& operator()(int i,int j,int k,int l) { return tensor_.layout.element_at(i, j, k, l); } T &tensor_; }; template<size_t N, typename T = double> // struct tensor : tensor_operator<tensor<N,T>, N> { struct tensor : tensor_operator<tensor<N,T> > { static const size_t rank = N; }; I know the workaround; however, I am interested in the mechanics of template instantiation for self-education.
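
    For illustration, a sketch of the workaround alluded to above: pass the rank as an explicit non-type template parameter, so the base template never has to read T::rank from the still-incomplete derived class, and the enable_if is no longer needed. The names and the simplified body are hypothetical, not the poster's real tensor code:

        #include <cstddef>

        template<class T, std::size_t N>
        struct tensor_operator;                 // primary template, intentionally undefined

        template<class T>
        struct tensor_operator<T, 4> {          // selected when the rank is 4
            explicit tensor_operator(T &t) : tensor_(t) {}
            // element access would forward to tensor_.layout.element_at(i, j, k, l)
            T &tensor_;
        };

        template<std::size_t N, typename T = double>
        struct tensor : tensor_operator<tensor<N, T>, N> {   // rank passed explicitly
            static const std::size_t rank = N;
            tensor() : tensor_operator<tensor<N, T>, N>(*this) {}
        };

        int main() {
            tensor<4> t;                        // instantiates the rank-4 operator base
            (void)t;
            return 0;
        }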

    Read the article

  • LINQ to SQL Queries odd Materialization

    - by ptoinson
    I ran across an interesting LINQ to SQL, uh, feature, the other day. Perhaps someone can give me a logical explanation for the reasoning behind the results. Take the code below as my example which utilizes the AdventureWorks database setup in a LINQ to SQL DataContext. This is a clip from my unit test. The resulting customer returned from a call to both CustomerQuery_Test_01() and CustomerQuery_Test_02() is the same. However, the queries executed on the SQL Server are different in a major way. The method CustomerQuery_Test_01 is causing the entire Customer table to be materialized, while the call to CustomerQuery_Test_02 is only causing the single customer to be materialized. The resulting SQL queries are at the bottom of this post. Anyone have a good reason for this? To me, it was highly non-intuitive. protected virtual Customer GetByPrimaryKey(Func<Customer, bool> keySelection) { AdventureWorksDataContext context = new AdventureWorksDataContext(); return (from r in context.Customers select r).SingleOrDefault(keySelection); } [TestMethod] public void CustomerQuery_Test_01() { Customer customer = GetByPrimaryKey(c => c.CustomerID == 2); } [TestMethod] public void CustomerQuery_Test_02() { AdventureWorksDataContext context = new AdventureWorksDataContext(); Customer customer = (from r in context.Customers select r).SingleOrDefault(c => c.CustomerID == 2); } Query for CustomerQuery_Test_01 (notice the lack of a where clause) SELECT [t0].[CustomerID], [t0].[NameStyle], [t0].[Title], [t0].[FirstName], [t0].[MiddleName], [t0].[LastName], [t0].[Suffix], [t0].[CompanyName], [t0].[SalesPerson], [t0].[EmailAddress], [t0].[Phone], [t0].[PasswordHash], [t0].[PasswordSalt], [t0].[rowguid], [t0].[ModifiedDate] FROM [SalesLT].[Customer] AS [t0] Query for CustomerQuery_Test_02 (notice the where clause) SELECT [t0].[CustomerID], [t0].[NameStyle], [t0].[Title], [t0].[FirstName], [t0].[MiddleName], [t0].[LastName], [t0].[Suffix], [t0].[CompanyName], [t0].[SalesPerson], [t0].[EmailAddress], [t0].[Phone], [t0].[PasswordHash], [t0].[PasswordSalt], [t0].[rowguid], [t0].[ModifiedDate] FROM [SalesLT].[Customer] AS [t0] WHERE [t0].[CustomerID] = @p0

    Read the article

  • Optimizing C++ Tree Generation

    - by cam
    Hi, I'm generating a Tic-Tac-Toe game tree (9 seconds after the first move), and I'm told it should take only a few milliseconds. So I'm trying to optimize it; I ran it through CodeAnalyst and these are the top 5 calls being made (I used bitsets to represent the Tic-Tac-Toe board): std::_Iterator_base::_Orphan_me std::bitset<9>::test std::_Iterator_base::_Adopt std::bitset<9>::reference::operator bool std::_Iterator_base::~_Iterator_base void BuildTreeToDepth(Node &nNode, const int& nextPlayer, int depth) { if (depth > 0) { //Calculate gameboard states int evalBoard = nNode.m_board.CalculateBoardState(); bool isFinished = nNode.m_board.isFinished(); if (isFinished || (nNode.m_board.isWinner() > 0)) { nNode.m_winCount = evalBoard; } else { Ticboard tBoard = nNode.m_board; do { int validMove = tBoard.FirstValidMove(); if (validMove != -1) { Node f; Ticboard tempBoard = nNode.m_board; tempBoard.Move(validMove, nextPlayer); tBoard.Move(validMove, nextPlayer); f.m_board = tempBoard; f.m_winCount = 0; f.m_Move = validMove; int currPlay = (nextPlayer == 1 ? 2 : 1); BuildTreeToDepth(f,currPlay, depth - 1); nNode.m_winCount += f.m_board.CalculateBoardState(); nNode.m_branches.push_back(f); } else { break; } }while(true); } } } Where should I be looking to optimize it? How should I optimize these 5 calls (I don't recognize them)?

    Read the article

  • When to use pointer to a class and when to just instantiate it as a variable

    - by Enders
    I'm sort of confused by it. The best I could find was reading through the cplusplus.com tutorial, and all they have to say about pointers to classes is: "It is perfectly valid to create pointers that point to classes. We simply have to consider that once declared, a class becomes a valid type, so we can use the class name as the type for the pointer" Which tells me nothing about when to use them over the normal instantiation. I've seen the -> operator many times, and looked at some code but can't really decipher why they did it. Generic examples will be appreciated, but more specifically related to GUI programming; it's where I encountered it first. QGridLayout *mainLayout = new QGridLayout; mainLayout->addWidget(nameLabel, 0, 0); mainLayout->addWidget(nameLine, 0, 1); mainLayout->addWidget(addressLabel, 1, 0, Qt::AlignTop); mainLayout->addWidget(addressText, 1, 1); Why not QGridLayout mainLayout mainLayout.addWidget ... (It doesn't compile if I change the sample code to that and try it but you get the point) Thanks in advance
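
    A minimal, Qt-free sketch of the two forms being contrasted, with an illustrative Widget stand-in rather than a real Qt class: an automatic (stack) object is owned by its scope and destroyed automatically, while a pointer to a new-ed object lives until delete is called (in Qt, parent widgets usually take over that delete):

        #include <iostream>
        #include <string>

        struct Widget {                       // stand-in for a class like QGridLayout
            void addItem(const std::string &s) { std::cout << "added " << s << '\n'; }
        };

        int main() {
            // Automatic (stack) object: destroyed when it goes out of scope.
            Widget layout;
            layout.addItem("nameLabel");

            // Heap object reached through a pointer: survives until delete.
            Widget *heapLayout = new Widget;
            heapLayout->addItem("nameLine");
            delete heapLayout;                // manual cleanup
            return 0;
        }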

    Read the article

  • Interpretation of int (*a)[3]

    - by kapuzineralex
    When working with arrays and pointers in C, one quickly discovers that they are by no means equivalent, although it might seem so at first glance. I know about the differences in L-values and R-values. Still, recently I tried to find out the type of a pointer that I could use in conjunction with a two-dimensional array, i.e. int foo[2][3]; int (*a)[3] = foo; However, I just can't find out how the compiler "understands" the type definition of a in spite of the regular operator precedence rules for * and []. If instead I were to use a typedef, the problem becomes significantly simpler: int foo[2][3]; typedef int my_t[3]; my_t *a = foo; Bottom line: can someone explain how the declaration int (*a)[3] is read by the compiler? int a[3] is simple, int *a[3] is simple as well. But then, why is it not int *(a[3])? EDIT: Of course, instead of "typecast" I meant "typedef" (it was just a typo).
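
    A small worked example of how the two declarations read under the usual precedence rules; the array contents are just for illustration:

        #include <cstdio>

        int main() {
            int foo[2][3] = { {1, 2, 3}, {4, 5, 6} };

            int (*a)[3] = foo;   // "a is a pointer to an array of 3 int":
                                 // the parentheses bind * to a before [] is applied
            int *b[3];           // "b is an array of 3 pointers to int":
                                 // [] binds tighter than *, so b is an array first,
                                 // which is also what int *(a[3]) would mean

            std::printf("%d %d\n", a[1][2], (*(a + 1))[2]);   // both print 6
            (void)b;
            return 0;
        }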

    Read the article

  • Applying powershell outside IT Management.

    - by Tormod
    Hi. We have a flexible process control system by which automation engineers configure large applications comprising thousands of small logical units that are parameterized and integrated into the control flow. There are many tasks that are repetitive at the granular level, and a multitude of proprietary productivity tools have been made to meet this demand. We have different business segments, and the automation engineers vary across the board in skill sets and interests. Fancy GUI and usability versus flexibility is a common discussion. At first glance, PowerShell seems to be a sensible platform to implement such tooling, and it would also be an advantageous cross-over skill for managing the IT aspects of the system setup and deployment as a whole. This should allow the script-savvy their desired flexibility (they are already a scripting crowd), and the GUI-dependent could still get their desired GUI underpinned by PowerShell. But I can't seem to find many people/groups who have tried to use the scriptability and object passing of PowerShell extensively to accommodate a heterogeneous user community outside the realm of IT management. Does anybody have any tips or words of caution? Am I missing something obvious as to why this shouldn't be done? Shouldn't PowerShell be taking over the world? ;-)

    Read the article

  • Allowing Xform controls for optional XML elements

    - by Cam
    Hi, In designing an XForms interface to an XML database (using eXist and XSLTForms), I'd like to include an input control for an optional element. The XML data records already exist, and while some contain the optional element, others don't. To update a record, I'm using the existing XML record as the model instance. The problem is that the form control is not displayed when the optional element is not present, which is logical, but presents a problem when a user wants to add data to the optional element. To be more explicit, here's an example data record, data.xml: <a> <b>content</b> </a> with RNC schema: start = element a { element b { text }, element notes { text }? } XForms model: <xf:model> <xf:instance xmlns="" src="data.xml"/> <xf:submission id="save" method="post" action="update.xq" /> </xf:model> And control: <xf:input ref="/a/notes"> <xf:label>Notes (optional): </xf:label> </xf:input> The problem is that the 'Notes' input control is simply not displayed. An obvious solution is to add a trigger button to allow the user to insert the element if needed, but it is preferable to just have the input control appear, and be empty. My question is: is there some subtle combination of lesser-known attributes/binds/multiple instances/XPath expressions that will cause the control to always be displayed? Thanks

    Read the article
