Search Results

Search found 14799 results on 592 pages for 'instance eval'.

Page 509/592

  • C++ Operator Ambiguity

    - by Scott
    Forgive me, for I am fairly new to C++, but I am having some trouble regarding operator ambiguity. I think it is compiler-specific, for the code compiled on my desktop. However, it fails to compile on my laptop. I think I know what's going wrong, but I don't see an elegant way around it. Please let me know if I am making an obvious mistake. Anyhow, here's what I'm trying to do: I have made my own vector class called Vector4 which looks something like this: class Vector4 { private: GLfloat vector[4]; ... } Then I have these operators, which are causing the problem: operator GLfloat* () { return vector; } operator const GLfloat* () const { return vector; } GLfloat& operator [] (const size_t i) { return vector[i]; } const GLfloat& operator [] (const size_t i) const { return vector[i]; } I have the conversion operator so that I can pass an instance of my Vector4 class to glVertex3fv, and I have subscripting for obvious reasons. However, calls that involve subscripting the Vector4 become ambiguous to the compiler: enum {x, y, z, w} Vector4 v(1.0, 2.0, 3.0, 4.0); glTranslatef(v[x], v[y], v[z]); Here are the candidates: candidate 1: const GLfloat& Vector4:: operator[](size_t) const candidate 2: operator[](const GLfloat*, int) <built-in> Why would it try to convert my Vector4 to a GLfloat* first when the subscript operator is already defined on Vector4? Is there a simple way around this that doesn't involve typecasting? Am I just making a silly mistake? Thanks for any help in advance.
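
    One workaround worth sketching here (an illustrative rewrite, not the poster's class; the local GLfloat typedef merely stands in for the OpenGL header): drop the implicit conversion operators and expose the raw pointer through a named accessor, so the built-in operator[](const GLfloat*, int) never becomes a candidate and only the member operator[] is viable. The cost is writing glVertex3fv(vec.data()) explicitly at the call sites.

```cpp
// A minimal sketch, not the poster's actual code: no implicit conversion to
// GLfloat*, so subscripting can only mean the member operator[].
#include <cstddef>

typedef float GLfloat; // assumption: GLfloat is a plain float typedef

class Vector4 {
public:
    Vector4(GLfloat a, GLfloat b, GLfloat c, GLfloat d) {
        v[0] = a; v[1] = b; v[2] = c; v[3] = d;
    }
    GLfloat&       operator[](std::size_t i)       { return v[i]; }
    const GLfloat& operator[](std::size_t i) const { return v[i]; }

    // Named accessor instead of "operator const GLfloat*":
    // call glVertex3fv(vec.data()) explicitly at the call site.
    const GLfloat* data() const { return v; }

private:
    GLfloat v[4];
};

int main() {
    enum { x, y, z, w };
    Vector4 vec(1.0f, 2.0f, 3.0f, 4.0f);
    GLfloat vx = vec[x]; // unambiguous: only the member operator[] is viable
    return vx == 1.0f ? 0 : 1;
}
```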

  • WP7 - Cancelling ContextMenu click event propagation

    - by Praetorian
    I'm having a problem when the Silverlight toolkit's ContextMenu is clicked while it is over a UIElement that has registered a Tap event GestureListener. The context menu click propagates to the underlying element and fires its tap event. For instance, say I have a ListBox and each ListBoxItem within it has registered both a ContextMenu and a Tap GestureListener. Assume that clicking context menu item2 is supposed to take you to Page1.xaml, while tapping on any of ListBox items themselves is supposed to take you to Page2.xaml. If I open the context menu on item1 in the ListBox, then context menu item2 is on top of ListBox item2. When I click on context menu item2 I get weird behavior where the app navigates to Page1.xaml and then immediately to Page2.xaml because the click event also triggered the Tap gesture for ListBox item2. I've verified in the debugger that it is always the context menu that receives the click event first. How do I cancel the context menu item click's routed event propagation so it doesn't reach ListBox item2? Thanks for your help!

  • My app crashes with MFMailComposeViewController and MFMessageComposeViewController when I re-launch it.

    - by Dolwen
    Hello all, I'm encountering a crash with MFMailComposeViewController, MFMessageComposeViewController and multitasking on iOS 4.2 (both the simulator and an iPhone 4). Code I use: Class emailClass = (NSClassFromString(@"MFMailComposeViewController")); if( emailClass != nil ) { MFMailComposeViewController * controller = [[emailClass alloc] init]; if([emailClass canSendMail]) { // delegate controller.mailComposeDelegate = self; // subject [controller setSubject:@"Hello All."]; // main message [controller setMessageBody:@"I love Stackoverflow.com !" isHTML:NO]; // adding image attachment // getting path for the image we have in the tutorial project NSString *path = [[NSBundle mainBundle] pathForResource:@"image" ofType:@"png"]; // loading content of the image into NSData NSData *imageData = [NSData dataWithContentsOfFile:path]; // adding the attachment to the message [controller addAttachmentData:imageData mimeType:@"image/png" fileName:@"My Byook image"]; // setting different than the default transition for the modal view controller [controller setModalTransitionStyle:UIModalTransitionStyleCoverVertical]; // show [[CGameStateManager getCurrentGameState] presentModalViewController:controller animated:YES]; } [controller release]; } To close the MFMailComposeViewController I use: [[CGameStateManager getCurrentGameState] dismissModalViewControllerAnimated:NO]; The app then crashes on the "dismissModalViewControllerAnimated:" call, and with NSZombieEnabled we can read in the debugger: * -[UIImage isKindOfClass:]: message sent to deallocated instance 0xb0b9f80 Does anyone have an answer to solve my problem? :) Thanks
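
    A minimal sketch of the usual present/dismiss pattern for comparison (pre-ARC, iOS 4.x era; MyMailViewController and the strings are hypothetical, and it deliberately leaves out the poster's CGameStateManager): check canSendMail before allocating, and dismiss the composer from the MFMailComposeViewControllerDelegate callback, so the controller that presented it is the one dismissing it.

```objc
// A sketch, not the poster's code: present from a UIViewController and
// dismiss in the delegate callback (pre-ARC memory management).
#import <UIKit/UIKit.h>
#import <MessageUI/MessageUI.h>

@interface MyMailViewController : UIViewController <MFMailComposeViewControllerDelegate>
- (void)showMailComposer;
@end

@implementation MyMailViewController

- (void)showMailComposer
{
    if (![MFMailComposeViewController canSendMail]) {
        return; // no mail account configured on this device
    }
    MFMailComposeViewController *controller = [[MFMailComposeViewController alloc] init];
    controller.mailComposeDelegate = self;
    [controller setSubject:@"Hello All."];
    [controller setMessageBody:@"I love Stackoverflow.com!" isHTML:NO];
    [self presentModalViewController:controller animated:YES];
    [controller release]; // the presenting controller retains it while shown
}

// The composer should be dismissed here, by the controller that presented it.
- (void)mailComposeController:(MFMailComposeViewController *)controller
          didFinishWithResult:(MFMailComposeResult)result
                        error:(NSError *)error
{
    [self dismissModalViewControllerAnimated:YES];
}

@end
```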

  • C# WPF command pattern

    - by evan
    I have a WPF GUI which displays a list of information in a separate window and in a separate thread from the main application. As the user performs actions in the main window the side window is updated. (For example, if you clicked page down in the main window, a listbox in the side window would page down). Right now the architecture for this application feels very messy and I'm sure there is a cleaner way to do it. It looks like this: Main Window contains a singleton SideWindowControl which communicates with an instance of the SideWindowDisplay using events - so, for example, the page-down button would work like: 1) the event handler of the button on the main window calls SideWindowControl.PageDown() 2) in the PageDown() function an event is created and raised. 3) finally the GUI, ShowSideWindowDisplay, which subscribes to the SideWindowControl.Actions event, handles the event and actually scrolls the listbox down - note that because it is in a different thread it has to do that by running the command via Dispatcher.Invoke() This just seems like a very messy way to do this and there must be a clearer way (the only part that can't change is that the main window and the side window must be on different threads). Perhaps using WPF commands? I'd really appreciate any suggestions! Thanks

  • Many users, many cpus, no delays. Good for cloud?

    - by Eric
    I wish to set up a CPU-intensive, time-sensitive query service for users on the internet. A usage scenario is described below. Is cloud computing the right way to go for such an implementation? If so, what cloud vendor(s) cater to this type of application? I ask specifically in terms of: 1) pricing 2) latency resulting from: - slow CPUs, instance creations, JIT compiles, etc. - internal management and communication of processes inside the cloud (e.g. a queuing process and a calculation process) - communication between cloud and end user 3) ease of deployment The usage scenario I am expecting is: - A typical user sends a query (XML of size around 1K) once every 30 seconds on average. - Each query requires a numerical computation of average time 0.2 sec and max time 1 sec on a 1 GHz Pentium. The computation requires no data other than the query itself and is performed by the same piece of code each time. - The delay a user experiences between sending a query and receiving a response should be on average no more than 2 seconds and in general no more than 5 seconds. - A background save to a DB of the response should occur (not time critical). - There can be up to 30000 simultaneous users - i.e., on average 1000 queries a second, each requiring an average 0.2 sec calculation, so that would necessitate around 200 CPUs. Currently I'm looking at GAE Java (for quicker deployment and less IT hassle) and EC2 (speed and price optimization) as options. Where can I learn more about the right way to set up such a system? Past threads, different blogs, books, etc.? BTW, if my terminology is wrong or confusing, please let me know. I'd greatly appreciate any help.

  • "java.lang.OutOfMemoryError: Java heap space" in image and array storage

    - by totalconscience
    I am currently working on an image processing demonstration in Java (an applet). I am running into the problem where my arrays are too large and I am getting the "java.lang.OutOfMemoryError: Java heap space" error. The algorithm I run creates an NxD float array where: N is the number of pixels in the image and D is the coordinates of each pixel plus the colorspace components of each pixel (usually 1 for grayscale or 3 for RGB). For each iteration of the algorithm it creates one of these NxD float arrays and stores it for later use in a vector, so that the user of the applet may look at the individual steps. My client wants the program to be able to load a 500x500 RGB image and run as the upper bound. There are about 12 to 20 iterations per run, so that means I need to be able to store a 12x500x500x5 float array in some fashion. Is there a way to process all of this data and, if possible, how? Example of the issue: I am loading a 512 by 512 grayscale image and even before the first iteration completes I run out of heap space. The line it points me to is: Y.add(new float[N][D]) where Y is a Vector and N and D are described as above. This is the second instance of the code using that line.
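
    A back-of-the-envelope check using only the numbers quoted above (a standalone sketch, not the applet code) shows why the default applet heap is exceeded and roughly how much memory the worst case actually needs:

```java
// Rough heap estimate for storing every iteration of the 500x500 RGB case.
public class HeapEstimate {
    public static void main(String[] args) {
        int n = 500 * 500;            // pixels in the client's upper-bound image
        int d = 2 + 3;                // x, y coordinates + R, G, B components
        int iterations = 20;          // upper end of the 12-20 iterations per run
        long bytesPerIteration = (long) n * d * 4L;   // 4 bytes per float
        long totalBytes = bytesPerIteration * iterations;
        System.out.printf("per iteration: %.1f MB, total: %.1f MB%n",
                bytesPerIteration / (1024.0 * 1024.0),
                totalBytes / (1024.0 * 1024.0));
        // Roughly 4.8 MB per iteration and 95 MB overall, before counting the
        // per-row object overhead of float[N][D] -- above the typical default
        // applet heap.  Either raise the heap (the java_arguments applet
        // parameter, e.g. -Xmx256m) or store each step in a more compact form.
    }
}
```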

  • C++ Virtual Methods for Class-Specific Attributes or External Structure

    - by acanaday
    I have a set of classes which are all derived from a common base class. I want to use these classes polymorphically. The interface defines a set of getter methods whose return values are constant across a given derived class, but vary from one derived class to another. e.g.: enum AVal { A_VAL_ONE, A_VAL_TWO, A_VAL_THREE }; enum BVal { B_VAL_ONE, B_VAL_TWO, B_VAL_THREE }; class Base { //... virtual AVal getAVal() const = 0; virtual BVal getBVal() const = 0; //... }; class One : public Base { //... AVal getAVal() const { return A_VAL_ONE }; BVal getBVal() const { return B_VAL_ONE }; //... }; class Two : public Base { //... AVal getAVal() const { return A_VAL_TWO }; BVal getBVal() const { return B_VAL_TWO }; //... }; etc. Is this a common way of doing things? If performance is an important consideration, would I be better off pulling the attributes out into an external structure, e.g.: struct Vals { AVal a_val; VBal b_val; }; storing a Vals* in each instance, and rewriting Base as follows? class Base { //... public: AVal getAVal() const { return _vals->a_val; }; BVal getBVal() const { return _vals->b_val; }; //... private: Vals* _vals; }; Is the extra dereference essentially the same as the vtable lookup? What is the established idiom for this type of situation? Are both of these solutions dumb? Any insights are greatly appreciated
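
    If the table route is taken, one conventional shape for it (a sketch following the second snippet above, with made-up constant names, and with no claim that it actually beats the vtable without measuring) is to point each instance at a per-class static constant, so the getters stay non-virtual and cost a single pointer indirection:

```cpp
// A sketch of the pointer-to-static-constants variant: the per-class values
// live in one const table per derived class, each instance stores only a
// pointer, and the getters are plain non-virtual member functions.
enum AVal { A_VAL_ONE, A_VAL_TWO, A_VAL_THREE };
enum BVal { B_VAL_ONE, B_VAL_TWO, B_VAL_THREE };

struct Vals {
    AVal a_val;
    BVal b_val;
};

class Base {
public:
    AVal getAVal() const { return _vals->a_val; }
    BVal getBVal() const { return _vals->b_val; }
protected:
    explicit Base(const Vals* vals) : _vals(vals) {}
private:
    const Vals* _vals;
};

namespace {
    const Vals ONE_VALS = { A_VAL_ONE, B_VAL_ONE };
    const Vals TWO_VALS = { A_VAL_TWO, B_VAL_TWO };
}

class One : public Base {
public:
    One() : Base(&ONE_VALS) {}
};

class Two : public Base {
public:
    Two() : Base(&TWO_VALS) {}
};

int main() {
    One one;
    Two two;
    return (one.getAVal() == A_VAL_ONE && two.getBVal() == B_VAL_TWO) ? 0 : 1;
}
```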

  • Will TFS 2010 support non-contiguous merging?

    - by steve_d
    I know that merging non-contiguous changesets at once may not be a good idea. However there is at least one situation in which merging non-contiguous changesets is (probably) not going to break anything: when there are no intervening changes on the affected individual files. (At least, it wouldn't break any worse than would a series of cherry-picked merges, checked in each time; and at least this way you would discover breakage before checking in). For instance, let's say you have a Main and a Development branch. They start out identical (e.g. after a release). They have two files, foo.cs and bar.cs. Alice makes a change in Development\foo.cs and checks it in as changeset #1001. Bob makes a change in Development\bar.cs and checks it in as #1002. Alice makes another change to Development\foo.cs and checks it in as #1003. Now we could in theory merge both changes #1001 and #1003 from dev-to main in a single operation. If we try to merge at the branch level, dev-to-main, we will have to do it as two operations. In this simple, contrived example it's simple enough to merge the one file - but in the real world where there would be many files involved, it's not so simple. Non-contiguous merging is one of the reasons given for why "merge by workitem" is not implemented in TFS.

  • Why initialize an object to empty

    - by ProgEnthu
    I am learning windows programming with the help of MSDN.Why would somebody initialize an object like the following? WNDCLASS wc = { }; Will this zero all the memory of the object? Whole source code is following: #ifndef UNICODE #define UNICODE #endif #include <windows.h> LRESULT CALLBACK WindowProc(HWND hwnd, UINT uMsg, WPARAM wParam, LPARAM lParam); int WINAPI wWinMain(HINSTANCE hInstance, HINSTANCE, PWSTR pCmdLine, int nCmdShow) { // Register the window class. const wchar_t CLASS_NAME[] = L"Sample Window Class"; WNDCLASS wc = { }; wc.lpfnWndProc = WindowProc; wc.hInstance = hInstance; wc.lpszClassName = CLASS_NAME; RegisterClass(&wc); // Create the window. HWND hwnd = CreateWindowEx( 0, // Optional window styles. CLASS_NAME, // Window class L"Learn to Program Windows", // Window text WS_OVERLAPPEDWINDOW, // Window style // Size and position CW_USEDEFAULT, CW_USEDEFAULT, CW_USEDEFAULT, CW_USEDEFAULT, NULL, // Parent window NULL, // Menu hInstance, // Instance handle NULL // Additional application data ); if (hwnd == NULL) { return 0; } ShowWindow(hwnd, nCmdShow); // Run the message loop. MSG msg = { }; while (GetMessage(&msg, NULL, 0, 0)) { TranslateMessage(&msg); DispatchMessage(&msg); } return 0; } LRESULT CALLBACK WindowProc(HWND hwnd, UINT uMsg, WPARAM wParam, LPARAM lParam) { switch (uMsg) { case WM_DESTROY: PostQuitMessage(0); return 0; case WM_PAINT: { PAINTSTRUCT ps; HDC hdc = BeginPaint(hwnd, &ps); FillRect(hdc, &ps.rcPaint, (HBRUSH) (COLOR_WINDOW+1)); EndPaint(hwnd, &ps); } return 0; } return DefWindowProc(hwnd, uMsg, wParam, lParam); }
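
    Yes: with = { } every member that the initializer list does not mention is zero-initialized (aggregate initialization), which is why the MSDN sample only needs to set three WNDCLASS members afterwards. A tiny standalone illustration (Demo is a made-up struct, not a Windows type):

```cpp
// A small illustration, not from the MSDN sample: "= { }" zero-initializes
// every member the initializer list does not mention.
#include <cstdio>

struct Demo {          // made-up aggregate, standing in for WNDCLASS
    int         count;
    double      scale;
    const char *name;
};

int main() {
    Demo d = { };      // same form as "WNDCLASS wc = { };"
    std::printf("%d %f %p\n", d.count, d.scale, (void *)d.name);
    // prints 0, 0.000000 and a null pointer -- nothing is left indeterminate
    return 0;
}
```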

  • Injecting the application TransactionManager into a JPA EntityListener

    - by nodje
    I want to use the JPA EntityListener to support Spring Security ACLs. On @PostPersist events, I create a permission corresponding to the persisted entity. I need this operation to participate in the current transaction. For this to happen I need to have a reference to the application TransactionManager in the EntityListener. The problem is, Spring can't manage the EntityListener as it is created automatically when the EntityManagerFactory is instantiated. And in a classic Spring app, the EntityManagerFactory is itself created during the TransactionManager instantiation. <bean id="transactionManager" class="org.springframework.orm.jpa.JpaTransactionManager"> <property name="entityManagerFactory" ref="entityManagerFactory" /> </bean> So I have no way to inject the TransactionManager with the constructor, as it is not yet instantiated. Making the EntityListener a @Component creates another instance of the EntityListener. Implementing InitializingBean and using afterPropertiesSet() doesn't work as it's not a Spring managed bean. Any ideas would be helpful as I'm stuck and out of ideas.
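
    One workaround often suggested for exactly this situation (a sketch with hypothetical class names, not the only option): since JPA, not Spring, instantiates the listener, let the listener pull its collaborators out of a statically held ApplicationContext at runtime instead of having them injected.

```java
// A sketch: a Spring-managed holder exposes the context statically, and the
// JPA-managed listener looks its collaborators up lazily through it.
import org.springframework.beans.BeansException;
import org.springframework.context.ApplicationContext;
import org.springframework.context.ApplicationContextAware;
import org.springframework.stereotype.Component;
import org.springframework.transaction.PlatformTransactionManager;

import javax.persistence.PostPersist;

@Component
public class SpringContextHolder implements ApplicationContextAware {

    private static ApplicationContext context;

    @Override
    public void setApplicationContext(ApplicationContext applicationContext)
            throws BeansException {
        context = applicationContext;
    }

    public static <T> T getBean(Class<T> type) {
        return context.getBean(type);
    }
}

// The listener stays a plain class (JPA creates it), but it can still reach
// Spring beans -- the transaction manager, an ACL service -- when the event fires.
class AclCreatingListener {

    @PostPersist
    public void createAcl(Object entity) {
        PlatformTransactionManager txManager =
                SpringContextHolder.getBean(PlatformTransactionManager.class);
        // ... create the permission here so it joins the current transaction
    }
}
```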

  • Google App Engine atomic section?

    - by bokertov
    Hi. Say you retrieve a set of records from the datastore (something like: select * from MyClass where reserved='false'). How do I ensure that another user doesn't reserve them in the meantime - i.e. that reserved is still false when I update it? I've looked at the Transaction documentation and was shocked by Google's solution, which is to catch the exception and retry in a loop. Is there a solution I'm missing? It's hard to believe that there's no way to have an atomic operation in this environment. (BTW - I could use 'synchronized' inside the servlet, but I think that's not valid as there's no way to ensure that there's only one instance of the servlet object, is there? The same applies to a static variable solution.) Any ideas on how to solve this? (Here's the Google solution: http://code.google.com/appengine/docs/java/datastore/transactions.html#Entity_Groups Look at: Key k = KeyFactory.createKey("Employee", "k12345"); Employee e = pm.getObjectById(Employee.class, k); e.counter += 1; pm.makePersistent(e); This requires a transaction because the value may be updated by another user after this code fetches the object, but before it saves the modified object. Without a transaction, the user's request will use the value of counter prior to the other user's update, and the save will overwrite the new value. With a transaction, the application is told about the other user's update. If the entity is updated during the transaction, then the transaction fails with an exception. The application can repeat the transaction to use the new data.) Thanks!
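
    For what it's worth, the documented retry loop only has to be written once. A sketch of that pattern using the low-level datastore API (the entity kind, property name and boolean property type are assumptions; the question's own example uses JDO instead):

```java
// A sketch of the optimistic "retry the transaction" pattern from the App
// Engine docs, wrapped in a reusable helper.
import com.google.appengine.api.datastore.DatastoreService;
import com.google.appengine.api.datastore.DatastoreServiceFactory;
import com.google.appengine.api.datastore.Entity;
import com.google.appengine.api.datastore.EntityNotFoundException;
import com.google.appengine.api.datastore.Key;
import com.google.appengine.api.datastore.KeyFactory;
import com.google.appengine.api.datastore.Transaction;

import java.util.ConcurrentModificationException;

public class ReserveRecord {

    /** Tries to flip "reserved" from false to true; returns false if someone else won. */
    public static boolean tryReserve(String keyName, int maxRetries)
            throws EntityNotFoundException {
        DatastoreService ds = DatastoreServiceFactory.getDatastoreService();
        Key key = KeyFactory.createKey("MyClass", keyName);
        for (int attempt = 0; attempt < maxRetries; attempt++) {
            Transaction txn = ds.beginTransaction();
            try {
                Entity e = ds.get(txn, key);
                if (Boolean.TRUE.equals(e.getProperty("reserved"))) {
                    txn.rollback();
                    return false;               // another request got there first
                }
                e.setProperty("reserved", true);
                ds.put(txn, e);
                txn.commit();                   // fails if the entity changed meanwhile
                return true;
            } catch (ConcurrentModificationException contention) {
                // another request touched the same entity group: loop and retry
            } finally {
                if (txn.isActive()) {
                    txn.rollback();
                }
            }
        }
        return false;
    }
}
```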

  • Is NFS capable of preserving order of operations?

    - by JustJeff
    I have a diskless host 'A', that has a directory NFS mounted on server 'B'. A process on A writes to two files F1 and F2 in that directory, and a process on B monitors these files for changes. Assume that B polls for changes faster than A is expected to make them. Process A seeks the head of the files, writes data, and flushes. Process B seeks the head of the files and does reads. Are there any guarantees about how the order of the changes performed by A will be detected at B? Specifically, if A alternately writes to one file, and then the other, is it reasonable to expect that B will notice alternating changes to F1 and F2? Or could B conceivably detect a series of changes on F1 and then a series on F2? I know there are a lot of assumptions embedded in the question. For instance, I am virtually certain that, even operating on just one file, if A performs 100 operations on the file, B may see a smaller number of changes that give the same result, due to NFS caching some of the actions on A before they are communicated to B. And of course there would be issues with concurrent file access even if NFS weren't involved and both the reading and the writing process were running on the same real file system. The reason I'm even putting the question up here is that it seems like most of the time, the setup described above does detect the changes at B in the same order they are made at A, but that occasionally some events come through in transposed order. So, is it worth trying to make this work? Is there some way to tune NFS to make it work, perhaps cache settings or something? Or is fine-grained behavior like this just too much expect from NFS?

  • Forms authentication: how do you store username password in web.config?

    - by Nick G
    I'm used to using Forms Authentication with a database, but I'm writing a little internal utility and the app doesn't have a database so I want to store the username and password in web.config. However for some reason, forms authentication is still trying to access SQL Server and I can't see how to stop it doing this and pick up the credentials from web.config. What am I doing wrong? I just get the error "Failed to generate a user instance of SQL Server due to a failure in impersonating the client. The connection will be closed." Here are the relevant sections of my web.config: <configuration> <system.web> <authentication mode="Forms"> <forms loginUrl="~/Login.aspx" timeout="60" name=".LoginCookie" path="/" > <credentials passwordFormat="Clear"> <user name="user1" password="[pass]" /> <user name="user2" password="[pass]" /> </credentials> </forms> </authentication> <authorization> <deny users="?" /> </authorization> </system.web> </configuration>
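
    The SQL Server lookup usually comes from the login page itself (an asp:Login control or a Membership call, which default to the SQL providers) rather than from the <forms> section. A sketch of a Login.aspx code-behind that validates against the <credentials> element directly - the control IDs UserName, Password and FailureText are hypothetical:

```csharp
// Login.aspx.cs sketch: FormsAuthentication.Authenticate reads the
// <credentials> section of web.config and never opens a database connection.
using System;
using System.Web.Security;
using System.Web.UI.WebControls;

public partial class Login : System.Web.UI.Page
{
    // In a real project these come from the .aspx markup; declared here only
    // so the sketch is self-contained.
    protected TextBox UserName;
    protected TextBox Password;
    protected Label FailureText;

    protected void LoginButton_Click(object sender, EventArgs e)
    {
        if (FormsAuthentication.Authenticate(UserName.Text, Password.Text))
        {
            // Issues the forms ticket (".LoginCookie" above) and sends the
            // user back to the page they originally requested.
            FormsAuthentication.RedirectFromLoginPage(UserName.Text, false);
        }
        else
        {
            FailureText.Text = "Invalid user name or password.";
        }
    }
}
```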

  • Reordering fields in Django model

    - by Alex Lebedev
    I want to add a few fields to every model in my Django application. This time it's created_at, updated_at and notes. Duplicating the code for each of 20+ models seems dumb. So, I decided to use an abstract base class which would add these fields. The problem is that fields inherited from the abstract base class come first in the field list in admin. Declaring the field order for every ModelAdmin class is not an option, it's even more duplicate code than with manual field declaration. In my final solution, I modified the model constructor to reorder the fields in _meta before creating a new instance: class MyModel(models.Model): # Service fields notes = my_fields.NotesField() created_at = models.DateTimeField(auto_now_add=True) updated_at = models.DateTimeField(auto_now=True) class Meta: abstract = True last_fields = ("notes", "created_at", "updated_at") def __init__(self, *args, **kwargs): new_order = [f.name for f in self._meta.fields] for field in self.last_fields: new_order.remove(field) new_order.append(field) self._meta._field_name_cache.sort(key=lambda x: new_order.index(x.name)) super(MyModel, self).__init__(*args, **kwargs) class ModelA(MyModel): field1 = models.CharField() field2 = models.CharField() #etc ... It works as intended, but I'm wondering, is there a better way to achieve my goal?

  • Silverlight: Button template with texttrimming cut off

    - by dex3703
    I'm replacing the default Button template's ContentPresenter with a TextBlock, so the text can be trimmed when it's too long. Works fine in WPF. In Silverlight the text gets pushed to one edge and cut off on the left, even when there's space on the right: Template is nothing special, just replaced the ContentPresenter with the TextBlock: <Border x:Name="bdrBackground" BorderBrush="{TemplateBinding BorderBrush}" BorderThickness="{TemplateBinding BorderThickness}" Background="{TemplateBinding Background}" /> <Rectangle x:Name="rectMouseOverVisualElement" Opacity="0"> <Rectangle.Fill> <SolidColorBrush x:Name="rectMouseOverColor" Color="{StaticResource MouseOverItemBgColor}"/> </Rectangle.Fill> </Rectangle> <Rectangle x:Name="rectPressedVisualElement" Opacity="0" Style="{TemplateBinding Tag}"/> <TextBlock x:Name="textblock" Text="{TemplateBinding Content}" TextTrimming="WordEllipsis" TextWrapping="NoWrap" HorizontalAlignment="{TemplateBinding HorizontalContentAlignment}" Margin="{TemplateBinding Padding}" VerticalAlignment="{TemplateBinding VerticalContentAlignment}"/> <Rectangle x:Name="rectDisabledVisualElement" Opacity="0" Style="{StaticResource RectangleDisabledStyle}"/> <Rectangle x:Name="rectFocusVisualElement" Opacity="0" Style="{StaticResource RectangleFocusStyle}"/> </Grid> </ControlTemplate> How do I fix this? More info: With the latest comment about HorizontalAlignment, it's clear that SL's implementation of TextTrimming differs from WPF's. In SL, TextTrimming only really works if the text is aligned left. SL isn't smart enough to align the text the way WPF does. For instance: WPF button: SL button with the textblock horizontalalignment = left: SL button with textblock horizontalalignment = center:

  • How can I find the common ancestor of two nodes in a binary tree?

    - by Siddhant
    The binary tree here is not a binary search tree. It's just a binary tree. The structure could be taken as - struct node { int data; struct node *left; struct node *right; }; The best solution I could work out with a friend was something of this sort - Consider this binary tree (from http://lcm.csa.iisc.ernet.in/dsa/node87.html): The inorder traversal yields - 8, 4, 9, 2, 5, 1, 6, 3, 7 And the postorder traversal yields - 8, 9, 4, 5, 2, 6, 7, 3, 1 So for instance, if we want to find the common ancestor of nodes 8 and 5, then we make a list of all the nodes which are between 8 and 5 in the inorder tree traversal, which in this case happens to be [4, 9, 2]. Then we check which node in this list appears last in the postorder traversal, which is 2. Hence the common ancestor for 8 and 5 is 2. The complexity of this algorithm, I believe, is O(n) (O(n) for the inorder/postorder traversals, the rest of the steps again being O(n) since they are nothing more than simple iterations in arrays). But there is a strong chance that this is wrong. :-) It is a very crude approach, and I'm not sure if it breaks down for some cases. Is there any other (possibly more optimal) solution to this problem?
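
    There is a standard single-pass alternative that needs no traversal arrays: recurse once, and the lowest common ancestor is the first node at which the two targets fall into different subtrees (or which is itself one of the targets). A sketch against the struct above - it assumes both values are actually present in the tree:

```c
/* Single-pass recursive lowest common ancestor: O(n) time, O(height) stack. */
#include <stddef.h>
#include <stdio.h>

struct node {
    int data;
    struct node *left;
    struct node *right;
};

struct node *lowest_common_ancestor(struct node *root, int a, int b)
{
    struct node *l;
    struct node *r;

    if (root == NULL)
        return NULL;
    if (root->data == a || root->data == b)
        return root;                   /* found one of the two targets */

    l = lowest_common_ancestor(root->left, a, b);
    r = lowest_common_ancestor(root->right, a, b);

    if (l != NULL && r != NULL)
        return root;                   /* the targets sit in different subtrees */
    return (l != NULL) ? l : r;        /* both targets (or neither) are on one side */
}

int main(void)
{
    struct node n[10];
    int i;

    /* Build the example tree from the question (node i holds value i):
               1
             /   \
            2     3
           / \   / \
          4   5 6   7
         / \
        8   9                                                              */
    for (i = 1; i <= 9; i++) {
        n[i].data = i;
        n[i].left = NULL;
        n[i].right = NULL;
    }
    n[1].left = &n[2]; n[1].right = &n[3];
    n[2].left = &n[4]; n[2].right = &n[5];
    n[3].left = &n[6]; n[3].right = &n[7];
    n[4].left = &n[8]; n[4].right = &n[9];

    printf("%d\n", lowest_common_ancestor(&n[1], 8, 5)->data); /* prints 2 */
    return 0;
}
```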

  • How to call a generic method with an anonymous type involving generics?

    - by Alex Black
    I've got this code that works: def testTypeSpecialization = { class Foo[T] def add[T](obj: Foo[T]): Foo[T] = obj def addInt[X <% Foo[Int]](obj: X): X = { add(obj) obj } val foo = addInt(new Foo[Int] { def someMethod: String = "Hello world" }) assert(true) } But, I'd like to write it like this: def testTypeSpecialization = { class Foo[T] def add[X, T <% Foo[X](obj: T): T = obj val foo = add(new Foo[Int] { def someMethod: String = "Hello world" }) assert(true) } This second one fails to compile: no implicit argument matching parameter type (Foo[Int]{ ... }) = Foo[Nothing] was found. Basically: I'd like to create a new anonymous class/instance on the fly (e.g. new Foo[Int] { ... } ), and pass it into an "add" method which will add it to a list, and then return it The key thing here is that the variable from "val foo = " I'd like its type to be the anonymous class, not Foo[Int], since it adds methods (someMethod in this example) Any ideas? I think the 2nd one fails because the type Int is being erased. I can apparently 'hint' the compiler like this: def testTypeSpecialization = { class Foo[T] def add[X, T <% Foo[X]](dummy: X, obj: T): T = obj val foo = add(2, new Foo[Int] { def someMethod: String = "Hello world" }) assert(true) }
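
    One shape that usually avoids the Foo[Nothing] inference failure (a sketch, not tested against the poster's real add method; note that the inferred structural type makes someMethod a reflective call): give add a plain upper bound of Foo[_] instead of a second type parameter, so nothing forces the element type to be inferred separately and the anonymous refinement flows through unchanged.

```scala
// A sketch: bound the parameter by Foo[_] rather than introducing a second
// type parameter, so the anonymous class's refinement type is preserved.
object TypeSpecializationSketch {

  class Foo[T]

  // Whatever bookkeeping "add" does (e.g. appending obj to a list) goes here;
  // the important part is the signature.
  def add[T <: Foo[_]](obj: T): T = obj

  def main(args: Array[String]): Unit = {
    val foo = add(new Foo[Int] {
      def someMethod: String = "Hello world"
    })
    // foo keeps the refinement, so the extra member is still reachable
    // (as a structural/reflective call).
    println(foo.someMethod)
  }
}
```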

  • How do I execute queries upon DB connection in Rails?

    - by sycobuny
    I have certain initializing functions that I use to set up audit logging on the DB server side (ie, not rails) in PostgreSQL. At least one has to be issued (setting the current user) before inserting data into or updating any of the audited tables, or else the whole query will fail spectacularly. I can easily call these every time before running any save operation in the code, but DRY makes me think I should have the code repeated in as few places as possible, particularly since this diverges greatly from the ideal of database agnosticism. Currently I'm attempting to override ActiveRecord::Base.establish_connection in an initializer to set it up so that the queries are run as soon as I connect automatically, but it doesn't behave as I expect it to. Here is the code in the initializer: class ActiveRecord::Base # extend the class methods, not the instance methods class << self alias :old_establish_connection :establish_connection # hide the default def establish_connection(*args) ret = old_establish_connection(*args) # call the default # set up necessary session variables for audit logging # call these after calling default, to make sure conn is established 1st db = self.class.connection db.execute("SELECT SV.set('current_user', 'test@localhost')") db.execute("SELECT SV.set('audit_notes', NULL)") # end "empty variable" err ret # return the default's original value end end end puts "Loaded custom establish_connection into ActiveRecord::Base" sycobuny:~/rails$ ruby script/server = Booting WEBrick = Rails 2.3.5 application starting on http://0.0.0.0:3000 Loaded custom establish_connection into ActiveRecord::Base This doesn't give me any errors, and unfortunately I can't check what the method looks like internally (I was using ActiveRecord::Base.method(:establish_connection), but apparently that creates a new Method object each time it's called, which is seemingly worthless cause I can't check object_id for any worthwhile information and I also can't reverse the compilation). However, the code never seems to get called, because any attempt to run a save or an update on a database object fails as I predicted earlier. If this isn't a proper way to execute code immediately on connection to the database, then what is?

  • TCP/IP RST being sent differently in different browsers.

    - by Brian
    On Mac OS X (10.6), if I start a YouTube video download and pull the Ethernet cable for 5 or so seconds, then plug it back in, I get varying results depending on the browser. With Opera and Chrome, after I plug the cable back in the video continues to load. But with Safari and Firefox, it never does. Using Wireshark to look at the traffic, I found that Opera and Chrome simply ACK the first packet from YouTube after the cable has been plugged back in, but Safari and Firefox set the RST flag (0x4) in the TCP header and no more traffic follows. If I put a hub in between the machine and the internet connection, the problem goes away and all four browsers continue loading the video when the cable is plugged back into the hub. Again, looking at the Wireshark logs, it's evident that the machine doesn't see the multicast connection close and there is simply a delay in the packets flowing through. So it seems that if Safari and Firefox see a multicast connection close, and then later see data on that same connection, they will send an RST. My question is why? What is the correct course of action, and why are 2/4 browsers doing it one way, while the other 2/4 are doing it another way? Is there somewhere in the code that I can see where this is happening in Firefox, for instance? Thank you very much.

  • How do I efficiently locate key-value pairs in a multi-dimensional PHP array?

    - by Kyle Noland
    I have an array in PHP as a result of the following query to a WordPress database: SELECT * FROM wp_postmeta WHERE post_id = :id I am returned a multidimensional array that looks like this: Array ( [0] => Array ( [meta_id] => 380 [post_id] => 72 [meta_key] => _edit_last [meta_value] => 1 ) ... etc. What is the best way to find a particular key-value pair in this array? For instance, how would I locate the row where [meta_key] = event_name so that I can extract that same row's [meta_value] value into a PHP variable? I realize I could turn this into many individual MySQL queries. Does anyone have an opinion on the efficiency of doing 10 SQL queries in a row rather than searching the array 10 times? I would think that since the array is in memory, searching it will be the fastest method to find the values I need. Alternatively, is there a better way to query the database from the beginning so that my result set is formatted in a way that is easier to search?
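
    A sketch of the in-memory approach (the second sample row and its 'Spring Gala' value are made up): collapse the result set into a meta_key => meta_value map once, after which every lookup - including event_name - is a single array index rather than a scan or an extra query.

```php
<?php
// Index the wp_postmeta rows by meta_key once, then look values up directly.
function index_meta(array $rows)
{
    $meta = array();
    foreach ($rows as $row) {
        $meta[$row['meta_key']] = $row['meta_value'];
    }
    return $meta;
}

// $rows would come from: SELECT * FROM wp_postmeta WHERE post_id = :id
$rows = array(
    array('meta_id' => 380, 'post_id' => 72, 'meta_key' => '_edit_last', 'meta_value' => '1'),
    array('meta_id' => 381, 'post_id' => 72, 'meta_key' => 'event_name', 'meta_value' => 'Spring Gala'),
);

$meta = index_meta($rows);
$eventName = isset($meta['event_name']) ? $meta['event_name'] : null;
echo $eventName; // "Spring Gala"
```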

  • Permutations distinct under given symmetry (Mathematica 8 group theory)

    - by Yaroslav Bulatov
    Given a list of integers like {2,1,1,0} I'd like to list all permutations of that list that are not equivalent under given group. For instance, using symmetry of the square, the result would be {{2, 1, 1, 0}, {2, 1, 0, 1}}. Approach below (Mathematica 8) generates all permutations, then weeds out the equivalent ones. I can't use it because I can't afford to generate all permutations, is there a more efficient way? Update: actually, the bottleneck is in DeleteCases. The following list {2, 2, 2, 2, 2, 2, 2, 1, 1, 1, 1, 1, 1, 0, 0, 0} has about a million permutations and takes 0.1 seconds to compute. Apparently there are supposed to be 1292 orderings after removing symmetries, but my approach doesn't finish in 10 minutes removeEquivalent[{}] := {}; removeEquivalent[list_] := ( Sow[First[list]]; equivalents = Permute[First[list], #] & /@ GroupElements[group]; DeleteCases[list, Alternatives @@ equivalents] ); nonequivalentPermutations[list_] := ( reaped = Reap@FixedPoint[removeEquivalent, Permutations@list]; reaped[[2, 1]] ); group = DihedralGroup[4]; nonequivalentPermutations[{2, 1, 1, 0}]

  • Move SELECT to SQL Server side

    - by noober
    Hello all, I have an SQLCLR trigger. It contains a large and messy SELECT inside, with parts like: (CASE WHEN EXISTS(SELECT * FROM INSERTED I WHERE I.ID = R.ID) THEN '1' ELSE '0' END) AS IsUpdated -- Is selected row just added? as well as JOINs etc. I'd like to have the result as a single table with everything included. Question 1. Can I move this SELECT to the SQL Server side? If yes, how do I do this? By "move", I mean creating a stored procedure or something else that can be executed before reading the dataset in a while loop. The two following questions make sense only if the answer is "yes". Why do I want to move the SELECT? First off, I don't like mixing SQL with C# code. Second, I suppose that server-side queries run faster, since the server has more chances to cache them. Question 2. Am I right? Is this some sort of optimization? Also, the SELECT contains constant strings, but they are localizable. For instance, in WHERE R.Status = "Enabled", the string "Enabled" should be changed for French, German, etc. So, I want to write two static methods -- OnCreate and OnDestroy -- then mark them as stored procedures. When registering/unregistering my assembly on the server side, I would just call them respectively. In OnCreate, format the SELECT string, replacing {0}, {1}... with the required values from the assembly resources. Then I can localize only the resources, not every script. Question 3. Is this a good idea? Is there an existing attribute to mark methods to be executed by SQL Server automatically after (un)registration of an assembly? Regards,

  • Short snippet summarizing a webpage?

    - by Legend
    Is there a clean way of grabbing the first few lines of a given link that summarizes that link? I have seen this being done in some online bookmarking applications but have no clue on how they were implemented. For instance, if I give this link, I should be able to get a summary which is roughly like: I'll admit it, I was intimidated by MapReduce. I'd tried to read explanations of it, but even the wonderful Joel Spolsky left me scratching my head. So I plowed ahead trying to build decent pipelines to process massive amounts of data Nothing complex at first sight but grabbing these is the challenging part. Just the first few lines of the actual post should be fine. Should I just use a raw approach of grabbing the entire html and parsing the meta tags or something fancy like that (which obviously and unfortunately is not generalizable to every link out there) or is there a smarter way to achieve this? Any suggestions? Update: I just found InstaPaper do this but am not sure if it is getting the information from RSS feeds or some other way.

  • C++: inheritance problem

    - by Helltone
    It's quite hard to explain what I'm trying to do, I'll try: Imagine a base class A which contains some variables, and a set of classes deriving from A which all implement some method bool test() that operates on the variables inherited from A. class A { protected: int somevar; // ... }; class B : public A { public: bool test() { return (somevar == 42); } }; class C : public A { public: bool test() { return (somevar > 23); } }; // ... more classes deriving from A Now I have an instance of class A and I have set the value of somevar. int main(int, char* []) { A a; a.somevar = 42; Now, I need some kind of container that allows me to iterate over the elements i of this container, calling i::test() in the context of a... that is: std::vector<...> vec; // push B and C into vec, this is pseudo-code vec.push_back(&B); vec.push_back(&C); bool ret = true; for(i = vec.begin(); i != vec.end(); ++i) { // call B::test(), C::test(), setting *this to a ret &= ( a .* (&(*i)::test) )(); } return ret; } How can I do this? I've tried two methods: forcing a cast from B::* to A::*, adapting a pointer to call a method of a type on an object of a different type (works, but seems to be bad); using std::bind + the solution above, ugly hack; changing the signature of bool test() so that it takes an argument of type const A& instead of inheriting from A, I don't really like this solution because somevar must be public.
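
    For comparison, a sketch of the restructuring behind the third option, arranged so that somevar stays non-public: the data lives in A, each check becomes a small polymorphic predicate that receives the A it should inspect, and A exposes only a read-only accessor (all names below are invented):

```cpp
// A sketch: separate the data (A) from the checks, which become predicates
// evaluated against one shared A instance.
#include <vector>

class A;

struct Check {
    virtual ~Check() {}
    virtual bool test(const A& a) const = 0;
};

class A {
public:
    explicit A(int v) : somevar(v) {}
    int value() const { return somevar; }   // read-only access for the checks
private:
    int somevar;
};

struct CheckEquals42 : Check {
    bool test(const A& a) const { return a.value() == 42; }
};

struct CheckGreater23 : Check {
    bool test(const A& a) const { return a.value() > 23; }
};

int main() {
    A a(42);

    CheckEquals42 c1;
    CheckGreater23 c2;
    std::vector<Check*> checks;
    checks.push_back(&c1);
    checks.push_back(&c2);

    bool ret = true;
    for (std::vector<Check*>::const_iterator it = checks.begin(); it != checks.end(); ++it)
        ret = ret && (*it)->test(a);
    return ret ? 0 : 1;
}
```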

  • Ruby on Rails - pass variable to nested form

    - by Krule
    I am trying to build a multilingual site using Rails, but I can't figure out how to pass variable to nested form. Right now I am creating nested form like this. @languages.each do @article.article_locale.build(:language_id => language.id) end But i would like to pass value of language to it so i can distinguish fields. Something like this. @languages.each do |language| @language = language @article.article_locale.build(:language_id => language.id) end However, I always end up with language of the last loop iteration. Any way to pass this variable? -- edit -- In the end, since I've got no answer I have solved this problem so it, at least, works as it should. Following code is my partial solution. In model: def self.languages Language.all end def self.language_name language = [] self.languages.each_with_index do |lang, i| language[i] = lang.longname end return language end In Controller: def new @article = Article.new Article.languages.each do |language| @article.article_locale.build(:language_id => language.id) end end In HAML View: -count = 0 -f.fields_for :article_locale do |al| %h3= Article.language_name[count] -count+=1 -field_set_tag do %p =al.label :name, t(:name) =al.text_field :name %p =al.label :description, t(:description) =al.text_area :description =al.hidden_field :language_id It's not the most elegant solution I suppose, but it works. I would really love if I could get rid of counter in view for instance.
