Search Results

Search found 14799 results on 592 pages for 'instance eval'.

Page 509/592 | < Previous Page | 505 506 507 508 509 510 511 512 513 514 515 516  | Next Page >

  • Extremely Difficult Problem with ASP.Net 4.0 WebForms app using Routing

    - by dudeNumber4
    I have a completed app running in a QA environment. Everything works fine under most circumstances. If you hit a plain URL (no identifying information in the URL), you see an intro page with a button (generated by an asp LinkButton control) that posts back and directs you to another page. The markup looks the same when it fails and when it doesn't. When such a URL is followed from, e.g., Word and the default browser is IE, the intro page loads fine, but clicking the button causes an error. When not debugging, this behavior occurs every time. While debugging, the error occurs only ~ 1 in 10 times (closing the browser instance and starting over every time). When the error occurs, the intro page Page_Load fires and IsPostBack is false. Somehow, instead of a post, a get is being issued. When I run fiddler to try to analyze the actual calls (can't use firebug because it never happens using Firefox), everything works every time. I don't know whether this issue has anything to do with routing, and I've no idea even what to look at next. The strange thing is, when I debug, the intro page doesn't fully load every time. Only about 1 in 3 times does it fully load even if I've just cleared browser cache. When I run it through fiddler, it fully loads and works fine every time.

    Read the article

  • Many users, many cpus, no delays. Good for cloud?

    - by Eric
    I wish to set up a CPU-intensive, time-sensitive query service for users on the internet. A usage scenario is described below. Is cloud computing the right way to go for such an implementation? If so, which cloud vendor(s) cater to this type of application? I ask specifically in terms of:
    1) Pricing.
    2) Latency resulting from:
       - slow CPUs, instance creation, JIT compiles, etc.
       - internal management and communication of processes inside the cloud (e.g. a queuing process and a calculation process)
       - communication between the cloud and the end user
    3) Ease of deployment.
    The usage scenario I am expecting:
    - A typical user sends a query (XML of around 1K) once every 30 seconds on average.
    - Each query requires a numerical computation averaging 0.2 sec, with a maximum of 1 sec, on a 1 GHz Pentium. The computation requires no data other than the query itself and is performed by the same piece of code each time.
    - The delay a user experiences between sending a query and receiving a response should average no more than 2 seconds and in general be no more than 5 seconds.
    - A background save of the response to a DB should occur (not time critical).
    - There can be up to 30000 simultaneous users - i.e., on average 1000 queries a second, each requiring an average 0.2 sec calculation, so that would necessitate around 200 CPUs.
    Currently I'm looking at GAE Java (for quicker deployment and less IT hassle) and EC2 (speed and price optimization) as options. Where can I learn more about the right way to set up such a system - past threads, blogs, books, etc.? BTW, if my terminology is wrong or confusing, please let me know. I'd greatly appreciate any help.
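    As a sanity check on the CPU estimate quoted above, using only the figures stated in the scenario:

        \frac{30000\ \text{users}}{30\ \text{s per query}} = 1000\ \text{queries/s}, \qquad 1000\ \text{queries/s} \times 0.2\ \text{CPU-s per query} = 200\ \text{CPU-seconds per second} \approx 200\ \text{cores}

    This is an average-load figure; peak load and the queueing headroom needed to hold the 2-second latency target would push the required core count higher.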

    Read the article

  • WP7 - Cancelling ContextMenu click event propagation

    - by Praetorian
    I'm having a problem when the Silverlight toolkit's ContextMenu is clicked while it is over a UIElement that has registered a Tap event GestureListener. The context menu click propagates to the underlying element and fires its tap event. For instance, say I have a ListBox and each ListBoxItem within it has registered both a ContextMenu and a Tap GestureListener. Assume that clicking context menu item2 is supposed to take you to Page1.xaml, while tapping on any of ListBox items themselves is supposed to take you to Page2.xaml. If I open the context menu on item1 in the ListBox, then context menu item2 is on top of ListBox item2. When I click on context menu item2 I get weird behavior where the app navigates to Page1.xaml and then immediately to Page2.xaml because the click event also triggered the Tap gesture for ListBox item2. I've verified in the debugger that it is always the context menu that receives the click event first. How do I cancel the context menu item click's routed event propagation so it doesn't reach ListBox item2? Thanks for your help!

    Read the article

  • My app crashes with MFMailComposeViewController and MFMessageComposeViewController when I re-launch it.

    - by Dolwen
    Hello all, I encounter a crash with MFMailComposeViewController, MFMessageComposeViewController and multitasking on iOS 4.2 (both the simulator and an iPhone 4). The code I use:

        Class emailClass = (NSClassFromString(@"MFMailComposeViewController"));
        if (emailClass != nil)
        {
            MFMailComposeViewController *controller = [[emailClass alloc] init];
            if ([emailClass canSendMail])
            {
                // delegate
                controller.mailComposeDelegate = self;
                // subject
                [controller setSubject:@"Hello All."];
                // main message
                [controller setMessageBody:@"I love Stackoverflow.com !" isHTML:NO];
                // adding image attachment
                // getting path for the image we have in the tutorial project
                NSString *path = [[NSBundle mainBundle] pathForResource:@"image" ofType:@"png"];
                // loading content of the image into NSData
                NSData *imageData = [NSData dataWithContentsOfFile:path];
                // adding the attachment to the message
                [controller addAttachmentData:imageData mimeType:@"image/png" fileName:@"My Byook image"];
                // setting different than the default transition for the modal view controller
                [controller setModalTransitionStyle:UIModalTransitionStyleCoverVertical];
                // show
                [[CGameStateManager getCurrentGameState] presentModalViewController:controller animated:YES];
            }
            [controller release];
        }

    To close the MFMailComposeViewController I use:

        [[CGameStateManager getCurrentGameState] dismissModalViewControllerAnimated:NO];

    The app then crashes on the "dismissModalViewControllerAnimated:" call, and with NSZombieEnabled the debugger shows:

        -[UIImage isKindOfClass:]: message sent to deallocated instance 0xb0b9f80

    Does anyone have an answer to solve my problem? :) Thx

    Read the article

  • c# wpf command pattern

    - by evan
    I have a WPF GUI which displays a list of information in a separate window and in a separate thread from the main application. As the user performs actions in the main window, the side window is updated. (For example, if you clicked page down in the main window, a listbox in the side window would page down.) Right now the architecture for this application feels very messy and I'm sure there is a cleaner way to do it. It looks like this: the main window contains a singleton SideWindowControl which communicates with an instance of the SideWindowDisplay using events - so, for example, the page-down button works like this:
    1) The event handler of the button on the main window calls SideWindowControl.PageDown().
    2) In the PageDown() function an event is created and raised.
    3) Finally the GUI, ShowSideWindowDisplay, which subscribes to the SideWindowControl.Actions event, handles the event and actually scrolls the listbox down - note that because it is in a different thread it has to do that by running the command via Dispatcher.Invoke().
    This just seems like a very messy way to do this and there must be a clearer way (the only part that can't change is that the main window and the side window must be on different threads). Perhaps using WPF commands? I'd really appreciate any suggestions!! Thanks

    Read the article

  • C++ Virtual Methods for Class-Specific Attributes or External Structure

    - by acanaday
    I have a set of classes which are all derived from a common base class, and I want to use these classes polymorphically. The interface defines a set of getter methods whose return values are constant across a given derived class, but vary from one derived class to another. e.g.:

        enum AVal { A_VAL_ONE, A_VAL_TWO, A_VAL_THREE };
        enum BVal { B_VAL_ONE, B_VAL_TWO, B_VAL_THREE };

        class Base {
            //...
            virtual AVal getAVal() const = 0;
            virtual BVal getBVal() const = 0;
            //...
        };

        class One : public Base {
            //...
            AVal getAVal() const { return A_VAL_ONE; }
            BVal getBVal() const { return B_VAL_ONE; }
            //...
        };

        class Two : public Base {
            //...
            AVal getAVal() const { return A_VAL_TWO; }
            BVal getBVal() const { return B_VAL_TWO; }
            //...
        };

    etc. Is this a common way of doing things? If performance is an important consideration, would I be better off pulling the attributes out into an external structure, e.g.:

        struct Vals {
            AVal a_val;
            BVal b_val;
        };

    storing a Vals* in each instance, and rewriting Base as follows?

        class Base {
            //...
        public:
            AVal getAVal() const { return _vals->a_val; }
            BVal getBVal() const { return _vals->b_val; }
            //...
        private:
            Vals* _vals;
        };

    Is the extra dereference essentially the same cost as the vtable lookup? What is the established idiom for this type of situation? Are both of these solutions dumb? Any insights are greatly appreciated.

    Read the article

  • "java.lang.OutOfMemoryError: Java heap space" in image and array storage

    - by totalconscience
    I am currently working on an image processing demonstration in Java (an applet). I am running into the problem where my arrays are too large and I am getting the "java.lang.OutOfMemoryError: Java heap space" error. The algorithm I run creates an NxD float array where N is the number of pixels in the image and D is the coordinates of each pixel plus the colorspace components of each pixel (usually 1 for grayscale or 3 for RGB). For each iteration, the algorithm creates one of these NxD float arrays and stores it for later use in a Vector, so that the user of the applet may look at the individual steps. My client wants the program to be able to load a 500x500 RGB image as the upper bound. There are about 12 to 20 iterations per run, so that means I need to be able to store a 12x500x500x5 array of floats in some fashion. Is there a way to process all of this data and, if possible, how? Example of the issue: I am loading a 512 by 512 grayscale image and even before the first iteration completes I run out of heap space. The line it points me to is Y.add(new float[N][D]), where Y is a Vector and N and D are as described above. This is the second occurrence of that line in the code.
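    A rough size estimate for the data described above, assuming 4-byte floats and D = 2 coordinates + 3 colour components = 5:

        12 \times 500 \times 500 \times 5 \times 4\ \text{bytes} = 60\ \text{MB}, \qquad 20 \times 500 \times 500 \times 5 \times 4\ \text{bytes} = 100\ \text{MB}

    Even before per-object overhead, storing every iteration of the full 500x500 RGB case therefore needs on the order of 60-100 MB, which is comparable to or larger than a typical default applet heap, so a larger heap setting or a more compact representation of the intermediate steps would be needed.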

    Read the article

  • Is NFS capable of preserving order of operations?

    - by JustJeff
    I have a diskless host 'A', that has a directory NFS mounted on server 'B'. A process on A writes to two files F1 and F2 in that directory, and a process on B monitors these files for changes. Assume that B polls for changes faster than A is expected to make them. Process A seeks the head of the files, writes data, and flushes. Process B seeks the head of the files and does reads. Are there any guarantees about how the order of the changes performed by A will be detected at B? Specifically, if A alternately writes to one file, and then the other, is it reasonable to expect that B will notice alternating changes to F1 and F2? Or could B conceivably detect a series of changes on F1 and then a series on F2? I know there are a lot of assumptions embedded in the question. For instance, I am virtually certain that, even operating on just one file, if A performs 100 operations on the file, B may see a smaller number of changes that give the same result, due to NFS caching some of the actions on A before they are communicated to B. And of course there would be issues with concurrent file access even if NFS weren't involved and both the reading and the writing process were running on the same real file system. The reason I'm even putting the question up here is that it seems like most of the time, the setup described above does detect the changes at B in the same order they are made at A, but that occasionally some events come through in transposed order. So, is it worth trying to make this work? Is there some way to tune NFS to make it work, perhaps cache settings or something? Or is fine-grained behavior like this just too much expect from NFS?

    Read the article

  • Will TFS 2010 support non-contiguous merging?

    - by steve_d
    I know that merging non-contiguous changesets at once may not be a good idea. However there is at least one situation in which merging non-contiguous changesets is (probably) not going to break anything: when there are no intervening changes on the affected individual files. (At least, it wouldn't break any worse than would a series of cherry-picked merges, checked in each time; and at least this way you would discover breakage before checking in). For instance, let's say you have a Main and a Development branch. They start out identical (e.g. after a release). They have two files, foo.cs and bar.cs. Alice makes a change in Development\foo.cs and checks it in as changeset #1001. Bob makes a change in Development\bar.cs and checks it in as #1002. Alice makes another change to Development\foo.cs and checks it in as #1003. Now we could in theory merge both changes #1001 and #1003 from dev-to main in a single operation. If we try to merge at the branch level, dev-to-main, we will have to do it as two operations. In this simple, contrived example it's simple enough to merge the one file - but in the real world where there would be many files involved, it's not so simple. Non-contiguous merging is one of the reasons given for why "merge by workitem" is not implemented in TFS.

    Read the article

  • Why initialize an object to empty

    - by ProgEnthu
    I am learning Windows programming with the help of MSDN. Why would somebody initialize an object like the following?

        WNDCLASS wc = { };

    Will this zero all the memory of the object? The whole source code follows:

        #ifndef UNICODE
        #define UNICODE
        #endif

        #include <windows.h>

        LRESULT CALLBACK WindowProc(HWND hwnd, UINT uMsg, WPARAM wParam, LPARAM lParam);

        int WINAPI wWinMain(HINSTANCE hInstance, HINSTANCE, PWSTR pCmdLine, int nCmdShow)
        {
            // Register the window class.
            const wchar_t CLASS_NAME[] = L"Sample Window Class";

            WNDCLASS wc = { };
            wc.lpfnWndProc   = WindowProc;
            wc.hInstance     = hInstance;
            wc.lpszClassName = CLASS_NAME;
            RegisterClass(&wc);

            // Create the window.
            HWND hwnd = CreateWindowEx(
                0,                            // Optional window styles.
                CLASS_NAME,                   // Window class
                L"Learn to Program Windows",  // Window text
                WS_OVERLAPPEDWINDOW,          // Window style
                // Size and position
                CW_USEDEFAULT, CW_USEDEFAULT, CW_USEDEFAULT, CW_USEDEFAULT,
                NULL,       // Parent window
                NULL,       // Menu
                hInstance,  // Instance handle
                NULL        // Additional application data
            );

            if (hwnd == NULL)
            {
                return 0;
            }

            ShowWindow(hwnd, nCmdShow);

            // Run the message loop.
            MSG msg = { };
            while (GetMessage(&msg, NULL, 0, 0))
            {
                TranslateMessage(&msg);
                DispatchMessage(&msg);
            }

            return 0;
        }

        LRESULT CALLBACK WindowProc(HWND hwnd, UINT uMsg, WPARAM wParam, LPARAM lParam)
        {
            switch (uMsg)
            {
            case WM_DESTROY:
                PostQuitMessage(0);
                return 0;

            case WM_PAINT:
                {
                    PAINTSTRUCT ps;
                    HDC hdc = BeginPaint(hwnd, &ps);
                    FillRect(hdc, &ps.rcPaint, (HBRUSH) (COLOR_WINDOW+1));
                    EndPaint(hwnd, &ps);
                }
                return 0;
            }
            return DefWindowProc(hwnd, uMsg, wParam, lParam);
        }
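    For what it's worth, a minimal standalone sketch (not part of the MSDN sample) showing the effect of the empty-brace initializer: with = { } the aggregate is value-initialized, so every member of wc starts out zeroed.

        #include <cassert>
        #include <windows.h>

        int main()
        {
            WNDCLASS wc = { };   // empty braces: value-initialization, all members zeroed
            assert(wc.style == 0);
            assert(wc.lpfnWndProc == NULL);
            assert(wc.lpszClassName == NULL);
            return 0;
        }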

    Read the article

  • Injecting the application TransactionManager into a JPA EntityListener

    - by nodje
    I want to use a JPA EntityListener to support Spring Security ACLs. On @PostPersist events, I create a permission corresponding to the persisted entity. I need this operation to participate in the current transaction, and for that to happen I need a reference to the application TransactionManager in the EntityListener. The problem is that Spring can't manage the EntityListener, as it is created automatically when the EntityManagerFactory is instantiated, and in a classic Spring app the EntityManagerFactory is itself created during the TransactionManager instantiation:

        <bean id="transactionManager" class="org.springframework.orm.jpa.JpaTransactionManager">
            <property name="entityManagerFactory" ref="entityManagerFactory" />
        </bean>

    So I have no way to inject the TransactionManager through the constructor, as it is not yet instantiated. Making the EntityListener a @Component creates another instance of it. Implementing InitializingBean and using afterPropertiesSet() doesn't work as it's not a Spring-managed bean. Any idea would be helpful as I'm stuck and out of ideas.

    Read the article

  • Google App Engine atomic section?

    - by bokertov
    Hi, say you retrieve a set of records from the datastore (something like: select * from MyClass where reserved='false'). How do I ensure that another user doesn't reserve the same records while I still see reserved as false? I've looked at the Transaction documentation and was shocked by Google's solution, which is to catch the exception and retry in a loop. Is there a solution I'm missing? It's hard to believe that there's no way to have an atomic operation in this environment. (BTW, I could use 'synchronized' inside the servlet, but I think that's not valid as there's no way to ensure that there's only one instance of the servlet object, is there? The same applies to the static-variable solution.) Any idea on how to solve this? Here is the Google solution (http://code.google.com/appengine/docs/java/datastore/transactions.html#Entity_Groups):

        Key k = KeyFactory.createKey("Employee", "k12345");
        Employee e = pm.getObjectById(Employee.class, k);
        e.counter += 1;
        pm.makePersistent(e);

    "This requires a transaction because the value may be updated by another user after this code fetches the object, but before it saves the modified object. Without a transaction, the user's request will use the value of counter prior to the other user's update, and the save will overwrite the new value. With a transaction, the application is told about the other user's update. If the entity is updated during the transaction, then the transaction fails with an exception. The application can repeat the transaction to use the new data." THANKS!

    Read the article

  • TCP/IP RST being sent differently in different browsers.

    - by Brian
    On Mac OS X (10.6), if I start a YouTube video download and pull the Ethernet cable for 5 or so seconds, then plug it back in, I get varying results depending on the browser. With Opera and Chrome, after I plug the cable back in the video continues to load. But with Safari and Firefox, it never does. Using Wireshark to look at the traffic, I found that Opera and Chrome simply ACK the first packet from YouTube after the cable has been plugged back in, but Safari and Firefox set the RST flag (0x4) in the TCP header and no more traffic follows. If I put a hub in between the machine and the internet connection, the problem goes away and all four browsers continue loading the video when the cable is plugged back into the hub. Again, looking at the Wireshark logs, it's evident that the machine doesn't see the Multicast connection close and there is simply a delay in the packets flowing through. So it seems that if Safari and Firefox see a Multicast connection close, and then later see data on that same connection, they will send a RST. My question is why? What is the correct course of action, and why are 2/4 browsers doing it one way, while the other 2/4 are doing it another way? Is there somewhere in the code that I can see where this is happening in Firefox, for instance? Thank you very much.

    Read the article

  • How to call a generic method with an anonymous type involving generics?

    - by Alex Black
    I've got this code that works:

        def testTypeSpecialization = {
          class Foo[T]
          def add[T](obj: Foo[T]): Foo[T] = obj
          def addInt[X <% Foo[Int]](obj: X): X = {
            add(obj)
            obj
          }
          val foo = addInt(new Foo[Int] {
            def someMethod: String = "Hello world"
          })
          assert(true)
        }

    But I'd like to write it like this:

        def testTypeSpecialization = {
          class Foo[T]
          def add[X, T <% Foo[X]](obj: T): T = obj
          val foo = add(new Foo[Int] {
            def someMethod: String = "Hello world"
          })
          assert(true)
        }

    This second one fails to compile: no implicit argument matching parameter type (Foo[Int]{ ... }) => Foo[Nothing] was found. Basically, I'd like to create a new anonymous class/instance on the fly (e.g. new Foo[Int] { ... }) and pass it into an "add" method which will add it to a list and then return it. The key thing here is that I'd like the type of the "val foo" variable to be the anonymous class, not Foo[Int], since it adds methods (someMethod in this example). Any ideas? I think the second one fails because the type Int is being erased. I can apparently 'hint' the compiler like this:

        def testTypeSpecialization = {
          class Foo[T]
          def add[X, T <% Foo[X]](dummy: X, obj: T): T = obj
          val foo = add(2, new Foo[Int] {
            def someMethod: String = "Hello world"
          })
          assert(true)
        }

    Read the article

  • Silverlight: Button template with texttrimming cut off

    - by dex3703
    I'm replacing the default Button template's ContentPresenter with a TextBlock, so the text can be trimmed when it's too long. This works fine in WPF. In Silverlight the text gets pushed to one edge and cut off on the left, even when there's space on the right (see the screenshot in the original post). The template is nothing special, just the ContentPresenter replaced with the TextBlock:

            <Border x:Name="bdrBackground" BorderBrush="{TemplateBinding BorderBrush}" BorderThickness="{TemplateBinding BorderThickness}" Background="{TemplateBinding Background}" />
            <Rectangle x:Name="rectMouseOverVisualElement" Opacity="0">
                <Rectangle.Fill>
                    <SolidColorBrush x:Name="rectMouseOverColor" Color="{StaticResource MouseOverItemBgColor}"/>
                </Rectangle.Fill>
            </Rectangle>
            <Rectangle x:Name="rectPressedVisualElement" Opacity="0" Style="{TemplateBinding Tag}"/>
            <TextBlock x:Name="textblock" Text="{TemplateBinding Content}" TextTrimming="WordEllipsis" TextWrapping="NoWrap"
                       HorizontalAlignment="{TemplateBinding HorizontalContentAlignment}" Margin="{TemplateBinding Padding}"
                       VerticalAlignment="{TemplateBinding VerticalContentAlignment}"/>
            <Rectangle x:Name="rectDisabledVisualElement" Opacity="0" Style="{StaticResource RectangleDisabledStyle}"/>
            <Rectangle x:Name="rectFocusVisualElement" Opacity="0" Style="{StaticResource RectangleFocusStyle}"/>
        </Grid>
    </ControlTemplate>

    How do I fix this? More info: with the latest comment about HorizontalAlignment, it's clear that SL's implementation of TextTrimming differs from WPF's. In SL, TextTrimming only really works if the text is aligned left; SL isn't smart enough to align the text the way WPF does. For instance, the original post shows screenshots of a WPF button, an SL button with the TextBlock HorizontalAlignment = Left, and an SL button with the TextBlock HorizontalAlignment = Center.

    Read the article

  • How do I execute queries upon DB connection in Rails?

    - by sycobuny
    I have certain initializing functions that I use to set up audit logging on the DB server side (i.e., not Rails) in PostgreSQL. At least one has to be issued (setting the current user) before inserting data into or updating any of the audited tables, or else the whole query will fail spectacularly. I could easily call these every time before running any save operation in the code, but DRY makes me think I should have the code repeated in as few places as possible, particularly since this diverges greatly from the ideal of database agnosticism. Currently I'm attempting to override ActiveRecord::Base.establish_connection in an initializer so that the queries are run as soon as I connect, but it doesn't behave as I expect it to. Here is the code in the initializer:

        class ActiveRecord::Base
          # extend the class methods, not the instance methods
          class << self
            alias :old_establish_connection :establish_connection # hide the default

            def establish_connection(*args)
              ret = old_establish_connection(*args) # call the default

              # set up necessary session variables for audit logging
              # call these after calling default, to make sure conn is established 1st
              db = self.class.connection
              db.execute("SELECT SV.set('current_user', 'test@localhost')")
              db.execute("SELECT SV.set('audit_notes', NULL)") # end "empty variable" err

              ret # return the default's original value
            end
          end
        end

        puts "Loaded custom establish_connection into ActiveRecord::Base"

    Starting the server shows the initializer being loaded:

        sycobuny:~/rails$ ruby script/server
        => Booting WEBrick
        => Rails 2.3.5 application starting on http://0.0.0.0:3000
        Loaded custom establish_connection into ActiveRecord::Base

    This doesn't give me any errors, and unfortunately I can't check what the method looks like internally (I was using ActiveRecord::Base.method(:establish_connection), but apparently that creates a new Method object each time it's called, which is seemingly worthless because I can't check object_id for any worthwhile information, and I also can't reverse the compilation). However, the code never seems to get called, because any attempt to run a save or an update on a database object fails as I predicted earlier. If this isn't a proper way to execute code immediately on connection to the database, then what is?

    Read the article

  • Reordering fields in Django model

    - by Alex Lebedev
    I want to add a few fields to every model in my Django application. This time it's created_at, updated_at and notes. Duplicating the code for each of 20+ models seems dumb, so I decided to use an abstract base class that adds these fields. The problem is that fields inherited from an abstract base class come first in the field list in the admin. Declaring the field order for every ModelAdmin class is not an option - it's even more duplicate code than declaring the fields manually. In my final solution, I modified the model constructor to reorder the fields in _meta before creating a new instance:

        class MyModel(models.Model):
            # Service fields
            notes = my_fields.NotesField()
            created_at = models.DateTimeField(auto_now_add=True)
            updated_at = models.DateTimeField(auto_now=True)

            class Meta:
                abstract = True

            last_fields = ("notes", "created_at", "updated_at")

            def __init__(self, *args, **kwargs):
                new_order = [f.name for f in self._meta.fields]
                for field in self.last_fields:
                    new_order.remove(field)
                    new_order.append(field)
                self._meta._field_name_cache.sort(key=lambda x: new_order.index(x.name))
                super(MyModel, self).__init__(*args, **kwargs)

        class ModelA(MyModel):
            field1 = models.CharField()
            field2 = models.CharField()
            # etc ...

    It works as intended, but I'm wondering, is there a better way to achieve my goal?

    Read the article

  • How can I find the common ancestor of two nodes in a binary tree?

    - by Siddhant
    The binary tree here is not a binary search tree, it's just a binary tree. The structure could be taken as:

        struct node {
            int data;
            struct node *left;
            struct node *right;
        };

    The best solution I could work out with a friend was something of this sort. Consider this binary tree (from http://lcm.csa.iisc.ernet.in/dsa/node87.html): the inorder traversal yields 8, 4, 9, 2, 5, 1, 6, 3, 7 and the postorder traversal yields 8, 9, 4, 5, 2, 6, 7, 3, 1. So, for instance, if we want to find the common ancestor of nodes 8 and 5, we make a list of all the nodes which are between 8 and 5 in the inorder traversal, which in this case happens to be [4, 9, 2]. Then we check which node in this list appears last in the postorder traversal, which is 2. Hence the common ancestor of 8 and 5 is 2. The complexity of this algorithm, I believe, is O(n) (O(n) for the inorder/postorder traversals, and the remaining steps are again O(n) since they are nothing more than simple iterations over arrays). But this is a very crude approach, there is a strong chance it is wrong, and I'm not sure whether it breaks down for some case. Is there any other (possibly more optimal) solution to this problem?
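    For comparison, a minimal recursive sketch of the same problem using the node structure above; it assumes both target values actually occur in the tree:

        // Returns the lowest common ancestor of the nodes holding v1 and v2,
        // or NULL if neither value occurs in the subtree rooted at root.
        struct node *lca(struct node *root, int v1, int v2)
        {
            if (root == NULL)
                return NULL;
            if (root->data == v1 || root->data == v2)
                return root;                       // a target node is its own ancestor

            struct node *left  = lca(root->left,  v1, v2);
            struct node *right = lca(root->right, v1, v2);

            if (left != NULL && right != NULL)
                return root;                       // one target in each subtree: root is the LCA
            return (left != NULL) ? left : right;  // both targets are in the same subtree
        }

    Like the inorder/postorder approach this is O(n), but it uses a single traversal and no auxiliary arrays.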

    Read the article

  • Forms authentication: how do you store username password in web.config?

    - by Nick G
    I'm used to using Forms Authentication with a database, but I'm writing a little internal utility and the app doesn't have a database, so I want to store the username and password in web.config. However, for some reason forms authentication is still trying to access SQL Server, and I can't see how to stop it doing this and pick up the credentials from web.config. What am I doing wrong? I just get the error "Failed to generate a user instance of SQL Server due to a failure in impersonating the client. The connection will be closed." Here are the relevant sections of my web.config:

        <configuration>
          <system.web>
            <authentication mode="Forms">
              <forms loginUrl="~/Login.aspx" timeout="60" name=".LoginCookie" path="/" >
                <credentials passwordFormat="Clear">
                  <user name="user1" password="[pass]" />
                  <user name="user2" password="[pass]" />
                </credentials>
              </forms>
            </authentication>
            <authorization>
              <deny users="?" />
            </authorization>
          </system.web>
        </configuration>

    Read the article

  • How do I efficiently locate key-value pairs in a multi-dimensional PHP array?

    - by Kyle Noland
    I have an array in PHP as a result of the following query to a WordPress database:

        SELECT * FROM wp_postmeta WHERE post_id = :id

    I am returned a multidimensional array that looks like this:

        Array
        (
            [0] => Array
                (
                    [meta_id] => 380
                    [post_id] => 72
                    [meta_key] => _edit_last
                    [meta_value] => 1
                )
            ... etc.

    What is the best way to find a particular key-value pair in this array? For instance, how would I locate the row where [meta_key] = event_name so that I can extract that same row's [meta_value] into a PHP variable? I realize I could turn this into many individual MySQL queries. Does anyone have an opinion on the efficiency of doing 10 SQL queries in a row rather than searching the array 10 times? I would think that since the array is in memory, searching it will be the fastest way to find the values I need. Alternatively, is there a better way to query the database from the beginning so that my result set is formatted in a way that is easier to search?

    Read the article

  • Move SELECT to SQL Server side

    - by noober
    Hello all, I have an SQLCLR trigger. It contains a large and messy SELECT inside, with parts like: (CASE WHEN EXISTS(SELECT * FROM INSERTED I WHERE I.ID = R.ID) THEN '1' ELSE '0' END) AS IsUpdated -- Is selected row just added? as well as JOINs etc. I like to have the result as a single table with all included. Question 1. Can I move this SELECT to SQL Server side? If yes, how to do this? Saying "move", I mean to create a stored procedure or something else that can be executed before reading dataset in while cycle. The 2 following questions make sense only if answer is "yes". Why do I want to move SELECT? First off, I don't like mixing SQL with C# code. At second, I suppose that server-side queries run faster, since the server have more chances to cache them. Question 2. Am I right? Is it some sort of optimizing? Also, the SELECT contains constant strings, but they are localizable. For instance, WHERE R.Status = "Enabled" "Enabled" should be changed for French, German etc. So, I want to write 2 static methods -- OnCreate and OnDestroy -- then mark them as stored procedures. When registering/unregistering my assembly on server side, just call them respectively. In OnCreate format the SELECT string, replacing {0}, {1}... with required values from the assembly resources. Then I can localize resources only, not every script. Question 3. Is it good idea? Is there an existing attribute to mark methods to be executed by SQL Server automatically after (un)registartion an assembly? Regards,

    Read the article

  • Permutations distinct under given symmetry (Mathematica 8 group theory)

    - by Yaroslav Bulatov
    Given a list of integers like {2,1,1,0}, I'd like to list all permutations of that list that are not equivalent under a given group. For instance, using the symmetry of the square, the result would be {{2, 1, 1, 0}, {2, 1, 0, 1}}. The approach below (Mathematica 8) generates all permutations, then weeds out the equivalent ones. I can't use it because I can't afford to generate all permutations - is there a more efficient way? Update: actually, the bottleneck is in DeleteCases. The following list {2, 2, 2, 2, 2, 2, 2, 1, 1, 1, 1, 1, 1, 0, 0, 0} has about a million permutations and takes 0.1 seconds to compute. Apparently there are supposed to be 1292 orderings after removing symmetries, but my approach doesn't finish in 10 minutes.

        removeEquivalent[{}] := {};
        removeEquivalent[list_] := (
          Sow[First[list]];
          equivalents = Permute[First[list], #] & /@ GroupElements[group];
          DeleteCases[list, Alternatives @@ equivalents]
        );
        nonequivalentPermutations[list_] := (
          reaped = Reap@FixedPoint[removeEquivalent, Permutations@list];
          reaped[[2, 1]]
        );
        group = DihedralGroup[4];
        nonequivalentPermutations[{2, 1, 1, 0}]

    Read the article

  • Short snippet summarizing a webpage?

    - by Legend
    Is there a clean way of grabbing the first few lines of a given link that summarizes that link? I have seen this being done in some online bookmarking applications but have no clue on how they were implemented. For instance, if I give this link, I should be able to get a summary which is roughly like: I'll admit it, I was intimidated by MapReduce. I'd tried to read explanations of it, but even the wonderful Joel Spolsky left me scratching my head. So I plowed ahead trying to build decent pipelines to process massive amounts of data Nothing complex at first sight but grabbing these is the challenging part. Just the first few lines of the actual post should be fine. Should I just use a raw approach of grabbing the entire html and parsing the meta tags or something fancy like that (which obviously and unfortunately is not generalizable to every link out there) or is there a smarter way to achieve this? Any suggestions? Update: I just found InstaPaper do this but am not sure if it is getting the information from RSS feeds or some other way.

    Read the article

  • MySQL Config File for Large System

    - by Jonathon
    We are running MySQL on a Windows 2003 Server Enterpise Edition box. MySQL is about the only program running on the box. We have approx. 8 slaves replicated to it, but my understanding is that having multiple slaves connecting to the same master does not significantly slow down performance, if at all. The master server has 16G RAM, 10 Terabyte drives in RAID 10, and four dual-core processors. From what I have seen from other sites, we have a really robust machine as our master db server. We just upgraded from a machine with only 4G RAM, but with similar hard drives, RAID, etc. It also ran Apache on it, so it was our db server and our application server. It was getting a little slow, so we split the db server onto this new machine and kept the application server on the first machine. We also distributed the application load amongst a few of our other slave servers, which also run the application. The problem is the new db server has mysqld.exe consuming 95-100% of CPU almost all the time and is really causing the app to run slowly. I know we have several queries and table structures that could be better optimized, but since they worked okay on the older, smaller server, I assume that our my.ini (MySQL config) file is not properly configured. Most of what I see on the net is for setting config files on small machines, so can anyone help me get the my.ini file correct for a large dedicated machine like ours? I just don't see how mysqld could get so bogged down! FYI: We have about 100 queries per second. We only use MyISAM tables, so skip-innodb is set in the ini file. And yes, I know it is reading the ini file correctly because I can change some settings (like the server-id and it will kill the server at startup). Here is the my.ini file: #MySQL Server Instance Configuration File # ---------------------------------------------------------------------- # Generated by the MySQL Server Instance Configuration Wizard # # # Installation Instructions # ---------------------------------------------------------------------- # # On Linux you can copy this file to /etc/my.cnf to set global options, # mysql-data-dir/my.cnf to set server-specific options # (@localstatedir@ for this installation) or to # ~/.my.cnf to set user-specific options. # # On Windows you should keep this file in the installation directory # of your server (e.g. C:\Program Files\MySQL\MySQL Server X.Y). To # make sure the server reads the config file use the startup option # "--defaults-file". # # To run run the server from the command line, execute this in a # command line shell, e.g. # mysqld --defaults-file="C:\Program Files\MySQL\MySQL Server X.Y\my.ini" # # To install the server as a Windows service manually, execute this in a # command line shell, e.g. # mysqld --install MySQLXY --defaults-file="C:\Program Files\MySQL\MySQL Server X.Y\my.ini" # # And then execute this in a command line shell to start the server, e.g. # net start MySQLXY # # # Guildlines for editing this file # ---------------------------------------------------------------------- # # In this file, you can use all long options that the program supports. # If you want to know the options a program supports, start the program # with the "--help" option. # # More detailed information about the individual options can also be # found in the manual. # # # CLIENT SECTION # ---------------------------------------------------------------------- # # The following options will be read by MySQL client applications. 
# Note that only client applications shipped by MySQL are guaranteed # to read this section. If you want your own MySQL client program to # honor these values, you need to specify it as an option during the # MySQL client library initialization. # [client] port=3306 [mysql] default-character-set=latin1 # SERVER SECTION # ---------------------------------------------------------------------- # # The following options will be read by the MySQL Server. Make sure that # you have installed the server correctly (see above) so it reads this # file. # [mysqld] # The TCP/IP Port the MySQL Server will listen on port=3306 #Path to installation directory. All paths are usually resolved relative to this. basedir="D:/MySQL/" #Path to the database root datadir="D:/MySQL/data" # The default character set that will be used when a new schema or table is # created and no character set is defined default-character-set=latin1 # The default storage engine that will be used when create new tables when default-storage-engine=MYISAM # Set the SQL mode to strict #sql-mode="STRICT_TRANS_TABLES,NO_AUTO_CREATE_USER,NO_ENGINE_SUBSTITUTION" # we changed this because there are a couple of queries that can get blocked otherwise sql-mode="" #performance configs skip-locking max_allowed_packet = 1M table_open_cache = 512 # The maximum amount of concurrent sessions the MySQL server will # allow. One of these connections will be reserved for a user with # SUPER privileges to allow the administrator to login even if the # connection limit has been reached. max_connections=1510 # Query cache is used to cache SELECT results and later return them # without actual executing the same query once again. Having the query # cache enabled may result in significant speed improvements, if your # have a lot of identical queries and rarely changing tables. See the # "Qcache_lowmem_prunes" status variable to check if the current value # is high enough for your load. # Note: In case your tables change very often or if your queries are # textually different every time, the query cache may result in a # slowdown instead of a performance improvement. query_cache_size=168M # The number of open tables for all threads. Increasing this value # increases the number of file descriptors that mysqld requires. # Therefore you have to make sure to set the amount of open files # allowed to at least 4096 in the variable "open-files-limit" in # section [mysqld_safe] table_cache=3020 # Maximum size for internal (in-memory) temporary tables. If a table # grows larger than this value, it is automatically converted to disk # based table This limitation is for a single table. There can be many # of them. tmp_table_size=30M # How many threads we should keep in a cache for reuse. When a client # disconnects, the client's threads are put in the cache if there aren't # more than thread_cache_size threads from before. This greatly reduces # the amount of thread creations needed if you have a lot of new # connections. (Normally this doesn't give a notable performance # improvement if you have a good thread implementation.) thread_cache_size=64 #*** MyISAM Specific options # The maximum size of the temporary file MySQL is allowed to use while # recreating the index (during REPAIR, ALTER TABLE or LOAD DATA INFILE. # If the file-size would be bigger than this, the index will be created # through the key cache (which is slower). 
myisam_max_sort_file_size=100G # If the temporary file used for fast index creation would be bigger # than using the key cache by the amount specified here, then prefer the # key cache method. This is mainly used to force long character keys in # large tables to use the slower key cache method to create the index. myisam_sort_buffer_size=64M # Size of the Key Buffer, used to cache index blocks for MyISAM tables. # Do not set it larger than 30% of your available memory, as some memory # is also required by the OS to cache rows. Even if you're not using # MyISAM tables, you should still set it to 8-64M as it will also be # used for internal temporary disk tables. key_buffer_size=3072M # Size of the buffer used for doing full table scans of MyISAM tables. # Allocated per thread, if a full scan is needed. read_buffer_size=2M read_rnd_buffer_size=8M # This buffer is allocated when MySQL needs to rebuild the index in # REPAIR, OPTIMZE, ALTER table statements as well as in LOAD DATA INFILE # into an empty table. It is allocated per thread so be careful with # large settings. sort_buffer_size=2M #*** INNODB Specific options *** innodb_data_home_dir="D:/MySQL InnoDB Datafiles/" # Use this option if you have a MySQL server with InnoDB support enabled # but you do not plan to use it. This will save memory and disk space # and speed up some things. skip-innodb # Additional memory pool that is used by InnoDB to store metadata # information. If InnoDB requires more memory for this purpose it will # start to allocate it from the OS. As this is fast enough on most # recent operating systems, you normally do not need to change this # value. SHOW INNODB STATUS will display the current amount used. innodb_additional_mem_pool_size=11M # If set to 1, InnoDB will flush (fsync) the transaction logs to the # disk at each commit, which offers full ACID behavior. If you are # willing to compromise this safety, and you are running small # transactions, you may set this to 0 or 2 to reduce disk I/O to the # logs. Value 0 means that the log is only written to the log file and # the log file flushed to disk approximately once per second. Value 2 # means the log is written to the log file at each commit, but the log # file is only flushed to disk approximately once per second. innodb_flush_log_at_trx_commit=1 # The size of the buffer InnoDB uses for buffering log data. As soon as # it is full, InnoDB will have to flush it to disk. As it is flushed # once per second anyway, it does not make sense to have it very large # (even with long transactions). innodb_log_buffer_size=6M # InnoDB, unlike MyISAM, uses a buffer pool to cache both indexes and # row data. The bigger you set this the less disk I/O is needed to # access data in tables. On a dedicated database server you may set this # parameter up to 80% of the machine physical memory size. Do not set it # too large, though, because competition of the physical memory may # cause paging in the operating system. Note that on 32bit systems you # might be limited to 2-3.5G of user level memory per process, so do not # set it too high. innodb_buffer_pool_size=500M # Size of each log file in a log group. You should set the combined size # of log files to about 25%-100% of your buffer pool size to avoid # unneeded buffer pool flush activity on log file overwrite. However, # note that a larger logfile size will increase the time needed for the # recovery process. innodb_log_file_size=100M # Number of threads allowed inside the InnoDB kernel. 
The optimal value # depends highly on the application, hardware as well as the OS # scheduler properties. A too high value may lead to thread thrashing. innodb_thread_concurrency=10 #replication settings (this is the master) log-bin=log server-id = 1 Thanks for all the help. It is greatly appreciated.

    Read the article

  • C++: inheritance problem

    - by Helltone
    It's quite hard to explain what I'm trying to do, but I'll try. Imagine a base class A which contains some variables, and a set of classes deriving from A which all implement some method bool test() that operates on the variables inherited from A:

        class A {
        protected:
            int somevar;
            // ...
        };

        class B : public A {
        public:
            bool test() { return (somevar == 42); }
        };

        class C : public A {
        public:
            bool test() { return (somevar > 23); }
        };

        // ... more classes deriving from A

    Now I have an instance of class A and I have set the value of somevar. I need some kind of container that allows me to iterate over the elements i of this container, calling i::test() in the context of a... that is (this is pseudo-code):

        int main(int, char* [])
        {
            A a;
            a.somevar = 42;

            std::vector<...> vec;
            // push B and C into vec
            vec.push_back(&B);
            vec.push_back(&C);

            bool ret = true;
            for (i = vec.begin(); i != vec.end(); ++i) {
                // call B::test(), C::test(), setting *this to a
                ret &= ( a .* (&(*i)::test) )();
            }
            return ret;
        }

    How can I do this? I've tried a few approaches: forcing a cast from B::* to A::*, adapting a pointer so that a method of one type is called on an object of a different type (works, but seems bad); using std::bind plus the solution above, an ugly hack; and changing the signature of bool test() so that it takes an argument of type const A& instead of inheriting from A - I don't really like this solution because somevar must be public.
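    A minimal sketch of the third option mentioned above (predicates taking const A&), using friendship so that somevar can stay non-public; the Predicate interface and the derived class names are made up for illustration and are not from the original code:

        #include <vector>

        class A;

        // Hypothetical predicate interface, replacing the derived-from-A test() classes.
        class Predicate {
        public:
            virtual ~Predicate() {}
            virtual bool test(const A& a) const = 0;
        };

        class A {
        public:
            void setSomevar(int v) { somevar = v; }
        private:
            int somevar;
            friend class EqualsFortyTwo;  // friendship keeps somevar non-public
            friend class GreaterThan23;
        };

        class EqualsFortyTwo : public Predicate {
        public:
            bool test(const A& a) const { return a.somevar == 42; }
        };

        class GreaterThan23 : public Predicate {
        public:
            bool test(const A& a) const { return a.somevar > 23; }
        };

        bool testAll(const A& a, const std::vector<const Predicate*>& preds)
        {
            bool ret = true;
            for (std::vector<const Predicate*>::const_iterator it = preds.begin(); it != preds.end(); ++it)
                ret &= (*it)->test(a);  // every predicate runs against the same A instance
            return ret;
        }

    This keeps a single A carrying the data while the per-class checks live outside the hierarchy; the friend list is the price paid for keeping somevar private.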

    Read the article
