Search Results

Search found 68828 results on 2754 pages for 'knapsack problem'.


  • Using PHP and cURL to log in to indyarocks.com

    - by Divya
    I am new to cURL and don't know much about it. I basically want to log in to my account on www.indyarocks.com through libcurl for PHP. I don't know what type of authentication the site uses (or how to find that out). When I go to http://www.indyarocks.com, I get a login form which asks for my username and password. I put in my username and password, click login, and everything is good. I tried to automate this using cURL. This is a snippet of my code:

        curl_setopt($curl_connection, CURLOPT_URL, "http://www.indyarocks.com/loginchk.php");
        curl_setopt($curl_connection, CURLOPT_POST, 1);
        curl_setopt($curl_connection, CURLOPT_HTTPAUTH, CURLAUTH_ANY);
        curl_setopt($curl_connection, CURLOPT_USERPWD, $username.':'.$password);

    I looked at the source of the login page and found the address the username and password are sent to (the action attribute of the form), which is http://www.indyarocks.com/loginchk.php, and set it as the target URL. When I run this, I get a "username or password is wrong" error and the login fails, even though my username and password are correct. I don't know what the problem is. Could the password be encrypted, and could that be responsible for this failure? Please help me get around this problem. I'll be really thankful. Thanks in advance.
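
    A hedged sketch of one thing to try (an assumption, not a verified fix): a login form is usually processed from POST fields named after the form's input elements, not from HTTP authentication, so CURLOPT_USERPWD may simply be ignored by loginchk.php. The field names below are placeholders that would have to be read from the form's HTML, and the cookie jar keeps the session alive for later requests:

        // Hypothetical field names -- check the name="" attributes in the login form's HTML
        curl_setopt($curl_connection, CURLOPT_URL, "http://www.indyarocks.com/loginchk.php");
        curl_setopt($curl_connection, CURLOPT_POST, 1);
        curl_setopt($curl_connection, CURLOPT_POSTFIELDS, http_build_query(array(
            'username' => $username,
            'password' => $password,
        )));
        curl_setopt($curl_connection, CURLOPT_COOKIEJAR, '/tmp/cookies.txt');   // store the session cookie
        curl_setopt($curl_connection, CURLOPT_COOKIEFILE, '/tmp/cookies.txt');  // send it back on later requests
        curl_setopt($curl_connection, CURLOPT_FOLLOWLOCATION, true);            // follow the post-login redirect
        curl_setopt($curl_connection, CURLOPT_RETURNTRANSFER, true);
        $response = curl_exec($curl_connection);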

  • How do I run NUnit in debug mode from Visual Studio?

    - by Jon Cage
    I've recently been building a test framework for a bit of C# I've been working on. I have NUnit set up and a new project within my workspace to test the component. All works well if I load my unit tests from NUnit (v2.4), but I've got to the point where it would be really useful to run in debug mode and set some breakpoints. I've tried the suggestions from several guides, which all suggest changing the 'Debug' properties of the test project:

        Start external program: C:\Program Files\NUnit 2.4.8\bin\nunit-console.exe
        Command line arguments: /assembly: <full-path-to-solution>\TestDSP\bin\Debug\TestDSP.dll

    I'm using the console version there, but I have tried calling the GUI as well. Both give me the same error when I try to start debugging:

        Cannot start test project 'TestDSP' because the project does not contain any tests.

    Is this because I normally load \DSP.nunit into the NUnit GUI, and that's where the tests are held? I'm beginning to think the problem may be that VS wants to run its own test framework, and that's why it's failing to find the NUnit tests.

    [Edit] To those asking about test fixtures, one of my .cs files in the TestDSP project looks roughly like this:

        namespace Some.TestNamespace
        {
            // Testing framework includes
            using NUnit.Framework;

            [TestFixture]
            public class FirFilterTest
            {
                /// <summary>
                /// Tests that a FirFilter can be created
                /// </summary>
                [Test]
                public void Test01_ConstructorTest()
                {
                    ...some tests...
                }
            }
        }

    ...I'm pretty new to C# and the NUnit test framework, so it's entirely possible I've missed some crucial bit of information ;-)

    [FINAL SOLUTION] The big problem was the project type I'd used. If you pick Other Languages->Visual C#->Test->Test Project when you're choosing the project type, Visual Studio will try to use its own testing framework as far as I can tell. You should pick a normal C# class library project instead, and then the instructions in my selected answer will work.

  • Looping and pausing after loading ajax content in JavaScript/jQuery

    - by Tristan
    I have what I thought was a simple problem to solve (never is!). I'm trying to loop through a list of URLs in a JavaScript array I have made: load the first one, wait X seconds, then load the second, and continue until I start again. I got the array and looping working. The trouble is that however I try to implement a "wait" using setInterval or similar, I have a structural issue, as the loop continues in the background. I tried to code it like this:

        $(document).ready(function(){
            // my array of URLs
            var urlArray = new Array();
            urlArray[0] = "urlOne";
            urlArray[1] = "urlTwo";
            urlArray[2] = "urlThree";

            // my looping logic that continues to execute (problem starts here)
            while (true) {
                for (var i = 0; i < urlArray.length; i++) {
                    $('#load').load(urlArray[i], function(){
                        // now ideally I want it to wait here for X seconds after loading
                        // that URL and then start the loop again, but JavaScript doesn't
                        // seem to work this way, and I'm not sure how to structure it to
                        // get the same effect
                    });
                }
            }
        });
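
    A sketch of one common restructuring (the 5-second delay is an assumed value for X): drop the while loop entirely and let each load's callback schedule the next load with setTimeout, so the browser is never blocked and the sequence wraps around forever:

        $(document).ready(function () {
            var urlArray = ["urlOne", "urlTwo", "urlThree"];
            var i = 0;

            function loadNext() {
                $('#load').load(urlArray[i], function () {
                    i = (i + 1) % urlArray.length;  // wrap back to the first URL
                    setTimeout(loadNext, 5000);     // wait 5 seconds, then load the next one
                });
            }

            loadNext();
        });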

  • MemoryStream and the Large Object Heap

    - by Flo
    I have to transfer large files between computers via unreliable connections using WCF. Because I want to be able to resume the transfer and I don't want to be limited in my file size by WCF, I am chunking the files into 1MB pieces. These "chunks" are transported as streams, which works quite nicely so far. My steps are:

        1. open filestream
        2. read chunk from file into byte[] and create memorystream
        3. transfer chunk
        4. back to 2. until the whole file is sent

    My problem is in step 2. I assume that when I create a memory stream from a byte array, it will end up on the LOH and ultimately cause an OutOfMemoryException. I could not actually provoke this error, so maybe I am wrong in my assumption. Now, I don't want to send the byte[] in the message, as WCF will tell me the array size is too big. I can change the max allowed array size and/or the size of my chunk, but I hope there is another solution. My actual question(s): Will my current solution create objects on the LOH, and will that cause me problems? Is there a better way to solve this? Btw.: On the receiving side I simply read smaller chunks from the arriving stream and write them directly into the file, so no large byte arrays are involved.
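
    A hedged sketch of the sending side (SendChunk is a hypothetical stand-in for the actual WCF call): arrays of roughly 85,000 bytes or more do land on the LOH, so a fresh 1MB buffer per chunk will churn it. Allocating one buffer up front and wrapping it in a new MemoryStream per chunk avoids that, since a MemoryStream over an existing array does not copy it:

        const int ChunkSize = 1024 * 1024;
        byte[] buffer = new byte[ChunkSize];   // allocated once; lives on the LOH

        using (FileStream file = File.OpenRead(path))
        {
            int bytesRead;
            while ((bytesRead = file.Read(buffer, 0, buffer.Length)) > 0)
            {
                // wraps the existing buffer without copying it
                using (var chunk = new MemoryStream(buffer, 0, bytesRead, false))
                {
                    SendChunk(chunk);   // hypothetical WCF transfer call
                }
            }
        }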

  • Null reference exceptions in .NET

    - by Carlo
    Hello, we're having a big problem with our application. It's a rather large application with several modules and thousands and thousands of lines of code. A lot of parts of the application are designed to exist only with a reference to another object; for example, a Person object can never exist without a House object, so if at any point in the app you say:

        bool check = App.Person.House == null;

    check should always be false (by design). So, to keep using that example: while creating modules, testing, and debugging, App.Person.House is never null, but once we shipped the application to our client, they started getting a bunch of NullReferenceExceptions on objects that, by design, should never hold a null reference. They tell us the bug, we try to reproduce it here, but 90% of the time we can't, because here it works fine. The app is being developed with C# and WPF, and by design it only runs on Windows XP SP3 and .NET Framework 3.5, so we KNOW the user has the same operating system, service pack, and .NET Framework version as we do here, but they still get these weird NullReferenceExceptions that we can't reproduce. So, I'm just wondering if anyone has seen this before and how you fixed it. We have the app running here at least 8 hours a day on 5 different computers, and we never see those exceptions; this only happens to the client for some reason. ANY thought, any clue, any solution that could get us closer to fixing this problem will be greatly appreciated. Thanks!

  • MVVM Listbox DataTemplate SelectedItem

    - by StinkerPeter
    I am using a ListBox with a DataTemplate as shown below (XAML simplified and variable names changed).

        <ListBox ItemsSource="{Binding Path=ObservCollectionItems}"
                 SelectedItem="{Binding Path=SelectedItemVar, Mode=TwoWay}">
            <ListBox.ItemTemplate>
                <DataTemplate>
                    <StackPanel>
                        <TextBlock Text="{Binding SomeVar}" />
                        <Border>
                            <StackPanel>
                                <Button Content="String1"
                                        Command="{Binding DataContext.Command1,
                                                  RelativeSource={RelativeSource FindAncestor,
                                                  AncestorType={x:Type ListBox}, AncestorLevel=1}}" />
                                <Button Content="String2"
                                        Command="{Binding DataContext.Command2,
                                                  RelativeSource={RelativeSource FindAncestor,
                                                  AncestorType={x:Type ListBox}, AncestorLevel=1}}" />
                            </StackPanel>
                        </Border>
                    </StackPanel>
                </DataTemplate>
            </ListBox.ItemTemplate>
        </ListBox>

    I need the SelectedItemVar (dependency property) to update when I click on one of the buttons; SelectedItemVar is then used by the respective button's command. SelectedItemVar does update when I click on the TextBlock or the Border, but not when I click either button. I found a non-MVVM solution to this problem here, but I do not want to add code-behind to solve this, as they did in the link. Is there a clean solution that can be done in XAML? Beyond the non-MVVM solutions, I have not found anyone with this problem, and I would have thought it was fairly common. Finally, I found the RelativeSource form shown above for the Command binding. I do not fully understand what it is doing, but I do know that the command wasn't firing when I was binding directly to the command name.
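
    One XAML-only idea worth sketching (an assumption, not a verified fix for this exact layout): sidestep SelectedItem entirely and hand the clicked row's item to the command as its parameter, so the view model receives the item even though the Button handles the click before the ListBox selects the row:

        <Button Content="String1"
                Command="{Binding DataContext.Command1,
                          RelativeSource={RelativeSource FindAncestor, AncestorType={x:Type ListBox}}}"
                CommandParameter="{Binding}" />

    The command's Execute method then gets the row's data object directly, and SelectedItemVar no longer has to be kept in sync for the command's sake.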

  • SOAP security in Salesforce

    - by Dean Barnes
    I am trying to change the wsdl2apex code for a web service call header that currently looks like this:

        <env:Header>
            <Security xmlns="http://docs.oasis-open.org/wss/oasis-wss-wssecurity-secext-1.1.xsd">
                <UsernameToken Id="UsernameToken-4">
                    <Username>test</Username>
                    <Password>test</Password>
                </UsernameToken>
            </Security>
        </env:Header>

    to look like this:

        <soapenv:Header>
            <wsse:Security xmlns:wsse="http://docs.oasis-open.org/wss/2004/01/oasis-200401-wss-wssecurity-secext-1.0.xsd">
                <wsse:UsernameToken wsu:Id="UsernameToken-4" xmlns:wsu="http://docs.oasis-open.org/wss/2004/01/oasis-200401-wss-wssecurity-utility-1.0.xsd">
                    <wsse:Username>Test</wsse:Username>
                    <wsse:Password Type="http://docs.oasis-open.org/wss/2004/01/oasis-200401-wss-username-token-profile-1.0#PasswordText">Test</wsse:Password>
                </wsse:UsernameToken>
            </wsse:Security>
        </soapenv:Header>

    One problem is that I can't work out how to change the namespaces for elements (or even whether it matters what name they have). A secondary problem is putting the Type attribute onto the Password element. Can anyone provide any information that might help? Thanks

  • WCF service reference namespace differs from original

    - by Thorarin
    I'm having a problem regarding the namespaces used by my service references. I have a number of WCF services, say with the namespace MyCompany.Services.MyProduct (the actual namespaces are longer). As part of the product, I'm also providing a sample C# .NET website. This web application uses the namespace MyCompany.MyProduct. During initial development, the service was added as a project reference to the website and used directly. I used a factory pattern that returns an object instance that implements MyCompany.Services.MyProduct.IMyService. So far, so good. Now I want to change this to use an actual service reference. After adding the reference and typing MyCompany.Services.MyProduct in the namespace textbox, it generates classes in the namespace MyCompany.MyProduct.MyCompany.Services.MyProduct. BAD! I don't want to have to change using directives in several places just because I'm using a proxy class. So I tried prepending the namespace with global::, but that is not accepted. Note that I hadn't even deleted the original assembly references yet, and "reuse types" is enabled, but no reusing was done, apparently. However, I don't want to keep the assembly references around in my sample website for it to work anyway. The only solution I've come up with so far is setting the default namespace for my web application to MyCompany (because it cannot be empty) and adding the service reference as Services.MyProduct. Suppose that a customer wants to use my sample website as a starting point and they change the default namespace to OtherCompany.Whatever; this will obviously break my workaround. Is there a good solution to this problem? To summarize: I want to generate a service reference proxy in the original namespace, without referencing the assembly. Note: I have seen this question before, but there was no solution provided that is acceptable for my use case. Edit: As John Saunders suggested, I've submitted some feedback to Microsoft about this: Feedback item @ Microsoft Connect
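
    A sketch of one workaround (assuming the command-line generator is acceptable in place of the IDE's Add Service Reference dialog; the service URL below is a placeholder): svcutil.exe can map every WSDL namespace onto an exact CLR namespace, and it does not prepend the project's default namespace:

        svcutil.exe http://localhost/MyProduct/MyService.svc?wsdl ^
            /namespace:*,MyCompany.Services.MyProduct ^
            /out:MyServiceProxy.cs

    The generated MyServiceProxy.cs can then be added to the project as an ordinary source file, keeping the original MyCompany.Services.MyProduct namespace intact.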

  • C++ Switch won't compile with externally defined variable used as case

    - by C Nielsen
    I'm writing C++ using the MinGW GNU compiler, and the problem occurs when I try to use an externally defined integer variable as a case in a switch statement. I get the following compiler error: "case label does not reduce to an integer constant". Because I've defined the integer variable as extern, I believe it should compile. Does anyone know what the problem may be? Below is an example:

    test.cpp

        #include <iostream>
        #include "x_def.h"

        int main()
        {
            std::cout << "Main Entered" << std::endl;

            switch(0)
            {
                case test_int:
                    std::cout << "Case X" << std::endl;
                    break;
                default:
                    std::cout << "Case Default" << std::endl;
                    break;
            }

            return 0;
        }

    x_def.h

        extern const int test_int;

    x_def.cpp

        const int test_int = 0;

    This code compiles correctly on Visual C++ 2008. Furthermore, a Montanan friend of mine checked the ISO C++ standard, and it appears that any const-integer expression should work. Is this possibly a compiler bug, or have I missed something obvious? Here's my compiler version information:

        Reading specs from C:/MinGW/bin/../lib/gcc/mingw32/3.4.5/specs
        Configured with: ../gcc-3.4.5-20060117-3/configure --with-gcc --with-gnu-ld --with-gnu-as
          --host=mingw32 --target=mingw32 --prefix=/mingw --enable-threads --disable-nls
          --enable-languages=c,c++,f77,ada,objc,java --disable-win32-registry --disable-shared
          --enable-sjlj-exceptions --enable-libgcj --disable-java-awt --without-x
          --enable-java-gc=boehm --disable-libgcj-debug --enable-interpreter
          --enable-hash-synchronization --enable-libstdcxx-debug
        Thread model: win32
        gcc version 3.4.5 (mingw-vista special r3)
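
    A hedged note with a sketch (my reading of the rule, not a quote from the standard): a case label needs an integral constant expression whose value is visible in the current translation unit, and an extern const int defined in another .cpp file only gets its value at link time, so test.cpp cannot use it as a constant. Moving the initializer into the header sidesteps this, and is legal because a const at namespace scope has internal linkage in C++:

        // x_def.h -- a sketch of one conforming fix
        const int test_int = 0;   // each translation unit gets its own copy,
                                  // so the value is a constant expression in test.cpp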

  • Git merging 2 new files with removed content and added content

    - by Loïc Faure-Lacroix
    So we are working with 2 different repositories, and both designers modified the same file. The problem is quite simple, but I have no idea how to solve it yet. Both files are marked as new, since the branches have almost nothing in common except that file. When I try to merge from branch A to B, it marks the parts added in A as deleted in B, and on the other side, what was added in B appears deleted in A. Git seems to try to outsmart me when I know that I need almost every change and nothing should be marked as a deletion. I have 2 other branches that should merge without problems after these 2. I can't merge them yet, since there are some recent changes that may not merge really well either. I have to merge A and B = E, then C and D = F, and then hopefully E and F. So the big question here is: how can I do a completely manual merge that marks every change as a conflict? Anything deleted and anything added should be marked as a conflict that I can resolve myself in an editor. Git is trying to outsmart me and failing terribly at it.

  • Java SOCKS Proxy Socket Error

    - by Ionut Ungureanu
    I am trying to create an HTTP request through a SOCKS (v4 / v5) proxy in Java. After reading about the SOCKS communication protocol on Wikipedia, I have put together this piece of code:

        Socket sock = new Socket();
        InetSocketAddress remoteProxyAddress = new InetSocketAddress(proxyIp, proxyPort);
        sock.connect(remoteProxyAddress, connTimeout);

        InputStream in = sock.getInputStream();
        OutputStream out = sock.getOutputStream();

        out.write(0x04);                                  // SOCKS version 4
        out.write(0x01);                                  // CONNECT command
        out.write((endpoint.getPort() >> 8) & 0xff);      // destination port, high byte
        out.write((endpoint.getPort() >> 0) & 0xff);      // destination port, low byte
        out.write(endpoint.getAddress().getAddress());    // destination IPv4 address
        out.write(0x0);                                   // empty, null-terminated user ID
        out.flush();

    And here comes the part where I read from the proxy server. The problem is that the response is always -1. I have tried the proxy in Firefox and it works perfectly, so the problem is in my app. Can anyone help me? Thanks!
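
    A sketch of an alternative that avoids hand-rolling the handshake (standard java.net API, available since Java 1.5; the proxy address is a placeholder): the Socket class can be given a SOCKS Proxy object directly, and the JRE performs the negotiation itself:

        import java.net.InetSocketAddress;
        import java.net.Proxy;
        import java.net.Socket;

        Proxy proxy = new Proxy(Proxy.Type.SOCKS,
                                new InetSocketAddress("proxy.example.com", 1080)); // placeholder
        Socket sock = new Socket(proxy);
        sock.connect(new InetSocketAddress("example.com", 80), connTimeout);       // the real endpoint

    For the hand-rolled version, note that in.read() returning -1 means the proxy closed the connection, often because it rejected the request or expected a different SOCKS version's format.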

  • Linux C debugging library to detect memory corruptions

    - by calandoa
    When working some time ago on an embedded system with a simple MMU, I used to program that MMU dynamically to detect memory corruption. For instance, at some moment at runtime, the foo variable was overwritten with some unexpected data (probably by a dangling pointer or whatever). So I added additional debugging code:

        - at init, the memory used by foo was marked as a forbidden region to the MMU;
        - each time foo was accessed on purpose, access to the region was allowed just
          before and forbidden again just after;
        - an MMU irq handler was added to dump the bus master and the address responsible
          for the violation.

    This was actually a kind of watchpoint, but handled directly by the code itself. Now, I would like to reuse the same trick on an x86 platform. The problem is that I am very far from understanding how the MMU works on this platform and how Linux uses it, but I wonder if any library/tool/system call already exists to deal with this problem. Note that I am aware that various tools, like Valgrind or GDB, exist to manage memory problems, but as far as I know, none of these tools can be dynamically reconfigured by the debugged code. I am mainly interested in user space under Linux, but any info on kernel mode or on Windows is also welcome!
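
    A minimal sketch of the user-space analogue (assuming Linux and POSIX APIs; not from the original post): mprotect() plus a SIGSEGV handler reproduces the embedded trick, provided the guarded variable sits alone on its own page:

        /* guard.c -- sketch: forbid a page, catch the violation, re-allow access */
        #include <signal.h>
        #include <stdio.h>
        #include <stdlib.h>
        #include <sys/mman.h>
        #include <unistd.h>

        static int *foo;          /* the guarded variable, alone on its page */
        static long page_size;

        static void on_violation(int sig, siginfo_t *info, void *ctx)
        {
            fprintf(stderr, "forbidden access at %p\n", info->si_addr);
            /* re-allow access so the faulting instruction restarts and succeeds */
            mprotect(foo, page_size, PROT_READ | PROT_WRITE);
        }

        int main(void)
        {
            page_size = sysconf(_SC_PAGESIZE);
            posix_memalign((void **)&foo, page_size, page_size);

            struct sigaction sa = {0};
            sa.sa_flags = SA_SIGINFO;
            sa.sa_sigaction = on_violation;
            sigaction(SIGSEGV, &sa, NULL);

            mprotect(foo, page_size, PROT_NONE);   /* forbid the region */
            *foo = 42;                             /* triggers the handler, like the MMU irq */
            printf("foo = %d\n", *foo);
            return 0;
        }

    The code itself can flip the protection on and off around legitimate accesses, which is exactly the dynamic reconfiguration that Valgrind and GDB watchpoints don't offer.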

  • C# Fun with Generics - Mutual Dependencies

    - by Kenneth Cochran
    As an experiment I'm trying to write a generic MVP framework. I started with:

        public interface IPresenter<TView> where TView : IView<IPresenter<...
        {
            TView View { get; set; }
        }

        public interface IView<TPresenter> where TPresenter : IPresenter<IView<...
        {
            TPresenter Presenter { get; set; }
        }

    Obviously this can't work, because the types of TView and TPresenter can't be resolved; you'd be writing Type<Type<... forever. So my next attempt looked like this:

        public interface IView<T> where T : IPresenter
        {
            ...
        }

        public interface IView : IView<IPresenter>
        {
        }

        public interface IPresenter<TView> where TView : IView
        {
            ...
        }

        public interface IPresenter : IPresenter<IView>
        {
            ...
        }

    This actually compiles, and you can even inherit from these interfaces like so:

        public class MyView : IView, IView<MyPresenter>
        {
            ...
        }

        public class MyPresenter : IPresenter, IPresenter<MyView>
        {
            ...
        }

    The problem is that in the class definition you have to define any members declared in the generic type twice. Not ideal, but it still compiles. The problems start creeping up when you actually try to access the members of a Presenter from a View or vice versa: you get an ambiguous reference error when you try to compile. Is there any way to avoid this double implementation of a member when you inherit from both interfaces? Is it even possible to resolve two mutually dependent generic types at compile time?
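
    A sketch of one answer to the last question (an assumption about what the framework needs, not the author's design): the pair can be resolved at compile time by giving each interface both type parameters, CRTP-style, so each constraint refers to the whole pair rather than to a half-open chain:

        public interface IPresenter<TView, TPresenter>
            where TView : IView<TView, TPresenter>
            where TPresenter : IPresenter<TView, TPresenter>
        {
            TView View { get; set; }
        }

        public interface IView<TView, TPresenter>
            where TView : IView<TView, TPresenter>
            where TPresenter : IPresenter<TView, TPresenter>
        {
            TPresenter Presenter { get; set; }
        }

        // Each concrete class names both halves, so every member resolves once.
        public class MyView : IView<MyView, MyPresenter>
        {
            public MyPresenter Presenter { get; set; }
        }

        public class MyPresenter : IPresenter<MyView, MyPresenter>
        {
            public MyView View { get; set; }
        }

    Because each class implements only one closed interface, there is no second declaration of View or Presenter to cause an ambiguous reference.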

  • Best way to dynamically get column names from Oracle tables

    - by MNC
    Hi,

    We are using an extractor application that exports data from the database to CSV files. Based on a condition variable it extracts data from different tables, and for some conditions we have to use UNION ALL, as the data has to be extracted from more than one table. To satisfy the UNION ALL condition we are using nulls to match the number of columns.

    Right now all the queries in the system are pre-built based on the condition variable. The problem is that whenever there is a change in a table projection (i.e. a new column added, an existing column modified, a column dropped), we have to manually change the code in the application. Can you please give some suggestions on how to extract the column names dynamically so that changes in the table structure do not require changes in the code?

    My concern is the condition that decides which table to query. The condition logic is like: if the condition is A, then load from TableX; if the condition is B, then load from TableA and TableY. We must know which table we need to get data from; once we know the table, it is straightforward to query the column names from the data dictionary. But there is one more wrinkle: some columns need to be excluded, and these columns are different for each table.

    I was trying to solve the problem only for dynamically generating the list of columns, but my manager told me to find a solution at the conceptual level rather than just a fix. This is a very big system with providers and consumers constantly loading and consuming data, so he wanted a solution that can be general. So what is the best way of storing the condition, table name, and excluded columns? One way is storing them in the database. Are there any other ways? If yes, what is the best? I have to give at least a couple of ideas before finalizing.

    Thanks,
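
    A sketch of the data-dictionary query (the excluded_columns table is a hypothetical config table, one row per table/column pair to leave out): Oracle exposes every table's projection in all_tab_columns, so the column list can be assembled at runtime:

        -- columns of TABLEX, minus the per-table exclusion list
        SELECT c.column_name
        FROM   all_tab_columns c
        WHERE  c.table_name = 'TABLEX'
        AND    NOT EXISTS (SELECT 1
                           FROM   excluded_columns e   -- hypothetical config table
                           WHERE  e.table_name  = c.table_name
                           AND    e.column_name = c.column_name)
        ORDER  BY c.column_id;

    The same config table could also carry the condition-to-table mapping, keeping the condition, table name, and exclusions in one queryable place.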

  • How should we set up complex situations for tests?

    - by ShaneC
    I'm currently working on what I would call integration tests. I want to verify that if a WCF service is called, it will do what I expect. Let's take a very simple scenario. Assume we have a contract object that we can put on hold or take off hold. Writing the put-on-hold test is quite simple: you create a contract instance and execute the code that puts it on hold. The question I have comes when we want to test the taking-off-hold service call. The problem is that putting a contract on hold can actually be quite complicated, leading to various objects all being modified. So usually I would use the Builder pattern and do something like this:

        var onHoldContract = new ContractBuilder().PutOnHold().Build();

    The problem I have with this is that now I have to replicate a large part of my put-on-hold service, so when I change what putting something on hold means, I have two places to modify. The other option that immediately jumps out at me is to just use the put-on-hold service as part of my test setup, but now I'm coupling my test to the success of another piece of code, which is something I don't like to do, since it can lead to failures in one spot breaking unrelated tests elsewhere (if put-on-hold failed, for example). Any other options I'm missing here? Or opinions on which method is preferable and why?

  • Pointer incrementing query

    - by Craig
    I have been looking at this piece of code, and it is not doing what I expect. I have 3 globals:

        int x, y, *pointer, z;

    Inside of main I assign them:

        x = 10;
        y = 25;
        pointer = &x;

    Now at this point:

        &x is 0x004A144
        &y is 0x004A138
        pointer is pointing to 0x004A144

    When I increment with y = *++pointer;, the pointer points to 0x004A148. This is the address y should be at, shouldn't it? The idea is that incrementing the pointer to x should make it point at y, but the compiler doesn't seem to want to lay them out in declaration order like I expect. Is this a VS2005 / 2008 problem? Or maybe an Express problem? This isn't really homework, as I did this a couple of years ago, but I was revising my pointer material and tried it again, and this time I am getting unexpected results. Does anyone have opinions on this?

    UPDATE: Sorry, I should be clearer. I thought that, from the declaration order, y should be at 0x004A148, and that incrementing the pointer pointing at x should move 'pointer' to 148 (which it does), but that isn't where y is. Why isn't y located where I expect?

  • Object-oriented GUI development in Python

    - by ptabatt
    Hey guys, new programmer here. I have an assignment for class and I'm stuck... What I need to do is create a GUI that presents a basic arithmetic problem in one box, asks the person to answer it, evaluates the answer, and tells you if you're right or wrong... Basically, what I have is this:

        class Lesson(Frame):
            def __init__(self, parent=None):
                Frame.__init__(self, parent)
                self.pack()
                Lesson.make_widgets(self)

            def make_widgets(self):
                Label(self, text="").pack(side=TOP)
                ent = Entry(self)
                self.a = randrange(1,10)
                self.b = randrange(1,10)
                self.expr = choice(["+","-"])
                ent.insert(END, str(self.a) + str(self.expr) + str(self.a))

    I've broken this down into many little steps, and basically what I'm trying to do right now is insert a default random expression into the first Entry widget. When I run this code, I just get a blank Label. Why is that? How can I put something like "7+7" into the box? If you absolutely need background to the problem, it's question #3 on this link: http://reed.cs.depaul.edu/lperkovic/csc242/homeworks/Homework8.html

    Thanks for all help in advance.
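
    A sketch of a working version (assuming Tkinter and that randrange/choice come from the random module; module names use the Python 2 spellings of the era): the Entry never shows because it is never packed, and the second operand should presumably be self.b rather than self.a:

        from Tkinter import Frame, Label, Entry, TOP, END
        from random import randrange, choice

        class Lesson(Frame):
            def __init__(self, parent=None):
                Frame.__init__(self, parent)
                self.pack()
                self.make_widgets()

            def make_widgets(self):
                Label(self, text="Solve:").pack(side=TOP)
                ent = Entry(self)
                ent.pack(side=TOP)              # without pack() the widget never appears
                self.a = randrange(1, 10)
                self.b = randrange(1, 10)
                self.expr = choice(["+", "-"])
                ent.insert(END, "%d%s%d" % (self.a, self.expr, self.b))

        if __name__ == "__main__":
            Lesson().mainloop()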

  • Vector insert() causes program to crash

    - by wrongusername
    This is the first part of a function I have that's causing my program to crash:

        vector<Student> sortGPA(vector<Student> student)
        {
            vector<Student> sorted;
            Student test = student[0];
            cout << "here\n";
            sorted.insert(student.begin(), student[0]);
            cout << "it failed.\n";
            ...

    It crashes right at the sorted.insert part, because I can see "here" on the screen but not "it failed." The following error message comes up:

        Debug Assertion Failed!
        (a long path here...)
        Expression: vector emplace iterator outside range

        For more information on how your program can cause an assertion failure,
        see the Visual C++ documentation on asserts.

    I'm not sure what's causing the problem, since I have a similar line of code elsewhere that does not crash (where position returns an int and temp is another instance of the struct Student):

        student.insert(student.begin() + position(temp, student), temp);

    What can I do to resolve the problem, and how is the first insert different from the second one?
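
    A sketch of the likely fix (an inference from the assertion message, not a confirmed diagnosis): the iterator passed to vector::insert must refer into the vector being inserted into, and the crashing call hands sorted an iterator that belongs to student. The second insert works because there the iterator and the vector match:

        sorted.insert(sorted.begin(), student[0]);   // iterator from 'sorted', not 'student'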

  • How to use the Grails Spring Security Plugin to require logging in before accessing an action?

    - by Hoàng Long
    Hi all, I know that I can use annotations or request mapping to restrict access to an ACTION to some specific ROLES. But now I have a different circumstance. My scenario is: every user of my site can create posts, and they can make their own posts public, private, or shared with some other users. I implement post sharing with a database table PERMISSION, which specifies whether a user has the right to view a post or not. The problem arises when a customer accesses a post through a direct link: how can I determine whether he/she has the privilege to view it? There are 3 circumstances:

        1. The post is public, so it can be viewed by anyone (including not-logged-in users).
        2. The post is private, so only the logged-in owner can view it.
        3. The post is shared, meaning only the logged-in users it is shared with, and the
           owner, can view it.

    I want to process it like this: if the requested post is public, OK; if the requested post is private/shared, I want to redirect the customer to the login page, and after logging in, the user should be redirected back to the page he wants to see. The problem is that I can redirect the user to the login controller's auth action, but after that I don't know how to redirect back. The link to every post differs by post_id, so I can't use SpringSecurityUtils.securityConfig.successHandler.defaultTargetUrl.

    Could anyone suggest a way to do this?

  • Overriding content_type for Rails Paperclip plugin

    - by Fotios
    I think I have a bit of a chicken-and-egg problem. I would like to set the content_type of a file uploaded via Paperclip. The problem is that the default content_type is based only on the file extension, but I'd like to base it on the file command instead. I seem to be able to set the content_type with the before_post_process callback:

        class Upload < ActiveRecord::Base
          has_attached_file :upload
          before_post_process :foo

          def foo
            logger.debug "Changing content_type"

            # This works
            self.upload.instance_write(:content_type, "foobar")

            # This fails because the file does not actually exist yet
            self.upload.instance_write(:content_type, file_type(self.upload.path))
          end

          # Returns the file type based on the file command (assume it works)
          def file_type(path)
            return `file -ib '#{path}'`.split(/;/)[0]
          end
        end

    But... I cannot base the content type on the file, because Paperclip doesn't write the file until after_create, and I cannot seem to set the content_type after it has been saved or within an after_create callback (even back in the controller). So I would like to know if I can somehow get access to the actual file object (assume there are no processors doing anything to the original file) before it is saved, so that I can run the file_type command on it. Or is there a way to modify the content_type after the objects have been created?
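
    A hedged sketch (queued_for_write is Paperclip's staging area for pending files; this assumes the Paperclip version in use exposes it, as the 2.x line does): the uploaded tempfile exists before the model is saved, so the file command can be run against it inside the callback:

        before_post_process :set_content_type_from_file

        def set_content_type_from_file
          tempfile = upload.queued_for_write[:original]   # the not-yet-saved upload
          return true if tempfile.nil?
          upload.instance_write(:content_type, file_type(tempfile.path))
        end

    Returning true keeps the callback chain from aborting when nothing is queued.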

  • Validating and filling default values in XML based on XSD in Python

    - by PoltoS
    I have an XML file like:

        <a>
            <b/>
            <b c="2"/>
        </a>

    I have my XSD:

        <xs:element name="a">
            <xs:complexType>
                <xs:sequence>
                    <xs:element name="b" maxOccurs="unbounded">
                        <xs:attribute name="c" default="1"/>
                    </xs:element>
                </xs:sequence>
            </xs:complexType>
        </xs:element>

    I want to use my XSD to validate my original XML and fill in all the default values:

        <a>
            <b c="1"/>
            <b c="2"/>
        </a>

    How do I get this in Python? Validation is no problem (e.g. with XMLSchema); the problem is the default values.
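
    A sketch with lxml (both keyword arguments exist on lxml's XMLParser; whether attribute_defaults picks up XSD defaults as well as DTD ones is an assumption to verify against your lxml/libxml2 version):

        from lxml import etree

        schema = etree.XMLSchema(etree.parse("schema.xsd"))
        parser = etree.XMLParser(schema=schema, attribute_defaults=True)

        tree = etree.parse("doc.xml", parser)   # validates and injects defaults while parsing
        print(etree.tostring(tree, pretty_print=True).decode())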

  • getPackage() returning null when my JUnit test is run from Ant

    - by philharvey
    I'm having problems running a JUnit test. It runs fine in Eclipse, but now I'm trying to run it from the command line using Ant. The problem is that the following code is returning null: getClass().getPackage(). I'm running my JUnit test like so:

        <junit fork="no" printsummary="yes" haltonfailure="no">
            <classpath refid="junit.classpath" />
            <batchtest fork="yes" todir="${reports.junit}">
                <fileset dir="${junit.classdir}">
                    <include name="**/FileHttpServerTest.class" />
                    <exclude name="**/*$*" />
                </fileset>
            </batchtest>
            <formatter type="xml" />
            ...

    I Googled for this sort of error and found a number of references to classloader misbehaviour, but nothing gave me enough information to solve my problem. I really need getClass().getPackage() to not return null. Can anyone help me?

    Thanks,
    Phil
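
    A sketch of a defensive workaround (an assumption, not the root-cause fix): getPackage() can legitimately return null when the classloader defines no Package metadata for classes loaded from a plain directory, whereas the class name itself always carries the package, so the name can be derived from it:

        String className = getClass().getName();
        int lastDot = className.lastIndexOf('.');
        String packageName = (lastDot == -1) ? "" : className.substring(0, lastDot);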

  • Disadvantages of MySQL Row Locking

    - by Nyxynyx
    I am using row locking (transactions) in MySQL to create a job queue. The engine used is InnoDB.

    SQL query:

        START TRANSACTION;

        SELECT * FROM mytable
        WHERE status IS NULL
        ORDER BY timestamp DESC
        LIMIT 1
        FOR UPDATE;

        UPDATE mytable SET status = 1;

        COMMIT;

    According to this webpage:

        The problem with SELECT FOR UPDATE is that it usually creates a single
        synchronization point for all of the worker processes, and you see a lot of
        processes waiting for the locks to be released with COMMIT.

    Question: Does this mean that when a second, similar query arrives before the first transaction is committed, it will have to wait for the first to finish before it can execute? If this is true, then I do not understand why locking a single row (as I assume it does) affects the next transaction's query, which should not require reading that locked row.

    Additionally, can this problem be solved (while still achieving the effect row locking gives a job queue) by doing an UPDATE instead of the transaction?

        UPDATE mytable SET status = 1
        WHERE status IS NULL
        ORDER BY timestamp DESC
        LIMIT 1
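
    A sketch of why the waits happen, and of the single-statement alternative (the claimed_by column is a hypothetical addition so a worker can find the row it just claimed; UPDATE ... ORDER BY ... LIMIT is valid MySQL for single-table updates): InnoDB locks every row a locking statement scans, not just the one it returns, so without an index on (status, timestamp) either form can end up locking far more than one row:

        -- claim one job atomically, then fetch it back
        UPDATE mytable
        SET    status = 1, claimed_by = 'worker-42'
        WHERE  status IS NULL
        ORDER  BY timestamp DESC
        LIMIT  1;

        SELECT * FROM mytable
        WHERE  claimed_by = 'worker-42' AND status = 1;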

  • Generate random number from an arbitrary weighted list

    - by Fernando
    Here's what I need to do; I'll be doing this both in PHP and JavaScript. I have a list of numbers that will range from 1 to 300-500 (I haven't set the limit yet). I will be running a drawing where 10 numbers will be picked at random from the given range. Here's the tricky part: I want some numbers to be less likely to be drawn. A small set of those 300-500 numbers will be flagged as "lucky numbers". For example, out of 100 drawings, most numbers have equal chances of being drawn, except for a few that will only be picked once every 30-50 drawings. Basically, I need to artificially set the probability of certain numbers being picked while maintaining an even distribution for the rest of the numbers. The only similar thing I've found so far is this question: Generate A Weighted Random Number. The problem is that my spec has considerably more numbers (up to 500), so the weights would get very small, and supposedly this could be a problem with that solution (rejection sampling). I'm still trying it, though, but I wonder if there are other solutions. Math is not my thing, so I appreciate any input. Thanks.
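
    A sketch of cumulative-weight sampling in PHP (the 1/40 weight and the lucky numbers are assumed values; very small weights are no problem here, since the method walks the cumulative total instead of rejecting samples):

        <?php
        // ordinary numbers get weight 1.0, "lucky" ones come up ~40x less often
        $weights = array();
        foreach (range(1, 500) as $n) {
            $weights[$n] = 1.0;
        }
        foreach (array(7, 77, 313) as $lucky) {   // hypothetical lucky numbers
            $weights[$lucky] = 1.0 / 40;
        }

        function weightedPick(array $weights) {
            $r = (mt_rand() / mt_getrandmax()) * array_sum($weights);
            foreach ($weights as $number => $w) {
                $r -= $w;
                if ($r <= 0) {
                    return $number;
                }
            }
            return $number;   // float-rounding guard: fall back to the last key
        }

        // draw 10 distinct numbers by removing each pick from the pool
        $drawn = array();
        for ($i = 0; $i < 10; $i++) {
            $pick = weightedPick($weights);
            $drawn[] = $pick;
            unset($weights[$pick]);
        }
        print_r($drawn);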

  • Help with why my app crashed?

    - by Moshe
    I'm writing an iPad app that is a "kiosk" app: the iPad will hang on a wall, and the app should just run. I ran a test, starting the app last night (Friday, December 31) and letting it run. This morning, when I woke up, it was not running. I just checked the iPad's console and I can't figure out why it crashed. The iPad was plugged in, so the battery is not the issue. I did disable the idleTimer in my application delegate, and the app was seen running as late as midnight last night. I should note that my app acts as a Bluetooth server through GameKit, and a large portion of the console output is occupied by Bluetooth status messages. When I opened the iPad, the app was paused and there was a system alert prompting me to check an "Expiring Provisioning Profile". I tapped "Dismiss" and the alert went away; the app crashed about a second after that. Any ideas how I can diagnose this problem? Why would my app crash? Here is my iPad's console log, as copied from Xcode's organizer.

    Edit: A bit of Googling led me to this site, which says that alert views cause the app to lose focus. Could that be involved? What can I do to fix the problem?

    EDIT 2: My crash log describes the situation as:

        Application Specific Information:
        appname failed to resume in time

        Elapsed total CPU time (seconds): 10.010 (user 8.070, system 1.940), 100% CPU
        Elapsed application CPU time (seconds): 9.470, 95% CPU
