Search Results

Search found 20350 results on 814 pages for 'license key'.

Page 699 of 814

  • How do I exclude data from local table schema_migrations from being pushed to Heroku DB?

    - by Thierry Lam
    I was able to push my Ruby on Rails app with MySQL (local dev) to the Heroku server and migrate my model with the command heroku rake db:migrate. I have also read the documentation on Database Import/Export. Is that doc referring to pushing actual data from my local dev DB to Heroku's DB? Do I need to modify anything in database.yml to make it happen? I ran the command heroku db:push and I am getting the error: Sending data 2 tables, 3 records !!! Caught Server Exception | ETA: --:--:-- Taps Server Error: PGError ERROR: duplicate key value violates unique constraint "unique_schema_migrations" I have 2 tables, one I created for my app and the other schema_migrations. The total number of entries across the 2 tables is 3. I'm also printing the number of entries in the table I created and it shows 0. Any ideas what I might be missing or doing wrong? EDIT: I figured out the above: Heroku's DB already had schema_migrations the moment I ran migrate. New question: does anyone know how I can exclude data in a specific table from being pushed to the Heroku DB? The table to exclude in this case is schema_migrations. Not-so-good solution: I googled around and someone else was having the same issue. He suggested renaming the schema_migrations table to zschema_migrations. That way data from the other tables is pushed properly until the push fails on the last table. It's a pretty bad solution, but it will do for the time being. A better solution would be an existing Rails command that can reset a specific table in a database. I don't think Rake can do that.

    Read the article

  • Python many-to-one mapping (creating equivalence classes)

    - by Adam Matan
    Hi, I have a project of converting one database to another. One of the original database columns defines the row's category. This column should be mapped to a new category in the new database. For example, let's assume the original categories are: parrot, spam, cheese_shop, Cleese, Gilliam, Palin. Now that's a little verbose for me, and I want to have these rows categorized as sketch, actor - that is, define all the sketches and all the actors as two equivalence classes. >>> monty={'parrot':'sketch', 'spam':'sketch', 'cheese_shop':'sketch', 'Cleese':'actor', 'Gilliam':'actor', 'Palin':'actor'} >>> monty {'Gilliam': 'actor', 'Cleese': 'actor', 'parrot': 'sketch', 'spam': 'sketch', 'Palin': 'actor', 'cheese_shop': 'sketch'} That's quite awkward - I would prefer having something like: monty={ ('parrot','spam','cheese_shop'): 'sketch', ('Cleese', 'Gilliam', 'Palin') : 'actors'} But this, of course, sets the entire tuple as a key: >>> monty['parrot'] Traceback (most recent call last): File "<pyshell#29>", line 1, in <module> monty['parrot'] KeyError: 'parrot' Any ideas how to create an elegant many-to-one dictionary in Python? Thanks, Adam
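
    One simple approach (a minimal sketch, not tied to any library): keep the tuple-keyed dict as the human-readable definition, then expand it once into a flat many-to-one lookup so plain monty['parrot'] works.

    ```python
    # Group definitions: each tuple of members maps to one category.
    groups = {
        ('parrot', 'spam', 'cheese_shop'): 'sketch',
        ('Cleese', 'Gilliam', 'Palin'): 'actor',
    }

    # Expand the tuples into a flat many-to-one lookup table.
    monty = {}
    for members, category in groups.items():
        for member in members:
            monty[member] = category

    print(monty['parrot'])   # sketch
    print(monty['Palin'])    # actor
    ```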

    Read the article

  • Android ANR keyDispatchingTimedOut error while continuously tapping on the screen

    - by user519846
    Hi All, I am getting Application Not Responding (ANR) dialog while continuous tapping on the screen. There is no view on the screen where i am tapping. Frequency of this issue is less but still i am not able to remove it completely. Here i am attaching the log what i caught during this error. ERROR/ActivityManager(1322): ANR in com.test.mj.and.ui (com.test.mj.and.ui/.TermsAndCondActivity) ERROR/ActivityManager(1322): Reason: keyDispatchingTimedOut ERROR/ActivityManager(1322): Parent: com.test.mj.and.ui/.SplashActivity ERROR/ActivityManager(1322): Load: 6.59 / 6.37 / 5.21 ERROR/ActivityManager(1322): CPU usage from 11430ms to 2196ms ago: ERROR/ActivityManager(1322): rtal.mj.and.ui: 9% = 7% user + 1% kernel / faults: 649 minor ERROR/ActivityManager(1322): system_server: 4% = 2% user + 2% kernel / faults: 10 minor ERROR/ActivityManager(1322): logcat: 3% = 1% user + 1% kernel / faults: 675 minor 1 major ERROR/ActivityManager(1322): synaptics_wq: 1% = 0% user + 1% kernel ERROR/ActivityManager(1322): ami304d: 1% = 0% user + 0% kernel ERROR/ActivityManager(1322): .process.lghome: 1% = 0% user + 0% kernel / faults: 47 minor ERROR/ActivityManager(1322): sync_supers: 0% = 0% user + 0% kernel ERROR/ActivityManager(1322): droid.DunServer: 0% = 0% user + 0% kernel / faults: 6 minor ERROR/ActivityManager(1322): events/0: 0% = 0% user + 0% kernel ERROR/ActivityManager(1322): oid.inputmethod: 0% = 0% user + 0% kernel / faults: 2 minor ERROR/ActivityManager(1322): m.android.phone: 0% = 0% user + 0% kernel / faults: 2 minor ERROR/ActivityManager(1322): ndroid.settings: 0% = 0% user + 0% kernel ERROR/ActivityManager(1322): sh: 0% = 0% user + 0% kernel / faults: 110 minor ERROR/ActivityManager(1322): -flush-179:0: 0% = 0% user + 0% kernel ERROR/ActivityManager(1322): TOTAL: 19% = 13% user + 6% kernel WARN/WindowManager(1322): Continuing to wait for key to be dispatched WARN/WindowManager(1322): No window to dispatch pointer action 1 Can anyone please help me to solve this issue? Thanks in advance.

    Read the article

  • C# Byte[] to Url Friendly String

    - by LorenVS
    Hello, I'm working on a quick captcha generator for a simple site I'm putting together, and I'm hoping to pass an encrypted key in the URL of the page. I could probably do this as a query string parameter easily enough, but I'm hoping not to (just because nothing else runs off the query string). My encryption code produces a byte[], which is then transformed into a string using Convert.ToBase64String(byte[]). This string, however, is still not quite URL friendly, as it can contain things like '/' and '='. Does anyone know of a better function in the .NET framework to convert a byte array to a URL-friendly string? I know all about System.Web.HttpUtility.UrlEncode() and its equivalents; however, they only work properly with query string parameters. If I URL-encode an '=' inside the path, my web server returns a 400 Bad Request error. Anyway, not a critical issue, but I'm hoping someone can give me a nice solution. EDIT: Just to be absolutely clear about what I'm doing with the string, I figured I would supply a little more information. The byte[] that results from my encryption algorithm should be fed through some sort of algorithm to turn it into a URL-friendly string. After this, it becomes the content of an XElement, which is then used as the source document for an XSLT transformation and as part of the href attribute of an anchor. I don't believe the XSLT transformation is causing the issue, since what comes through on the path appears to be an encoded query string parameter, yet it still causes the HTTP 400. I've also tried HttpUtility.UrlPathEncode() on a Base64 string, but that doesn't seem to do the trick either (I still end up with '/'s in my URL).
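
    The usual fix here is the URL-safe Base64 variant (base64url), which swaps '+' for '-' and '/' for '_' and drops the '=' padding; in C# that is typically done by post-processing the Convert.ToBase64String output. A minimal Python sketch of the same idea, just to illustrate the encoding:

    ```python
    import base64

    def to_url_safe(raw):
        # urlsafe_b64encode already maps '+' -> '-' and '/' -> '_';
        # stripping the '=' padding keeps the token safe inside a URL path.
        return base64.urlsafe_b64encode(raw).rstrip(b"=").decode("ascii")

    def from_url_safe(token):
        # Restore the padding before decoding.
        return base64.urlsafe_b64decode(token + "=" * (-len(token) % 4))

    key = b"\x01\xfa\x07 some encrypted key material"
    token = to_url_safe(key)
    assert from_url_safe(token) == key
    print(token)
    ```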

    Read the article

  • Java XMLRPC request-String

    - by Philip
    Hi, I'm using Apache XML-RPC 3.1.2 to talk to an online-service. They have something special, they need a hash over the whole XML with a secret key for some kind of security, like this: String hash = md5(xmlRequest + secretKey); String requestURL = "http://foo.bar/?authHash=" + hash; So I need the XML-request like this: <?xml version="1.0"?> <methodCall> <methodName>foo.bar</methodName> <params> <param> <value><struct> <member><name>bla</name> <value><int>1</int></value> </member> <member><name>blubb</name> <value><int>2</int></value> </member> </struct></value> </param> </params> </methodCall> But how do I get this String-representation of the XMLRPC-Request with the lib Apache XML-RPC?

    Read the article

  • What features do you miss most when switching from IntelliJ to Xcode?

    - by Ori
    Hi, I've started to use XCode several months ago, after using IntelliJ for several years, and there are quite a few features that I really miss. XCode is not that bad, but it is lacking some basic stuff. To trigger the discussion, here are some of the features that I miss most, who knows maybe someone from Apple will bump into this post and steal some Ideas :) Source-level error-highlighting. The write-compile-fix cycle feels like going back in time 15 years ago to my early C days. Many errors can be spotted without having to compile and Java IDEs have been doing it for years. A decent debugger. This is a bit unfair because IntelliJ's debugger is the best I've used so far, but XCode's debugger is at least 5 years behind and Apple has a few more developers than JetBrains... Stronger re-factoring. A no-brainer I guess. XCode has some renaming capabilities (which they call re-factoring), but they are very few. Override method. This one is really amazing. XCode doesn't have an "override method" command which lets you choose the method you want to override from a super class or protocol. You need to go to the documentation or header file and start copy-pasting. Duplicate selected line(s). I've bumped into some posts that offer workarounds for this via custom key-binding, but none of them works, at least for me. Go to last edit point. Bummer! Come on Apple, this one is so easy to implement and so useful! A better open quickly feature. IntelliJ's quick find of classes/files/text is so much better... Turns out that my list goes on and on, so I'll stop here... What other features do you miss most in your transition to XCode?? Ori

    Read the article

  • Why does my Doctrine DBAL query return no results when quoted?

    - by braveterry
    I'm using the Doctrine DataBase Abstraction Layer (DBAL) to perform some queries. For some reason, when I quote a parameter before passing it to the query, I get back no rows. When I pass it unquoted, it works fine. Here's the relevant snippet of code I'm using: public function get($game) { load::helper('doctrinehelper'); $conn = doctrinehelper::getconnection(); $statement = $conn->prepare('SELECT games.id as id, games.name as name, games.link_url, games.link_text, services.name as service_name, image_url FROM games, services WHERE games.name = ? AND services.key = games.service_key'); $quotedGame = $conn->quote($game); load::helper('loghelper'); $logger = loghelper::getLogger(); $logger->debug("Quoted Game: $quotedGame"); $logger->debug("Unquoted Game: $game"); $statement->execute(array($quotedGame)); $resultsArray = $statement->fetchAll(); $logger->debug("Number of rows returned: " . count($resultsArray)); return $resultsArray; } Here's what the log shows: 01/01/11 17:00:13,269 [2112] DEBUG root - Quoted Game: 'Diablo II Lord of Destruction' 01/01/11 17:00:13,269 [2112] DEBUG root - Unquoted Game: Diablo II Lord of Destruction 01/01/11 17:00:13,270 [2112] DEBUG root - Number of rows returned: 0 If I change this line: $statement->execute(array($quotedGame)); to this: $statement->execute(array($game)); I get this in the log: 01/01/11 16:51:42,934 [2112] DEBUG root - Quoted Game: 'Diablo II Lord of Destruction' 01/01/11 16:51:42,935 [2112] DEBUG root - Unquoted Game: Diablo II Lord of Destruction 01/01/11 16:51:42,936 [2112] DEBUG root - Number of rows returned: 1 Have I fat-fingered something?
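
    This is the general rule for prepared statements in any database layer: the placeholder binding already handles quoting and escaping, so passing a pre-quoted value makes the literal quote characters part of the search string, and nothing matches. A small illustration of the same effect using Python's sqlite3, purely as an analogy to the Doctrine DBAL behaviour:

    ```python
    import sqlite3

    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE games (name TEXT)")
    conn.execute("INSERT INTO games VALUES ('Diablo II Lord of Destruction')")

    game = "Diablo II Lord of Destruction"
    quoted = "'" + game + "'"   # what a quote() call would hand back

    # Bound parameters are escaped by the driver, so no manual quoting is needed.
    print(conn.execute("SELECT COUNT(*) FROM games WHERE name = ?", (game,)).fetchone())    # (1,)
    # Pre-quoting makes the quotes part of the value, so nothing matches.
    print(conn.execute("SELECT COUNT(*) FROM games WHERE name = ?", (quoted,)).fetchone())  # (0,)
    ```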

    Read the article

  • SQL Server search filter and order by performance issues

    - by John Leidegren
    We have a table-valued function that returns a list of people you may access, and we have a relation between a search and a person called search result. What we want to do is select all people from the search and present them. The query looks like this: SELECT qm.PersonID, p.FullName FROM QueryMembership qm INNER JOIN dbo.GetPersonAccess(1) ON GetPersonAccess.PersonID = qm.PersonID INNER JOIN Person p ON p.PersonID = qm.PersonID WHERE qm.QueryID = 1234 There are only 25 rows with QueryID=1234, but there are almost 5 million rows total in the QueryMembership table. The Person table has about 40K people in it. QueryID is not a PK, but it is an index. The query plan tells me 97% of the total cost is spent doing a "Key Lookup" with the seek predicate QueryMembershipID = Scalar Operator (QueryMembership.QueryMembershipID as QM.QueryMembershipID). Why is the PK in there when it's not used in the query at all? And why is it taking so long? The number of people totals 25; with the index, this should be a scan of all the QueryMembership rows that have QueryID=1234 and then a JOIN on the 25 people that exist in the table-valued function, which, by the way, only has to be evaluated once and completes in less than 1 second.

    Read the article

  • Can't empty JS array....

    - by solefald
    Yes, I am having issues with this very basic (or so it seems) thing. I am pretty new to JS and still trying to get my head around it, but I am pretty familiar with PHP and have never experienced anything like this. I just cannot empty this damn array, and stuff keeps getting added to the end every time I run this. I have no idea why, and I am starting to think that it is somehow related to the way checkbox id's are named, but I may be mistaken.... id="alias[1321-213]", id="alias[1128-397]", id="alias[77-5467]" and so on. I have tried sticking checkboxes = []; and checkboxes.length = 0; in every place possible: right after the beginning of the function, at the end of the function, even outside, right before the function, but it does not help, and the only way to empty this array is to reload the page. Please tell me what I am doing wrong, or at least point me to a place where I can RTFM. I am completely out of ideas here. function() { var checkboxes = new Array(); checkboxes = $(':input[name="checkbox"]'); $.each(checkboxes, function(key, value) { console.log(value.id); alert(value.id); } ); checkboxes.length = 0; } I have also read Mastering Javascript Arrays 3 times to make sure I am not doing something wrong, but still can't figure it out....

    Read the article

  • Efficient method to calculate the rank vector of a list in Python

    - by Tamás
    I'm looking for an efficient way to calculate the rank vector of a list in Python, similar to R's rank function. In a simple list with no ties between the elements, element i of the rank vector of a list l should be x if and only if l[i] is the x-th element in the sorted list. This is simple so far; the following code snippet does the trick: def rank_simple(vector): return [rank for rank in sorted(range(len(vector)), key=vector.__getitem__)] Things get complicated, however, if the original list has ties (i.e. multiple elements with the same value). In that case, all the elements having the same value should have the same rank, which is the average of their ranks obtained using the naive method above. So, for instance, if I have [1, 2, 3, 3, 3, 4, 5], the naive ranking gives me [0, 1, 2, 3, 4, 5, 6], but what I would like to have is [0, 1, 3, 3, 3, 5, 6]. What would be the most efficient way to do this in Python? Footnote: I don't know if NumPy already has a method to achieve this or not; if it does, please let me know, but I would be interested in a pure Python solution anyway as I'm developing a tool which should work without NumPy as well.
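
    A minimal pure-Python sketch of the tie-averaging step described above (names are illustrative, not from any particular library): walk the sort order, find each run of equal values, and assign every member of the run the average of the ranks it spans.

    ```python
    def rank_with_ties(vector):
        # Indices of the elements in ascending order of value.
        order = sorted(range(len(vector)), key=vector.__getitem__)
        ranks = [0.0] * len(vector)
        i = 0
        while i < len(order):
            # Find the run order[i:j] of elements sharing the same value.
            j = i
            while j < len(order) and vector[order[j]] == vector[order[i]]:
                j += 1
            # Every element in the run gets the average of ranks i .. j-1.
            avg = sum(range(i, j)) / float(j - i)
            for k in range(i, j):
                ranks[order[k]] = avg
            i = j
        return ranks

    print(rank_with_ties([1, 2, 3, 3, 3, 4, 5]))  # [0.0, 1.0, 3.0, 3.0, 3.0, 5.0, 6.0]
    ```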

    Read the article

  • Post method in Servlet is not being called again after being executed once

    - by SaurabhCsIITKgp
    I am implementing a database-backed web application using servlets. Now, when I input a parameter using a form in the JSP page, it redirects to a servlet which subsequently adds the value to the database (the servlet creates a new table if the table doesn't already exist). The creation of the table and the addition of the value work fine if the table doesn't already exist. Once it is created, however, and the parameter is entered again in the form, the submit button no longer redirects to the servlet, nor is the value added to the database. Kindly advise me as to where I am going wrong. Following are the snippets of my code. From the JSP page (/showmanager is the url-pattern of the servlet): <form action="showmanager" method="post"> <h3>Enter name of the show: </h3> <input type="text" name="showname" value=""> <input type="hidden" name="task" value="addshow" /> <input type="button" value="Add Show"> </form> From the servlet (POST method): protected void doPost(HttpServletRequest request, HttpServletResponse response) throws ServletException, IOException { if(request.getParameter("task").equals("addshow")){ this.addShow(request.getParameter("showname")); response.sendRedirect("showmanager.jsp"); } } Method to add to the database: protected boolean addShow(String showname){ try{ statement =con.prepareStatement("INSERT INTO showdb10(name) VALUES ('"+showname+"')"); if(statement.executeUpdate()>0){ return true; } } catch(Exception e) { try{ statement =con.prepareStatement("create table showdb10 (id int NOT NULL AUTO_INCREMENT, name varchar(255) NOT NULL, date varchar(20),time varchar(20), b_total int, o_total int, b_avbl int, o_avbl int, b_price double(10,2), o_price double(10,2), seat_no varchar(20), transaction_id varchar(255), total_sales double(10,2), paymnt_artists double(10,2), paymnt_othr double(10,2), flag varchar(20), PRIMARY KEY(id))"); statement.executeUpdate(); statement =con.prepareStatement("INSERT INTO showdb10(name) VALUES ('"+showname+"')"); if(statement.executeUpdate()>0){ return true; } }catch(Exception e2){} } return false; }

    Read the article

  • copy entire row (without knowing field names)

    - by Todd Webb
    Using SQL Server 2008, I would like to duplicate one row of a table, without knowing the field names. My key issue: as the table grows and mutates over time, I would like this copy-script to keep working, without me having to write out 30+ ever-changing fields, ugh. Also at issue, of course, is IDENTITY fields cannot be copied. My code below does work, but I wonder if there's a more appropriate method than my thrown-together text string SQL statement? So thank you in advance. Here's my (yes, working) code - I welcome suggestions on improving it. Todd alter procedure spEventCopy @EventID int as begin -- VARS... declare @SQL varchar(8000) -- LIST ALL FIELDS (*EXCLUDE* IDENTITY FIELDS). -- USE [BRACKETS] FOR ANY SILLY FIELD-NAMES WITH SPACES, OR RESERVED WORDS... select @SQL = coalesce(@SQL + ', ', '') + '[' + column_name + ']' from INFORMATION_SCHEMA.COLUMNS where TABLE_NAME = 'EventsTable' and COLUMNPROPERTY(OBJECT_ID('EventsTable'), COLUMN_NAME, 'IsIdentity') = 0 -- FINISH SQL COPY STATEMENT... set @SQL = 'insert into EventsTable ' + ' select ' + @SQL + ' from EventsTable ' + ' where EventID = ' + ltrim(str(@EventID)) -- COPY ROW... exec(@SQL) -- REMEMBER NEW ID... set @EventID = @@IDENTITY -- (do other stuff here) -- DONE... -- JUST FOR KICKS, RETURN THE SQL STATEMENT SO I CAN REVIEW IT IF I WISH... select EventID = @EventID, SQL = @SQL end

    Read the article

  • Run a PHP script every second using CLI

    - by Saif Bechan
    Hello, I have a dedicated server running CentOS with a Parallels Plesk panel. I need to run a PHP script every second that updates my database. There is no alternative timewise; I have checked every method, and it needs to be updated every second. I can reach my script at the URL http://www.mysite.com/phpfile.php?key=123, and this has to be executed every second. Does anyone know how to do this? I cannot seem to find the answer. I heard about doing it with the CLI and PuTTY, but I have no knowledge of this at all. Or can this be done using the Plesk panel? And can the file be executed locally every second, like \phpfile.php? If someone helps me answer these questions I would really appreciate it. Regards EDIT It has been a few months since I added this question. I ended up using the following code: #!/usr/bin/php $start = microtime(true); set_time_limit(60); for ($i = 0; $i < 59; ++$i) { doMyThings(); time_sleep_until($start + $i + 1); } Thank you for this code, guys! My cronjob is set to every minute. I have been running this for some time now in a test environment, and it works out great. It runs really fast, and I see no increase in CPU or memory usage.

    Read the article

  • archiving strategies and limitations of data in a table

    - by Samuel
    Environment: JBoss, MySQL, JPA, Hibernate. Our web application will be catering to a large number of users (~1,000,000) and there are lots of child tables where user-specific data is stored (e.g. personal, health, forum contributions ...). What would be the best practice for archiving users and user-specific information? [a] Would it be wise to move the archived user and user-specific information to their respective tables within the same database (e.g. user_archive, user_forum_comments_archive ...), OR [b] would you just mark the database entries with a flag in the original table(s) and query only non-archived entries? We have a unique constraint on User.loginid; how do you handle this requirement if the users are archived via [a] (i.e. if a user with loginid 'samuel' gets moved into the archive table and a new user gets added with the same name in the original table, how would you prevent this)? What would be the best strategy to address the unique key constraints? We also have a requirement to selectively archive records and bring them back if necessary; would you rely on database tools or would you handle this via the persistence APIs exposed by the JPA entity model?

    Read the article

  • Return and Save XML Object From Sharepoint List Web Service

    - by HurnsMobile
    I am trying to populate a variable with an XML response from an ajax call on page load so that on keyup I can filter through that list without making repeated get requests (think very rudimentary autocomplete). The trouble that I am having seems to be potentially related to variable scoping but I am fairly new to js/jQuery so I am not quite certain. The following code doesn't do anything on key up and adding alerts to it tells me that it is executing leadResults() on keyup and that the variable is returning an XML response object but it appears to be empty. The strange bit is that if I move the leadResults() call into the getResults() function the UL is populated with the results correctly. Im beating my head against the wall on this one, please help! var resultsXml; $(document).ready( function() { var leadLookupCaml = "<Query> \ <Where> \ <Eq> \ <FieldRef Name=\"Lead_x0020_Status\"/> \ <Value Type=\"Text\">Active</Value> \ </Eq> \ </Where> \ </Query>" $().SPServices({ operation: "GetListItems", webURL: "http://sharepoint/departments/sales", listName: "Leads", CAMLQuery: leadLookupCaml, CAMLRowLimit: 0, completefunc: getResults }); }) $("#lead_search").keyup( function(e) { leadResults(); }) function getResults(xData, status) { resultsXml = xData; } function leadResults() { xData = resultsXml; $("#lead_results li").remove(); $(xData.responseXML).find("z\\:row").each(function() { var selectHtml = "<li>" + "<a href=\"http://sharepoint/departments/sales/Lists/Lead%20Groups/DispForm.aspx?ID=" + $(this).attr("ows_ID") + ">" + $(this).attr("ows_Title")+" : " + $(this).attr("ows_Phone") + "</a>\ </li>"; $("#lead_results").append(selectHtml); }); }

    Read the article

  • DB2 Child Table Not Working - Create Table

    - by gamerzfuse
    I have a bit of a task before me. (DB2 Database) I need to create a table that will be a child table (is that what it is called in SQL?) I need it so that it has a foreign key constraint with my other table, so when the parent table is modified (record deleted) the child table also loses that record. Once I have the table, I also need to populate it with the data from the other table (if there is an easy way to UPDATE this). If you could point me in the right direction, this would help alot, as I do not even know what syntax to look for. Thanks in advance The table I have in place: create table titleauthors ( au_id char(11), title_id char(6), au_ord integer, royaltyshare decimal(5,2)); The table I am creating: create table titles ( title_id char(6), title varchar(80), type varchar(12), pub_id char(4), price decimal(9,2), advance decimal(9,2), ytd_sales integer, contract integer, notes varchar(200), pubdate date); I need the title_id to be matched with the title_id from the parent table AND use the ON DELETE CASCADE syntax to delete when that table is deleted from. My Attempt: CREATE TABLE BookTitles ( title_id char(6) NOT NULL CONSTRAINT BookTitles_title_id_pk REFERENCES titleauthors(title_id) ON DELETE CASCADE, title varchar(80) NOT NULL, type varchar(12), pub_id char(4), price decimal(9,2), advance decimal(9,2), ytd_sales integer, contract integer, notes varchar(200), pubdate date) ; Thanks in advance!

    Read the article

  • How to merge duplicates in 2D python arrays

    - by Wei Lou
    Hi, I have a set of data similar to this: No Start Time End Time CallType Info 1 13:14:37.236 13:14:53.700 Ping1 RTT(Avr):160ms 2 13:14:58.955 13:15:29.984 Ping2 RTT(Avr):40ms 3 13:19:12.754 13:19:14.757 Ping3_1 RTT(Avr):620ms 3 13:19:12.754 Ping3_2 RTT(Avr):210ms 4 13:14:58.955 13:15:29.984 Ping4 RTT(Avr):360ms 5 13:19:12.754 13:19:14.757 Ping1 RTT(Avr):40ms 6 13:19:59.862 13:20:01.522 Ping2 RTT(Avr):163ms ... When I parse through it, I need to merge the results of Ping3_1 and Ping3_2, then take the average of those two rows and export them as one row. So the end result would be like this: No Start Time End Time CallType Info 1 13:14:37.236 13:14:53.700 Ping1 RTT(Avr):160ms 2 13:14:58.955 13:15:29.984 Ping2 RTT(Avr):40ms 3 13:19:12.754 13:19:14.757 Ping3 RTT(Avr):415ms 4 13:14:58.955 13:15:29.984 Ping4 RTT(Avr):360ms 5 13:19:12.754 13:19:14.757 Ping1 RTT(Avr):40ms 6 13:19:59.862 13:20:01.522 Ping2 RTT(Avr):163ms Currently I am concatenating columns 0 and 1 to make a unique key, finding duplicates there, and then doing the rest of the special treatment for those parallel pings. It is not elegant at all. I'm just wondering what a better way to do it would be. Thanks!
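
    For what it's worth, here is a rough sketch of that merge in plain Python. The dict field names are made up for illustration; only the group-and-average logic matters.

    ```python
    import re
    from collections import defaultdict

    # Hypothetical parsed rows; field names are assumptions for this sketch.
    rows = [
        {"no": 3, "start": "13:19:12.754", "end": "13:19:14.757", "call": "Ping3_1", "rtt": 620},
        {"no": 3, "start": "13:19:12.754", "end": "",             "call": "Ping3_2", "rtt": 210},
        {"no": 4, "start": "13:14:58.955", "end": "13:15:29.984", "call": "Ping4",   "rtt": 360},
    ]

    # Group duplicates on the (No, Start Time) pair, as in the original approach.
    groups = defaultdict(list)
    for row in rows:
        groups[(row["no"], row["start"])].append(row)

    merged = []
    for (no, start), dupes in sorted(groups.items()):
        merged.append({
            "no": no,
            "start": start,
            # keep the first non-empty end time among the duplicates
            "end": next((d["end"] for d in dupes if d["end"]), ""),
            # Ping3_1 / Ping3_2 collapse to Ping3
            "call": re.sub(r"_\d+$", "", dupes[0]["call"]),
            # average the RTTs, e.g. (620 + 210) / 2 = 415
            "rtt": sum(d["rtt"] for d in dupes) / float(len(dupes)),
        })

    for row in merged:
        print(row)
    ```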

    Read the article

  • Specify package path for android:entries

    - by Priyank
    Hi. I am using following code in preferences page in android to show a list of items. The list and values are located in a file at location "app/res/xml/time.xml" <ListPreference android:title="Time unit list" android:summary="Select the time unit" android:dependency="Main_Option" android:key="listPref" android:defaultValue="1" android:entries="?xml:time/timet" android:entryValues="@xml:time/timet_values" /> The code for the time.xml is as follow: <?xml version="1.0" encoding="utf-8"?> <resources> <string-array name="timet"> <item>seconds</item> <item>minutes</item> <item>hours</item> </string-array> <string-array name="timet_values"> <item>3600</item> <item>60</item> <item>1</item> </string-array> </resources> I am not able to reference these values in my preference xml file. (The code snippet above). It gives an error. How can I specify packaged path for the List preferences entry and entry_values Any help is appreciated. Cheers

    Read the article

  • displaying video from webcam in OpenCV

    - by Adi
    Hi all, I have installed VS2008 and am able to run the demo code "camshiftdemo and lkdemo" which comes with the OpenCV library. With this done, I am now trying to run some simple code from the internet to get acquainted with OpenCV. I am just trying to display video from the webcam and I am getting the following error: Unhandled exception at 0x5e7e3d10 (highgui200.dll) in opencv.exe: 0xC0000005: Access violation reading location 0x719b3856. The code I am trying to run is: include include void main(int argc, char* argv[]) { int c; IplImage* color_img; CvCapture* cv_cap = cvCaptureFromCAM(-1); // -1 = only one cam or doesn't matter cvNamedWindow("Video",1); // create window for(;;) { color_img = cvQueryFrame(cv_cap); // get frame if(color_img != 0) cvShowImage("Video", color_img); // show frame c = cvWaitKey(10); // wait 10 ms or for key stroke if(c == 27) break; // if ESC, break and quit } /* clean up */ cvReleaseCapture( &cv_cap ); cvDestroyWindow("Video"); } Any help with this will be greatly appreciated. Thanks, Aditya

    Read the article

  • How do I populate these fields from existing data?

    - by dmanexe
    I'm not sure how to make a few parts of my form to populate from data from an array I'm passing from the database. First is this <select> object. The key estimate_lead_id in the database holds the value, and I want the dropdown to auto-select based on the value from the database. <select name="estimate_lead_id"> <? foreach($leads->result() as $lead) { ?> <option value="<?=$lead->id?>"><?=$lead->lead_name?></option> <? } ?> </select> Second (this is a little more complex), I have created a script that loops through and gets all of the items from each respective category. What I can't get it to do though is loop through these items and only display the items in the estimate, and then fill the fields with the respective values from the database. <?php foreach ($items['items_poolconcretedecking']->result() as $item) { ?> <input type="checkbox" name="measure[<?=$item->id?>][checkmark]" value="<?=$item->id?>"> <input class="item_mult" value="" type="text" name="measure[<?=$item->id?>][quantity]" /> <?php } ?>

    Read the article

  • Testing performance of queries in MySQL

    - by Unreason
    I am trying to setup a script that would test performance of queries on a development mysql server. Here are more details: I have root access I am the only user accessing the server Mostly interested in InnoDB performance The queries I am optimizing are mostly search queries (SELECT ... LIKE '%xy%') What I want to do is to create reliable testing environment for measuring the speed of a single query, free from dependencies on other variables. Till now I have been using SQL_NO_CACHE, but sometimes the results of such tests also show caching behaviour - taking much longer to execute on the first run and taking less time on subsequent runs. If someone can explain this behaviour in full detail I might stick to using SQL_NO_CACHE; I do believe that it might be due to file system cache and/or caching of indexes used to execute the query, as this post explains. It is not clear to me when Buffer Pool and Key Buffer get invalidated or how they might interfere with testing. So, short of restarting mysql server, how would you recommend to setup an environment that would be reliable in determining if one query performs better then the other?

    Read the article

  • Using MongoDB's map/reduce to "group by" two fields

    - by ibz
    I need something slightly more complex than the examples in the MongoDB docs and I can't seem to be able to wrap my head around it. Say I have a collection of objects of the form {date: "2010-10-10", type: "EVENT_TYPE_1", user_id: 123, ...} Now I want to get something similar to a SQL GROUP BY query, grouping over both date and type. That is, I want the number of events of each type in each day. Also, I'd like to make it unique by user_id, ie. if a user has more events in the same day, count it only once. I'm trying to do this with map/reduce. I do db.logs.mapReduce(function() { emit(this.type, 1); }, function(k, vals) { var total = 0; for (var i = 0; i < vals.length; i++) total += vals[i]; return total; }}) which nicely groups by type, but now, how can I group by date at the same time? Seems the key in emit() can't be an array (I thought about doing emit([this.date, this.type], 1)). Also, how can I ensure the per-user uniqueness? I'm just starting with MongoDB and I'm still having trouble grasping the basic concepts. Also, there is not much documentation available out there. Any help from more experienced users is appreciated. Thanks!
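
    As a side note, the emit key does not have to be a scalar: MongoDB accepts a document as the key, e.g. emit({date: this.date, type: this.type}, ...), which is the usual way to group on two fields. The per-user uniqueness is the harder part; the grouping logic itself, sketched in plain Python with made-up event dicts, looks roughly like this:

    ```python
    from collections import defaultdict

    # Hypothetical events, mirroring the documents described in the question.
    events = [
        {"date": "2010-10-10", "type": "EVENT_TYPE_1", "user_id": 123},
        {"date": "2010-10-10", "type": "EVENT_TYPE_1", "user_id": 123},  # same user, same day
        {"date": "2010-10-10", "type": "EVENT_TYPE_1", "user_id": 456},
    ]

    # Group by the composite (date, type) key, keeping each user only once per group.
    users_per_group = defaultdict(set)
    for e in events:
        users_per_group[(e["date"], e["type"])].add(e["user_id"])

    counts = {key: len(users) for key, users in users_per_group.items()}
    print(counts)  # {('2010-10-10', 'EVENT_TYPE_1'): 2}
    ```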

    Read the article

  • Temporary storage for keeping data between program iterations?

    - by mr.b
    I am working on an application that works like this: It fetches data from many sources, resulting in pool of about 500,000-1,500,000 records (depends on time/day) Data is parsed Part of data is processed in a way to compare it to pre-existing data (read from database), calculations are made, and stored in database. Resulting dataset that has to be stored in database is, however, much smaller in size (compared to original data set), and ranges from 5,000-50,000 records. This process almost always updates existing data, perhaps adds few more records. Then, data from step 2 should be kept somehow, somewhere, so that next time data is fetched, there is a data set which can be used to perform calculations, without touching pre-existing data in database. I should point out that this data can be lost, it's not irreplaceable (key information can be read from database if needed), but it would speed up the process next time. Application components can (and will be) run off different computers (in the same network), so storage has to be reachable from multiple hosts. I have considered using memcached, but I'm not quite sure should I do so, because one record is usually no smaller than 200 bytes, and if I have 1,500,000 records, I guess that it would amount to over 300 MB of memcached cache... But that doesn't seem scalable to me - what if data was 5x that amount? If it were to consume 1-2 GB of cache only to keep data in between iterations (which could easily happen)? So, the question is: which temporary storage mechanism would be most suitable for this kind of processing? I haven't considered using mysql temporary tables, as I'm not sure if they can persist between sessions, and be used by other hosts in network... Any other suggestion? Something I should consider?

    Read the article

  • win32 console - form example!

    - by Bach
    I'm trying to build a simple form in a c++ win32 console application. instead of using cin and keep prompting the user to enter the details, i would like to display the form labels then using the tab key, allow the user to tab through. What is the simplest way of doing this, without having to use ncurses? all I need is cout the below all at once: Name: Username: Email: set the cursor position next to name Field, then each time you hit tab, i gotoxy, and set the cursor at the next position, then set the the cin to the next variable eg. at startup gotoxy(nameX, nameY); cin >> name; Hit Tab/enter gotoxy(usernameX, usernameY); cin >> username; Hit Tab/enter gotoxy(emailX, emailY); cin >> email; is this even doable? I tried while loops with, GetAsyncKeyState, and keyboard events, but the cin is not working properly in that loop. is there any good example for a super simple form, or reference for doing that? I know how to SetConsoleCursorPosition, but how to implement the tabbing while still being able to capture cin? thanks

    Read the article

  • random data using php & mysql

    - by Prakash
    I have mysql database structure like below: CREATE TABLE test ( id int(11) NOT NULL auto_increment, title text NULL, tags text NULL, PRIMARY KEY (id) ); data on field tags is stored as a comma separated text like html,php,mysql,website,html etc... now I need create an array that contains around 50 randomly selected tags from random records. currently I am using rand() to select 15 random mysql data from database and then holding all the tags from 15 records in an array. Then I am using array_rand() for randomizing the array and selecting only 50 random records. $query=mysql_query("select * from test order by id asc, RAND() limit 15"); $tags=""; while ($eachData=mysql_fetch_array($query)) { $additionalTags=$eachData['tags']; if ($tags=="") { $tags.=$additionalTags; } else { $tags.=$tags.",".$additionalTags; } } $tags=explode(",", $tags); $newTags=array(); foreach ($tags as $tag) { $tag=trim($tag); if ($tag!="") { if (!in_array($tag, $newTags)) { $newTags[]=$tag; } } } $random_newTags=array_rand($newTags, 50); Now I have huge records on the database, and because of that; rand() is performing very slow and sometimes it doesn't work. So can anyone let me know how to handle this situation correctly so that my page will work normally.
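
    One common way to avoid ORDER BY RAND() on a huge table is to fetch the candidate tag strings with a cheaper query (or cache them) and do the dedupe-and-sample step in application code. A rough sketch of that step, shown in Python for brevity (the same logic ports directly to PHP with explode, array_unique and array_rand); the function name and sample data are illustrative only:

    ```python
    import random

    def pick_random_tags(tag_rows, count=50):
        # tag_rows: the comma-separated tag strings, one per fetched row.
        unique = set()
        for row in tag_rows:
            unique.update(tag.strip() for tag in row.split(",") if tag.strip())
        pool = list(unique)
        # Sample without replacement; never ask for more tags than exist.
        return random.sample(pool, min(count, len(pool)))

    rows = ["html,php,mysql,website,html", "css,javascript,php", "seo,html,hosting"]
    print(pick_random_tags(rows, count=5))
    ```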

    Read the article
